BRADFORD SCHOLARS
    A Toolkit for Multimodal Interface Design: An Empirical Investigation

    Publication date
    2007
    Authors
    Rigas, Dimitrios I.
    Alsuraihi, M.
    Keywords
    Speech recognition; Text-to-speech; Interface design; Usability; Learnability; Effectiveness; Efficiency; Satisfaction; Visual; Oral; Aural; Multimodal; Auditory-icons; Earcons; Speech; Voice-instruction
    Peer-Reviewed
    Yes
    
    Abstract
    This paper introduces a comparative multi-group study carried out to investigate the use of multimodal interaction metaphors (visual, oral, and aural) for improving the learnability (usability on first-time use) of interface-design environments. An initial survey was used to gather views on the effectiveness and satisfaction of employing speech and speech recognition to solve some common usability problems. The investigation was then carried out empirically by testing three usability parameters (efficiency, effectiveness, and satisfaction) on three design toolkits (TVOID, OFVOID, and MMID) built especially for the study. TVOID and OFVOID interacted with the user visually only, using typical and time-saving interaction metaphors. The third environment, MMID, added another modality through vocal and aural interaction. The results showed that using vocal commands and the mouse concurrently to complete tasks on first-time use was more efficient and more effective than using visual-only interaction metaphors.
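
    The toolkits themselves (TVOID, OFVOID, and MMID) are not available in the repository, but the design the abstract describes, one set of interface commands reachable both by mouse and by voice, can be sketched. The Python sketch below is illustrative only: CommandDispatcher and its methods are hypothetical names rather than the paper's actual toolkit API, and a real system would feed on_voice_command from a speech recognizer.

    # Minimal sketch of the multimodal idea behind MMID: the same command
    # handlers can be reached through either a visual (mouse) event or a
    # recognized voice command, so the two modalities can be used
    # concurrently. All names here (CommandDispatcher, on_mouse_click,
    # on_voice_command) are hypothetical, not the paper's toolkit API.

    from typing import Callable, Dict

    class CommandDispatcher:
        """Routes commands from any input modality to one shared handler."""

        def __init__(self) -> None:
            self._handlers: Dict[str, Callable[[], None]] = {}

        def register(self, command: str, handler: Callable[[], None]) -> None:
            self._handlers[command] = handler

        def on_mouse_click(self, widget_command: str) -> None:
            # Visual modality: a toolbar button click maps to a command name.
            self._dispatch(widget_command)

        def on_voice_command(self, recognized_text: str) -> None:
            # Vocal modality: the output of a speech recognizer maps to the
            # same command names, normalized to lower case.
            self._dispatch(recognized_text.strip().lower())

        def _dispatch(self, command: str) -> None:
            handler = self._handlers.get(command)
            if handler:
                handler()
            else:
                print(f"Unknown command: {command!r}")

    if __name__ == "__main__":
        dispatcher = CommandDispatcher()
        dispatcher.register("new button", lambda: print("Created a button widget"))
        # The same action is reachable by mouse or by voice:
        dispatcher.on_mouse_click("new button")    # user clicks the toolbar
        dispatcher.on_voice_command("New Button")  # user speaks the command

    Routing both modalities through one dispatcher is what lets a user mix them freely within a single task, which is the concurrent vocal-and-mouse use the study found more efficient and effective on first-time use.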
    URI
    http://hdl.handle.net/10454/3156
    Version
    No full-text available in the repository
    Citation
    Rigas, D. and Alsuraihi, M. (2007). A Toolkit for Multimodal Interface Design: An Empirical Investigation. Lecture Notes in Computer Science. Vol. 4552, pp. 196-205.
    Link to publisher’s version
    http://dx.doi.org/10.1007/978-3-540-73110-8_21
    Type
    Article
    Collections
    Engineering and Informatics Publications
