A Toolkit for Multimodal Interface Design: An Empirical Investigation
Publication date
2007
Keywords
Speech recognition
Text-to-speech
Interface design
Usability
Learnability
Effectiveness
Efficiency
Satisfaction
Visual
Oral
Aural
Multimodal
Auditory-icons
Earcons
Speech
Voice-instruction
Peer-Reviewed
Yes
Open Access status
Closed access
Abstract
This paper introduces a comparative multi-group study carried out to investigate the use of multimodal interaction metaphors (visual, oral, and aural) for improving the learnability (usability on first-time use) of interface-design environments. An initial survey gathered views on the effectiveness of, and satisfaction with, employing speech and speech recognition to solve some common usability problems. The investigation then proceeded empirically by testing three usability parameters (efficiency, effectiveness, and satisfaction) of three design toolkits (TVOID, OFVOID, and MMID) built especially for the study. TVOID and OFVOID interacted with the user visually only, using typical and time-saving interaction metaphors. The third environment, MMID, added another modality through vocal and aural interaction. The results showed that using vocal commands and the mouse concurrently to complete tasks on first-time use was more efficient and more effective than using visual-only interaction metaphors.
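The MMID condition described above pairs vocal commands with concurrent mouse input. Below is a minimal sketch of that kind of voice-and-mouse interaction, assuming Python with tkinter and the third-party SpeechRecognition library (with a microphone backend such as PyAudio); the command names and the drawing task are illustrative assumptions, not details taken from the paper.

# A minimal sketch of concurrent voice-and-mouse interaction, in the
# spirit of the MMID condition described above. Assumes the third-party
# SpeechRecognition library (pip install SpeechRecognition) and tkinter;
# the vocal commands here are illustrative, not taken from the paper.
import threading
import tkinter as tk

import speech_recognition as sr

# Hypothetical vocal commands mapped to canvas drawing modes.
VOICE_COMMANDS = {"circle", "square", "clear"}

class MultimodalCanvas:
    def __init__(self, root: tk.Tk) -> None:
        self.mode = "circle"  # current drawing mode, switched by voice
        self.canvas = tk.Canvas(root, width=400, height=300, bg="white")
        self.canvas.pack()
        # Mouse modality: a click places a shape in the current mode.
        self.canvas.bind("<Button-1>", self.on_click)

    def on_click(self, event: tk.Event) -> None:
        x, y = event.x, event.y
        if self.mode == "clear":
            self.canvas.delete("all")
        elif self.mode == "square":
            self.canvas.create_rectangle(x - 10, y - 10, x + 10, y + 10)
        else:
            self.canvas.create_oval(x - 10, y - 10, x + 10, y + 10)

    def on_voice(self, phrase: str) -> None:
        # Voice modality: a recognized command word switches the mode.
        word = phrase.strip().lower()
        if word in VOICE_COMMANDS:
            self.mode = word

def listen_loop(app: MultimodalCanvas) -> None:
    # Runs in a background thread so speech and mouse input are
    # handled concurrently rather than in alternation.
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:
            audio = recognizer.listen(source, phrase_time_limit=3)
            try:
                app.on_voice(recognizer.recognize_google(audio))
            except (sr.UnknownValueError, sr.RequestError):
                pass  # unintelligible or failed recognition; keep listening

if __name__ == "__main__":
    root = tk.Tk()
    app = MultimodalCanvas(root)
    threading.Thread(target=listen_loop, args=(app,), daemon=True).start()
    root.mainloop()

In this sketch the recognizer runs on a background thread and only sets a mode flag rather than touching the canvas directly, which respects tkinter's single-thread constraint while still letting both modalities operate at the same time.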
Version
No full-text in the repository
Citation
Rigas, D. and Alsuraihi, M. (2007). A Toolkit for Multimodal Interface Design: An Empirical Investigation. Lecture Notes in Computer Science, Vol. 4552, pp. 196-205.
Link to Version of Record
https://doi.org/10.1007/978-3-540-73110-8_21
Type
Article