Integration Model of Eye-Gaze, Voice and Manual Response in Multimodal User Interface
Abstract
This paper reports the utility of eye-gaze, voice and manual response in the design of a multimodal user interface. A device- and application-independent user interface model (VisualMan) for 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface according to the priority of the modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed.
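As a rough illustration of priority- and context-based integration of the three modalities, the following minimal sketch resolves concurrent gaze, voice and manual events. It is not the paper's VisualMan implementation; the class names, the priority ordering (manual over voice over gaze) and the context rule are illustrative assumptions only.

```python
# Minimal sketch (assumed, not the paper's VisualMan code) of resolving
# concurrent multimodal events by modality priority and interaction context.
from dataclasses import dataclass
from typing import Optional

# Assumed priority order: manual response overrides voice, which overrides gaze.
PRIORITY = {"manual": 3, "voice": 2, "gaze": 1}

@dataclass
class ModalityEvent:
    modality: str                 # "gaze", "voice", or "manual"
    command: str                  # e.g. "select", "rotate", "release"
    target: Optional[str] = None  # object the event refers to, if any

def resolve(events: list[ModalityEvent], context: str) -> Optional[ModalityEvent]:
    """Pick one event to act on: filter by interaction context, then by priority."""
    # Hypothetical context rule: while an object is being manipulated,
    # ignore gaze so that glancing away does not cancel the manipulation.
    if context == "manipulating":
        events = [e for e in events if e.modality != "gaze"]
    if not events:
        return None
    return max(events, key=lambda e: PRIORITY[e.modality])

# Example: gaze fixates a cube while a voice command arrives; voice wins.
winner = resolve(
    [ModalityEvent("gaze", "select", "cube_3"),
     ModalityEvent("voice", "rotate", "cube_3")],
    context="idle",
)
print(winner)  # ModalityEvent(modality='voice', command='rotate', target='cube_3')
```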