Multimodal fusion: Gesture and speech input in augmented reality environment
Augmented Reality (AR) allows users to interact with virtual and physical objects simultaneously because it blends the real world with the virtual world seamlessly. However, most AR interfaces apply conventional Virtual Reality (VR) interaction techniques without modification. In this...
Main Authors: Ismail, Ajune Wanis; Sunar, Mohd. Shahrizal
Format: Conference or Workshop Item
Published: 2015
Subjects: QA75 Electronic computers. Computer science
Online Access: http://eprints.utm.my/id/eprint/59385/ http://dx.doi.org/10.1007/978-3-319-13153-5_24
Tags: |
Add Tag
No Tags, Be the first to tag this record!
|
id: my.utm.59385
record_format: eprints
spelling: my.utm.59385 2022-04-10T05:48:33Z http://eprints.utm.my/id/eprint/59385/ Multimodal fusion: Gesture and speech input in augmented reality environment Ismail, Ajune Wanis; Sunar, Mohd. Shahrizal QA75 Electronic computers. Computer science. Augmented Reality (AR) allows users to interact with virtual and physical objects simultaneously because it blends the real world with the virtual world seamlessly. However, most AR interfaces apply conventional Virtual Reality (VR) interaction techniques without modification. In this paper we explore multimodal fusion for AR with speech and hand-gesture input. Multimodal fusion enables users to interact with computers through various input modalities such as speech, gesture, and eye gaze. As a first stage in proposing the multimodal interaction, the input modalities are selected before being integrated into an interface. The paper reviews several related works to recap multimodal approaches, which have recently become one of the research trends in AR, and surveys existing multimodal work in both VR and AR. In AR, multimodality is considered a solution for improving the interaction between virtual and physical entities, and it is an ideal interaction technique for AR applications since AR supports interactions in the real and virtual worlds in real time. This paper describes recent AR developments that employ gesture and speech inputs, examines multimodal fusion and its developments, and concludes with a guideline on how to integrate gesture and speech inputs in an AR environment. 2015 Conference or Workshop Item PeerReviewed Ismail, Ajune Wanis and Sunar, Mohd. Shahrizal (2015) Multimodal fusion: Gesture and speech input in augmented reality environment. In: 4th International Neural Network Society Symposia Series on Computational Intelligence in Information Systems, INNS-CIIS 2014, 7 - 9 November 2014, Bandar Seri Begawan, Brunei. http://dx.doi.org/10.1007/978-3-319-13153-5_24
institution: Universiti Teknologi Malaysia
building: UTM Library
collection: Institutional Repository
continent: Asia
country: Malaysia
content_provider: Universiti Teknologi Malaysia
content_source: UTM Institutional Repository
url_provider: http://eprints.utm.my/
topic: QA75 Electronic computers. Computer science
description: Augmented Reality (AR) allows users to interact with virtual and physical objects simultaneously because it blends the real world with the virtual world seamlessly. However, most AR interfaces apply conventional Virtual Reality (VR) interaction techniques without modification. In this paper we explore multimodal fusion for AR with speech and hand-gesture input. Multimodal fusion enables users to interact with computers through various input modalities such as speech, gesture, and eye gaze. As a first stage in proposing the multimodal interaction, the input modalities are selected before being integrated into an interface. The paper reviews several related works to recap multimodal approaches, which have recently become one of the research trends in AR, and surveys existing multimodal work in both VR and AR. In AR, multimodality is considered a solution for improving the interaction between virtual and physical entities, and it is an ideal interaction technique for AR applications since AR supports interactions in the real and virtual worlds in real time. This paper describes recent AR developments that employ gesture and speech inputs, examines multimodal fusion and its developments, and concludes with a guideline on how to integrate gesture and speech inputs in an AR environment.
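The kind of speech-plus-gesture fusion the abstract describes can be illustrated with a minimal late-fusion sketch. The Python snippet below is not taken from the paper; the event classes, field names, and the 1.5-second pairing window are assumptions chosen only to show how a spoken command and a recent pointing gesture might be combined into a single AR command.

# Illustrative sketch only: pair a recognized speech command with the most
# recent pointing gesture. All names here are hypothetical, not the authors'
# implementation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class GestureEvent:
    """A recognized hand gesture, e.g. pointing at a virtual object."""
    kind: str          # "point", "grab", "release", ...
    target_id: str     # id of the AR object under the fingertip ray
    timestamp: float   # seconds


@dataclass
class SpeechEvent:
    """A recognized spoken command such as 'move this' or 'delete that'."""
    command: str       # "move", "rotate", "delete", ...
    timestamp: float   # seconds


def fuse(speech: SpeechEvent, gesture: Optional[GestureEvent],
         max_gap: float = 1.5) -> Optional[dict]:
    """Late fusion: bind a spoken verb to the object most recently pointed at,
    provided both events occurred within `max_gap` seconds of each other."""
    if gesture is None or gesture.kind != "point":
        return None
    if abs(speech.timestamp - gesture.timestamp) > max_gap:
        return None  # events too far apart to belong to one multimodal utterance
    return {"action": speech.command, "object": gesture.target_id}


if __name__ == "__main__":
    # "Delete that" spoken about 0.4 s after pointing at virtual object "cube-3".
    g = GestureEvent(kind="point", target_id="cube-3", timestamp=10.0)
    s = SpeechEvent(command="delete", timestamp=10.4)
    print(fuse(s, g))  # {'action': 'delete', 'object': 'cube-3'}

In this sketch the gesture resolves the deictic reference ("that") while speech supplies the action, which is the core idea behind combining the two modalities in an AR interface.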
format: Conference or Workshop Item
author: Ismail, Ajune Wanis; Sunar, Mohd. Shahrizal
author_sort: Ismail, Ajune Wanis
title: Multimodal fusion: Gesture and speech input in augmented reality environment
publishDate: 2015
url: http://eprints.utm.my/id/eprint/59385/ http://dx.doi.org/10.1007/978-3-319-13153-5_24