Object selection and scaling using multimodal interaction in mixed reality
Mixed Reality (MR) is the next evolution of human-computer interaction, as MR combines the physical and digital environments so that they coexist. Interaction is still a major research area in Augmented Reality (AR) but far less explored in MR, because current MR display techniques are not yet robust and intuitive enough to let users interact naturally with 3D content. New user-interaction techniques have been widely studied; an interaction technique becomes more advanced when the system can invoke more than one input modality. Multimodal interaction aims to deliver intuitive manipulation of multiple objects with gestures. This paper discusses a multimodal interaction technique that uses gesture and speech, and proposes an experimental setup to implement multimodal input in the MR interface. Real hand gestures are combined with speech input in MR to perform spatial object manipulation. The paper explains the implementation stage, which involves interaction using gesture and speech inputs to enhance the user experience in the MR workspace. After gesture input and speech commands are acquired, spatial manipulation for selection and scaling is invoked through multimodal interaction, and the paper ends with a discussion.
Main Authors: Aladin, M. Y. F.; Ismail, A. W.; Ismail, N. A.; Rahim, M. S. M.
Format: Conference or Workshop Item (peer reviewed)
Language: English
Published: 2020
Subjects: QA75 Electronic computers. Computer science
Online Access:
http://eprints.utm.my/id/eprint/92493/1/AWIsmail2020_ObjectSelectionAndScalingUsingMultimodalInteraction.pdf
http://eprints.utm.my/id/eprint/92493/
http://dx.doi.org/10.1088/1757-899X/979/1/012004
Record ID: my.utm.92493
Citation: Aladin, M. Y. F., Ismail, A. W., Ismail, N. A. and Rahim, M. S. M. (2020) Object selection and scaling using multimodal interaction in mixed reality. In: International Conference on Virtual and Mixed Reality Interfaces 2020 (ICVRMR 2020), 16-17 November 2020, Johor, Malaysia. http://dx.doi.org/10.1088/1757-899X/979/1/012004
Institution: Universiti Teknologi Malaysia (UTM Library, Institutional Repository)
Content source: UTM Institutional Repository, http://eprints.utm.my/
Country: Malaysia
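Note (not part of the repository record): the abstract describes selecting and scaling a virtual object by fusing a tracked hand gesture with a spoken command. The sketch below is a minimal, hypothetical Python illustration of that gesture-plus-speech fusion idea only; it is not the authors' implementation, and the names VirtualObject, GestureInput, and fuse, as well as the pinch and hand-separation fields, are illustrative assumptions. A real MR setup would take the gesture state from a hand tracker and the commands from a speech recognizer rather than from a scripted list.

```python
from dataclasses import dataclass


@dataclass
class VirtualObject:
    """A 3D object in the MR scene (hypothetical stand-in for a scene node)."""
    name: str
    scale: float = 1.0
    selected: bool = False


@dataclass
class GestureInput:
    """One frame of tracked-hand state (hypothetical fields)."""
    pinching: bool          # True while the user holds a pinch gesture
    hand_separation: float  # distance between the two hands, in metres


def fuse(speech_command: str, gesture: GestureInput,
         target: VirtualObject, baseline_separation: float = 0.2) -> VirtualObject:
    """Combine one speech command with the current gesture state.

    "select" plus a pinch marks the object as selected; "scale" maps the
    change in hand separation onto a uniform scale factor; "release"
    deselects the object.
    """
    if speech_command == "select" and gesture.pinching:
        target.selected = True
    elif speech_command == "scale" and target.selected:
        target.scale *= gesture.hand_separation / baseline_separation
    elif speech_command == "release":
        target.selected = False
    return target


if __name__ == "__main__":
    # Simulated input stream: one (speech command, gesture state) pair per frame.
    cube = VirtualObject("cube")
    frames = [
        ("select", GestureInput(pinching=True, hand_separation=0.20)),
        ("scale", GestureInput(pinching=True, hand_separation=0.30)),
        ("release", GestureInput(pinching=False, hand_separation=0.30)),
    ]
    for command, gesture in frames:
        cube = fuse(command, gesture, cube)
        print(command, cube)
```

In this sketch the speech command decides which manipulation to apply while the continuous gesture supplies its parameters, which is a common division of labour in gesture-plus-speech multimodal interfaces.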