Hybrid markerless augmented reality tracking method for planar surface


Bibliographic Details
Main Author: Afif, Fadhil Noer
Format: Thesis
Published: 2013
Subjects:
Online Access:http://eprints.utm.my/id/eprint/41646/
http://dms.library.utm.my:8080/vital/access/manager/Repository/vital:77754?queryType=vitalDismax&query=Hybrid+markerless+augmented+reality+tracking+method&public=true
Description
Summary:Markerless tracking for augmented reality should be not only accurate but also fast enough to provide seamless synchronization between real and virtual objects. Current methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors acting as a camera pose predictor. To align the augmentation with camera motion, the tracking substitutes feature-based camera pose estimation with inertial sensor measurements fused by a complementary filter, providing a more dynamic response. The proposed method tracks unknown environments with faster processing time than the feature-based approach. Moreover, it can sustain its estimation in situations where feature-based tracking loses track. The sensor-fusion tracking runs at about 22.97 FPS, up to five times faster than the feature-based tracking method used for comparison. Therefore, the proposed method can track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
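
The abstract describes fusing gyroscope and accelerometer readings with a complementary filter to predict camera orientation when feature-based tracking is slow or lost. The thesis does not include code in this record, so the following is only a minimal sketch of a generic complementary filter for pitch/roll prediction; the blending factor ALPHA, sample period DT, and sensor axis conventions are illustrative assumptions, not values taken from the thesis.

import math

# Illustrative constants (assumed, not from the thesis).
ALPHA = 0.98   # weight given to the integrated gyroscope estimate
DT = 0.01      # sample period in seconds (100 Hz assumed)

def complementary_filter(pitch, roll, gyro, accel):
    """Fuse gyro integration (fast but drifting) with accelerometer tilt (noisy but drift-free)."""
    # Propagate the previous orientation by integrating gyroscope angular rates (rad/s).
    pitch_gyro = pitch + gyro[0] * DT
    roll_gyro = roll + gyro[1] * DT

    # Estimate pitch and roll from the gravity direction measured by the accelerometer.
    ax, ay, az = accel
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll_acc = math.atan2(ay, az)

    # Blend the two: high-pass the gyro estimate, low-pass the accelerometer estimate.
    pitch = ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_acc
    roll = ALPHA * roll_gyro + (1.0 - ALPHA) * roll_acc
    return pitch, roll

# Example usage with synthetic sensor readings.
pitch, roll = 0.0, 0.0
gyro_sample = (0.01, -0.02, 0.0)   # angular rates around x, y, z in rad/s
accel_sample = (0.1, 0.0, 9.81)    # roughly gravity along the z axis, in m/s^2
pitch, roll = complementary_filter(pitch, roll, gyro_sample, accel_sample)
print(f"pitch={pitch:.4f} rad, roll={roll:.4f} rad")

In a hybrid tracker of the kind summarized above, an estimate like this would stand in for (or initialize) the vision-based pose whenever too few features are visible, which is how the inertial path can keep the augmentation aligned at low computational cost.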