3D motion and skeleton construction from monocular video
Format: Conference or Workshop Item
Published: 2020
Online Access: http://eprints.utm.my/id/eprint/89722/ | http://dx.doi.org/10.1007/978-981-15-0058-9_8
Abstract: We describe 3D motion construction as a framework for building a 3D motion and skeleton from a monocular (2D) video source. The process comprises three main phases: generating ground-truth 2D annotations with OpenPose; generating the mesh and 3D matrices of the person across the image sequence from the 2D annotations and bounding boxes; and, finally, using those matrices to create an HIK 3D skeleton in a standalone Maya application. Throughout, we rely heavily on 2D annotation produced by a convolutional neural network. We demonstrate our results on a video of the traditional Malaysian dance zapin.
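The record does not include code, but the first phase of the pipeline has a well-known data format: OpenPose writes one JSON file per frame, where each detected person carries a flat `pose_keypoints_2d` list of `[x, y, confidence]` triples. As a minimal sketch of how that output could be consumed (function names and the confidence threshold are our own, not from the paper), the following parses a frame and derives the person bounding box that, together with the 2D annotation, feeds the mesh-recovery phase:

```python
import json

def parse_openpose_frame(frame_json):
    """Extract (x, y, confidence) keypoint triples for each detected person
    from one frame of OpenPose JSON output, where pose_keypoints_2d is a
    flat [x0, y0, c0, x1, y1, c1, ...] list."""
    data = json.loads(frame_json)
    people = []
    for person in data.get("people", []):
        flat = person["pose_keypoints_2d"]
        kps = [(flat[i], flat[i + 1], flat[i + 2])
               for i in range(0, len(flat), 3)]
        people.append(kps)
    return people

def bounding_box(keypoints, min_conf=0.1):
    """Axis-aligned bounding box over keypoints whose confidence exceeds
    min_conf; low-confidence (often occluded) joints are ignored."""
    pts = [(x, y) for x, y, c in keypoints if c > min_conf]
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

# Tiny hand-made frame with one person and three keypoints,
# the last of which has too low a confidence to count.
sample = json.dumps({
    "people": [{"pose_keypoints_2d": [100.0, 50.0, 0.9,
                                      120.0, 80.0, 0.8,
                                      90.0, 200.0, 0.05]}]
})
person = parse_openpose_frame(sample)[0]
print(bounding_box(person))  # (100.0, 50.0, 120.0, 80.0)
```

In a full pipeline this box would typically be padded and squared before being passed, per frame, to the mesh-estimation stage described in the abstract.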