Personality recognition using composite audio-video features on custom CNN architecture
Automatic personality recognition is becoming more prominent in the domain of intelligent job matching. Traditionally, individual personality traits are measured through questionnaires carefully designed around personality models such as the Big Five or MBTI. Although the attributes in these models are...
Saved in:
Main Author: | Eng, Zi Jye |
---|---|
Format: | Final Year Project / Dissertation / Thesis |
Published: | 2020 |
Subjects: | Q Science (General) |
Online Access: | http://eprints.utar.edu.my/3864/1/16ACB05282_FYP.pdf http://eprints.utar.edu.my/3864/ |
id | my-utar-eprints.3864 |
---|---|
record_format | eprints |
spelling | my-utar-eprints.3864 2021-01-06T13:14:53Z Personality recognition using composite audio-video features on custom CNN architecture Eng, Zi Jye Q Science (General) Automatic personality recognition is becoming more prominent in the domain of intelligent job matching. Traditionally, individual personality traits are measured through questionnaires carefully designed around personality models such as the Big Five or MBTI. Although the attributes in these models are proven effective, data collection through surveys can result in biased scoring due to illusory superiority. Machine-learning-based personality models alleviate these constraints by modelling behavioural cues from videos annotated by personality experts; for example, the ECCV ChaLearn LAP 2016 challenge seeks to recognise and quantify human personality traits. Using variants of CNNs, existing methods attempt to improve model accuracy by adding custom layers and tuning hyperparameters, training on the full ChaLearn LAP 2016 dataset, which is compute-intensive. This project proposes a rapid behavioural modelling technique for short videos that improves model accuracy and prevents overfitting while minimising the amount of training data needed. The contribution of this work is twofold: (1) a selective sampling technique that uses only the first seven seconds of each video for training, and (2) a personality-trait recognition model trained on a limited amount of data with optimal performance. By applying the selective sampling technique and including multiple modalities, the model achieves a test score of 90.30 with almost six times less training data. 2020-05-14 Final Year Project / Dissertation / Thesis NonPeerReviewed application/pdf http://eprints.utar.edu.my/3864/1/16ACB05282_FYP.pdf Eng, Zi Jye (2020) Personality recognition using composite audio-video features on custom CNN architecture. Final Year Project, UTAR. http://eprints.utar.edu.my/3864/ |
institution | Universiti Tunku Abdul Rahman |
building | UTAR Library |
collection | Institutional Repository |
continent | Asia |
country | Malaysia |
content_provider | Universiti Tunku Abdul Rahman |
content_source | UTAR Institutional Repository |
url_provider | http://eprints.utar.edu.my |
topic | Q Science (General) |
spellingShingle | Q Science (General) Eng, Zi Jye Personality recognition using composite audio-video features on custom CNN architecture |
description | Automatic personality recognition is becoming more prominent in the domain of intelligent job matching. Traditionally, individual personality traits are measured through questionnaires carefully designed around personality models such as the Big Five or MBTI. Although the attributes in these models are proven effective, data collection through surveys can result in biased scoring due to illusory superiority. Machine-learning-based personality models alleviate these constraints by modelling behavioural cues from videos annotated by personality experts; for example, the ECCV ChaLearn LAP 2016 challenge seeks to recognise and quantify human personality traits. Using variants of CNNs, existing methods attempt to improve model accuracy by adding custom layers and tuning hyperparameters, training on the full ChaLearn LAP 2016 dataset, which is compute-intensive. This project proposes a rapid behavioural modelling technique for short videos that improves model accuracy and prevents overfitting while minimising the amount of training data needed. The contribution of this work is twofold: (1) a selective sampling technique that uses only the first seven seconds of each video for training, and (2) a personality-trait recognition model trained on a limited amount of data with optimal performance. By applying the selective sampling technique and including multiple modalities, the model achieves a test score of 90.30 with almost six times less training data. |
format | Final Year Project / Dissertation / Thesis |
author | Eng, Zi Jye |
author_facet | Eng, Zi Jye |
author_sort | Eng, Zi Jye |
title | Personality recognition using composite audio-video features on custom CNN architecture |
title_short | Personality recognition using composite audio-video features on custom CNN architecture |
title_full | Personality recognition using composite audio-video features on custom CNN architecture |
title_fullStr | Personality recognition using composite audio-video features on custom CNN architecture |
title_full_unstemmed | Personality recognition using composite audio-video features on custom CNN architecture |
title_sort | personality recognition using composite audio-video features on custom cnn architecture |
publishDate | 2020 |
url | http://eprints.utar.edu.my/3864/1/16ACB05282_FYP.pdf http://eprints.utar.edu.my/3864/ |
_version_ | 1688551784779350016 |
score | 13.160551 |
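
The selective-sampling step described in the abstract — training on only the first seven seconds of each clip — can be sketched as follows. This is an illustrative assumption of how such a sampler might look, not the author's actual code; the function name and frame-array layout are hypothetical.

```python
import numpy as np

def select_first_seconds(frames: np.ndarray, fps: float, seconds: float = 7.0) -> np.ndarray:
    """Keep only the frames from the first `seconds` of a clip.

    frames: array of shape (num_frames, height, width, channels)
    fps:    frame rate of the source video
    """
    # Number of frames covering the leading window, capped at clip length.
    n = min(len(frames), int(round(fps * seconds)))
    return frames[:n]

# Toy example: a 15-second clip at 30 fps of 32x32 RGB frames.
clip = np.zeros((15 * 30, 32, 32, 3), dtype=np.uint8)
sample = select_first_seconds(clip, fps=30)
print(sample.shape[0])  # 210 frames = 7 s * 30 fps
```

Because every training example shrinks to a fixed seven-second prefix, the total training set is several times smaller than the full-length ChaLearn clips, which is consistent with the abstract's claim of roughly six-times-smaller training data.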