ACORT: A compact object relation transformer for parameter efficient image captioning
Recent research that applies Transformer-based architectures to image captioning has resulted in state-of-the-art image captioning performance, capitalising on the success of Transformers on natural language tasks. Unfortunately, although these models work well, one major flaw is their large model sizes...
Main Authors: Tan, Jia Huei; Tan, Ying Hua; Chan, Chee Seng; Chuah, Joon Huang
Format: Article
Published: Elsevier, 2022
Online Access: http://eprints.um.edu.my/32731/
Similar Items
- COMIC: Toward A Compact Image Captioning Model With Attention
  by: Tan, Jia Huei, et al.
  Published: (2019)
- End-to-end supermask pruning: Learning to prune image captioning models
  by: Tan, Jia Huei, et al.
  Published: (2022)
- Phrase-based image caption generator with hierarchical LSTM network
  by: Tan, Ying Hua, et al.
  Published: (2019)
- Automated image captioning with deep neural networks
  by: Abdullah, Ahmad Zarir, et al.
  Published: (2020)
- Protect, show, attend and tell: Empowering image captioning models with ownership protection
  by: Lim, Jian Han, et al.
  Published: (2022)