Pre-trained language model with feature reduction and no fine-tuning

Pre-trained language models have been shown to achieve excellent results in Natural Language Processing tasks such as Sentiment Analysis. However, the sentence embedding produced by the base model of Bidirectional Encoder Representations from Transformers (BERT) has 768 features per sentence, and there will be more than mil...
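The abstract describes a pipeline of frozen (non-fine-tuned) BERT sentence embeddings followed by a feature-reduction step before classification. Below is a minimal illustrative sketch of that idea in Python, assuming the Hugging Face transformers and scikit-learn libraries; PCA stands in for the reduction step because the snippet above does not name the paper's actual method, and the example sentences are hypothetical.

```python
# Sketch: 768-dim sentence embeddings from pre-trained BERT (no fine-tuning),
# then a feature-reduction step. PCA is an illustrative stand-in only; the
# paper's actual reduction technique is not given in the abstract snippet.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # weights stay frozen: embeddings only, no fine-tuning

sentences = [  # hypothetical sentiment-analysis inputs
    "The movie was an absolute delight.",
    "Terrible service and a cold meal.",
    "It was fine, nothing special.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] token vector as the sentence embedding: shape (n_sentences, 768)
embeddings = outputs.last_hidden_state[:, 0, :].numpy()

# Feature reduction: project 768 dims down to a few components
# (for PCA, n_components must not exceed the number of sentences here).
reduced = PCA(n_components=2).fit_transform(embeddings)
print(embeddings.shape, "->", reduced.shape)  # (3, 768) -> (3, 2)
```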

Bibliographic Details
Main Authors: Kit, Y. H., Mokji, M.
Format: Conference or Workshop Item
Published: 2022
Subjects:
Online Access: http://eprints.utm.my/id/eprint/98842/
http://dx.doi.org/10.1007/978-981-19-3923-5_59