Pre-trained language model with feature reduction and no fine-tuning

Pre-trained language models have been proven to achieve excellent results in Natural Language Processing tasks such as Sentiment Analysis. However, the sentence embedding from the base model of Bidirectional Encoder Representations from Transformers (BERT) has 768 features per sentence, and there will be more than mil...
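
As a minimal sketch of the setting the abstract describes (a frozen pre-trained BERT base model producing 768-dimensional sentence embeddings, followed by a feature-reduction step), the Python snippet below uses the Hugging Face transformers library. The checkpoint bert-base-uncased, mean pooling, and PCA are illustrative assumptions; this snippet of the abstract does not name the paper's exact model, pooling strategy, or reduction method.

    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.decomposition import PCA

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()  # keep the pre-trained weights frozen: no fine-tuning

    sentences = ["The movie was wonderful.", "The plot made no sense."]
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # Mean-pool token vectors into one 768-dimensional embedding per sentence,
    # ignoring padding positions via the attention mask.
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
    print(embeddings.shape)  # torch.Size([2, 768])

    # Illustrative feature reduction: project the 768 features to a smaller
    # space with PCA (a stand-in for whatever reduction the paper uses).
    reduced = PCA(n_components=2).fit_transform(embeddings.numpy())
    print(reduced.shape)  # (2, 2)
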

Bibliographic Details
Main Authors: Kit, Y. H., Mokji, M.
Format: Conference or Workshop Item
Published: 2022
Online Access: http://eprints.utm.my/id/eprint/98842/
http://dx.doi.org/10.1007/978-981-19-3923-5_59