Pre-trained language model with feature reduction and no fine-tuning
Pre-trained language models have been shown to achieve excellent results in Natural Language Processing tasks such as Sentiment Analysis. However, the sentence embedding produced by the base model of Bidirectional Encoder Representations from Transformers (BERT) has 768 features per sentence, and there will be more than mil...
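The abstract describes extracting fixed 768-dimensional sentence embeddings from a frozen (not fine-tuned) BERT base model and reducing the feature count before classification. Below is a minimal sketch of that pipeline, assuming the Hugging Face transformers encoder; the mean pooling, PCA reduction, and target dimensionality are illustrative stand-ins, since the record does not specify the paper's exact reduction method.

```python
import torch
from sklearn.decomposition import PCA
from transformers import BertModel, BertTokenizer

# Frozen BERT base: no fine-tuning, used purely as a feature extractor
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = [
    "The movie was fantastic.",
    "The plot dragged on forever.",
    "Great acting and a satisfying ending.",
    "The service was terrible.",
]

with torch.no_grad():
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**inputs)
    # Mean-pool token vectors into one 768-dimensional embedding per sentence,
    # ignoring padding positions via the attention mask
    mask = inputs["attention_mask"].unsqueeze(-1)
    embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)

# Feature reduction step (PCA here is an assumption, not the paper's method);
# n_components must not exceed the number of samples
pca = PCA(n_components=2)
reduced = pca.fit_transform(embeddings.numpy())
print(embeddings.shape, "->", reduced.shape)  # (4, 768) -> (4, 2)
```

The reduced vectors can then be fed to any lightweight classifier for sentiment analysis, keeping the expensive BERT weights untouched.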
Main Authors:
Format: Conference or Workshop Item
Published: 2022
Subjects:
Online Access: http://eprints.utm.my/id/eprint/98842/
http://dx.doi.org/10.1007/978-981-19-3923-5_59