Pre-trained language model with feature reduction and no fine-tuning
Pre-trained language models have been proven to achieve excellent results in Natural Language Processing tasks such as Sentiment Analysis. However, the base model of Bidirectional Encoder Representations from Transformers (BERT) produces a 768-dimensional sentence embedding for each sentence, and there will be more than mil...
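The truncated abstract points at the core idea: a frozen BERT-base encoder already yields a 768-dimensional embedding per sentence, so a feature-reduction step can shrink the input to a lightweight sentiment classifier without any fine-tuning. The sketch below is only illustrative; the model name bert-base-uncased, mean pooling, and PCA as the reduction method are assumptions, since the record does not state which BERT variant or reduction technique the authors used.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA

# Load a frozen pre-trained encoder; no fine-tuning is performed (assumed variant).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["The movie was wonderful.", "The service was terrible."]

with torch.no_grad():
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch)
    # Mean-pool token vectors into one 768-dimensional embedding per sentence.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    sent_emb = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

# Illustrative feature reduction (PCA is a stand-in, not the paper's stated method):
# project the 768 features down to a handful; n_components cannot exceed the
# number of sentences in this toy example.
pca = PCA(n_components=2)
reduced = pca.fit_transform(sent_emb.numpy())
print(sent_emb.shape, "->", reduced.shape)  # torch.Size([2, 768]) -> (2, 2)
```

The reduced vectors would then feed a small downstream classifier; only that classifier, not the encoder, would be trained.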
Main Authors: Kit, Y. H., Mokji, M.
Format: Conference or Workshop Item
Published: 2022
Online Access: http://eprints.utm.my/id/eprint/98842/
http://dx.doi.org/10.1007/978-981-19-3923-5_59
Similar Items
- Pre-trained language model with feature reduction and no fine-tuning
  by: Kit, Y. H., et al.
  Published: (2022)
- Sentiment analysis using pre-trained language model with no fine-tuning and less resource
  by: Kit, Yuheng, et al.
  Published: (2022)
- Modelling and PSO fine-tuned PID control of quadrotor UAV
  by: Noordin, A., et al.
  Published: (2017)
- Patterned ground shield for inductance fine-tuning
  by: Yusof, Nur S., et al.
  Published: (2022)
- The Parametric Study and Fine-Tuning of Bow-Tie Slot Antenna with Loaded Stub
  by: Shafiei, M.M., et al.
  Published: (2017)