Enhancements to the sequence-to-sequence-based natural answer generation models
Main Authors:
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2020
Online Access:
http://psasir.upm.edu.my/id/eprint/88819/1/BOT.pdf
http://psasir.upm.edu.my/id/eprint/88819/
https://ieeexplore.ieee.org/document/9025265
Summary: Academic researchers have shown great interest in continuously improving the sequence-to-sequence (Seq2Seq) model for natural answer generation (NAG) in chatbots. The Seq2Seq model has a known weakness: it tends to generate answers that are generic, meaningless, and inconsistent with the questions. However, a comprehensive literature review of the factors contributing to this weakness and of potential solutions is still missing. This review article fills that gap by surveying the Seq2Seq-based natural answer generation literature to identify those factors and the methods proposed to address them. The review identifies several contributing factors: the input question alone is insufficient to determine a meaningful output; the use of cross-entropy as the loss function during training; infrequent words in the training data; language-model influence, which produces answers not relevant to the question; the use of teacher forcing during training, which results in exposure bias; long sentences; and the inability to consider dialogue history. It also identifies and reviews the methods proposed to address the weakness, such as utilizing additional embeddings and encoders, using different loss functions and training approaches, and employing other mechanisms such as copying source words and attending to certain portions of the input. For discussion, these methods are grouped into four broad categories: Structural Modifications, Augmented Learning, Beam Search, and Complementary Mechanisms. Finally, the paper highlights unexplored areas in Seq2Seq modeling and proposes potential future work for natural answer generation.
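The summary names teacher forcing, and the exposure bias it causes, among the factors behind weak Seq2Seq answers. The PyTorch sketch below illustrates that distinction under toy assumptions; it is not the reviewed article's method, and all dimensions, the GRU decoder, and the `SOS_ID` token are illustrative placeholders. With teacher forcing the decoder sees the gold token at every step, whereas at inference it must feed back its own predictions; that train/test mismatch is what the literature calls exposure bias.

```python
# Minimal sketch (assumed toy setup, not the paper's model): a Seq2Seq
# decoding loop trained with cross-entropy, with teacher forcing on or off.
import torch
import torch.nn as nn

VOCAB, EMB, HID, SOS_ID = 1000, 64, 128, 1  # illustrative sizes and <sos> id

embed = nn.Embedding(VOCAB, EMB)
decoder = nn.GRU(EMB, HID, batch_first=True)
out_proj = nn.Linear(HID, VOCAB)
loss_fn = nn.CrossEntropyLoss()  # the cross-entropy objective the review discusses

def decode(hidden, target, teacher_forcing=True):
    """Run the decoder one token at a time, accumulating cross-entropy loss.

    hidden: (1, batch, HID) encoder summary; target: (batch, T) gold token ids.
    teacher_forcing=True feeds the gold token at each step; False feeds back
    the model's own argmax prediction, exposing the train/test mismatch.
    """
    batch, T = target.shape
    inp = torch.full((batch, 1), SOS_ID, dtype=torch.long)
    loss = torch.tensor(0.0)
    for t in range(T):
        emb = embed(inp)                    # (batch, 1, EMB)
        out, hidden = decoder(emb, hidden)  # out: (batch, 1, HID)
        logits = out_proj(out.squeeze(1))   # (batch, VOCAB)
        loss = loss + loss_fn(logits, target[:, t])
        if teacher_forcing:
            inp = target[:, t:t + 1]        # feed the gold token
        else:
            inp = logits.argmax(dim=-1, keepdim=True)  # feed own prediction
    return loss / T

# Toy usage: a random "encoder state" and a gold answer of length 5.
h0 = torch.zeros(1, 2, HID)
gold = torch.randint(0, VOCAB, (2, 5))
print(decode(h0, gold, teacher_forcing=True).item())
```

Flipping `teacher_forcing` to False exposes the model to its own, possibly wrong, predictions during training, which is the general idea behind training-time remedies such as scheduled sampling.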