Improving Neural Conversational Models with Entropy-Based Data Filtering
Current neural-network-based conversational models lack diversity and generate boring responses to open-ended utterances. Priors such as persona, emotion, or topic provide additional information to dialog models to aid response generation, but annotating a dataset with such priors is expensive and annotations are rarely available. While previous methods for improving the quality of open-domain response generation focused on either the underlying model or the training objective, we present a method for filtering dialog datasets by removing generic utterances from the training data using a simple entropy-based approach that requires no human supervision. We conduct extensive experiments with different variations of our method and compare dialog models across 13 evaluation metrics to show that training on datasets filtered this way results in better conversational quality, as chatbots learn to output more diverse responses.
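To make the filtering idea concrete: if identical source utterances are treated as one cluster, the empirical distribution of their responses can be estimated from the training pairs, and pairs whose source entropy exceeds a threshold can be dropped. The sketch below is a minimal Python illustration under those assumptions; the function names and the threshold value are ours, not the authors', and the paper also explores target-side filtering and cluster-based variants beyond this identity-based one.

import math
from collections import Counter, defaultdict

def utterance_entropy(pairs):
    """Estimate H(target | source) for each distinct source utterance.

    `pairs` is a list of (source, target) strings. For every source we
    build the empirical distribution of its targets and compute the
    Shannon entropy of that distribution (in bits). A source that
    elicits many different replies (e.g. "how are you?") gets high
    entropy, marking it as generic.
    """
    targets_by_source = defaultdict(Counter)
    for src, tgt in pairs:
        targets_by_source[src][tgt] += 1

    entropy = {}
    for src, counts in targets_by_source.items():
        total = sum(counts.values())
        entropy[src] = -sum(
            (c / total) * math.log2(c / total) for c in counts.values()
        )
    return entropy

def filter_pairs(pairs, threshold=1.0):
    """Drop training pairs whose source utterance exceeds the entropy
    threshold. The threshold here is illustrative, not the paper's."""
    entropy = utterance_entropy(pairs)
    return [(s, t) for s, t in pairs if entropy[s] <= threshold]

# Toy usage: the generic source "how are you?" has two distinct replies
# (entropy = 1 bit) and is removed at any threshold below 1.
data = [
    ("how are you?", "fine, thanks."),
    ("how are you?", "not great."),
    ("what is the capital of france?", "paris."),
]
print(filter_pairs(data, threshold=0.5))
# -> [('what is the capital of france?', 'paris.')]

On a real corpus the same logic applies per cluster of sources; the entropy threshold then becomes the knob that trades dataset size against the removal of generic utterances.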
Authors

Richard Csaky
Patrik Purgai
Gabor Recski

Github

Description: General purpose unsupervised sentence representations
Language: C++
Stargazers: 635
Forks: 146
Open Issues: 20
Network: 146
Subscribers: 37
Other

Sample Sizes (N=): None
Inserted: 05/14/19 06:04PM
Words Total: 10,576
Words Unique: 3,224
Tweets

arxiv_cscl: Improving Neural Conversational Models with Entropy-Based Data Filtering https://t.co/On0u2SG9PL
arxiv_in_review: #acl2019nlp Improving Neural Conversational Models with Entropy-Based Data Filtering. (arXiv:1905.05471v1 [cs.CL]) https://t.co/RLnRdZBNML
SciFi: Improving Neural Conversational Models with Entropy-Based Data Filtering. https://t.co/63b5AlflNI
BrundageBot: Improving Neural Conversational Models with Entropy-Based Data Filtering. Richard Csaky, Patrik Purgai, and Gabor Recski https://t.co/NTPsNeIQnr