Language GANs Falling Short
Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, with their poor performance attributed to exposure bias: at inference time, the model is fed its own predictions instead of ground-truth tokens, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial approaches to NLG, on the grounds that GANs do not suffer from exposure bias. In this work, we make several surprising observations that contradict common beliefs. We first revisit the canonical evaluation framework for NLG and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality-diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum, and find that MLE models consistently outperform the proposed GAN variants across the entire quality-diversity space. Our results have two implications: 1) the impact of exposure bias on sample quality is less severe than previously thought, and 2) temperature tuning provides a better quality-diversity trade-off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive.
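The temperature mechanism the abstract refers to is a one-line change at sampling time: logits are divided by a temperature before the softmax, so temperatures below 1 sharpen the model's conditional distribution (higher quality, lower diversity) and temperatures above 1 flatten it. A minimal, illustrative sketch in plain NumPy follows; the function name and toy values are our own, not from the paper.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token id from softmax(logits / temperature).

    temperature < 1.0 lowers the entropy of the conditional
    distribution (sharper, higher-quality, less diverse samples);
    temperature > 1.0 raises it (more diverse, noisier samples).
    Illustrative sketch only; not the authors' code.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Sweeping the temperature traces out a quality-diversity curve for a
# fixed MLE-trained model, with no retraining required.
logits = np.array([2.0, 1.0, 0.5, -1.0])  # hypothetical next-token logits
for t in (0.5, 1.0, 1.5):
    print(t, sample_with_temperature(logits, temperature=t))
```

Because the temperature is applied only at inference, a single trained MLE model yields the entire spectrum of operating points that the paper compares against GAN variants.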
Authors

Massimo Caccia
Lucas Caccia
William Fedus
Hugo Larochelle
Joelle Pineau
Laurent Charlin
Other
Sample Sizes (N=):
Source:
Abstract: None
Inserted: 11/06/18 06:02PM
Words Total: 6,043
Words Unique: 1,971
Tweets
arxiv_pop: Posted 2018/11/06, ranked #2 in CL (Computation and Language). Language GANs Falling Short https://t.co/8zrrxq5nS0 22 Tweets 36 Retweets 105 Favorites
icoxfog417: @sei_shinagawa @caesar_wanya Chiming in from the sidelines, but there was a recent report that GANs for text generation underperform plain maximum-likelihood estimation. How practical natural language × GANs really are seems like quite a deep question. https://t.co/dFAvTL0S8O
icoxfog417: @sei_shinagawa @caesar_wanya Chiming in from the sidelines, but there was a recent report that GANs for text generation underperform plain maximum-likelihood estimation. FM-GAN is also among the models covered. https://t.co/dFAvTL0S8O
Montreal_AI: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
arxiv_cscl: Language GANs Falling Short https://t.co/7YppMW8PJy
kz311: Diversity and quality in natural language processing. So they have set their sights on this after all. I will just keep working with my own method. Frankly, I am only interested in whether management goes ideally in the absence of emotion. Language GANs Falling Short https://t.co/WlPBZo8ULp
RexDouglass: RT @arxiv_cscl: Language GANs Falling Short https://t.co/7YppMW8PJy
KreutzerJulia: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
arxiv_cscl: Language GANs Falling Short https://t.co/7YppMWqqB6
DocXavi: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
sariganesha: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
narges_razavian: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
Rahul_Bhalley: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
Torontoedge: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
SigP226: #stanfordnlp RT sleepinyourhat: Thank you, MILA, for articulating why most NLP researchers have been so frustrated by the wave of GANs-for-text papers: https://t.co/aMAOBkAVFQ
AmineKorchiMD: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
CarrieGaard: [1811.02549] Language GANs Falling Short https://t.co/57u2dOoghc
annabelle_nlp: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
ceobillionaire: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
balajiln: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
eDezhic: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
SamiraEKahou: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
nicogontier: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
kotti_sasikanth: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
AmirSaffari: Language GANs Falling Short https://t.co/j64909QM2K
davlanade: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
onucharlesc: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
_lpag: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
truskovskiy: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
montrealcdl: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
skrish_13: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
lcharlin: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
noveens97: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
_willfalcon: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
adropboxspace: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
jrbtaylor: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
reddit_ml: [R] [1811.02549] Language GANs Falling Short https://t.co/5KQ8sJfVro
rllabmcgill: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
tarantulae: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
noecasas: RT @MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCacci…
MILAMontreal: Check out our new work exploring why language GANs are falling short https://t.co/bsFOZvkpZy cc @masscaccia @LucasPCaccia @LiamFedus @hugo_larochelle @lcharlin
LiamFedus: With the careful investigative work of @masscaccia and @LucasPCaccia, we find that NLP GAN models still aren't improving over a simple maximum-likelihood baseline with reduced softmax temperature as assessed on (local/global) quality-diversity spectrum! https://t.co/Wofbmg7EIH
sleepinyourhat: Thank you, MILA, for articulating why most NLP researchers have been so frustrated by the wave of GANs-for-text papers: https://t.co/fd0EMGQjMZ
BrundageBot: Language GANs Falling Short. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin https://t.co/VZRY5fnA8Q