Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks
Many state-of-the-art neural models for NLP are heavily parameterized and thus memory inefficient. This paper proposes a series of lightweight and memory-efficient neural architectures for a potpourri of natural language processing (NLP) tasks. To this end, our models exploit computation using Quaternion algebra and hypercomplex spaces, enabling not only expressive inter-component interactions but also significantly ($75\%$) reduced parameter size due to fewer degrees of freedom in the Hamilton product. We propose Quaternion variants of standard models, giving rise to new architectures such as the Quaternion Attention Model and the Quaternion Transformer. Extensive experiments on a battery of NLP tasks demonstrate the utility of the proposed Quaternion-inspired models, enabling up to $75\%$ reduction in parameter size without significant loss in performance.
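The 75% figure follows directly from how the Hamilton product ties weights across the four quaternion components. Below is a minimal NumPy sketch of a quaternion-valued "linear layer" in this spirit; it is not the authors' code, and the function name `quaternion_linear` and the toy dimensions are assumptions made purely for illustration:

```python
import numpy as np

def quaternion_linear(x, Wr, Wi, Wj, Wk):
    """Hamilton-product linear map y = W (x) x with a quaternion-valued W.

    x is a real vector of length d_in laid out as [r | i | j | k] parts,
    each of size d_in/4; Wr, Wi, Wj, Wk are the four real (d_out/4, d_in/4)
    matrices that together form one quaternion weight matrix.
    """
    r, i, j, k = np.split(x, 4)
    # Hamilton product rules (i^2 = j^2 = k^2 = ijk = -1): every weight
    # matrix touches every input component, giving the inter-component
    # interactions while reusing the same four matrices.
    out_r = Wr @ r - Wi @ i - Wj @ j - Wk @ k
    out_i = Wr @ i + Wi @ r + Wj @ k - Wk @ j
    out_j = Wr @ j - Wi @ k + Wj @ r + Wk @ i
    out_k = Wr @ k + Wi @ j - Wj @ i + Wk @ r
    return np.concatenate([out_r, out_i, out_j, out_k])

# Toy usage: a 16 -> 16 map.
rng = np.random.default_rng(0)
d_in, d_out = 16, 16
Wr, Wi, Wj, Wk = (rng.standard_normal((d_out // 4, d_in // 4)) for _ in range(4))
x = rng.standard_normal(d_in)
y = quaternion_linear(x, Wr, Wi, Wj, Wk)

print(y.shape)                                                # (16,)
print("quaternion params:", 4 * (d_out // 4) * (d_in // 4))  # 64
print("real dense params:", d_in * d_out)                     # 256
```

A real dense layer mapping 16 inputs to 16 outputs stores 256 weights, whereas the quaternion version above stores four 4x4 matrices, i.e. 64 weights, which is the one-quarter parameter count behind the claimed 75% saving.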

Authors

Yi Tay
Aston Zhang
Luu Anh Tuan
Jinfeng Rao
Shuai Zhang
Shuohang Wang
Jie Fu
Siu Cheung Hui

Github
Repository for ACL 2019 paper
Stargazers: 20
Forks: 0
Open Issues: 0
Network: 0
Subscribers: 7
Language: None
Other
Sample Sizes (N=): None
Inserted: 06/11/19 06:04PM
Words Total: 6,027
Words Unique: 2,206
Tweets
morioka: RT @stakemura: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks, ACL 2019 https://t.co/xeCM12C5eF Qua…
tsubosaka: RT @stakemura: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks, ACL 2019 https://t.co/xeCM12C5eF Qua…
stakemura: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks, ACL 2019 https://t.co/xeCM12C5eF Making NLP models lightweight by introducing Quaternions; effective at reducing memory consumption. So these are now being applied to areas beyond computer vision too…
arxiv_in_review: #acl2019nlp Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks. (arXiv:1906.04393v1 [cs.CL]) https://t.co/bTZSLeKmbk
arxiv_cscl: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks https://t.co/PQdNA1OFCh
Memoirs: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks. https://t.co/A479t8SYIK
arxiv_cs_LG: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks. Yi Tay, Aston Zhang, Luu Anh Tuan, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, and Siu Cheung Hui https://t.co/tnHWNsUThL
BrundageBot: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks. Yi Tay, Aston Zhang, Luu Anh Tuan, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, and Siu Cheung Hui https://t.co/vRlj6SMSyz
ytay017: Check out our ACL19 paper - Quaternion Attention and Quaternion Transformer models for NLP tasks. 4x parameter saving without much performance drop. https://t.co/q15aPKZh7W joint work with @astonzhangAZ @Jeffy_Sailing @DavenCheung @bigaidream #nlpconf #aclnlp2019