##### Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks
Many state-of-the-art neural models for NLP are heavily parameterized and thus memory-inefficient. This paper proposes a series of lightweight and memory-efficient neural architectures for a range of natural language processing (NLP) tasks. To this end, our models exploit computation with Quaternion algebra and hypercomplex spaces, enabling not only expressive inter-component interactions but also a significant ($75\%$) reduction in parameter size owing to fewer degrees of freedom in the Hamilton product. We propose Quaternion variants of standard models, giving rise to new architectures such as the Quaternion Attention Model and the Quaternion Transformer. Extensive experiments on a battery of NLP tasks demonstrate the utility of the proposed Quaternion-inspired models, enabling up to a $75\%$ reduction in parameter size without significant loss in performance.
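The $75\%$ figure follows from the Hamilton product sharing four small component matrices where a real-valued layer would use one full matrix. A minimal numpy sketch of this idea (the function name and shapes are illustrative, not the paper's actual implementation):

```python
import numpy as np

def quaternion_linear(x, w_r, w_i, w_j, w_k):
    """Sketch of a Hamilton-product 'linear' layer.

    x: input of size d, split into four quaternion components.
    w_r, w_i, w_j, w_k: shared component matrices of shape (d//4, m//4).
    A real-valued layer mapping d -> m needs d*m parameters; the four
    shared matrices here need only 4*(d//4)*(m//4) = d*m/4, i.e. the
    75% reduction claimed in the abstract.
    """
    r, i, j, k = np.split(x, 4)
    # Standard Hamilton product rules, with weights as the right operand:
    out_r = r @ w_r - i @ w_i - j @ w_j - k @ w_k
    out_i = r @ w_i + i @ w_r + j @ w_k - k @ w_j
    out_j = r @ w_j - i @ w_k + j @ w_r + k @ w_i
    out_k = r @ w_k + i @ w_j - j @ w_i + k @ w_r
    return np.concatenate([out_r, out_i, out_j, out_k])

# Parameter-count comparison for a 256 -> 256 projection:
d, m = 256, 256
real_params = d * m                    # 65,536 for a dense layer
quat_params = 4 * (d // 4) * (m // 4)  # 16,384 -> 75% fewer
```

Because the four component matrices are reused across all four output components, inter-component interactions are expressive while the parameter count stays at a quarter of the dense equivalent.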
###### Github

Repository for ACL 2019 paper

Stargazers: 20 · Forks: 0 · Open Issues: 0 · Subscribers: 7
###### Other

Inserted: 06/11/19 06:04PM

Words Total: 6,027 · Words Unique: 2,206
###### Tweets
arxiv_cscl: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks https://t.co/PQdNA1OFCh
Memoirs: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks. https://t.co/A479t8SYIK
arxiv_cs_LG: Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks. Yi Tay, Aston Zhang, Luu Anh Tuan, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, and Siu Cheung Hui https://t.co/tnHWNsUThL
ytay017: Check out our ACL19 paper - Quaternion Attention and Quaternion Transformer models for NLP tasks. 4x parameter saving without much performance drop. https://t.co/q15aPKZh7W joint work with @astonzhangAZ @Jeffy_Sailing @DavenCheung @bigaidream #nlpconf #aclnlp2019