Inducing Relational Knowledge from BERT
One of the most remarkable properties of word embeddings is the fact that they capture certain types of semantic and syntactic relationships. Recently, pre-trained language models such as BERT have achieved groundbreaking results across a wide range of Natural Language Processing tasks. However, it is unclear to what extent such models capture relational knowledge beyond what is already captured by standard word embeddings. To explore this question, we propose a methodology for distilling relational knowledge from a pre-trained language model. Starting from a few seed instances of a given relation, we first use a large text corpus to find sentences that are likely to express this relation. We then use a subset of these extracted sentences as templates. Finally, we fine-tune a language model to predict whether a given word pair is likely to be an instance of some relation, when given an instantiated template for that relation as input.
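To make the described pipeline concrete, below is a minimal sketch (not the authors' released code) of the final scoring step: a candidate word pair is substituted into relation templates and a BERT sequence classifier judges whether the pair instantiates the relation. The template strings, the <X>/<Y> placeholders, the base model name, and the score-averaging strategy are illustrative assumptions; in the paper's setup the classifier would first be fine-tuned on seed pairs (positives) and corrupted pairs (negatives).

```python
# Hedged sketch of the template-based scoring step, assuming templates have
# already been mined from a corpus for the target relation (e.g. "capital of").
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2: "pair instantiates the relation" vs. "it does not".
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

# Illustrative templates; <X> and <Y> mark the slots for the word pair.
templates = [
    "<X> is the capital of <Y>.",
    "the <Y> government is based in <X>.",
]

def score_pair(head: str, tail: str) -> float:
    """Average the positive-class probability over all instantiated templates."""
    sentences = [t.replace("<X>", head).replace("<Y>", tail) for t in templates]
    inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[:, 1]  # P(pair fits the relation) per template
    return probs.mean().item()

print(score_pair("paris", "france"))
```

Before fine-tuning, the classification head is randomly initialized, so the scores produced by this sketch only illustrate the inference interface, not the reported results.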
Authors

Zied Bouraoui
Jose Camacho-Collados
Steven Schockaert
Other
Inserted: 12/01/19 06:06PM
Words Total: 7,181
Words Unique: 2,218
Tweets
arxiv_pop: Submitted 2019/11/28, ranked #2 in CL (Computation and Language): Inducing Relational Knowledge from BERT https://t.co/mHJPdvYjz5 11 Tweets 11 Retweets 85 Favorites
IntuitMachine: "For relations that require encyclopedic or commonsense knowledge, we found that our model consistently, and often substantially, outperformed methods relying on word vectors." https://t.co/oqzRxDgkC1
arxiv_cscl: Inducing Relational Knowledge from BERT https://t.co/THKuavWdAN
tallinzen: Hi @omerlevy_, @yoavgo and @ramatgan https://t.co/xQi5lFy3Ps https://t.co/YzXYXkpAX0
CamachoCollados: Our #AAAI2020 preprint “Inducing Relational Knowledge from BERT” (w/ Z. Bouraoui and S. Schockaert) is now out: https://t.co/a28NSUC1fg In this paper, our aim is to understand to what extent pre-trained language models like BERT capture relational knowledge. A short thread 👇 https://t.co/H7hMAR2DhQ
SciFi: Inducing Relational Knowledge from BERT. https://t.co/nGZKzo3KiW
fanks_vision: Inducing Relational Knowledge from BERT. (arXiv:1911.12753v1 [https://t.co/2n3QKknOox]) https://t.co/NKHEUZXGwk