Measuring abstract reasoning in neural networks
Whether neural networks can learn abstract reasoning or whether they merely rely on superficial statistics is a topic of recent debate. Here, we propose a dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test. To succeed at this challenge, models must cope with various generalisation 'regimes' in which the training and test data differ in clearly defined ways. We show that popular models such as ResNets perform poorly, even when the training and test sets differ only minimally, and we present a novel architecture, with a structure designed to encourage reasoning, that does significantly better. When we vary the way in which the test questions and training data differ, we find that our model is notably proficient at certain forms of generalisation, but notably weak at others. We further show that the model's ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers. Altogether, we introduce and explore ways to both measure and induce stronger abstract reasoning in neural networks. Our freely available dataset should motivate further progress in this direction.
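The abstract above gives no implementation details, so the following is only a rough illustrative sketch, not the authors' released code: a relation-network-style scorer of the general kind the abstract alludes to, written in PyTorch. All module names, layer sizes, the 9-panel 80x80 greyscale input format, and the number of symbolic "meta-target" outputs are assumptions made for the example. Each candidate answer panel is appended to the eight context panels, every pair of panel embeddings is processed by a shared MLP, and the pooled result yields both an answer score and an auxiliary prediction of the symbolic explanation.

# Hedged sketch (illustrative assumptions throughout; not the paper's implementation).
import torch
import torch.nn as nn

class PanelEncoder(nn.Module):
    # Small CNN that embeds one 80x80 greyscale panel into a vector.
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, x):  # x: (B, 1, 80, 80)
        return self.proj(self.conv(x).flatten(1))

class RelationScorer(nn.Module):
    # Scores one candidate answer against the 8 context panels by applying
    # a shared MLP g to every pair of panel embeddings and summing the results.
    def __init__(self, dim=256, n_meta=12):  # n_meta: assumed number of symbolic targets
        super().__init__()
        self.encode = PanelEncoder(dim)
        self.g = nn.Sequential(nn.Linear(2 * dim, 512), nn.ReLU(),
                               nn.Linear(512, 512), nn.ReLU())
        self.score_head = nn.Linear(512, 1)      # score for this candidate answer
        self.meta_head = nn.Linear(512, n_meta)  # auxiliary symbolic explanation logits

    def forward(self, panels):  # panels: (B, 9, 1, 80, 80) = 8 context panels + 1 candidate
        B, K = panels.shape[:2]
        e = self.encode(panels.flatten(0, 1)).view(B, K, -1)
        pairs = torch.cat([e.unsqueeze(2).expand(B, K, K, -1),
                           e.unsqueeze(1).expand(B, K, K, -1)], dim=-1)
        pooled = self.g(pairs.view(B, K * K, -1)).sum(dim=1)
        return self.score_head(pooled), self.meta_head(pooled)

Under these assumptions, a question would be answered by running the scorer once per candidate answer and picking the highest score, while the auxiliary head would be trained with an extra classification loss on the symbolic explanations; the abstract reports that this kind of auxiliary training markedly improves generalisation.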
Authors
David G. T. Barrett
Felix Hill
Adam Santoro
Ari S. Morcos
Timothy Lillicrap
Tweets
pretendsmarts: @kchonyc @GaryMarcus @yoavgo @ylecun @LakeBrenden @egrefen @LittleBimble I'd actually like to hear your take on solving Raven Progressive Matrices and abstract analogies (https://t.co/bv4qIV9XMe and https://t.co/IxLNTTgEWg). Is this the right direction?
eucondrio: Measuring abstract reasoning in #NeuralNetworks https://t.co/lRcbVMycQi
ahammami0: Measuring abstract reasoning in neural networks David G.T. Barrett, Felix Hill, Adam Santoro, Ari S. Morcos, Timothy Lillicrap : https://t.co/MTBg5ICEZi #artificialintelligence #deeplearning #neuralnetworks #reasoning https://t.co/5HB3Uno0Xs
evolvingstuff: Measuring abstract reasoning in neural networks "the model's ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers" https://t.co/wWhxlWQmR7 #DeepMind https://t.co/oa8bRK4iYZ