Growing a Brain: Fine-Tuning by Increasing Model Capacity
CNNs have made an undeniable impact on computer vision through the ability to learn high-capacity models with large annotated training sets. One of their remarkable properties is the ability to transfer knowledge from a large source dataset to a (typically smaller) target dataset. This is usually accomplished through fine-tuning a fixed-size network on new target data. Indeed, virtually every contemporary visual recognition system makes use of fine-tuning to transfer knowledge from ImageNet. In this work, we analyze what components and parameters change during fine-tuning, and discover that increasing model capacity allows for more natural model adaptation through fine-tuning. By making an analogy to developmental learning, we demonstrate that "growing" a CNN with additional units, either by widening existing layers or deepening the overall network, significantly outperforms classic fine-tuning approaches. But in order to properly grow a network, we show that newly-added units must be appropriately normalized to allow for a pace of learning that is consistent with existing units. We empirically validate our approach on several benchmark datasets, producing state-of-the-art results.
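The growth step the abstract describes can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: the widen_linear helper, the layer sizes, and the choice to rescale new rows to the average L2 norm of the pre-trained rows are all assumptions standing in for "appropriately normalized" new units.

    import torch
    import torch.nn as nn

    def widen_linear(old: nn.Linear, extra_units: int) -> nn.Linear:
        # Return a copy of `old` with `extra_units` additional output units.
        # Pre-trained rows are copied; new rows are randomly initialized and
        # then rescaled so each matches the average L2 norm of the existing
        # rows -- one way to give old and new units a consistent pace of
        # learning, as the abstract argues is necessary.
        new = nn.Linear(old.in_features, old.out_features + extra_units)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
            old_norm = old.weight.norm(dim=1).mean()
            new_rows = new.weight[old.out_features :]
            new_rows *= old_norm / new_rows.norm(dim=1, keepdim=True)
            new.bias[old.out_features :] = 0.0
        return new

    # Example: widen a pre-trained fc7-style layer before fine-tuning on the
    # target task (sizes are illustrative). In a full network the following
    # layer's input dimension must grow by the same amount; "deepening"
    # instead inserts an entire new layer, normalized the same way.
    fc7 = nn.Linear(4096, 4096)          # stands in for a pre-trained layer
    fc7_wide = widen_linear(fc7, 1024)   # grown layer: 4096 -> 5120 units
    print(fc7_wide.weight.shape)         # torch.Size([5120, 4096])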
Authors

Yu-Xiong Wang, Deva Ramanan, Martial Hebert
