Shapley Interpretation and Activation in Neural Networks
We propose a novel Shapley-value approach to help address neural networks' interpretability and "vanishing gradient" problems. Our method is based on an accurate analytical approximation to the Shapley value of a neuron with ReLU activation. This analytical approximation admits a linear propagation of relevance across neural network layers, resulting in a simple, fast and sensible interpretation of a neural network's decision-making process. We then derive a globally continuous and non-vanishing Shapley gradient, which can replace the conventional gradient in training neural network layers with ReLU activation, leading to better training performance. We further derive a Shapley Activation (SA) function, which is a close approximation to ReLU but features the Shapley gradient. The SA function is easy to implement in existing machine learning frameworks. Numerical tests show that SA consistently outperforms ReLU in training convergence, accuracy and stability.
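The abstract does not state the SA formula, so the snippet below is only a minimal sketch of the implementation pattern the last claims allude to: a ReLU-like forward pass paired with a custom, globally non-vanishing backward pass, written with PyTorch's torch.autograd.Function. The sigmoid surrogate gradient, the parameter tau and all names here are illustrative assumptions, not the Shapley gradient or SA function derived in the paper.

# Minimal sketch, assuming a generic surrogate gradient in place of the
# paper's analytical Shapley gradient. Forward pass is plain ReLU; the
# backward pass uses a smooth, everywhere non-zero gradient instead of
# the hard ReLU step, which is the kind of drop-in replacement the
# abstract describes as easy to implement in existing frameworks.
import torch

class ReLUWithSurrogateGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, tau=1.0):
        ctx.save_for_backward(x)
        ctx.tau = tau
        return torch.relu(x)  # forward output is unchanged ReLU

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Placeholder surrogate: smooth and non-vanishing for all x.
        # The paper would substitute its analytical Shapley gradient here.
        surrogate = torch.sigmoid(x / ctx.tau)
        return grad_out * surrogate, None  # None: no gradient for tau

def relu_with_surrogate_grad(x, tau=1.0):
    return ReLUWithSurrogateGrad.apply(x, tau)

if __name__ == "__main__":
    x = torch.randn(4, requires_grad=True)
    relu_with_surrogate_grad(x).sum().backward()
    print(x.grad)  # non-zero even where x < 0, unlike plain ReLU

In the paper's actual method, the backward pass would use the derived Shapley gradient (or the SA function's own derivative) rather than this placeholder surrogate; the sketch only shows where such a gradient plugs in.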
Authors
Yadong Li
Xin Cui