Adversarial Initialization -- when your network performs the way I want
The increase in computational power and available data has fueled the wide deployment of deep learning in production environments. Despite their successes, deep architectures are still poorly understood and costly to train. We demonstrate in this paper how a simple recipe enables a market player to harm or delay the development of a competing product. This threat model is novel and has not been considered before. We derive the corresponding attacks and show their efficacy both formally and empirically. These attacks require access only to the initial, untrained weights of a network; no knowledge of the victim's problem domain or data is needed. A mere permutation of the initial weights is sufficient to limit the achieved accuracy to, for example, 50% on the MNIST dataset, or to double the required training time. While we show straightforward ways to mitigate the attacks, the corresponding steps are not yet part of the standard procedure followed by developers.
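The abstract's central claim, that merely permuting a network's initial weights can cripple training, can be illustrated with a minimal sketch. The code below is not the authors' construction; it shows one plausible flavor of such an attack, in which sorting (which is a permutation) the entries of a He-initialized ReLU layer's weight matrix leaves roughly half of its units dead on non-negative inputs, while leaving every summary statistic of the weights unchanged. All layer sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard He-style initialization of a fully connected ReLU layer
# (sizes chosen to resemble an MNIST classifier; purely illustrative).
fan_in, fan_out = 784, 256
w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# Adversarial permutation: sort all entries and refill the matrix
# column by column. Each output unit now sees either almost entirely
# negative or almost entirely positive weights, so on non-negative
# inputs (like MNIST pixels) about half the ReLU units start out dead,
# which cripples gradient flow through the layer.
w_adv = np.sort(w, axis=None).reshape(fan_in, fan_out, order="F")

# It is the same multiset of weights -- mean, variance, histogram are
# all untouched, so a naive sanity check on the weights sees nothing.
assert np.allclose(np.sort(w.ravel()), np.sort(w_adv.ravel()))

x = rng.uniform(0.0, 1.0, size=(64, fan_in))  # stand-in input batch
activations = np.maximum(x @ w_adv, 0.0)
dead_frac = np.mean(activations.sum(axis=0) == 0.0)
print(f"fraction of dead ReLU units after permutation: {dead_frac:.2f}")
```

Because the attacked matrix is statistically indistinguishable from the clean one entry-wise, a defense has to look at the arrangement of the weights (or at early training dynamics), not at their distribution.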
Authors

Kathrin Grosse
Thomas A. Trost
Marius Mosbach
Michael Backes
Dietrich Klakow