Stateful Detection of Black-Box Adversarial Attacks
Steven Chen, Nicholas Carlini, David Wagner
The problem of adversarial examples, evasion attacks on machine learning classifiers, has proven extremely difficult to solve. This is true even when, as is the case in many practical settings, the classifier is hosted as a remote service and the adversary therefore has no direct access to the model parameters. This paper argues that in such settings, defenders have a much larger space of actions than has previously been explored. Specifically, we deviate from the implicit assumption made by prior work that a defense must be a stateless function operating on individual examples, and explore the possibility of stateful defenses. To begin, we develop a defense designed to detect the process of adversarial example generation: by keeping a history of past queries, a defender can try to identify when a sequence of queries appears to be aimed at producing an adversarial example. We then introduce query blinding, a new class of attacks designed to bypass defenses that rely on such detection mechanisms. We believe that expanding the study of adversarial examples from stateless classifiers to stateful systems is not only more realistic for many black-box settings, but also gives the defender a much-needed advantage in responding to the adversary.
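The query-history idea can be made concrete with a minimal sketch in Python: store an embedding of every query the service answers, and flag a new query whose mean distance to its k nearest past queries is unusually small, since black-box attacks tend to issue long runs of near-duplicate queries. This is an illustration of the approach rather than the paper's implementation; the class name StatefulDetector, the encoder argument encode, and the values of k and threshold are all assumptions chosen for the example.

from collections import deque

import numpy as np


class StatefulDetector:
    """Flag queries that look like part of an adversarial-example search."""

    def __init__(self, encode, k=50, threshold=1.0, history_size=10_000):
        self.encode = encode        # maps a raw query to a feature vector
        self.k = k                  # number of nearest past queries to compare against
        self.threshold = threshold  # mean-distance cutoff below which we flag
        self.history = deque(maxlen=history_size)  # bounded buffer of past embeddings

    def observe(self, query):
        """Record `query`; return True if it is suspiciously close to past queries."""
        z = np.asarray(self.encode(query), dtype=np.float64)
        flagged = False
        if len(self.history) >= self.k:
            # Mean L2 distance to the k nearest previously seen query embeddings.
            dists = np.linalg.norm(np.stack(list(self.history)) - z, axis=1)
            flagged = np.sort(dists)[: self.k].mean() < self.threshold
        self.history.append(z)
        return bool(flagged)

A query-blinding attacker tries to defeat exactly this similarity check by randomly transforming each query before submission (and compensating for the transform on the returned output) so that successive queries no longer embed near one another; a detector of this kind therefore benefits from an encoder trained to be invariant to such transformations rather than from raw-input distances.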