Ilyas et al. define the usefulness of a feature as its correlation with the label, and its robust usefulness as its correlation with the label while under attack. Ilyas et al. demonstrate experimentally that useful, non-robust features exist; our goal here is to exhibit explicit examples of such features.
Our search is simplified when we realize the following: non-robust features are not unique to the complex, nonlinear models encountered in deep learning. As Ilyas et al. observe, they arise even in the simplest setting of linear features, so we restrict our attention to linear features of the form $f(x) = a^\top x$.
The robust usefulness of a linear feature admits an elegant decomposition into its usefulness minus a penalty on the magnitude of its weights:
$$
\mathbb{E}\Big[\inf_{\|\delta\|\le\epsilon} y\, a^\top (x+\delta)\Big]
\;=\;
\mathbb{E}\big[y\, a^\top x\big] \;-\; \epsilon\, \|a\|_* .
$$
In the above equation, $\|\cdot\|_*$ denotes the dual norm of $\|\cdot\|$. This decomposition gives us an instrument for visualizing any set of linear features in a two-dimensional plot.
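As a quick check, the penalty term follows directly from the definition of the dual norm, assuming binary labels $y \in \{-1, +1\}$ and perturbations constrained to $\|\delta\| \le \epsilon$:
$$
\inf_{\|\delta\|\le\epsilon} y\, a^\top (x+\delta)
= y\, a^\top x + \inf_{\|\delta\|\le\epsilon} y\, a^\top \delta
= y\, a^\top x - \epsilon\, \|a\|_* ,
$$
since $\sup_{\|\delta\|\le\epsilon} a^\top \delta = \epsilon\, \|a\|_*$ and the constraint set is symmetric, so the sign of $y$ does not change the value of the infimum. Taking expectations over $(x, y)$ gives the decomposition above.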
The elusive non-robust useful features, however, seem conspicuously absent in the above plot. Fortunately, we can construct such features by strategically combining elements of this basis.
We demonstrate two constructions:
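One of these combinations, the one described in the authors' response below as mixing a useful+robust feature with a useless+non-robust feature, can be sketched numerically. The snippet below is only an illustration under assumed toy data, an assumed $\ell_2$ perturbation budget, and the decomposition above; it is not the construction behind the original figures:

import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 10_000, 10, 0.5   # samples, dimension, assumed L2 attack budget

# Toy data: the label is carried (noisily) by coordinate 0 only.
y = rng.choice([-1.0, 1.0], size=n)
x = np.outer(y, np.eye(d)[0]) + 0.1 * rng.standard_normal((n, d))

def usefulness(a):
    # E[y a^T x]: correlation of the linear feature with the label.
    return np.mean(y * (x @ a))

def robust_usefulness(a):
    # E[y a^T x] - eps * ||a||_2, i.e. the decomposition above for the L2 ball.
    return usefulness(a) - eps * np.linalg.norm(a)

a_robust   = np.eye(d)[0]          # useful and robust: aligned with the signal
a_useless  = 10.0 * np.eye(d)[1]   # uncorrelated with y, large weight: non-robust
a_combined = a_robust + a_useless  # "contaminated" combination

for name, a in [("robust", a_robust), ("useless", a_useless), ("combined", a_combined)]:
    print(f"{name:8s}  usefulness={usefulness(a):+.2f}  robust usefulness={robust_usefulness(a):+.2f}")

The combined feature inherits its correlation with the label from the robust component, while the large-weight useless component makes the $\epsilon\, \|a\|_2$ penalty dominate, so its robust usefulness is negative: useful, but not robustly useful.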
It is thus surprising that the experiments of Madry et al.
Response Summary: The construction of explicit non-robust features is
very interesting and makes progress towards the challenge of visualizing some of
the useful non-robust features detected by our experiments. We also agree that
non-robust features arising as “distractors” are indeed not precluded by our
theoretical framework, even if they are precluded by our experiments.
This simple theoretical framework sufficed for reasoning about and predicting the outcomes of our experiments.
Response: These experiments (visualizing the robustness and usefulness of different linear features) are very interesting! They both further corroborate the existence of useful, non-robust features and make progress towards visualizing what these non-robust features actually look like.
We also appreciate the point made by the provided construction of non-robust features (as defined in our theoretical framework) that are combinations of useful+robust and useless+non-robust features. Our theoretical framework indeed enables such a scenario, even if (as the commenter already notes) our experimental results do not. (In this sense, the experimental results and our main takeaway are actually stronger than our theoretical framework technically captures.) Specifically, in such a scenario, during the construction of the dataset, only the non-robust and useless term of the feature would be flipped. Thus, a classifier trained on such a dataset would associate the predictive robust feature with the wrong label and would thus not generalize on the test set. In contrast, our experiments show that classifiers trained on our constructed datasets do generalize.
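To spell this step out in symbols (a sketch in generic notation, not the notation of our paper): write the contaminated feature as $f = g + h$, where $g$ is useful and robust and $h$ is useless and non-robust,
$$
\mathbb{E}\big[y\, g(x)\big] > 0, \qquad
\mathbb{E}\big[y\, h(x)\big] = 0, \qquad
g(x+\delta) \approx g(x) \ \text{ for } \|\delta\| \le \epsilon .
$$
An $\epsilon$-bounded perturbation can therefore flip only the $h$ term toward a chosen target label $t$, while $g$ stays correlated with the original label $y$. On a dataset built by perturbing inputs toward $t$ and relabeling them as $t$, the robust feature $g$ is paired with the wrong label, so a classifier that relies on it should fail on a correctly labeled test set.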
Overall, our focus while developing our theoretical framework was on enabling us to formally describe and predict the outcomes of our experiments. As the comment points out, putting forth a theoretical framework that captures non-robust features in a very precise way is an important future research direction in itself.
Shan Carter (design overhaul), Preetum (technical discussion), Chris Olah (technical discussion), Ludwig (overall feedback), Ria (feedback), Aditiya (feedback)
Research: Alex developed …
Writing & Diagrams: The text was initially drafted by…
If you see mistakes or want to suggest changes, please create an issue on GitHub.
Diagrams and text are licensed under Creative Commons Attribution CC-BY 4.0 with the source available on GitHub, unless noted otherwise. The figures that have been reused from other sources don’t fall under this license and can be recognized by a note in their caption: “Figure from …”.
For attribution in academic contexts, please cite this work as
Goh, "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features", Distill, 2019.
BibTeX citation
@article{goh2019a,
  author = {Goh, Gabriel},
  title = {A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features},
  journal = {Distill},
  year = {2019},
  note = {https://distill.pub/2019/advex-bugs-discussion/response-3},
  doi = {10.23915/distill.00019.3}
}
This article is part of a discussion of the Ilyas et al. paper “Adversarial examples are not bugs, they are features”. You can learn more in the main discussion article.