A Fourier Perspective on Model Robustness in Computer Vision

Dong Yin
Raphael Gontijo Lopes
Jon Shlens
Justin Gilmer
NeurIPS (2019)

Abstract

Achieving robustness to distributional shift is a longstanding and challenging
goal of computer vision. Data augmentation is a commonly used approach for
improving robustness; however, robustness gains are typically not uniform across
corruption types. Indeed, increasing performance in the presence of random noise
is often met with reduced performance on other corruptions such as contrast
change. Understanding when and why these trade-offs occur is a crucial step
towards mitigating them. Towards this end, we investigate recently observed
trade-offs caused by Gaussian data augmentation and adversarial training. We find
that both methods improve robustness to corruptions that are concentrated in the
high frequency domain while reducing robustness to corruptions that are
concentrated in the low frequency domain. This suggests that one way to mitigate
these trade-offs via data augmentation is to use a more diverse set of
augmentations. To that end, we observe that AutoAugment [5], a recently proposed
data augmentation policy optimized for clean accuracy, achieves state-of-the-art
robustness on the CIFAR-10-C and ImageNet-C benchmarks.
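
The frequency framing in the abstract can be made concrete with a small experiment. The following is a minimal, self-contained sketch (NumPy only; not code from the paper, and the helper names are our own) that measures where a corruption concentrates its energy by averaging the Fourier spectrum of the difference between clean and corrupted images:

import numpy as np

def corruption_spectrum(clean, corrupted):
    """Mean squared DFT magnitude of the corruption delta (corrupted - clean).

    clean, corrupted: float arrays of shape (N, H, W). Returns an (H, W)
    array with the zero-frequency bin shifted to the center, so energy near
    the center indicates a low-frequency corruption and energy near the
    edges a high-frequency one.
    """
    delta = corrupted - clean
    spectra = np.abs(np.fft.fft2(delta, axes=(-2, -1))) ** 2
    return np.fft.fftshift(spectra.mean(axis=0))

def low_freq_fraction(spectrum, radius=4):
    """Share of total energy within `radius` bins of the zero frequency."""
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return spectrum[mask].sum() / spectrum.sum()

# Synthetic stand-ins for natural images, with a roughly 1/f amplitude
# spectrum (natural images are dominated by low spatial frequencies).
rng = np.random.default_rng(0)
n, size = 64, 32
f = np.sqrt(np.fft.fftfreq(size)[:, None] ** 2 + np.fft.fftfreq(size)[None, :] ** 2)
amp = 1.0 / np.maximum(f, 1.0 / size)
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n, size, size))
clean = np.fft.ifft2(amp * np.exp(1j * phases)).real

noisy = clean + rng.normal(scale=0.1, size=clean.shape)  # additive Gaussian noise
low_contrast = 0.5 * clean                               # contrast reduction

for name, corrupted in [("gaussian noise", noisy), ("contrast", low_contrast)]:
    frac = low_freq_fraction(corruption_spectrum(clean, corrupted))
    print(f"{name}: fraction of delta energy at low frequencies = {frac:.2f}")

On such 1/f-structured images, the delta induced by additive Gaussian noise spreads its energy nearly uniformly across frequencies, while the delta induced by a contrast change concentrates at low frequencies, which is the high-frequency versus low-frequency distinction the abstract draws.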