Abstract: On visual self-supervision and its effect on model robustness
Recent self-supervision methods have found success in learning feature representations that rival those learned with full supervision, and have been shown to benefit models in several ways, for example by improving robustness and out-of-distribution detection. In this talk I will explore the following question: when does visual self-supervision (SSL) aid adversarial training in improving robustness to adversarial perturbations? Including visual self-supervision as part of adversarial training can indeed improve model robustness; however, the devil is in the details. We will discuss the primary ways in which self-supervision can be added to adversarial training, and observe that using the self-supervised loss both to optimize the network parameters and to find adversarial examples leads to the strongest improvement in robustness against adversarial perturbations and natural corruptions, as this setup can be viewed as a form of semi-supervised ensemble adversarial training (SSEAT).
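
To make the training setup concrete, below is a minimal sketch of the idea described above: a PGD adversary maximizes a combined supervised plus self-supervised loss, and the same combined loss then trains the network on the resulting adversarial examples. It assumes a rotation-prediction SSL task for illustration; the names `TwoHeadNet`, `combined_loss`, and `pgd_attack`, and all hyperparameters, are hypothetical and not taken from the talk.

```python
# Illustrative sketch only, not the speaker's exact method: a self-supervised
# rotation loss is used both to craft adversarial examples and to update the net.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    """Toy backbone with a class-label head and a self-supervised (rotation) head."""
    def __init__(self, num_classes=10, num_rotations=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(16, num_classes)
        self.ssl_head = nn.Linear(16, num_rotations)

    def forward(self, x):
        h = self.features(x)
        return self.cls_head(h), self.ssl_head(h)

def combined_loss(model, x, y):
    # Supervised cross-entropy on the (possibly perturbed) images.
    logits_cls, _ = model(x)
    loss = F.cross_entropy(logits_cls, y)
    # Self-supervised rotation loss: rotate each image by k * 90 degrees
    # and ask the SSL head to recover k.
    for k in range(4):
        x_rot = torch.rot90(x, k, dims=(2, 3))
        _, logits_rot = model(x_rot)
        rot_target = torch.full((x.size(0),), k, dtype=torch.long)
        loss = loss + F.cross_entropy(logits_rot, rot_target) / 4
    return loss

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    # PGD that maximizes the *combined* loss, so the adversarial examples
    # also target the self-supervised task, not just the classifier.
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = combined_loss(model, x_adv, y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep a valid image range
    return x_adv.detach()

# One adversarial-training step on random toy data.
model = TwoHeadNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_adv = pgd_attack(model, x, y)
opt.zero_grad()
combined_loss(model, x_adv, y).backward()  # the SSL loss also trains the network
opt.step()
```

The key design choice this sketch highlights is that the self-supervised loss enters in both places: in the inner maximization that finds the perturbation and in the outer minimization that updates the weights, which is the variant the abstract reports as giving the strongest robustness gains.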