Computer vision systems now often achieve super-human performance on complex cognitive tasks, yet research in adversarial machine learning shows that they are far less robust than the human visual system. In this context, perturbation-based adversarial examples have received great attention.
Recent work has shown that deep neural networks are also easily fooled by real-world adversarial examples, such as partial occlusion, viewpoint changes, atmospheric changes, or style changes. Discovering and harnessing these adversarial examples helps us understand and improve the robustness of computer vision methods in real-world environments, which in turn will accelerate the deployment of computer vision systems in safety-critical applications. In this workshop, we aim to bring together researchers from various fields, including robust vision, adversarial machine learning, and explainable AI, to discuss recent research and future directions for adversarial robustness and explainability, with a particular focus on real-world scenarios.
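For readers new to perturbation-based adversarial examples, below is a minimal PyTorch sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015), one of the simplest attacks of this kind. It is purely illustrative and not part of any workshop material; the function name `fgsm_attack` and the `model`/`images`/`labels` arguments are placeholders, and pixel values are assumed to be scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Illustrative FGSM sketch: shift each pixel by +/- epsilon in the
    direction that increases the classification loss, assuming inputs
    are scaled to [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```

A classifier that is robust in this perturbation-based sense should keep its prediction under any such epsilon-bounded pixel change. Note that the real-world corruptions mentioned above (occlusion, viewpoint, weather, style) fall outside this threat model, which is precisely the gap this workshop focuses on.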
The workshop is sponsored by RealAI. The funding covers a Best Paper Award, a Best Paper Honorable Mention Award, a Female Leader in Computer Vision Award, and multiple travel grants.
08:45 - 09:00        Opening Remarks
09:00 - 09:30        Invited Talk 1: Alan Yuille - Adversarial Patches and Compositional Networks
09:30 - 10:00        Invited Talk 2: Tomaso Poggio - Biologically-inspired Defenses Against Adversarial Attacks
10:00 - 11:00        Poster Session 1
11:00 - 11:30        Invited Talk 3: Raquel Urtasun - Adversarial Robustness for Self-driving
11:30 - 12:00        Invited Talk 4: Aleksander Madry - Adversarial Robustness: An Update from the Trenches
12:00 - 12:30        Panel Discussion 1
12:30 - 13:30        Lunch Break
13:30 - 14:00        Invited Talk 5: Kate Saenko
14:00 - 14:30        Invited Talk 6: Cihang Xie - Not All Networks Are Born Equal for Robustness
14:30 - 15:00        Invited Talk 7: Ludwig Schmidt - Adversarial Robustness vs. the Real World
15:00 - 15:30        Invited Talk 8: Nicholas Carlini - Adversarial Attacks That Matter
15:30 - 16:00        Panel Discussion 2
16:00 - 17:00        Poster Session 2
Submission deadline (extended from July 31): August 5, 2021, Anywhere on Earth (AoE)
Notification to authors (extended from August 7): August 12, 2021, Anywhere on Earth (AoE)
Camera-ready deadline (extended from August 10): August 15, 2021, Anywhere on Earth (AoE)
Submission server: https://cmt3.research.microsoft.com/AROW2021/
Submission format:
Submissions must be anonymized and follow the ICCV 2021 Author Instructions.
The workshop considers two types of submissions:
(1) Long Paper: limited to 8 pages excluding references; accepted long papers will be included in the official ICCV proceedings.
(2) Extended Abstract: limited to 4 pages excluding references; accepted extended abstracts will NOT be included in the official ICCV proceedings. Please use the ICCV template for extended abstracts.
Based on the PC recommendations, accepted long papers and extended abstracts will be allocated either a contributed talk or a poster presentation.
We invite submissions on any aspect of adversarial robustness in real-world computer vision.

Related workshops:
Uncertainty & Robustness in Deep Learning (Workshop at ICML 2021)
Security and Safety in Machine Learning Systems (Workshop at ICLR 2021)
Generalization beyond the Training Distribution in Brains and Machines (Workshop at ICLR 2021)
1st International Workshop on Adversarial Learning for Multimedia (Workshop at ACM Multimedia 2021)
Please contact Yingwei Li or Adam Kortylewski if you have questions. The webpage template is courtesy of the ECCV 2020 Workshop on Adversarial Robustness in the Real World.