
Overview

Computer vision systems now often achieve super-human performance on complex cognitive tasks, yet research in adversarial machine learning shows that they are far less robust than the human visual system. In this context, perturbation-based adversarial examples have received great attention.

Recent work has shown that deep neural networks are also easily challenged by real-world adversarial examples, including partial occlusion, viewpoint changes, atmospheric changes, and style changes. Discovering and harnessing these adversarial examples helps us understand and improve the robustness of computer vision methods in real-world environments, which will also accelerate the deployment of computer vision systems in safety-critical applications. In this workshop, we aim to bring together researchers from various fields, including robust vision, adversarial machine learning, and explainable AI, to discuss recent research and future directions for adversarial robustness and explainability, with a particular focus on real-world scenarios.
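To make the notion of a perturbation-based adversarial example concrete, the following is a minimal sketch in the spirit of the fast gradient sign method (FGSM), applied to a toy hand-written linear classifier rather than a deep network; the weights, inputs, and function names are illustrative, not from any workshop paper.

```python
import numpy as np

# Toy linear classifier: score = w . x + b; predict class 1 if score > 0.
# (Weights and bias are made up for illustration.)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style perturbation: step each input dimension by eps against
# the sign of the gradient of the predicted-class score. For a linear
# model, that gradient with respect to x is simply w (or -w).
def fgsm(x, eps):
    grad = w if predict(x) == 1 else -w  # gradient of the current-class score
    return x - eps * np.sign(grad)       # move against the current class

x = np.array([0.2, -0.3, 0.4])
x_adv = fgsm(x, eps=0.5)
print(predict(x), predict(x_adv))  # small perturbation can flip the label
```

Even this trivial model flips its prediction under a small sign-aligned perturbation; for deep networks the same idea applies with the gradient obtained by backpropagation, and the real-world variants discussed in this workshop (occlusion, viewpoint, weather) replace the pixel-space perturbation with physically realizable changes.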


Awards

The workshop is sponsored by RealAI. The funding covers a Best Paper Award, a Best Paper Honorable Mention Award, a Female Leader in Computer Vision Award, and multiple travel grants.

  • RealAI Best Paper Award
  • RealAI Best Paper Honorable Mention Award
  • RealAI Female Leader in Computer Vision Award
  • RealAI Travel Award

Schedule (EDT)

    08:45 - 09:00         Opening Remarks

    09:00 - 09:30         Invited Talk 1: Alan Yuille - Adversarial Patches and Compositional Networks

    09:30 - 10:00         Invited Talk 2: Tomaso Poggio - Biologically-inspired Defenses Against Adversarial Attacks

    10:00 - 11:00         Poster Session 1

    11:00 - 11:30         Invited Talk 3: Raquel Urtasun - Adversarial Robustness for Self-driving

    11:30 - 12:00         Invited Talk 4: Aleksander Madry - Adversarial Robustness: An Update from the Trenches

    12:00 - 12:30        Panel Discussion 1

    12:30 - 13:30         Lunch Break

    13:30 - 14:00         Invited Talk 5: Kate Saenko

    14:00 - 14:30         Invited Talk 6: Cihang Xie - Not All Networks Are Born Equal for Robustness

    14:30 - 15:00        Invited Talk 7: Ludwig Schmidt - Adversarial robustness vs. the real world

    15:00 - 15:30         Invited Talk 8: Nicholas Carlini - Adversarial Attacks That Matter

    15:30 - 16:00        Panel Discussion 2

    16:00 - 17:00        Poster Session 2


    Call For Papers

    Submission deadline: August 5, 2021 (extended from July 31), Anywhere on Earth (AoE)

    Notification sent to authors: August 12, 2021 (extended from August 7), Anywhere on Earth (AoE)

    Camera-ready deadline: August 15, 2021 (extended from August 10), Anywhere on Earth (AoE)

    Submission server: https://cmt3.research.microsoft.com/AROW2021/

    Submission format: Submissions must be anonymized and follow the ICCV 2021 Author Instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 8 pages excluding references; accepted papers will be included in the official ICCV proceedings; (2) Extended Abstract: limited to 4 pages excluding references; accepted abstracts will NOT be included in the official ICCV proceedings. Please use the ICCV template for extended abstracts.
    Based on the program committee's recommendations, each accepted long paper or extended abstract will be allocated either a contributed talk or a poster presentation.

    We invite submissions on any aspect of adversarial robustness in real-world computer vision.


    Accepted Long Papers (Proceedings)


    Accepted Extended Abstracts


    Speakers

    Aleksander Mądry
    MIT
    Ludwig Schmidt
    Toyota Research
    Tomaso Poggio
    MIT
    Nicholas Carlini
    Google Brain

    Organizing Committee

    Program Committee

    • Aishan Liu (Beihang University)
    • Akshayvarun Subramanya (UMBC)
    • Alexander Robey (University of Pennsylvania)
    • Ali Shahin Shamsabadi (Queen Mary University of London, UK)
    • Angtian Wang (Johns Hopkins University)
    • Aniruddha Saha (University of Maryland Baltimore County)
    • Anshuman Suri (University of Virginia)
    • Bernhard Egger (Massachusetts Institute of Technology)
    • Chenglin Yang (Johns Hopkins University)
    • Chirag Agarwal (Harvard University)
    • Gaurang Sriramanan (Indian Institute of Science)
    • Jamie Hayes
    • Jiachen Sun (University of Michigan)
    • Jieru Mei (Johns Hopkins University)
    • Ju He (Johns Hopkins University)
    • Kibok Lee (University of Michigan)
    • Lifeng Huang (Sun Yat-sen University)
    • Maura Pintor (University of Cagliari)
    • Muhammad Awais (Kyung-Hee University)
    • Muzammal Naseer (ANU)
    • Nataniel Ruiz (Boston University)
    • Qihang Yu (Johns Hopkins University)
    • Qing Jin (Northeastern University)
    • Rajkumar Theagarajan (University of California, Riverside)
    • Ruihao Gong (SenseTime)
    • Shiyu Tang (Beihang University)
    • Shunchang Liu (Beihang University)
    • Sravanti Addepalli (Indian Institute of Science)
    • Tianlin Li (NTU)
    • Wenxiao Wang (Tsinghua University)
    • Won Park (University of Michigan)
    • Xiangning Chen (University of California, Los Angeles)
    • Xiaohui Zeng (University of Toronto)
    • Xingjun Ma (Deakin University)
    • Xinwei Zhao (Drexel University)
    • Yulong Cao (University of Michigan, Ann Arbor)
    • Yutong Bai (Johns Hopkins University)
    • Zihao Xiao (Johns Hopkins University)
    • Zixin Yin (Beihang University)

    Sponsor


    Related Workshops

    Uncertainty & Robustness in Deep Learning (Workshop at ICML 2021)

    Security and Safety in Machine Learning Systems (Workshop at ICLR 2021)

    Generalization beyond the Training Distribution in Brains and Machines (Workshop at ICLR 2021)

    1st International Workshop on Adversarial Learning for Multimedia (Workshop at ACM Multimedia 2021)

    Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (Workshop at CVPR 2021)


    Please contact Yingwei Li or Adam Kortylewski if you have questions. The webpage template is courtesy of the ECCV 2020 Workshop on Adversarial Robustness in the Real World.