Overview

Computer vision systems now often achieve super-human performance on complex cognitive tasks, yet research in adversarial machine learning shows that they are far less robust than the human visual system. In this context, perturbation-based adversarial examples have received considerable attention.
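As a minimal illustration of the perturbation-based setting (a hedged sketch, not part of the workshop program), the classic fast gradient sign method crafts an adversarial example with a single signed-gradient step. The snippet below assumes a PyTorch classifier `model` and an image batch `x` with labels `y`, pixels in [0, 1]; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Sketch of FGSM: one signed-gradient step of size eps.

    Assumes `model` is a PyTorch classifier and `x` holds images in [0, 1];
    these names are assumptions for illustration, not tied to any codebase.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss the attacker wants to increase
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()      # step in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach()        # keep pixels in the valid range
```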

Recent work has shown that deep neural networks are also easily challenged by real-world adversarial examples, such as partial occlusion, viewpoint changes, atmospheric changes, or style changes. Discovering and harnessing such adversarial examples helps us understand and improve the robustness of computer vision methods in real-world environments, which in turn will accelerate the deployment of computer vision systems in safety-critical applications. In this workshop, we aim to bring together researchers from various fields, including robust vision, adversarial machine learning, and explainable AI, to discuss recent research and future directions for adversarial robustness and explainability, with a particular focus on real-world scenarios.
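To make the contrast with pixel-level perturbations concrete, here is a hedged sketch of one such real-world corruption, partial occlusion, applied as a random gray patch; the function name and patch size are illustrative assumptions.

```python
import torch

def random_occlusion(x, patch=56):
    """Paste a gray square at a random location -- a crude stand-in for the
    partial occlusions mentioned above (patch size is an arbitrary choice)."""
    x = x.clone()
    _, _, h, w = x.shape                               # expects an NCHW image batch
    top = int(torch.randint(0, h - patch + 1, (1,)))
    left = int(torch.randint(0, w - patch + 1, (1,)))
    x[:, :, top:top + patch, left:left + patch] = 0.5  # mid-gray occluder
    return x
```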


Awards

The workshop is sponsored by RealAI. The funding covers a Best Paper Award, a Best Paper Honorable Mention Award, a Female Leader in Computer Vision Award, and multiple travel grants.

  • RealAI Best Paper Award
  • RealAI Best Paper Honorable Mention Award
  • RealAI Female Leader in Computer Vision Award
  • RealAI Travel Award

  Call for Papers

    Submission deadline: August 5, 2021 (extended from July 31), Anywhere on Earth (AoE)

    Notification sent to authors: August 12, 2021 (extended from August 7), Anywhere on Earth (AoE)

    Camera-ready deadline: August 15, 2021 (extended from August 10), Anywhere on Earth (AoE)

    Submission server: https://cmt3.research.microsoft.com/AROW2021/

    Submission format: Submissions must be anonymized and follow the ICCV 2021 Author Instructions. The workshop considers two types of submissions: (1) Long Paper: limited to 8 pages excluding references; accepted long papers will be included in the official ICCV proceedings; (2) Extended Abstract: limited to 4 pages excluding references; extended abstracts will NOT be included in the official ICCV proceedings. Please use the ICCV template for extended abstracts.
    Based on the program committee's recommendations, accepted long papers and extended abstracts will be allocated either a contributed talk or a poster presentation.

    We invite submissions on any aspect of adversarial robustness in real-world computer vision. This includes, but is not limited to:


    Speakers

    • Aleksander Mądry (MIT)
    • Ludwig Schmidt (Toyota Research)
    • Tomaso Poggio (MIT)
    • Nicholas Carlini (Google Brain)

    Organizing Committee

    Program Committee

    • Aishan Liu (Beihang University)
    • Zihao Xiao (Johns Hopkins University)
    • Zhenyu Liao (Kuaishou)
    • Qing Jin (Northeastern University)
    • Xianfeng Tang (Amazon)
    • Akshayvarun Subramanya (UMBC)
    • Ali Shahin Shamsabadi (Queen Mary University of London)
    • Aniruddha Saha (University of Maryland Baltimore County)
    • Anshuman Suri (University of Virginia)
    • Bernhard Egger (MIT)
    • Chen Zhu (University of Maryland)
    • Chenglin Yang (Johns Hopkins University)
    • Chirag Agarwal (UIC)
    • Gaurang Sriramanan (Indian Institute of Science)
    • Jamie Hayes (University College London)
    • Jiachen Sun (University of Michigan)
    • Jieru Mei (Johns Hopkins University)
    • Kibok Lee (University of Michigan)
    • Lifeng Huang (Sun Yat-sen University)
    • Mario Wieser (University of Basel)
    • Maura Pintor (University of Cagliari)
    • Muhammad Awais (Kyung-Hee University)
    • Muzammal Naseer (ANU)
    • Nataniel Ruiz (Boston University)
    • Peng Tang (Salesforce Research)
    • Qihang Yu (Johns Hopkins University)
    • Rajkumar Theagarajan (University of California, Riverside)
    • Sravanti Addepalli (Indian Institute of Science)
    • Won Park (University of Michigan)
    • Xiangning Chen (University of California, Los Angeles)
    • Xingjun Ma (Deakin University)
    • Xinwei Zhao (Drexel University)
    • Yash Sharma (University of Tübingen)
    • Yulong Cao (University of Michigan, Ann Arbor)
    • Yuzhe Yang (MIT)

    Sponsor

    RealAI

    Related Workshops

    Uncertainty & Robustness in Deep Learning (Workshop at ICML 2021)

    Security and Safety in Machine Learning Systems (Workshop at ICLR 2021)

    Generalization beyond the Training Distribution in Brains and Machines (Workshop at ICLR 2021)

    1st International Workshop on Adversarial Learning for Multimedia (Workshop at ACM Multimedia 2021)

    Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (Workshop at CVPR 2021)


    Please contact Yingwei Li or Adam Kortylewski if you have questions. The webpage template is courtesy of the ECCV 2020 Workshop on Adversarial Robustness in the Real World.