Overview

The overarching goal of this workshop is to gather researchers, students, and advocates who work at the intersection of accessibility, computer vision, and autonomous systems. We plan to use the workshop to identify challenges and pursue solutions for the current lack of shared and principled development tools for data-driven vision-based accessibility systems. For instance, there is a general lack of vision-based benchmarks and methods relevant to accessibility (e.g., people with disabilities and mobility aids are currently mostly absent from large-scale datasets in pedestrian detection). Our workshop will provide a unique opportunity for fostering a mutual discussion between accessibility, computer vision, and robotics researchers and practitioners.

Invited Speakers

Ed Cutrell
Microsoft Research
Dan Parker
Blind Machinist, World’s Fastest Blind Man
Geoffrey Peddle
Aira CTO
Saqib Shaikh
Project Lead for Seeing AI at Microsoft
Venkatesh Potluri
Ph.D. Candidate at the University of Washington

Schedule

Times (PDT)
13:00-13:35  Welcome Remarks
13:35-14:05  Ed Cutrell, Interactive AI for Blind and Low Vision Users
14:05-14:35  Geoffrey Peddle, Aira: A Company's Journey Towards AI Remote Assistance
14:35-14:50  Challenge Overview and Results
14:50-15:10  Challenge Winner Talks
15:10-15:40  Poster Highlights + Coffee Break
15:40-16:10  Dan Parker
16:10-16:40  Saqib Shaikh
16:40-17:10  Venkatesh Potluri, A Paradigm Shift in Nonvisual Programming
17:10-17:20  Panel + Concluding Remarks

Abstracts

"AVA Segmentation Track 1st Solution: Synthetic Instance Segmentation with Vision Transformer". Xiangheng Shan, Huayu Zhang, Jialong Zuo, Nong Sang, Changxin Gao
"AVA Segmentation Track 2nd Solution". Xiaoqiang Lu, Licheng Jiao, Xu Liu, Lingling Li, Fang Liu, Wenping Ma, Shuyuan Yang
"AVA Segmentation Track 3rd Solution: BEiTv2-Adapter". Qin Ma, Jinming Chai, Zhongjian Huang
"AVA Keypoint Track 1st Solution". Jiajun Fu
"AVA Keypoint Track 2nd Solution: HDIFN: Hierarchical Dilated Fusion Information Network for Pedestrian Keypoints Detection". Chuchu Xie
"Disability Representations: Finding Biases in Automatic Image Generation". Yannis Tevissen
"AI-Assisted Generation of Customizable Sign Language Videos With Enhanced Realism". Sudha Krishnamurthy, Vimal Bhat, Abhinav Jain
"Building Embodied 3D Foundation Models". Yining Hong
"Integrating Ergonomic Support with Augmented Reality for Elderly Rehabilitation". Zhenhong (Brad) Lei, Jeanne Xinjun Li
"Motion Diversification Networks". Hee Jae Kim, Eshed Ohn-Bar

Organizers

Eshed Ohn-Bar
Boston University
Danna Gurari
University of Colorado Boulder
Chieko Asakawa
Carnegie Mellon University and IBM
Hernisa Kacorri
University of Maryland
Kris Kitani
Carnegie Mellon University
Jennifer Mankoff
University of Washington

Challenge Organization

Zhongkai Shangguan
Boston University
Jimuyang Zhang
Boston University
Hee-Jae Kim
Boston University

Challenge

As an updated challenge for 2024, we release the following:
  1. Training, validation, and testing data, which can be found in this link
  2. An evaluation server for instance segmentation and for pose estimation.
More information on data and submission can be found in the eval.ai links above. Note that the data this year supports both the instance segmentation and pose estimation challenges. Moreover, we provide access to temporal history and LiDAR data for each image.
The challenge builds on our prior workshop's synthetic instance segmentation benchmark with mobility aids (see Zhang et al., X-World: Accessibility, Vision, and Autonomy Meet, ICCV 2021, bit.ly/2X8sYoX). The benchmark contains challenging accessibility-related person and object categories, such as "cane" and "wheelchair". We aim to use the challenge to uncover research opportunities and spark the interest of computer vision and AI researchers working on more robust visual reasoning models for accessibility.
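To make the instance segmentation metric concrete: benchmarks of this kind typically score predicted instance masks against ground-truth masks via intersection-over-union (IoU). Below is a minimal sketch of mask IoU in NumPy; the toy 4x4 masks and the `wheelchair` framing are illustrative assumptions, not the challenge's actual data format or evaluation code (see the eval.ai links for the official protocol).

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two binary instance masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

# Toy 4x4 binary masks standing in for a predicted and a ground-truth
# instance of an accessibility-related category (e.g., "wheelchair").
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(mask_iou(pred, gt))  # 4 overlapping pixels / 5 in the union = 0.8
```

COCO-style evaluation then sweeps IoU thresholds (e.g., 0.5 to 0.95) and averages precision per category, which is why long-tail categories such as mobility aids can dominate the difficulty of the benchmark.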

An example from the instance segmentation challenge for perceiving people with mobility aids.


An example from the pose challenge added in 2023.


Teams with the top-performing submissions will be invited to give short talks during the workshop and will receive financial awards of $500 for first place and $300 for second place. (We thank the National Science Foundation, the US Department of Transportation's Inclusive Design Challenge, the Special Interest Group on Accessible Computing, and Intel for supporting these awards.)

Call for Papers

We encourage submission of relevant research (including work in progress, novel perspectives, formative studies, benchmarks, and methods) as extended abstracts for the poster session and workshop discussion (up to 4 pages in CVPR format, not including references). The CVPR Overleaf template can be found here; LaTeX/Word templates can be found here. Please send your extended abstracts to mobility@bu.edu. Submissions do not need to be anonymized, and extended abstracts of already published work may also be submitted. Accepted abstracts will be presented at the poster session and will not be included in the printed proceedings of the workshop. Topics of interest for this workshop include, but are not limited to:
  1. AI for Accessibility
  2. Accessibility-Centered Computer Vision Tasks and Datasets
  3. Data-Driven Accessibility Tools, Metrics and Evaluation Frameworks
  4. Practical Challenges in Ability-Based Assistive Technologies
  5. Accessibility in Robotics and Autonomous Vehicles
  6. Long-Tail and Low-Shot Recognition of Accessibility-Based Tasks
  7. Accessible Homes, Hospitals, Cities, Infrastructure, Transportation
  8. Crowdsourcing and Annotation Tools for Vision and Accessibility
  9. Empirical Real-World Studies in Inclusive System Design
  10. Assistive Human-Robot Interaction
  11. Remote Accessibility Systems
  12. Multi-Modal (Audio, Visual, Inertial, Haptic) Learning and Interaction
  13. Accessible Mobile and Information Technologies
  14. Virtual, Augmented, and Mixed Reality for Accessibility
  15. Novel Designs for Robotic, Wearable and Smartphone-Based Assistance
  16. Intelligent Assistive Embodied and Navigational Agents
  17. Socially Assistive Mobile Applications
  18. Human-in-the-Loop Machine Learning Techniques
  19. Accessible Tutoring and Education
  20. Personalization for Diverse Physical, Motor, and Cognitive Abilities
  21. Embedded Hardware-Optimized Assistive Systems
  22. Intelligent Robotic Wheelchairs
  23. Medical and Social and Cultural Models of Disability
  24. New Frameworks for Taxonomies and Terminology

Important workshop dates

Previous workshops

2nd AVA: Accessibility, Vision, and Autonomy Meet, CVPR 2023

Acknowledgements

Supported by the Special Interest Group on Accessible Computing.