This is about using what The Economist calls ‘artificial artificial intelligence’ (like Mechanical Turk, which uses people as artificial computers) to enhance (artificially intelligent) machine vision.

The idea is that the disabled can finally turn the tables on disability. They’re getting involved in developing tools to help the rest of us help them.

The video is an amazing talk given by Jeffrey P. Bigham of the University of Rochester called:

Real-Time Crowd Support for People with Disabilities

It was given at Dartmouth College in New Hampshire on November 15th, 2011, co-sponsored by the Computer Science Colloquium and the Institute for Security, Technology, and Society.

Here’s an introduction to the talk:

The past few decades have seen the development of wonderful new intelligent technology that serves as sensors and agents onto an inaccessible world for people with disabilities, but it remains both too prone to errors and too limited in scope to reliably address many problems faced by people with disabilities in their everyday lives.

The big challenges that Jeffrey addresses stem from the fact that:

  • the disabled user needs tools to help them ‘present their requirements to the crowd’
  • the crowd needs to be reliably and quickly ‘recruited’ (see the sketch after this list)
  • the naturally unruly nature of crowds needs to be ‘tamed’ and turned into coherent action

Somehow, he’s managing to come up with ingenious solutions and produce credible results.
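On the ‘recruitment’ problem, one trick described in Bigham’s VizWiz work is to recruit crowd workers before any question arrives, so someone is already waiting the moment a user asks. Here’s a minimal Python sketch of that idea; the names and the simulated workers are my own inventions for illustration, not code from the real system:

```python
import queue
import threading
import time

# Hypothetical sketch: keep a small pool of crowd workers "on retainer" by
# recruiting continuously, so a real question never waits on recruitment.
worker_pool = queue.Queue()

def recruit_workers(target_pool_size=3):
    """Continuously top up the pool of waiting workers."""
    worker_id = 0
    while True:
        if worker_pool.qsize() < target_pool_size:
            worker_id += 1
            worker_pool.put(f"worker-{worker_id}")  # stand-in for a real hire
        time.sleep(0.1)

def answer_question(question):
    """Hand the question to an already-recruited worker: no recruiting delay."""
    worker = worker_pool.get()  # blocks only if the pool ran dry
    print(f"{worker} is answering: {question!r}")

threading.Thread(target=recruit_workers, daemon=True).start()
time.sleep(0.5)                      # let the pool fill up first
answer_question("What does this label say?")
```

The point of the design is latency: the slow step (finding a willing worker) happens ahead of time, off the critical path of the user’s question.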

We have been developing approaches to crowdsourcing that work in real-time to overcome these problems.

In this talk, I’ll discuss the following recent projects that use real-time crowdsourcing:

  • VizWiz, an accessible iPhone application that blind people use to take a picture, speak a question, and receive answers from the crowd in under a minute.

More than 20,000 questions have been asked so far, giving us insight into the types of questions blind people want answered.
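To make the VizWiz flow concrete, here’s a hypothetical sketch of the pipeline: a photo plus a spoken question goes out to several workers, and the first few answers stream back. Everything here (the types, the simulated workers) is an assumption for illustration, not VizWiz’s actual code:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Question:
    photo_path: str
    spoken_text: str
    answers: list = field(default_factory=list)

def ask_crowd(question, workers, wanted=3, timeout_s=60):
    """Collect up to `wanted` answers, stopping after `timeout_s` seconds."""
    start = time.monotonic()
    for worker in workers:                  # in reality: workers in parallel
        if time.monotonic() - start > timeout_s:
            break
        answer = worker(question)           # stand-in for a human's answer
        question.answers.append(answer)
        if len(question.answers) >= wanted:
            break
    return question.answers

# Simulated workers; a real system routes to people, not functions.
workers = [lambda q: "a can of corn", lambda q: "canned corn", lambda q: "corn"]
q = Question("shelf.jpg", "What is this can?")
print(ask_crowd(q, workers))   # e.g. ['a can of corn', 'canned corn', 'corn']
```

Asking several workers the same question, as sketched here, is also what lets the system cross-check unreliable individual answers.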

  • Legion, a system that lets dynamic groups collaboratively control existing user interfaces using a VNC-like setup.

These applications collectively inform a new model of human-computer interaction in which a dynamic group of unreliable individuals act as a single reliable user.
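Here’s a hedged sketch of what ‘acting as a single reliable user’ can look like in code: within each short time window, a mediator forwards only the input most workers agree on to the controlled interface. The Legion work describes several mediation strategies; this simple majority vote is just one illustrative possibility, not necessarily Legion’s exact policy:

```python
from collections import Counter

def mediate(proposed_inputs):
    """Pick the input most workers agree on for this time window."""
    if not proposed_inputs:
        return None
    winner, votes = Counter(proposed_inputs).most_common(1)[0]
    # Require a small quorum before acting at all, to filter lone outliers.
    if votes < max(2, len(proposed_inputs) // 2):
        return None
    return winner

# One time window: five workers propose keystrokes for the remote UI.
window = ["ArrowUp", "ArrowUp", "ArrowLeft", "ArrowUp", "Space"]
print(mediate(window))   # -> "ArrowUp"
```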

Here’s a summary of Jeffrey P. Bigham’s biography:

He is an Assistant Professor in the Department of Computer Science at the University of Rochester, where he directs ROC HCI.

His work spans Access Technology, Human Computation, and Intelligent User Interfaces.

He is specifically interested in technology that engages the crowd to assist people with disabilities in their everyday lives.

Professor Bigham received his Ph.D. in Computer Science and Engineering from the University of Washington in 2009, working with Dr. Richard Ladner, and his B.S.E. from Princeton in 2003.

Jeffrey has received a number of awards for his work, including the Andrew W. Mellon Foundation Award for Technology Collaboration, the MIT Technology Review Top 35 Innovators Under 35 Award, two ASSETS Best Student Paper Awards, and the UIST 2010 Best Paper Award.

Here’s a link to The Economist article on artificial artificial intelligence (it was way back in 2006).