The NYT tech columnist Nick Bilton announced that 2014 is going to be the year of wearables. And we believe he's right. Recent years brought us playful gadgets like Jawbone's UP or Nike's Fuel. This year, DLD has brought together a colourful mix of next generation wearables, many of them at the intersection of technology, health, and medicine. In this blog piece, the DLD14 speaker Yonatan Wexler introduces the smart wearable camera OrCam.
It is fitting that the solution to many difficulties experienced by the visually impaired should have been found in the field of Computer Vision – a branch of computer science that teaches computers to see.
According to the 2011 National Health Survey by the U.S. National Center for Health Statistics, 21.2 million people in the United States over the age of 18 have some kind of visual impairment, including age-related conditions, diseases and birth defects. It is estimated that worldwide there are 342 million adults with significant visual impairment.
Despite significant technological advances in many fields, it is striking how little assistive technology is available to the visually disabled. The assistive devices that do exist tend to be awkward to use and limited in capability.
Enter OrCam, a small, wearable camera that allows the user to perform a variety of tasks that, although taken for granted by sighted people, are very difficult and complicated for those with limited vision. OrCam is unobtrusive and easily clips onto the wearer’s existing glasses, connected by a thin cable to a small pocket-sized computer. A bone-conduction speaker provides discreet yet clear speech as it reads aloud the words or object pointed to by the user. OrCam can read text (books, newspapers, menus, signs and more) and recognize objects such as products, landmarks, traffic lights and faces. One of its most useful features is its ability to learn new objects, so the user can teach it to memorize a favorite product.
OrCam is based on computer vision algorithms – most notably the ShareBoost algorithm – pioneered by Dr. Amnon Shashua, Dr. Shai Shalev-Shwartz and myself. The ShareBoost method offers a reasonable trade-off between recognition accuracy and speed by minimizing the amount of additional computing power required for each new object it learns to recognize. This stands in sharp contrast to other approaches, such as “deep learning” techniques, which require huge computing resources. One of our biggest challenges was successfully recognizing visual information in different lighting conditions and on variable surfaces.
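To give a feel for the idea behind sharing computation across classes, here is a toy sketch (not the actual ShareBoost algorithm, whose greedy mixed-norm optimization is more involved): instead of selecting separate features for every object class, a single shared pool of features is chosen that discriminates between all classes at once, so the feature budget need not grow with each new object. All data, function names, and the nearest-mean classifier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_shared_features(X, y, n_classes, budget):
    """Pick ONE shared set of feature indices that serves all classes.

    Each feature is scored by the variance of its class-conditional
    means: a feature whose mean differs strongly across classes helps
    separate many classes at once, so one shared pool can replace
    per-class feature sets. (Toy stand-in for ShareBoost's greedy,
    sparsity-driven selection.)
    """
    means = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    scores = means.var(axis=0)  # high score = separates classes well
    return np.argsort(scores)[::-1][:budget]

def nearest_mean_accuracy(X, y, n_classes, feats):
    """Classify by nearest class mean, restricted to the shared features."""
    Xf = X[:, feats]
    means = np.stack([Xf[y == c].mean(axis=0) for c in range(n_classes)])
    dists = ((Xf[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    pred = np.argmin(dists, axis=1)
    return (pred == y).mean()

# Toy data: 5 classes in 100 dimensions, only the first 10 informative.
n_classes, dim, per_class = 5, 100, 40
centers = np.zeros((n_classes, dim))
centers[:, :10] = rng.normal(scale=3.0, size=(n_classes, 10))
X = np.concatenate([c + rng.normal(size=(per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

# A fixed budget of 10 shared features, regardless of how many classes exist.
feats = greedy_shared_features(X, y, n_classes, budget=10)
acc = nearest_mean_accuracy(X, y, n_classes, feats)
print(f"shared features: {sorted(feats.tolist())}, accuracy: {acc:.2f}")
```

The key point the sketch illustrates is the scaling behaviour: adding a sixth object class reuses the same shared feature pool instead of demanding a new set of per-class computations, which is what keeps the method tractable on a small wearable device.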
The device is not a medical device and is specifically designed with a very simple user interface. Simply stated: “point to read, wave to memorize”. To recognize an object or text, the wearer simply points at it with his or her finger, and the device then interprets the scene. The device is also programmed to recognize a pre-stored set of objects, and it allows the user to add to this collection by simply waving a new object in the camera’s field of view.
I cannot begin to verbalize the intense satisfaction I feel when I see a visually impaired person try the device and experience new freedom and independence for the first time. Our pilot shipment of the first 100 devices was completed this past October, and we’re working hard on further improvements based on the user feedback we’ve received. Helping the visually disabled to overcome their challenges – particularly easy access to information – is a rewarding task indeed.
Yonatan Wexler will speak at the upcoming DLD14, taking place in Munich January 19 - 21, 2014. Apply for a ticket to this exclusive conference, tune in on the beat of our community on the DLDpulse and find regular updates on the DLD14 programme and speakers here.