 

AR for VIPs

AUGMENTED REALITY FOR VISUALLY IMPAIRED PEOPLE

A woman with a white cane passing by a bus stop. Photo credit: Anne Miller

We are on a mission to help visually impaired people navigate a sighted world.

About 36 million people worldwide are blind. For many of these people, navigating new spaces can be a cumbersome or frustrating experience as they listen and feel their way around their environment.

AR for VIPs seeks to use the spatial mapping power of augmented reality devices to solve "5-meter problems" for users: GPS can bring a blind traveler to roughly the right place, but not across the last few meters to a specific door or sign. This could improve independence for blind people by allowing them to find and read objects such as bus stop signs without sighted assistance.

 

SEE HOW IT WORKS

 

Our project uses augmented reality devices and a combination of spatial audio cues and speech sounds to deliver semantic information about the surroundings to blind and visually impaired people.

Watch our demonstration video below, or find audio descriptions here: https://bit.ly/2VtYl6M

The Spatial Mapping feature creates a digital replica of the user’s environment automatically as the user walks around.

spatial mapping

The HoloLens builds up a 3D map of the user’s environment over time. Its infrared depth sensors generate a triangle mesh that lets applications approximate where surfaces and objects are.
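The sketch below shows one way an application can use such a mesh: cast a ray along the user’s gaze and intersect it with the mesh triangles to estimate how far away the surface the user is facing is. This is illustrative Python, not the project’s Unity/C# code or the HoloLens API; the function names and brute-force loop are assumptions for clarity.

```python
# Illustrative: estimate gaze distance by raycasting against a triangle mesh.
import numpy as np

def ray_triangle_distance(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore intersection; returns distance along the ray, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None      # hit must be in front of the user

def gaze_distance(origin, direction, vertices, triangles):
    """Nearest hit among all mesh triangles (brute force for clarity)."""
    hits = (ray_triangle_distance(origin, direction, *vertices[tri])
            for tri in triangles)
    hits = [t for t in hits if t is not None]
    return min(hits) if hits else None
```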

Obstacle beacons (red) make noise when the user is looking at them. Wall beacons (yellow) are more subtle to allow differentiation.

obstacle sonification

We use the mesh generated by the HoloLens to place digital audio beacons that sonify obstacles in the environment.
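A minimal sketch of the triggering behavior described above, again in illustrative Python rather than the app’s Unity/C# audio code: a beacon sounds when the user’s gaze falls within a cone around it, with wall beacons played more quietly than obstacle beacons. The cone angle and gain values here are assumptions, not the project’s tuned parameters.

```python
# Hypothetical gaze-triggered beacon logic with distance attenuation.
import numpy as np

GAZE_CONE_DEG = 15.0                    # assumed trigger cone half-angle
GAIN = {"obstacle": 1.0, "wall": 0.3}   # wall beacons are more subtle

def should_sound(head_pos, gaze_dir, beacon_pos):
    """True when the beacon lies within the gaze cone."""
    to_beacon = beacon_pos - head_pos
    to_beacon = to_beacon / np.linalg.norm(to_beacon)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    cos_angle = np.clip(np.dot(gaze_dir, to_beacon), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= GAZE_CONE_DEG

def beacon_volume(head_pos, beacon_pos, kind):
    """Distance-attenuated volume, scaled by beacon type."""
    dist = np.linalg.norm(beacon_pos - head_pos)
    return GAIN[kind] / max(dist, 1.0)  # simple 1/d falloff beyond 1 m

# e.g. a wall beacon 2 m ahead of a 1.6 m-tall user looking straight on:
# should_sound(np.array([0, 1.6, 0.]), np.array([0, 0, 1.]),
#              np.array([0.2, 1.5, 2.0]))
```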

When the user wants to read text, they can use the “capture text” command to place an audio beacon (blue) and “read text” to read it.

text recognition

We capture images using the HoloLens camera and read out text found in the environment with the help of Google’s text recognition APIs.
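The recognition step might look like the following sketch, which uses Google’s Cloud Vision Python client as a stand-in for the project’s on-device call; read_text is a hypothetical helper, and in the app the returned string would be handed to text-to-speech when the user says “read text.”

```python
# Sketch: send a captured camera frame to Google's Cloud Vision text detection.
from google.cloud import vision

def read_text(image_bytes: bytes) -> str:
    """Return the text detected in one camera frame."""
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    if response.error.message:
        raise RuntimeError(response.error.message)
    annotations = response.text_annotations
    # The first annotation aggregates all text found in the frame.
    return annotations[0].description if annotations else ""

# Usage: pass the frame captured at the blue beacon, then speak the result.
# with open("bus_stop_sign.jpg", "rb") as f:
#     print(read_text(f.read()))
```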

 

TEAM

 

School of Information

We are an interdisciplinary team of Master's students at UC Berkeley's School of Information. Our advisor is UCB faculty member and HCI expert Kimiko Ryokai.

Alyssa Li

XR Designer, Spatial Audio Composer

Rohan Kar

XR Engineer, Text Recognition

Dylan Fox

XR Designer and Engineer, Spatial Mapping

Anu Pandey

User Researcher

Extended Reality at Berkeley

Extended Reality at Berkeley is a student group dedicated to bringing virtual reality to the campus community. These undergraduates helped develop the original AR for VIPs prototype and are now responsible for taking it in exciting new directions!

Elliot Choi

Sound Design

 

Rajandeep Singh

Obstacle Detection

Manish Kondapolu

Text Recognition


 SUPPORTED BY