Product Designer | User Experience Designer

Aeye.space | Winner of IBM Watson Award | Pedramvahabi.co

We built an award-winning product to help the visually impaired in under 48 hours at TechCrunch Disrupt’s 2016 London hackathon.

DESCRIPTION —

We built an award-winning product to help the visually impaired in under 48 hours. 

WHAT DID I DO —
Voice UI
Interaction Design

DATE —
Dec 2016

WEBSITE —
http://aeye.space/


We spent the weekend at TechCrunch Disrupt’s 2016 London hackathon. It was a pretty intense 48 hours of brainstorming, plotting, researching, eating, building, testing, and a bit more eating, before finally being topped off with a presentation to a few hundred attendees in the room.
Gulp. But we won a prize: the special IBM Watson award!

This was the first time we had donned our branded hoodies and stormed a hackathon as a team, so we were pretty happy with the outcome. Here are some more details about what we built.


Inspiration

One of my family members has glaucoma, an eye disease that has damaged their optic nerves. The damage has resulted in severe vision loss, meaning they struggle to find everyday items around them. With recent advances in computer vision, it seemed reasonable that we could build them an artificial eye.

What It Actually Does

Aeye is an artificial eye. Using the microphone and camera on the device, the user can ask where an object is and be guided towards it: moving around the room produces a “hot” or “cold” reading.


How We Built It

We split into two teams: three people concentrated on the mobile app and the UX design, and two on the backend and visual recognition training.

We trained multiple image classifiers on the Watson Visual Recognition service, which gives a reliable match when the image contains the object we’re searching for. The service is wrapped by a Python web service that maps the desired object class to the associated classifier.
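The wrapper itself is only a thin routing layer. Roughly, it looks something like the sketch below (a Flask example with made-up object classes, classifier IDs and route name; the Watson call itself is stubbed out rather than shown):

```python
# Rough sketch of the wrapper web service (illustrative only).
from flask import Flask, request, jsonify

app = Flask(__name__)

# Each object class the user can ask for maps to the Watson classifier
# trained for it (IDs below are placeholders).
CLASSIFIER_IDS = {
    "keys": "keys_classifier_1",
    "mug": "mug_classifier_1",
    "wallet": "wallet_classifier_1",
}

def classify_with_watson(image_bytes, classifier_id):
    """Send a camera frame to Watson Visual Recognition and return the
    top confidence score for the given classifier (stubbed out here)."""
    raise NotImplementedError

@app.route("/locate", methods=["POST"])
def locate():
    target = request.form.get("object", "").lower()
    classifier_id = CLASSIFIER_IDS.get(target)
    if classifier_id is None:
        return jsonify(error="no classifier trained for '%s'" % target), 404

    frame = request.files["frame"].read()  # current camera frame from the app
    confidence = classify_with_watson(frame, classifier_id)
    return jsonify(object=target, confidence=confidence)
```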

 

VUI Flow

 
 

The iOS app uses the Watson Speech to Text service to translate voice input from the user. Once the user selects a class to search for, we give aural feedback (“warmer”, “colder”, “found it”, etc.) as they move the phone’s camera around the room or surface.
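One simple way to turn classifier results into those cues is to compare successive confidence scores as frames come back from the backend. A rough sketch of that decision logic (in Python for brevity rather than the app’s Swift, with made-up thresholds):

```python
FOUND_THRESHOLD = 0.85  # confidence at which we announce "found it" (illustrative)
MIN_DELTA = 0.02        # ignore tiny frame-to-frame fluctuations (illustrative)

def aural_feedback(previous_confidence, current_confidence):
    """Turn two successive classifier scores into a spoken cue."""
    if current_confidence >= FOUND_THRESHOLD:
        return "found it"
    if current_confidence - previous_confidence > MIN_DELTA:
        return "warmer"
    if previous_confidence - current_confidence > MIN_DELTA:
        return "colder"
    return "keep going"  # no meaningful change between frames
```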

 
 
 
 

Tech We Used

  • Swift
  • Python
  • IBM Watson
  • Radix

Design Tools We Used

  • Sketch
  • FramerJS

 

So yeah, without a doubt the next step would be to add celebrity voiceovers. Jeff Goldblum perhaps?

Looking forward to the next Hack! 😬

—  GitHub 

—  Devpost