
CASE STUDY

Teaching Sign Language with Real-Time Feedback

  • Pioneered the first sign language learning system of its kind, translating experimental computer vision + AI into real-time feedback on signing accuracy

  • Enabled independent practice outside the classroom, helping learners correct mistakes without instructor oversight

  • Designed a structured learning flow (learn → practice → quiz) to support different learning styles and reinforce retention


Client: SignAll

The Problem

SignAll was a computer vision technology company building a dataset of signed words and phrases, capturing variations in articulation across users. When I joined, the product existed purely as a technical capability, with no user interface or defined application.

Their ambition was to become the world’s first sign language learning platform with real-time feedback, transforming this experimental technology into an educational product.

  • Students learning sign language often lack access to real-time feedback outside the classroom, making it difficult to know whether they are signing correctly.

  • Practicing without instructor guidance can reinforce mistakes and slow progress.

  • At the same time, the underlying sign-language recognition technology was entirely new: there were no established interaction patterns, learning models, or usability conventions to follow.

  • Designing effective feedback required careful consideration of human factors, accessibility, and the wide variation in how individuals sign.

  • The challenge was to translate experimental computer vision technology into a usable, motivating learning experience that could function in real educational environments.


My Role

  • Built foundational understanding of Deaf culture, American Sign Language, and learning needs

  • Led end-to-end product design from discovery through MVP and initial launch

  • Applied human factors principles to computer vision capture and indexing of sign language

  • Designed an intuitive calibration workflow for non-technical users

  • Iterated rapidly on prototypes, exploring interaction models for a completely new technology with no precedents

Outcomes

  • Productized a previously UI-less technology into a usable educational platform

  • Translated experimental computer vision + sign-language recognition into a real-world learning experience

  • Launched a pilot at Gallaudet University

  • Enabled independent practice through real-time visual feedback outside the classroom

  • Introduced a structured learn → practice → quiz model supporting multiple learning styles

  • Increased engagement through gamified repetition and progress tracking

  • Validated by educators as a valuable supplement to in-person instruction


Design Deliverables

  • Initial concepts, prototypes, and core UI patterns

  • Gamification models to drive engagement and repetition

  • Iterative design refinements and developer-ready handoffs

  • User testing with Deaf and hearing participants, informing continuous improvements

  • User documentation

Providing real-time feedback for precise movement
The interface gives immediate visual feedback to help learners adjust hand shape and positioning as they practice.


Designing for accuracy without intimidation
The experience was designed to guide learners gently, without overwhelming them or discouraging experimentation.


Motivating practice through play
Gamified exercises encourage repetition and sustained engagement while reinforcing correct signing.


© 2025 Andrea Breitling.  All Rights Reserved.

This website contains general information only and does not include controlled technical data.
