SignBridge Duo — Case Study
Accessibility · AR Wearables · 2025

SignBridge Duo:
Bridging the
Auditory Gap

SignBridge Duo AR Glasses

An intelligent AR captioning system for DHH and neurodiverse users — real-time speech-to-text with context-aware modes and Important Information Detection.

Accessibility · AR Wearables · HCI Research · Inclusive Design
Type: Course Project
Role: Product Designer
Timeline: Sep – Dec 2025
School: Cornell Tech
Team: Mingyuan Pang, Jiawen Chen, Yuhan Zhang
01 / Overview

Restoring autonomy to the "in-between" user

DHH individuals who aren't fluent in sign language live between two worlds — excluded from the Deaf community by language barriers, isolated from the hearing world by communication fatigue. Existing tools don't solve this. They either route through human interpreters, stripping away agency, or offer static, one-size-fits-all captions that fail neurodiverse users entirely.

SignBridge Duo replaces human intermediaries with a context-aware AR system that adapts to the user's environment automatically.

4 — Interaction Modes
IID — Memory Retention Layer
3 mo. — Sep – Dec 2025
02 / Challenge & Research

Finding the user no one designed for

We started with Jessica Kellgren-Fozard, a late-deafened YouTuber who lacks fluency in formal sign language but can't rely solely on lip-reading. She exists in a gap — excluded from the Deaf community, isolated from the hearing world. We called this the "in-between" user, and we found that no existing tool was built for her.

🔍
Competitor Analysis
We analyzed Lingvano, SignVideo, and Captify Pro — and found the same two failures across every one of them.
🔗
Dependency on Intermediaries
All three tools route through third-party human interpreters, removing autonomy rather than restoring it.
🧠
Rigid, Context-Blind Design
Static one-way captions that fail neurodiverse users — no environmental adaptation, no cognitive load support.
⚠️
The "In-Between" Gap
Late-deafened and hard-of-hearing users who can rely on neither sign language nor lip-reading alone are entirely underserved.
03 / Design Process

Intelligence, not just text

With the user gap defined, we built a storyboard around Alex — a DHH professional navigating a chaotic airport. High background noise makes lip-reading impossible. He misses a PA announcement about a gate change and only realizes when the crowd moves. This scenario sharpened our design brief: the system needed to adapt to the environment, not the other way around.

Storyboard — Alex at the airport

Storyboard: Alex's invisible barrier — the moment that defined our design direction

From that storyboard we designed four context-aware modes that switch automatically — no manual toggling, no cognitive overhead for the user.

🗣
1-on-1 Mode
Focused single-speaker dialogue. Captions appear close and stable, minimizing visual noise.
👥
Group Mode
Multi-speaker tracking with positional cues and speaker labels for social settings.
🏫
Presentation Mode
Stabilized captioning with sentence-level buffering for academic lectures.
📢
Environmental Mode
PA announcements and ambient alerts converted to high-contrast 3D spatial cues with directional arrows.
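The four modes above imply a dispatcher that maps coarse audio-scene signals to a captioning mode. A minimal rule-based sketch follows; all feature names (`speaker_count`, `monologue_secs`, `is_pa_source`) are hypothetical stand-ins for whatever a wearable's microphone array and sound classifier would actually expose, and the thresholds are illustrative, not measured.

```python
from dataclasses import dataclass

@dataclass
class AudioScene:
    """Coarse scene features a mic array might expose (names are hypothetical)."""
    speaker_count: int     # distinct voices currently tracked
    monologue_secs: float  # how long the current speaker has held the floor
    is_pa_source: bool     # audio classified as a PA / alarm system

def select_mode(scene: AudioScene) -> str:
    """Pick a captioning mode; environmental alerts pre-empt everything else."""
    if scene.is_pa_source:
        return "environmental"   # PA announcement or ambient alert
    if scene.speaker_count > 1:
        return "group"           # multi-speaker social setting
    if scene.monologue_secs > 60:
        return "presentation"    # one speaker holding the floor, lecture-style
    return "one-on-one"          # focused single-speaker dialogue
```

The priority ordering encodes the design intent: safety-critical environmental cues always win, and the user never toggles anything by hand.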

Beyond mode-switching, we designed Important Information Detection (IID) — a memory layer that detects and temporarily holds critical verbal instructions (deadlines, names, announcements) before they fade, specifically addressing ADHD and APD overlap.

Information architecture and adaptive modes

Adaptive context intelligence — automated mode switching with IID memory retention

Full information architecture

Full information architecture diagram

04 / Expert Validation

From captioning to holistic support

Consulting with Jazmin Cano (Senior UX Research Specialist, Accessibility at Owlchemy Labs) fundamentally shifted our strategy. Three pivots followed, each reshaping the project's scope.

01
Intersectionality: acknowledging overlapping needs

DHH needs often overlap with ADHD and APD. We added memory retention tools and cognitive load indicators that weren't in the original scope.

02
Safety redesign: removing seizure risks

Replaced photosensitive "red alert" flashes with redundant shape-based cues — triangle warnings, border pulses — that convey urgency without triggering photosensitivity.
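The shape-based replacement for flashing alerts can be sketched as a static cue table: urgency maps to redundant shape and border treatments, none of which flash faster than the WCAG 2.3.1 three-flashes-per-second threshold. The cue names and the `render_cue` helper are hypothetical illustrations, not the shipped design tokens.

```python
from enum import Enum

class Urgency(Enum):
    INFO = 1
    WARNING = 2
    CRITICAL = 3

# Redundant, non-photosensitive cues: shape plus a slow border treatment.
# Nothing here relies on color or rapid luminance change to convey urgency.
ALERT_CUES = {
    Urgency.INFO:     {"shape": "circle",   "border": "static"},
    Urgency.WARNING:  {"shape": "triangle", "border": "slow-pulse"},
    Urgency.CRITICAL: {"shape": "triangle", "border": "thick-slow-pulse"},
}

def render_cue(urgency: Urgency, message: str) -> str:
    """Compose a caption-layer cue description for the renderer (hypothetical)."""
    cue = ALERT_CUES[urgency]
    return f"[{cue['shape']}/{cue['border']}] {message}"
```

Encoding urgency in shape and border weight keeps the signal legible to users with photosensitivity or color-vision differences alike.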

03
Contrast & text as primary controls, not settings

Elevated customization to the primary UI layer, treating accessibility controls as core usability requirements rather than buried preferences.

"An impressive example of inclusive design that prioritizes user autonomy. SignBridge Duo bridges the critical gap between raw information and meaningful, accessible communication."

Jazmin Cano — Senior UX Research Specialist in Accessibility, Owlchemy Labs
05 / Final Product

See it in action

The final system delivers an adaptive AR ecosystem that replaces human intermediaries with real-time context-aware support — four modes, IID memory layer, and a fully redesigned safety system for neurodiverse users.

SignBridge Duo final design overview

Final design system — all four modes and IID layer

Through SignBridge Duo, I learned that accessibility is not a feature checklist — it's a fundamental architectural framework. By designing for the intersectional needs of DHH and neurodiverse users rather than treating them as a monolith, we built a system that adapts to the human, not the other way around.

The expert consultation was the most formative moment of the project. It taught me that the best design decisions come from admitting what you don't know — and that designing for the margins almost always improves the experience for everyone.

Innovation serves no purpose unless it restores autonomy to those who need it most.
