🤟 SignSpeak

AI-Powered Real-Time Sign Language Recognition in the Browser
Breaking communication barriers using Computer Vision & Deep Learning.


🌍 Overview

SignSpeak is a real-time sign language recognition web application that translates hand gestures into text (and optional speech) directly in the browser.

Built using MediaPipe hand tracking and TensorFlow-powered gesture recognition, the system detects 21 hand landmarks per hand and performs ultra-low latency inference without sending any data to a server.

All processing happens locally for maximum privacy and speed.


🚀 Live Demo

👉 https://subhamsje.github.io/SignSpeak/


🧠 How It Works

  1. The camera feed is captured in the browser.
  2. MediaPipe detects 21 landmark points per hand.
  3. The landmark coordinates are fed to a trained gesture classification model.
  4. The predicted sign is displayed instantly.
  5. Optional text-to-speech converts it into audio output.
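Steps 2–4 can be sketched in plain JavaScript. The normalization scheme and the nearest-centroid classifier below are illustrative stand-ins (the repository's actual model is TensorFlow-based); only the 21-landmark `{x, y}` layout comes from MediaPipe Hands.

```javascript
// Normalize landmarks: translate so the wrist (index 0) is the origin,
// then scale by the largest coordinate magnitude, making the features
// invariant to hand position and distance from the camera.
function normalizeLandmarks(landmarks) {
  const wrist = landmarks[0];
  const shifted = landmarks.map(p => ({ x: p.x - wrist.x, y: p.y - wrist.y }));
  const maxAbs = Math.max(
    ...shifted.flatMap(p => [Math.abs(p.x), Math.abs(p.y)]),
    1e-6 // avoid division by zero for a degenerate detection
  );
  return shifted.flatMap(p => [p.x / maxAbs, p.y / maxAbs]);
}

// Nearest-centroid stand-in for the trained classifier: compare the
// normalized feature vector against one stored template per gesture
// and return the label of the closest template.
function classify(features, templates) {
  let best = { label: null, dist: Infinity };
  for (const [label, template] of Object.entries(templates)) {
    const dist = Math.sqrt(
      features.reduce((sum, v, i) => sum + (v - template[i]) ** 2, 0)
    );
    if (dist < best.dist) best = { label, dist };
  }
  return best.label;
}
```

In the real pipeline the template lookup would be replaced by a forward pass through the trained TensorFlow model, but the normalization idea carries over.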

✨ Features

  • 🎥 Real-time hand tracking (60 FPS)
  • 🧠 AI-based gesture recognition
  • 🔊 Built-in text-to-speech
  • ⚡ <30ms response time
  • 🔒 100% client-side processing (no server uploads)
  • 🌐 Works in any modern browser
  • 📱 Mobile & desktop compatible
  • 📚 Gesture learning library

🛠 Tech Stack

  • HTML5 / CSS3
  • JavaScript (ES6)
  • MediaPipe Hands
  • TensorFlow / TensorFlow Lite
  • WebGL acceleration
  • Web Speech API
  • GSAP Animations
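The Web Speech API entry in the stack powers the text-to-speech feature entirely in the browser, with no network round-trip. A minimal sketch of how a prediction could be spoken follows; the rate/pitch defaults are illustrative, not the project's actual settings.

```javascript
// Build a plain config object for an utterance (kept separate so the
// pure part is easy to test outside a browser).
function buildUtterance(text, lang = 'en-US') {
  return { text, lang, rate: 1.0, pitch: 1.0 };
}

// Speak a recognized sign via the Web Speech API. The guard makes the
// sketch inert outside a browser, where speechSynthesis is undefined.
function speak(prediction) {
  const cfg = buildUtterance(prediction);
  if (typeof speechSynthesis !== 'undefined') {
    const u = new SpeechSynthesisUtterance(cfg.text);
    u.lang = cfg.lang;
    u.rate = cfg.rate;
    u.pitch = cfg.pitch;
    speechSynthesis.speak(u);
  }
  return cfg;
}
```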

🧩 Supported Gestures

  • ASL Alphabet
  • Numbers
  • Common Phrases
  • Custom Gesture Extensions (scalable)

📦 Installation (Local Setup)

```shell
git clone https://github.com/subhamsje/SignSpeak.git
cd SignSpeak
```

Then open index.html in your browser.

No backend required.
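Note: browsers require a secure context (https:// or http://localhost) for webcam access, so if the camera is blocked when opening the file directly, serve the folder over a local HTTP server first, for example:

```shell
# Any static file server works; Python's built-in one is a common choice.
python3 -m http.server 8000
# then browse to http://localhost:8000
```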


πŸ” Privacy

All gesture detection runs directly in the browser.
No images, videos, or user data are transmitted to any server.


📈 Future Improvements

  • Sentence prediction using NLP
  • Multi-language sign support
  • Custom gesture training module
  • User authentication & profile tracking
  • Mobile PWA optimization
  • AI-powered contextual corrections

🎯 Why This Project Matters

Over 70 million deaf individuals worldwide rely on sign language for communication.
SignSpeak aims to make sign language universally understandable using AI.


πŸ‘¨β€πŸ’» Author

Subham
Computer Science Engineer
Focused on AI, Startups & Scalable Systems


⭐ If You Like This Project

Give it a star and share it with others. Let’s build a more accessible world together.
