This notebook demonstrates an end-to-end pipeline for sign language action detection. Keypoints are extracted from video frames using MediaPipe and stored as coordinate sequences. An LSTM neural network is then trained to classify actions from these keypoint sequences, enabling detection on live video.
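A minimal sketch of the keypoint-extraction step, assuming MediaPipe Holistic landmark counts (33 pose landmarks with x/y/z/visibility, 468 face and 21 per-hand landmarks with x/y/z, giving a 1662-value frame vector); the function name `extract_keypoints` is illustrative, not necessarily the notebook's:

```python
import numpy as np

# Assumed MediaPipe Holistic landmark counts: 33 pose, 468 face, 21 per hand.
POSE_DIMS = 33 * 4   # pose landmarks carry (x, y, z, visibility)
FACE_DIMS = 468 * 3  # face/hand landmarks carry (x, y, z)
HAND_DIMS = 21 * 3

def extract_keypoints(results):
    """Flatten one frame's Holistic results into a fixed-length vector,
    zero-filling any body part that was not detected in the frame."""
    def flat(landmark_list, dims, with_visibility=False):
        if landmark_list is None:
            return np.zeros(dims)
        if with_visibility:
            return np.array([[lm.x, lm.y, lm.z, lm.visibility]
                             for lm in landmark_list.landmark]).flatten()
        return np.array([[lm.x, lm.y, lm.z]
                         for lm in landmark_list.landmark]).flatten()

    pose = flat(getattr(results, "pose_landmarks", None), POSE_DIMS,
                with_visibility=True)
    face = flat(getattr(results, "face_landmarks", None), FACE_DIMS)
    lh = flat(getattr(results, "left_hand_landmarks", None), HAND_DIMS)
    rh = flat(getattr(results, "right_hand_landmarks", None), HAND_DIMS)
    # Total length: 132 + 1404 + 63 + 63 = 1662 values per frame
    return np.concatenate([pose, face, lh, rh])
```

Stacking these per-frame vectors over a fixed window (e.g. 30 frames) yields the coordinate sequences the LSTM consumes. Zero-filling missing parts keeps the input shape constant even when a hand leaves the frame.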
SignSync is a sign language translation system I built to support both Indian Sign Language (ISL) and American Sign Language (ASL). I used MediaPipe for gesture tracking and integrated NLP and deep learning models to convert signs into speech and text. The system achieved around 82% translation accuracy.