Real-time sign language alphabet recognition system using Python, MediaPipe, and Random Forest. Includes custom dataset preprocessing, model training, and live webcam-based gesture classification.

ayushd1919/Sign-Language-Recognition


🧠 Sign Language Alphabet Detection using Machine Learning

A real-time sign language alphabet recognition system built with Python, MediaPipe, and a Random Forest classifier. The project is structured into three main phases: dataset creation and preprocessing, model training, and real-time inference using webcam input.


📌 Features

  • Detects and classifies hand gestures representing letters of the alphabet.
  • Real-time gesture recognition using webcam feed.
  • Uses hand landmark detection for robust feature extraction.
  • Custom dataset creation and preprocessing.
  • Trained using the Random Forest classification algorithm.
  • Visual feedback with prediction results displayed live.

🧠 Methodology

1. Dataset Creation and Preprocessing

  • Image Collection: Hand gesture images are collected and organized into labeled directories by gesture.
  • Feature Extraction: Uses MediaPipe to extract hand landmarks from images.
  • Normalization: Features are normalized for consistency.
  • Labeling: Each processed feature set is paired with the correct label.
  • Storage: Processed features and labels are stored for training.
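The extraction and normalization steps above can be sketched as follows. `normalize_landmarks` is an illustrative name, and the input is assumed to be the 21 (x, y) pairs read from MediaPipe's `results.multi_hand_landmarks` (each landmark's `.x` and `.y` attributes); shifting every coordinate by the per-axis minimum is one common way to make the features translation-invariant:

```python
def normalize_landmarks(points):
    """Turn 21 (x, y) hand-landmark pairs into a flat 42-value feature
    vector, shifted so the smallest x and y become 0.  This removes the
    hand's absolute position in the frame from the features."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    features = []
    for x, y in points:
        features.append(x - min_x)
        features.append(y - min_y)
    return features
```

In the real pipeline, the resulting vectors and their gesture labels would then be serialized (e.g. pickled) for the training phase.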

2. Model Training

  • Data Splitting: Dataset is split into training and testing sets.
  • Training: A Random Forest classifier is trained on the extracted features.
  • Evaluation: Performance is measured using accuracy, precision, recall, and F1-score.
  • Model Saving: The trained model is saved for use in real-time inference.
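A minimal sketch of the training phase with scikit-learn; two easily separable synthetic classes stand in for the real 42-value landmark features, and the commented-out `joblib` line mirrors the model-saving step:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the landmark dataset: 42 features per sample,
# one cluster per "gesture" class.
X = np.vstack([rng.normal(0.0, 0.05, (100, 42)),
               rng.normal(1.0, 0.05, (100, 42))])
y = np.array([0] * 100 + [1] * 100)

# Split into training and testing sets, preserving class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Train the Random Forest classifier and evaluate on held-out data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))

# Persist the trained model for the real-time inference phase, e.g.:
# import joblib; joblib.dump(model, "model.joblib")
```

Precision, recall, and F1-score can be obtained the same way via `sklearn.metrics.classification_report`.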

3. Real-Time Inference

  • Live Input: Captures video from webcam.
  • Feature Extraction: Processes live images to extract hand landmarks.
  • Prediction: Classifies the gesture using the trained model.
  • Feedback: Displays the predicted alphabet on the screen.
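The real-time loop might be structured as below. `extract_features` is a hypothetical callback wrapping the MediaPipe landmark extraction from the earlier phase, and the A–Z class ordering is an assumption; the camera and window calls are wrapped in a function so nothing opens a webcam on import:

```python
import string

LABELS = list(string.ascii_uppercase)  # assumed class order: A..Z

def label_for(class_index):
    """Map a classifier output index to the letter shown on screen."""
    return LABELS[class_index]

def run_inference(model, extract_features):
    """Capture webcam frames, classify each gesture, and overlay the
    predicted letter.  `extract_features(frame)` should return the
    42-value landmark vector, or None when no hand is detected."""
    import cv2  # OpenCV for capture and drawing
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        features = extract_features(frame)
        if features is not None:
            letter = label_for(model.predict([features])[0])
            cv2.putText(frame, letter, (40, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("Sign Language Recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```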

๐Ÿ› ๏ธ Tech Stack

  • Programming Language: Python
  • Libraries: OpenCV, MediaPipe, Scikit-learn, NumPy, joblib
  • Model: Random Forest Classifier

📊 Results

The system demonstrates:

  • ✅ High accuracy in classifying alphabetic hand gestures
  • ⚡ Fast real-time inference using webcam input
  • 🔍 Reliable feature extraction with MediaPipe for landmark detection
  • 📈 Consistent performance across various lighting conditions and hand positions
  • 👁️ Visual feedback with recognized gesture displayed live on-screen
