Real-time sign language detection with a Flask web UI, model management endpoints, and a training pipeline.
Sign language accessibility tools are often fragmented between research scripts and production apps, making it hard to train, ship, and test models in one place.
The goal is to provide a practical end-to-end system that supports:
- image collection and dataset generation,
- model training and report conversion,
- real-time browser-based inference with model switching.
This project implements:
- a training framework under `training/` for dataset and model lifecycle,
- Flask applications (`app.py`, `app_multi_client.py`) for single and multi-client usage,
- shared utilities in `utils/` for consistent processing and configuration,
- model report parsing and UI display for model quality visibility.
You get a workflow that can move from data collection to live inference quickly, with configurable runtime behavior and reusable scripts for local development.
```
SignLanguageDetector/
├── app.py
├── app_multi_client.py
├── training/
├── utils/
├── models/
├── templates/
├── static/
└── docs/
```
- Clone the repository

```powershell
git clone https://github.com/Life-Experimentalist/SignLanguageDetector.git
Set-Location SignLanguageDetector
```

- Create a fresh uv environment

```powershell
uv venv --python 3.12 .venv
```

If your machine only has Python 3.13 installed, uv can still provision Python 3.12 for this project automatically.

- Install dependencies

```powershell
uv sync --python .venv\Scripts\python.exe
```

- Run the app

```powershell
uv run --python .venv\Scripts\python.exe python app.py
```

Open the app in your browser: http://localhost:5000
Alternatively, set up with pip via requirements.txt:

```powershell
uv venv --python 3.12 .venv
uv pip install --python .venv\Scripts\python.exe -r requirements.txt
.\.venv\Scripts\python.exe app.py
```

- Docs Overview
- Landing Page
- Contributing Guide
- Roadmap
- API Reference
- Architecture Charts
- Branding Prompts
- Release Notes
- Scripts and Commands
- Project TODO
- Integration Guide
- Telemetry Integration
The landing page is ready in the docs/ folder for GitHub Pages.
- Entry file: `docs/index.html`
- Styles: `docs/styles.css`
- Stats integration: `docs/landing.js`
- SEO files: `docs/robots.txt`, `docs/sitemap.xml`
- Custom domain file: `docs/CNAME` (set to `sign.vkrishna04.me`)
- Branding asset target: `docs/static/branding/`
```powershell
uv run --python .venv\Scripts\python.exe python app.py
uv run --python .venv\Scripts\python.exe python app_multi_client.py
uv run --python .venv\Scripts\python.exe python training_pipeline.py
uv run --python .venv\Scripts\python.exe python training/convert_model_reports.py
```

After starting the server, you can call a direct API endpoint:
`POST /api/predict`

Supported input formats:
- `multipart/form-data` with file field `image`
- `application/json` with `image_base64`
Optional flags:
- `show_landmarks` (true/false, default `false`)
- `include_visuals` (true/false, default `false`)
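To illustrate the JSON variant, here is a small Python sketch that assembles the base64 request body. The helper name is hypothetical; the field names follow the formats and flags listed above.

```python
import base64
import json


def build_predict_payload(image_bytes, show_landmarks=False, include_visuals=False):
    """Assemble the JSON body for the base64 variant of POST /api/predict."""
    return json.dumps({
        # raw image bytes encoded as base64 text
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
        "show_landmarks": show_landmarks,
        "include_visuals": include_visuals,
    })


# Example: build a payload from an image file on disk
# with open(r".\data\0\sample.jpg", "rb") as f:
#     payload = build_predict_payload(f.read())
```

Any HTTP client can then POST this string with `Content-Type: application/json`, as the PowerShell example below does.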
PowerShell example (multipart upload):
```powershell
$form = @{
    image = Get-Item .\data\0\sample.jpg
    show_landmarks = "false"
    include_visuals = "false"
}
Invoke-RestMethod -Uri "http://127.0.0.1:5000/api/predict" -Method Post -Form $form
```

PowerShell example (base64 JSON):
```powershell
$bytes = [System.IO.File]::ReadAllBytes(".\data\0\sample.jpg")
$payload = @{
    image_base64 = [System.Convert]::ToBase64String($bytes)
    show_landmarks = $false
    include_visuals = $false
} | ConvertTo-Json
Invoke-RestMethod -Uri "http://127.0.0.1:5000/api/predict" -Method Post -ContentType "application/json" -Body $payload
```

This project is linked with CFlair-Counter hosted at https://counter.vkrishna04.me.
- Increment endpoint used by this app: `POST https://counter.vkrishna04.me/api/views/sign-language-detector`
- Badge endpoint: `https://counter.vkrishna04.me/api/views/sign-language-detector/badge?style=flat-square&color=brightgreen&label=views`
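If you want to show the badge in a README or web page, the badge endpoint above can be embedded as a standard markdown image (illustrative snippet):

```markdown
![views](https://counter.vkrishna04.me/api/views/sign-language-detector/badge?style=flat-square&color=brightgreen&label=views)
```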
Licensed under Apache License 2.0. See LICENSE.md.
Training quality matters: better capture quality produces better predictions. The dataset builder automatically skips images without detectable hand landmarks and discards blurry frames before training.

Anonymized telemetry is collected using the CFlair-Counter project and is used only to display project stats. To disable it, create a `.env` file and set:

```
DISABLE_ANONYMOUS_TELMETRY=true
```
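Wherever the `DISABLE_ANONYMOUS_TELMETRY` flag is read, a check along these lines could interpret it. This is a hypothetical helper, not the app's actual code, which is not shown here; it assumes the `.env` contents end up in the process environment (e.g. via python-dotenv).

```python
import os


def telemetry_disabled(env=None):
    """Return True when the opt-out flag is set to a truthy string value."""
    env = os.environ if env is None else env
    # Treat common truthy spellings as "disabled"; anything else keeps telemetry on.
    return str(env.get("DISABLE_ANONYMOUS_TELMETRY", "false")).strip().lower() in ("1", "true", "yes")
```

A guard like `if not telemetry_disabled(): ...` around the counter call would then honor the `.env` setting.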