I'm working on a new app that will lower system requirements, improve accuracy and add new features like a proper save system. Waitlist here (ETA this year): https://tally.so/r/3q85yO
This project is no longer maintained. The goal was an easy-to-install accessibility tool for everyone, but Whisper models turned out to be difficult for many users to run, and the chunking strategy used to compensate for the fact that Whisper models are not designed for live captioning was somewhat error-prone.
Generates and shows real-time captions by listening to your Windows PC's audio. Makes digital content more accessible for those who are deaf or hard of hearing, aids language learning, and more.
*(Demo video: SystemCaptionerDemo.mp4)*
- Captures system audio in real-time through Windows audio loopback using PyAudioWPatch
- Locally transcribes the recordings using faster-whisper
- Displays the transcriptions as captions in an overlay window that always stays on top
- Language auto-detection, a user-friendly GUI, and a draggable caption box
- Intelligent mode that shows captions only when speech is detected
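The capture → gate → transcribe loop described above can be sketched roughly as follows. This is an illustration, not the app's actual code: `pyaudiowpatch`'s WASAPI loopback lookup and `faster_whisper.WhisperModel` are the real libraries named above, while `is_speech`, `caption_loop`, and the 4-second chunk length are hypothetical stand-ins.

```python
import array
import math
import tempfile
import wave


def is_speech(pcm16_bytes, threshold=500):
    """Crude energy gate: True if the RMS of 16-bit PCM audio exceeds
    `threshold`. Stands in for the app's 'intelligent mode'; the real
    detection logic may differ."""
    samples = array.array("h", pcm16_bytes)
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms >= threshold


def caption_loop(model_size="base", seconds_per_chunk=4):
    """Capture system audio via WASAPI loopback and print captions.

    Windows-only; requires the pyaudiowpatch and faster-whisper
    packages. Parameter names and the chunk length are illustrative.
    """
    import pyaudiowpatch as pyaudio          # PyAudio fork with WASAPI loopback
    from faster_whisper import WhisperModel

    model = WhisperModel(model_size)
    with pyaudio.PyAudio() as p:
        # Find the loopback device mirroring the default speakers.
        wasapi = p.get_host_api_info_by_type(pyaudio.paWASAPI)
        speakers = p.get_device_info_by_index(wasapi["defaultOutputDevice"])
        if not speakers.get("isLoopbackDevice"):
            for loopback in p.get_loopback_device_info_generator():
                if speakers["name"] in loopback["name"]:
                    speakers = loopback
                    break
        rate = int(speakers["defaultSampleRate"])
        channels = speakers["maxInputChannels"]
        stream = p.open(format=pyaudio.paInt16, channels=channels, rate=rate,
                        input=True, input_device_index=speakers["index"])
        try:
            while True:
                chunk = stream.read(rate * seconds_per_chunk)
                if not is_speech(chunk):
                    continue  # intelligent mode: skip silent chunks
                # faster-whisper accepts a file path, so record the
                # chunk to a temporary WAV before transcribing it.
                with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
                    with wave.open(f, "wb") as w:
                        w.setnchannels(channels)
                        w.setsampwidth(2)     # 16-bit samples
                        w.setframerate(rate)
                        w.writeframes(chunk)
                    path = f.name
                segments, _info = model.transcribe(path)
                print(" ".join(seg.text.strip() for seg in segments))
        finally:
            stream.stop_stream()
            stream.close()
```

The energy gate is deliberately naive; a production version would more likely use a proper voice-activity detector.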
By default, the app runs on and requires NVIDIA CUDA (dependencies included). It should work with RTX 2000-, 3000-, and 4000-series cards. Turning off GPU mode makes the app run on the CPU; start with the smallest model and settle on the largest model that runs stably.
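The "start with the smallest model" advice can be automated by trying sizes from smallest upward and settling on the first that loads. A sketch under assumptions: `first_stable_model` and the fallback loop are our own illustration, not the app's code — only the `WhisperModel(size, device="cpu", compute_type="int8")` constructor shown in the docstring comes from faster-whisper.

```python
# Whisper model sizes from smallest to largest, as published for faster-whisper.
SIZES = ["tiny", "base", "small", "medium", "large-v3"]


def first_stable_model(load_model, sizes=SIZES):
    """Return (size, model) for the smallest size that loads without error.

    `load_model` is a callable such as
        lambda size: WhisperModel(size, device="cpu", compute_type="int8")
    (that constructor is faster-whisper's; this fallback loop is only an
    illustration of the manual trial-and-error the README suggests).
    """
    for size in sizes:
        try:
            return size, load_model(size)
        except Exception:
            continue  # this size failed to load; try the next one up
    raise RuntimeError("no Whisper model size could be loaded")
```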
- Download the latest standalone .zip (currently 1.38) from the releases section and extract all files.
- Run SystemCaptioner.exe and follow the instructions.
Alternatively, build the standalone executable yourself with build_portable.py. First install all dependencies from requirements.txt inside a venv, and copy the nvidia_dependencies folder from the standalone .zip (/SystemCaptioner/Controller/_internal/nvidia_dependencies).
If you experience any issues with System Captioner, let me know on the 'Issues' page of this repo! Include the Console window log if possible.