AI-powered video study companion: your personal lecture assistant
If you're serious about learning, you know the pain:
- ❌ Taking notes while watching is impossible
- ❌ Can't find that one concept from last week
- ❌ Re-watching hours of content to find a specific moment
- ❌ No way to search what was said in a video
- ❌ Your notes don't connect concepts together
What if AI could watch the videos for you, take perfect notes, and help you understand the big picture?
Upload any video (lectures, YouTube, online courses):
- Whisper AI speech-to-text (super accurate)
- Speaker identification
- Timestamped transcript
- Export as subtitles
Searchable video content.
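The timestamped segments map directly onto subtitle formats. Here is a minimal sketch of the SRT export idea, assuming a hypothetical segment dict shape; note that whisper.cpp can also write SRT directly, so this only illustrates the mapping:

```python
# Sketch: converting timestamped transcript segments to SRT subtitles.
# The {"start", "end", "text"} segment shape is an assumption for
# illustration, not the project's actual data model.

def to_srt_time(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp SRT expects."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments: list[dict]) -> str:
    """Render a list of timestamped segments as an SRT string."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_srt_time(seg['start'])} --> {to_srt_time(seg['end'])}\n{seg['text']}\n"
        )
    return "\n".join(blocks)

print(segments_to_srt([{"start": 0.0, "end": 2.5, "text": "Welcome to the lecture."}]))
```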
- Detects slide changes automatically
- Screenshots key diagrams and charts
- Captures important visual moments
- OCR extracts text from slides
See the highlights without watching.
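Slide-change detection can be as simple as thresholding the difference between consecutive frames. A dependency-free sketch with made-up grayscale pixel lists and a guessed threshold; the real detector would operate on FFmpeg-extracted frames:

```python
# Sketch: detecting slide changes by comparing consecutive frames.
# Frames are flat lists of grayscale values (0-255) for simplicity;
# the 0.2 threshold is an illustrative guess, not a tuned value.

def mean_abs_diff(a: list[int], b: list[int]) -> float:
    """Average per-pixel difference between two same-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def slide_change_indices(frames: list[list[int]], threshold: float = 0.2) -> list[int]:
    """Return indices of frames that differ enough from their predecessor."""
    changes = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) / 255 > threshold:
            changes.append(i)
    return changes

static_slide = [10] * 100   # two identical frames: same slide
new_slide = [200] * 100     # a very different frame: slide changed
print(slide_change_indices([static_slide, static_slide, new_slide]))  # [2]
```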
This is the magic:
- Extracts concepts from the content
- Connects related ideas together
- Shows you the "big picture"
- Find hidden relationships
Not just notes: actual understanding.
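To show what "connecting related ideas" means mechanically, here is a toy graph traversal. Cognee's actual knowledge graph is far richer; the concept names below are just examples:

```python
# Sketch: a tiny concept graph as an adjacency map, with a bounded
# breadth-first search for "related" concepts. Illustration only.
from collections import defaultdict, deque

def build_graph(edges):
    """Build an undirected adjacency map from (concept, concept) pairs."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def related(graph, start, max_hops=2):
    """Concepts reachable from `start` within `max_hops` edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, hops + 1))
    return seen - {start}

g = build_graph([("waking", "dreaming"), ("dreaming", "deep sleep"),
                 ("deep sleep", "pure consciousness")])
print(sorted(related(g, "waking")))  # neighbors within 2 hops
```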
- Auto-generated Reveal.js slides
- Transcript snippets with timestamps
- Jump to exact moments in video
- Review key points quickly
Navigate 2-hour lectures in 10 minutes.
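As a rough sketch, generating one Reveal.js slide per transcript snippet could look like the function below; the `data-timestamp` attribute and `seekVideo` helper are hypothetical names for illustration, not the project's real template:

```python
# Sketch: emitting a minimal Reveal.js <section> per transcript snippet.
# Attribute and helper names are assumptions, not the actual template.

def make_slide(title: str, snippet: str, seconds: int) -> str:
    """Render one slide that can jump to its moment in the video."""
    return (
        f'<section data-timestamp="{seconds}">\n'
        f"  <h2>{title}</h2>\n"
        f"  <p>{snippet}</p>\n"
        f'  <a href="#" onclick="seekVideo({seconds})">Jump to {seconds}s</a>\n'
        f"</section>"
    )

print(make_slide("Four States", "Waking, dreaming, deep sleep...", 754))
```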
Ask questions like:
- "What did they say about consciousness?"
- "Find the part about the four states"
- "Show me all diagrams"
- "Summarize the main teaching"
AI understands the content, not just the words.
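The real system answers such questions with vector search over the indexed transcript; a toy word-overlap ranker shows the shape of the retrieval step (the chunks below are invented examples):

```python
# Sketch: ranking transcript chunks against a question via Jaccard
# word overlap. Stands in for the actual embedding-based search.

def score(question: str, chunk: str) -> float:
    """Jaccard similarity between the question's and chunk's word sets."""
    q, c = set(question.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q | c) if q | c else 0.0

def search(question: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k chunks most similar to the question."""
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

chunks = [
    "the four states of consciousness are waking dreaming deep sleep and turiya",
    "ffmpeg extracts frames from the video",
]
print(search("what are the four states of consciousness", chunks))
```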
Your videos NEVER leave your computer:
- ✅ 100% local processing: no cloud, no subscriptions
- ✅ Your data stays yours: everything stays on your machine
- ✅ Open source: audit every line of code
- ✅ Offline capable: works without internet
Perfect for sensitive or private content.
- Ollama installed and running
- Whisper.cpp compiled locally
- Python 3.10+
```shell
# Clone
git clone https://github.com/Sensible-Analytics/video_analysis.git
cd video_analysis

# Setup
make setup

# Add your Whisper paths to .env
```

**Step 1: Download a Video**

```shell
make download URL="https://www.youtube.com/watch?v=..."
```

**Step 2: Process & Transcribe**

```shell
make run
```

**Step 3: Launch the Split-Helix UI**

```shell
cd frontend
npm install
npm start
```

**Step 4: Build Knowledge Graph**

```shell
make index
```

**Step 5: Search & Discover**

```shell
make search QUERY="What are the main concepts?"
```

That's it. Deep understanding in minutes.
**Students**
"Upload lecture recordings. Get searchable notes. Study 3x faster."

**Researchers**
"Interview transcripts without manual transcription. Find quotes instantly."

**Philosophy/Spiritual Seekers**
"Deep study of philosophical texts: connect concepts across lectures."

**Online Learners**
"Coursera, YouTube tutorials, conference talks: make them all searchable."
```
Video Input → Whisper.cpp → Transcript
            → FFmpeg      → Visual Frames
                  ↓
            Cognee AI → Knowledge Graph
                  ↓
            Ollama LLM → Insights & Diagrams
                  ↓
            Split-Helix UI → Interactive Experience
```
Privacy-first. Local processing. Deep understanding.
Other video tools:
- Upload to their servers ❌
- Monthly subscription ❌
- Basic transcripts only ❌
- No concept connections ❌
Mandukya AI:
- Runs on YOUR computer ✅
- Free forever ✅
- Knowledge graphs + transcripts ✅
- Connects concepts automatically ✅
This tool was created for deep study of the Mandukya Upanishad: exploring the four states of consciousness (waking, dreaming, deep sleep, pure consciousness).
But it works beautifully for ANY educational content:
- University lectures
- YouTube tutorials
- Conference talks
- Online courses
- Interview recordings
- Whisper.cpp: local speech recognition
- Ollama: local LLM for insights
- Cognee: knowledge graph generation
- FFmpeg: video processing
- Python: backend processing
- React: Split-Helix UI
Sensible Analytics: AI that respects your privacy
Want custom AI learning tools? Let's talk.
Start understanding deeper.
- Ollama installed and running.
- Whisper.cpp compiled locally.
- Python 3.10+
```shell
make setup
```

This will install dependencies and create your `.env` file. Edit `.env` to point to your Whisper binary and model.

```shell
make download URL="https://www.youtube.com/playlist?list=..."
```

```shell
make run                                  # Start the background brain
cd frontend && npm install && npm start  # Launch the Split-Helix UI
```

The `make run` step processes the video and generates Reveal.js slides in the `slides/` directory.

```shell
make index
```

Uses Cognee to extract entities and relationships across all lessons, building your local RDBMS, vector, and graph databases.

```shell
make search QUERY="The four states of consciousness"
```

The system follows a modular "Knowledge Extraction" architecture:
- Perception: Whisper.cpp (Audio -> Text) & FFmpeg (Video -> Frames).
- Memory: Cognee (Text -> RDBMS/Vector/Graph).
- Reasoning: Ollama (Context + Chunk -> Insights/Diagrams).
- Presentation: Reveal.js (Data -> UI).
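The four stages above can be sketched as plain composed functions; the bodies below are stand-ins showing the data flow, not the real implementations:

```python
# Sketch of the Perception -> Memory -> Reasoning -> Presentation flow.
# Every body here is a placeholder; stage names follow the list above.

def perceive(video_path: str) -> dict:
    """Whisper.cpp and FFmpeg would run here (audio -> text, video -> frames)."""
    return {"transcript": f"transcript of {video_path}", "frames": ["frame0.png"]}

def memorize(perception: dict) -> dict:
    """Cognee would index the transcript into RDBMS/vector/graph stores."""
    return {"graph": {"nodes": [perception["transcript"]]}}

def reason(memory: dict) -> dict:
    """Ollama would generate insights from retrieved context."""
    return {"insights": [f"insight about {n}" for n in memory["graph"]["nodes"]]}

def present(reasoning: dict) -> str:
    """Reveal.js slides would be rendered from this data."""
    return "\n".join(reasoning["insights"])

def pipeline(video_path: str) -> str:
    return present(reason(memorize(perceive(video_path))))

print(pipeline("lecture.mp4"))
```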
Created for the study of Mandukya Upanishad.