FunGen is a Python-based tool that uses AI to generate Funscript files from VR and 2D POV videos. It enables fully automated funscript creation for individual scenes or entire folders of videos.
Join the Discord community for discussions and support: Discord Community
This project is still at an early stage of development. It is not intended for commercial use. Please do not use this project for any commercial purposes without prior consent from the author. It is for individual use only.
FunGen now automatically detects your system's display scaling settings (DPI) and adjusts the UI accordingly. This feature works on Windows, macOS, and Linux, ensuring the application looks crisp and properly sized on high-DPI displays.
- Automatically applies the correct font scaling based on your system settings
- Supports Windows display scaling (125%, 150%, etc.)
- Supports macOS Retina displays
- Supports Linux high-DPI configurations
- Can be enabled/disabled in the Settings menu
- Manual detection button available for when you change display settings
Automatic installer that handles everything for you:
- Download: install.bat
- Double-click to run (or run from command prompt)
- Wait for automatic installation of Python, Git, FFmpeg, and FunGen
curl -fsSL https://raw.githubusercontent.com/ack00gar/FunGen-AI-Powered-Funscript-Generator/main/install.sh | bash
The installer automatically:
- Installs Python 3.11 (Miniconda)
- Installs Git and FFmpeg/FFprobe
- Downloads and sets up FunGen AI
- Installs all required dependencies
- Creates launcher scripts for easy startup
- Detects your GPU and optimizes PyTorch installation
That's it! The installer creates launch scripts - just run them to start FunGen.
If you prefer manual installation or need custom configuration:
Before using this project, ensure you have the following installed:
- Git (https://git-scm.com/downloads/), or for Windows users, run 'winget install --id Git.Git -e --source winget' from a command prompt, similar to the easy Miniconda install described below
- FFmpeg added to your PATH or specified under the settings menu (https://www.ffmpeg.org/download.html); a quick PATH check follows this list
- Miniconda (https://www.anaconda.com/docs/getting-started/miniconda/install)
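A quick way to confirm FFmpeg and FFprobe are reachable from your PATH (both tools ship in the standard FFmpeg download):

```bash
# Both should print version/build information; a "not found" error means
# PATH still needs updating (or set the path in FunGen's settings menu).
ffmpeg -version
ffprobe -version
```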
Easy install of Miniconda for Windows users:
Open Command Prompt and run: winget install -e --id Anaconda.Miniconda3
After installing Miniconda, look for a program called "Anaconda Prompt (miniconda3)" in the Start menu (on Windows) and open it.
conda create -n VRFunAIGen python=3.11
conda activate VRFunAIGen
Please note that any pip or python commands related to this project must be run from within the VRFunAIGen virtual environment.
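To double-check that the environment is active before installing anything (standard conda commands, nothing FunGen-specific):

```bash
conda env list       # the active environment is marked with an asterisk
python --version     # should report Python 3.11.x inside VRFunAIGen
```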
Open a command prompt and navigate to the folder where you'd like FunGen to be located. For example, if you want it in C:\FunGen, navigate to C:\ ('cd C:\'). Then run:
git clone --branch main https://github.com/ack00gar/FunGen-AI-Powered-Funscript-Generator.git FunGen
cd FunGen
pip install -r core.requirements.txt
Quick Setup:
- Install NVIDIA Drivers: Download here
- Install CUDA 12.8: Download here
- Install cuDNN for CUDA 12.8: Download here (requires free NVIDIA account)
Install Python Packages:
For 20xx, 30xx and 40xx-series NVIDIA GPUs:
pip install -r cuda.requirements.txt
pip install tensorrt
For 50xx-series NVIDIA GPUs (RTX 5070, 5080, 5090):
pip install -r cuda.50series.requirements.txt
pip install tensorrt
Note: NVIDIA 10xx series GPUs are not supported.
Verify Installation:
nvidia-smi # Check GPU and driver
nvcc --version # Check CUDA version
python -c "import torch; print(torch.cuda.is_available())" # Check PyTorch CUDA
python -c "import torch; print(torch.backends.cudnn.is_available())" # Check cuDNNpip install -r cpu.requirements.txtROCm is supported for AMD GPUs on Linux. To install the required packages, run:
pip install -r rocm.requirements.txt
The necessary YOLO models will be automatically downloaded on the first startup. If you want to use a specific model, you can download it from our Discord and place it in the models/ sub-directory. If you aren't sure, you can add all the models and let the app decide the best option for you.
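If you installed the ROCm requirements, you can verify that PyTorch actually sees your AMD GPU before the first launch; `torch.version.hip` is only populated on ROCm builds:

```bash
python -c "import torch; print(torch.version.hip)"          # HIP/ROCm version, or None
python -c "import torch; print(torch.cuda.is_available())"  # ROCm also reports via the cuda API
```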
python main.py
We support multiple model formats across Windows, macOS, and Linux.
- NVIDIA Cards: we recommend the .engine model
- AMD Cards: we recommend .pt (requires ROCm, see below)
- Mac: we recommend .mlmodel
- .pt (PyTorch): Requires CUDA (for NVIDIA GPUs) or ROCm (for AMD GPUs) for acceleration.
- .onnx (ONNX Runtime): Best for CPU users as it offers broad compatibility and efficiency.
- .engine (TensorRT): For NVIDIA GPUs: provides very significant efficiency improvements (this file needs to be built by running "Generate TensorRT.bat" after adding the base ".pt" model to the models directory; a command-line alternative is sketched below)
- .mlpackage (Core ML): Optimized for macOS users. Runs efficiently on Apple devices with Core ML.
In most cases, the app will automatically detect the best model in your models directory at launch, but if the right model wasn't present at that time, or the right dependencies were not installed, you may need to override it under settings. The same applies when we release a new version of the model.
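If you prefer the command line over "Generate TensorRT.bat", the standard Ultralytics exporter can build the .engine file as well; this is a sketch, and the model filename is a placeholder:

```bash
# Export a base .pt model to a TensorRT engine (requires the tensorrt package);
# run from the FunGen folder so the output lands next to the source model
yolo export model=models/your_model.pt format=engine
```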
Common Issues:
- Driver version mismatch: Ensure NVIDIA drivers are compatible with your CUDA version
- PATH issues: Make sure CUDA bin directory is in your system PATH
- Version conflicts: Ensure all components (driver, CUDA, cuDNN, PyTorch) are compatible versions
Verification Commands:
nvidia-smi # Check GPU and driver
nvcc --version # Check CUDA version
python -c "import torch; print(torch.cuda.is_available())" # Check PyTorch CUDA
python -c "import torch; print(torch.backends.cudnn.is_available())" # Check cuDNNFind the settings menu in the app to configure optional option.
You can use Start windows.bat to launch the GUI on Windows.
FunGen includes an update system that allows you to download and switch between different versions of the application. To use this feature, you'll need to set up a GitHub Personal Access Token. This is optional and only required for the update functionality.
GitHub's API has rate limits:
- Without a token: 60 requests per hour
- With a token: 5,000 requests per hour
This allows FunGen to fetch commit information, changelogs, and version data without hitting rate limits.
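You can check your current quota against GitHub's real rate-limit endpoint; `YOUR_TOKEN_HERE` is a placeholder for the token you create below:

```bash
# Unauthenticated: shows the shared 60 requests/hour budget
curl -s https://api.github.com/rate_limit
# Authenticated: the same endpoint reports the 5,000 requests/hour budget
curl -s -H "Authorization: token YOUR_TOKEN_HERE" https://api.github.com/rate_limit
```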
1. Go to GitHub Settings:
   - Visit GitHub Settings
   - Sign in to your GitHub account
2. Navigate to Developer Settings:
   - Click your GitHub avatar (top right) → "Settings"
   - Scroll down to the bottom left of the Settings page
   - Click "Developer settings" in the left menu list
3. Create a Personal Access Token:
   - Click "Personal access tokens" → "Tokens (classic)"
   - Click "Generate new token" → "Generate new token (classic)"
4. Confirm Access:
   - If you have set up 2FA, you will be prompted to enter it
   - If you have not yet set up 2FA, you will be prompted to do so
5. Configure the Token:
   - Note: Give it a descriptive name like "FunGen Updates"
   - Expiration: Choose an appropriate expiration (30 days, 60 days, etc.)
   - Scopes: Select only these scopes:
     - `public_repo` (to read public repository information)
     - `read:user` (to read your user information for validation)
6. Generate and Copy:
   - Click "Generate token"
   - Important: Copy the token immediately - you won't be able to see it again!
- Open FunGen and go to the Updates menu
- Click "Select Update Commit"
- Go to the "GitHub Token" tab
- Paste your token in the text field
- Click "Test Token" to verify it works
- Click "Save Token" to store it
The GitHub token enables these features in FunGen:
- Version Selection: Browse and download specific commits from the `main` branch
- Changelog Display: View detailed changes between versions
- Update Notifications: Check for new versions and updates
- Rate Limit Management: Avoid hitting GitHub's API rate limits
- The token is stored locally in `github_token.ini`
- Only `public_repo` and `read:user` permissions are required
- The token is used only for reading public repository data
- You can revoke the token anytime from your GitHub settings
FunGen can be run in two modes: a graphical user interface (GUI) or a command-line interface (CLI) for automation and batch processing.
To start the GUI, simply run the script without any arguments:
python main.py
To use the CLI mode, you must provide an input path to a video or a folder.
To generate a script for a single video with default settings:
python main.py "/path/to/your/video.mp4"To process an entire folder of videos recursively using a specific mode and overwrite existing funscripts:
python main.py "/path/to/your/folder" --mode <your_mode> --overwrite --recursive| Argument | Short | Description |
|---|---|---|
input_path |
Required for CLI mode. Path to a single video file or a folder containing videos. | |
--mode |
Sets the processing mode. The available modes are discovered dynamically. | |
--od-mode |
Sets the oscillation detector mode to use in Stage 3. Choices: current, legacy. Default is current. |
|
--overwrite |
Forces the app to re-process and overwrite any existing funscripts. By default, it skips videos that already have a funscript. | |
--no-autotune |
Disables the automatic application of Ultimate Autotune after generation. | |
--no-copy |
Prevents saving a copy of the final funscript next to the video file. It will only be saved in the application's output folder. | |
--recursive |
-r |
If the input path is a folder, this flag enables scanning for videos in all its subdirectories. |
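Putting the flags together, a typical unattended batch run might look like this (paths and mode name are placeholders):

```bash
# Recursively re-process a whole library, overwriting old scripts and keeping
# output only in the app's output folder; combine flags from the table as needed
python main.py "/path/to/your/library" --mode <your_mode> --overwrite --recursive --no-copy
```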
FunGen features a modular architecture for both funscript filtering and motion tracking, allowing for easy extension and customization.
The funscript filter system allows you to apply a variety of transformations to your generated funscripts. These can be chained together to achieve complex effects.
- Amplify: Amplifies or reduces position values around a center point.
- Autotune SG: Automatically finds optimal Savitzky-Golay filter parameters.
- Clamp: Clamps all positions to a specific value.
- Invert: Inverts position values (0 becomes 100, etc.).
- Keyframes: Simplifies the script to significant peaks and valleys.
- Resample: Resamples the funscript at regular intervals while preserving peak timing.
- Simplify (RDP): Simplifies the funscript by removing redundant points using the RDP algorithm.
- Smooth (SG): Applies a Savitzky-Golay smoothing filter (see the sketch after this list).
- Speed Limiter: Limits speed and adds vibrations for hardware device compatibility.
- Threshold Clamp: Clamps positions to 0/100 based on thresholds.
- Ultimate Autotune: A comprehensive 7-stage enhancement pipeline.
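For intuition, the Smooth (SG) filter is a standard Savitzky-Golay filter; this standalone sketch uses scipy's reference implementation on a toy 0-100 position series (window length 7 and polynomial order 3 are arbitrary illustration values, not FunGen's defaults):

```bash
# Smooth a toy position series the way a Savitzky-Golay filter would (requires scipy)
python -c "from scipy.signal import savgol_filter; print(savgol_filter([0,20,80,100,60,30,0,10,90,100,50], 7, 3).round(1))"
```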
The tracker system is responsible for analyzing the video and generating the raw motion data. Trackers are organized into categories based on their functionality.
These trackers process the video in real-time.
- Hybrid Intelligence Tracker: A multi-modal approach combining frame differentiation, optical flow, YOLO detection, and oscillation analysis.
- Oscillation Detector (Experimental 2): A hybrid approach combining experimental timing precision with legacy amplification and signal conditioning.
- Oscillation Detector (Legacy): The original oscillation tracker with cohesion analysis and superior amplification.
- Relative Distance Tracker: An optimized high-performance tracker with vectorized operations and intelligent caching.
- User ROI Tracker: A manual ROI definition with optical flow tracking and optional sub-tracking.
- YOLO ROI Tracker: Automatic ROI detection using YOLO object detection with optical flow tracking.
These trackers process the video in stages for higher accuracy.
- Contact Analysis (2-Stage): Offline contact detection and analysis using YOLO detection results.
- Mixed Processing (3-Stage): A hybrid approach using Stage 2 signals and selective live ROI tracking for BJ/HJ chapters.
- Optical Flow Analysis (3-Stage): Offline optical flow tracking using live tracker algorithms on Stage 2 segments.
These trackers are in development and may not be as stable as the others.
- Enhanced Axis Projection Tracker: A production-grade motion tracking system with multi-scale analysis, temporal coherence, and adaptive thresholding.
- Working Axis Projection Tracker: A simplified but reliable motion tracking with axis projection.
- Beat Marker (Visual/Audio): Generates actions from visual brightness changes, audio beats, or metronome.
- DOT Marker (Manual Point): Tracks a manually selected colored dot/point on screen.
- Community Example Tracker: A template tracker showing basic motion detection and funscript generation.
Our pipeline's current bottleneck lies in the Python code within YOLO.track (the object detection library we use), which is challenging to parallelize effectively in a single process.
However, when you have high-performance hardware, you can use the command line (see above) to process multiple videos simultaneously; a minimal parallel-run sketch follows the considerations below. Alternatively, you can launch multiple instances of the GUI.
We tested speeds of about 60 to 110 fps for 8K 8-bit VR videos when running a single process, which already translates to faster-than-realtime processing. Running in parallel, however, we measured about 160 to 190 frames per second (for object detection), meaning processing times of about 20 to 30 minutes for 8-bit 8K VR videos for the complete process. More than twice realtime speed!
Keep in mind your results may vary, as this is very dependent on your hardware. CUDA-capable cards will have an advantage here. However, since the pipeline is largely CPU- and video-decode-bottlenecked, a top-of-the-line card like the 4090 is not required to get similar results. Having enough VRAM to run 3-6 processes, paired with a good CPU, will speed things up considerably though.
Important considerations:
- Each instance requires the YOLO model to load which means you'll need to keep checks on your VRAM to see how many you can load.
- The optimal number of instances depends on a combination of factors, including your CPU, GPU, RAM, and system configuration. So experiment with different setups to find the ideal configuration for your hardware! 😊
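A minimal sketch of a parallel run from a Unix shell, assuming your library is split across two folders (paths are placeholders); each backgrounded instance loads its own YOLO model, so watch VRAM with nvidia-smi as noted above:

```bash
# Launch two independent CLI instances and wait for both to finish
python main.py "/videos/batch1" --recursive &
python main.py "/videos/batch2" --recursive &
wait
```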
- For VR, only SBS (side-by-side) fisheye and equirectangular 180° videos are supported at the moment
- 2D POV videos are supported but work best when they are centered properly
- 2D / VR is automatically detected, as are fisheye / equirectangular and FOV (make sure you keep the file format information in the filename: _FISHEYE190, _MKX200, _LR_180, etc.); see the renaming example after this list
- Detection settings can also be overridden in the UI if the app doesn't detect them properly
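For example, if a fisheye 190° video is missing its tag, a simple rename lets the auto-detection pick it up (filename is a placeholder, tag taken from the list above):

```bash
mv "my_vr_scene.mp4" "my_vr_scene_FISHEYE190.mp4"
```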
The script generates the following files in a dedicated subfolder within your specified output directory:
- `_preprocessed.mkv`: A standardized video file used by the analysis stages for reliable frame processing.
- `.msgpack`: Raw YOLO detection data from Stage 1. Can be re-used to accelerate subsequent runs.
- `_stage2_overlay.msgpack`: Detailed tracking and segmentation data from Stage 2, used for debugging and visualization.
- `_t1_raw.funscript`: The raw, unprocessed funscript generated by the analysis before any enhancements are applied.
- `.funscript`: The final, post-processed funscript file for the primary (up/down) axis.
- `.roll.funscript`: The final funscript file for the secondary (roll/twist) axis, generated in 3-stage mode.
- `.fgp` (FunGen Project): A project file containing all settings, chapter data, and paths related to the video.
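A funscript is plain JSON with an `actions` list of `{at: milliseconds, pos: 0-100}` points, so you can peek at a generated result with a one-liner (filename is a placeholder):

```bash
# Print the first five actions of the final script
python -c "import json; print(json.load(open('video.funscript'))['actions'][:5])"
```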
The pipeline for generating Funscript files is as follows:
- YOLO Object Detection: A YOLO model detects relevant objects (e.g., penis, hands, mouth, etc.) in each frame of the video.
- Tracking and Segmentation: A custom tracking algorithm processes the YOLO detections to identify and segment continuous actions and interactions over time.
- Funscript Generation: Based on the mode (2-stage, 3-stage, etc.), the tracked data is used to generate a raw Funscript file.
- Post-Processing: The raw Funscript is enhanced with features like Ultimate Autotune to smooth motion, normalize intensity, and improve the overall quality of the final `.funscript` file.
This project started as a dream to automate Funscript generation for VR videos. Here’s a brief history of its development:
- Initial Approach (OpenCV Trackers): The first version relied on OpenCV trackers to detect and track objects in the video. While functional, the approach was slow (8–20 FPS) and struggled with occlusions and complex scenes.
- Transition to YOLO: To improve accuracy and speed, the project shifted to using YOLO object detection. A custom YOLO model was trained on a dataset of thousands of annotated VR video frames, significantly improving detection quality.
- Original Post: For more details and discussions, check out the original post on EroScripts: VR Funscript Generation Helper (Python + CV/AI)
Contributions are welcome! If you'd like to contribute, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Commit your changes.
- Submit a pull request.
This project is licensed under the Non-Commercial License. You are free to use the software for personal, non-commercial purposes only. Commercial use, redistribution, or modification for commercial purposes is strictly prohibited without explicit permission from the copyright holder.
This project is not intended for commercial use, nor for generating and distributing its output in a commercial environment.
For commercial use, please contact me.
See the LICENSE file for full details.
- YOLO: Thanks to the Ultralytics team for the YOLO implementation.
- FFmpeg: For video processing capabilities.
- Eroscripts Community: For the inspiration and use cases.
If you see [unknown@unknown] in the application logs or git errors like "returned non-zero exit status 128":
Cause: The installer was run with administrator privileges, causing git permission/ownership issues.
Solution 1 - Fix git permissions:
cd "C:\path\to\your\FunGen\FunGen"
git config --add safe.directory .
Solution 2 - Reinstall as normal user:
- Redownload `install.bat`
- Run it as a normal user (NOT as administrator)
- Use the launcher script created by the installer instead of `python main.py`
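After either solution, these standard git commands should run from the FunGen folder without the exit status 128 error:

```bash
git status            # should report the current branch cleanly
git log -1 --oneline  # should print the latest commit
```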
If you get "ffmpeg/ffprobe not found" errors:
- Use the launcher script (`launch.bat` or `launch.sh`) instead of running `python main.py` directly
- Rerun the installer to get updated launcher scripts with FFmpeg PATH fixes
- The launcher automatically adds FFmpeg to PATH
- Always use launcher scripts - Don't run `python main.py` directly
- Run installer as normal user - Avoid administrator mode
- Rerun installer for updates - Get latest fixes by rerunning the installer
- Check working directory - Make sure you're in the FunGen project folder
If you encounter any issues or have questions, please open an issue on GitHub.
Join the Discord community for discussions and support: Discord Community