PufferDrive

PufferDrive is a fast and friendly driving simulator to train and test RL-based models.


Installation

Clone the repo

git clone https://github.com/Emerge-Lab/PufferDrive.git

Create a virtual environment (uv venv) and activate it

source .venv/bin/activate

Inside the venv, install the dependencies

uv pip install -e .

Compile the C code

python setup.py build_ext --inplace --force

To test your setup, you can run

puffer train puffer_drive

See also the puffer docs.

Quick start

Start a training run

puffer train puffer_drive

Dataset

Downloading and using data

Data preparation

To train with PufferDrive, you need to convert JSON files to map binaries. Run the following command with the path to your data folder:

python pufferlib/ocean/drive/drive.py

Downloading Waymo Data

You can download the WOMD data from Hugging Face in two versions: a mini dataset (GPUDrive_mini) and the full dataset (GPUDrive).

Note: Replace 'GPUDrive_mini' with 'GPUDrive' in your download commands if you want to use the full dataset.
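As a hedged sketch, a snapshot of a Hugging Face dataset can be fetched with the huggingface_hub library. The repo id and output directory below are assumptions for illustration, not the project's confirmed dataset location; check the PufferDrive page for the actual path.

```python
# Hypothetical download helper using huggingface_hub.
# The "EMERGE-lab/..." repo id and the output directory are assumptions.
from huggingface_hub import snapshot_download


def download_womd(full: bool = False, out_dir: str = "data/womd") -> str:
    # 'GPUDrive_mini' vs 'GPUDrive' mirrors the note above.
    repo = "EMERGE-lab/" + ("GPUDrive" if full else "GPUDrive_mini")
    return snapshot_download(repo_id=repo, repo_type="dataset", local_dir=out_dir)
```

Calling download_womd(full=True) would fetch the full dataset; the default fetches the mini version.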

Additional Data Sources

For more training data compatible with PufferDrive, see ScenarioMax. The GPUDrive data format is fully compatible with PufferDrive.

Visualizer

Dependencies and usage

Headless server setup

Run the Raylib visualizer on a headless server and export the result as an .mp4. This rolls out the pre-trained policy in the environment.

Install dependencies

sudo apt update
sudo apt install ffmpeg xvfb

On HPC systems without root privileges, install into your conda environment instead:

conda install -c conda-forge xorg-x11-server-xvfb-cos6-x86_64
conda install -c conda-forge ffmpeg
  • ffmpeg: Video processing and conversion
  • xvfb: Virtual display for headless environments

Build and run

  1. Build the application:
bash scripts/build_ocean.sh visualize local
  2. Run with virtual display:
xvfb-run -s "-screen 0 1280x720x24" ./visualize

The -s flag sets up a virtual screen at 1280x720 resolution with 24-bit color depth.


To force a rebuild, delete the cached executable with rm ./visualize.


Benchmarks

Distributional realism

We provide a PufferDrive implementation of the Waymo Open Sim Agents Challenge (WOSAC) for fast, easy evaluation of how well your trained agent matches distributional properties of human behavior. See details here.
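WOSAC scores how closely the statistics of your agent's rollouts match logged human driving. As a toy illustration of that idea (not the WOSAC metric itself, and not PufferDrive code), one can compare an empirical statistic such as speed between rollouts and logs:

```python
# Toy sketch of distributional realism: compare the speed distribution of
# policy rollouts against logged human speeds. All numbers here are synthetic.
import random


def wasserstein_1(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)


random.seed(0)
human_speeds = [random.gauss(12.0, 3.0) for _ in range(1000)]  # logged drivers
agent_speeds = [random.gauss(12.5, 3.5) for _ in range(1000)]  # policy rollouts

print(f"speed-distribution gap: {wasserstein_1(human_speeds, agent_speeds):.2f} m/s")
```

A smaller gap means the rollout statistic is closer to the human distribution; the actual WOSAC evaluation uses likelihood-based metrics over several such statistics.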

WOSAC evaluation with random policy:

puffer eval puffer_drive --eval.wosac-realism-eval True
  • Small clean eval dataset. A clean validation set with 229 scenarios can be downloaded here.
  • Large eval dataset. [TODO]

WOSAC evaluation with your checkpoint (must be .pt file):

puffer eval puffer_drive --eval.wosac-realism-eval True --load-model-path <your-trained-policy>.pt

Human-compatibility

You may be interested in how compatible your agent is with human partners. For this purpose, we support an eval where your policy controls only the self-driving car (SDC), while the other agents in the scene are stepped from the logs. It is not a perfect eval, since the replayed partners do not react to your agent, but it still gives a sense of how closely your agent's behavior aligns with how people drive. You can run it like this:

puffer eval puffer_drive --eval.human-replay-eval True --load-model-path <your-trained-policy>.pt
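The replay setup above can be sketched as a toy loop (this is not PufferDrive's API; the 1-D positions, the policy signature, and the final-gap score are illustrative assumptions):

```python
# Toy sketch of human-replay eval: the policy drives only the SDC while every
# other agent replays its logged trajectory step by step.
def replay_eval(policy, logged_traj, sdc_id, n_steps):
    # logged_traj[agent_id][t] -> 1-D position at step t
    state = {aid: traj[0] for aid, traj in logged_traj.items()}
    for t in range(1, n_steps):
        state[sdc_id] += policy(state)          # SDC follows the policy
        for aid, traj in logged_traj.items():   # everyone else replays logs
            if aid != sdc_id:
                state[aid] = traj[t]
    # crude compatibility score: final gap between SDC and its logged position
    return abs(state[sdc_id] - logged_traj[sdc_id][n_steps - 1])


logs = {"sdc": [0, 1, 2, 3], "other": [5, 5, 5, 5]}
gap = replay_eval(lambda s: 1, logs, "sdc", 4)  # constant unit-speed policy
print(gap)  # 0 -> this policy matches the logged SDC exactly
```

A lower score here just means the policy tracks the logged SDC; the real eval reports richer driving metrics, but the control flow, policy for the SDC and log replay for everyone else, is the same.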
