Warning
The main branch of this repository contains work-in-progress development code for an upcoming release, and is not guaranteed to be stable or working.
For the latest stable release 👉 Releases
The Loss Prevention Pipeline System is an open-source reference implementation for building and deploying video analytics pipelines for retail use cases:
- Loss Prevention
- Automated Self Checkout
- User Defined Workloads
It leverages Intel® hardware and software, GStreamer, and OpenVINO™ to enable scalable, real-time object detection and classification at the edge.
The system includes an integrated RTSP server (MediaMTX) that streams video files for testing and development:
- RTSP Server Container (`rtsp-streamer`):
  - Automatically starts the MediaMTX server on port 8554
  - Streams all `.mp4` files from `performance-tools/sample-media/`
  - Each video becomes an RTSP stream: `rtsp://rtsp-streamer:8554/<video-name>`
- Pipeline Consumption:
  - GStreamer pipelines connect via the `rtspsrc` element
  - Supports TCP transport with configurable latency
  - Automatic retry and timeout handling
- Stream Naming Convention:
  - Video: `items-in-basket-32658421-1080-15-bench.mp4`
  - Stream: `rtsp://rtsp-streamer:8554/items-in-basket-32658421-1080-15-bench`
- Loop Playback: Videos restart automatically when finished
- TCP Transport: Reliable streaming over corporate networks
- Low Latency: Default 200 ms latency for real-time processing
- Multiple Streams: Supports concurrent camera streams
- Proxy Support: Works through corporate HTTP/HTTPS proxies
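Putting the naming convention together, a stream URL can be derived from a video filename by stripping the directory and the `.mp4` extension. A minimal sketch using the example file above (the commented `gst-launch-1.0` line shows how a consumer might connect via `rtspsrc`; it is illustrative, not the project's exact pipeline):

```shell
# Derive the RTSP stream URL for a video file, per the naming convention above:
# drop the directory and the .mp4 extension to get the stream name.
VIDEO_PATH="performance-tools/sample-media/items-in-basket-32658421-1080-15-bench.mp4"
STREAM_NAME="$(basename "$VIDEO_PATH" .mp4)"
STREAM_URL="rtsp://rtsp-streamer:8554/${STREAM_NAME}"
echo "$STREAM_URL"

# A GStreamer consumer could then connect with TCP transport and 200 ms latency:
# gst-launch-1.0 rtspsrc location="$STREAM_URL" protocols=tcp latency=200 ! fakesink
```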
- Ubuntu 24.04 or newer (Linux recommended), Desktop edition (or Server edition with a GUI installed)
- Make (`sudo apt install make`)
- Python 3 (`sudo apt install python3`) - required for the video download and validation scripts
- Intel hardware (CPU, iGPU, dGPU, NPU)
- Intel drivers
- Sufficient disk space for models, videos, and results
- For Corporate Networks with Proxy:

  ```shell
  # HTTP/HTTPS proxy settings
  export HTTP_PROXY=<HTTP PROXY>
  export HTTPS_PROXY=<HTTPS PROXY>
  export NO_PROXY=localhost,127.0.0.1,rabbitmq,minio-service,rtsp-streamer
  ```
- Optional RTSP Configuration:

  ```shell
  # RTSP server configuration (defaults shown)
  export RTSP_STREAM_HOST=rtsp-streamer                   # Hostname of the RTSP server
  export RTSP_STREAM_PORT=8554                            # RTSP port
  export RTSP_MEDIA_DIR=../performance-tools/sample-media # Video source directory
  export STREAM_LOOP=false                                # Set to 'true' to loop video streams indefinitely
  ```
- Clone the repo with the command below, replacing `<release-or-tag>` with the version you want to clone (for example, `v4.0.0`):

  ```shell
  git clone -b <release-or-tag> --single-branch https://github.com/intel-retail/loss-prevention
  ```

  For example:

  ```shell
  git clone -b v4.0.0 --single-branch https://github.com/intel-retail/loss-prevention
  ```
Important
Default Settings
- Runs with pre-built images.
- Headless mode is enabled.
- Default workload: Loss Prevention (CPU)
- To learn more about the available default and preconfigured workloads 👉 Workloads
- Run the application

  Headless Mode

  ```shell
  make run-lp
  ```

  Visual Mode

  ```shell
  RENDER_MODE=1 DISPLAY=:0 make run-lp
  ```
💡 On the first execution, it will take some time to download the videos, models, and Docker images.
Automated Self Checkout: Object Detection (CPU)

Headless Mode

```shell
make run-lp CAMERA_STREAM=camera_to_workload_asc_object_detection.json WORKLOAD_DIST=workload_to_pipeline_asc_object_detection_cpu.json
```

Visual Mode

```shell
make run-lp CAMERA_STREAM=camera_to_workload_asc_object_detection.json WORKLOAD_DIST=workload_to_pipeline_asc_object_detection_cpu.json RENDER_MODE=1 DISPLAY=:0
```

Automated Self Checkout: Age Verification (GPU)

Headless Mode

```shell
make run-lp CAMERA_STREAM=camera_to_workload_asc_age_verification.json WORKLOAD_DIST=workload_to_pipeline_asc_age_verification_gpu.json
```

Visual Mode

```shell
make run-lp CAMERA_STREAM=camera_to_workload_asc_age_verification.json WORKLOAD_DIST=workload_to_pipeline_asc_age_verification_gpu.json RENDER_MODE=1 DISPLAY=:0
```

Automated Self Checkout: Object Detection + Classification (GPU)

Headless Mode

```shell
make run-lp CAMERA_STREAM=camera_to_workload_asc_object_detection_classification.json WORKLOAD_DIST=workload_to_pipeline_asc_object_detection_classification_gpu.json
```

Visual Mode

```shell
make run-lp CAMERA_STREAM=camera_to_workload_asc_object_detection_classification.json WORKLOAD_DIST=workload_to_pipeline_asc_object_detection_classification_gpu.json RENDER_MODE=1 DISPLAY=:0
```
Important
For more Automated Self Checkout Workloads, 👉 Loss Prevention Documentation Guide
What to Expect
- Visual Mode
  - A video window opens showing the retail video with detection overlays
  - Note: the pipeline runs until the video completes
- Visual and Headless Mode
  - Verify the output files:
    - `<loss-prevention-workspace>/results/pipeline_stream*.log` - FPS metrics (one value per line)
    - `<loss-prevention-workspace>/results/gst-launch_*.log` - Full GStreamer output
  - ✅ Success: the files exist and contain content. ❌ Failure: files are missing or empty.
  - In case of failure 👉 Troubleshooting
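Since `pipeline_stream*.log` holds one FPS value per line, a quick sanity check is to average those values. A minimal sketch; it uses a temporary sample log here, so point `LOG` at a real `results/pipeline_stream*.log` file in practice:

```shell
# Average the FPS values in a pipeline log (one numeric value per line).
LOG="$(mktemp)"
printf '%s\n' 14.8 15.2 15.0 > "$LOG"   # stand-in for real FPS values
AVG_FPS="$(awk '{ sum += $1; n++ } END { if (n) printf "%.1f", sum / n }' "$LOG")"
echo "Average FPS: $AVG_FPS"
rm -f "$LOG"
```

An empty file produces no output, which itself signals the ❌ "no content in files" failure case above.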
Stop the application

```shell
make down-lp
```

Important
For a comprehensive and advanced guide, 👉 Loss Prevention Documentation Guide
```shell
# Download the models
make download-models REGISTRY=false

# Update the GitHub performance-tools submodule
make update-submodules REGISTRY=false

# Download sample videos used by the performance tools
make download-sample-videos REGISTRY=false

# Run the LP application in visual mode
make run-render-mode DISPLAY=:0 REGISTRY=false RENDER_MODE=1
```

or

```shell
# Run the LP application in headless mode
make run REGISTRY=false
```

- Or simply:

  Visual Mode

  ```shell
  make run-lp DISPLAY=:0 REGISTRY=false RENDER_MODE=1
  ```

  Headless Mode

  ```shell
  make run-lp REGISTRY=false
  ```

Important
Set the bash environment variables below:

```shell
# MinIO credentials (object storage)
export MINIO_ROOT_USER=<your-minio-username>
export MINIO_ROOT_PASSWORD=<your-minio-password>

# RabbitMQ credentials (message broker)
export RABBITMQ_USER=<your-rabbitmq-username>
export RABBITMQ_PASSWORD=<your-rabbitmq-password>

# Hugging Face token (required for gated models)
# Generate a token from: https://huggingface.co/settings/tokens
export GATED_MODEL=true
export HUGGINGFACE_TOKEN=<your-huggingface-token>
```

- Run the workload:

  ```shell
  make run-lp CAMERA_STREAM=camera_to_workload_vlm.json STREAM_LOOP=false
  ```
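Because the VLM workload needs all of these credentials, it can help to verify they are set before invoking `make run-lp`. A minimal guard sketch; the variable names come from the list above, and `require_env` is a helper defined here, not part of the project:

```shell
# require_env NAME... prints the names of any unset or empty variables.
require_env() {
  missing=""
  for name in "$@"; do
    eval "val=\${$name:-}"            # indirect lookup of the variable by name
    [ -n "$val" ] || missing="$missing $name"
  done
  printf '%s' "$missing"
}

# Check the credentials needed by the VLM workload before running make run-lp:
MISSING="$(require_env MINIO_ROOT_USER MINIO_ROOT_PASSWORD RABBITMQ_USER RABBITMQ_PASSWORD HUGGINGFACE_TOKEN)"
if [ -n "$MISSING" ]; then
  echo "Set these variables before running:$MISSING" >&2
fi
```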
The application is highly configurable via JSON files in the `configs/` directory and the environment variables `CAMERA_STREAM` and `WORKLOAD_DIST`.
For more details, please refer to Pre-Configured Workloads.
By default, the configuration uses the Loss Prevention (CPU) workload.
```shell
make benchmark
```

- See the benchmarking results:

  ```shell
  make consolidate-metrics
  cat benchmark/metrics.csv
  ```
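Once consolidated, `benchmark/metrics.csv` can be post-processed with standard tools. A sketch, assuming a header row and a numeric `fps`-style column; the column name and sample rows here are illustrative, not the tool's guaranteed schema:

```shell
# Average a numeric column of a metrics CSV, located by its header name.
# Sample data stands in for benchmark/metrics.csv; the "fps" column is assumed.
CSV="$(mktemp)"
cat > "$CSV" <<'EOF'
pipeline,device,fps
stream0,CPU,14.5
stream1,CPU,15.5
EOF
AVG="$(awk -F, 'NR==1 { for (i = 1; i <= NF; i++) if ($i == "fps") c = i; next }
                { sum += $c; n++ } END { if (n) printf "%.1f", sum / n }' "$CSV")"
echo "Average fps: $AVG"
rm -f "$CSV"
```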
Important
For Advanced Benchmark settings, 👉 Benchmarking Guide
- `configs/` — Configuration files (camera/workload mapping, pipeline mapping)
- `docker/` — Dockerfiles for downloader and pipeline containers
- `docs/` — Documentation (HLD, LLD, system design)
- `download-scripts/` — Scripts for downloading models and videos
- `src/` — Main source code and pipeline runner scripts
- `src/rtsp-streamer/` — RTSP server container (MediaMTX + FFmpeg)
- `src/gst-pipeline-generator.py` — Dynamic GStreamer pipeline generator
- `src/docker-compose.yml` — Multi-container orchestration
- `performance-tools/sample-media/` — Video files for RTSP streaming
- `Makefile` — Build automation and workflow commands
The application runs the following Docker containers:
| Service | Purpose | Port | Notes |
|---|---|---|---|
| `rtsp-streamer` | RTSP video streaming server | 8554 | Streams videos from `sample-media` |
| `rabbitmq` | Message broker for VLM workload | 5672, 15672 | Requires credentials |
| `minio-service` | Object storage for frames | 4000, 4001 | S3-compatible storage |
| `model-downloader` | Downloads AI models | - | Runs once at startup |
| `lp-vlm-workload-handler` | VLM inference processor | - | GPU/CPU inference |
| `vlm-pipeline-runner` | VLM pipeline orchestrator | - | Requires `DISPLAY` variable |
| `lp-pipeline-runner` | Main inference pipeline | - | Supports CPU/GPU/NPU |
Network Configuration:
- All services run on the `my_network` bridge network for DNS resolution
- Use `rtsp-streamer`, `rabbitmq`, and `minio-service` as hostnames for inter-service communication
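Inside the `my_network` bridge, services therefore address each other by container name rather than IP. A sketch of the endpoint URLs a container might construct (the split of MinIO's two ports between S3 API and console is an assumption based on the table above):

```shell
# Service endpoints as seen from inside the my_network bridge network.
RTSP_ENDPOINT="rtsp://rtsp-streamer:8554"     # RTSP streams
AMQP_ENDPOINT="amqp://rabbitmq:5672"          # RabbitMQ message broker
MINIO_ENDPOINT="http://minio-service:4000"    # assumed S3 API port (4001 assumed console)
echo "$RTSP_ENDPOINT $AMQP_ENDPOINT $MINIO_ENDPOINT"
```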
- Make Commands
  - `make validate-all-configs` — Validate all configuration files
  - `make clean-images` — Remove dangling Docker images
  - `make clean-containers` — Remove stopped containers
  - `make clean-all` — Remove all unused Docker resources
- Known Issues
- On EMT OS, containers built on Alpine base images (e.g., MinIO) may report as unhealthy even though the service functions normally: Docker health checks fail with OCI runtime errors, preventing proper container orchestration and monitoring.