The Context-Aware, Cross-Selling Digital Signage application is a fully containerized, end-to-end solution that spans product detection through dynamic advertisement generation, built on Intel® AI and media pipelines. The architecture follows a microservices design pattern and enables optimized edge deployment on Intel® platforms in retail and similar environments.
Key Features:
- Product Provisioning: Configure product items with predefined advertisements, slogans, promotional offers, and pricing
- Context-Aware Detection: Real-time product identification from video streams (file-based or RTSP camera input)
- Cross-Sell & Up-Sell Recommendations: Intelligent suggestion logic based on detected products and contextual data
- Hybrid Advertisement Display: Serves predefined ads for provisioned products or generates dynamic ads via generative AI when not provisioned
- Interactive Web Interface: Live video stream visualization with near real-time advertisement rendering
- Containerized Deployment: Single-node deployment via Docker Compose
- Intel® Optimized AI Pipeline:
  - DL Streamer Pipeline Server for video frame ingestion and analytics
  - OpenVINO™ GenAI for generating dynamic advertisements based on the detected item
System Requirements:
- CPU: Intel® Core™ Ultra (Series 2) processor or newer with at least 16 cores
- RAM: 16 GB minimum (32 GB recommended)
- Disk: 500 GB free space
- GPU: Intel® iGPU
- OS: Ubuntu 24.04 LTS
- Docker: Docker Engine 24.x+, Docker Compose v2
- Network: Access to internet for model downloads and container pulls
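As a quick sanity check against the requirements above, a small Linux-only snippet (standard coreutils, no extra tooling) can report the host's resources:

```shell
# Report key host resources against the documented minimums
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail . | tail -1 | tr -dc '0-9')
echo "CPU cores: $cores (minimum 16)"
echo "RAM: ${mem_gb} GB (minimum 16, 32 recommended)"
echo "Free disk: ${disk_gb} GB (minimum 500)"
```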
The solution is composed of the following main components:
- Product Identification (PID):
  - Detects products in video streams using DL Streamer Pipeline Server and YOLO models (these can be replaced with Geti-trained models).
  - Publishes detection results via MQTT.
- Advertise Image Generator (AIG):
  - Generates custom advertisements using generative AI (Stable Diffusion XL Turbo, MiniLM, etc.).
  - Supports logo, slogan, and price overlays.
- Advertise Searcher (ASe):
  - Retrieves and ranks relevant ads based on detected products and context.
  - Uses ChromaDB for vector search.
- Web UI:
  - Provides a browser-based interface for video and ad display.
  - Integrates with AIG and PID for real-time updates.
Supporting Services:
- MediaMTX: WebRTC streaming relay
- Mosquitto: MQTT broker for inter-service communication
- ChromaDB: Vector database for ad search
- COTURN: TURN server for WebRTC
High-Level Architecture:
```
digital-signage/
├── aig/          # Advertise Image Generator (AIG) microservice
├── pid/          # Product Identification (PID) microservice
├── web-ui/       # Web-based user interface
├── diagrams/     # Architecture diagrams
├── docker-compose.yml
├── Makefile
├── .env          # Main environment configuration
└── ...
```
```shell
git clone https://github.com/intel-retail/digital-signage
cd digital-signage
```

NOTE: Run all commands as a regular (non-root) user, without using `sudo`.
Please review the YOLO11s license.
```shell
cd pid && \
wget https://raw.githubusercontent.com/intel-retail/automated-self-checkout/v3.6.3/download_models/downloadAndQuantizeModel.sh && \
sed -i 's|MODELS_PATH="${MODELS_DIR:-/workspace/models}"|MODELS_PATH="${MODELS_DIR:-$PWD/models}"|g' downloadAndQuantizeModel.sh && \
sed -i 's/MODEL_NAME="yolo11n"/MODEL_NAME="yolo11s"/g' downloadAndQuantizeModel.sh && \
rm -rf .modelenv && \
python3 -m venv .modelenv && \
source .modelenv/bin/activate && \
pip3 install -r model_download_requirements.txt && \
rm -rf models && \
chmod +x downloadAndQuantizeModel.sh && \
./downloadAndQuantizeModel.sh && \
rm ./downloadAndQuantizeModel.sh && \
deactivate && \
cd ..
```

The quantized model will be saved to `./pid/models/object_detection/yolo11s`.
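The download step can be verified with a small check that the quantized model directory exists (the path is the one noted above):

```shell
# Confirm the quantized YOLO11s model was written where PID expects it
model_dir=./pid/models/object_detection/yolo11s
if [ -d "$model_dir" ]; then
  echo "Model directory found:"
  ls "$model_dir"
else
  echo "Model directory missing: $model_dir" >&2
fi
```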
Please review the SDXL-Turbo license.
```shell
cd aig && \
rm -rf .modelenv && \
python3 -m venv .modelenv && \
source ./.modelenv/bin/activate && \
pip3 install -r export-requirements.txt && \
export HF_HUB_ENABLE_HF_TRANSFER=1 && \
optimum-cli export openvino --model stabilityai/sdxl-turbo --task stable-diffusion-xl --weight-format int8 ./models/sdxl_turbo_ov/int8 && \
huggingface-cli download sentence-transformers/all-MiniLM-L12-v2 --local-dir ./models/all-MiniLM-L12-v2 && \
deactivate && \
cd ..
```

Models will be downloaded to `./aig/models/`.
```shell
make build
```

- Edit the `.env` file and configure the following variables (refer to the comments in the file for additional guidance):
  - `HOST_IP`: Specify the host system IP address.
  - `MTX_WEBRTCICESERVERS2_0_USERNAME`: Set a username with a minimum of 5 alphabetic characters.
  - `MTX_WEBRTCICESERVERS2_0_PASSWORD`: Set a password with at least 8 alphanumeric characters, including at least one digit.
- (Optional) Configure `RTSP_CAMERA_IP`, `AIG_*`, and `ASE_*` variables for advanced settings as needed.
- (Optional) To enable pre-defined advertisements, update the `web-ui/ProductAssociations.csv` file and the `web-ui/pre-defined-ads/` directory accordingly. The CSV file should reference image filenames located in the `pre-defined-ads` directory. Please note that only JPEG/JPG image formats are supported.
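Since only JPEG/JPG images are supported in the pre-defined ads directory, a quick check like the following (a sketch; adjust the path if your checkout lives elsewhere) flags any offending files:

```shell
# Print any non-JPEG files under web-ui/pre-defined-ads (prints nothing when all is well)
ads_dir=web-ui/pre-defined-ads
if [ -d "$ads_dir" ]; then
  find "$ads_dir" -type f ! -iname '*.jpg' ! -iname '*.jpeg'
fi
```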
```shell
make up
```

This command validates your environment configuration, verifies that required models are available, removes any previously running containers, and starts all containers.
Open Google Chrome and navigate to:

```
http://<HOST_IP>:5000
```
You should see the live video stream and dynamic advertisements.
Check container status:
```shell
docker ps
```

If any container is restarting, check logs:

```shell
docker logs -f <container_name>
```

To stop and remove all containers and volumes:

```shell
make down
```

By default, the PID component performs inference on the CPU, while the AIG component uses the GPU. You can customize the target device for AI inferencing in both components as follows:
For PID:
- Configuration: Update the `device` parameter within `pid/config.json`.
- Example:
  ```
  "parameters": { "detection-properties": { "model": "<model_path>", "device": "CPU" } }
  ```
- Available options: `CPU`, `GPU`, or `NPU`
For AIG:
- Configuration: Set the `AIG_MODEL_DEVICE` variable in the `.env` file.
- Example:
  ```
  AIG_MODEL_DEVICE=GPU
  ```
- Available options: `CPU` or `GPU`
After updating the device configuration, redeploy the application to apply changes:
```shell
make down
make up
```

The AIG service exposes REST endpoints for generating and managing advertisements.

Base URL: `http://<HOST_IP>:<AIG_PORT>` (by default, `AIG_PORT` is set to 5003 in the `.env` file)
- POST `/aig/minf/`
  - Description: Generate an advertisement image from input text and parameters (logo, slogan, price, etc.).
  - Request Body (JSON):
    ```
    {
      "prompt": "string",     // Description for image generation
      "logo_path": "string",  // (optional) Path to logo file
      "slogan": "string",     // (optional) Slogan text
      "price": "string"       // (optional) Price text
    }
    ```
  - Response: Image (binary or base64-encoded)
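As an illustration, the ad-generation endpoint can be exercised with curl. The field values below are made up, and the output handling assumes a binary image response (the service may instead return base64):

```shell
# Hypothetical request against the AIG ad-generation endpoint
AIG_URL="http://${HOST_IP:-localhost}:${AIG_PORT:-5003}"
curl -sS -X POST "$AIG_URL/aig/minf/" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "fresh orange juice in a sunny kitchen", "slogan": "Start Fresh!", "price": "$3.99"}' \
  -o ad_image.bin || echo "Request failed: is the stack up and AIG reachable?"
```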
- POST `/ase/predef/`
  - Description: Store a predefined advertisement in the database.
  - Request Body: Ad metadata and image
- POST `/ase/predef/query/ad`
  - Description: Query for relevant ads based on product/context.
  - Request Body: Query parameters
  - Response: List of matching ads
The PID service (DL Streamer Pipeline Server) exposes REST endpoints for pipeline management and status.
Base URL: http://<PID_HOST>:8080
For the REST API docs, refer to the DL Streamer Pipeline Server REST API documentation.
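For a quick status check, the Pipeline Server's `GET /pipelines` endpoint lists the loaded pipelines (endpoint name per the DL Streamer Pipeline Server REST API; verify against the docs for your version):

```shell
# List loaded pipelines on the DL Streamer Pipeline Server
PID_URL="http://${PID_HOST:-localhost}:8080"
curl -s "$PID_URL/pipelines" || echo "Request failed: is the PID container running?"
```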
- Obtain the RTSP URI from your camera software (test with VLC if needed).
- Edit `pid/config.json` and update the `pipeline` string:
  ```
  "pipeline": "rtspsrc location=\"rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>\" latency=100 name=source ! rtph264depay ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,format=BGR,width=1280,height=720 ! gvadetect name=detection ! queue ! gvawatermark ! gvafpscounter ! appsink name=destination"
  ```
- Set `RTSP_CAMERA_IP` in `.env`.
- Redeploy with `make down && make up`.
For more on RTSP, see RTSP protocol and DL Streamer Pipeline Server RTSP guide.
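Before redeploying, the camera URI can be sanity-checked from the host with ffprobe (part of FFmpeg; an optional tool, not required by the stack), using the same placeholders as the pipeline string above:

```shell
# Probe the RTSP feed; stream details print on success, an error otherwise
RTSP_URI="rtsp://<USERNAME>:<PASSWORD>@<RTSP_CAMERA_IP>:<PORT>/<FEED>"
ffprobe -rtsp_transport tcp -v error -show_streams "$RTSP_URI" || echo "Could not open RTSP stream"
```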
Prerequisites:
- Refer to the official Geti documentation for offline installation instructions. Since DL Streamer Pipeline Server uses Geti SDK version 2.7.1, install the latest Geti release (or any version above 2.6) for compatibility.
- See the Geti Tutorials for step-by-step guides on creating projects, labeling data, training models, and exporting results.
- Review the Supported Models in Geti to ensure your project uses a YOLO or other object detection architecture for export to OpenVINO™ IR format.
- Follow the Model Download Instructions to export your trained model as OpenVINO™ IR files (`.xml`/`.bin`). This process includes selecting the correct export format and downloading the files for deployment.
- Export your YOLO model from Intel® Geti™ as OpenVINO™ IR (`.xml`/`.bin`).
- Place the files in `./pid/models/yolo11_geti_ir/`.
- Edit the `pid/config.json` file and update the `model` parameter to point to your exported model file:
  ```
  "parameters": { "detection-properties": { "model": "/home/pipeline-server/yolo11_geti_ir/<YOUR_MODEL_NAME>.xml", "device": "CPU" } }
  ```
  Replace `<YOUR_MODEL_NAME>` with the actual filename (without extension) of your exported YOLO model.
- Redeploy the application to apply changes:
  ```shell
  make up
  ```
- Check the logs for model loading success:
  ```shell
  docker logs -f <pid_container_name>
  ```
This project is licensed under the Apache 2.0 License. See LICENSE for details.
