diff --git a/docs_src/use-cases/automated-self-checkout/advanced.md b/docs_src/use-cases/automated-self-checkout/advanced.md deleted file mode 100644 index e74f897..0000000 --- a/docs_src/use-cases/automated-self-checkout/advanced.md +++ /dev/null @@ -1,224 +0,0 @@ -# Advanced Settings - -## Applying Environment Variables (EVs) to Run Pipeline - -EVs can be applied in two ways: - - 1. As a Docker Compose environment parameter input - 2. In the env files - -The input parameter will override the one in the env files if both are used. - -### Run with Custom Environment Variables - -Setting environment variables with make commands: - -!!! Example - - ```bash - make PIPELINE_SCRIPT=yolo11n_effnetb0.sh RESULTS_DIR="../render_results" run-render-mode - ``` - -Setting environment variables with docker compose up: - -!!! Example - - ```bash - PIPELINE_SCRIPT=yolo11n_effnetb0.sh RESULTS_DIR="../render_results" docker compose -f src/docker-compose.yml --env-file src/res/yolov5-cpu.env up -d - ``` - -!!! Note - Environment variables set this way are known as command-line environment overrides and apply to this run only. - They override the default values in the env files and docker-compose.yml. - -### Editing the Environment Files - -Environment variable files can be used to persist environment variables between deployments. You can find these files in the `src/res/` folder with the default environment variables for Automated Self Checkout.
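The override behavior described above — env-file values acting as defaults, with a command-line assignment winning for a single run — can be sketched in plain shell. This is illustrative only: `demo.env` and its values stand in for a file like `src/res/all-cpu.env`, and the final command mimics what `make PIPELINE_SCRIPT=... run-render-mode` does for one invocation.

```shell
# Illustrative sketch of env-file defaults vs. command-line overrides.
# demo.env stands in for a device env file such as src/res/all-cpu.env.
cat > /tmp/demo.env <<'EOF'
PIPELINE_SCRIPT=yolo11n.sh
RESULTS_DIR=../results
EOF

# Load the env-file defaults into the environment (allexport mode).
set -a
. /tmp/demo.env
set +a

# An inline assignment for a single command overrides the loaded default
# for that run only; the parent shell keeps PIPELINE_SCRIPT=yolo11n.sh.
PIPELINE_SCRIPT=yolo11n_effnetb0.sh sh -c 'echo "$PIPELINE_SCRIPT $RESULTS_DIR"'
# prints: yolo11n_effnetb0.sh ../results
```

The same precedence applies whether the variables reach the container through `make` or directly through `docker compose up`.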
- -| Environment File | Description | -|-------------------------------------------|-------------------------------------------------------------------------| -| `src/res/all-cpu.env` | Runs pipeline on **CPU** for decoding, pre-processing, and inferencing | -| `src/res/all-gpu.env` | Runs pipeline on **GPU** for decoding, pre-processing, and inferencing | -| `src/res/all-dgpu.env` | Runs pipeline on **discrete GPU** for decoding, pre-processing, and inferencing | -| `src/res/all-npu.env` | Runs pipeline on **NPU** for inferencing only | -| `src/res/yolov5-cpu-class-gpu.env` | Uses **CPU** for detection and **GPU** for classification | -| `src/res/yolov5-gpu-class-cpu.env` | Uses **GPU** for detection and **CPU** for classification | - - -After modifying or creating a new .env file you can load the .env file through the make command or docker compose up - -!!! Example "Make" - ```bash - make PIPELINE_SCRIPT=yolo11n_effnetb0.sh DEVICE_ENV=res/all-gpu.env run-render-mode - ``` - -!!! Example "Docker compose" - ```bash - docker compose -f src/docker-compose.yml --env-file src/res/yolov5-cpu-class-gpu.env up -d - ``` - -## Environment Variables (EVs) - -The table below lists the environment variables (EVs) that can be used as inputs for the container running the inferencing pipeline. - -=== "Docker Compose EVs" - This list of EVs is for running through the make file or docker compose up - - | Variable | Description | Values | - |:----|:----|:---| - |`DEVICE_ENV` | Path to device specific environment file that will be loaded into the pipeline container | res/all-gpu.env | - |`DOCKER_COMPOSE` | The docker-compose.yml file to run | src/docker-compose.yml | - |`RETAIL_USE_CASE_ROOT` | The root directory for Automated Self Checkout in relation to the docker-compose.yml | .. 
| - |`RESULTS_DIR` | Directory to output results | ../results | - -=== "Docker Compose Parameters" - This list of parameters that can be set when running docker compose up - - | Variable | Description | Values | - |:----|:----|:---| - |`-v` | Volume binding for containers in the Docker Compose | -v results/:/tmp/results | - |`-e` | Override environment variables inside of the Docker Container | -e LOG_LEVEL debug | - -=== "Common EVs" - This list of EVs is common for all profiles. - - | Variable | Description | Values | - |:----|:----|:---| - |`BARCODE_RECLASSIFY_INTERVAL` | time interval in seconds for barcode classification | Ex: 5 | - |`BATCH_SIZE` | number of frames batched together for a single inference to be used in [gvadetect batch-size element](https://dlstreamer.github.io/elements/gvadetect.html) | 0-N | - |`CLASSIFICATION_OPTIONS` | extra classification pipeline instruction parameters | "", "reclassify-interval=1 batch-size=1 nireq=4 gpu-throughput-streams=4" | - |`DETECTION_OPTIONS` | extra object detection pipeline instruction parameters | "", "ie-config=NUM_STREAMS=2 nireq=2" | - |`GST_DEBUG` | for running pipeline in gst debugging mode | 0, 1 | - |`LOG_LEVEL` | log level to be set when running gst pipeline | ERROR, INFO, WARNING, and [more](https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html?gi-language=c#the-debug-log) | - |`OCR_RECLASSIFY_INTERVAL` | time interval in seconds for OCR classification | Ex: 5 | - |`REGISTRY` | Option to pull the pre-built images rather than creating them locally (by default:true) | false, true | - |`RENDER_MODE` | for displaying pipeline and overlay CV metadata | 1, 0 | - |`PIPELINE_COUNT` | Number of Automated Self Checkout Docker container instances to launch | Ex: 1 | - |`PIPELINE_SCRIPT` | Pipeline script to run. | yolo11n.sh, yolo11n_effnetb0.sh, yolo11n_full.sh | - - -## Available Pipelines - -- `yolo11n.sh` - Runs object detection only. 
-- `yolo11n_full.sh` - Runs object detection, object classification, text detection, text recognition, and barcode detection. -- `yolo11n_effnetb0.sh` - Runs object detection and object classification. -- `obj_detection_age_prediction.sh` - Runs two parallel streams:
-  Stream 1: Object detection and classification on retail video.
-  Stream 2: Face detection and age/gender recognition on age prediction video. - -### Models Used - -- Age/Gender Recognition - `age-gender-recognition-retail-0013` -- Face Detection - `face-detection-retail-0004` -- Object Classification - `EfficientNet-B0` -- Object Detection - `YOLOv11n` -- Text Detection - `horizontal-text-detection-0002` -- Text Recognition - `text-recognition-0012` - -## Using a Custom Model - -You can replace the default detection model with your own trained model by following these steps: - -1. Clone the `automated-self-checkout` repository. This will create a folder named `automated-self-checkout`. - -2. Inside the `automated-self-checkout` folder, ensure there is a `models` directory. If it doesn’t exist, create one. - -3. Copy your custom model files into the `models` directory. For example, use this structure: - - ```text - ./automated-self-checkout/models/object_detection//INT8/ - ``` - -4. Open the `yolo11n.sh` script. Locate the `gstLaunchCmd` line and update the `model` path to point to your custom model: - - !!! Example - - ```bash - model=/home/pipeline-server/models/object_detection//INT8/ - ``` - -5. Run the pipeline as usual to start using your custom model. - -When you add a custom model, it replaces the default detection model used by the pipeline. - -!!! Note - If your custom model includes a `labels` (`.txt`) file or a `model-proc` (`.json`) file, place them in the same folder as your `.xml` file. Then set the variables in `gstLaunchCmd` as shown below before running the pipeline. - - ```bash - model-proc=/home/pipeline-server/models/object_detection//INT8/ - ``` - - ```bash - labels=/home/pipeline-server/models/object_detection//INT8/ - ``` - -## Configure the System Proxy - -Follow the steps below to configure the proxy. - -### 1. 
Configure Proxy for the Current Shell Session - -```bash -export http_proxy=http://: -export https_proxy=http://: -export HTTP_PROXY=http://: -export HTTPS_PROXY=http://: -export NO_PROXY=localhost,127.0.0.1,::1 -export no_proxy=localhost,127.0.0.1,::1 -export socks_proxy=http://: -export SOCKS_PROXY=http://: -``` - -### 2. System-Wide Proxy Configuration - -System-wide environment (/etc/environment) -(Run: sudo nano /etc/environment and add or update) - -```bash -http_proxy=http://: -https_proxy=http://: -ftp_proxy=http://: -socks_proxy=http://: -no_proxy=localhost,127.0.0.1,::1 - -HTTP_PROXY=http://: -HTTPS_PROXY=http://: -FTP_PROXY=http://: -SOCKS_PROXY=http://: -NO_PROXY=localhost,127.0.0.1,::1 -``` -### 3. Docker Daemon & Client Proxy Configuration - -Docker daemon drop-in (/etc/systemd/system/docker.service.d/http-proxy.conf) -Create dir if missing: -sudo mkdir -p /etc/systemd/system/docker.service.d -sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf - -```bash -[Service] -Environment="http_proxy=http://:" -Environment="https_proxy=http://:" -Environment="no_proxy=localhost,127.0.0.1,::1" -Environment="HTTP_PROXY=http://:" -Environment="HTTPS_PROXY=http://:" -Environment="NO_PROXY=localhost,127.0.0.1,::1" -Environment="socks_proxy=http://:" -Environment="SOCKS_PROXY=http://:" - -# Reload & restart: -sudo systemctl daemon-reload -sudo systemctl restart docker - -# Docker client config (~/.docker/config.json) -# mkdir -p ~/.docker -# nano ~/.docker/config.json -{ - "proxies": { - "default": { - "httpProxy": "http://:", - "httpsProxy": "http://:", - "noProxy": "localhost,127.0.0.1,::1" - } - } -} -``` \ No newline at end of file diff --git a/docs_src/use-cases/automated-self-checkout/automated-self-checkout.md b/docs_src/use-cases/automated-self-checkout/automated-self-checkout.md index 743e70f..d487b61 100644 --- a/docs_src/use-cases/automated-self-checkout/automated-self-checkout.md +++ 
b/docs_src/use-cases/automated-self-checkout/automated-self-checkout.md @@ -1,22 +1,57 @@ # Intel® Automated Self-Checkout Reference Package -## Overview +> **🔄 Package Integration Notice** +> The Automated Self-Checkout functionality has been consolidated into the [Intel® Loss Prevention Reference Package](../loss-prevention/loss-prevention.html) for a unified retail computer vision platform. -As Computer Vision becomes more and more mainstream, especially for industrial & retail use cases, development and deployment of these solutions becomes more challenging. Vision workloads are large and complex and need to go through many stages. For instance, in the pipeline below, the video data is ingested, pre-processed before each inferencing step, inferenced using two models - YOLOv5 and EfficientNet, and post processed to generate metadata and show the bounding boxes for each frame. This pipeline is just an example of the supported models and pipelines found within this reference. +## What This Means for You -[![Vision Data Flow](./images/vision-data-flow.jpg)](./images/vision-data-flow.jpg) - -Automated self-checkout solutions are complex, and retailers, independent software vendors (ISVs), and system integrators (SIs) require a good understanding of hardware and software, the costs involved in setting up and scaling the system, and the configuration that best suits their needs. Vision workloads are significantly larger and require systems to be architected, built, and deployed with several considerations. Hence, a set of ingredients needed to create an automated self-checkout solution is necessary. 
More details are available on the [Intel Developer Focused Webpage](https://www.intel.com/content/www/us/en/developer/articles/reference-implementation/automated-self-checkout.html) and on this [LinkedIn Blog](https://www.linkedin.com/pulse/retail-innovation-unlocked-open-source-vision-enabled-mohideen/) +- **Existing Users**: Your automated self-checkout use cases are now supported in the Loss Prevention package +- **New Users**: Start directly with the Loss Prevention package for the latest features +- **Migration**: No code changes needed - simply use the new package location -The Intel® Automated Self-Checkout Reference Package provides critical components required to build and deploy a self-checkout use case using Intel® hardware, software, and other open-source software. This reference implementation provides a pre-configured automated self-checkout pipeline that is optimized for Intel® hardware. +## Why Computer Vision for Retail? -## Next Steps +Automated self-checkout systems process complex visual data through multiple stages to transform raw video into actionable business insights: -!!! Note - If coming from the catalog please follow the [Catalog Getting Started Guide](./catalog/Overview.md). +1. **Video Ingestion**: Capture customer interactions and product movements in real-time +2. **Object Detection**: Identify products and items using YOLOv5 models +3. **Classification**: Categorize and verify items with EfficientNet algorithms +4. **Analytics**: Generate loss prevention data and checkout validation -To begin using the automated self-checkout solution you can follow the [Getting Started Guide](./getting_started.md). +The pipeline below demonstrates this workflow, where video data flows through preprocessing, dual AI model inference (YOLOv5 and EfficientNet), and post-processing to generate metadata and visual bounding boxes for each frame. 
-## Releases +[![Vision Data Flow](./images/vision-data-flow.jpg)](./images/vision-data-flow.jpg) -For the project release notes, refer to the [GitHub* Repository](../../releasenotes.md). +This unified platform simplifies deployment complexity with pre-configured, hardware-optimized workflows that scale from pilot programs to enterprise-wide implementations. + +## Integration Benefits + +The automated self-checkout functionality has been consolidated into the Intel® Loss Prevention Reference Package, providing a unified platform for retail computer vision solutions. This integration offers several advantages: +> +> - **Unified Platform**: Single application supporting both loss prevention and automated self-checkout use cases +> - **Hardware Optimization**: Pre-configured workloads optimized for Intel® CPU, GPU, and NPU hardware +> - **Flexible Deployment**: Multiple workload configurations including: +> - Object Detection (CPU/GPU/NPU) +> - Object Detection & Classification (CPU/GPU/NPU) +> - Age Prediction & Face Detection (CPU/GPU/NPU) +> - Heterogeneous configurations +> - **Simplified Management**: Single codebase, unified configuration, and streamlined deployment process +## What You Want to Do + +### 🚀 I'm New to Intel Retail Solutions +**Quick Start (15 minutes)**: [Loss Prevention Getting Started Guide](https://intel-retail.github.io/documentation/use-cases/loss-prevention/getting_started.html) +- Set up your environment +- Run your first automated self-checkout demo +- Understand the basic workflow + +### ⚙️ I Want to Customize the Solution +**Advanced Configuration (30-60 minutes)**: [Loss Prevention Advanced Guide](https://intel-retail.github.io/documentation/use-cases/loss-prevention/advanced.html) +- Customize workload configurations +- Optimize for your hardware setup +- Configure multiple detection models + +### 📊 I Need Performance Data +**Benchmark & Optimize**: [Loss Prevention Performance 
Guide](https://intel-retail.github.io/documentation/use-cases/loss-prevention/performance.html) +- Compare CPU/GPU/NPU performance +- Optimize for your specific use case +- Understand throughput metrics \ No newline at end of file diff --git a/docs_src/use-cases/automated-self-checkout/catalog/Get-Started-Guide.md b/docs_src/use-cases/automated-self-checkout/catalog/Get-Started-Guide.md deleted file mode 100644 index 7fc1dcb..0000000 --- a/docs_src/use-cases/automated-self-checkout/catalog/Get-Started-Guide.md +++ /dev/null @@ -1,217 +0,0 @@ -# Getting Started Guide - -- **Time to Complete:** 30 minutes -- **Programming Language:** Python*3, Bash* - -## Prerequisites for Target System - -* Intel® Core™ processor -* At least 16 GB RAM -* At least 64 GB hard drive -* An Internet connection -* Docker* -* Docker Compose* v2 (Optional) -* Git* -* Ubuntu* LTS Boot Device - -If Ubuntu is not installed on the target system, follow the instructions and [install Ubuntu](https://ubuntu.com/tutorials/install-ubuntu-desktop/). - -## Install Automated Self-Checkout Package Software - -Do the following to install the software package: - - 1. Download the reference implementation package: - [Automated Self-Checkout Retail Reference Implementation](https://edgesoftware.intel.com/automated-self-checkout). - - 1. Open a new terminal and navigate to the download folder to unzip the ``automated-self-checkout`` package: - - ``` bash - unzip automated-self-checkout.zip - ``` - - 1. Navigate to the ``automated-self-checkout/`` directory: - - ``` bash - cd automated-self-checkout - ``` - - 1. Change permission of the executable edgesoftware file: - - ``` bash - chmod 755 edgesoftware - ``` - - 1. Install the package: - - ``` bash - ./edgesoftware install - ``` - - 1. You will be prompted for the Product Key during the installation. The Product Key is in the email you received from Intel confirming your download. 
- - When the installation is complete, you will see the message “Installation of package complete” and the installation status for each module. - - ![Figure 3: Installation Status](images/automated-selfcheckout-installation-status.png) - - If the installation fails because of proxy-related issues, follow the [troubleshooting steps](#troubleshooting). - -## Run and Evaluate Pre-Configured Pipelines - -In a retail environment, self-checkout solutions analyze video streams from multiple cameras to streamline the checkout process. The system detects and classifies products as items are scanned. Barcode and text recognition ensure accuracy. This data is processed to verify purchases and update inventory in real time. Factors such as latency and frames per second (FPS) help assess the automated self-checkout solution's real-time responsiveness and efficiency. - -This demonstration shows how to run the pre-configured pipeline, view a simulation that detects and tracks objects, and check the pipeline's status. - -## Step 1: Run Pipeline - -Do the following to run the pre-configured pipeline: - - 1. Navigate to the ``automated-self-checkout`` directory: - - ``` bash - cd automated-self-checkout - ``` - - 1. Modify the following host IP addresses to match the IP address of the system running the reference implementation: - - * ``HOST_IP`` and ``RSTP_CAMERA_IP`` in the ``src/pipeline-server/.env`` file. - * ``host_ip`` in the ``src/pipeline-server/postman/env.json`` file. - - 1. Run the pipeline server: - - ``` bash - make run-pipeline-server - ``` - - The containers will start to run. - - ![Figure 4: Pipeline Status](images/automated-selfcheckout-run-pipeline.png) - -## Step 2: Launch Grafana Dashboard - -Do the following to launch the Grafana* dashboard to view the objects being detected and tracked: - - 1. Open a web browser and enter the following URL to access the Grafana dashboard: - ``http://:3000``. - - To get ````, run the ``hostname -I`` command. - - 1. 
When prompted, provide the following credentials: - - * Username: ``root`` - * Password: ``evam123`` - - 1. On the dashboard, go to **Menu** > **Home**, and select **Video Analytics Dashboard**. - - The dashboard visualizes the object detection and tracking pipelines. The bounding boxes around the products indicate their detection and tracking. The dashboard also shows the active streams and their corresponding average FPS. - - ![Figure 5: Object Detection and Tracking](images/automated-selfcheckout-grafana.png) - -## Step 3: Check Pipeline Status - -Do the following to check the metrics: - - 1. Check whether the docker containers are running: - - ``` bash - docker ps --format 'table{{.Names}}\t{{.Image}}\t{{.Status}}' - ``` - ![Figure 6: Docker Container Status](images/automated-selfcheckout-pipeline-status.png) - - 1. Check the MQTT inference output: - - ``` bash - mosquitto_sub -v -h localhost -p 1883 -t 'AnalyticsData0' - mosquitto_sub -v -h localhost -p 1883 -t 'AnalyticsData1' - mosquitto_sub -v -h localhost -p 1883 -t 'AnalyticsData2' - ``` - - Here is the result for ``AnalyticsData0``: - - ``` shell - AnalyticsData0 {"objects":[{"detection":{"bounding_box":{"x_max":0.3163176067521043,"x_min":0.20249048400491532,"y_max":0.7995593662281202,"y_min":0.12237883070032396},"confidence":0.868196964263916,"label":"bottle","label_id":39},"h":731,"region_id":6199,"roi_type":"bottle","w":219,"x":389,"y":132},{"detection":{"bounding_box":{"x_max":0.7833052431819754,"x_min":0.6710088227893136,"y_max":0.810283140877349,"y_min":0.1329853767638305},"confidence":0.8499506711959839,"label":"bottle","label_id":39},"h":731,"region_id":6200,"roi_type":"bottle","w":216,"x":1288,"y":144}],"resolution":{"height":1080,"width":1920},"tags":{},"timestamp":67297301635} - - AnalyticsData0 
{"objects":[{"detection":{"bounding_box":{"x_max":0.3163306922646063,"x_min":0.20249845268772138,"y_max":0.7984013488063937,"y_min":0.12254781445953},"confidence":0.8666459321975708,"label":"bottle","label_id":39},"h":730,"region_id":6201,"roi_type":"bottle","w":219,"x":389,"y":132},{"detection":{"bounding_box":{"x_max":0.7850104587729607,"x_min":0.6687324296210857,"y_max":0.7971464600783804,"y_min":0.13681757042794374},"confidence":0.8462932109832764,"label":"bottle","label_id":39},"h":713,"region_id":6202,"roi_type":"bottle","w":223,"x":1284,"y":148}],"resolution":{"height":1080,"width":1920},"tags":{},"timestamp":67330637174} - ``` - - 1. Check the pipeline status: - - ``` bash - ./src/pipeline-server/status.sh - ``` - The pipeline status should be like: - - ``` shell - --------------------- Pipeline Status --------------------- - ----------------8080---------------- - [ - { - "avg_fps": 11.862402507697258, - "avg_pipeline_latency": 0.5888091060475129, - "elapsed_time": 268.07383918762207, - "id": "95204aba458211efa9080242ac180006", - "message": "", - "start_time": 1721361269.6349292, - "state": "RUNNING" - } - ] - ``` - - The pipeline status displays the average FPS and average pipeline latency, among other metrics. - - 1. Stop the services: - - ``` bash - make down-pipeline-server - ``` - -## Summary ---------- - -In this get started guide, you learned how to: - -* Install the automated self-checkout package software. -* Verify the installation. -* Run pre-configured pipelines, visualize object detection and tracking, and extract data from them. - -## Learn More ------------- - -* To apply custom environment variables, see [Advanced Settings](../advanced.md). -* To evaluate the pipeline system performance across different hardware, see [Test Performance](../performance.md). - -## Troubleshooting - -Issues with Docker Installation - -If you are behind a proxy and if you experience connectivity issues, the Docker installation might fail. 
Do the following to install Docker manually: - - 1. [Install Docker from a package](https://docs.docker.com/engine/install/ubuntu/#install-from-a-package). - 1. Complete the post-installation steps to [manage Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user). - 1. [Configure the Docker CLI to use proxies](https://docs.docker.com/engine/cli/proxy/). - -## Error Logs - - To access the Docker Logs for EVAM server 0, run the following command: - - ``` bash - docker logs evam_0 - ``` - Here is an example of the error log when the RSTP stream is unreachable for a pipeline: - - ``` shell - {"levelname": "INFO", "asctime": "2024-07-31 23:26:47,257", "message": "===========================", "module": "pipeline_manager"} - {"levelname": "INFO", "asctime": "2024-07-31 23:26:47,257", "message": "Completed Loading Pipelines", "module": "pipeline_manager"} - {"levelname": "INFO", "asctime": "2024-07-31 23:26:47,257", "message": "===========================", "module": "pipeline_manager"} - {"levelname": "INFO", "asctime": "2024-07-31 23:26:47,330", "message": "Starting Tornado Server on port: 8080", "module": "__main__"} - {"levelname": "INFO", "asctime": "2024-07-31 23:26:51,177", "message": "Creating Instance of Pipeline detection/yolov5", "module": "pipeline_manager"} - {"levelname": "INFO", "asctime": "2024-07-31 23:26:51,180", "message": "Gstreamer RTSP Server Started on port: 8555", "module": "gstreamer_rtsp_server"} - {"levelname": "ERROR", "asctime": "2024-07-31 23:26:51,200", "message": "Error on Pipeline 5d5b3b0a4f9411efb60d0242ac120007: gst-resource-error-quark: Could not open resource for reading. 
(5): ../gst/rtsp/gstrtspsrc.c(6427): gst_rtspsrc_setup_auth (): /GstPipeline:pipeline3/GstURISourceBin:source/GstRTSPSrc:rtspsrc0:\nNo supported authentication protocol was found", "module": "gstreamer_pipeline"} - ``` - -## Known Issues -------------- - -For the list of known issues, see [known issues](https://github.com/intel-retail/automated-self-checkout/issues). - - - diff --git a/docs_src/use-cases/automated-self-checkout/catalog/Overview.md b/docs_src/use-cases/automated-self-checkout/catalog/Overview.md deleted file mode 100644 index 29b0a63..0000000 --- a/docs_src/use-cases/automated-self-checkout/catalog/Overview.md +++ /dev/null @@ -1,54 +0,0 @@ -# Automated Self-Checkout Retail Reference Implementation - -Use pre-configured optimized computer vision pipelines to build and deploy a self-checkout use case using Intel® hardware, software, and other open source software. - -## Summary - -The Automated Self-Checkout Reference Implementation provides essential components to build and deploy a self-checkout solution using Intel® hardware, software, and open source software. It includes the basic services to get you started running optimized Intel® Deep Learning Streamer (Intel® DLStreamer)-based computer vision pipelines. These services are modular, allowing for customization or replacement with your solutions to address specific needs. - -### Features and Benefits - -With this reference implementation, the self-checkout stations can: - -* Recognize the non-barcoded items more quickly. -* Recognize the product SKU and items placed in transparent bags without requiring manual input. -* Reduce the steps in identifying products when there is no match by suggesting the top five closest choices. - -The pre-configured, optimized computer vision pipelines also accelerate the time to market. Inference results are published to Message Queuing Telemetry Transport (MQTT), allowing easy integration with other applications. 
The implementation includes examples of using different devices such as CPUs, integrated GPUs, and discrete GPUs. - -## How It Works - -In this reference implementation, the video streams from various cameras are cropped and resized to enable the inference engine to run the associated models. The object detection and product classification features identify the SKUs during checkout. The barcode detection, text detection, and recognition features further verify and increase the accuracy of the detected SKUs. The inference details are then aggregated and pushed to MQTT to process the combined results further. - -As Figure 1 shows, Docker Compose is used to deploy the reference implementation on different system setups easily. At the same time, MQTT Broker publishes the inference data that external applications or systems can use. Unique MQTT topics are created for each pipeline for a more refined approach to organizing inference outputs. - -![A simple architectural diagram for Automated Self-checkout](images/automated-selfcheckout-arch-diagram.png) - -Figure 1: Automated Self-Checkout Architectural Diagram - -Each automated self-checkout pipeline has a pre-configured setup optimized for running on Intel hardware. The following are the available pipelines: - -* ``yolov5``: yolov5 object detection only. -* ``yolov5_effnet``: yolov5 object detection and ``efficientnet_b0`` classification. -* ``yolov5_full``: yolov5 object detection, ``efficientnet_b0`` classification, text detection, text recognition, and barcode detection. - - -Figure 2 shows a pipeline in which the video data is ingested and pre-processed before each inferencing step. The data is then analyzed using two models, ``YOLOv5`` and ``EfficientNet``, and post-processed to generate metadata and display bounding boxes for each frame. This pipeline is an example of the models and processing workflows supported in this reference implementation. 
- -![A pipeline flow](images/pipeline-example.png) - -Figure 2: Example of a Pipeline Flow - -The number of streams and pipelines that can be used are system-dependent. For more details, see the latest [performance data](https://www.intel.com/content/www/us/en/developer/topic-technology/edge-5g/tools/automated-self-checkout-benchmark-results.html). - -The following are the components in the reference implementation. - -* **Edge Video Analytics Microservice (EVAM)** is a Python-based, interoperable containerized microservice for the easy development and deployment of video analytics pipelines. It is built on [GStreamer](https://gstreamer.freedesktop.org/documentation/) and [Intel® DL Streamer](https://dlstreamer.github.io/), which provide video ingestion and deep learning inferencing functionalities, respectively. -* **Multimodal Data Visualization Microservice** enables the visualization of video streams and time-series data. - - -## Learn More - -- Get started with the Automated Self-Checkout Retail Reference Implementation using the [Get Started Guide](Get-Started-Guide.md). -- Know more about [GStreamer](https://gstreamer.freedesktop.org/documentation/) and [Intel® Deep Learning Streamer (DL Streamer)](https://dlstreamer.github.io/). 
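Since inference results are published to per-pipeline MQTT topics, downstream systems can post-process the JSON payloads with standard tools. A rough sketch, using a hard-coded, abridged `AnalyticsData` message instead of a live `mosquitto_sub` feed, and plain `grep`/`cut` in place of a real JSON parser such as `jq`:

```shell
# Illustrative only: extract detected labels from an abridged AnalyticsData
# payload. A real consumer would pipe, e.g.,
#   mosquitto_sub -h localhost -p 1883 -t 'AnalyticsData0'
# into a proper JSON parser.
msg='{"objects":[{"detection":{"confidence":0.868,"label":"bottle","label_id":39}},{"detection":{"confidence":0.849,"label":"bottle","label_id":39}}],"resolution":{"height":1080,"width":1920}}'

# Pull every "label":"..." field out of the message, one label per line.
printf '%s\n' "$msg" | grep -o '"label":"[^"]*"' | cut -d'"' -f4
```

Because each pipeline publishes to its own topic (`AnalyticsData0`, `AnalyticsData1`, …), a consumer like this can be attached per stream without filtering a shared feed.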
- diff --git a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-arch-diagram.png b/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-arch-diagram.png deleted file mode 100644 index c9dd24b..0000000 Binary files a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-arch-diagram.png and /dev/null differ diff --git a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-grafana.png b/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-grafana.png deleted file mode 100644 index 052577e..0000000 Binary files a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-grafana.png and /dev/null differ diff --git a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-installation-status.png b/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-installation-status.png deleted file mode 100644 index 32a9c2e..0000000 Binary files a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-installation-status.png and /dev/null differ diff --git a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-pipeline-status.png b/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-pipeline-status.png deleted file mode 100644 index 511eae8..0000000 Binary files a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-pipeline-status.png and /dev/null differ diff --git a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-run-pipeline.png b/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-run-pipeline.png deleted file mode 100644 index 5bd4c92..0000000 Binary files a/docs_src/use-cases/automated-self-checkout/catalog/images/automated-selfcheckout-run-pipeline.png and /dev/null differ diff --git 
a/docs_src/use-cases/automated-self-checkout/catalog/images/pipeline-example.png b/docs_src/use-cases/automated-self-checkout/catalog/images/pipeline-example.png deleted file mode 100644 index 57be836..0000000 Binary files a/docs_src/use-cases/automated-self-checkout/catalog/images/pipeline-example.png and /dev/null differ diff --git a/docs_src/use-cases/automated-self-checkout/getting_started.md b/docs_src/use-cases/automated-self-checkout/getting_started.md deleted file mode 100644 index 9eff6f8..0000000 --- a/docs_src/use-cases/automated-self-checkout/getting_started.md +++ /dev/null @@ -1,140 +0,0 @@ -# Getting Started - -### **NOTE:** - -By default the application runs by pulling the pre-built images. If you want to build the images locally and then run the application, set the flag: - -```bash -REGISTRY=false - -usage: make REGISTRY=false (applicable for all commands like benchmark, benchmark-stream-density..) -Example: make run-demo REGISTRY=false -``` - -(If this is the first time, it will take some time to download videos, models, docker images and build images) - -## Step by step instructions: - -1. Download the models using download_models/downloadModels.sh - - ```bash - make download-models - ``` - -2. Update github submodules - - ```bash - make update-submodules - ``` - -3. Download sample videos used by the performance tools - - ```bash - make download-sample-videos - ``` - -4. Start Automated Self Checkout using the Docker Compose file. - - ```bash - make run-render-mode - ``` - -- The above series of commands can be executed using only one command: - - ```bash - make run-demo - ``` -5. To build the images locally step by step: - - Follow the following steps: - ```bash - make download-models REGISTRY=false - make update-submodules REGISTRY=false - make download-sample-videos - ``` - - Now build the pipeline-runner image locally: - ```bash - make build REGISTRY=false - ``` - - Finally, start Automated self checkout using docker compose up. 
- ```bash - make run-render-mode REGISTRY=false - ``` - - The above series of commands can be executed using only one command: - - ```bash - make run-demo REGISTRY=false - ``` - -6. Verify Docker containers - - Verify Docker images - ```bash - docker ps --format 'table{{.Names}}\t{{.Status}}\t{{.Image}}' - ``` - Result: - ```bash - NAMES STATUS IMAGE - camera-simulator0 Up 12 seconds jrottenberg/ffmpeg:4.1-alpine - src-ClientGst-1 Up 14 seconds dlstreamer:dev - camera-simulator Up 13 seconds aler9/rtsp-simple-server - ``` - -7. Verify Results - - After starting Automated Self Checkout you will begin to see result files being written into the results/ directory. Here are example outputs from the 3 log files. - - gst-launch_