This project provides a simple, lightweight object detection inference service designed to work in conjunction with Home Assistant. It serves as the AI-powered second tier in a two-tier approach to camera monitoring:

- Simple Motion Detection: Handled by a custom Home Assistant component (`simple_motion_detector`).
- Object Detection: Once motion is detected, this service uses a Coral USB Accelerator to run efficient object detection on the camera snapshots.
Note: This service runs as a standalone Python program (using Python 3.9 due to `pycoral` package constraints) and is configured to run as a systemd service. It cannot be deployed as a custom component inside Home Assistant directly.
## Features

- Efficient Inference: Utilizes the Coral USB Accelerator with TensorFlow Lite models for high-speed object detection.
- MQTT Integration: Listens for trigger messages and publishes detailed detection results.
- Two-Tier Approach: Complements a simple motion detection system in Home Assistant, running intensive object detection only when motion is detected.
- Customizable and Lightweight: Designed for resource efficiency and easy integration with existing Home Assistant setups.
## Project Structure

```
.
├── config.py
├── inference_service.py
├── README.md
├── coral_models/
│   └── efficientdet_lite3/
│       ├── efficientdet_lite3_512_ptq_edgetpu.tflite
│       └── coco_labels.txt
└── utils/
    ├── image_processing.py
    ├── logging_setup.py
    ├── model_handling.py
    └── mqtt_handler.py
```
Key files:

- `inference_service.py`: The main entry point for the inference service. It initializes logging, loads the Edge TPU model, listens for MQTT messages, processes images for inference, and publishes detection results.
- `config.py`: Contains configuration parameters such as the MQTT broker settings, paths to the model and labels, and the confidence threshold.
- `utils/model_handling.py`: Provides functions for loading the TensorFlow Lite model and running inference on image data (see the sketch after this list).
- `utils/image_processing.py`: Contains functions for drawing bounding boxes and overlaying labels on the images based on detection results.
- `utils/mqtt_handler.py`: Handles MQTT client setup and subscription management.
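For orientation, here is a minimal sketch of what the model-handling functions could look like with the `pycoral` API. The function names, PIL-based preprocessing, and result-dict layout are illustrative assumptions, not the project's verbatim code:

```python
# Hypothetical sketch of utils/model_handling.py using the pycoral API.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter


def load_model(model_path, labels_path):
    """Load a TFLite model compiled for the Edge TPU, plus its labels."""
    interpreter = make_interpreter(model_path)
    interpreter.allocate_tensors()
    labels = read_label_file(labels_path)
    return interpreter, labels


def run_inference(interpreter, labels, image_path, threshold=0.5):
    """Run detection on one snapshot and return a list of result dicts."""
    image = Image.open(image_path).convert('RGB')
    # Resize the image to the model's input size, keeping the scale
    # so bounding boxes can be mapped back to the original image.
    _, scale = common.set_resized_input(
        interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
    interpreter.invoke()
    objs = detect.get_objects(interpreter, threshold, scale)
    return [{
        'label': labels.get(obj.id, str(obj.id)),
        'score': obj.score,
        'bbox': [obj.bbox.xmin, obj.bbox.ymin, obj.bbox.xmax, obj.bbox.ymax],
    } for obj in objs]
```

The `scale` returned by `set_resized_input` maps the resized model input back to the original snapshot, so the reported bounding boxes are in original-image pixel coordinates.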
## Requirements

- Python 3.9: Required for compatibility with the `pycoral` package.
- Hardware:
  - A Coral USB Accelerator (https://coral.ai/products/accelerator), required to run the AI-based object detection inference efficiently.
  - A supported camera that provides snapshots (saved temporarily on disk).
- Software Dependencies:
  - OpenCV: For image reading, processing, and annotation.
  - NumPy: For array manipulations.
  - pycoral: For Edge TPU integration and object detection.
  - MQTT Broker: Ensure that an MQTT broker (such as Mosquitto) is running and that its connection details are set in `config.py`.
## Installation

1. Clone the Repository:

   ```bash
   git clone https://github.com/graus/simple_cv_detector.git
   cd simple_cv_detector
   ```

2. Install Dependencies (using a virtual environment is recommended):

   ```bash
   python3.9 -m venv venv
   source venv/bin/activate  # On Windows use: venv\Scripts\activate
   pip install -r requirements.txt
   ```
3. Prepare Model and Labels:

   - Ensure the model file (`efficientdet_lite3_512_ptq_edgetpu.tflite`) and the label file (`coco_labels.txt`) are in the correct paths as specified in `config.py`.
   - You can use any compatible model from Coral's object detection models repository by updating the model and label paths in `config.py`.
4. Configure the Service: Open `config.py` and adjust the following settings as needed:

   - `MQTT_BROKER`, `MQTT_PORT`, `MQTT_TOPIC`
   - `MODEL_PATH` and `LABELS_PATH`
   - `CONFIDENCE_THRESHOLD`
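As a minimal sketch, `config.py` might look like the following. The values are illustrative placeholders, and `RESULTS_TOPIC` is an assumed name for the results topic referenced later in this README:

```python
# Hypothetical sketch of config.py; values shown are illustrative defaults.

# MQTT broker connection settings.
MQTT_BROKER = "192.168.1.10"   # replace with your broker's address
MQTT_PORT = 1883
MQTT_TOPIC = "object_detection/state/trigger"     # trigger topic
RESULTS_TOPIC = "object_detection/state/results"  # assumed name for results topic

# Model and label paths, relative to the project root.
MODEL_PATH = "coral_models/efficientdet_lite3/efficientdet_lite3_512_ptq_edgetpu.tflite"
LABELS_PATH = "coral_models/efficientdet_lite3/coco_labels.txt"

# Detections scoring below this threshold are discarded.
CONFIDENCE_THRESHOLD = 0.5
```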
## Usage

1. Start the Service (as a systemd service or as a standalone script):

   ```bash
   python inference_service.py
   ```

   You should see output indicating that the model has been loaded and that the service is listening on the configured MQTT trigger topic.
2. Trigger Object Detection: Publish a JSON message to the MQTT trigger topic (e.g., `object_detection/state/trigger`). Example payload:

   ```json
   { "camera_id": "camera1" }
   ```

   The service will then:

   - Load a snapshot from `/tmp/snapshot_camera1.jpg`.
   - Run object detection using the Coral USB Accelerator.
   - Annotate the image with bounding boxes and labels, saved as `/tmp/output_with_bboxes_camera1.jpg` (sketched below).
   - Publish the detection results to the MQTT results topic (e.g., `object_detection/state/results`).
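The annotation step could look roughly like this with OpenCV. This is a minimal sketch, assuming a hypothetical `draw_detections` helper in `utils/image_processing.py` and detection dicts shaped like the results payload shown in the next section:

```python
# Hypothetical sketch of the annotation helper in utils/image_processing.py.
import cv2


def draw_detections(image_path, detections, output_path):
    """Draw a bounding box and label for each detection onto the snapshot."""
    image = cv2.imread(image_path)
    for det in detections:
        xmin, ymin, xmax, ymax = map(int, det["bbox"])
        label = f'{det["label"]} {det["score"]:.2f}'
        # Green rectangle around the detected object.
        cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
        # Label text just above the box, clamped to stay inside the image.
        cv2.putText(image, label, (xmin, max(ymin - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imwrite(output_path, image)
```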
## MQTT Message Formats

- Trigger Message:
  - Topic: `object_detection/state/trigger` (modifiable via `config.py`)
  - Payload: JSON containing a `camera_id` key.

- Results Message:
  - Topic: `object_detection/state/results`
  - Payload structure:

    ```json
    {
      "camera_id": "camera1",
      "objects": [
        {
          "label": "person",
          "score": 0.97,
          "bbox": [xmin, ymin, xmax, ymax]
        }
      ],
      "total_objects": 1,
      "image_path": "/tmp/output_with_bboxes_camera1.jpg"
    }
    ```

    (Here `xmin`, `ymin`, `xmax`, and `ymax` stand in for the pixel coordinates of the bounding box.)
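As a quick end-to-end check, you can publish a trigger and watch for results with `paho-mqtt`. A minimal sketch, assuming the example topics above and a paho-mqtt 1.x style client (paho-mqtt 2.x additionally requires a callback API version argument to `Client()`); the broker address is a placeholder:

```python
# Minimal sketch: publish a trigger and print the detection results.
import json

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"  # replace with your broker's address


def on_connect(client, userdata, flags, rc):
    client.subscribe("object_detection/state/results")
    # Fire a detection for camera1 once connected.
    client.publish("object_detection/state/trigger",
                   json.dumps({"camera_id": "camera1"}))


def on_message(client, userdata, msg):
    results = json.loads(msg.payload)
    print(f'{results["camera_id"]}: {results["total_objects"]} object(s)')
    for obj in results["objects"]:
        print(f'  {obj["label"]} ({obj["score"]:.2f}) at {obj["bbox"]}')


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```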
## Home Assistant Integration

- Motion Detection Trigger: The service is designed to work with the `simple_motion_detector` component for Home Assistant. The custom component detects motion in camera feeds and can be used in an automation that publishes a trigger to the MQTT topic.

- Systemd Service: Since the service runs on Python 3.9 and relies on system-level libraries and hardware (the Coral USB Accelerator), it is deployed as a standalone systemd service rather than as a Home Assistant custom component. Example systemd service file:

  ```ini
  [Unit]
  Description=Coral Object Detection Inference Service
  After=network.target

  [Service]
  User=your_username
  WorkingDirectory=/path/to/your/project
  ExecStart=/path/to/venv/bin/python inference_service.py
  Restart=always

  [Install]
  WantedBy=multi-user.target
  ```

  Adjust the paths and user as necessary.
## Acknowledgments

- PyCoral – For providing the libraries and support for Edge TPU integration.
- Home Assistant Community – For continuous inspiration in smart home automation.