WildBerryEye is a cost-effective ecological monitoring system designed to capture images of pollinators using embedded AI and motion detection. Built around the Raspberry Pi Zero 2 W and the Sony IMX500 AI camera, the system operates autonomously in the field, enabling researchers to track wildlife activity without constant human supervision. It integrates object detection with YOLOv11 and motion detection through frame differencing, saving metadata-rich images with accurate timestamps.
The system supports two modes:
- AI Detection Mode: Runs a quantized YOLOv11 model on the IMX500 for species-specific detection.
- Motion Detection Mode: Uses frame differencing on the Pi Camera to detect movement and capture images.
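Frame differencing compares consecutive frames and flags motion when enough pixels change between them. A minimal sketch of the idea in NumPy (illustrative only; the thresholds and the project's actual implementation may differ):

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, min_changed=500):
    """Flag motion when enough pixels differ between two grayscale frames."""
    # Widen to int16 so the subtraction cannot wrap around uint8.
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_thresh)
    return changed >= min_changed

# Synthetic frames: a bright square "appears" in the second frame.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[100:150, 100:150] = 200          # 2500 pixels change
print(motion_detected(prev, curr))    # → True
print(motion_detected(prev, prev))    # → False
```

In practice the thresholds would be tuned to the camera, lighting, and the size of the pollinators being captured.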
Users can access the system through a responsive web interface hosted on the device, which provides:
- Live image preview
- Manual and automatic image capture
- REST API for remote control
- Gallery with batch download and deletion tools
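A REST API like the one above can be driven with ordinary HTTP requests. The sketch below mocks a capture-control endpoint with only the standard library so the request/response flow is runnable anywhere; the route names (`/api/status`, `/api/capture`) are hypothetical stand-ins, not the backend's actual routes:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# In-memory stand-in for the camera controller's state.
state = {"detecting": False, "captures": 0}

class Handler(BaseHTTPRequestHandler):
    def _send(self, payload):
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/api/status":      # hypothetical route
            self._send(state)

    def do_POST(self):
        if self.path == "/api/capture":     # hypothetical route
            state["captures"] += 1
            self._send({"ok": True, "captures": state["captures"]})

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in the background.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/capture", data=b"") as r:
    print(json.loads(r.read()))   # → {'ok': True, 'captures': 1}
```

The same `urllib` (or `requests`) calls, pointed at `http://<PI_IP>:5000`, are how a remote script would trigger captures on the device.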
All images are stored with filenames that embed detection labels, timestamps, and capture mode information.
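One way such metadata-bearing filenames can be composed and parsed (an illustrative naming scheme; the project's actual filename pattern may differ):

```python
from datetime import datetime

def build_filename(label, mode, when=None):
    """Compose a capture filename embedding mode, detection label, and timestamp."""
    when = when or datetime.now()
    stamp = when.strftime("%Y%m%d_%H%M%S")
    return f"{mode}_{label}_{stamp}.jpg"

def parse_filename(name):
    """Recover the embedded metadata from a filename built above."""
    mode, label, date, time = name.rsplit(".", 1)[0].split("_")
    return {"mode": mode, "label": label, "timestamp": f"{date}_{time}"}

name = build_filename("bee", "object", datetime(2025, 6, 1, 12, 30, 5))
print(name)                   # → object_bee_20250601_123005.jpg
print(parse_filename(name))   # → {'mode': 'object', 'label': 'bee', 'timestamp': '20250601_123005'}
```

Embedding the metadata in the filename keeps each image self-describing even when files are copied off the device without a sidecar database.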
For detailed instructions, see the setup README.
- Run the server:

      cd wildberryeye/backend
      python3 app.py --mode object

- Open your browser to http://<PI_IP>:5000
- Click Start Detection and verify the live overlay
- Click Capture Now and confirm a new image appears in the gallery
- Navigate to Gallery and test download / delete functionality
- Install as a systemd service:

      cd wildberryeye
      chmod +x setup/setup_flask_service.sh
      ./setup/setup_flask_service.sh wildberryeye backend object
      sudo systemctl daemon-reload
      sudo systemctl enable wildberryeye
      sudo systemctl start wildberryeye

- Verify service status:

      sudo systemctl status wildberryeye

Once the service is running, the web interface will be available at http://<PI_IP>:5000 on every boot.
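The setup script presumably installs a unit file along these lines (an illustrative sketch; the paths, user, and the unit the script actually generates may differ):

```ini
[Unit]
Description=WildBerryEye Flask backend
After=network-online.target

[Service]
; Hypothetical install location and user
WorkingDirectory=/home/pi/wildberryeye/backend
ExecStart=/usr/bin/python3 app.py --mode object
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` is what lets an unattended field deployment recover from a crashed backend without human intervention.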
For issues or questions, contact the authors or open an issue in the project repository.
- Outdoor hummingbird detection and classification
- Integration with cloud data storage
- Improved thermal management and battery logging
The system has been tested in a controlled lab environment and supports both object and motion detection modes. Software reliability, power usage, and inference behavior have been evaluated using simulated workloads and scheduled image capture. Field deployment is planned as a next phase.
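A scheduled-capture workload of the kind used in such evaluations can be simulated with a simple timer loop (an illustrative sketch, not the project's actual test harness):

```python
import time

def run_scheduled_capture(capture_fn, interval_s, n_shots):
    """Call capture_fn every interval_s seconds, n_shots times; return results."""
    results = []
    for shot in range(n_shots):
        results.append(capture_fn())
        if shot < n_shots - 1:      # no trailing sleep after the last shot
            time.sleep(interval_s)
    return results

# Fake capture standing in for the camera during a simulated workload.
counter = {"n": 0}

def fake_capture():
    counter["n"] += 1
    return f"capture_{counter['n']:03d}.jpg"

print(run_scheduled_capture(fake_capture, 0.01, 3))
# → ['capture_001.jpg', 'capture_002.jpg', 'capture_003.jpg']
```

Swapping `fake_capture` for a function that drives the real camera turns the same loop into a field-deployable capture schedule.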
Isaac Espinosa, Sage Silberman, Teodor Langan, Sophie Tao
With thanks to Rossana Maguiña for the original dataset and inspiration
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
This project included a GSoC 2025 contribution by Sophie Tao, who developed the new web interface for the Raspberry Pi 5 version of WildBerryEye.
Mentor: Isaac Espinosa
Contributor: Sophie Tao
Her work focused on:
- Building a React-based frontend with pages for live preview, dashboard, and contact.
- Adding image capture, video recording, and download features.
- Integrating with the Flask backend using REST API and SocketIO for real-time updates.
- Providing setup and usage instructions. (Link: WildberryEye5)
This contribution improves the accessibility and usability of WildBerryEye, making it easier for researchers and contributors to interact with the system. Future work may explore integrating a machine learning model (YOLOv11) into the interface's image and video processing.
This project is licensed under the MIT License.