- Set up object detection and tracking in ROS 2 using OpenCV, YOLO, or another detection framework.
- Capture real-time camera data (either from a physical camera or a Gazebo simulation).
- Publish detected object data as ROS 2 topics for use in robotic applications.
- Repository & Environment Setup
  - Clone and build the necessary repositories (e.g., OpenCV, YOLO, ROS 2 perception packages).
  - Set up a ROS 2 package to handle real-time image processing.
  - Ensure a camera feed is available from a physical camera or a simulated Gazebo camera.
- Object Detection & Tracking
  - Implement an object detection pipeline using OpenCV, YOLO, or TensorFlow (a minimal detection sketch follows this list).
  - Detect objects from the camera feed and classify them (e.g., bottle, box, person).
  - Track each detected object over time and publish its position as a ROS 2 topic.
- ROS 2 Integration
  - Publish the detected object's class and bounding box coordinates as a ROS 2 topic.
  - Implement a ROS 2 subscriber that reads the object data and displays it in RViz or prints it in the terminal (a publisher/subscriber sketch also follows this list).
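For reference, a minimal detection loop could look like the sketch below. It assumes the `ultralytics` YOLO package, a pretrained `yolov8n.pt` weights file, and a local webcam; the repository's actual pipeline may use a different model, source, or API.

```python
# Minimal detection sketch, assuming the `ultralytics` YOLO package;
# the repository's actual pipeline may differ.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained COCO model (assumed weights file)

cap = cv2.VideoCapture(0)   # physical webcam; a Gazebo feed would arrive via ROS instead
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)[0]              # run inference on one frame
    for box in results.boxes:              # iterate over detections
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = model.names[int(box.cls[0])]   # class name, e.g. "bottle", "person"
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```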
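A minimal publisher/subscriber pair for the detection results might look as follows. The topic name `/detected_objects` and the JSON-in-`String` encoding are illustrative simplifications chosen to stay distro-agnostic; a production node would more likely use `vision_msgs`.

```python
# Hedged sketch of a ROS 2 publisher/subscriber pair for detection results.
# Topic name and message encoding are assumptions made for illustration.
import json
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class DetectionPublisher(Node):
    def __init__(self):
        super().__init__("detection_publisher")
        self.pub = self.create_publisher(String, "/detected_objects", 10)

    def publish_detection(self, label, x1, y1, x2, y2):
        # Pack the class label and bounding box into a JSON string.
        msg = String()
        msg.data = json.dumps({"class": label, "bbox": [x1, y1, x2, y2]})
        self.pub.publish(msg)

class DetectionSubscriber(Node):
    def __init__(self):
        super().__init__("detection_subscriber")
        self.create_subscription(String, "/detected_objects", self.on_msg, 10)

    def on_msg(self, msg):
        det = json.loads(msg.data)
        # Print to the terminal, as the task describes.
        self.get_logger().info(f"{det['class']} at {det['bbox']}")

def main():
    rclpy.init()
    rclpy.spin(DetectionSubscriber())
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```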
- ROS 2 workspace with:
  - Object detection and tracking nodes.
  - Launch files to start the detection pipeline.
- Documentation:
  - A README.md with setup instructions and dependencies.
  - Steps to build and run the object detection system.
- Demonstration video or GIF.
- Create a workspace (or move into an existing one):

```bash
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
```
- Clone the package into the workspace's `src` directory:

```bash
git clone https://github.com/Nandostream11/object_detection_tracking.git
```
- Build the package, then source the install space:

```bash
cd ~/ros2_ws
colcon build --packages-select object_detection_tracking
source install/setup.bash
```
- Run the camera publisher node:

```bash
ros2 run object_detection_tracking camera_publisher
```
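For orientation, a camera publisher node of this kind typically wraps an OpenCV capture in `cv_bridge` and publishes `sensor_msgs/Image` frames. The sketch below is an assumption about the node's structure (topic name and frame rate included), not the repository's exact code.

```python
# Hedged sketch of a camera publisher node: grabs frames with OpenCV and
# publishes them as sensor_msgs/Image. Topic name "/camera/image_raw" and
# the ~30 Hz rate are assumptions.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class CameraPublisher(Node):
    def __init__(self):
        super().__init__("camera_publisher")
        self.pub = self.create_publisher(Image, "/camera/image_raw", 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)            # default physical camera
        self.create_timer(1.0 / 30.0, self.tick)  # publish at roughly 30 fps

    def tick(self):
        ok, frame = self.cap.read()
        if ok:
            # Convert the BGR OpenCV frame into a ROS Image message.
            self.pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding="bgr8"))

def main():
    rclpy.init()
    rclpy.spin(CameraPublisher())
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```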
- Run the launch file for detection:

```bash
ros2 launch object_detection_tracking detection_launch.py
```
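A launch file such as `detection_launch.py` usually just starts the relevant nodes together. The sketch below is a generic example; the `detection_node` executable name is a hypothetical placeholder, and the actual launch file in the repository may differ.

```python
# Hedged sketch of what detection_launch.py might contain; executable names
# are assumptions based on the run commands above.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package="object_detection_tracking",
            executable="camera_publisher",   # camera feed node (from the step above)
        ),
        Node(
            package="object_detection_tracking",
            executable="detection_node",     # hypothetical detection/tracking node name
        ),
    ])
```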
- For AprilTag tracking, first install the `pupil-apriltags` Python package:

```bash
pip install pupil-apriltags
```
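Once installed, tags can be detected frame by frame with `pupil_apriltags.Detector`, as in the sketch below. The `tag36h11` family and the local webcam source are assumptions for illustration; the repository's tracking node may be structured differently.

```python
# Minimal AprilTag detection sketch using pupil-apriltags; tag family and
# camera source are assumptions.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detector expects grayscale
    for tag in detector.detect(gray):
        cx, cy = map(int, tag.center)               # tag center in pixel coordinates
        cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
        cv2.putText(frame, f"id={tag.tag_id}", (cx + 5, cy - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("apriltags", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```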