The goal of this project is to integrate a semantic mapping system into a power wheelchair, enhancing user interaction and environmental awareness. The system uses a tablet to give the user real-time feedback about the environment: when a landmark is identified through further inference, a semantic label is shown on the mapped environment. The user can also initiate autonomous navigation toward a landmark by selecting it from the displayed landmarks.
For more details, please go to my portfolio post.
- LUCI Wheelchair
- Intel RealSense D435i
- Windows Surface Pro 2 (with Ubuntu 22.04 LTS)
- ROS 2 Humble
- SLAM Toolbox
- robot_localization
- Nav2
- YOLOv8
- Foxglove Studio
I am not allowed to share any repository or code related to LUCI. If you have any questions, please contact the Assistive & Rehabilitation Robotics Laboratory (argallab) at Shirley Ryan AbilityLab. The subsequent steps are based on the assumption that you have access to their repositories.
To clone the packages, use the following commands:
mkdir -p luci_ws/src
cd luci_ws/src
git clone https://github.com/r-shima/semantic_mapping.git
cd ..
vcs import < src/semantic_mapping/semantic_mapping.repos
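If the vcs command is not found, install vcstool first; assuming the ROS apt repositories are already configured, it is available as a system package:

# vcstool provides the vcs command used above
sudo apt install python3-vcstool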
Once you have access to the Dockerfile for LUCI, build the image:
docker build -t argallab/luci:humble .
Then, run the container with the necessary mounts:
docker run -it --privileged \
-v /dev:/dev \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
-v /home/user/luci_ws/src:/home/luci_ws/src \
-e DISPLAY \
-e QT_X11_NO_MITSHM=1 \
--name luci-humble \
--net=host argallab/luci:humble
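If GUI tools inside the container fail to connect to the display, you may need to allow local containers to access the host's X server; this is a generic X11 workaround, not specific to the LUCI image:

# Grants local root (the container's user) access to the X server
xhost +local:root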
To reenter the container after exiting, start it with:
docker start -i luci-humble
Build packages:
cd /home/luci_ws
colcon build
If there are missing dependencies, run the following for each missing dependency:
apt install ros-humble-<package_name>
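Alternatively, rosdep can resolve and install all declared dependencies in one pass; assuming rosdep is available and initialized inside the container, run this from the workspace root:

# Installs every dependency declared in the packages' package.xml files
rosdep install --from-paths src --ignore-src -r -y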
On the Surface Pro, clone and build the packages:
mkdir -p luci_ws/src
cd luci_ws/src
git clone https://github.com/r-shima/semantic_mapping.git
cd ..
colcon build
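If colcon cannot find the ROS 2 packages on the Surface Pro, the Humble underlay is probably not sourced; assuming a standard native install of ROS 2 Humble, source it before building:

# Default install location for a binary ROS 2 Humble installation
source /opt/ros/humble/setup.bash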
Install Touchegg:
sudo apt install touchegg
Add the following to ~/.config/touchegg/touchegg.conf to make a single tap emulate a left-click in Foxglove Studio:
<application name="Foxglove Studio">
<gesture type="TAP" fingers="1" direction="">
<action type="MOUSE_CLICK">BUTTON=1</action>
</gesture>
</application>
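On Ubuntu, the touchegg package ships a systemd daemon in addition to the per-session client; if gestures are not recognized, make sure the daemon is running. This assumes the stock Ubuntu packaging:

# Assumes the Ubuntu package's systemd unit; skip if the daemon is already running
sudo systemctl enable --now touchegg.service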
- Go to your workspace in Docker (/home/luci_ws) and run the following in different terminals to start LUCI, SLAM Toolbox, robot_localization, and Nav2. Make sure to source the workspace by running source install/setup.bash before each command:

ros2 launch awl_launch awl_wheelchair.launch.py ip:=192.168.1.8
ros2 launch awl_launch awl_teleop.launch.py joystick:=true ps3:=true
ros2 param set /global_params assistance ROUTE
ros2 launch awl_navigation luci_nav.launch.py slam_toolbox:=true
ros2 launch awl_navigation navigation.launch.py

- Connect to the RealSense camera via USB-C and start YOLOv8:

ros2 launch object_detection yolov8_realsense.launch.py

- Start the landmark_manager and semantic_labeling nodes:

ros2 launch landmark_manager landmark_manager.launch.py

- Launch the Foxglove bridge to connect ROS 2 to Foxglove Studio:

ros2 launch foxglove_bridge foxglove_bridge_launch.xml

- On the Surface Pro, open Foxglove Studio. Go to "Open connection" and enter ws://<computer_ip_address>:8765 for the WebSocket URL. Go to Layout -> Import from file and select luci_map.json, which is available in the foxglove directory of the landmark_manager package.

- On the Surface Pro, start Touchegg:

touchegg

- On the Surface Pro, go to your workspace, source it, and launch the GUI:

ros2 launch navigation_gui navigation_gui.launch.py
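As a quick sanity check that everything came up, you can list the running nodes and topics from any sourced terminal; this is generic ROS 2 CLI usage, not project-specific:

# Confirm the expected nodes and topics are present
ros2 node list
ros2 topic list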
- landmark_manager: This is a ROS 2 package that provides services for saving landmarks, canceling navigation, and navigating to landmarks (see the command-line sketch after this list). It also performs semantic labeling by publishing markers for detected doors and tables.
- navigation_gui: This is a ROS 2 package that contains a GUI that allows users to save landmarks and cancel navigation. The GUI displays buttons for detected doors and tables in a scroll area.
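The service names and types below are illustrative assumptions, not landmark_manager's actual interface; list the services first to discover the real names, and check the package's srv definitions for the request fields:

# Discover the services actually advertised (generic ROS 2 CLI)
ros2 service list | grep -i landmark
# Hypothetical call; /save_landmark and SaveLandmark are placeholder names
ros2 service call /save_landmark landmark_manager/srv/SaveLandmark "{name: 'door_1'}"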