ROS Noetic workspace for the FLO V2 robot demo, including dual-arm control, face control, camera-based pose tracking, MoveIt/Gazebo simulation, and the Simon Says game flow.
- `flo_core`: Simon Says game runner, MoveIt action server, GUI, Gazebo and MoveIt launch files.
- `flo_humanoid`: Dynamixel-based arm hardware interface and motor-control nodes.
- `flo_vision`: USB camera launcher and MediaPipe-based arm/hand pose tracker.
- `flo_face`: Face display launcher and serial communication support.
- `flo_core_defs`: Shared ROS messages, services, and actions.
- `flov2_robot_description`: URDF, SRDF, meshes, and robot description assets.
- `utility/`: Host-side helpers for device-path resolution and udev rule generation.
The current Docker workflow launches the full demo stack automatically inside a tmux session:
- `roscore`
- robot simulation via `roslaunch flo_core full_robot_arm_sim.launch`
- camera via `roslaunch flo_vision usb_cam_launcher.launch`
- pose tracking via `roslaunch flo_vision arm_hand_tracker_launcher.launch`
- pose-score monitor
- arm hardware interface via `roslaunch flo_humanoid dual_arm_hardware.launch`
- face launcher via `roslaunch flo_face flo_face_launcher.launch`
- Simon Says game via `roslaunch flo_core simonsays_launcher_prod.launch`

That behavior is defined in `ros_docker_auto_startup_launcher.sh`.
- Ubuntu host machine
- Docker Engine with the Docker Compose plugin
- Git
- X11 access for GUI apps
- PulseAudio or PipeWire audio on the host
- Optional but recommended:
  - USB camera
  - motor controller connected over serial
  - face controller connected over serial
  - AWS credentials for Amazon Polly
```bash
git clone https://github.com/anht-nguyen/FloSystemV2_GameDemo.git
cd FloSystemV2_GameDemo
```

```bash
sudo apt update
sudo apt install -y docker.io docker-compose-plugin git
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker "$USER"
newgrp docker
```

The Docker image expects local files under `certs/`, and `docker-compose.yml` bind-mounts them into the container.

```bash
mkdir -p certs
```

Create `certs/aws-credentials`:
```ini
[flo]
aws_access_key_id = YOUR_AWS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_KEY
```

Create `certs/aws-config`:

```ini
[profile flo]
region = us-east-1
output = json
```

If you are not using Polly yet, create placeholder files so the Docker build and bind mounts still succeed.
The repo now includes a device helper that exports stable environment variables used by Docker and ROS launch files:
```bash
eval "$(python3 utility/device_paths.py --format shell)"
```

This sets:

- `FLO_CAMERA_DEVICE`
- `FLO_MOTORS_DEVICE`
- `FLO_FACE_DEVICE`
- `FLO_CAMERA_SOURCE`
- `FLO_MOTORS_SOURCE`
- `FLO_FACE_SOURCE`
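The helper prints shell assignments that `eval` then exports. The exact values depend on your hardware; as an illustrative sketch only (the `*_SOURCE` values are assumed to record whether each path came from a udev symlink or a legacy fallback), the output might look like:

```bash
# Illustrative output only -- actual paths depend on your host.
export FLO_CAMERA_DEVICE=/dev/flo_camera
export FLO_MOTORS_DEVICE=/dev/flo_motors
export FLO_FACE_DEVICE=/dev/flo_face
export FLO_CAMERA_SOURCE=udev
export FLO_MOTORS_SOURCE=udev
export FLO_FACE_SOURCE=udev
```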
By default it uses:

- udev symlinks if `/etc/udev/rules.d/99-flo-devices.rules` exists
- otherwise legacy paths such as `/dev/video0`, `/dev/ttyUSB0`, and `/dev/ttyACM0`
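The fallback order can be sketched as a small shell function (a hypothetical helper; the real logic lives in `utility/device_paths.py`):

```bash
# Sketch of the assumed fallback order: prefer the stable udev symlink
# when it exists, otherwise fall back to the legacy device path.
resolve_device() {
  symlink="$1"   # e.g. /dev/flo_camera
  legacy="$2"    # e.g. /dev/video0
  if [ -e "$symlink" ]; then
    printf '%s\n' "$symlink"
  else
    printf '%s\n' "$legacy"
  fi
}

# Example: prints /dev/flo_camera if the symlink exists, else /dev/video0.
resolve_device /dev/flo_camera /dev/video0
```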
Before starting GUI apps from Docker, allow local X11 access:
```bash
xhost +local:
```

```bash
export UID
export GID=$(id -g)
export XAUTHORITY=${XAUTHORITY:-$HOME/.Xauthority}
export PULSE_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/pulse
eval "$(python3 utility/device_paths.py --format shell)"
docker compose build
docker compose up -d
```

The container stays up and launches ROS processes in tmux.

```bash
docker exec -it flo_game tmux ls
docker exec -it flo_game tmux attach -t ros
```

To stop everything:

```bash
docker compose down
```

To avoid device-number changes across reboots, generate stable symlinks on the host:
```bash
sudo python3 utility/create_udev_rules.py \
  --camera /dev/video0 \
  --motors /dev/ttyUSB0 \
  --face /dev/ttyACM0 \
  -o /etc/udev/rules.d/99-flo-devices.rules
sudo udevadm control --reload-rules
sudo udevadm trigger
ls -l /dev/flo_camera /dev/flo_motors /dev/flo_face
```

Once installed, `utility/device_paths.py` will prefer `/dev/flo_camera`, `/dev/flo_motors`, and `/dev/flo_face`.
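The generated rules file is expected to look roughly like the fragment below. This is an illustration only: the vendor/product IDs are placeholders, and the real file is generated by `utility/create_udev_rules.py` from your actual device attributes.

```
# Illustrative udev rules only -- not the checked-in output.
KERNEL=="video[0-9]*", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="xxxx", SYMLINK+="flo_camera"
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="xxxx", SYMLINK+="flo_motors"
SUBSYSTEM=="tty", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="xxxx", SYMLINK+="flo_face"
```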
The main game launch file is:
```bash
roslaunch flo_core simonsays_launcher_prod.launch
```

That launch file starts:

- `moveit_controller.py`
- `game_runner.py`
- `simon_says_gui.py`
Game parameters are loaded from flo_core/config/simonsays_game_params.yaml.
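As a rough sketch of how such a launch file might be wired (node names and the exact `rosparam` wiring here are assumptions, not the checked-in file):

```xml
<!-- Hypothetical sketch of simonsays_launcher_prod.launch -->
<launch>
  <rosparam command="load" file="$(find flo_core)/config/simonsays_game_params.yaml"/>
  <node pkg="flo_core" type="moveit_controller.py" name="moveit_controller" output="screen"/>
  <node pkg="flo_core" type="game_runner.py" name="game_runner" output="screen"/>
  <node pkg="flo_core" type="simon_says_gui.py" name="simon_says_gui" output="screen"/>
</launch>
```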
For manual bring-up, run each `roslaunch` in its own terminal, sourcing the device variables first in each:

```bash
eval "$(python3 utility/device_paths.py --format shell)"
roslaunch flo_vision usb_cam_launcher.launch
roslaunch flo_vision arm_hand_tracker_launcher.launch
roslaunch flo_humanoid dual_arm_hardware.launch
roslaunch flo_face flo_face_launcher.launch
./demo_show_actions.sh --side right --repeat 1
```

The repo now includes a host-side startup path for kiosk/demo deployments:

- `host_auto_startup_launcher.sh`: waits for network, updates the repo, resolves devices, then starts Docker Compose
- `flo_game.service`: systemd unit for launching the demo at boot
**Important:** `flo_game.service` is currently checked in with `User=rrl` and paths under `/home/rrl/...`. Update those values before installing it on another machine.
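One way to personalize a copy of the unit is a small helper like the following (a sketch: the `sed` patterns assume the checked-in file literally contains `User=rrl` and paths under `/home/rrl`):

```bash
# Swap the hard-coded user and home directory in a unit file for the
# current user's. Run against a copy, then install the copy.
personalize_unit() {
  unit="$1"
  sed -i "s|^User=rrl$|User=${USER:-$(id -un)}|" "$unit"
  sed -i "s|/home/rrl|$HOME|g" "$unit"
}

# Usage (from the repo root):
#   cp flo_game.service flo_game.service.local
#   personalize_unit flo_game.service.local
```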
Install example:
```bash
sudo cp host_auto_startup_launcher.sh /usr/local/bin/host_auto_startup_launcher.sh
sudo chmod 755 /usr/local/bin/host_auto_startup_launcher.sh
sudo cp flo_game.service /etc/systemd/system/flo_game.service
sudo systemctl daemon-reload
sudo systemctl enable flo_game.service
```

Check logs after reboot:

```bash
systemctl status flo_game.service
tail -n 100 "$HOME/flo_game_startup.log"
```

For host-native setup, use `dev_install.sh`. It currently targets Ubuntu 22.04-style developer bootstrapping and installs:
- Docker and desktop helpers
- ROS Noetic binaries
- catkin tools and rosdep
- Astra/OpenNI-related dependencies
After setup, the non-Docker tmux launcher is:
```bash
./nondocker_ros_auto_startup_launcher.sh
```

```text
FloSystemV2_GameDemo/
├── Dockerfile
├── docker-compose.yml
├── flo_core/
├── flo_core_defs/
├── flo_face/
├── flo_humanoid/
├── flo_vision/
├── flov2_robot_description/
├── utility/
├── ros_docker_auto_startup_launcher.sh
├── nondocker_ros_auto_startup_launcher.sh
├── host_auto_startup_launcher.sh
└── flo_game.service
```
- The stack currently targets ROS Noetic.
- The Docker image builds the catkin workspace during image creation.
- `docker-compose.yml` runs the container in privileged mode so camera, serial, audio, and hot-plugged devices can be passed through cleanly.
- GUI, audio, and AWS credentials all rely on host-side environment and mounted files being present.
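As a hedged illustration of that pass-through wiring (service name taken from the `docker exec -it flo_game` commands above; the mounts and environment entries here are assumptions, not the checked-in file):

```yaml
# Illustrative docker-compose.yml fragment only.
services:
  flo_game:
    build: .
    privileged: true            # broad device access: camera, serial, hot-plug
    environment:
      - DISPLAY                 # X11 display for GUI apps
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix   # X11 socket
      - ./certs:/root/.aws:ro           # AWS credentials for Polly (assumed target path)
```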
License information has not been finalized in this repository yet.