(The equipped suction end effector with servo control for axis rotation isn't connected to the external suction module.)
-
Started by setting up the `pydobot` package — a lightweight USB serial communication library — to interface the local system with the Dobot Magician hardware.
-
Conducted experiments using the package to test various PTP (Point-To-Point) motion modes for robot control.
-
Here is the Dobot Magician Hardware user guide
-
Note: The joint sensor values extracted by `pydobot` provide:
- The end-effector (EEF) pose in millimeters within the robot's workspace,
- Joint positions in degrees,
- An `r` parameter representing the end-effector rotation angle in degrees, which is meaningful only if an end-effector is physically attached.
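As a rough sketch of how these values can be pulled in code (the 8-tuple layout follows the pydobot README; the serial port path is an assumption for a typical Linux setup):

```python
def label_pose(pose):
    """Unpack the 8-tuple returned by Dobot.pose() into a labeled dict:
    (x, y, z, r, j1, j2, j3, j4) -> EEF position in mm, r in degrees,
    and the four joint angles in degrees."""
    x, y, z, r, j1, j2, j3, j4 = pose
    return {"eef_mm": (x, y, z), "r_deg": r, "joints_deg": (j1, j2, j3, j4)}

def demo(port="/dev/ttyUSB0"):  # port path is an assumption; adjust per system
    # Import deferred so label_pose() stays usable without hardware attached.
    from pydobot import Dobot
    device = Dobot(port=port)
    print(label_pose(device.pose()))
    # PTP move in Cartesian space; wait=True blocks until the move completes.
    device.move_to(200.0, 0.0, 50.0, 0.0, wait=True)
    device.close()
```

Calling `demo()` with the arm connected prints the labeled pose; per the note above, `r_deg` is only meaningful with an end-effector attached.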
-
Software stack used: ROS2 Humble (would definitely work in Jazzy and Kilted as long as Ubuntu 24.04 is used for those distros). Tested on Ubuntu 22.04, Python 3.10.12, PyDobot.
The Dobot Magician does not use high-resolution absolute encoders on all joints. Instead, its position feedback relies on:
- Stepper motors (instead of servo motors),
- Step counting implemented in firmware to keep track of motor positions,
- Mechanical limit switches to establish the home (zero) position reference.
- Upon performing a homing operation, the robot establishes a known zero reference point using the physical limit switches.
- From this zero position, it counts every motor step to estimate the current joint angles.
- The Cartesian position `(x, y, z, r)` is calculated by the robot's firmware through forward kinematics based on these joint angles.
Thus, when querying the robot’s pose, you are not receiving direct sensor measurements but rather the internal model maintained by the firmware, derived from commanded and tracked stepper motor movements.
- Because this is an open-loop system after homing, if you physically move the robot arm by hand, the firmware cannot detect this disturbance.
- The system assumes that no steps are missed or skipped during operation.
- If the arm is bumped, overloaded, or experiences mechanical slips causing lost steps, the reported pose can become inaccurate until the robot is homed again.
Understanding this behavior is crucial when interpreting pose feedback and designing experiments or applications involving the Dobot Magician.
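The step-counting model above can be illustrated with a toy conversion (the steps-per-revolution, microstepping, and gear-ratio values here are made-up placeholders, not actual Dobot firmware constants):

```python
def steps_to_joint_deg(step_count, steps_per_rev=200, microsteps=16, gear_ratio=10.0):
    """Convert a firmware step count into an estimated joint angle in degrees.
    All parameters are illustrative placeholders, not real Dobot constants."""
    steps_per_joint_rev = steps_per_rev * microsteps * gear_ratio
    return (step_count / steps_per_joint_rev) * 360.0

# After homing, the counter is zero, so the estimated angle is zero.
# If the arm is then moved by hand, the counter does not change, so the
# estimate silently diverges from the true joint angle (open loop).
```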
-
Interfaced the Dobot Magician to ROS2 by extracting the joint states and EEF states and publishing them to ROS2 topics at a fixed rate (10 Hz)
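A minimal sketch of the message-construction side of that interface, kept free of rclpy so it runs standalone (the joint names are assumptions; the degrees-to-radians conversion follows from pydobot reporting joints in degrees while ROS convention is radians):

```python
import math

JOINT_NAMES = ["joint_1", "joint_2", "joint_3", "joint_4"]  # assumed URDF names
PUBLISH_PERIOD_S = 1.0 / 10.0  # 10 Hz timer period

def make_joint_state(joints_deg, stamp_s):
    """Build a JointState-like dict from pydobot's joint angles (degrees).
    ROS convention is radians, so the conversion happens here."""
    return {
        "stamp_s": stamp_s,
        "name": JOINT_NAMES,
        "position": [math.radians(d) for d in joints_deg],
    }

# In the actual node, an rclpy timer with period PUBLISH_PERIOD_S would call
# device.pose(), slice out the joint angles, and publish the result as a
# sensor_msgs/JointState message.
```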
-
Home pose figure given below (all joint positions at 0 rad/deg)
-
Verified the ROS2 state interface package in RViz using a digital-twin example; the states are matching (need to fine-tune a bit more after the control pkg is integrated) ---> shown below
-
Control functionality via ROS2 is done: the entire hardware was successfully interfaced with ROS2 with both c-space as well as task-space control functionality. Codebase here
-
From the videos below there is clearly an undesirable offset in the digital twin model (the URDF in RViz). The URDF was taken from the official Dobot docs, but for some reason there was already an issue with the joint offsets, even though the joint axes clearly match up with the hardware joint axes. Will need to fix the URDF.
-
Another visible issue is that the feedback update rate from the hardware is close to 2.5Hz, so the RViz update is a little on the choppy side. (Any suggestions to reduce the choppiness, if that is even possible, are welcome.)
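One possible mitigation (just an idea, not something in the codebase yet) is to linearly interpolate between consecutive 2.5 Hz samples and republish at 10 Hz, at the cost of roughly one sample period of extra latency:

```python
def interpolate_joints(prev, curr, alpha):
    """Linear interpolation between two joint-angle samples.
    alpha in [0, 1]: 0 -> prev sample, 1 -> curr sample."""
    return [p + alpha * (c - p) for p, c in zip(prev, curr)]

def upsample(samples, factor=4):
    """Expand a 2.5 Hz sample stream to 10 Hz (factor=4) by inserting
    interpolated joint states between each consecutive pair of samples."""
    out = []
    for prev, curr in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(interpolate_joints(prev, curr, k / factor))
    out.append(list(samples[-1]))
    return out
```

The trade-off is that each published state lags the hardware by up to one 2.5 Hz period, so this only smooths visualization; it should not be fed back into control.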
-
The current ROS2 interface architecture for the Dobot hardware is written in rclpy with pub/sub, but I feel like creating an action server for it may make it more robust (something to work on later on)
-
Video Demos Given below for both joint as well as end effector control
-
So for now the digital twin part is complete as a rough prototype.
-
Note: May build a custom pkg for Dobot communication in C++ later on, after testing out MoveIt2 and maybe even some VLA/RL algorithms on this hardware.
-
Gesture Control
- Note: Currently using the RL_Games framework with the PPO algorithm for the training process (for reach-to-pose).
- Model Selected from default assets
- Custom GymEnv and MDP Package created
- First Test Demo (3 envs)
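For reference, a distance-based shaping term of the kind typically used for reach-to-pose rewards; this is an illustrative stand-in, not the exact reward in the custom MDP package:

```python
import math

def reach_reward(eef_pos, target_pos, std=0.1):
    """Shaped reach reward: 1 - tanh(distance / std), in (0, 1].
    Peaks at 1.0 when the EEF is exactly at the target and decays
    smoothly with distance; std sets how sharp the shaping is."""
    dist = math.sqrt(sum((e - t) ** 2 for e, t in zip(eef_pos, target_pos)))
    return 1.0 - math.tanh(dist / std)
```

The smooth gradient everywhere in the workspace is what makes tanh shaping a common default over a sparse success bonus.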
-
First Training Demo
-
After training 5 envs for more than 500 episodes (less than 1000-episodes)
-
reach_to_pose_500epi_training.1.mp4
-
After training 5 envs for more than 2k episodes (less than 4k-episodes)
-
reach_to_pose_2kepi_training.1.mp4
-
-
Second Training Demo
-
Trained on an improved reward model (better than the first; check the IsaacLab MDP pkg for reference). The results were far from satisfactory with the rl_games PPO model even after training for 18k episodes. The rewards plateaued after around the 8kth episode; the max reward over the 0-18k episode range was around -1.42 (net per episode). Although, on the bright side, the jerky joint motion stopped, which is a huge improvement in terms of motion quality. As of now the motion is joint-position-based control without any kinematics; maybe I have to train more or improve the reward model for better reach-to-pose accuracy.
-
After training 5 envs for 10400 episodes
-
result_after_10400epi.webm
-
-
Next Phase (moving to end-effector/task-space based control via IDK of the model)
- During the initial tests the action-space control was purely joint control (c-space control). Since there were around 8 states to control, with no kinematic or dynamic constraints, mapping to the desired actions was very hard, and there clearly isn't much time to train a model to fit something this complex. So, to get better control and response plus faster convergence to the required solution, kinematic constraints were introduced: by using IDK, the action control will be in t-space (in the global/env frame of reference).
- Just a small side track: teleop policy integration demo video (down here) --> (will be used to collect data/demonstrations for imitation learning for more advanced tasks)
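To make the IDK-based t-space control idea concrete, here is a toy damped-least-squares differential-IK step on a planar 2-link arm (link lengths, gains, and damping are made-up illustration values; the real setup would use the robot's own Jacobian):

```python
import math

L1, L2 = 0.135, 0.147  # illustrative link lengths (m), not the Dobot's real geometry

def fk(q1, q2):
    """Planar 2-link forward kinematics: joint angles (rad) -> EEF (x, y)."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def ik_step(q1, q2, tx, ty, gain=0.5, damping=1e-6):
    """One damped-least-squares differential-IK step toward (tx, ty):
    dq = J^T (J J^T + damping*I)^{-1} * gain * error."""
    x, y = fk(q1, q2)
    ex, ey = gain * (tx - x), gain * (ty - y)
    # Analytic Jacobian of fk
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    # A = J J^T + damping * I  (symmetric 2x2)
    a = j11 * j11 + j12 * j12 + damping
    b = j11 * j21 + j12 * j22
    d = j21 * j21 + j22 * j22 + damping
    det = a * d - b * b
    # v = A^{-1} e, via the closed-form 2x2 inverse
    vx = (d * ex - b * ey) / det
    vy = (-b * ex + a * ey) / det
    # dq = J^T v
    return q1 + j11 * vx + j21 * vy, q2 + j12 * vx + j22 * vy
```

The appeal for the RL action space is that the policy outputs a 2D (or 3D) task-space delta and the IDK step maps it to joint commands, so the network no longer has to learn the arm's kinematics implicitly.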

