Why does GraspNet inference require reversed points as input? #28
Hi there,
Thanks for the amazing work! I have a question about the usage of GraspNet.
In your implementation, I noticed that the inference step takes a negated copy of the point cloud:
```python
def compute_grasp_pose(self, full_pcd):
    points, _ = o3dp.pcd2array(full_pcd)
    grasp_pcd = copy.deepcopy(full_pcd)
    grasp_pcd.points = o3d.utility.Vector3dVector(-points)
    # generating grasp poses.
    gg = self.graspnet_baseline.inference(grasp_pcd)
    gg.translations = -gg.translations
    gg.rotation_matrices = -gg.rotation_matrices
    gg.translations = gg.translations + gg.rotation_matrices[:, :, 0] * self.config['refine_approach_dist']
    gg = self.graspnet_baseline.collision_detection(gg, points)
```
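For context on why I suspect a handedness issue: negating every point applies the linear map −I, and in 3D det(−I) = −1, so this is a reflection (handedness flip) rather than a proper rotation. A minimal NumPy sketch (variable names are illustrative, not from the repo):

```python
import numpy as np

# Negating all coordinates applies the linear map M = -I.
M = -np.eye(3)
# In 3D, det(-I) = (-1)^3 = -1, so M is a reflection,
# not a proper rotation (which would have det = +1).
det_M = np.linalg.det(M)

# Round-trip: map a point into the negated frame and back.
p = np.array([0.1, -0.2, 0.3])
p_neg = M @ p       # the point as inference would see it
p_back = -p_neg     # negating the result recovers the original
```

This is why I am wondering whether the double negation (points on the way in, translations/rotations on the way out) is compensating for a left-handed frame somewhere in the pipeline.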
My questions are:
- Why does `graspnet_baseline.inference()` take the negated points as input here?
- Is this related to the oracle “ideal camera” setup?
- Or is it due to a left-handed vs right-handed coordinate system issue from OpenGL rendering?
Any clarification would be really helpful. Thanks!