Why does GraspNet inference require reversed points as input? #28

@Terence233

Description

Hi there,

Thanks for the amazing work! I have a question about the usage of GraspNet.

In your implementation, I noticed that the inference step requires reversing the point cloud:

def compute_grasp_pose(self, full_pcd):
    points, _ = o3dp.pcd2array(full_pcd)
    grasp_pcd = copy.deepcopy(full_pcd)
    # the point cloud is negated before inference
    grasp_pcd.points = o3d.utility.Vector3dVector(-points)

    # generating grasp poses.
    gg = self.graspnet_baseline.inference(grasp_pcd)
    # the predicted poses are then negated back
    gg.translations = -gg.translations
    gg.rotation_matrices = -gg.rotation_matrices
    gg.translations = gg.translations + gg.rotation_matrices[:, :, 0] * self.config['refine_approach_dist']
    # collision detection runs against the original (non-negated) points
    gg = self.graspnet_baseline.collision_detection(gg, points)

My questions are:

  1. Why does graspnet_baseline.inference() here take the negated points as input?
  2. Is this related to the oracle “ideal camera” setup?
  3. Or is it due to a left-handed vs right-handed coordinate system issue from OpenGL rendering?
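For context on question 3, here is a quick numpy check (my own sketch, not code from this repository) of what the negation does algebraically. Negating every point is a point inversion through the origin, i.e. multiplication by -I, which in 3D has determinant -1 and therefore flips handedness; likewise, negating a proper rotation matrix R yields det(-R) = -1, so -R is an improper (reflecting) orthogonal matrix rather than a rotation:

```python
import numpy as np

# Point inversion p -> -p is multiplication by -I.
# In 3D, det(-I) = (-1)**3 = -1: a reflection, not a rotation.
det_neg_identity = np.linalg.det(-np.eye(3))

# Negating a proper rotation R (det R = +1) gives det(-R) = -1,
# i.e. an improper orthogonal matrix.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])  # 90 deg rotation about z
det_R = np.linalg.det(R)
det_negR = np.linalg.det(-R)

print(det_neg_identity, det_R, det_negR)
```

This is why the handedness question seems relevant: the combined negate/un-negate steps are only consistent if the pipeline deliberately accounts for that reflection somewhere.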

Any clarification would be really helpful. Thanks!
