
Some observations during test #21

@ge79puv

Description


Hi @xukechun,

thanks for your great work! I have tested your repo successfully, but I also noticed some issues during my tests.

First of all, sometimes it seems that the robot could directly grasp the target object, yet it tends to grasp other objects several times first, even objects that are far away from the target. I guess this is because the output logits are not accurate enough; in my opinion, the selectivity toward the target object is not very pronounced.

Moreover, when the robot tries to grasp an object, it usually touches the object and the object then slips away, which results in a failed grasp. I think this could be improved by using a better pretrained grasping model.

In addition, for the real-world experiments, do we also need to obtain segmentation masks? For training and testing in the PyBullet simulator, we need all of "color_image, depth_image, mask_image". Should we add a segmentation model for the real-world test? Also, in the real world, could we directly use the "trained_model.pth" from https://drive.google.com/drive/folders/1LCuoXX92X8L9wqJTbVqvskjRhTJrDDay?usp=sharing? Which parts of your code would need to be changed for a real-world test?
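To illustrate what I mean by adding a segmentation step: a minimal sketch, assuming an off-the-shelf segmentation model that outputs per-pixel class logits (`logits_to_mask` is a hypothetical helper I wrote for illustration, not part of this repo), showing how its output could be collapsed into a single-channel mask image like the one PyBullet provides:

```python
import numpy as np

def logits_to_mask(seg_logits):
    """Collapse per-pixel class logits of shape (H, W, C) into a
    single-channel mask image of shape (H, W) holding integer object
    IDs, mimicking the mask_image the PyBullet simulator returns."""
    return np.argmax(seg_logits, axis=-1).astype(np.uint8)

# Toy example: a 2x2 image with 3 candidate "object" classes.
logits = np.array([[[0.1, 0.8, 0.1], [0.9, 0.05, 0.05]],
                   [[0.2, 0.2, 0.6], [0.3, 0.4, 0.3]]])
mask = logits_to_mask(logits)
print(mask.tolist())  # each pixel holds the winning object ID
```

This would let the real-world pipeline produce the same "color_image, depth_image, mask_image" triple as the simulator, with the mask coming from the segmentation model instead of PyBullet's ground truth.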

I am not sure whether my observations and analysis are correct. It would be great if you could give me some advice :)

Best regards
