
Jun Sun (Lucio)

I focus on embodied manipulation. I am currently on a gap year and will join the MSE in Robotics program at the University of Pennsylvania next year. I am at Xi'an Jiaotong-Liverpool University (XJTLU), advised by Prof. Yong Yue and Prof. Yaran Chen, and I previously interned at Westlake Robotics and at Nanjing University, advised by Prof. Shangke Lyu.

Goal

Build an embodied system that can robustly generalize and complete long-horizon, complex manipulation tasks in open environments.

Research focus

I aim to model how embodied agents understand and represent the world, and to develop systems that can interact with the environment in a manner consistent with such internal representations.

I study why Vision-Language-Action (VLA)-based manipulation systems struggle on long-horizon, complex tasks in open environments: they often lack stable representations of task-relevant state, temporal dependencies, and transition conditions, which limits their robustness, generalization, and success rates.

My work aims to introduce stronger state modeling and decision-support mechanisms for VLA.

Research interests

  • world modeling
  • state representation & usage
  • physical reasoning

Selected projects

  1. VidTailor_Prototypes (HTML)

  2. robotics_arXiv_daily, forked from jiangranlv/robotics_arXiv_daily (Python)

  3. TempoFit: Plug-and-Play Layer-Wise Temporal KV Memory for Long-Horizon Vision-Language-Action Manipulation