Thanks for your wonderful work on VLA finetuning with preference optimization! I'm a little confused about a few points as I read the paper.
You mention that:
"let ζw and ζl denote the chosen and rejected trajectories starting from the same initial state"
Are these trajectories collected during the online sampling?
Does "the same initial state" mean the same robot proprioception and the same object positions and poses?
How do you decide the initial state? Is it random, or do you choose it?
You also mention:
"during the k-th iteration, we (1) first sample numerous trajectories for a variety of tasks and obtain Dk; (2) then we calculate the costs for each trajectory using Eq. (9) and rank these trajectories accordingly per task; (3) we pair the top-m and bottom-m trajectories"
So are the paired trajectories in each pair sampled from the same initial state?
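To make my question concrete, here is my current understanding of steps (2) and (3) as a minimal Python sketch: per task, rank the sampled trajectories by their cost (from Eq. (9), lower being preferred), then pair the top-m with the bottom-m as (chosen, rejected). All names here (`Trajectory`, `make_preference_pairs`) are my own illustration, not code from your repo, so please correct me if this misses how the pairing is actually done.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    task: str
    cost: float  # cost from Eq. (9); lower is assumed better

def make_preference_pairs(trajs, m):
    """Group trajectories by task, rank by cost, and pair
    the top-m (chosen) with the bottom-m (rejected)."""
    by_task = {}
    for t in trajs:
        by_task.setdefault(t.task, []).append(t)
    pairs = []
    for task, group in by_task.items():
        ranked = sorted(group, key=lambda t: t.cost)
        # best trajectory paired with worst, second-best with second-worst, etc.
        for chosen, rejected in zip(ranked[:m], ranked[-m:][::-1]):
            pairs.append((chosen, rejected))
    return pairs

# Example: 4 sampled trajectories for one task, m=1
trajs = [Trajectory("pick", c) for c in [0.2, 1.5, 0.7, 3.0]]
pairs = make_preference_pairs(trajs, m=1)
print(pairs[0][0].cost, pairs[0][1].cost)  # 0.2 3.0
```

In particular, I'd like to confirm whether the trajectories being grouped and paired this way must also share the same initial state, or whether any two trajectories of the same task can be paired.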
Hoping for your early reply. Thanks in advance!