In the method generate_sequences of vLLMRolloutWithTool located at src/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py, I have a question regarding the batch construction:
response_attention_mask = torch.stack(response_attention_mask_list, dim=0)
response = torch.stack(response_list, dim=0)
result_mask = torch.stack(result_mask_list_padded, dim=0)

if self.config.n > 1 and do_sample:
    ori_input_ids = ori_input_ids.repeat_interleave(self.config.n, dim=0)
    attention_mask = attention_mask.repeat_interleave(self.config.n, dim=0)
    position_ids = position_ids.repeat_interleave(self.config.n, dim=0)
    batch_size = batch_size * self.config.n

seq = torch.cat([ori_input_ids, response], dim=-1)

response_length = response.size(1)
delta_position_id = torch.arange(1, response_length + 1, device=position_ids.device)
delta_position_id = delta_position_id.unsqueeze(0).repeat(batch_size, 1)

response_position_ids = position_ids[:, -1:] + delta_position_id
position_ids = torch.cat([position_ids, response_position_ids], dim=-1)
attention_mask = torch.cat((attention_mask, response_attention_mask), dim=-1)

# result mask: result part is 0, other part is 1
loss_mask = result_mask * response_attention_mask

batch = TensorDict({
    'prompts': ori_input_ids,
    'responses': response,
    'input_ids': seq,  # here input_ids becomes the full sequence (prompt + response)
    'attention_mask': attention_mask,
    'loss_mask': loss_mask,
    'position_ids': position_ids
}, batch_size=batch_size)
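For concreteness, here is a minimal shape walk-through of the tensors above; the concrete sizes (`batch_size`, `prompt_length`, `response_length`) are made up for illustration and not taken from the actual config:

```python
import torch

# Illustrative values only.
batch_size, prompt_length, response_length = 2, 8, 4

ori_input_ids = torch.zeros(batch_size, prompt_length, dtype=torch.long)   # (B, P)
response = torch.zeros(batch_size, response_length, dtype=torch.long)      # (B, R)
response_attention_mask = torch.ones(batch_size, response_length)          # (B, R)
result_mask = torch.ones(batch_size, response_length)                      # (B, R)
prompt_attention_mask = torch.ones(batch_size, prompt_length)              # (B, P)

seq = torch.cat([ori_input_ids, response], dim=-1)                         # (B, P + R)
attention_mask = torch.cat(
    [prompt_attention_mask, response_attention_mask], dim=-1)              # (B, P + R)
loss_mask = result_mask * response_attention_mask                          # (B, R)  <- shorter

print(seq.shape, attention_mask.shape, loss_mask.shape)
# torch.Size([2, 12]) torch.Size([2, 12]) torch.Size([2, 4])
```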
Here, the loss_mask is shorter than both attention_mask and seq (since it only covers the response part). Will this mismatch cause any problems during training? For example, could it result in misaligned masking, uncovered tokens, or potential errors in downstream loss computation?
Does the downstream loss computation automatically handle this kind of length mismatch? I'd appreciate any clarification. Thank you!
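I am not sure exactly how the trainer consumes loss_mask, but if it slices the log-probs down to the last response_length positions before applying the mask (a common pattern; the helper below is a hypothetical sketch, not verl's actual code), then the response-length mask lines up by construction and the full-length attention_mask is only used for the forward pass. A minimal sketch under that assumption:

```python
import torch
import torch.nn.functional as F


def masked_response_nll(logits, input_ids, loss_mask, response_length):
    """Hypothetical helper: compute NLL only over the response tokens.

    logits:          (B, P + R, V)  model outputs over the full concatenated sequence
    input_ids:       (B, P + R)     the 'input_ids' entry of the batch (prompt + response)
    loss_mask:       (B, R)         1 for trainable response tokens, 0 for tool results / padding
    response_length: R
    """
    # Logits at position t predict token t + 1, so keep exactly the R
    # positions whose predictions correspond to the response tokens.
    resp_logits = logits[:, -response_length - 1:-1, :]   # (B, R, V)
    resp_labels = input_ids[:, -response_length:]         # (B, R)

    nll = F.cross_entropy(
        resp_logits.reshape(-1, resp_logits.size(-1)),
        resp_labels.reshape(-1),
        reduction="none",
    ).view_as(resp_labels)                                 # (B, R)

    # loss_mask already has response length, so it matches the sliced tensors;
    # no padding against the full sequence length is needed in this scheme.
    return (nll * loss_mask).sum() / loss_mask.sum().clamp(min=1)
```

If instead the loss were computed over the full sequence length, the mask would first have to be left-padded with zeros for the prompt region, e.g. something like `torch.cat([torch.zeros(batch_size, prompt_length), loss_mask], dim=-1)`; without that step the shapes would indeed be misaligned.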