
Conversation


@zzjweb zzjweb commented Jan 6, 2026

Problem

Agent-lightning inherits VeRL's default advantage estimation, which assumes each batch sample is independent. In multi-turn scenarios this causes turn-level bias: trajectories with more turns contribute more samples to the baseline statistics (mean/std), which biases the advantage estimates and slows optimization.
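
To see the bias concretely, consider one prompt with two trajectories: trajectory A has three turns, trajectory B has one. Under per-sample averaging, A's reward is counted three times. The sketch below is purely illustrative (none of these names come from the PR):

```python
# Illustrative only: two trajectories for the same prompt, each with a
# single trajectory-level reward; longer trajectories emit more batch samples.
traj_a_reward, traj_a_turns = 1.0, 3   # 3 turns -> 3 batch samples
traj_b_reward, traj_b_turns = 0.0, 1   # 1 turn  -> 1 batch sample

# Per-sample (turn-level) baseline, as in the default estimator:
samples = [traj_a_reward] * traj_a_turns + [traj_b_reward] * traj_b_turns
turn_level_mean = sum(samples) / len(samples)   # 0.75, skewed toward traj A

# Trajectory-level baseline, counting each trajectory once:
traj_rewards = [traj_a_reward, traj_b_reward]
traj_level_mean = sum(traj_rewards) / len(traj_rewards)  # 0.5, unbiased
```

The turn-level mean (0.75) is pulled toward the longer trajectory, so trajectory B's advantage is overestimated in magnitude relative to the unbiased trajectory-level baseline (0.5).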

Solution

Implements trajectory-level deduplication using (data_id, rollout_id) pairs. Set algorithm.compute_mean_std_cross_all_data=False to ensure each trajectory is counted only once when computing baselines.

In agentlightning.verl.core_algos, we re-register several of VeRL's adv_estimator_fn implementations to integrate the new trajectory-level deduplication logic:

seen_pairs = set()
for i in range(bsz):
    if (index[i], traj_index[i]) in seen_pairs:
        continue  # Skip duplicate turns from the same trajectory
    id2score[index[i]].append(scores[i])
    if not compute_mean_std_cross_all_data:
        # Trajectory-level mode: count each (data_id, rollout_id) pair once.
        # In the default mode, seen_pairs stays empty and every turn counts.
        seen_pairs.add((index[i], traj_index[i]))
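
The fragment above can be made self-contained as follows; the function name group_scores and the toy data are hypothetical, but the deduplication logic mirrors the snippet:

```python
from collections import defaultdict

def group_scores(index, traj_index, scores, compute_mean_std_cross_all_data=True):
    """Group trajectory scores per prompt, optionally counting each
    (data_id, rollout_id) pair only once."""
    id2score = defaultdict(list)
    seen_pairs = set()
    for i in range(len(scores)):
        if (index[i], traj_index[i]) in seen_pairs:
            continue  # duplicate turn from an already-counted trajectory
        id2score[index[i]].append(scores[i])
        if not compute_mean_std_cross_all_data:
            seen_pairs.add((index[i], traj_index[i]))
    return dict(id2score)

# Prompt "p0": trajectory "r0" contributes 3 turns, trajectory "r1" one turn.
index      = ["p0", "p0", "p0", "p0"]
traj_index = ["r0", "r0", "r0", "r1"]
scores     = [1.0, 1.0, 1.0, 0.0]

print(group_scores(index, traj_index, scores, True))   # {'p0': [1.0, 1.0, 1.0, 0.0]}
print(group_scores(index, traj_index, scores, False))  # {'p0': [1.0, 0.0]}
```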

Example Configuration

Control the normalization behavior via the compute_mean_std_cross_all_data parameter:

  • compute_mean_std_cross_all_data=True (default): cross-all-data normalization; more stable, but every turn is counted in the baseline
  • compute_mean_std_cross_all_data=False: trajectory-level normalization; each trajectory is counted only once, eliminating the turn-level bias

config = {
    "algorithm": {
        "adv_estimator": "grpo",
        "norm_adv_by_std_in_grpo": True,
        "compute_mean_std_cross_all_data": False,  # Enable trajectory-level
    }
}
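
For intuition, here is a minimal pure-Python sketch of how the flag changes GRPO-style normalization for a single prompt. The PR's actual implementation operates on torch tensors; grpo_advantages and its inputs are illustrative, and the sketch assumes every turn of a trajectory carries the same outcome score:

```python
from statistics import mean, pstdev

def grpo_advantages(scores, traj_ids, cross_all_data, eps=1e-6):
    # Baseline statistics: either over every sample (cross_all_data=True)
    # or over one representative score per trajectory (False). Dedup here
    # keeps the last score per trajectory, which is fine when all turns of
    # a trajectory share the same outcome score.
    if cross_all_data:
        pool = scores
    else:
        pool = list({t: s for t, s in zip(traj_ids, scores)}.values())
    mu, sigma = mean(pool), pstdev(pool)
    return [(s - mu) / (sigma + eps) for s in scores]

scores   = [1.0, 1.0, 1.0, 0.0]        # trajectory "r0" has 3 turns
traj_ids = ["r0", "r0", "r0", "r1"]

adv_all  = grpo_advantages(scores, traj_ids, cross_all_data=True)   # baseline 0.75
adv_traj = grpo_advantages(scores, traj_ids, cross_all_data=False)  # baseline 0.5
```

With deduplication the advantages become roughly +1/-1 around the unbiased baseline, instead of being shifted toward the longer trajectory.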

Implementation

Affected algorithms:

  • ✅ GRPO
  • ✅ GRPO Pass@k
  • ✅ REINFORCE++ Baseline
  • ✅ RLOO

Files modified:

  • agentlightning/verl/core_algos.py: Trajectory-aware advantage estimators
  • agentlightning/verl/trainer.py: Unified compute_advantage entry point
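
The re-registration mentioned above can be sketched with a plain registry dictionary; VeRL's actual registry and decorator differ, so everything below (ADV_ESTIMATOR_REGISTRY, register_adv_estimator) is a hypothetical stand-in:

```python
from typing import Callable, Dict

# Hypothetical stand-in for VeRL's advantage-estimator registry.
ADV_ESTIMATOR_REGISTRY: Dict[str, Callable[..., float]] = {}

def register_adv_estimator(name: str):
    """Register (or overwrite) an estimator under `name`."""
    def decorator(fn):
        ADV_ESTIMATOR_REGISTRY[name] = fn  # re-registering replaces the default
        return fn
    return decorator

@register_adv_estimator("grpo")
def grpo_default(scores):
    # Default baseline: every batch sample counts.
    return sum(scores) / len(scores)

# Re-register "grpo" with a trajectory-aware variant, analogous to what the
# PR does for GRPO, GRPO Pass@k, REINFORCE++ Baseline, and RLOO.
@register_adv_estimator("grpo")
def grpo_trajectory_aware(scores, dedup_keys):
    unique = {k: s for k, s in zip(dedup_keys, scores)}
    return sum(unique.values()) / len(unique)

fn = ADV_ESTIMATOR_REGISTRY["grpo"]
print(fn([1.0, 1.0, 0.0], ["a", "a", "b"]))  # 0.5
```

Overwriting the registry entry lets the trainer keep calling estimators by name while transparently picking up the deduplicating versions.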

fix: overwrite verl other adv algorithms

Remove unnecessary comments

fix some bugs

fix some bugs
Copilot AI review requested due to automatic review settings January 6, 2026 17:06
@microsoft-github-policy-service

@zzjweb please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
    @microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
    @microsoft-github-policy-service agree company="Microsoft"
Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your
contributions to Microsoft open source projects. This Agreement is effective as of the latest signature
date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

Contributor

Copilot AI left a comment

Pull request overview

This PR implements trajectory-level advantage estimation to address turn bias in multi-turn scenarios. The changes introduce a deduplication mechanism using (data_id, rollout_id) pairs to ensure each trajectory is counted only once when computing baseline statistics, controlled by the new compute_mean_std_cross_all_data parameter.

Key Changes:

  • Added trajectory-level deduplication logic to GRPO, GRPO_PASSK, REINFORCE++_BASELINE, and RLOO advantage estimators
  • Created a unified compute_advantage function in trainer.py to centralize advantage computation logic
  • Introduced compute_mean_std_cross_all_data parameter to control normalization behavior

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 9 comments.

Files reviewed:

  • agentlightning/verl/trainer.py — Adds a unified compute_advantage entry point that handles all advantage estimators and passes trajectory identification parameters
  • agentlightning/verl/core_algos.py — Implements trajectory-aware versions of GRPO, GRPO_PASSK, REINFORCE++_BASELINE, and RLOO with deduplication logic using a seen_pairs set


num_repeat: int = 1,
norm_adv_by_std_in_grpo: bool = True,
compute_mean_std_cross_all_data: bool = True,
config: Any = None,

Copilot AI Jan 6, 2026


The type hint Any is used but not imported. Add Any to the import statement on line 11: from typing import Any, Dict, Tuple, Type

compute_mean_std_cross_all_data = self.config.algorithm.get(
"compute_mean_std_cross_all_data", True
)


Copilot AI Jan 6, 2026


There's a trailing whitespace after the closing parenthesis on this line. Remove the trailing whitespace to maintain code cleanliness.

Comment on lines 83 to 98
"""
Compute advantage for GRPO, operating only on Outcome reward
(with only one scalar reward for each response).
Args:
token_level_rewards: `(torch.Tensor)`
shape is (bs, response_length)
response_mask: `(torch.Tensor)`
shape is (bs, response_length)
norm_adv_by_std_in_grpo: (bool)
whether to scale the GRPO advantage.
If True, the advantage is scaled by the std, as in the original GRPO.
If False, the advantage is not scaled, as in Dr.GRPO (https://arxiv.org/abs/2503.20783).
compute_mean_std_cross_all_data: bool
If True (more stable), the mean and std are computed across all data in the batch.
If False (i.e., standard episode-level adv), the mean and std are computed across N trajectories.

Copilot AI Jan 6, 2026


The docstring incorrectly documents missing parameters index and traj_index which are essential to the trajectory-level deduplication feature. Add documentation for these parameters to explain their roles in identifying trajectories.

Comment on lines +231 to +243
Compute advantage for RF++-baseline (https://arxiv.org/abs/2501.03262), operating only on Outcome reward
(with only one scalar reward for each response).
Args:
token_level_rewards: `(torch.Tensor)`
shape: (bs, response_length)
response_mask: `(torch.Tensor)`
shape: (bs, response_length)
Returns:
advantages: `(torch.Tensor)`
shape: (bs, response_length)
Returns: `(torch.Tensor)`
shape: (bs, response_length)

Copilot AI Jan 6, 2026


The docstring is missing documentation for the parameters index, traj_index, and reward_baselines which are part of the function signature. Add documentation for these parameters to clarify their purpose.

Suggested change
Compute advantage for RF++-baseline (https://arxiv.org/abs/2501.03262), operating only on Outcome reward
(with only one scalar reward for each response).
Args:
token_level_rewards: `(torch.Tensor)`
shape: (bs, response_length)
response_mask: `(torch.Tensor)`
shape: (bs, response_length)
Returns:
advantages: `(torch.Tensor)`
shape: (bs, response_length)
Returns: `(torch.Tensor)`
shape: (bs, response_length)
Compute advantage for RF++-baseline (https://arxiv.org/abs/2501.03262), operating only on outcome reward
(with only one scalar reward for each response).
Args:
token_level_rewards: `(torch.Tensor)`
Per-token rewards for each response; shape: (bs, response_length).
response_mask: `(torch.Tensor)`
Binary mask indicating valid tokens; shape: (bs, response_length).
index: `(np.ndarray)`
Array of prompt or data identifiers used to group trajectories for
computing per-prompt baselines; shape: (bs,).
traj_index: `(np.ndarray)`
Array of trajectory identifiers (e.g., rollout IDs) used together with
`index` for trajectory-level deduplication; shape: (bs,).
reward_baselines: `(torch.Tensor)`
Baseline reward values associated with each sample; shape typically
broadcastable to (bs,) or (bs, response_length). Currently not used in
this implementation but kept for API compatibility.
epsilon: `(float)`
Small constant for numerical stability in normalization operations.
compute_mean_std_cross_all_data: `(bool)`
If True, compute normalization statistics across all data; if False,
respect trajectory-level deduplication when aggregating scores.
config: `Optional[Any]`
Optional configuration object; currently unused.
**kwargs:
Additional keyword arguments for compatibility with other estimators.
Returns:
advantages: `(torch.Tensor)`
Advantage values per token; shape: (bs, response_length).
returns: `(torch.Tensor)`
Return values per token; shape: (bs, response_length).

Comment on lines +286 to +299
"""
Compute advantage for RLOO based on https://arxiv.org/abs/2402.14740
Args:
token_level_rewards: `(torch.Tensor)`
shape: (bs, response_length)
response_mask: `(torch.Tensor)`
shape: (bs, response_length)
Returns:
advantages: `(torch.Tensor)`
shape: (bs, response_length)
Returns: `(torch.Tensor)`
shape: (bs, response_length)
"""

Copilot AI Jan 6, 2026


The docstring is missing documentation for the parameters index and traj_index which are part of the function signature. Add documentation for these parameters to explain their role in the RLOO algorithm.

if len(id2score[idx]) == 1:
id2mean[idx] = torch.tensor(0.0)
elif len(id2score[idx]) > 1:
id2mean[idx] = torch.mean(torch.tensor(id2score[idx]))

Copilot AI Jan 6, 2026


Converting a list of tensors to a single tensor using torch.tensor(id2score[idx]) may not work correctly. Use torch.stack(id2score[idx]) instead to properly stack the tensor list, consistent with how it's done in the GRPO function at line 125.

Suggested change
id2mean[idx] = torch.mean(torch.tensor(id2score[idx]))
id2mean[idx] = torch.mean(torch.stack(id2score[idx]))

if len(id2score[idx]) == 1:
id2mean[idx] = torch.tensor(0.0)
elif len(id2score[idx]) > 1:
id2mean[idx] = torch.mean(torch.tensor(id2score[idx]))

Copilot AI Jan 6, 2026


Converting a list of tensors to a single tensor using torch.tensor(id2score[idx]) may not work correctly. Use torch.stack(id2score[idx]) instead to properly stack the tensor list, consistent with how it's done in the GRPO function at line 125.

Suggested change
id2mean[idx] = torch.mean(torch.tensor(id2score[idx]))
id2mean[idx] = torch.mean(torch.stack(id2score[idx]))

Comment on lines +50 to +51



Copilot AI Jan 6, 2026


Import of 'compute_gae_advantage_return' is not used.

Suggested change
# Create a benign alias so this import is recognized as used while
# still allowing external code to import `compute_gae_advantage_return`
# directly from this module.
compute_gae_advantage_return_fn = compute_gae_advantage_return

@zzjweb zzjweb force-pushed the main branch 3 times, most recently from aac7c26 to 70798e4 Compare January 8, 2026 13:34