
Conversation

Collaborator

@IzzyPutterman commented Oct 29, 2025

Summary by CodeRabbit

  • Optimization

    • Enhanced speculative decoding model performance through conditional layer normalization and optimized layer operations based on model structure, reducing computational overhead in specific configurations.
  • Configuration

    • Added configuration option for flexible post-normalization output behavior, enabling users to optimize model behavior for their specific use cases.
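
The new behavior is driven by keys read from the model's eagle_config, per the diff excerpts in the review below. A hypothetical configuration fragment (the two key names are taken from the reviewed code; the surrounding schema is an assumption):

    # Hypothetical eagle_config fragment (Python); only the two keys below
    # are taken from the reviewed diff.
    eagle_config = {
        # When True, layers after the first are built as "regular" decoder
        # layers (no extra input_layernorm or embedding concatenation).
        "next_layer_regular": True,
        # When True, the draft model returns the post-norm hidden states for
        # both outputs instead of (post-norm, hidden_states_to_save).
        "return_hidden_post_norm": False,
    }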

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-reuse-test --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
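
For example, a hypothetical invocation combining several of the flags documented above (stage and GPU names are illustrative, taken from the examples in this help text):

/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe" --extra-stage "H100_PCIe-TensorRT-Post-Merge-1"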

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping validation without care can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care and validation can break the top of tree.

Signed-off-by: Izzy Putterman <iputterman@nvidia.com>
@IzzyPutterman requested a review from a team as a code owner October 29, 2025 03:38
Contributor

coderabbitai bot commented Oct 29, 2025

📝 Walkthrough

Modifies the speculative decoding model to add conditional component instantiation in Eagle3Attention and Eagle3DecoderLayer based on a new is_first_layer parameter and derived _next_layer_regular flag. Eagle3DraftModel now passes first-layer information during layer construction and conditionally returns post-norm hidden states via a new _return_hidden_post_norm flag.
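
A minimal sketch of the init-time wiring described above, assuming standard PyTorch modules; the names follow the walkthrough, but the signatures and module types are simplifications, not the actual TRT-LLM code:

    import torch.nn as nn

    class Eagle3DecoderLayerSketch(nn.Module):
        """Sketch only: attention, MLP, and quantization wiring are elided."""

        def __init__(self, config, layer_idx: int, is_first_layer: bool = True):
            super().__init__()
            self.layer_idx = layer_idx
            # Per the diff: layers after the first can be built as "regular"
            # decoder layers, controlled by an eagle_config key.
            self._next_layer_regular = (
                config.eagle_config.get("next_layer_regular", True)
                and not is_first_layer
            )
            self.hidden_norm = nn.LayerNorm(config.hidden_size)
            if not self._next_layer_regular:
                # Only the first (non-"regular") layer consumes concatenated
                # [embeds; hidden] input, so only it needs the extra input norm
                # (and, inside Eagle3Attention, the wider qkv projection).
                self.input_layernorm = nn.LayerNorm(config.hidden_size)

    # Per the walkthrough, Eagle3DraftModel passes is_first_layer=(i == 0):
    #   layers = [Eagle3DecoderLayerSketch(config, i, is_first_layer=(i == 0))
    #             for i in range(num_layers)]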

Changes

Cohort / File(s): Speculative Decoding Layer Conditionals — tensorrt_llm/_torch/models/modeling_speculative.py
Summary: Updated Eagle3Attention to conditionally create the qkv_proj Linear based on is_first_layer logic. Modified Eagle3DecoderLayer to accept an is_first_layer parameter and compute a _next_layer_regular flag; input layer normalization creation is now gated by this flag. Updated the forward pass to conditionally apply normalization and concatenation. Modified Eagle3DraftModel to introduce a _return_hidden_post_norm internal flag, pass is_first_layer (i == 0) during layer instantiation, and optionally return duplicated hidden_states when the flag is enabled.

Sequence Diagram(s)

sequenceDiagram
    participant EDM as Eagle3DraftModel
    participant EDL as Eagle3DecoderLayer
    participant EA as Eagle3Attention
    
    Note over EDM: Initialization
    loop For each layer i
        EDM->>EDL: __init__(config, layer_idx, is_first_layer=(i==0))
        activate EDL
        EDL->>EDL: Compute _next_layer_regular
        alt is_first_layer=True (_next_layer_regular=False)
            EDL->>EA: __init__(config, layer_idx, next_layer_regular=False)
            activate EA
            EA->>EA: Create custom qkv_proj (for concatenated input)
            deactivate EA
            EDL->>EDL: Create input_layernorm
        else is_first_layer=False (_next_layer_regular=True)
            EDL->>EA: __init__(config, layer_idx, next_layer_regular=True)
            activate EA
            EA->>EA: Skip custom qkv_proj
            deactivate EA
            EDL->>EDL: Skip input_layernorm
        end
        deactivate EDL
    end
    
    Note over EDM: Forward Pass
    rect rgb(200, 220, 240)
        Note over EDM,EA: Processing through layers
        EDM->>EDL: forward(hidden_states, ...)
        activate EDL
        alt _next_layer_regular=False
            EDL->>EDL: Apply input_norm
            EDL->>EDL: Concatenate embeddings
        end
        EDL->>EA: forward(normalized_hidden_states)
        deactivate EDL
    end
    
    rect rgb(240, 220, 200)
        Note over EDM: Post-processing
        alt _return_hidden_post_norm=True
            EDM->>EDM: Apply final norm
            EDM->>EDM: Return (hidden_states, hidden_states)
        else _return_hidden_post_norm=False
            EDM->>EDM: Apply final norm
            EDM->>EDM: Return hidden_states
        end
    end
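A rough sketch of the forward path and the new return behavior shown in the diagram (hypothetical standalone code, assuming the layer sketch above; the real forward also runs attention, MLP, and residuals):

    import torch

    def layer_forward(layer, hidden_states, embeds):
        # Mirrors the diagram: the norm + concat branch runs only when the
        # layer is not a "regular" one (i.e., for the first layer).
        hidden_states = layer.hidden_norm(hidden_states)
        if not layer._next_layer_regular:
            embeds = layer.input_layernorm(embeds)
            hidden_states = torch.cat([embeds, hidden_states], dim=-1)
        return hidden_states  # attention / MLP elided

    def draft_model_tail(norm, hidden_states, hidden_states_to_save,
                         return_hidden_post_norm):
        # Post-processing per the diagram: optionally return the post-norm
        # hidden states twice instead of (post-norm, states-to-save).
        hidden_states = norm(hidden_states)
        if return_hidden_post_norm:
            return hidden_states, hidden_states
        return hidden_states, hidden_states_to_save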

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Parameter propagation: Verify is_first_layer (i == 0) is correctly passed through layer instantiation in Eagle3DraftModel
  • Conditional instantiation logic: Confirm qkv_proj and input_norm creation logic aligns with _next_layer_regular computation across both Eagle3Attention and Eagle3DecoderLayer
  • Forward pass alignment: Ensure conditionally gated normalization and embedding concatenation in Eagle3DecoderLayer.forward() matches the initialization assumptions
  • Post-norm return behavior: Validate that _return_hidden_post_norm flag from config correctly controls the optional return of duplicated hidden_states without breaking downstream consumers
  • Constructor signature changes: Check that all call sites instantiating Eagle3DecoderLayer now supply the is_first_layer argument

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)
Title Check — ⚠️ Warning. The PR title "Draft: PostNorm and multilayer options" does not follow the repository's required template format, which specifies a "[JIRA ticket/NVBugs ID/GitHub issue/None][type] Summary" structure. While the title is concise and relates to some aspects of the changes (particularly the PostNorm feature), it lacks the mandatory ticket reference and change-type designation. Additionally, the "Draft:" prefix and "multilayer options" are vague; based on the change summary, more specific language about the architectural modifications to Eagle3Attention and Eagle3DecoderLayer would be beneficial. Resolution: Revise the PR title to follow the required format: provide a ticket reference (JIRA, NVBugs, GitHub issue, or "None"), specify the change type in lowercase (such as [feat], [fix], [infra]), and give a clear, specific summary. For example: "[None][feat] Add configurable PostNorm and layer-conditional behavior to Eagle3 attention layers", depending on the actual issue tracking.

Description Check — ⚠️ Warning. The PR description is largely incomplete and consists primarily of the repository's template structure with no substantive content filled in. The "Description" section is empty (contains only the comment placeholder), the "Test Coverage" section is empty, and all items in the "PR Checklist" are unchecked. While the author included the CodeRabbit AI summary marker, there is no actual description of what the PR accomplishes, why the changes are necessary, or what tests validate the new functionality. The substantial change summary exists only in the raw_summary metadata, not in the PR description itself. Resolution: Complete the PR description by filling in all required sections: a clear explanation of what changes are made and why (in the "Description" section), the relevant tests that safeguard these changes (in the "Test Coverage" section), and all items in the "PR Checklist". Ensure that reviewers can understand the purpose and scope of the changes directly from the PR description.

Docstring Coverage — ⚠️ Warning. Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: Run @coderabbitai generate docstrings to improve docstring coverage.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
tensorrt_llm/_torch/models/modeling_speculative.py (4)

29-68: Critical: Missing parameter in constructor signature.

The __init__ method references self._next_layer_regular at line 55, but this attribute is never assigned within Eagle3Attention. Additionally, line 83 in Eagle3DecoderLayer passes three arguments to this constructor (model_config, layer_idx, self._next_layer_regular), but the signature at lines 29-33 only declares two parameters.

Apply this diff to add the missing parameter and assign it:

 def __init__(
     self,
     model_config: ModelConfig[LlamaConfig],
     layer_idx: Optional[int] = None,
+    next_layer_regular: bool = False,
 ):
     config = model_config.pretrained_config
+    self._next_layer_regular = next_layer_regular
     super().__init__(

73-78: Fix incorrect return type annotation.

The return type annotation at line 78 specifies Tuple[torch.Tensor, torch.Tensor], but __init__ methods should return None.

Apply this diff:

 def __init__(
     self,
     model_config: LlamaConfig,
     layer_idx: int = 0,
     is_first_layer: bool = True,
-) -> Tuple[torch.Tensor, torch.Tensor]:
+) -> None:

99-125: Potential AttributeError from conditional attribute creation.

input_layernorm is created only when not self._next_layer_regular (lines 99-102), but the forward method unconditionally attempts to access it at line 124. When _next_layer_regular is True, this will raise an AttributeError.

Verify the logic is correct. If input_layernorm should always exist, remove the conditional. Otherwise, ensure the forward method only accesses it when it exists:

 def forward(
     ...
 ) -> torch.Tensor:
     residual = hidden_states

     hidden_states = self.hidden_norm(hidden_states)
     if not self._next_layer_regular:
+        # Only access input_layernorm when it exists
         embeds = self.input_layernorm(embeds)
         hidden_states = torch.cat([embeds, hidden_states], dim=-1)

However, this appears to already be the case in the code. Double-check that input_layernorm is not accessed elsewhere in paths where _next_layer_regular is True.


1-1: Missing required NVIDIA Apache-2.0 copyright header.

Per coding guidelines, all source files must include the NVIDIA Apache-2.0 copyright header with the current year at the top.

As per coding guidelines.

Add the copyright header at the top of the file:

+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from typing import Dict, Generic, List, Optional, Tuple
🧹 Nitpick comments (1)
tensorrt_llm/_torch/models/modeling_speculative.py (1)

257-259: Consider documenting the conditional return behavior.

When _return_hidden_post_norm is True, the method returns (hidden_states, hidden_states) instead of the usual (hidden_states, hidden_states_to_save). This changes the semantic meaning of the second return value, which could confuse callers or lead to subtle bugs.

Consider adding a docstring or inline comment explaining this behavior:

+# When return_hidden_post_norm is enabled, return post-norm hidden states twice
+# instead of returning both post-norm and pre-norm hidden states
 if self._return_hidden_post_norm:
     return hidden_states, hidden_states
 return hidden_states, hidden_states_to_save
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ba98a6 and 12bc2e2.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/models/modeling_speculative.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/models/modeling_speculative.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/models/modeling_speculative.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/models/modeling_speculative.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/models/modeling_speculative.py (2)
tensorrt_llm/_torch/modules/linear.py (4)
  • Linear (1821-2045)
  • TensorParallelMode (50-62)
  • WeightsLoadingConfig (45-47)
  • WeightMode (35-41)
tensorrt_llm/_torch/model_config.py (1)
  • get_quant_config (244-251)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

self.layer_idx = layer_idx

self.self_attn = Eagle3Attention(model_config, layer_idx)
self._next_layer_regular = config.eagle_config.get("next_layer_regular", True) and not is_first_layer
Contributor


⚠️ Potential issue | 🟠 Major

Validate config.eagle_config before access.

Line 82 accesses config.eagle_config.get(...) without verifying that eagle_config exists. If the config lacks this attribute, the code will raise an AttributeError.

Apply this diff to add a safe default:

-self._next_layer_regular = config.eagle_config.get("next_layer_regular", True) and not is_first_layer
+self._next_layer_regular = getattr(config, "eagle_config", {}).get("next_layer_regular", True) and not is_first_layer
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
self._next_layer_regular = config.eagle_config.get("next_layer_regular", True) and not is_first_layer
self._next_layer_regular = getattr(config, "eagle_config", {}).get("next_layer_regular", True) and not is_first_layer
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/models/modeling_speculative.py around line 82, the code
directly calls config.eagle_config.get(...), which will raise if eagle_config is
missing or None; change this to safely obtain a dict first (e.g. use
getattr(config, "eagle_config", {}) or check if config.eagle_config is truthy
and fallback to {}), then call .get("next_layer_regular", True) and keep the
existing 'and not is_first_layer' logic so the expression safely defaults to
True when the key or eagle_config is absent.
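
For illustration, a standalone sketch of the defensive pattern (hypothetical Cfg class, not the actual model config):

    class Cfg:
        """Hypothetical config without an eagle_config attribute."""

    cfg = Cfg()

    # Direct access raises AttributeError when the attribute is absent:
    #   cfg.eagle_config.get("next_layer_regular", True)  # AttributeError

    # The defensive form falls back to the documented default instead:
    flag = getattr(cfg, "eagle_config", {}).get("next_layer_regular", True)
    print(flag)  # True

    # Note: getattr only covers a *missing* attribute. If eagle_config can be
    # explicitly None, combine both guards:
    #   (getattr(cfg, "eagle_config", None) or {}).get("next_layer_regular", True)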

else:
    self.hidden_size_in = config.hidden_size

self._return_hidden_post_norm = config.eagle_config.get("return_hidden_post_norm", False)
Contributor


⚠️ Potential issue | 🟠 Major

Validate config.eagle_config before access.

Similar to line 82 in Eagle3DecoderLayer, line 167 accesses config.eagle_config.get(...) without verifying the attribute exists.

Apply this diff:

-self._return_hidden_post_norm = config.eagle_config.get("return_hidden_post_norm", False)
+self._return_hidden_post_norm = getattr(config, "eagle_config", {}).get("return_hidden_post_norm", False)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
self._return_hidden_post_norm = config.eagle_config.get("return_hidden_post_norm", False)
self._return_hidden_post_norm = getattr(config, "eagle_config", {}).get("return_hidden_post_norm", False)
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/models/modeling_speculative.py around line 167, the code
accesses config.eagle_config.get(...) without verifying config.eagle_config
exists; update the code to first validate that config has attribute eagle_config
(e.g., use getattr(config, "eagle_config", None) or hasattr) and that it is a
dict-like object before calling .get; if missing, either set a safe default for
_return_hidden_post_norm (False) or raise a clear ValueError explaining the
missing configuration; ensure similar defensive pattern as used at line 82 in
Eagle3DecoderLayer.
