Draft: PostNorm and multilayer options #8746
base: main
Conversation
Signed-off-by: Izzy Putterman <iputterman@nvidia.com>
📝 Walkthrough

Modifies the speculative decoding model to add conditional component instantiation in `Eagle3Attention` and `Eagle3DecoderLayer`, gated by new `eagle_config` options.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant EDM as Eagle3DraftModel
    participant EDL as Eagle3DecoderLayer
    participant EA as Eagle3Attention
    Note over EDM: Initialization
    loop For each layer i
        EDM->>EDL: __init__(config, layer_idx, is_first_layer=(i==0))
        activate EDL
        EDL->>EDL: Compute _next_layer_regular
        alt is_first_layer=True (next_layer_regular=True)
            EDL->>EA: __init__(config, layer_idx, is_first_layer=True)
            activate EA
            EA->>EA: Skip qkv_proj creation
            deactivate EA
            EDL->>EDL: Skip input_norm creation
        else is_first_layer=False (next_layer_regular=False)
            EDL->>EA: __init__(config, layer_idx, is_first_layer=False)
            activate EA
            EA->>EA: Create qkv_proj
            deactivate EA
            EDL->>EDL: Create input_norm
        end
        deactivate EDL
    end
    Note over EDM: Forward Pass
    rect rgb(200, 220, 240)
        Note over EDM,EA: Processing through layers
        EDM->>EDL: forward(hidden_states, ...)
        activate EDL
        alt _next_layer_regular=False
            EDL->>EDL: Apply input_norm
            EDL->>EDL: Concatenate embeddings
        end
        EDL->>EA: forward(normalized_hidden_states)
        deactivate EDL
    end
    rect rgb(240, 220, 200)
        Note over EDM: Post-processing
        alt _return_hidden_post_norm=True
            EDM->>EDM: Apply final norm
            EDM->>EDM: Return (hidden_states, hidden_states)
        else _return_hidden_post_norm=False
            EDM->>EDM: Apply final norm
            EDM->>EDM: Return hidden_states
        end
    end
```
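The conditional wiring above can be sketched torch-free. This is an illustrative sketch, not the repository's implementation: the class names mirror the diagram, the modules are string stand-ins, and the `eagle_config` lookup follows the line reviewed later in this page.

```python
class Eagle3AttentionSketch:
    """Stand-in: only the conditional wiring, no real attention math."""

    def __init__(self, model_config, layer_idx, next_layer_regular=False):
        self._next_layer_regular = next_layer_regular
        if not next_layer_regular:
            # Hypothetical placeholder for the fused QKV projection module.
            self.qkv_proj = "Linear(2 * hidden, 3 * hidden)"


class Eagle3DecoderLayerSketch:
    def __init__(self, model_config, layer_idx, is_first_layer):
        # Mirrors: config.eagle_config.get("next_layer_regular", True) and not is_first_layer
        eagle_cfg = getattr(model_config, "eagle_config", None) or {}
        self._next_layer_regular = (
            eagle_cfg.get("next_layer_regular", True) and not is_first_layer
        )
        self.self_attn = Eagle3AttentionSketch(
            model_config, layer_idx, self._next_layer_regular
        )
        if not self._next_layer_regular:
            # Only layers that concatenate embeddings get an input norm.
            self.input_layernorm = "RMSNorm(hidden)"


class Cfg:  # minimal stand-in for the pretrained config
    eagle_config = {"next_layer_regular": True}


layers = [
    Eagle3DecoderLayerSketch(Cfg(), i, is_first_layer=(i == 0)) for i in range(3)
]
```

Under these assumptions, only the first layer ends up with `input_layernorm` and a `qkv_proj`, matching the code excerpts in the review below rather than the diagram's labels, which appear inverted.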
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks: ❌ Failed (3 warnings)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
tensorrt_llm/_torch/models/modeling_speculative.py (4)
29-68: Critical: Missing parameter in constructor signature.

The `__init__` method references `self._next_layer_regular` at line 55, but this attribute is never assigned within `Eagle3Attention`. Additionally, line 83 in `Eagle3DecoderLayer` passes three arguments to this constructor (`model_config`, `layer_idx`, `self._next_layer_regular`), but the signature at lines 29-33 only declares two parameters.

Apply this diff to add the missing parameter and assign it:

```diff
 def __init__(
     self,
     model_config: ModelConfig[LlamaConfig],
     layer_idx: Optional[int] = None,
+    next_layer_regular: bool = False,
 ):
     config = model_config.pretrained_config
+    self._next_layer_regular = next_layer_regular
     super().__init__(
```
73-78: Fix incorrect return type annotation.

The return type annotation at line 78 specifies `Tuple[torch.Tensor, torch.Tensor]`, but `__init__` methods should return `None`.

Apply this diff:

```diff
 def __init__(
     self,
     model_config: LlamaConfig,
     layer_idx: int = 0,
     is_first_layer: bool = True,
-) -> Tuple[torch.Tensor, torch.Tensor]:
+) -> None:
```
99-125: Potential AttributeError from conditional attribute creation.

`input_layernorm` is created only when `not self._next_layer_regular` (lines 99-102), but the forward method unconditionally attempts to access it at line 124. When `_next_layer_regular` is `True`, this will raise an `AttributeError`.

Verify the logic is correct. If `input_layernorm` should always exist, remove the conditional. Otherwise, ensure the forward method only accesses it when it exists:

```diff
 def forward(
     ...
 ) -> torch.Tensor:
     residual = hidden_states
     hidden_states = self.hidden_norm(hidden_states)
     if not self._next_layer_regular:
+        # Only access input_layernorm when it exists
         embeds = self.input_layernorm(embeds)
         hidden_states = torch.cat([embeds, hidden_states], dim=-1)
```

However, this appears to already be the case in the code. Double-check that `input_layernorm` is not accessed elsewhere in paths where `_next_layer_regular` is `True`.
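A torch-free sketch of the guarded-access pattern this comment asks for (hypothetical `GuardedLayer` class; the lambda is a stand-in for a real norm module):

```python
class GuardedLayer:
    def __init__(self, has_norm: bool):
        if has_norm:
            # Stand-in "norm": scale the sequence so its max element is 1.0.
            self.input_layernorm = lambda xs: [x / max(xs) for x in xs]

    def forward(self, hidden_states):
        # Guard against the conditionally created attribute instead of
        # letting a missing input_layernorm raise AttributeError.
        norm = getattr(self, "input_layernorm", None)
        return norm(hidden_states) if norm is not None else hidden_states


print(GuardedLayer(True).forward([2.0, 4.0]))   # [0.5, 1.0]
print(GuardedLayer(False).forward([2.0, 4.0]))  # [2.0, 4.0]
```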
1-1: Missing required NVIDIA Apache-2.0 copyright header.

Per coding guidelines, all source files must include the NVIDIA Apache-2.0 copyright header with the current year at the top.

Add the copyright header at the top of the file:

```diff
+# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from typing import Dict, Generic, List, Optional, Tuple
```
🧹 Nitpick comments (1)
tensorrt_llm/_torch/models/modeling_speculative.py (1)
257-259: Consider documenting the conditional return behavior.

When `_return_hidden_post_norm` is `True`, the method returns `(hidden_states, hidden_states)` instead of the usual `(hidden_states, hidden_states_to_save)`. This changes the semantic meaning of the second return value, which could confuse callers or lead to subtle bugs.

Consider adding a docstring or inline comment explaining this behavior:

```diff
+# When return_hidden_post_norm is enabled, return post-norm hidden states twice
+# instead of returning both post-norm and pre-norm hidden states
 if self._return_hidden_post_norm:
     return hidden_states, hidden_states
 return hidden_states, hidden_states_to_save
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tensorrt_llm/_torch/models/modeling_speculative.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Use only spaces, no tabs; indent with 4 spaces.
Files:
tensorrt_llm/_torch/models/modeling_speculative.py
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
Files:
tensorrt_llm/_torch/models/modeling_speculative.py
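The try/except guidelines above (catch the most specific exception; keep the try body minimal and put the main logic in `else`) can be illustrated with a hypothetical helper, not code from this repository:

```python
def parse_port(value, default=8080):
    # Keep the try body minimal: only the conversion that may fail.
    try:
        port = int(value)
    except (TypeError, ValueError):
        # Catch only the specific exceptions int() can raise here.
        return default
    else:
        # Main logic lives in else, so bugs here are not masked as "bad input".
        return port if 0 < port < 65536 else default


print(parse_port("8443"))   # 8443
print(parse_port(None))     # 8080
print(parse_port("70000"))  # 8080 (out of range)
```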
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).
Files:
tensorrt_llm/_torch/models/modeling_speculative.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/models/modeling_speculative.py (2)
tensorrt_llm/_torch/modules/linear.py (4)
- `Linear` (1821-2045)
- `TensorParallelMode` (50-62)
- `WeightsLoadingConfig` (45-47)
- `WeightMode` (35-41)

tensorrt_llm/_torch/model_config.py (1)
- `get_quant_config` (244-251)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
```python
self.layer_idx = layer_idx

self.self_attn = Eagle3Attention(model_config, layer_idx)
self._next_layer_regular = config.eagle_config.get("next_layer_regular", True) and not is_first_layer
```
Validate config.eagle_config before access.
Line 82 accesses config.eagle_config.get(...) without verifying that eagle_config exists. If the config lacks this attribute, the code will raise an AttributeError.
Apply this diff to add a safe default:

```diff
-self._next_layer_regular = config.eagle_config.get("next_layer_regular", True) and not is_first_layer
+self._next_layer_regular = getattr(config, "eagle_config", {}).get("next_layer_regular", True) and not is_first_layer
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/models/modeling_speculative.py around line 82, the code
directly calls config.eagle_config.get(...), which will raise if eagle_config is
missing or None; change this to safely obtain a dict first (e.g. use
getattr(config, "eagle_config", {}) or check if config.eagle_config is truthy
and fallback to {}), then call .get("next_layer_regular", True) and keep the
existing 'and not is_first_layer' logic so the expression safely defaults to
True when the key or eagle_config is absent.
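The defensive pattern the prompt describes can be demonstrated standalone. This sketch uses `SimpleNamespace` as a stand-in for the real config object; the `next_layer_regular` helper is hypothetical, written only to show why `getattr` with a `{}` fallback never raises:

```python
from types import SimpleNamespace


def next_layer_regular(config, is_first_layer: bool) -> bool:
    # getattr falls back to None when eagle_config is absent; "or {}" also
    # covers the case where the attribute exists but is None, so .get is safe.
    eagle_cfg = getattr(config, "eagle_config", None) or {}
    return eagle_cfg.get("next_layer_regular", True) and not is_first_layer


with_cfg = SimpleNamespace(eagle_config={"next_layer_regular": False})
without_cfg = SimpleNamespace()  # no eagle_config attribute at all

print(next_layer_regular(with_cfg, is_first_layer=False))    # False (explicit config)
print(next_layer_regular(without_cfg, is_first_layer=False))  # True (safe default)
```

Note that `getattr(config, "eagle_config", {})` alone would still raise if the attribute exists but is `None`, which is why the sketch adds `or {}`.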
```python
else:
    self.hidden_size_in = config.hidden_size

self._return_hidden_post_norm = config.eagle_config.get("return_hidden_post_norm", False)
```
Validate config.eagle_config before access.
Similar to line 82 in Eagle3DecoderLayer, line 167 accesses config.eagle_config.get(...) without verifying the attribute exists.
Apply this diff:

```diff
-self._return_hidden_post_norm = config.eagle_config.get("return_hidden_post_norm", False)
+self._return_hidden_post_norm = getattr(config, "eagle_config", {}).get("return_hidden_post_norm", False)
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In tensorrt_llm/_torch/models/modeling_speculative.py around line 167, the code
accesses config.eagle_config.get(...) without verifying config.eagle_config
exists; update the code to first validate that config has attribute eagle_config
(e.g., use getattr(config, "eagle_config", None) or hasattr) and that it is a
dict-like object before calling .get; if missing, either set a safe default for
_return_hidden_post_norm (False) or raise a clear ValueError explaining the
missing configuration; ensure similar defensive pattern as used at line 82 in
Eagle3DecoderLayer.
Summary by CodeRabbit
Optimization
Configuration
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.

run

`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`

Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.