Conversation

@ForBetterCodeNine (Contributor) commented Oct 31, 2025

What this PR does / why we need it?

The current test cases lack end-to-end (e2e) testing for the deepseek-v2-lite network in GE graph mode.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill out the PR description when writing the commit message, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist (Contributor, bot) left a comment

Code Review

This pull request adds end-to-end tests for the deepseek-v2-lite model in graph mode, which is a valuable addition for test coverage. The implementation is straightforward. I have one suggestion to refactor the newly added test functions using pytest.mark.parametrize to reduce code duplication and improve maintainability.

Comment on lines 273 to 299
def test_e2e_deepseekv2lite_with_torchair():
    additional_config = {
        "torchair_graph_config": {
            "enabled": True,
        },
    }
    _deepseek_v2_lite_torchair_test_fixure(additional_config)


def test_e2e_deepseekv2lite_with_torchair_ms_mla():
    additional_config = {
        "torchair_graph_config": {
            "enabled": True,
            "enable_multistream_mla": True,
        },
    }
    _deepseek_v2_lite_torchair_test_fixure(additional_config)


def test_e2e_deepseekv2lite_with_torchair_v1scheduler():
    additional_config = {
        "torchair_graph_config": {
            "enabled": True,
        },
    }
    _deepseek_v2_lite_torchair_test_fixure(additional_config,
                                           use_v1_schduler=True)
Severity: high

The three new test functions test_e2e_deepseekv2lite_with_torchair, test_e2e_deepseekv2lite_with_torchair_ms_mla, and test_e2e_deepseekv2lite_with_torchair_v1scheduler are very similar and contain duplicated code, especially the additional_config dictionary which is identical in two of the tests. This makes the tests harder to maintain, as future changes might be missed in one of the copies.

To improve maintainability and reduce code duplication, you can refactor these three functions into a single parameterized test using pytest.mark.parametrize. This will make the test suite cleaner and easier to extend with more configurations in the future.

As a minor note, the helper function _deepseek_v2_lite_torchair_test_fixure has a typo and should be renamed to _deepseek_v2_lite_torchair_test_fixture. The suggestion below uses the corrected name, so you will also need to rename the function definition at line 231.

@pytest.mark.parametrize(
    "config_updates, use_v1_scheduler",
    [
        ({}, False),
        ({"enable_multistream_mla": True}, False),
        ({}, True),
    ],
    ids=[
        "default",
        "ms_mla",
        "v1scheduler",
    ])
def test_e2e_deepseekv2lite_with_torchair(config_updates, use_v1_scheduler):
    additional_config = {
        "torchair_graph_config": {
            "enabled": True,
        },
    }
    additional_config["torchair_graph_config"].update(config_updates)
    _deepseek_v2_lite_torchair_test_fixture(additional_config,
                                            use_v1_schduler=use_v1_scheduler)
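
As a standalone sketch of how each parametrize case assembles its config: the helper `build_config` below is hypothetical (not part of the PR), and the actual fixture call is omitted since the real helper requires Ascend hardware to run.

```python
# Illustrative sketch only: build_config is a hypothetical helper showing
# how each parametrize case merges its updates into the base config.
# The real _deepseek_v2_lite_torchair_test_fixture call is omitted here
# because it depends on Ascend hardware.
def build_config(config_updates):
    additional_config = {
        "torchair_graph_config": {
            "enabled": True,
        },
    }
    additional_config["torchair_graph_config"].update(config_updates)
    return additional_config

# The three parametrize cases from the suggestion above:
print(build_config({}))                                # "default"
print(build_config({"enable_multistream_mla": True}))  # "ms_mla"
print(build_config({}))                                # "v1scheduler" (scheduler flag is passed separately)
```

Because a fresh dictionary is built on every call, one test case's updates cannot leak into the next, which is a common pitfall when a shared module-level config dict is mutated instead.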

Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
@weijinqian0 added the `ready` (read for review) and `ready-for-test` (start test by label for PR) labels on Nov 3, 2025
Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
@yiz-liu yiz-liu merged commit 49d7478 into vllm-project:main Nov 3, 2025
21 checks passed
luolun pushed a commit to luolun/vllm-ascend that referenced this pull request Nov 19, 2025
…roject#3937)

### What this PR does / why we need it?
The current test cases lack end-to-end (e2e) testing for the
deepseek-v2-lite network in ge graph mode.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?

- vLLM version: v0.11.0
- vLLM main:
vllm-project/vllm@83f478b

---------

Signed-off-by: CodeNine-CJ <chenjian343@huawei.com>
Signed-off-by: luolun <luolun1995@cmbchina.com>
hwhaokun pushed a commit to hwhaokun/vllm-ascend that referenced this pull request Nov 19, 2025
…roject#3937)

Signed-off-by: hwhaokun <haokun0405@163.com>
NSDie pushed a commit to NSDie/vllm-ascend that referenced this pull request Nov 24, 2025
…roject#3937)

Signed-off-by: nsdie <yeyifan@huawei.com>

Labels

module:tests, ready (read for review), ready-for-test (start test by label for PR)

3 participants