
Conversation

@winskuo-quic (Collaborator)

Summary

  • AOT debug handle enablement: also supports the debug_handle map, even though the debugger does not use it yet; enabled per community request.
  • Runtime debug handle enablement: parse the debug handle at runtime and use the debug_handle as the tensor key when storing results into ETDump.
  • Reuse the ExecuTorch debugger features and reduce redundancy between the QNN ExecuTorch debugger and the ExecuTorch debugger utils.

Additional Topics:

  • What is the official way of retrieving an edge module that does not carry backend info?

Test plan

  • E2E example script test
    • python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleUtilsScript.test_intermediate_debugger -s $DEVICE --model SM8650 --build_folder build-android/ --executorch_root . --image_dataset ../imagenet-mini/val/ --artifact ./e2e_test_debug
  • Simple model test
    • python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_simple_model --model SM8550 --device $DEVICE --build_folder build-android
    • python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_topk --model SM8550 --device $DEVICE --build_folder build-android

@pytorch-bot

pytorch-bot bot commented Dec 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16316

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit cc92ab4 with merge base 8e8d97e:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 18, 2025
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@winskuo-quic winskuo-quic marked this pull request as draft December 18, 2025 09:51
@Gasoonjia (Contributor)

Is the PR ready to be reviewed now?

@winskuo-quic winskuo-quic marked this pull request as ready for review December 19, 2025 01:36
@winskuo-quic winskuo-quic force-pushed the dev1/winskuo/debug_handle branch 2 times, most recently from 9a7ca59 to dc72614, on December 19, 2025 01:45
@winskuo-quic (Collaborator, Author)

> Is the PR ready to be reviewed now?

Hi @Gasoonjia,
I was rebasing previously, so I set it to draft. This PR should now be ready for review. Thanks.

@winskuo-quic (Collaborator, Author)

Hi @cccclai, @Gasoonjia, @kimishpatel,
I have added debug_handle support in this PR, as requested in #5310 and #15735.
We now use ExecuTorch's official Intermediate_Output_Capturer to capture the CPU's intermediate results.

I would also like some suggestions on the official API for retrieving an edge IR. The current way of retrieving one is:

edge_module = lower_module.original_module.module()

However, I encountered the following issues when retrieving the edge IR this way.

  1. If there are partitions, I get a graph that fuses the backend-supported nodes into delegate node(s). However, it would be helpful for debugging if we could get the edge IR graph before the supported nodes are fused into delegate node(s).
  2. I noticed that in the edge IR graph above, the input order might have changed. This is easily reproduced with a model that has more than one input (e.g., Roberta). Is there any way to get a graph that preserves the correct input order?

Thanks

@Gasoonjia (Contributor)

hi @winskuo-quic

I think instead of using the edge IR graph as the ground truth for comparison, it would be great if we could use the exported program the ET stack gets in the first place (e.g., the export graph of the model variable here), since that is the source graph the ET stack takes, and our job is to make our intermediate outputs match the input graph as closely as possible.

You can see how we calculate the intermediate-output numerical discrepancy here:

def calculate_numeric_gap(

https://github.com/pytorch/executorch/blob/0fb422f9c59e0e5526c0082352a583baf0510fb7/exir/passes/debug_handle_generator_pass.py is the pass for debug handle generation, where a node's debug handle matches that of the nodes sharing the same greatest ancestor node in the export flow.
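For context, a hedged sketch of driving that comparison end-to-end through the Inspector; the constructor arguments and the distance keyword below are assumptions about the devtools API, not something taken from this PR:

```python
# Hedged sketch: pairing AOT and runtime intermediate outputs via debug
# handles with the ExecuTorch Inspector. Argument names are assumptions.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="etdump.etdp",  # ETDump collected from the device run
    etrecord="etrecord.bin",    # ETRecord generated at export time
)
# calculate_numeric_gap matches AOT ops to runtime ops through their debug
# handles and reports a per-op discrepancy metric.
df = inspector.calculate_numeric_gap(distance="MSE")
print(df)
```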

@Gasoonjia (Contributor) commented Dec 19, 2025

Here's an example of how our current API works on the ViT model with the XNNPACK backend: https://gist.github.com/Gasoonjia/db6285ac39ad5759b95c7a92d37cd4f8

Below is the expected output. For some ops like layer_norm there are still some issues I need to fix.

| idx | aot_ops | aot_intermediate_output | runtime_ops | runtime_intermediate_output | gap |
| --- | --- | --- | --- | --- | --- |
| 0 | [conv2d] | [[[tensor(0.0253, -0.0287, -0.0042, 0.0118, …)]] | [DELEGATE_CALL] | [[[tensor(0.0253, -0.0287, -0.0042, 0.0118, …)]] | 3.2825094945114346e-15 |
| 1 | [permute, cat, add, dropout] | [[[tensor(-0.0024), tensor(0.0054), tensor(0.0…), …]] | [DELEGATE_CALL] | [[[tensor(-0.0024), tensor(0.0054), tensor(0.0…), …]] | 3.281230918554512e-15 |
| 2 | [expand] | [[[tensor(-0.0012), tensor(0.0027), tensor(0.0…), …]] | [native_call_expand_copy.out] | [[[tensor(-0.0012), tensor(0.0027), tensor(0.0…), …]] | 0.0 |
| 3 | [layer_norm] | [[[tensor(-0.0001), tensor(0.0009), tensor(-0.…), …]] | [native_call_native_layer_norm.out] | [[[tensor(31.1172)], [tensor(4.3549)], [tensor(…)…]] | 19.7299543374596 |
| 4 | [transpose, linear, unflatten, …, transpose] | [[[tensor(0.0027), tensor(-0.0032), tensor(0.0…), …]] | [DELEGATE_CALL, DELEGATE_CALL, DELEGATE_…] | [[[tensor(0.0027), tensor(-0.0032), tensor(0.0…), …]] | 9.381436078525961e-05 |
| … | … | … | … | … | … |
| 61 | [layer_norm_23] | [[[tensor(-0.8604), tensor(-0.1713), tensor(-0.…),…]] | [native_call_native_layer_norm.out] | [[[tensor(2.2180)], [tensor(1.8462)], [tensor(…)…]] | 2.8061147356332854 |
| 62 | [linear_46, gelu_11, dropout_35, …, dropout_36] | [[[tensor(-0.6561), tensor(-0.0496), tensor(-0.…),…]] | [DELEGATE_CALL] | [[[tensor(-0.6561), tensor(-0.0496), tensor(-0.…),…]] | 1.0872256686587983e-11 |
| 63 | [layer_norm_24] | [[[tensor(-0.9040), tensor(-0.1004), tensor(-0.…),…]] | [native_call_native_layer_norm.out] | [[[tensor(1.9138)], [tensor(1.9031)], [tensor(…)…]] | 3.104443617092582 |
| 64 | [select_36] | [[tensor(-0.9040), tensor(-0.1004), tensor(-0.…),…]] | [native_call_select_copy.int_out] | [[tensor(-0.9040), tensor(-0.1004), tensor(-0.…),…]] | 1.1178469901123618e-12 |
| 65 | [linear_48] | [[tensor(-0.9624), tensor(0.7285), tensor(0.79…),…]] | [DELEGATE_CALL] | [[tensor(-0.9624), tensor(0.7285), tensor(0.79…),…]] | 1.7864835786911282e-12 |

I would love to chat with you about how we can make this pipeline work on the Qualcomm backend!
Hope this helps!

@winskuo-quic winskuo-quic force-pushed the dev1/winskuo/debug_handle branch from dc72614 to 00c4f7e, on December 19, 2025 05:25
@winskuo-quic winskuo-quic marked this pull request as draft December 19, 2025 05:26
@winskuo-quic (Collaborator, Author)

Hi @Gasoonjia,
I have turned this PR back to draft for now.
I would love to learn more about ExecuTorch's debugger framework; let's move this conversation to email first.
I would also love to chat with you to discuss the details further.

@Gasoonjia (Contributor) left a comment:

thx for the work!

```python
def call(self, graph_module: torch.fx.GraphModule):
    handle_counter = 1
    visited = set()
    for node in graph_module.graph.nodes:
```
@Gasoonjia (Contributor):

Not sure if Qualcomm can handle conditional graphs. If so, I think the way you are adding debug handles might not assign a debug handle to every branch. You can follow what I'm doing here:
https://github.com/pytorch/executorch/blob/main/exir/passes/debug_handle_generator_pass.py#L14

@winskuo-quic (Collaborator, Author):

I would like to confirm the definition of a conditional graph. Initially, I thought we couldn't have a conditional graph (e.g., if/else) in ATen; the only way to express one is through operations such as the where op, which still yields a single graph. We actually use a lot of for loops for graph traversal in our passes. If you find this to be a concern, please let me know and I'll take a look at all our other passes.

@Gasoonjia (Contributor):
A conditional graph refers to branching constructs like torch.cond; https://github.com/pytorch/executorch/blob/main/exir/tests/test_passes.py#L1275 is an example.
For a conditional graph like torch.cond, we capture the graph for both branches; see the sketch below.
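For illustration, a minimal sketch of such a conditional graph (hypothetical toy module; only torch.export and torch.cond are assumed):

```python
import torch
from torch.export import export

# Hypothetical toy module: torch.cond traces BOTH branches into subgraphs,
# so a debug-handle pass that only walks the top-level graph would miss the
# nodes inside each branch.
class CondModel(torch.nn.Module):
    def forward(self, x):
        def true_branch(t):
            return t.sin()

        def false_branch(t):
            return t.cos()

        return torch.cond(x.sum() > 0, true_branch, false_branch, (x,))

ep = export(CondModel(), (torch.randn(3),))
# The exported program holds the parent graph plus one submodule per branch.
ep.graph_module.print_readable()
```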

@winskuo-quic (Collaborator, Author):

Thanks for sharing the operator. I don't think we currently support this op. I can add a TODO or some checks for now and make the actual change if QNN supports this operation in the future. This will probably require a bigger refactor, since all our passes currently use for loops.

```python
assert (
    source_node.name in visited
), "Graph is not traversed in topological order, unexpected behavior."
node.meta[QCOM_DEBUG_HANDLE] = source_node.meta[QCOM_DEBUG_HANDLE]
```
@Gasoonjia (Contributor):

Curious why we need to give the get_item node the same debug handle as the source node? It introduces duplicate debug handles in the graph, and I'm a little worried it could cause issues downstream.

@winskuo-quic (Collaborator, Author):

I am actually just following the same behavior as before this pass. Nodes with multiple outputs have get_item nodes as users, and I noticed that the get_item nodes and the node itself all share the same debug_handle, as shown below.
[screenshot: a multi-output node and its get_item users sharing one debug_handle]
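For reference, a small sketch reproducing the multi-output case (hypothetical toy module; only torch.export is assumed):

```python
import torch
from torch.export import export

# topk returns (values, indices), so the exported graph contains one topk
# node followed by getitem nodes that unpack each output.
class TopK(torch.nn.Module):
    def forward(self, x):
        values, indices = torch.topk(x, k=3)
        return values, indices

ep = export(TopK(), (torch.randn(10),))
for node in ep.graph.nodes:
    # Expect: placeholder, an aten.topk call_function, two getitem nodes, output.
    print(node.op, node.target)
```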

@Gasoonjia (Contributor):

I feel the reason get_item and its source share the same debug handle is that the model is exported in strict mode, and both aten_topk and its get_item calls come from the same source.
Here I don't have a strong preference about whether get_item's debug handle should match the source node's, as long as it supports Qualcomm's requirements.

```python
tensor_name = f"{node.name}_{wrapper_idx}"

# Only append the special naming when tensor dump is enabled, since longer names result in a bigger .pte
if (handle_id := node.meta.get(QCOM_DEBUG_HANDLE)) and self.enable_tensor_dump:
```
@Gasoonjia (Contributor):

Wondering if we still need this file, since we will migrate to the devtools infra?

@winskuo-quic (Collaborator, Author):

I believe this can eventually be deprecated, as we discussed offline. However, since we currently don't have an official API to bring the debug_handle to runtime, we need to stick with parsing the tensor name at runtime to extract the debug_handle ID.

@Gasoonjia (Contributor):

Oh, will the tensor name here be part of the runtime?

@winskuo-quic (Collaborator, Author):

At runtime, we get tensor names like relu__debugID_1, where 1 is the debug handle. We use a regex at runtime to parse the ID 1 and save it to ETDump as the key.
I agree we will migrate to the devtools infra, but due to the limitation that we don't have an official flow to pass debug_handle-related info to runtime, we need to keep it this way so the debugger can still function after this PR is merged.
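A hedged sketch of that naming convention and parse in Python (the exact pattern used by the backend may differ; the helper name is hypothetical):

```python
import re
from typing import Optional

# Assumed convention from the example above: "<op_name>__debugID_<handle>".
DEBUG_ID_RE = re.compile(r"__debugID_(\d+)$")

def parse_debug_handle(tensor_name: str) -> Optional[int]:
    """Extract the debug handle ID embedded in a QNN tensor name, if any."""
    match = DEBUG_ID_RE.search(tensor_name)
    return int(match.group(1)) if match else None

assert parse_debug_handle("relu__debugID_1") == 1
assert parse_debug_handle("relu") is None
```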

@Gasoonjia (Contributor):

Thanks for the detailed explanation! Mind sharing a pointer to how you retrieve the op name during runtime?

@winskuo-quic (Collaborator, Author):

Of course! Please refer to the message below.

```python
# This class serves as an intermediate point and is inserted right after the call_function node.
# It also saves some metadata such as scale, offset, etc.
# Since we just want to check the intermediate output, we directly return the value during the forward call.
class QNNIntermediateDebugger:
```
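A hedged sketch of the pattern the comment above describes (hypothetical class, not the PR's actual implementation):

```python
import torch

# Pass-through module inserted after a call_function node: it records
# quantization metadata (scale, offset) and returns its input unchanged,
# so the intermediate output can be observed without altering the graph's
# numerics.
class IntermediateProbe(torch.nn.Module):
    def __init__(self, scale: float, offset: int):
        super().__init__()
        self.scale = scale    # quantization scale of the observed tensor
        self.offset = offset  # quantization offset / zero point

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x  # we only want to inspect x; forward is the identity
```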
@Gasoonjia (Contributor):

I think we can change or update this class's purpose; from the comments, it plays the same role as Inspector.calculate_numeric_gap().

@winskuo-quic (Collaborator, Author):

I would like to confirm this part. Are you suggesting it is better to leave comments stating that part of this class will eventually be deprecated and replaced by Inspector.calculate_numeric_gap()? As synced offline, we currently cannot use calculate_numeric_gap directly, since QNN requires some output post-processing before comparison.

@Gasoonjia (Contributor):

Thanks for the comment!

My idea is that this class should exist, but it should focus only on the advanced analysis after we fetch the operator-level numerical discrepancy from calculate_numeric_gap.

For the essential post-processing you mentioned, we can make that happen either by updating our current calculate_numeric_gap function or by making it part of a custom_comparator.

Please let me know if that works for you.

@winskuo-quic (Collaborator, Author):

Yes. As discussed last time, since the current calculate_numeric_gap does not support this post-processing, we will use our own comparator to calculate the numeric gap for now. Once mainline supports the post-processing, we can migrate to the official calculate_numeric_gap flow.
If we used calculate_numeric_gap directly today, we would be comparing CPU (FP32/NCHW) outputs against HTP (quantized/NHWC) outputs, which would show low accuracy for every operator; a sketch of the required post-processing is below.
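A hedged sketch of the post-processing in question (hypothetical helpers; the backend's actual scale/offset handling may differ): dequantize the HTP output and permute it from NHWC back to NCHW before computing a gap against the FP32 CPU golden.

```python
import torch

def postprocess_htp_output(
    q_tensor: torch.Tensor, scale: float, zero_point: int
) -> torch.Tensor:
    # Dequantize, then convert the 4-D activation from NHWC back to NCHW.
    deq = (q_tensor.to(torch.float32) - zero_point) * scale
    return deq.permute(0, 3, 1, 2)

def numeric_gap(cpu_out: torch.Tensor, htp_out: torch.Tensor) -> float:
    # Simple MSE between the FP32 CPU golden and the post-processed HTP output.
    return torch.mean((cpu_out - htp_out) ** 2).item()
```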

@Gasoonjia (Contributor):

Yes, I totally understand your concern. Is there any chance we can work on closing that gap first, and then use the ET pipeline directly in your debugger class? I feel there is much in the current Inspector class we can reuse to expedite our development.

@winskuo-quic (Collaborator, Author):

Thanks for the explanation. I would just like to confirm the current status; please correct me if my assumptions or understanding are wrong.
I am currently using our own comparison metrics, since HTP differs from CPU in many ways. This prevents us from calling calculate_numeric_gap, because the outputs require post-processing first. On top of this, calculate_numeric_gap uses the exported_program from torch.export.export as the golden, which, as we discussed in our last meeting, probably isn't the best golden for Qualcomm; we prefer the edge dialect graph after running the edge passes.
Are you suggesting that we keep this PR on hold and resolve the following issues before merging?

  1. Support post-processing in calculate_numeric_gap.
  2. Be able to use the edge graph as the golden graph.
  3. Migrate to using ETRecord's edge graph instead of the AOT edge_manager's edge graph.

Thanks

@winskuo-quic winskuo-quic force-pushed the dev1/winskuo/debug_handle branch from 00c4f7e to 7ec377f, on December 29, 2025 07:48
@winskuo-quic winskuo-quic marked this pull request as ready for review December 29, 2025 07:48
```cpp
std::string qnn_tensor_name =
    std::string(QNN_TENSOR_VER_PTR(output_tensor)->name);
if (std::regex_search(qnn_tensor_name, match, re)) {
  debug_handle_id = static_cast<uint32_t>(std::stoul(match[1].str()));
```
@winskuo-quic (Collaborator, Author):

Hi @Gasoonjia,
This is where we parse the qnn_tensor_name and get the debug_handle id.

@winskuo-quic winskuo-quic force-pushed the dev1/winskuo/debug_handle branch from 403300d to cc92ab4, on January 8, 2026 05:00