Qualcomm AI Engine Direct - Support Debug Handle and Integrate IntermediateOutputCapturer #16316
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16316
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit cc92ab4 with merge base 8e8d97e. UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
Is the PR ready to be reviewed now?
Force-pushed from 9a7ca59 to dc72614 (Compare)
Hi @Gasoonjia,
Hi @cccclai, @Gasoonjia, @kimishpatel, I would also like some suggestions on the official API for retrieving an edge IR. The current way of retrieving an edge IR is through: executorch/examples/qualcomm/utils.py Line 499 in 0fb422f
However, I encountered the following issues when retrieving the edge IR using the above method.
Thanks
I think instead of using the edge graph IR as the ground truth for comparison, it would be great if we could use the exported program the ET stack gets in the first place (e.g., the exported graph of executorch/examples/qualcomm/utils.py Line 480 in 0fb422f).
You can see how we calculate the intermediate output numerical discrepancy: executorch/devtools/inspector/_inspector.py Line 1407 in 0fb422f
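For reference, a minimal sketch of how that devtools flow is usually driven; the file names and the `distance` argument below are assumptions for illustration, not taken from this PR:

```python
# Hypothetical sketch of the devtools numeric-gap flow; paths are placeholders.
from executorch.devtools import Inspector

# ETRecord captures the ahead-of-time graphs; ETDump carries runtime
# intermediate outputs keyed by debug handle.
inspector = Inspector(
    etdump_path="etdump.etdp",
    etrecord="etrecord.bin",
)

# Compare AOT intermediate outputs against runtime ones, op by op.
# The exact comparator name/arguments may differ across versions.
gap_df = inspector.calculate_numeric_gap(distance="MSE")
print(gap_df)
```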
https://github.com/pytorch/executorch/blob/0fb422f9c59e0e5526c0082352a583baf0510fb7/exir/passes/debug_handle_generator_pass.py Here's the pass for debug handle generation, where a node gets the same debug handle as any node sharing the same greatest ancestor node in the export flow.
Here's an example of how our current API works on the VIT model on the xnnpack backend: https://gist.github.com/Gasoonjia/db6285ac39ad5759b95c7a92d37cd4f8 and below is the expected output. For some ops like layernorm there are still some issues I need to fix.
I would love to chat with you about how we can make the pipeline work on the qualcomm backend!
Force-pushed from dc72614 to 00c4f7e (Compare)
Hi @Gasoonjia, |
Gasoonjia left a comment:
thx for the work!
```python
def call(self, graph_module: torch.fx.GraphModule):
    handle_counter = 1
    visited = set()
    for node in graph_module.graph.nodes:
```
Not sure if Qualcomm can handle conditional graphs. If so, I think the way you are adding debug handles might not be able to equip all branches with debug handles. You can follow what I'm doing here:
https://github.com/pytorch/executorch/blob/main/exir/passes/debug_handle_generator_pass.py#L14
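For illustration, a minimal sketch of the branch-aware traversal that kind of pass needs, under the assumption that control-flow branches live in nested GraphModules; the function name and the key's string value are placeholders:

```python
# Simplified, hypothetical sketch; the real pass linked above also maps
# handles through export provenance. This only shows the traversal shape.
from torch.fx import GraphModule

QCOM_DEBUG_HANDLE = "debug_handle"  # placeholder key value

def generate_debug_handles(root: GraphModule) -> None:
    handle = 1
    # BFS over the top-level graph and every nested GraphModule
    # (e.g. the branch submodules created by torch.cond), so that
    # no branch is left without debug handles.
    queue = [root]
    while queue:
        gm = queue.pop(0)
        for node in gm.graph.nodes:
            node.meta[QCOM_DEBUG_HANDLE] = handle
            handle += 1
        queue.extend(m for m in gm.children() if isinstance(m, GraphModule))
```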
I would like to confirm the definition of a conditional graph. Initially, I thought we can't have a conditional graph (e.g., if/else) in ATen; the only way to do it is through operations such as the where op, which still keeps it a single graph. We actually use a lot of for loops for graph traversal in our passes. If you find this a concern, please let me know and I'll take a look at all our other passes.
Conditional graph refers to a branch construct like torch.cond; https://github.com/pytorch/executorch/blob/main/exir/tests/test_passes.py#L1275 is an example.
For a conditional graph like torch.cond we will capture the graph for both branches.
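As a concrete illustration (not taken from this PR), exporting a module that uses torch.cond produces separate submodules for the two branches, each of which would need its own debug handles:

```python
# Minimal torch.cond example: the true/false branches become nested
# GraphModules in the exported program.
import torch
from torch.export import export

class CondModel(torch.nn.Module):
    def forward(self, x):
        def true_fn(x):
            return x.sin()

        def false_fn(x):
            return x.cos()

        return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

ep = export(CondModel(), (torch.randn(4),))
# The branch graphs show up as nested submodules of the exported module.
ep.graph_module.print_readable()
```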
Thanks for sharing the operation. I think we currently do not have this op. I can add a TODO or some checks for now and actually make the change if QNN supports this operation in the future. This will probably require a bigger refactor, since all our passes currently use for loops.
```python
assert (
    source_node.name in visited
), "Graph is not traversed in topological order, unexpected behavior."
node.meta[QCOM_DEBUG_HANDLE] = source_node.meta[QCOM_DEBUG_HANDLE]
```
Curious why we need to set the get_item node to the same debug handle as the source node? It will introduce duplicate debug handles in the graph, and I'm a little bit worried it could cause issues downstream.
I feel like the reason get_item and its user share the same debug handle is that we export in strict mode, and both aten_topk and its get_item function come from the same source.
Here I don't have a strong preference on whether or not get_item's debug handle should be the same as the source node's, as long as it supports Qualcomm's requirements.
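For context, a small demonstration of how a multi-output op such as topk decomposes into a call plus getitem nodes after export; copying the handle, as the pass in this PR does, gives the getitem outputs the same handle as the topk call. This snippet is illustrative only:

```python
import torch
from torch.export import export

class TopK(torch.nn.Module):
    def forward(self, x):
        values, indices = torch.topk(x, k=3)
        return values + 1, indices

ep = export(TopK(), (torch.randn(8),))
for node in ep.graph.nodes:
    # topk appears once as a call_function, followed by getitem nodes
    # that unpack its (values, indices) outputs.
    print(node.op, node.target)
```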
```python
tensor_name = f"{node.name}_{wrapper_idx}"

# Only append special naming when tensor dump is enabled, since a longer
# name results in a bigger .pte.
if (handle_id := node.meta.get(QCOM_DEBUG_HANDLE)) and self.enable_tensor_dump:
```
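The diff hunk is truncated here; presumably the branch appends a debug-handle suffix along the lines of the relu__debugID_1 example mentioned later in this thread. The exact format string below is an assumption:

```python
# Hypothetical continuation: encode the debug handle into the tensor name
# so the runtime can recover it later by parsing the name.
    tensor_name = f"{tensor_name}__debugID_{handle_id}"
```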
wondering if we still need this file since we will migrate to devtool infra?
I believe this can eventually be deprecated, as we discussed offline. However, since we currently don't have an official API to bring debug_handle to runtime, we will need to stick with parsing the tensor name during runtime to extract the debug_handle ID.
oh will the tensor name here be part of runtime?
During runtime, we get a tensor name like relu__debugID_1, where 1 is the debug handle. We use a regex at runtime to parse the ID 1 and save it to ETDump as the key.
I agree we will migrate to the devtools infra, but due to some limitations, where we don't really have an official flow to pass debug_handle-related info to runtime, we need to keep it this way so the debugger can still function after this PR is merged.
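For illustration, the parsing described here looks roughly like this in Python (the actual implementation is C++ and is quoted later in this thread; the exact pattern below is an assumption):

```python
import re

# Assumed naming convention: "<op_name>__debugID_<handle>".
DEBUG_ID_RE = re.compile(r"__debugID_(\d+)$")

def parse_debug_handle(tensor_name: str) -> int | None:
    # Extract the numeric debug handle from the tensor name, if present.
    match = DEBUG_ID_RE.search(tensor_name)
    return int(match.group(1)) if match else None

assert parse_debug_handle("relu__debugID_1") == 1
```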
Thanks for the detailed explanation! Mind sharing a pointer to how you retrieve the op name during runtime?
Of course! Please refer to the message below.
```python
# This class serves as an intermediate point and is inserted right after
# the call_function node. It also saves some metadata such as scale,
# offset, etc. Since we just want to check the intermediate output, we
# directly return the value during the forward call.
class QNNIntermediateDebugger:
```
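Based only on the description quoted above, the class body presumably looks something like the following pass-through module; everything here beyond the quoted comment is an assumption, not the PR's actual code:

```python
import torch

class QNNIntermediateDebugger(torch.nn.Module):
    """Hypothetical sketch: record an intermediate output and pass it through."""

    def __init__(self, scale=None, offset=None):
        super().__init__()
        # Quantization metadata kept for later dequantization/comparison.
        self.scale = scale
        self.offset = offset
        self.captured = None

    def forward(self, x):
        # Capture the intermediate value, then return it unchanged so the
        # surrounding graph's numerics are unaffected.
        self.captured = x.detach()
        return x
```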
I think we can change or update the class's purpose; from the comments it plays the same role as Inspector.calculate_numeric_gap().
I would like to confirm this part. Are you suggesting that it is better to leave some comments stating that part of this class will eventually be deprecated and replaced by Inspector.calculate_numeric_gap()? As synced offline, currently we cannot directly use calculate_numeric_gap, as QNN requires some output post-processing prior to comparison.
Thanks for the comment!
My idea is that this class should exist, but should only focus on the advanced analysis after we fetch the operator-level numerical discrepancy from calculate_numeric_gap.
For the essential post-processing you mentioned, we can make that happen by either updating our current calculate_numeric_gap function, or making it part of a custom comparator.
Please let me know if that works for you.
Yes. As discussed last time, since the current calculate_numeric_gap does not support this post-processing, we will use our own comparator to calculate the numeric gap for now. Once mainline has post-processing introduced, we can migrate to the official calculate_numeric_gap flow.
If we directly use calculate_numeric_gap for now, we are comparing CPU (FP32/NCHW) with HTP (quantized/NHWC), and we would get low accuracy for all operators.
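To make the mismatch concrete, a hedged sketch of the kind of post-processing a custom comparator would need before the two sides become comparable; the function name, parameters, and metric are placeholders:

```python
import torch

def compare_with_postprocess(
    cpu_out: torch.Tensor,   # golden: FP32, NCHW layout
    htp_out: torch.Tensor,   # runtime: quantized, NHWC layout
    scale: float,
    offset: int,
) -> float:
    # Dequantize the HTP output back to FP32.
    htp_fp32 = (htp_out.to(torch.float32) - offset) * scale
    # Transform layout NHWC -> NCHW so both tensors line up.
    htp_fp32 = htp_fp32.permute(0, 3, 1, 2)
    # A simple numeric-gap metric; the real comparator may differ.
    return torch.nn.functional.mse_loss(htp_fp32, cpu_out).item()
```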
Yes, I totally understand your concern. Is there any chance we can work on closing the gap first and then directly use the ET pipeline in your debugger class? I feel like there is a lot in the current Inspector class we can reuse to expedite our development.
Thanks for the explanation. I would just like to confirm the current status. Please correct me if I have the wrong assumption or understanding.
I am currently using our own comparison metrics since HTP differs a lot from CPU. This prevents us from calling calculate_numeric_gap, since it requires some post-processing. On top of this, calculate_numeric_gap uses the exported_program from torch.export.export as the golden, which, as we also discussed in our meeting last time, probably isn't the best golden for Qualcomm; we prefer to use the edge dialect graph after running edge passes.
Are you suggesting that we should keep this PR on hold, and try to have the following issues resolved before merging this PR?
- Support post-processing for calculate_numeric_gap.
- Be able to use the edge graph as the golden graph.
- Migrate to using ETRecord's edge graph instead of the AOT runtime edge_manager's edge graph.
Thanks
Force-pushed from 00c4f7e to 7ec377f (Compare)
```cpp
std::string qnn_tensor_name =
    std::string(QNN_TENSOR_VER_PTR(output_tensor)->name);
if (std::regex_search(qnn_tensor_name, match, re)) {
  debug_handle_id = static_cast<uint32_t>(std::stoul(match[1].str()));
```
Hi @Gasoonjia,
This is where we parse the qnn_tensor_name and get the debug_handle id.
Force-pushed from 403300d to cc92ab4 (Compare)

Summary
Additional Topics:
Test plan
```bash
python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleUtilsScript.test_intermediate_debugger -s $DEVICE --model SM8650 --build_folder build-android/ --executorch_root . --image_dataset ../imagenet-mini/val/ --artifact ./e2e_test_debug
python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_simple_model --model SM8550 --device $DEVICE --build_folder build-android
python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_topk --model SM8550 --device $DEVICE --build_folder build-android
```