Switch to pytest #44
Conversation
📊 Test Results and 📈 Trends (vs main branch) for Small Benchmark/Test Suite at f1c6ca5 (2025_12_04_23_31_14), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at f1c6ca5 (2025_12_04_23_37_14), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at 2d9f116 (2025_12_05_00_13_48), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Small Benchmark/Test Suite at 2d9f116 (2025_12_05_00_23_10), IRONCLAD.
hunhoffe left a comment:
Overall, a good step forward (and very speedy!)
I do wonder if a little further simplification is possible. One question is whether you can make the parameterization of the suites more streamlined (I wrote a comment about this). The other is whether the context is really needed: would a session-scoped fixture serve the same purpose?
e.g., something like,
import pytest

@pytest.fixture(scope="session")
def setup_once_per_suite():
    """
    This fixture will run only once for the entire pytest session.
    """
    print("\nSetting up resource once per suite...")
    resource = "Shared resource"
    yield resource
    print("Tearing down resource once per suite.")

I have mixed feelings about this. I think the context is generally a good idea, so maybe we want to keep it even if pytest offers a workaround. On the other hand, simplicity is good.
operators/dequant/test.py (outdated)

    extensive_params,
    ids=extensive_names,
)
def test_dequant_extensive(
I have not tried this myself, but rather than having two functions, can you use a combination of mark + parametrize to make this work with just one function definition?
import pytest

@pytest.mark.parametrize("input_value", [
    10,
    pytest.param(12, marks=pytest.mark.suiteB),
    pytest.param(13, marks=pytest.mark.suiteC),
])
def test_example(input_value):  # illustrative stub so the decorator has a target
    ...
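One nice property of this pattern: a whole suite can then be selected from the command line with `pytest -m suiteB` (assuming the suiteB/suiteC markers are registered in pytest.ini or pyproject.toml so pytest does not warn about unknown marks).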
Thanks for the review.
Having the "context" (before it was called that) be session-scoped is exactly what caused the issue before: all AIEBaseOperators shared the same state of registered operators, and with no explicit context, all of them effectively shared the same one. If we test operator A, then operator B, within the same Python session, we need to "reset" the XRT runtime buffers etc. that we have set up, because B will use different buffers than A. That is what having one fixture per test, rather than per session, is trying to fix, and why I added the context. Open to suggestions on how to simplify this further. Will look into your other comment. 👍
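For concreteness, here is a minimal sketch of that per-test shape. AIEContext is reduced to a stand-in stub here, and release() is a hypothetical name for the buffer-teardown step; neither is the repo's actual API:

import pytest

class AIEContext:
    """Stand-in stub for the PR's context object (for illustration only)."""
    def __init__(self):
        self.operators = []   # operators registered into this context only
        self.buffers = {}     # runtime buffers owned by this context
    def release(self):
        self.buffers.clear()  # in the real code: free the XRT runtime buffers

@pytest.fixture  # function-scoped by default: one fresh context per test
def context():
    ctx = AIEContext()
    yield ctx        # the test constructs/registers its operator against ctx
    ctx.release()    # reset runtime state so the next test starts clean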
Ah, so session-scoped is bad, makes sense... what about function-scoped?
I think that's what I'm already doing?
I think your confusion is accurate, because I think I am missing something in my understanding. To check my knowledge: you are allocating a bunch of buffers, compiling all the operator kernels, and flushing the entire cached kernel device every time the context is refreshed, which happens at function scope for every single pytest test? I had thought all you'd need to do is flush the cache for the device handler, and then maybe pre-allocate a small number of buffers and compile the operator as part of each test. Maybe this is not true?
To be clear, I only really care about these details to ensure the test infrastructure keeps the tests properly isolated and modular between operators.
I think I'm also confused. What do you mean by "cached kernel device" and "kernel is refreshed"?

What the current implementation does: Collect a new set of artifacts for each operator and set up a new set of buffers for each operator. A set of artifacts and set of buffers for a set of operators you plan to call is a "context." For these tests, each context should only contain a single operator.

Concrete example: 1. test foo, 2. then test bar. Results in: context 1: artifacts "foo.o, foo.xclbin, foo.mlir", buffers "foo_input", "foo_output"; context 2: artifacts "bar.o, bar.xclbin, bar.mlir", buffers "bar_input", "bar_output".

What the previous implementation did (single implicit context): Gather all artifacts and buffers across all operators into one context. Meaning you'd end up with artifact list "foo.o, foo.xclbin, foo.mlir, bar.o, bar.xclbin, bar.mlir" and buffers "foo_input, foo_output, bar_input, bar_output". Potential issues with this: name collisions of artifacts (should be solved more robustly but isn't currently), and allocating too many buffers at once (since we're allocating all buffers for all operators of the context ahead of time, even if we don't call all of those operators), resulting in an out-of-memory condition.

Concrete issue that I actually faced: You can currently only call "prepare_runtime" once, since it goes through and allocates all buffers. If you try to do it a second time, it'll try to reallocate the same buffers, and I think that's where things failed for me. Since I want to run operator test A first, then operator test B, I need to set up a runtime for test A first, but then I can't test operator B since the runtime is already set up. The way around this would be to first set up both operators, then test both operators.
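Spelled out as data, with purely illustrative values taken from the example above:

ctx_foo = {  # context 1: everything operator foo needs, and nothing else
    "artifacts": ["foo.o", "foo.xclbin", "foo.mlir"],
    "buffers": ["foo_input", "foo_output"],
}
ctx_bar = {  # context 2: created fresh for the next test
    "artifacts": ["bar.o", "bar.xclbin", "bar.mlir"],
    "buffers": ["bar_input", "bar_output"],
}
# Previous implementation: one implicit global context holding the union,
# so every buffer was allocated up front even for operators never called.
ctx_global = {
    "artifacts": ctx_foo["artifacts"] + ctx_bar["artifacts"],
    "buffers": ctx_foo["buffers"] + ctx_bar["buffers"],
}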
I was talking about the kernel being loaded already into the cache managed by the device_manager. But if you have a test that only tests one operator, why do you gather everything for all operators? If that is indeed what is happening, then you don't have modularity in the tests between operators? Sorry for my confusion! It seems like what you are doing is probably reasonable; I just don't understand it.
This pytest implementation should be the same or slightly better runtime-wise than what we currently have in the repository, since each test currently runs as a standalone Python script.
The added AIEContext should help with this. We lost some of this isolation by not running separate Python scripts, so adding the AIEContext should restore some of that isolation.
Hi guys, I'm also trying to understand... I think these are the same idea?
That is exactly the idea. What you describe is the current state of the repository. Look here -- the current implementation uses a global variable to keep track of all registered operators and compiles and sets up the runtime for all of them. However, this broke the isolation of tests. So rather than collecting all registered operators in one global state, you now get to specify where to collect them into (i.e., the AIEContext). Again, to be clear, these tests add and test just a single top-level operator in each context. (Tests with combined operators like SwiGLU do create multiple sub-operators, but that is how they are meant to work.)
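A compressed sketch of that difference (class and attribute names are hypothetical; the real registration lives in AIEBaseOperator):

# Before: module-level global registry, implicitly shared by every test.
REGISTERED_OPERATORS = []

class OldOperator:
    def __init__(self, name):
        self.name = name
        REGISTERED_OPERATORS.append(self)  # all tests see all operators

# After: the constructor registers into an explicit, caller-provided context.
class NewOperator:
    def __init__(self, name, context):
        self.name = name
        context.operators.append(self)     # visible only within this context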
Don't apologize for scrutinizing the code and helping make it better!
Yeah, I think you might be right. I think the code does what Erika says it should do; it's just probably unnecessarily confusing.
Yes! I think I get it now. I was misunderstanding how the operator registration works in this model. I thought the operators were automatically registered, but if they are registered per test per context on demand, this makes sense, I believe!
Open to any suggestions to make this clearer. A lot of this is just a chain of hacks strung together. And to confirm your understanding: yes, registration happens in the constructor of the operator. (Think of each parametrization of an operator as a separate operator, so we can only register once it's constructed with concrete parameters.) Since that constructor is called in the test itself, and each test gets its own context via the fixture, each operator ends up registered only in its own test's context.
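In test form, that shape looks roughly like this, reusing the hypothetical `context` fixture and NewOperator stub from the sketches above:

def test_foo_operator(context):
    # Constructing with concrete parameters registers the operator into
    # this test's own context; nothing is registered globally.
    op = NewOperator("foo", context)
    assert op in context.operators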
I think this is plenty good for now; it was just unclear to me why the context needed to be so complex. I think I understand now that it encompasses a lot of what would otherwise be boilerplate. Fine-tuning can happen later as needed, IMO.
A lot of the context runtime setup logic has just moved: that complexity was previously in AIEBaseOperator. The reason the buffer allocation is complex is that it (a) tries to find a small set of buffers to allocate and share between operators, while (b) also allowing the "static data" optimization (i.e., buffers that cannot be shared between operators because their data is weights that should not be changed). I picture a lot of this will go away once the IRON Tensor makes its way here (I think the "sharing buffers" optimization might get dropped).

Bigger picture comment: My vision is that operators are as declarative (as opposed to imperative) as possible. The idea in this version of the operator is that each operator only declares what it needs: its artifacts and its runtime description (buffer names + sizes + runlist).

My hope is that, in that way, it should be easier to swap out backends: e.g., from the buffer description (buffer names + sizes + runlist), you could generate C++ host code to run it, or use PyXRT or HSA. All consume the same runtime description (a static list of buffer names, sizes, runlist sequence). Same for the artifacts: you can use our current Python-based compilation process, or use the declaration of artifacts to instead "export" a Makefile or even CMakeLists.txt (not yet implemented, but could be done).

The declarative nature of each operator makes the operator description slimmer. It means that the "backend implementation" of how that declaration gets turned into imperative instructions (e.g. "allocate this buffer using XRT, call this kernel, ...") gets pushed elsewhere, in this case the context. Otherwise, we'd have this logic embedded in each operator (and, in turn, it might be harder to swap out backends).
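As a sketch of that vision (all type and field names here are hypothetical, not the repo's actual API):

from dataclasses import dataclass

@dataclass
class RuntimeDescription:
    buffers: dict[str, int]   # buffer name -> size in bytes
    runlist: list[str]        # static kernel-invocation sequence

@dataclass
class OperatorDescription:
    artifacts: list[str]      # e.g. object file, xclbin, MLIR source
    runtime: RuntimeDescription

# Every backend (PyXRT, generated C++ host code, an exported Makefile, ...)
# consumes the same static description; none of this logic lives in the
# operator itself.
relu_desc = OperatorDescription(
    artifacts=["relu.o", "relu.xclbin", "relu.mlir"],
    runtime=RuntimeDescription(
        buffers={"relu_input": 2048 * 4, "relu_output": 2048 * 4},
        runlist=["relu"],
    ),
)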
That makes a lot of sense to me! Thank you for elaborating on your vision of how the context fits into the larger picture -- it did help clarify some things for me!
📊 Test Results and 📈 Trends (vs main branch) for Small Benchmark/Test Suite at 7d5af21 (2025_12_10_18_37_22), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at 7d5af21 (2025_12_10_18_46_06), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at 7d5af21 (2025_12_10_23_04_36), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at 7d5af21 (2025_12_10_23_21_19), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at f7933fb (2025_12_10_23_40_34), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Test Example Applications at 0bc7941 (2025_12_10_23_48_58), IRONCLAD.
📊 Test Results and 📈 Trends (vs main branch) for Small Benchmark/Test Suite at 0bc7941 (2025_12_10_23_57_57), IRONCLAD.
PR Merge Checklist
Based on the latest devel commit and pointing to devel.