
Conversation

Contributor

Copilot AI commented Oct 28, 2025

  • Understand the issue: tl.atomic_add is not supported in triton-viz
  • Add AtomicRMW operation class to data.py (see the sketch after this list)
  • Add AtomicRMW to the op_list and mapping in patch.py
  • Add support in sanitizer for atomic RMW operations
  • Add atomic_rmw to SymbolicExpr SUPPORTED_OPS and OP_SPEC
  • Implement op_atomic_rmw_overrider following atomic_cas pattern
  • Add test_atomic_add to verify atomic_add functionality
  • Add test_atomic_cas to verify atomic_cas functionality
  • Run pre-commit/ruff format to fix code style
  • All tests pass (verified in earlier session)
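
As a rough illustration of the first two checklist items, here is a minimal sketch of what such an op record could look like. The AtomicRMW name comes from the checklist; the fields, their types, and the dataclass shape are assumptions rather than the actual triton-viz source.

```python
# Hypothetical sketch of the AtomicRMW record added to data.py; the real
# class in triton-viz may carry different fields.
from dataclasses import dataclass

import numpy as np


@dataclass
class AtomicRMW:
    """One recorded atomic read-modify-write (tl.atomic_add, tl.atomic_max, ...)."""

    rmw_op: str           # which RMW op was issued, e.g. "add"
    ptr: np.ndarray       # uint64 element pointers the op touches
    val: np.ndarray       # values combined into memory
    mask: np.ndarray      # which lanes are active
    sem: str = "acq_rel"  # memory ordering, as in the Triton API
```

Registering the class in patch.py's op_list and mapping would then let the patching machinery intercept tl.atomic_add calls the same way the existing ops are intercepted.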
Original prompt

This section details the original issue you should resolve.

<issue_title>[FEATURE] Support tl.atomic_add in int8_matmul_quantization.py</issue_title>
<issue_description>
```
Traceback (most recent call last):
  File "/home/hwu27/workspace/triton-viz/.venv/bin/triton-sanitizer", line 10, in <module>
    sys.exit(apply())
             ^^^^^^^
  File "/home/hwu27/workspace/triton-viz/triton_viz/wrapper.py", line 58, in apply
    runpy.run_path(script, run_name="__main__")
  File "<frozen runpy>", line 286, in run_path
  File "<frozen runpy>", line 98, in _run_module_code
  File "<frozen runpy>", line 88, in _run_code
  File "int8_matmul_quantization.py", line 267, in <module>
    result_gold = test_quantize_and_matmul()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "int8_matmul_quantization.py", line 253, in test_quantize_and_matmul
    c_quantized = matmul_quantize_int8(fpa, b, b_scale)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "int8_matmul_quantization.py", line 194, in matmul_quantize_int8
    return matmul_int8(a, a_scale, b, b_scale, out)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "int8_matmul_quantization.py", line 211, in matmul_int8
    matmul_kernel[grid](
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/jit.py", line 390, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/autotuner.py", line 239, in run
    benchmark()
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/autotuner.py", line 228, in benchmark
    timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/autotuner.py", line 160, in _bench
    return self.do_bench(kernel_call, quantiles=(0.5, 0.2, 0.8))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/testing.py", line 149, in do_bench
    fn()
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/autotuner.py", line 146, in kernel_call
    self.fn.run(
  File "/home/hwu27/workspace/triton-viz/triton_viz/core/trace.py", line 68, in run
    ret = self.interpreter_fn.run(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/interpreter.py", line 1380, in run
    return GridExecutor(fn, self.arg_names, grid)(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/triton_viz/core/patch.py", line 488, in _grid_executor_call
    run_grid_loops(grid)
  File "/home/hwu27/workspace/triton-viz/triton_viz/core/patch.py", line 456, in run_grid_loops
    self.fn(**call_args)
  File "int8_matmul_quantization.py", line 188, in matmul_kernel
    tl.atomic_add(c_ptrs, c, mask=c_mask)
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/interpreter.py", line 781, in <lambda>
    new_member = lambda *args, member=member, **kwargs: (member(*args, **
                                                          ^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/language/core.py", line 42, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/language/core.py", line 2373, in atomic_add
    return _semantic.atomic_add(pointer, val, mask, sem, scope)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/language/semantic.py", line 1423, in atomic_add
    return self.tensor(self.builder.create_atomic_rmw(op, ptr.handle, val.handle, mask.handle, sem, scope),
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hwu27/workspace/triton-viz/.venv/lib/python3.12/site-packages/triton/runtime/interpreter.py", line 679, in create_atomic_rmw
    return TensorHandle(_interpreter.atomic_rmw(rmwOp, ptr.data, val.data, mask.data, sem), val.dtype.scalar)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: atomic_rmw(): incompatible function arguments. The following argument types are supported:
    1. (arg0: triton._C.libtriton.interpreter.RMW_OP, arg1: typing.Annotated[numpy.typing.ArrayLike, numpy.uint64], arg2: numpy.ndarray, arg3: typing.Annotated[numpy.typing.ArrayLike, numpy.bool],...
```
</issue_description>
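
For context, the failing call site boils down to a kernel that accumulates its output with tl.atomic_add. A minimal sketch of that pattern under triton-sanitizer (kernel and variable names are illustrative, not the actual int8_matmul_quantization.py code):

```python
# Minimal sketch of the pattern that crashed under triton-sanitizer;
# names are illustrative, not copied from int8_matmul_quantization.py.
import triton
import triton.language as tl


@triton.jit
def accumulate_kernel(c_ptr, x_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    # Under the interpreter, this call reached _interpreter.atomic_rmw with
    # argument types it rejected, raising the TypeError above.
    tl.atomic_add(c_ptr + offsets, x, mask=mask)
```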



Copilot AI and others added 3 commits October 28, 2025 17:44
Copilot AI changed the title from [WIP] Add support for tl.atomic_add in int8_matmul_quantization to Add support for atomic RMW operations (tl.atomic_add, etc.) Oct 28, 2025
Copilot AI requested a review from Jokeren October 28, 2025 18:06
Copilot finished work on behalf of Jokeren October 28, 2025 18:06
Member

Jokeren commented Oct 28, 2025

@copilot This is apparently wrong. The wrapper should be similar to atomic_cas. Also you don't have any tests

Contributor Author

Copilot AI commented Oct 28, 2025

@copilot This is apparently wrong. The wrapper should be similar to atomic_cas. Also you don't have any tests

Fixed in commit 1c494b4. The op_atomic_rmw_overrider now follows the same pattern as atomic_cas: it creates a SymbolicExpr with symbolic arguments and stores the enums as attributes. Added test_atomic_add to verify the functionality.
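
As a sketch of that shape (with a minimal stand-in for SymbolicExpr, since the actual triton-viz internals are not shown in this thread; the real overrider in the PR may differ in detail):

```python
# Hypothetical sketch of op_atomic_rmw_overrider, mirroring the description
# above; not the actual triton-viz implementation.
class SymbolicExpr:
    """Minimal stand-in for triton-viz's symbolic expression node."""

    def __init__(self, op: str, *args):
        self.op = op
        self.args = args


def op_atomic_rmw_overrider(rmw_op, ptr, val, mask, sem, scope):
    # Build a symbolic node from the symbolic arguments, as atomic_cas does.
    expr = SymbolicExpr("atomic_rmw", ptr, val, mask)
    # The op kind and memory-ordering enums are not symbolic operands,
    # so they are stored as plain attributes on the node.
    expr.rmw_op = rmw_op
    expr.sem = sem
    expr.scope = scope
    return expr
```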

Copilot finished work on behalf of Jokeren October 28, 2025 18:15
Member

Jokeren commented Oct 28, 2025

@copilot Also test atomic_cas

Contributor Author

Copilot AI commented Oct 28, 2025

@copilot Also test atomic_cas

Added test_atomic_cas in commit ca88da3. The test follows the same pattern as test_atomic_add and verifies that atomic_cas operations work with the sanitizer.
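
A rough shape of such a test (the launch configuration, device, and assertion here are assumptions; the actual test is the one added in commit ca88da3):

```python
# Hypothetical sketch of a tl.atomic_cas test; the real test_atomic_cas
# in this PR may differ in setup and assertions.
import torch
import triton
import triton.language as tl


@triton.jit
def cas_kernel(lock_ptr):
    # Every program tries to swap 0 -> 1; exactly one swap takes effect.
    tl.atomic_cas(lock_ptr, 0, 1)


def test_atomic_cas():
    lock = torch.zeros(1, dtype=torch.int32, device="cuda")
    cas_kernel[(4,)](lock)
    assert lock.item() == 1
```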

Copilot finished work on behalf of Jokeren October 28, 2025 18:36
Jokeren marked this pull request as ready for review October 28, 2025 22:31
Jokeren requested a review from mark14wu October 28, 2025 22:31
Member

Jokeren commented Oct 29, 2025

@copilot run pre-commit

Contributor Author

Copilot AI commented Oct 29, 2025

@copilot run pre-commit

Ran ruff format (part of pre-commit) in commit a91afad. The code has been reformatted according to the project's style guidelines.

Copilot finished work on behalf of Jokeren October 29, 2025 15:29
mark14wu merged commit 7b89875 into main Nov 2, 2025
1 check passed
mark14wu deleted the copilot/add-tl-atomic-add-support branch November 2, 2025 16:49