
Voxtral Realtime: enable CUDA backend with int4 quantization#17798

Open
mergennachin wants to merge 1 commit into main from enable_voxtral_realtime

Conversation

@mergennachin
Contributor

Add CUDA/AOTI backend support for the Voxtral Realtime model alongside
the existing XNNPACK and Metal backends.

Model (model.py):

  • CudaSDPA: F.scaled_dot_product_attention with repeat_interleave for
    GQA expansion and boolean attention masks (Triton SDPA requirement)
  • StaticKVCache (shared with Metal) for [B,H,S,D] layout with index_copy_
  • StandardEncoderRingKVCache/StandardEncoderSDPA for streaming encoder
  • _build_causal_mask_bool: 4D boolean mask for Triton compatibility
  • Simplified LMAttention.forward to always pass attn_mask (None for XNNPACK)
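
As a reference for the ideas in the bullets above, here is a minimal sketch of the GQA expansion, boolean causal mask, and `index_copy_` cache update (function and variable names are illustrative, not the PR's actual code):

```python
import torch
import torch.nn.functional as F

def cuda_sdpa(q, k, v, attn_mask, n_rep):
    """Sketch of the CudaSDPA idea: expand grouped K/V heads with
    repeat_interleave so SDPA sees matching head counts, and pass a
    boolean attention mask (a Triton SDPA requirement)."""
    k = k.repeat_interleave(n_rep, dim=1)  # [B, H_kv, S, D] -> [B, H_q, S, D]
    v = v.repeat_interleave(n_rep, dim=1)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)

def build_causal_mask_bool(seq_len):
    """4D boolean causal mask [1, 1, S, S]; True = attend."""
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    return mask.view(1, 1, seq_len, seq_len)

def kv_cache_write(cache, new_kv, positions):
    """Static [B, H, S, D] cache updated in place along the sequence
    dim with index_copy_, as in the shared StaticKVCache."""
    return cache.index_copy_(2, positions, new_kv)
```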

Export (export_voxtral_rt.py):

  • --backend cuda with CudaPartitioner and conv1d_to_conv2d decomposition
  • --dtype flag (default fp32, bf16 for CUDA Triton SDPA)
  • --qlinear-packing-format / --qlinear-encoder-packing-format for
    tile_packed_to_4d int4 quantization
  • CUDA device placement, Dim.AUTO for audio encoder, .ptd output
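
The conv1d_to_conv2d decomposition mentioned above follows a standard pattern: a 1-D convolution is rewritten as a 2-D convolution with a singleton spatial dimension, so a backend that only supports conv2d can run it unchanged. A hedged sketch of the equivalence (the pass's real name and API may differ):

```python
import torch
import torch.nn.functional as F

def conv1d_via_conv2d(x, weight, bias=None, stride=1, padding=0):
    """Rewrite conv1d as conv2d by inserting a height-1 spatial dim."""
    x2 = x.unsqueeze(2)       # [B, C_in, L]         -> [B, C_in, 1, L]
    w2 = weight.unsqueeze(2)  # [C_out, C_in, K]     -> [C_out, C_in, 1, K]
    y = F.conv2d(x2, w2, bias, stride=(1, stride), padding=(0, padding))
    return y.squeeze(2)       # [B, C_out, 1, L_out] -> [B, C_out, L_out]
```

The result matches `F.conv1d` exactly for the same weights, stride, and padding.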

Runner (main.cpp, voxtral_realtime_runner.cpp/.h):

  • --data_path flag for .ptd delegate data (CUDA compiled kernels)
  • Module two-arg constructor for pte+ptd loading

Build (CMakePresets.json, Makefile):

  • voxtral-realtime-cuda preset
  • make voxtral_realtime-cuda target

CI (.github/workflows/cuda.yml, .ci/scripts/):

  • Voxtral Realtime in CUDA CI matrix (int4-tile-packed, offline mode)
  • Export/test scripts updated for CUDA quantization args and data path

Copilot AI review requested due to automatic review settings March 2, 2026 22:30
@pytorch-bot

pytorch-bot bot commented Mar 2, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17798

Note: Links to docs will display an error until the docs builds have been completed.

❌ 7 New Failures, 1 Unrelated Failure

As of commit 1e5399a with merge base 25f2a3f:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Mar 2, 2026. (This label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed.)
@github-actions

github-actions bot commented Mar 2, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@mergennachin mergennachin temporarily deployed to upload-benchmark-results March 2, 2026 23:33 — with GitHub Actions Inactive