
Conversation

@askmyteapot
Contributor

@askmyteapot askmyteapot commented Jan 5, 2026

This allows torchaudio to build under GCC 14.2.
It fixes the following error:

/home/media/audio/src/libtorchaudio/rnnt/gpu/compute.cu: In function ‘void torchaudio::rnnt::gpu::STABLE_TORCH_LIBRARY_IMPL_init_torchaudio_CUDA_0(torch::stable::detail::StableLibrary&)’:
/home/media/audio/src/libtorchaudio/rnnt/gpu/compute.cu:134:350: error: default arguments are only permitted for function parameters [-fpermissive]
  134 |   m.impl("rnnt_loss_forward", TORCH_BOX(&compute));
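For context, GCC's message refers to a general C++ rule: a default argument may only be written on a function parameter in a declaration or definition, not in other declarators such as a function-pointer type, which is the kind of position a signature can end up in once a wrapper macro like TORCH_BOX expands it. A minimal sketch (hypothetical code, not the torchaudio sources) that reproduces the same diagnostic:

// g++ -fsyntax-only sketch.cpp
#include <tuple>

// OK: the default argument sits on a function parameter.
std::tuple<int, int> compute(bool fused_log_softmax = true);

// Ill-formed: uncommenting the next line makes GCC report
// "default arguments are only permitted for function parameters",
// because the default now appears inside a function-pointer declarator.
// std::tuple<int, int> (*boxed)(bool fused_log_softmax = true) = &compute;

int main() { return 0; }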

@askmyteapot askmyteapot requested a review from a team as a code owner January 5, 2026 07:03
@pytorch-bot

pytorch-bot bot commented Jan 5, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/audio/4163

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 37e54d4 with merge base ad99271:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla

meta-cla bot commented Jan 5, 2026

Hi @askmyteapot!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@meta-cla

meta-cla bot commented Jan 5, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@meta-cla meta-cla bot added the CLA Signed label Jan 5, 2026
Collaborator

@pearu pearu left a comment


Introducing compute_default may be unnecessary. Try using the following patch instead:

diff --git a/src/libtorchaudio/rnnt/cpu/compute.cpp b/src/libtorchaudio/rnnt/cpu/compute.cpp
index c8b0f473..a927b5ac 100644
--- a/src/libtorchaudio/rnnt/cpu/compute.cpp
+++ b/src/libtorchaudio/rnnt/cpu/compute.cpp
@@ -21,7 +21,7 @@ std::tuple<Tensor, Tensor> compute(
     Tensor target_lengths,
     int64_t blank,
     double clamp,
-    bool fused_log_softmax = true) {
+    bool fused_log_softmax) {
   STD_TORCH_CHECK(logits.is_cpu(), "logits must be on CPU");
 
   STD_TORCH_CHECK(
diff --git a/src/libtorchaudio/rnnt/gpu/compute.cu b/src/libtorchaudio/rnnt/gpu/compute.cu
index 03bad83b..7e99fec3 100644
--- a/src/libtorchaudio/rnnt/gpu/compute.cu
+++ b/src/libtorchaudio/rnnt/gpu/compute.cu
@@ -21,7 +21,7 @@ std::tuple<Tensor, Tensor> compute(
     Tensor target_lengths,
     int64_t blank,
     double clamp,
-    bool fused_log_softmax = true) {
+    bool fused_log_softmax) {
   STD_TORCH_CHECK(logits.is_cuda(), "logits must be on CUDA");
 
   STD_TORCH_CHECK(

Also, notice that the build fails while compiling a .cu file, so the issue may not be related to the GCC version. What is the output of nvcc --version?
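For reference, the pattern the patch moves to is the conventional split: the default argument is stated exactly once, on the declaration, and the out-of-line definition repeats the signature without it, so the default never leaks into contexts where it is not permitted. A minimal sketch with hypothetical names, not the torchaudio headers:

// Declaration: the default argument appears here, exactly once.
int compute(bool fused_log_softmax = true);

// Definition: the signature is repeated without "= true".
int compute(bool fused_log_softmax) {
    return fused_log_softmax ? 1 : 0;
}

int main() {
    return compute();  // callers still get the default from the declaration
}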

@askmyteapot
Contributor Author

> Also, notice that the build fails while compiling a .cu file, so the issue may not be related to the GCC version. What is the output of nvcc --version?

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Fri_Feb_21_20:23:50_PST_2025
Cuda compilation tools, release 12.8, V12.8.93
Build cuda_12.8.r12.8/compiler.35583870_0

Collaborator

@pearu pearu left a comment


Please apply the change to src/libtorchaudio/rnnt/cpu/compute.cpp as well. There is no reason to keep the CPU and GPU compute signatures different.

@askmyteapot
Contributor Author

Done. Apologies for the extra steps; I'm doing this remotely.

Collaborator

@pearu pearu left a comment


LGTM! Confirming that the issue exists when nvcc uses GCC version 14. Thanks, @askmyteapot!

@pearu pearu merged commit e123269 into pytorch:main Jan 7, 2026
54 checks passed
