
chore: pin GHA by commit #5415

Merged
davidzhao merged 2 commits into main from dz/pin-gha-commit on Apr 11, 2026
Conversation

@davidzhao
Member

No description provided.
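The PR carries no description, but the title names a common supply-chain hardening step: referencing each third-party GitHub Action by a full commit SHA rather than a mutable tag, so that a retagged release cannot silently change the code CI runs. An illustrative sketch of what such a pin looks like (the SHA below is a placeholder, not taken from this diff):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Before: a mutable tag — the action's maintainer can retarget it
      # - uses: actions/checkout@v4

      # After: an immutable 40-character commit SHA, with the tag kept
      # as a trailing comment for readability (placeholder SHA shown)
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4
```

Tools such as Dependabot can keep the trailing version comment in sync when bumping the pinned SHA.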

@davidzhao davidzhao requested a review from a team April 11, 2026 20:14
Contributor

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 3 additional findings.


@github-actions
Contributor

github-actions bot commented Apr 11, 2026

STT Test Results

Status: ✗ Some tests failed

Metric      Count
✓ Passed    19
✗ Failed    4
× Errors    1
→ Skipped   15
▣ Total     39
⏱ Duration  195.5s
Failed Tests
  • tests.test_stt::test_recognize[livekit.plugins.google]
    self = <google.api_core.grpc_helpers_async._WrappedUnaryUnaryCall object at 0x7f6c3d588fb0>
    
        def __await__(self) -> Iterator[P]:
            try:
    >           response = yield from self._call.__await__()
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    .venv/lib/python3.12/site-packages/google/api_core/grpc_helpers_async.py:86: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <grpc.aio._interceptor.InterceptedUnaryUnaryCall object at 0x7f6c3d58a9c0>
    
        def __await__(self):
            call = yield from self._interceptors_task.__await__()
    >       response = yield from call.__await__()
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    .venv/lib/python3.12/site-packages/grpc/aio/_interceptor.py:474: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <_AioCall of RPC that terminated with:
    	status = Request had invalid authentication credentials. Expected OAuth 2 acce... other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."}"
    >
    
        def __await__(self) -> Generator[Any, None, ResponseType]:
            """Wait till the ongoing RPC request finishes."""
            try:
                response = yield from self._call_response
            except asyncio.CancelledError:
                # Even if we caught all other CancelledError, there is still
                # this corner case. If the application cancels immediately after
                # the Call object is created, we will observe this
                # `CancelledError`.
                if not self.cancelled():
                    self.cancel()
                raise
      
            # NOTE(lidiz) If we raise RpcError in the task, and users doesn't
            # 'await' on it. AsyncIO will log 'Task exception was never retrieved'.
            # Instead, if we move the exception raising here, the spam stops.
            # Unfortunately, there can only be one 'yield from' in '__await__'. So,
            # we need to access the private instanc
    
  • tests.test_stt::test_stream[livekit.plugins.speechmatics]
    def finalizer() -> None:
            """Yield again, to finalize."""
      
            async def async_finalizer() -> None:
                try:
                    await gen_obj.__anext__()
                except StopAsyncIteration:
                    pass
                else:
                    msg = "Async generator fixture didn't stop."
                    msg += "Yield only once."
                    raise ValueError(msg)
      
    >       runner.run(async_finalizer(), context=context)
    
    .venv/lib/python3.12/site-packages/pytest_asyncio/plugin.py:330: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <asyncio.runners.Runner object at 0x7f6c3d588c80>
    coro = <coroutine object _wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.finalizer.<locals>.async_finalizer at 0x7f6c4ae906c0>
    
        def run(self, coro, *, context=None):
            """Run a coroutine inside the embedded event loop."""
            if not coroutines.iscoroutine(coro):
                raise ValueError("a coroutine was expected, got {!r}".format(coro))
      
            if events._get_running_loop() is not None:
                # fail fast with short traceback
                raise RuntimeError(
                    "Runner.run() cannot be called from a running event loop")
      
            self._lazy_init()
      
            if context is None:
                context = self._context
            task = self._loop.create_task(coro, context=context)
      
            if (threading.current_thread() is threading.main_thread()
                and signal.getsignal(signal.SIGINT) is signal.default_int_handler
            ):
                sigint_handler = functools.partial(self._on_sigint, main_task=task)
                try:
                    signal.signal(signal.SIGINT, sigint_handler)
                except ValueError:
                    # `signal.signal` may throw if `threading.main_thread` does
                    # not support signals (e.g. embedded interpreter with signals
                    # not registered - see gh-91880)
                    sigint_handler = None
    
  • tests.test_stt::test_stream[livekit.plugins.nvidia]
    stt_factory = <function parameter_factory.<locals>.<lambda> at 0x7f6c3d97f6a0>
    request = <FixtureRequest for <Coroutine test_stream[livekit.plugins.nvidia]>>
    
        @pytest.mark.usefixtures("job_process")
        @pytest.mark.parametrize("stt_factory", STTs)
        async def test_stream(stt_factory: Callable[[], STT], request):
            sample_rate = SAMPLE_RATE
            plugin_id = request.node.callspec.id.split("-")[0]
            frames, transcript, _ = await make_test_speech(chunk_duration_ms=10, sample_rate=sample_rate)
      
            # TODO: differentiate missing key vs other errors
            try:
                stt_instance: STT = stt_factory()
            except ValueError as e:
                pytest.skip(f"{plugin_id}: {e}")
      
            async with stt_instance as stt:
                label = f"{stt.model}@{stt.provider}"
                if not stt.capabilities.streaming:
                    pytest.skip(f"{label} does not support streaming")
      
                for attempt in range(MAX_RETRIES):
                    try:
                        state = {"closing": False}
      
                        async def _stream_input(
                            frames: list[rtc.AudioFrame], stream: RecognizeStream, state: dict = state
                        ):
                            for frame in frames:
                                stream.push_frame(frame)
                                await asyncio.sleep(0.005)
      
                            stream.end_input()
                            state["closing"] = True
      
                        async def _stream_output(stream: RecognizeStream, state: dict = state):
                            text = ""
                            # make sure the events are sent in the right order
                            recv_start, recv_end = False, True
                            start_time = time.time()
                            got_final_transcript = False
      
                            async for event in stream:
                                if event.type == agents.stt.SpeechEventType.START_OF_SPEECH:
    
  • tests.test_stt::test_stream[livekit.agents.inference]
    stt_factory = <function parameter_factory.<locals>.<lambda> at 0x7f30a40336a0>
    request = <FixtureRequest for <Coroutine test_stream[livekit.agents.inference]>>
    
        @pytest.mark.usefixtures("job_process")
        @pytest.mark.parametrize("stt_factory", STTs)
        async def test_stream(stt_factory: Callable[[], STT], request):
            sample_rate = SAMPLE_RATE
            plugin_id = request.node.callspec.id.split("-")[0]
            frames, transcript, _ = await make_test_speech(chunk_duration_ms=10, sample_rate=sample_rate)
      
            # TODO: differentiate missing key vs other errors
            try:
                stt_instance: STT = stt_factory()
            except ValueError as e:
                pytest.skip(f"{plugin_id}: {e}")
      
            async with stt_instance as stt:
                label = f"{stt.model}@{stt.provider}"
                if not stt.capabilities.streaming:
                    pytest.skip(f"{label} does not support streaming")
      
                for attempt in range(MAX_RETRIES):
                    try:
                        state = {"closing": False}
      
                        async def _stream_input(
                            frames: list[rtc.AudioFrame], stream: RecognizeStream, state: dict = state
                        ):
                            for frame in frames:
                                stream.push_frame(frame)
                                await asyncio.sleep(0.005)
      
                            stream.end_input()
                            state["closing"] = True
      
                        async def _stream_output(stream: RecognizeStream, state: dict = state):
                            text = ""
                            # make sure the events are sent in the right order
                            recv_start, recv_end = False, True
                            start_time = time.time()
                            got_final_transcript = False
      
                            async for event in stream:
                                if event.type == agents.stt.SpeechEventType.START_OF_SPEECH:
    
  • tests.test_stt::test_stream[livekit.plugins.google]
    self = <google.api_core.grpc_helpers_async._WrappedStreamStreamCall object at 0x7f0f002f66f0>
    
        async def _wrapped_aiter(self) -> AsyncGenerator[P, None]:
            try:
                # NOTE(lidiz) coverage doesn't understand the exception raised from
                # __anext__ method. It is covered by test case:
                #     test_wrap_stream_errors_aiter_non_rpc_error
    >           async for response in self._call:  # pragma: no branch
    
    .venv/lib/python3.12/site-packages/google/api_core/grpc_helpers_async.py:107: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <_AioCall of RPC that terminated with:
    	status = Request had invalid authentication credentials. Expected OAuth 2 acce... other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."}"
    >
    
        async def _fetch_stream_responses(self) -> ResponseType:
            message = await self._read()
            while message is not cygrpc.EOF:
                yield message
                message = await self._read()
      
            # If the read operation failed, Core should explain why.
    >       await self._raise_for_status()
    
    .venv/lib/python3.12/site-packages/grpc/aio/_call.py:364: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <_AioCall of RPC that terminated with:
    	status = Request had invalid authentication credentials. Expected OAuth 2 acce... other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."}"
    >
    
        async def _raise_for_status(self) -> None:
            if self._cython_call.is_locally_cancelled():
                raise asyncio.CancelledError()
            code = await self.code()
            if code != grpc.StatusCode.OK:
    >           raise _create_rpc_error(
                    await self.initial_metadata(),
                    await self._cython_call.status(),
                )
    E           grpc.aio._call.AioRpcError: <AioRpcError of RPC that terminated with
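The Google STT failures above all terminate with the same gRPC status, "Request had invalid authentication credentials. Expected OAuth 2 access token…", i.e. a CI credentials problem rather than a code bug. A minimal sketch of the usual local fix, assuming the client resolves Google credentials via Application Default Credentials (the key path below is hypothetical):

```shell
# Point the well-known ADC env var at a valid service-account key file
# (hypothetical path — substitute your own key location).
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/service-account.json"
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```

In CI the equivalent is typically a repository secret exposed to the workflow, rather than a file path baked into the job.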
    
Skipped Tests

Test — Reason
tests.test_stt::test_recognize[livekit.plugins.assemblyai] universal-streaming-english@AssemblyAI does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.speechmatics] enhanced@Speechmatics does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.fireworksai] unknown@FireworksAI does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.cartesia] ink-whisper@Cartesia does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.soniox] stt-rt-v4@Soniox does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.aws] unknown@Amazon Transcribe does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.deepgram.STTv2] flux-general-en@Deepgram does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.gradium.STT] unknown@Gradium does not support batch recognition
tests.test_stt::test_recognize[livekit.agents.inference] unknown@livekit does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.azure] unknown@Azure STT does not support batch recognition
tests.test_stt::test_recognize[livekit.plugins.nvidia] unknown@unknown does not support batch recognition
tests.test_stt::test_stream[livekit.plugins.elevenlabs] scribe_v1@ElevenLabs does not support streaming
tests.test_stt::test_stream[livekit.plugins.mistralai] voxtral-mini-latest@MistralAI does not support streaming
tests.test_stt::test_stream[livekit.plugins.fal] Wizper@Fal does not support streaming
tests.test_stt::test_stream[livekit.plugins.openai] gpt-4o-mini-transcribe@api.openai.com does not support streaming

Triggered by workflow run #1647

@davidzhao davidzhao merged commit 2df1de7 into main Apr 11, 2026
25 of 27 checks passed
@davidzhao davidzhao deleted the dz/pin-gha-commit branch April 11, 2026 20:19

2 participants