Summary
@agent_control.control() silently breaks when applied to streaming (async generator) functions, the standard pattern for streaming LLM responses. Controls are either bypassed entirely or the decorator crashes at runtime.
Motivation
Most LLM applications stream responses via async generators. If @control() can't protect these, output controls (toxicity, PII, content policy) are ineffective for the majority of real-world usage. Harmful content streams directly to users unevaluated.
Current behavior
Async generator → crash: the decorator misclassifies it as sync and calls asyncio.run() inside an already-running event loop, raising RuntimeError
Async generator → identity lost: inspect.isasyncgenfunction(decorated) returns False, breaking framework introspection
Function returning an async iterator → garbage evaluation: the post-check evaluates the repr "<async_generator object at 0x...>" instead of the actual streamed content
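The misclassification follows directly from how the standard library categorizes async generator functions. A quick check (stream is a hypothetical handler, standard library only):

```python
import inspect

async def stream():
    """A typical streaming handler: async def with yield."""
    yield "chunk"

# An async generator function is NOT a coroutine function, so a decorator
# that only branches on iscoroutinefunction sends it down the sync path.
print(inspect.iscoroutinefunction(stream))  # False
print(inspect.isasyncgenfunction(stream))   # True
```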
Expected behavior
Async generators remain async generators after decoration
Pre-check runs before the first chunk; chunks are yielded in real time
Post-check runs on the full accumulated output after the stream completes
Deny/steer actions raise the appropriate errors after the stream ends
Reproduction
Apply @agent_control.control() to an async def function containing yield
Call it from an async context
Observe RuntimeError: asyncio.run() cannot be called from a running event loop
Existing tests: TestStreamingLimitations in sdks/python/tests/test_control_decorators.py
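Since the SDK itself isn't needed to show the failure mode, here is a minimal stand-in reproduction. naive_control is a hypothetical decorator that mirrors the bug: the async generator fails the coroutine check, falls into the sync path, and asyncio.run() refuses to start inside a running loop:

```python
import asyncio
import inspect

def naive_control(func):
    """Stand-in for the buggy decorator's dispatch logic."""
    if inspect.iscoroutinefunction(func):  # False for async generators
        return func
    def sync_wrapper(*args, **kwargs):
        # The sync path drives the "function" with asyncio.run(), which
        # raises RuntimeError if an event loop is already running.
        return asyncio.run(func(*args, **kwargs))
    return sync_wrapper

@naive_control
async def stream_tokens():
    yield "hello"

caught = None

async def main():
    global caught
    try:
        stream_tokens()
    except RuntimeError as exc:
        caught = exc

asyncio.run(main())
print(type(caught).__name__)  # RuntimeError
```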
Proposed solution
Add an async generator wrapper path in control():
if inspect.isasyncgenfunction(func):
    return async_gen_wrapper  # new path
elif inspect.iscoroutinefunction(func):
    return async_wrapper
return sync_wrapper
Here async_gen_wrapper runs the pre-check, yields chunks while accumulating them, then runs the post-check on the joined output.
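A minimal sketch of that wrapper, assuming pre_check/post_check callables stand in for the SDK's real control hooks (names and signatures are illustrative, not the actual agent_control API):

```python
import asyncio
import functools
import inspect

def control(pre_check=None, post_check=None):
    """Hypothetical sketch of the proposed async-generator path."""
    def decorator(func):
        if inspect.isasyncgenfunction(func):
            @functools.wraps(func)
            async def async_gen_wrapper(*args, **kwargs):
                if pre_check:
                    pre_check()                   # runs before the first chunk
                chunks = []
                async for chunk in func(*args, **kwargs):
                    chunks.append(chunk)          # accumulate while streaming
                    yield chunk                   # real-time passthrough
                if post_check:
                    post_check("".join(chunks))   # evaluate full joined output
            return async_gen_wrapper
        # ... existing coroutine / sync paths unchanged ...
        return func
    return decorator

seen = []

@control(post_check=seen.append)
async def stream():
    for tok in ("harm", "less"):
        yield tok

async def main():
    return [c async for c in stream()]

out = asyncio.run(main())
print(inspect.isasyncgenfunction(stream))  # True: generator identity preserved
print(seen)  # ['harmless']: post-check saw the accumulated output
```

Because the wrapper itself contains yield, it is a genuine async generator function, so inspect.isasyncgenfunction(decorated) stays True and framework introspection keeps working.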