This protocol defines the communication between aionrs (Rust CLI) and a host client (e.g., AionUi Electron app) via stdin/stdout JSON Lines.
```
┌──────────────┐    stdin (JSON Lines)    ┌──────────────────┐
│              │ ◄─────────────────────── │                  │
│    aionrs    │                          │   Host Client    │
│  (Rust CLI)  │ ───────────────────────► │  (AionUi etc.)   │
│              │    stdout (JSON Lines)   │                  │
└──────────────┘                          └──────────────────┘

stderr → diagnostic logs (not part of the protocol)
```
- Transport: stdin/stdout, one JSON object per line (JSON Lines / NDJSON)
- Encoding: UTF-8
- Activation: `aionrs --json-stream [other flags]`
- Lifecycle: one process per conversation; the process stays alive across multiple turns
Every line is a JSON object with a `type` field.
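The framing can be sketched from the client side with a pair of helpers. These names (`encodeLine`, `decodeChunk`) are illustrative, not part of the protocol, which only mandates one UTF-8 JSON object per line:

```typescript
// Illustrative JSON Lines (NDJSON) framing for a host client.
// Helper names are assumptions; the protocol only specifies
// one JSON object per line over stdin/stdout.

type ProtocolMessage = { type: string; [key: string]: unknown };

// Serialize a command as one newline-terminated protocol line.
function encodeLine(cmd: ProtocolMessage): string {
  return JSON.stringify(cmd) + "\n";
}

// Accumulate stdout chunks and yield complete parsed lines.
// stdout chunks need not align with line boundaries, so any
// trailing partial line is carried over in `rest`.
function decodeChunk(
  carry: string,
  chunk: string
): { events: ProtocolMessage[]; rest: string } {
  const parts = (carry + chunk).split("\n");
  const rest = parts.pop() ?? "";
  const events = parts
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as ProtocolMessage);
  return { events, rest };
}
```

Carrying the partial line matters in practice: the OS delivers stdout in arbitrary chunk sizes, so a `ready` event may arrive split across two reads.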
Emitted once after initialization completes. Client MUST wait for this before sending messages.
```json
{
  "type": "ready",
  "version": "0.1.0",
  "session_id": "a1b2c3",
  "capabilities": {
    "tool_approval": true,
    "thinking": true,
    "mcp": true
  }
}
```

| Field | Type | Description |
|---|---|---|
| `version` | string | Protocol version (semver) |
| `session_id` | string? | Session ID (omitted when sessions are disabled in config) |
| `capabilities.tool_approval` | bool | Whether the agent supports pause-and-wait tool approval |
| `capabilities.thinking` | bool | Whether the agent emits thinking events |
| `capabilities.mcp` | bool | Whether MCP tools are available |
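A client can gate its UI on these capabilities once `ready` arrives. A minimal sketch, with interface and function names of our own choosing:

```typescript
// Illustrative typing of the `ready` event and a capability check.
// Field names follow the table above; the type names are ours.

interface ReadyEvent {
  type: "ready";
  version: string;
  session_id?: string;
  capabilities: {
    tool_approval: boolean;
    thinking: boolean;
    mcp: boolean;
  };
}

// Narrow an incoming protocol message to a ReadyEvent.
function isReady(ev: { type: string }): ev is ReadyEvent {
  return ev.type === "ready";
}

// Example policy: only render a "thinking" panel when the agent
// advertises the capability.
function showThinkingPanel(ev: ReadyEvent): boolean {
  return ev.capabilities.thinking;
}
```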
A new response turn has started.
```json
{
  "type": "stream_start",
  "msg_id": "abc-123"
}
```

Incremental text output (streaming).
```json
{
  "type": "text_delta",
  "text": "Hello, ",
  "msg_id": "abc-123"
}
```

The model's internal reasoning (emitted only if extended thinking is enabled).
```json
{
  "type": "thinking",
  "text": "Let me analyze the code structure...",
  "msg_id": "abc-123"
}
```

The agent wants to invoke a tool and needs client approval. The agent PAUSES execution until it receives `tool_approve` or `tool_deny`.
```json
{
  "type": "tool_request",
  "msg_id": "abc-123",
  "call_id": "tool-call-001",
  "tool": {
    "name": "Write",
    "category": "edit",
    "args": {
      "file_path": "/src/main.rs",
      "content": "fn main() { ... }"
    },
    "description": "Write to /src/main.rs"
  }
}
```

| Field | Type | Description |
|---|---|---|
| `call_id` | string | Unique ID for this tool invocation |
| `tool.name` | string | Tool name: `Read`, `Write`, `Edit`, `Bash`, `Glob`, `Grep`, `Spawn`, or an MCP tool name |
| `tool.category` | string | `"info"` (read-only), `"edit"` (file mutation), `"exec"` (shell), `"mcp"` (MCP tool) |
| `tool.args` | object | Tool arguments |
| `tool.description` | string | Human-readable one-line description |
Category mapping for built-in tools:

| Tool | Category | Rationale |
|---|---|---|
| `Read` | `info` | Read-only file access |
| `Glob` | `info` | Read-only file search |
| `Grep` | `info` | Read-only content search |
| `Write` | `edit` | Creates or overwrites files |
| `Edit` | `edit` | Modifies file content |
| `Bash` | `exec` | Executes shell commands |
| `Spawn` | `exec` | Spawns a sub-agent |
| MCP tools | `mcp` | External MCP server tools |

Note: when `auto_approve = true` (yolo mode) or when a tool is in the `allow_list`, the agent executes immediately and emits `tool_running` directly, skipping `tool_request`.
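Clients normally read `tool.category` straight off the `tool_request`, but when only a tool name is available (e.g. in a history view), the table above can be mirrored client-side. A sketch, with names of our own invention:

```typescript
// Illustrative mirror of the built-in category table, e.g. for
// choosing an approval-dialog style from a tool name alone.

type ToolCategory = "info" | "edit" | "exec" | "mcp";

const BUILTIN_CATEGORIES: Record<string, ToolCategory> = {
  Read: "info",
  Glob: "info",
  Grep: "info",
  Write: "edit",
  Edit: "edit",
  Bash: "exec",
  Spawn: "exec",
};

// Any name not in the built-in table is assumed to come from an
// MCP server, per the table above.
function categoryOf(toolName: string): ToolCategory {
  return BUILTIN_CATEGORIES[toolName] ?? "mcp";
}
```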
Tool execution has started (after approval or auto-approve).
```json
{
  "type": "tool_running",
  "msg_id": "abc-123",
  "call_id": "tool-call-001",
  "tool_name": "Write"
}
```

Tool execution completed.
```json
{
  "type": "tool_result",
  "msg_id": "abc-123",
  "call_id": "tool-call-001",
  "tool_name": "Write",
  "status": "success",
  "output": "File written successfully",
  "output_type": "text"
}
```

| Field | Type | Description |
|---|---|---|
| `status` | string | `"success"` or `"error"` |
| `output` | string | Tool output (truncated if it exceeds the limit) |
| `output_type` | string | `"text"` (default), `"diff"` (for the Edit tool), `"image"` (base64) |
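A client typically dispatches on `output_type` to pick a renderer. The sketch below assumes hypothetical render-target names; only the field values come from the table above:

```typescript
// Illustrative dispatch on `output_type` when rendering a
// tool_result; the render-target names are assumptions.

type ToolResult = {
  type: "tool_result";
  status: "success" | "error";
  output: string;
  output_type?: "text" | "diff" | "image";
};

function renderKind(res: ToolResult): "plain" | "diff-view" | "image-view" {
  switch (res.output_type ?? "text") {
    case "diff":
      return "diff-view";  // e.g. a unified-diff widget
    case "image":
      return "image-view"; // output carries base64 image data
    default:
      return "plain";      // plain text panel
  }
}
```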
Special output for the Edit tool (`output_type: "diff"`):

```json
{
  "type": "tool_result",
  "msg_id": "abc-123",
  "call_id": "tool-call-002",
  "tool_name": "Edit",
  "status": "success",
  "output": "--- a/src/main.rs\n+++ b/src/main.rs\n@@ -1,3 +1,3 @@\n-old line\n+new line",
  "output_type": "diff",
  "metadata": {
    "file_path": "/src/main.rs"
  }
}
```

Tool was denied by the client or cancelled.
```json
{
  "type": "tool_cancelled",
  "msg_id": "abc-123",
  "call_id": "tool-call-001",
  "reason": "User denied"
}
```

The current response turn has finished.
```json
{
  "type": "stream_end",
  "msg_id": "abc-123",
  "usage": {
    "input_tokens": 1500,
    "output_tokens": 320,
    "cache_read_tokens": 800,
    "cache_write_tokens": 200
  }
}
```

An error occurred. The agent may or may not continue, depending on severity.
```json
{
  "type": "error",
  "msg_id": "abc-123",
  "error": {
    "code": "provider_error",
    "message": "Rate limit exceeded",
    "retryable": true
  }
}
```

| Error Code | Description |
|---|---|
| `provider_error` | LLM API error (rate limit, auth, etc.) |
| `tool_error` | Built-in tool execution error |
| `config_error` | Configuration or initialization error |
| `protocol_error` | Invalid command from client |
| `internal_error` | Unexpected internal error |
Informational message (non-critical, for display only).
```json
{
  "type": "info",
  "msg_id": "abc-123",
  "message": "Stream interrupted, retrying... (1/2)"
}
```

Every line is a JSON object with a `type` field.
Send a user message. Agent responds with a stream of events.
```json
{
  "type": "message",
  "msg_id": "abc-123",
  "input": "Read the file src/main.rs and explain the code",
  "files": ["/path/to/attached/file.png"]
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `msg_id` | string | yes | Client-generated unique message ID |
| `input` | string | yes | User's message text |
| `files` | string[] | no | Attached file paths (images, documents) |
Abort the current response stream.
```json
{
  "type": "stop"
}
```

The agent MUST:

- Cancel any in-flight LLM request
- Cancel any running tool (if possible)
- Emit `stream_end` for the current `msg_id`
Approve a pending tool execution.
```json
{
  "type": "tool_approve",
  "call_id": "tool-call-001",
  "scope": "once"
}
```

| Field | Type | Description |
|---|---|---|
| `call_id` | string | Must match a pending `tool_request` |
| `scope` | string | `"once"` = this call only; `"always"` = auto-approve this tool's category for the session |

When `scope = "always"`, the agent adds the tool's category to the session allow-list, so future calls of the same category skip approval.
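The agent holds the authoritative allow-list, but a client may mirror the user's choices, e.g. to show which categories are currently auto-approved. A sketch with class and method names of our own choosing:

```typescript
// Illustrative client-side mirror of scope = "always" decisions.
// The agent's allow-list is authoritative; this is UI bookkeeping.

type ToolCategory = "info" | "edit" | "exec" | "mcp";

class ApprovalTracker {
  private allowed = new Set<ToolCategory>();

  // Record the user's choice for an approved tool_request.
  approve(category: ToolCategory, scope: "once" | "always"): void {
    if (scope === "always") this.allowed.add(category);
  }

  // Is this category expected to skip future approval prompts?
  isAutoApproved(category: ToolCategory): boolean {
    return this.allowed.has(category);
  }
}
```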
Deny a pending tool execution.
```json
{
  "type": "tool_deny",
  "call_id": "tool-call-001",
  "reason": "Not allowed to write this file"
}
```

The agent MUST:

- Emit a `tool_cancelled` event
- Feed the denial reason back to the LLM as the tool result
- Continue the conversation (the LLM decides the next action)
Inject prior conversation context (for conversation resume).
```json
{
  "type": "init_history",
  "text": "Previous conversation summary:\nUser asked about X...\nAssistant replied with Y..."
}
```

Must be sent BEFORE the first `message` command. The agent incorporates this text as conversation context.
Change the agent's approval mode for the session.
```json
{
  "type": "set_mode",
  "mode": "yolo"
}
```

| Mode | Behavior |
|---|---|
| `"default"` | All tools need approval (except allow-listed) |
| `"auto_edit"` | `info` and `edit` auto-approved; `exec` and `mcp` need approval |
| `"yolo"` | All tools auto-approved |
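The mode table above can be encoded as a single predicate. This sketch captures only what the table states; the function name is ours, and the session allow-list (which can override `"default"`) is left out:

```typescript
// Illustrative encoding of the mode table: given the current mode
// and a tool's category, does the agent need client approval?
// Allow-list overrides are omitted for brevity.

type ToolCategory = "info" | "edit" | "exec" | "mcp";
type ApprovalMode = "default" | "auto_edit" | "yolo";

function needsApproval(mode: ApprovalMode, category: ToolCategory): boolean {
  switch (mode) {
    case "yolo":
      return false; // all tools auto-approved
    case "auto_edit":
      return category === "exec" || category === "mcp";
    default:
      return true; // everything prompts (allow-list aside)
  }
}
```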
The client spawns:

```
aionrs --json-stream \
  --provider anthropic \
  --model claude-sonnet-4-20250514 \
  --max-tokens 8192 \
  --max-turns 30
```
Environment variables set by the client:

```
ANTHROPIC_API_KEY=sk-...
# or OPENAI_API_KEY, AWS_REGION, etc.
```
Agent initializes → stdout: `{"type":"ready","session_id":"a1b2c3",...}`
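From a Node/Electron host, the spawn boils down to building the argument vector and wiring the pipes. A sketch under the flags shown above; the helper name is ours, and the actual `spawn` call is shown in a comment since it needs the `aionrs` binary on PATH:

```typescript
// Illustrative construction of spawn arguments for aionrs;
// the helper name is an assumption, flag values come from above.

function buildArgs(provider: string, model: string): string[] {
  return [
    "--json-stream",
    "--provider", provider,
    "--model", model,
    "--max-tokens", "8192",
    "--max-turns", "30",
  ];
}

// Spawning then looks like (requires the aionrs binary on PATH):
//   import { spawn } from "node:child_process";
//   const child = spawn("aionrs", buildArgs("anthropic", "claude-sonnet-4-20250514"), {
//     env: { ...process.env, ANTHROPIC_API_KEY: "sk-..." },
//     stdio: ["pipe", "pipe", "pipe"], // stdin, stdout, stderr
//   });
```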
Session lifecycle flags (mutually exclusive):
| Flag | Description |
|---|---|
| `--session-id <ID>` | Use a specific session ID instead of auto-generating one. Errors if the ID already exists. |
| `--resume <ID>` | Resume a previous session (loads conversation history). Use `latest` to resume the most recent. |
```
# New session with a custom ID
aionrs --json-stream --session-id my-conv-123 --provider openai --model gpt-4o

# Resume an existing session
aionrs --json-stream --resume my-conv-123 --provider openai --model gpt-4o
```

```
Client → stdin:  {"type":"message","msg_id":"m1","input":"Hello"}
Agent → stdout:  {"type":"stream_start","msg_id":"m1"}
Agent → stdout:  {"type":"text_delta","text":"Hi! ","msg_id":"m1"}
Agent → stdout:  {"type":"text_delta","text":"How can I help?","msg_id":"m1"}
Agent → stdout:  {"type":"stream_end","msg_id":"m1","usage":{...}}
```
```
Client → stdin:  {"type":"message","msg_id":"m2","input":"Create a hello.rs file"}
Agent → stdout:  {"type":"stream_start","msg_id":"m2"}
Agent → stdout:  {"type":"text_delta","text":"I'll create the file.","msg_id":"m2"}
Agent → stdout:  {"type":"tool_request","msg_id":"m2","call_id":"t1","tool":{"name":"Write","category":"edit",...}}

          ← Agent PAUSES here, waiting for approval →

Client → stdin:  {"type":"tool_approve","call_id":"t1","scope":"once"}
Agent → stdout:  {"type":"tool_running","msg_id":"m2","call_id":"t1","tool_name":"Write"}
Agent → stdout:  {"type":"tool_result","msg_id":"m2","call_id":"t1","status":"success",...}
Agent → stdout:  {"type":"text_delta","text":"File created successfully.","msg_id":"m2"}
Agent → stdout:  {"type":"stream_end","msg_id":"m2","usage":{...}}
```
When the LLM requests multiple tools in one turn, the agent emits multiple `tool_request` events. The client can approve or deny them independently.

```
Agent → stdout:  {"type":"tool_request","call_id":"t1","tool":{"name":"Read","category":"info",...}}
Agent → stdout:  {"type":"tool_request","call_id":"t2","tool":{"name":"Read","category":"info",...}}
Client → stdin:  {"type":"tool_approve","call_id":"t1","scope":"once"}
Client → stdin:  {"type":"tool_approve","call_id":"t2","scope":"once"}
Agent → stdout:  {"type":"tool_running","call_id":"t1",...}
Agent → stdout:  {"type":"tool_running","call_id":"t2",...}
Agent → stdout:  {"type":"tool_result","call_id":"t1",...}
Agent → stdout:  {"type":"tool_result","call_id":"t2",...}
```
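Because results can interleave, the client needs to correlate each `tool_result` or `tool_cancelled` back to its `tool_request` by `call_id`. A minimal sketch; the class and field names are ours:

```typescript
// Illustrative correlation of parallel tool calls by call_id,
// so each result can be matched to its originating request.

type PendingCall = { callId: string; toolName: string };

class CallTracker {
  private pending = new Map<string, PendingCall>();

  // Called on tool_request.
  onRequest(callId: string, toolName: string): void {
    this.pending.set(callId, { callId, toolName });
  }

  // Called on tool_result or tool_cancelled; returns the
  // originating request, if any, and clears the entry.
  onSettled(callId: string): PendingCall | undefined {
    const call = this.pending.get(callId);
    this.pending.delete(callId);
    return call;
  }

  get open(): number {
    return this.pending.size;
  }
}
```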
Client closes stdin (EOF) or sends SIGTERM. Agent cleans up and exits.
If the client sends malformed JSON or an unknown command type:

```json
{
  "type": "error",
  "msg_id": null,
  "error": {
    "code": "protocol_error",
    "message": "Unknown command type: foo",
    "retryable": false
  }
}
```

The agent should emit `error` and let the conversation continue if possible:
```json
{
  "type": "error",
  "msg_id": "m3",
  "error": {
    "code": "provider_error",
    "message": "Rate limit exceeded. Retry after 30s.",
    "retryable": true
  }
}
```

For unrecoverable errors, the agent emits `error` and exits with a non-zero status:
```json
{
  "type": "error",
  "msg_id": null,
  "error": {
    "code": "config_error",
    "message": "ANTHROPIC_API_KEY not set",
    "retryable": false
  }
}
```

When spawned in `--json-stream` mode, all configuration is passed via CLI flags and environment variables:
```
aionrs --json-stream \
  --provider <anthropic|openai|bedrock|vertex> \
  --model <model-id> \
  --max-tokens <N> \
  --max-turns <N> \
  --base-url <URL> \
  --system-prompt <TEXT> \
  --auto-approve \      # Start in yolo mode
  --workspace <PATH>    # Working directory for file operations
```

Environment variables (set by the client before spawning):
| Provider | Variables |
|---|---|
| Anthropic | `ANTHROPIC_API_KEY`, `ANTHROPIC_BASE_URL` |
| OpenAI | `OPENAI_API_KEY`, `OPENAI_BASE_URL` |
| Bedrock | `AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_PROFILE` |
| Vertex AI | `GOOGLE_APPLICATION_CREDENTIALS`, `VERTEX_PROJECT_ID`, `VERTEX_REGION` |
The `ready` event includes a `version` field. Clients should check version compatibility.
- Minor version bump: New optional event types or fields added (backward compatible)
- Major version bump: Breaking changes to existing events/commands
Current version: 0.1.0