sdk/guides/llm-profile-store.mdx

## Benefits
- **Persistence:** Saves model parameters (API keys, temperature, max tokens, ...) to a stable disk format.
- **Reusability:** Import a defined profile into any script or session with a single identifier.
- **Portability:** Simplifies the synchronization of model configurations across different machines or deployment environments.
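
To make these benefits concrete, here is a minimal stdlib-only sketch of what a disk-backed profile store does conceptually. The `SimpleProfileStore` class, its JSON file layout, and its method names are illustrative assumptions for this guide, not the SDK's actual `LLMProfileStore` implementation:

```python
# Conceptual sketch only: a minimal disk-backed profile store.
# The class name, file layout, and methods are illustrative assumptions,
# not the SDK's LLMProfileStore implementation.
import json
from pathlib import Path


class SimpleProfileStore:
    def __init__(self, root: str = "/tmp/llm-profiles"):
        self.root = Path(root).expanduser()
        self.root.mkdir(parents=True, exist_ok=True)

    def save(self, name: str, params: dict) -> None:
        # Persist model parameters (model, temperature, max tokens, ...)
        # to a stable JSON file named after the profile identifier.
        (self.root / f"{name}.json").write_text(json.dumps(params, indent=2))

    def load(self, name: str) -> dict:
        # Import a saved profile by its identifier, in any script or session.
        return json.loads((self.root / f"{name}.json").read_text())


store = SimpleProfileStore()
store.save("gpt", {"model": "openhands/gpt-5.2", "temperature": 0.0})
print(store.load("gpt")["model"])
```

The same save-by-name, load-by-name round trip is what the real store provides, with secrets handling layered on top.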

## How It Works

<RunExampleCode path_to_script="examples/01_standalone_sdk/37_llm_profile_store.py"/>

## Mid-Conversation Model Switching

You can use a saved profile to switch the active model of a running conversation between turns. This is useful when you want to start with one model and then switch to another for later user messages, while keeping the same conversation history and combined usage metrics.

<Note>
This example is available on GitHub: [examples/01_standalone_sdk/44_model_switching_in_convo.py](https://github.com/OpenHands/software-agent-sdk/blob/main/examples/01_standalone_sdk/44_model_switching_in_convo.py)
</Note>

```python icon="python" expandable examples/01_standalone_sdk/44_model_switching_in_convo.py
"""Mid-conversation model switching.

Usage:
uv run examples/01_standalone_sdk/44_model_switching_in_convo.py
"""

import os

from openhands.sdk import LLM, Agent, LocalConversation, Tool
from openhands.sdk.llm.llm_profile_store import LLMProfileStore
from openhands.tools.terminal import TerminalTool


LLM_API_KEY = os.getenv("LLM_API_KEY")
store = LLMProfileStore()

store.save(
"gpt",
LLM(model="openhands/gpt-5.2", api_key=LLM_API_KEY),
include_secrets=True,
)

agent = Agent(
llm=LLM(
model=os.getenv("LLM_MODEL", "openhands/claude-sonnet-4-5-20250929"),
api_key=LLM_API_KEY,
),
tools=[Tool(name=TerminalTool.name)],
)
conversation = LocalConversation(agent=agent, workspace=os.getcwd())

# Send a message with the default model
conversation.send_message("Say hello in one sentence.")
conversation.run()

# Switch to a different model and send another message
conversation.switch_profile("gpt")
print(f"Switched to: {conversation.agent.llm.model}")

conversation.send_message("Say goodbye in one sentence.")
conversation.run()

# Print metrics per model
for usage_id, metrics in conversation.state.stats.usage_to_metrics.items():
print(f" [{usage_id}] cost=${metrics.accumulated_cost:.6f}")

combined = conversation.state.stats.get_combined_metrics()
print(f"Total cost: ${combined.accumulated_cost:.6f}")
print(f"EXAMPLE_COST: {combined.accumulated_cost}")

store.delete("gpt")
```

<RunExampleCode path_to_script="examples/01_standalone_sdk/44_model_switching_in_convo.py"/>
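
The cost reporting at the end of the example keeps one metrics bucket per usage ID (one per model) and then combines them into a single total. A small stdlib-only sketch of that bookkeeping is below; the `Metrics` dataclass and the sample cost values are illustrative assumptions, not the SDK's types:

```python
# Illustrative sketch of per-model usage metrics combined into one total.
# The Metrics class and the sample costs are assumptions, not SDK types.
from dataclasses import dataclass


@dataclass
class Metrics:
    accumulated_cost: float = 0.0


# One metrics bucket per usage ID, i.e. per model used in the conversation.
usage_to_metrics = {
    "claude-sonnet-4-5": Metrics(accumulated_cost=0.0123),
    "gpt": Metrics(accumulated_cost=0.0045),
}

# Combining sums the per-model costs into a single conversation-wide total.
combined = Metrics(
    accumulated_cost=sum(m.accumulated_cost for m in usage_to_metrics.values())
)

for usage_id, metrics in usage_to_metrics.items():
    print(f"  [{usage_id}] cost=${metrics.accumulated_cost:.6f}")
print(f"Total cost: ${combined.accumulated_cost:.6f}")
```

Because each model accumulates into its own bucket, switching models mid-conversation never loses track of which model incurred which cost.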

## Next Steps

- **[LLM Registry](/sdk/guides/llm-registry)** - Manage multiple LLMs in memory at runtime
- **[LLM Routing](/sdk/guides/llm-routing)** - Automatically route to different models
- **[Exception Handling](/sdk/guides/llm-error-handling)** - Handle LLM errors gracefully