FAQ
Common questions about ConnectOnion and AI agents. Can't find your question? Ask in our Discord community.
What is ConnectOnion?
ConnectOnion is a Python framework for building AI agents that can use tools, make decisions, and complete tasks. It makes it easy to create agents that interact with APIs, databases, files, and more.
Key features:
- Simple 2-line agent creation
- Functions automatically become tools
- Interactive debugging with auto_debug()
- Built-in behavior tracking
- Multi-model support (OpenAI, Anthropic, Google)
Is ConnectOnion free?
Yes! ConnectOnion is open source and free to use. However, you'll need API keys from LLM providers (OpenAI, Anthropic, etc.), which have their own pricing.
Cost breakdown:
- ConnectOnion framework: Free
- OpenAI API: Pay per token
- Anthropic API: Pay per token
- Google Gemini: Has a free tier
How does ConnectOnion compare to LangChain?
ConnectOnion:
- Simpler API - create agents in 2 lines
- Built-in interactive debugging
- Focus on simplicity over features
- Smaller, easier to understand codebase
LangChain:
- More features and integrations
- Larger ecosystem
- Steeper learning curve
- More complex abstractions
Use ConnectOnion if you want simplicity and fast development. Use LangChain if you need extensive pre-built integrations.
What Python version do I need?
Python 3.9 or higher is required.
Check your version:
```shell
python --version
```
Upgrade if needed:
```shell
# Mac/Linux
brew install python@3.9

# Windows: download the installer from python.org
```
Can I use ConnectOnion commercially?
Yes! ConnectOnion is MIT licensed - use it for anything, including commercial projects.
How do I install ConnectOnion?
```shell
pip install connectonion
```
That's it! See the Quick Start Guide for next steps.
Do I need an API key?
Yes, you need an API key from a supported LLM provider:
- OpenAI (recommended for beginners)
- Anthropic (Claude models)
- Google (Gemini models)
Get an OpenAI key: https://platform.openai.com/api-keys
How do I set my API key?
Option 1: .env file (recommended)
```shell
echo "OPENAI_API_KEY=sk-your-key-here" > .env
```
Option 2: Environment variable
```shell
export OPENAI_API_KEY=sk-your-key-here
```
See the Quick Start Guide for details.
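If the agent can't authenticate, it's worth confirming the key is actually visible to your Python process. A small standard-library sketch (require_api_key is a hypothetical helper, not part of ConnectOnion):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the key from the environment, failing fast with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set - add it to .env or export it in your shell"
        )
    return key
```

Calling this once at startup turns a confusing mid-task API error into an immediate, readable failure.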
How do I create my first agent?
```python
from connectonion import Agent

agent = Agent("assistant")
agent.input("Hello!")
```
That's it! This creates an agent and gives it a task.
How do I give my agent tools?
Just pass Python functions to the tools parameter:
```python
def search(query: str) -> str:
    """Search for information"""
    return f"Results for: {query}"

agent = Agent("assistant", tools=[search])
```
The agent will automatically know when to use search().
Why isn't my agent using my tools?
Common reasons:
1. Missing docstring:
```python
# ✗ Agent doesn't know what this does
def my_tool(param):
    return result

# ✓ Clear docstring
def my_tool(param: str) -> str:
    """Description of what this tool does"""
    return result
```
2. Unclear tool name:
```python
# ✗ Unclear
def process(data):
    pass

# ✓ Descriptive
def search_customer_database(name: str):
    pass
```
3. System prompt conflicts:
```python
# ✗ Discourages tool use
system_prompt = "Answer directly without tools"

# ✓ Encourages tool use
system_prompt = "Use tools when helpful"
```
See Debug Agent Errors for more solutions.
Can I give an agent multiple tools?
Yes! Pass a list of functions:
```python
agent = Agent(
    "assistant",
    tools=[search, calculate, send_email, get_weather]
)
```
The agent will choose which tool(s) to use based on the task.
How do I customize my agent's behavior?
Use the system_prompt parameter:
```python
agent = Agent(
    "assistant",
    tools=[...],
    system_prompt="""You are a helpful assistant.
Rules:
- Always verify information before presenting it
- Use the search tool for current information
- Be concise and accurate
"""
)
```
Do agents remember previous messages?
Yes! Agents maintain conversation history automatically:
```python
agent.input("What's the capital of France?")
# Agent: "Paris"

agent.input("What's the population?")
# Agent knows "it" refers to Paris
```
To start fresh:
```python
agent = Agent("assistant")  # New agent, no history
```
How do I debug my agent?
Use auto_debug():
```python
from connectonion.decorators import xray

@xray  # Mark tool as breakpoint
def my_tool():
    pass

agent = Agent("assistant", tools=[my_tool])
agent.auto_debug()  # Enable debugging
agent.input("Task")
# Agent pauses at my_tool(), lets you inspect and modify
```
See the Interactive Debugging Guide.
What does @xray do?
@xray marks tools as debugging breakpoints:
```python
from connectonion.decorators import xray

@xray  # Agent pauses here when auto_debug() is on
def critical_operation():
    pass

# Regular tool - doesn't pause
def simple_operation():
    pass
```
Can I use @xray without auto_debug()?
Yes! Keep @xray but don't call auto_debug():
```python
@xray  # Enhanced logging, no pausing
def my_tool():
    pass

agent = Agent("assistant", tools=[my_tool])
# No auto_debug() call - tool runs with enhanced logging only
```
How do I see what my agent is doing?
Enable console output (on by default):
```python
agent = Agent("assistant", tools=[...])
agent.input("Task")
# Output shows:
# - Tool calls
# - Parameters
# - Results
# - Agent's responses
```
Or use auto_debug() for full control.
Which model should I use?
For development/testing:
```python
model="gpt-4o-mini"  # Fast, cheap, good enough
```
For production (quality matters):
```python
model="gpt-4o"  # More accurate, slower, more expensive
```
For complex reasoning:
```python
model="o1-preview"  # Best reasoning, most expensive
```
How much does it cost to run an agent?
It depends on the model and usage. Example (rough estimates):
gpt-4o-mini (recommended for most use cases):
- Input: $0.15 / 1M tokens
- Output: $0.60 / 1M tokens
- Typical task: $0.001 - $0.01
gpt-4o (for better quality):
- Input: $2.50 / 1M tokens
- Output: $10.00 / 1M tokens
- Typical task: $0.01 - $0.10
Track usage at: https://platform.openai.com/usage
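The per-task estimates above are just token counts multiplied by the per-token rates. A quick back-of-the-envelope calculation, using the gpt-4o-mini prices quoted above (rates change, so check the provider's pricing page):

```python
# gpt-4o-mini rates quoted above, converted to dollars per token.
INPUT_RATE = 0.15 / 1_000_000
OUTPUT_RATE = 0.60 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 500-token response:
print(f"${estimate_cost(2_000, 500):.4f}")  # → $0.0006
```

Multiply by your expected request volume to estimate a monthly bill.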
How can I reduce costs?
1. Use cheaper models:
```python
model="gpt-4o-mini"  # 60x cheaper than gpt-4
```
2. Shorter system prompts:
```python
# ✗ Long (expensive)
system_prompt = "[500 words of instructions]"

# ✓ Concise (cheap)
system_prompt = "Be helpful and use tools when needed."
```
3. Cache results:
```python
from functools import lru_cache

@lru_cache(maxsize=100)
def expensive_search(query: str):
    return results
```
4. Limit conversation history:
```python
# Start a new agent instead of one long conversation
agent = Agent("assistant")
```
Can I use models other than OpenAI?
Yes! ConnectOnion supports multiple providers:
Anthropic Claude:
```python
# Set ANTHROPIC_API_KEY in .env
agent = Agent("assistant", model="claude-3-5-sonnet-20241022")
```
Google Gemini:
```python
# Set GOOGLE_API_KEY in .env
agent = Agent("assistant", model="gemini-2.0-flash-exp")
```
Why is my agent slow?
Common causes:
1. Using a slow model:
```python
# Slow
model="gpt-4o"

# Fast
model="gpt-4o-mini"
```
2. Slow tools:
```python
# Add caching or optimize
@lru_cache(maxsize=100)
def slow_tool():
    pass
```
3. Long system prompts: shorter prompts mean faster responses.
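To find out whether the model or a tool is the bottleneck, you can time each tool call yourself. A minimal sketch using only the standard library (the timed decorator is a hypothetical helper, not part of ConnectOnion):

```python
import functools
import time

def timed(func):
    """Wrap a tool so each call prints how long it took."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.3f}s")
        return result
    return wrapper

@timed
def slow_tool(query: str) -> str:
    """Pretend to do expensive work."""
    time.sleep(0.1)
    return f"Results for: {query}"
```

If the tools come back fast but the agent still feels slow, the latency is coming from the model calls themselves.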
Can I run multiple agents?
Yes! Create multiple agents:
```python
researcher = Agent("researcher", tools=[search])
writer = Agent("writer", tools=[write_file])

# Researcher finds information
info = researcher.input("Research AI agents")

# Writer creates content from the research
writer.input(f"Write an article about: {info}")
```
Can agents use other agents?
Yes! Make an agent into a tool:
```python
researcher = Agent("researcher", tools=[search])

def research_tool(topic: str) -> str:
    """Research a topic using the research agent"""
    return researcher.input(f"Research {topic}")

writer = Agent("writer", tools=[research_tool, write_file])
```
How do I save and load agent state?
Agent state is in agent.current_session:
```python
import json

# Save state
state = agent.current_session
with open('agent_state.json', 'w') as f:
    json.dump(state, f)

# Load state (note: you need to recreate the agent)
with open('agent_state.json', 'r') as f:
    state = json.load(f)
agent.current_session = state
```
Can I use local models?
Not directly yet, but you can use OpenAI-compatible APIs:
```python
# Example with a local model via an OpenAI-compatible server
import openai

openai.api_base = "http://localhost:8000/v1"
agent = Agent("assistant", model="local-model")
```
Can I deploy agents to production?
Yes! See the Deploy to Production Guide.
Quick tips:
- Remove auto_debug() calls
- Use environment variables for secrets
- Add error handling
- Implement logging
- Set rate limits
See specific examples in Deploy to Production.
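As an illustration of the error-handling and logging tips above, here is a minimal sketch that wraps a tool so failures are logged and returned as readable messages instead of crashing the agent (safe_tool and flaky_search are hypothetical names, not part of ConnectOnion):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent")

def safe_tool(func):
    """Log every call and turn exceptions into error strings."""
    def wrapper(*args, **kwargs):
        logger.info("calling %s", func.__name__)
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            logger.error("%s failed: %s", func.__name__, exc)
            return f"Error in {func.__name__}: {exc}"
    return wrapper

@safe_tool
def flaky_search(query: str) -> str:
    """A tool that may raise on bad input."""
    if not query:
        raise ValueError("empty query")
    return f"Results for: {query}"
```

Returning an error string (rather than raising) lets the agent see what went wrong and decide whether to retry or change approach.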
Should I keep @xray in production?
Optional. @xray without auto_debug() just adds enhanced logging (minimal overhead).
Keep it if you want detailed logs. Remove it if you want maximum performance.
Where can I get help?
Ask in our Discord community or GitHub Discussions!