A Rust crate for building AI agent workflows and applications. Orchestra-rs provides a powerful, type-safe framework for orchestrating production-ready applications powered by Large Language Models (LLMs).
The goal of Orchestra-rs is to be the LangChain of the Rust ecosystem. We aim to provide a composable, safe, and efficient set of tools to chain together calls to LLMs, APIs, and other data sources. By leveraging Rust's powerful type system and performance, Orchestra-rs empowers developers to build reliable and scalable AI applications and intelligent agents with confidence.
- **Type-safe LLM interactions** - Leverage Rust's type system for reliable AI applications
- **Multiple provider support** - Currently supports Google Gemini, with more providers coming
- **Flexible configuration** - Builder patterns and validation for model configurations
- **Rich message types** - Support for text, mixed content, and future tool calling
- **Comprehensive testing** - Built-in mock providers and extensive test coverage
- **Async/await support** - Built for modern async Rust applications
- **Error handling** - Comprehensive error types with context
Add Orchestra-rs to your `Cargo.toml`:

```toml
[dependencies]
orchestra-rs = { path = "." } # Will be published to crates.io soon
tokio = { version = "1.0", features = ["full"] }
```

Basic usage with a simple prompt:

```rust
use orchestra_rs::llm::LLM;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Set your API key as an environment variable
std::env::set_var("GEMINI_API_KEY", "your-api-key-here");
// Create an LLM instance with Gemini
let llm = LLM::gemini("gemini-2.5-flash");
// Simple prompt
let response = llm.prompt("Hello, how are you today?").await?;
println!("Response: {}", response.text);
Ok(())
}
```

Using a custom model configuration:

```rust
use orchestra_rs::{
llm::LLM,
providers::types::ProviderSource,
model::ModelConfig,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a custom model configuration
let config = ModelConfig::new("gemini-2.5-flash")
.with_system_instruction("You are a helpful coding assistant")
.with_temperature(0.7)?
.with_top_p(0.9)?
.with_max_tokens(1000)
.with_stop_sequence("```");
// Create LLM with custom configuration
let llm = LLM::new(ProviderSource::Gemini, "gemini-2.5-flash".to_string())
.with_custom_config(config);
let response = llm.prompt("Write a simple Rust function").await?;
println!("Response: {}", response.text);
Ok(())
}
```

Multi-turn conversations with history:

```rust
use orchestra_rs::{
llm::LLM,
messages::Message,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let llm = LLM::gemini("gemini-2.5-flash");
// Build conversation history
let history = vec![
Message::human("Hi, I'm working on a Rust project"),
Message::assistant("Great! I'd be happy to help with your Rust project. What are you working on?"),
];
// Continue the conversation
let response = llm.chat(
Message::human("I need help with error handling"),
history
).await?;
println!("Response: {}", response.text);
Ok(())
}
```

Configuration presets:

```rust
use orchestra_rs::{
llm::LLM,
providers::types::ProviderSource,
};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Conservative settings (lower temperature, more focused)
let conservative_llm = LLM::conservative(
ProviderSource::Gemini,
"gemini-2.5-flash".to_string()
);
// Creative settings (higher temperature, more diverse)
let creative_llm = LLM::creative(
ProviderSource::Gemini,
"gemini-2.5-flash".to_string()
);
// Balanced settings (moderate temperature)
let balanced_llm = LLM::balanced(
ProviderSource::Gemini,
"gemini-2.5-flash".to_string()
);
let response = conservative_llm.prompt("Explain Rust ownership").await?;
println!("Conservative response: {}", response.text);
Ok(())
}
```

Currently supported models:
- `gemini-2.5-flash-lite`
- `gemini-2.5-pro`
- `gemini-2.5-flash`
- `gemini-2.0-flash-lite`
- `gemini-2.0-flash`
- `gemini-1.5-pro`
Setup:
- Get an API key from [Google AI Studio](https://aistudio.google.com/)
- Set the environment variable: `GEMINI_API_KEY=your-api-key`
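In application code you would typically verify the key is present at startup rather than hard-coding it as in the quick-start example. Here is a standalone check using only the standard library; only the variable name `GEMINI_API_KEY` comes from the setup above, the rest is illustrative:

```rust
fn main() {
    // Fail fast at startup if the key is missing.
    match std::env::var("GEMINI_API_KEY") {
        Ok(key) => println!("GEMINI_API_KEY is set ({} chars)", key.len()),
        Err(_) => eprintln!("GEMINI_API_KEY is not set; see the setup steps above"),
    }
}
```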
Planned providers:

- OpenAI GPT models
- Anthropic Claude
- Local models via Ollama
- Azure OpenAI
Orchestra-rs is built with a modular architecture:
- Core Types: Message types, model configurations, and error handling
- Providers: Pluggable LLM provider implementations (see the sketch below)
- LLM Interface: High-level interface for interacting with any provider
- Configuration: Flexible configuration with validation and presets
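To make the provider layer concrete, here is a minimal, self-contained sketch of how a pluggable provider abstraction can be modeled in Rust. The `Provider` trait, `EchoProvider`, and the synchronous signature are illustrative assumptions for this README, not Orchestra-rs's actual API:

```rust
// Hypothetical sketch of a pluggable provider layer. The names below are
// illustrative assumptions and do not match orchestra-rs's real API.
struct Response {
    text: String,
}

trait Provider {
    // Real providers would be async and accept full message histories;
    // a synchronous &str signature keeps this sketch dependency-free.
    fn complete(&self, prompt: &str) -> Result<Response, String>;
}

// A trivial provider, standing in for Gemini, a mock, or a future backend.
struct EchoProvider;

impl Provider for EchoProvider {
    fn complete(&self, prompt: &str) -> Result<Response, String> {
        Ok(Response {
            text: format!("echo: {prompt}"),
        })
    }
}

fn main() -> Result<(), String> {
    // The high-level interface only sees `dyn Provider`, which is what
    // makes backends swappable without touching calling code.
    let provider: Box<dyn Provider> = Box::new(EchoProvider);
    let response = provider.complete("Hello")?;
    println!("{}", response.text);
    Ok(())
}
```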
Orchestra-rs provides comprehensive error handling with context:
```rust
use orchestra_rs::{error::OrchestraError, llm::LLM};
#[tokio::main]
async fn main() {
let llm = LLM::gemini("gemini-2.5-flash");
match llm.prompt("Hello").await {
Ok(response) => println!("Success: {}", response.text),
Err(OrchestraError::ApiKey { message }) => {
eprintln!("API key error: {}", message);
},
Err(OrchestraError::Provider { provider, message }) => {
eprintln!("Provider {} error: {}", provider, message);
},
Err(e) => eprintln!("Other error: {}", e),
}
}
```

Orchestra-rs includes comprehensive testing utilities:

```rust
use orchestra_rs::providers::mock::{MockProvider, MockConfig};
#[tokio::test]
async fn test_my_ai_function() {
let mock_config = MockConfig::new()
.with_responses(vec!["Mocked response 1", "Mocked response 2"]);
let provider = MockProvider::new(mock_config);
// Use the mock provider in your tests
}
```

For a detailed overview of the library's architecture, please refer to the architecture documentation.
Early Development Stage
Orchestra-rs is in active development. The core APIs are stabilizing, but may still change. This is a great time to get involved and help shape the future of the framework.
Completed:

- Core message and configuration types
- Google Gemini provider
- Comprehensive error handling
- Testing utilities and mock providers

Planned:

- Tool calling support
- Streaming responses
- Additional providers (OpenAI, Anthropic, etc.)
- Agent workflows and chains
- Memory and context management
- Plugin system
This project is licensed under the MIT License - see the LICENSE file for details.