Experimental — This is an early release. APIs may change between versions. We welcome feedback and contributions as we iterate toward production readiness.
Make AI a software engineering discipline.
LLMs are untrusted. They're stochastic, may be trained on poisoned data, and change under the hood without notice. The more tokens they produce, the further they drift.
Current orchestration is risky. Most agent frameworks dump instructions and data together in the context window, then let the LLM loop freely — creating injection risks and compounding errors.
OpenSymbolicAI separates concerns:
| Problem | How We Solve It |
|---|---|
| Data influences planning unpredictably | Planning is isolated. LLM sees only the query and primitive signatures — not your data |
| LLM can make unplanned tool calls | Execution is deterministic. LLM plans, then C# executes without the LLM in the loop |
| Prompt injection and data exfiltration | Symbolic Firewall. LLM operates on variable names, not raw content. Data stays in application memory, never tokenized |
| Side effects are hidden | Mutations are explicit. ReadOnly = false primitives are clearly marked |
| Outputs are unpredictable | Outputs are typed. C# records and strong typing guarantee structured, validated results |
| Long contexts cause drift | Context is minimal. Only what's needed goes to the LLM — faster, cheaper, more reliable |
| Model changes break prompts | Model-agnostic. Constrained inputs/outputs minimize variability across models |
| Failures lose progress | Checkpoint system. Pause/resume execution with full state serialization |
| Hard to debug what happened | Full tracing. Step-by-step execution records with timing, argument capture, namespace snapshots |
| LLM generates dangerous code | Roslyn sandbox. Default-deny allowlist validates every AST node before execution. Loop guards prevent runaway iteration |
Thesis: Stop prompting. Start programming.
core-dotnet is the .NET runtime for OpenSymbolicAI: compile-time safe, source-generated, high-performance execution of LLM-planned computations using Roslyn scripting.
Core concepts:
- Primitives ([Primitive]) — Atomic operations your agent can execute
- Decompositions ([Decomposition]) — Examples showing how to break complex intents into primitive sequences
- Evaluators ([Evaluator]) — Goal evaluation methods for iterative agents
Blueprints (pick the one that fits your problem):
| Blueprint | When to Use |
|---|---|
| PlanExecute | Single-turn tasks with a fixed sequence of primitives |
| DesignExecute | Tasks needing loops and conditionals (dynamic-length data) |
| GoalSeeking<T> | Iterative problems where progress is evaluated each step |
Related: core-py — Python runtime · cli-py — Interactive TUI
dotnet add package OpenSymbolicAI

Or clone and build from source:
git clone https://github.com/OpenSymbolicAI/core-dotnet.git
cd core-dotnet
dotnet build

cp .env.example .env
# Add your API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY, etc.)

# Default: Ollama with qwen3:8b, interactive calculator REPL
dotnet run --project examples/OpenSymbolicAI.Examples
# Specify provider
dotnet run --project examples/OpenSymbolicAI.Examples -- groq
dotnet run --project examples/OpenSymbolicAI.Examples -- openai gpt-4o
dotnet run --project examples/OpenSymbolicAI.Examples -- anthropic claude-sonnet-4-6-20250514
# Run a specific example with verbose output
dotnet run --project examples/OpenSymbolicAI.Examples -- optimizer groq -v
dotnet run --project examples/OpenSymbolicAI.Examples -- recipe anthropic -v
dotnet run --project examples/OpenSymbolicAI.Examples -- cart ollama -v

using OpenSymbolicAI;
public partial class ScientificCalculator : PlanExecute
{
public ScientificCalculator(ILlm llm) : base(llm) { }
[Primitive(ReadOnly = true)]
public double Add(double a, double b) => a + b;
[Primitive(ReadOnly = true)]
public double Multiply(double a, double b) => a * b;
[Primitive(ReadOnly = true)]
public double Sqrt(double x) => Math.Sqrt(x);
[Decomposition(
Intent = "What is the hypotenuse of a 3-4 triangle?",
ExpandedIntent = "Square each side, add, then take square root")]
public double ExampleHypotenuse()
{
var a2 = Multiply(3, 3);
var b2 = Multiply(4, 4);
var sum = Add(a2, b2);
return Sqrt(sum);
}
}
// Usage
var llm = new OpenAiLlm(httpClient, config);
var calc = new ScientificCalculator(llm);
var result = await calc.RunAsync("What is the hypotenuse of a 5-12 triangle?");
Console.WriteLine(result.Value); // 13

The LLM learns from decomposition examples to plan new queries using your primitives. A source generator emits the glue code at compile time — no reflection at runtime. When primitives use custom types (records, classes), their field definitions are automatically included in the prompt via type graph closure.
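In practice, type graph closure means a primitive can return a domain record and the planner still learns its shape. A minimal sketch (the record, class, and field names here are illustrative, not taken from the library's shipped examples):

```csharp
// Illustrative sketch: a primitive returning a custom record.
// Via type graph closure, NutritionInfo's fields (Calories, ProteinGrams)
// would appear in the planning prompt alongside the primitive's signature.
public record NutritionInfo(double Calories, double ProteinGrams);

public partial class NutritionLookup : PlanExecute
{
    private readonly Dictionary<string, NutritionInfo> _table = new()
    {
        ["oats"] = new NutritionInfo(389, 16.9),
    };

    public NutritionLookup(ILlm llm) : base(llm) { }

    [Primitive(ReadOnly = true)]
    public NutritionInfo LookupNutrition(string ingredient)
        => _table[ingredient.ToLower()];
}
```

Without the closure, the LLM would see only the return type's name; with it, the planner can reference individual fields when composing steps.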
Security: All LLM-generated code is validated against a default-deny allowlist (via Roslyn AST analysis) before execution. File I/O, networking, reflection, process spawning, and other dangerous operations are blocked at the syntax level — not by string matching.
When tasks involve dynamic-length data, you need loops and conditionals. DesignExecute extends PlanExecute with control flow and loop guards.
public partial class ShoppingCart : DesignExecute
{
public ShoppingCart(ILlm llm) : base(llm) { }
[Primitive(ReadOnly = true)]
public double LookupPrice(string item) => _catalog[item.ToLower()];
[Primitive(ReadOnly = true)]
public double ApplyDiscount(double price, double percent) =>
Math.Round(price * (1 - percent / 100), 2);
[Decomposition(
Intent = "5 apples and 1 laptop shipped to California",
ExpandedIntent = "Loop over items, apply bulk discounts for 3+, add state tax")]
public double ExampleCart()
{
var items = new[] { ("apples", 5), ("laptop", 1) };
var subtotal = 0.0;
foreach (var (name, qty) in items)
{
var price = LookupPrice(name);
var line = Multiply(price, qty);
if (qty >= 3)
line = ApplyDiscount(line, 10.0);
subtotal = Add(subtotal, line);
}
var taxRate = LookupTaxRate("CA");
return AddTax(subtotal, taxRate);
}
}

For iterative problems, GoalSeeking<T> runs a plan-execute-evaluate loop until the goal is achieved.
public partial class FunctionOptimizer : GoalSeeking<OptimizationContext>
{
public FunctionOptimizer(ILlm llm) : base(llm) { }
[Primitive(ReadOnly = true)]
public double Evaluate(double x) => Math.Round(TargetFunction(x), 6);
[Evaluator]
public GoalEvaluation CheckConverged(string goal, GoalContext context) =>
new() { GoalAchieved = context.Converged };
[Decomposition(
Intent = "Explore the function across the range",
ExpandedIntent = "Sample spread-out points to understand the shape")]
public double ExampleExplore()
{
var v1 = Evaluate(3.0);
var v2 = Evaluate(8.0);
var v3 = Evaluate(14.0);
return v3;
}
}

Each iteration: plan → execute → introspect → evaluate. The LLM never sees raw execution results — only structured GoalContext.
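Invoking a GoalSeeking agent looks much like the other blueprints. A sketch assuming GoalSeeking<T> exposes the same RunAsync entry point shown for PlanExecute above (the result shape is an assumption):

```csharp
// Assumed usage — mirrors the earlier PlanExecute example; the exact
// result type returned by GoalSeeking<T> may differ in the real API.
var llm = new OpenAiLlm(httpClient, config);
var optimizer = new FunctionOptimizer(llm);

// The loop plans, executes, introspects, and evaluates until
// CheckConverged reports GoalAchieved = true.
var result = await optimizer.RunAsync("Minimize the function over the sampled range");
Console.WriteLine(result.Value);
```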
The [Primitive] attribute accepts two named properties that control how the engine treats each operation:
| Property | Type | Default | Purpose |
|---|---|---|---|
| ReadOnly | bool | false | Whether the primitive only reads state. Mutating primitives (ReadOnly = false) are flagged in prompts and can trigger approval workflows |
| Deterministic | bool | true | Whether the primitive produces the same output for the same input. Non-deterministic primitives are excluded from execution caching |
Most primitives — math, lookups, string formatting — are deterministic: same inputs always produce the same output. The engine defaults to Deterministic = true.
Set Deterministic = false when a primitive's output can vary across calls, even with identical arguments:
// Deterministic (default) — pure computation, safe to cache
[Primitive(ReadOnly = true)]
public double Add(double a, double b) => a + b;
// Non-deterministic — calls an external LLM, result varies
[Primitive(ReadOnly = true, Deterministic = false)]
public async Task<string> Summarize(string text)
=> await _llm.CompleteAsync($"Summarize: {text}");
// Non-deterministic — depends on external state
[Primitive(ReadOnly = true, Deterministic = false)]
public double GetCurrentPrice(string ticker)
=> _marketFeed.GetQuote(ticker);
// Mutating AND non-deterministic — external API with side effects
[Primitive(ReadOnly = false, Deterministic = false)]
public async Task<string> SendEmail(string to, string body)
=> await _emailService.SendAsync(to, body);

When it matters:
- Caching — The CachedLlm layer and checkpoint system use determinism metadata to decide what can be safely replayed
- Prompt generation — Non-deterministic primitives are annotated in the LLM prompt so the planner understands which calls may produce different results on re-execution
- Debugging — Traces mark non-deterministic steps, making it clear which results might differ on replay
Rule of thumb: If you'd be surprised that re-running the primitive with the same arguments gave a different answer, leave the default (true). If the result depends on time, randomness, external services, or LLM inference, set Deterministic = false.
Every run produces a complete ExecutionTrace with step-by-step records:
var result = await agent.RunAsync("What is 3 * 4 + 5?");
Console.WriteLine($"Plan: {result.Trace.PlanDuration.TotalMilliseconds}ms");
Console.WriteLine($"Tokens: {result.Trace.Tokens.TotalTokens}");
foreach (var step in result.Trace.Steps)
{
var status = step.Success ? "OK" : "FAIL";
Console.WriteLine($" [{status}] {step.Statement} => {step.ResultValue}");
}

Ollama, OpenAI, Anthropic, Groq, Fireworks — or add your own via the ILlm interface.
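To plug in a provider the library doesn't ship, implement ILlm. The interface's exact members aren't listed here, so this stub assumes a single completion method — an assumption to verify against src/OpenSymbolicAI/Llm before copying. A canned-response implementation like this is mainly useful for offline tests:

```csharp
// Hypothetical stub — assumes ILlm exposes a CompleteAsync(prompt) method.
// Check the real interface in src/OpenSymbolicAI/Llm before relying on this.
public sealed class FixedResponseLlm : ILlm
{
    private readonly string _response;

    public FixedResponseLlm(string response) => _response = response;

    public Task<string> CompleteAsync(string prompt, CancellationToken ct = default)
        => Task.FromResult(_response); // always returns the canned completion
}
```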
services.AddOpenSymbolicAI(ai =>
{
ai.UseLlm(new LlmConfig
{
Provider = LlmProvider.Anthropic,
Model = "claude-sonnet-4-6-20250514",
ApiKey = Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY"),
});
ai.UseCheckpointStore<InMemoryCheckpointStore>();
ai.UseObservability(obs => obs.AddFileTransport("traces.jsonl"));
});

src/
├── OpenSymbolicAI/ # Main library
│ ├── Blueprints/ # PlanExecute, DesignExecute, GoalSeeking
│ ├── Llm/ # ILlm interface, providers, factory
│ ├── Execution/ # Roslyn executor, plan validation, loop guards
│ ├── Models/ # ExecutionTrace, ExecutionStep, TokenUsage, etc.
│ ├── Observability/ # Tracer, transports (in-memory, file, HTTP)
│ ├── Checkpoint/ # Pause/resume execution state
│ ├── Attributes/ # [Primitive], [Decomposition], [Evaluator]
│ ├── DependencyInjection/ # IServiceCollection extensions
│ └── Exceptions/ # Structured exception hierarchy
├── OpenSymbolicAI.Generators/ # Roslyn source generator (compile-time codegen)
examples/
├── OpenSymbolicAI.Examples/
│ ├── Calculator/ # Scientific calculator (PlanExecute)
│ ├── RecipeBook/ # Nutrition calculator (DesignExecute)
│ ├── ShoppingCart/ # Cart with tax & discounts (DesignExecute)
│ └── FunctionOptimizer/ # Black-box optimization (GoalSeeking)
tests/
└── OpenSymbolicAI.Tests/ # Unit + E2E tests
dotnet build # build
dotnet test # run tests (non-LLM tests run without API keys)
dotnet pack src/OpenSymbolicAI -c Release # create NuGet package

This project is experimental. We're actively iterating on the API surface, execution model, and provider support. Expect breaking changes between minor versions until we reach 1.0.
What's stable:
- Core blueprint hierarchy (PlanExecute → DesignExecute → GoalSeeking)
- Primitive/Decomposition/Evaluator attribute model
- Execution tracing
What may change:
- DI registration API
- Observability transport interface
- Checkpoint serialization format
- Provider-specific configuration
See CONTRIBUTING.md for guidelines.
MIT