Merged
6 changes: 3 additions & 3 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -3,9 +3,9 @@

## Changes
<!-- List specific changes made in this PR -->
-
-
-
-
-
-

## Type of Change
<!-- Check all that apply -->
31 changes: 30 additions & 1 deletion README.md
@@ -144,7 +144,7 @@ autorca run \

```python
from datetime import datetime
-from autorca_core import run_rca, DataSourcesConfig
+from autorca_core import run_rca, DataSourcesConfig, AnthropicLLM

# Define the incident time window
window = (
@@ -172,6 +172,35 @@ print(f"Confidence: {result.root_cause_candidates[0].confidence:.0%}")
print(result.summary)
```
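The `:.0%` format spec in the snippet above renders a fractional confidence as a whole-number percentage. A quick standalone illustration (the `0.87` value is made up):

```python
# Python's format-spec mini-language: ".0%" multiplies the value by 100,
# rounds to zero decimal places, and appends a percent sign.
confidence = 0.87  # hypothetical confidence score
print(f"Confidence: {confidence:.0%}")  # → Confidence: 87%
```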

### With LLM Enhancement (Anthropic Claude)

```python
import os
from autorca_core import run_rca, DataSourcesConfig, AnthropicLLM

# Initialize Anthropic LLM (requires ANTHROPIC_API_KEY env var)
llm = AnthropicLLM(
api_key=os.getenv("ANTHROPIC_API_KEY"),
model="claude-3-5-sonnet-20241022",
max_tokens=2048,
)

# Run RCA with LLM enhancement
result = run_rca(
incident_window=window,
primary_symptom="API 500 errors",
data_sources=sources,
llm=llm, # Add LLM for enhanced summaries
)

# Get comprehensive AI-generated analysis
print(result.summary) # Structured RCA with executive summary, impact assessment, and remediation

# Check token usage and costs
stats = llm.get_usage_stats()
print(f"Tokens used: {stats['total_tokens']}, Cost: ${stats['total_cost_usd']:.4f}")
```
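The package also exports a `DummyLLM` (see the `autorca_core/__init__.py` diff), which suggests `run_rca`'s `llm` parameter accepts any object implementing the same interface. A minimal stub along those lines can keep tests offline; note this is a hypothetical sketch — only `get_usage_stats()` appears in the diff above, and the `complete` method name and signature are assumptions:

```python
# Hypothetical stub illustrating the duck-typed LLM interface run_rca
# appears to expect; everything except get_usage_stats is an assumption.
class StubLLM:
    """Records prompts and returns canned text, for offline tests."""

    def __init__(self, canned_response: str = "stub summary"):
        self.canned_response = canned_response
        self.prompts: list[str] = []

    def complete(self, prompt: str, max_tokens: int = 2048) -> str:
        # Capture the prompt so tests can assert on what was sent.
        self.prompts.append(prompt)
        return self.canned_response

    def get_usage_stats(self) -> dict:
        # No real API calls are made, so usage and cost stay at zero.
        return {"total_tokens": 0, "total_cost_usd": 0.0}


llm = StubLLM()
print(llm.complete("Summarize the incident"))
print(llm.get_usage_stats()["total_cost_usd"])
```

Passing such a stub in place of `AnthropicLLM` would let a test suite exercise the RCA pipeline without an `ANTHROPIC_API_KEY`, assuming the two classes share this interface.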

---

## How This Fits an Autonomous Ops Stack
3 changes: 3 additions & 0 deletions autorca_core/__init__.py
@@ -12,6 +12,7 @@
from autorca_core.model.events import Event, LogEvent, MetricPoint, Span
from autorca_core.model.graph import Service, Dependency, IncidentNode
from autorca_core.reasoning.loop import run_rca, RCARunResult
+from autorca_core.reasoning.llm import AnthropicLLM, DummyLLM
from autorca_core.logging import configure_logging, get_logger
from autorca_core.config import ThresholdConfig
from autorca_core.validation import IngestionLimits, ValidationError
@@ -26,6 +27,8 @@
"IncidentNode",
"run_rca",
"RCARunResult",
"AnthropicLLM",
"DummyLLM",
"configure_logging",
"get_logger",
"ThresholdConfig",
Expand Down