An intelligent log analysis tool using LLMs for context-aware insights, benchmarking, and publication-ready statistical evaluation.
Setup:

```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env  # set secrets
```

Analyze a file and create a dashboard:

```bash
python -m src.cli analyze logs/app.log --chunk-size 10 --format html \
  --output "${RESULTS_ROOT:-results}/reports/analysis_results.html"
```

Run a benchmarking comparison and append stats to the manuscript:
```bash
python -m src.cli benchmark path/to/logs.txt path/to/ground_truth.json \
  --out-prefix "${RESULTS_ROOT:-results}/benchmarks/bench" \
  --model o3-mini --manuscript docs/results/manuscript.md
```

Provision EC2, stream CloudWatch logs, analyze, and save a live-updated dashboard:
```bash
python -m src.cli experiment --region us-east-1 --key-name YOUR_KEYPAIR \
  --duration 300 --chunk-size 5 --out-root "${RESULTS_ROOT:-results}/experiments"
```

Publish figures and stats to the manuscript:
```bash
python -m src.cli publish results/experiments/run_YYYYMMDD_HHMMSS --manuscript docs/results/manuscript.md
python -m src.cli publish_stats --results publication_ready_results.json --manuscript docs/results/manuscript.md
```

One-click statistical analysis (synthetic, for paper numbers):

```bash
python quick_statistical_analysis.py
```

See docs/REPO_STRUCTURE.md for details. Generated artifacts go under `results/` (configurable via `RESULTS_ROOT`).
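Several commands above default the output root via shell parameter expansion: `${RESULTS_ROOT:-results}` expands to the value of `RESULTS_ROOT` when it is set and non-empty, and to the literal `results` otherwise. A quick illustration:

```shell
# ${VAR:-default} falls back to "default" when VAR is unset or empty.
unset RESULTS_ROOT
echo "${RESULTS_ROOT:-results}/reports"   # prints: results/reports

export RESULTS_ROOT=/tmp/out
echo "${RESULTS_ROOT:-results}/reports"   # prints: /tmp/out/reports
```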
Environment variables:

- `OPENAI_API_KEY` (required for LLM calls)
- `ELASTIC_API_KEY`, `SPLUNK_USERNAME`, `SPLUNK_PASSWORD` (optional connectors)
- `RESULTS_ROOT` (default: `results`)
- `AWS_REGION` (default: `us-east-1`)
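These variables are typically placed in the `.env` file created from `.env.example`. A minimal sketch with placeholder values (the keys follow the list above; the values shown are illustrative, not real credentials):

```shell
# .env (placeholders; only OPENAI_API_KEY is required)
OPENAI_API_KEY=sk-your-key-here
# Optional connectors; leave blank to disable
ELASTIC_API_KEY=
SPLUNK_USERNAME=
SPLUNK_PASSWORD=
# Defaults shown explicitly
RESULTS_ROOT=results
AWS_REGION=us-east-1
```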
Makefile shortcuts:

```bash
make install
make experiment KEY_NAME=your-keypair AWS_REGION=us-east-1
make benchmark LOG_FILE=path/to/logs.txt GROUND_TRUTH=path/to/gt.json
make publish EXP_DIR=results/experiments/run_YYYYMMDD_HHMMSS
make publish-stats
```

- All outputs are parameterized to land under `results/*`.
- The manuscript lives at `docs/results/manuscript.md`; figures and stats can be appended via the CLI.
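Mechanically, "appending stats to the manuscript" amounts to adding a new markdown section at the end of the file, which `publish_stats` automates. A rough by-hand sketch (the stats file name, heading text, and sample value are illustrative, not the CLI's actual output):

```shell
# Stand-in stats artifact; real runs produce files under results/.
echo "accuracy: 0.92" > bench_stats.md

# Append a new section to the manuscript (>> creates the file if absent).
{
  printf '\n## Benchmark statistics\n\n'
  cat bench_stats.md
} >> manuscript.md
```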