Code scanner to check for issues in prompts and LLM calls
KAI Data Center Builder
La Perf is a framework for AI performance benchmarking, covering LLMs, VLMs, and embeddings, with power-metrics collection.
Building an AI team to play Codenames using top Large Language Models (LLMs), evaluating their performance, and pitting them against each other. Explore their strategies and capabilities in this interactive competition!
Arbitrary Numbers
A functionally operational, mathematically unhinged system for achieving 10× effective memory amplification on Apple Silicon using quantized fractal compression, complex-plane KV decomposition, and Euler-aligned swap geometry.
Test AI provider latency (TTFB, TTFT, TPS) in your CI/CD pipeline. Benchmark OpenAI, Anthropic, Google, and more.
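The listed tools don't share a single methodology, but as a rough sketch of what measuring TTFT (time to first token) and TPS (tokens per second) involves, here is a minimal Python example against the OpenAI streaming API. The model name is a placeholder, and the chunk-count approximation of token throughput is an assumption of this sketch, not any listed tool's actual implementation:

```python
import time
from openai import OpenAI  # assumes the official openai>=1.0 SDK and OPENAI_API_KEY set

client = OpenAI()

def measure_streaming_latency(prompt: str, model: str = "gpt-4o-mini"):
    """Measure TTFT and approximate TPS for one streamed chat completion."""
    start = time.perf_counter()
    first_token_at = None
    chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first content token arrives
            chunks += 1
    end = time.perf_counter()
    ttft = (first_token_at or end) - start
    # Approximate TPS by counting streamed chunks; exact token counts would
    # need the provider's usage stats or a tokenizer.
    tps = chunks / (end - first_token_at) if first_token_at and end > first_token_at else 0.0
    return ttft, tps

if __name__ == "__main__":
    ttft, tps = measure_streaming_latency("Explain TTFT in one sentence.")
    print(f"TTFT: {ttft:.3f}s  approx TPS: {tps:.1f}")
```

TTFB (time to first byte) is measured the same way at the HTTP layer, timing the first response byte rather than the first content token.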
AI Performance Engineering Cheatsheet: From Cloud to Edge.
A streamlined, easy-to-use AI performance evaluation and summary template with a modern HTML UI, including an accurate percentage chart, comparisons with other models, precision, recall, F1-score, and a confusion matrix. Lets you create the result chart within three minutes.
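For reference, the metrics this template charts can be computed in a few lines with scikit-learn. The labels below are made up purely for illustration; the template itself is plain HTML and does not depend on this code:

```python
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

# Hypothetical ground-truth labels and model predictions for a binary task.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))       # TP / (TP + FN)
print("f1:       ", f1_score(y_true, y_pred))           # harmonic mean of precision and recall
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))  # rows: true class, cols: predicted
```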
Speedtest for AI. Test latency to every major AI provider from your terminal.