# hardware-profiling

Here are 2 public repositories matching this topic...

A reproducible benchmarking pipeline for machine learning models, focused on analyzing inference efficiency. It captures CPU utilization, MACs, memory consumption (RSS), model size, and runtime-specific resource demands, combined with hardware-aware profiling for consistent cross-system evaluation.

  • Updated Mar 9, 2026
  • Python
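Latency and memory capture of the kind this pipeline describes can be sketched with the standard library alone. The helper below is a hypothetical illustration, not the repository's actual API: it times repeated calls to a model function and reads peak Python heap use via `tracemalloc` (real pipelines would additionally record RSS, CPU utilization, and MACs through tools such as `psutil` or hardware counters).

```python
import time
import tracemalloc

def profile_inference(model_fn, inputs, warmup=2, runs=10):
    """Measure mean wall-clock latency and peak heap use of model_fn.

    Hypothetical helper for illustration; hedged sketch only.
    """
    for _ in range(warmup):            # warm caches before timing
        model_fn(inputs)
    tracemalloc.start()                # track Python heap allocations
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(inputs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"latency_s": elapsed / runs, "peak_bytes": peak}

# Toy "model": a sum of squares over a list stands in for inference.
stats = profile_inference(lambda xs: sum(x * x for x in xs), list(range(1000)))
```

Warmup runs are excluded from timing so one-time costs (imports, JIT, cache fills) do not skew the per-call average.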

Comprehensive LLM evaluation framework comparing local and cloud models with hardware-aware benchmarking. Evaluate across code generation, document analysis, and structured output using pass@k, LLM-as-Judge, and RAG metrics. Supports Ollama, Google Gemini, Anthropic, and OpenAI.

  • Updated Mar 6, 2026
  • Python
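The pass@k metric mentioned above is commonly computed with the unbiased estimator popularized by the HumanEval benchmark: given n generated samples of which c pass the tests, pass@k = 1 − C(n−c, k)/C(n, k). A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c correct), passes."""
    if n - c < k:
        # Fewer failures than draws: a correct sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 generations of which 1 is correct, a single draw succeeds half the time: `pass_at_k(2, 1, 1)` is 0.5.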
