A Rust library and CLI for downloading HuggingFace models at maximum speed. Multi-connection parallel downloads, file filtering, checksum verification, retry — and a search command to find models before you download them.
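The multi-connection strategy boils down to splitting a file into contiguous byte ranges that are fetched concurrently via HTTP `Range` requests. A minimal std-only sketch of the range math, assuming nothing about this crate's internals (`split_ranges` is an illustrative name, not part of the crate's API):

```rust
/// Split a download of `total` bytes into up to `parts` contiguous
/// (start, end) byte ranges, inclusive, suitable for HTTP `Range` headers.
fn split_ranges(total: u64, parts: u64) -> Vec<(u64, u64)> {
    if total == 0 || parts == 0 {
        return Vec::new();
    }
    let parts = parts.min(total); // never more ranges than bytes
    let chunk = total / parts;
    let rem = total % parts;
    let mut ranges = Vec::with_capacity(parts as usize);
    let mut start = 0;
    for i in 0..parts {
        // Spread the remainder over the first `rem` chunks so sizes differ by at most 1.
        let len = chunk + if i < rem { 1 } else { 0 };
        ranges.push((start, start + len - 1));
        start += len;
    }
    ranges
}

fn main() {
    // A 100-byte file over 3 connections: 34 + 33 + 33 bytes.
    for (lo, hi) in split_ranges(100, 3) {
        println!("bytes={lo}-{hi}");
    }
}
```

Each range maps directly to a `Range: bytes=lo-hi` header on its own connection; the responses are written into the right offsets of the target file.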
Install the CLI:

```sh
cargo install hf-fetch-model --features cli
```

Search for models, inspect a match, and download:

```console
$ hf-fm search mistral,3B,instruct
Models matching "mistral,3B,instruct" (by downloads):
hf-fm mistralai/Ministral-3-3B-Instruct-2512 (159.7K downloads)
hf-fm mistralai/Ministral-3-3B-Instruct-2512-BF16 (62.6K downloads)
hf-fm mistralai/Ministral-3-3B-Instruct-2512-GGUF (32.7K downloads)
...

$ hf-fm search mistralai/Ministral-3-3B-Instruct-2512 --exact
Exact match:
hf-fm mistralai/Ministral-3-3B-Instruct-2512 (159.7K downloads)
License: apache-2.0
Pipeline: text-generation
Library: vllm
Languages: en, fr, es, de, it, pt, nl, zh, ja, ko, ar

$ hf-fm mistralai/Ministral-3-3B-Instruct-2512 --preset safetensors
Downloaded to: ~/.cache/huggingface/hub/models--mistralai--Ministral-3-3B.../snapshots/...
```
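A preset like `--preset safetensors` above is, conceptually, a set of filename patterns applied to the repo's file list. A minimal sketch of that idea, assuming only suffix (`*.<ext>`) and exact-name patterns; the function and pattern set are illustrative, not the crate's actual preset definition:

```rust
/// Return true if `name` matches any pattern: either an exact filename
/// or a `*.<ext>` suffix pattern.
fn matches_preset(name: &str, patterns: &[&str]) -> bool {
    patterns.iter().any(|p| match p.strip_prefix("*.") {
        Some(ext) => name.ends_with(&format!(".{ext}")),
        None => name == *p,
    })
}

fn main() {
    // Roughly the shape a safetensors-style preset might take.
    let preset = ["*.safetensors", "*.json", "tokenizer.model"];
    for f in [
        "model-00001-of-00002.safetensors",
        "pytorch_model.bin",
        "config.json",
    ] {
        println!("{f}: {}", matches_preset(f, &preset));
    }
}
```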
Library usage:

```rust
let outcome = hf_fetch_model::download(
    "google/gemma-2-2b-it".to_owned(),
).await?;
println!("Model at: {}", outcome.inner().display());
```

Filtering, progress reporting, auth, and more are available via the builder; see Configuration.
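The retry feature mentioned in the intro is typically paired with exponential backoff between attempts. A generic, std-only sketch of such a delay schedule, not the crate's actual retry implementation (the function name and defaults are assumptions):

```rust
use std::time::Duration;

/// Delays for `retries` attempts: base * 2^attempt, capped at `max`.
fn backoff_schedule(base: Duration, max: Duration, retries: u32) -> Vec<Duration> {
    (0..retries)
        .map(|i| {
            // Cap the shift so the multiplier cannot overflow u32.
            let d = base.checked_mul(1u32 << i.min(16)).unwrap_or(max);
            d.min(max)
        })
        .collect()
}

fn main() {
    for (i, d) in backoff_schedule(Duration::from_millis(250), Duration::from_secs(8), 6)
        .iter()
        .enumerate()
    {
        println!("retry {i}: wait {d:?}");
    }
}
```

In a real download loop, each failed attempt sleeps for the next delay in the schedule before reissuing the request, giving transient network errors time to clear.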
| Topic | Description |
|---|---|
| CLI Reference | All subcommands, flags, and output examples |
| Search | Comma filtering, --exact, model card metadata |
| Configuration | Builder API, presets, progress callbacks |
| Architecture | How hf-fetch-model relates to hf-hub and candle-mi |
| Diagnostics | --verbose output, tracing setup for library users |
| Changelog | Release history and migration notes |
Related:

- candle-mi — Mechanistic interpretability toolkit for transformer models
Licensed under either of Apache License, Version 2.0 or MIT License at your option.
- Exclusively developed with Claude Code (dev) and Augment Code (review)
- Git workflow managed with Fork
- All code follows CONVENTIONS.md, derived from Amphigraphic-Strict's Grit — a strict Rust subset designed to improve AI coding accuracy.