A curated collection of pre-compiled Python wheels for difficult-to-install AI/ML libraries on Windows.
Report a Broken Link · Request a New Wheel
This repository was created to address a common pain point for AI enthusiasts and developers on the Windows platform: building complex Python packages from source. Libraries like flash-attention and xformers are essential for high-performance AI tasks but often lack official pre-built wheels for Windows, forcing users into a complicated and error-prone compilation process.
The goal here is to provide a centralized, up-to-date collection of direct links to pre-compiled .whl files for these libraries, primarily for the ComfyUI community and other PyTorch users on Windows. This saves you time and lets you focus on what's important: creating amazing things with AI.
---

Beyond the code, I believe in the power of community and continuous learning. You are invited to join:

- TokenDiff AI News: AI for every home, creativity for every mind!
- TokenDiff Community Hub: questions, help, and thoughtful discussion.

---
Follow these simple steps to use the wheels from this repository.
- Python for Windows: Ensure you have a compatible Python version installed (PyTorch currently supports Python 3.9 - 3.14 on Windows). You can get it from the official Python website.
To install a wheel, use pip with the direct URL to the .whl file. Make sure to enclose the URL in quotes.
```shell
# Example of installing a specific flash-attention wheel
pip install "https://huggingface.co/lldacing/flash-attention-windows-wheel/resolve/main/flash_attn-2.7.4.post1+cu128torch2.7.0cxx11abiFALSE-cp312-cp312-win_amd64.whl"
```

Tip: Find the package you need in the Available Wheels section below, locate the row that matches your environment (Python, PyTorch, and CUDA versions), and copy its link into the pip install command. Note that Hugging Face links must use `/resolve/` rather than `/blob/` in the URL, or pip will download an HTML page instead of the wheel.
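The wheel filename itself encodes the environment it targets: the local version tag names the CUDA and PyTorch builds (e.g. cu128torch2.7.0), and cp312 is the CPython 3.12 tag. As a sketch of how to decode these tags, here is a small helper; the `parse_wheel_name` function and its regex are illustrative assumptions based on the naming convention used by these builds, not part of any tool in this repo:

```python
import re

def parse_wheel_name(filename):
    """Extract (package version, CUDA, torch, python tag) from a wheel
    filename following the naming convention shown in the example above."""
    m = re.match(
        r"(?P<name>\w+)-(?P<version>[\w.]+)"        # package and its version
        r"\+cu(?P<cuda>\d+)torch(?P<torch>[\d.]+)\w*"  # local tag: CUDA + torch build
        r"-(?P<py>cp\d+)-cp\d+-win_amd64\.whl",        # CPython tag and platform
        filename,
    )
    return m.groupdict() if m else None

info = parse_wheel_name(
    "flash_attn-2.7.4.post1+cu128torch2.7.0cxx11abiFALSE-cp312-cp312-win_amd64.whl"
)
# info["cuda"] == "128" means CUDA 12.8; info["py"] == "cp312" means Python 3.12
```

Comparing these fields against your own Python, PyTorch, and CUDA versions tells you whether a given wheel will install cleanly.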
Here is the list of tracked packages.
The foundation of everything. Install this first from the official source.
- Official Install Page: https://pytorch.org/get-started/locally/
For convenience, here are direct installation commands for specific versions on Windows with an NVIDIA GPU. For other configurations (CPU, macOS, ROCm), please use the official install page.
This is the recommended version for most users.
| CUDA Version | Pip Install Command |
|---|---|
| CUDA 13.0 | pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130 |
| CUDA 12.8 | pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126 |
Previous Stable Versions: 2.8.0 and 2.7.1

PyTorch 2.8.0
| CUDA Version | Pip Install Command |
|---|---|
| CUDA 12.9 | pip install "torch>=2.8.0.dev,<2.9.0" torchvision --index-url https://download.pytorch.org/whl/cu129 |
| CUDA 12.8 | pip install "torch>=2.8.0.dev,<2.9.0" torchvision --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install "torch>=2.8.0.dev,<2.9.0" torchvision --index-url https://download.pytorch.org/whl/cu126 |
PyTorch 2.7.1

| CUDA Version | Pip Install Command |
|---|---|
| CUDA 12.8 | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu126 |
| CUDA 11.8 | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu118 |
| CPU only | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cpu |
Use these for access to the latest features, but expect potential instability.
PyTorch 2.10 (Nightly)
| CUDA Version | Pip Install Command |
|---|---|
| CUDA 13.0 | pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu130 |
| CUDA 12.8 | pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128 |
| CUDA 12.6 | pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu126 |
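All of the commands above follow a single URL pattern: the channel (stable or nightly) plus a `cuNNN` or `cpu` suffix. If you script your environment setup, a tiny helper can derive the index URL; this function is a hypothetical convenience, not an official PyTorch API:

```python
def torch_index_url(cuda=None, nightly=False):
    """Build a download.pytorch.org index URL from a CUDA version string
    like "12.8" (-> cu128), or the CPU index when cuda is None."""
    suffix = "cpu" if cuda is None else "cu" + cuda.replace(".", "")
    channel = "nightly/" if nightly else ""
    return f"https://download.pytorch.org/whl/{channel}{suffix}"

# torch_index_url("12.8") -> "https://download.pytorch.org/whl/cu128"
```

The returned URL is what you pass to pip's `--index-url` flag, exactly as in the tables above.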
---
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 2.8.0a0 | 2.10.0 | 3.13 | 13.0 | Link |
| 2.8.0 | 2.9.0 | N/A | 12.8 | Link |
---
High-performance attention implementation.
- Official Repo: Dao-AILab/flash-attention
- Pre-built Sources: lldacing's HF, Wildminder's HF, mjun0812 GitHub
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | CXX11 ABI | Download Link |
|---|---|---|---|---|---|
| 2.8.3 | 2.10.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.13 | 12.8 | ✓ | Link |
| 2.8.3 | 2.9.1 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.1 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.1 | 3.13 | 12.8 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.13 | 12.9 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.3 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.2 | 2.9.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.2 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.2 | 2.8.0 | 3.11 | 12.8 | ✓ | Link |
| 2.8.2 | 2.8.0 | 3.10 | 12.8 | ✓ | Link |
| 2.8.2 | 2.7.0 | 3.12 | 12.8 | ✗ | Link |
| 2.8.2 | 2.7.0 | 3.11 | 12.8 | ✗ | Link |
| 2.8.2 | 2.7.0 | 3.10 | 12.8 | ✗ | Link |
| 2.8.1 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.0.post2 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.7.4.post1 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.7.4.post1 | 2.8.0 | 3.10 | 12.8 | ✓ | Link |
| 2.7.4.post1 | 2.7.0 | 3.12 | 12.8 | ✗ | Link |
| 2.7.4.post1 | 2.7.0 | 3.11 | 12.8 | ✗ | Link |
| 2.7.4.post1 | 2.7.0 | 3.10 | 12.8 | ✗ | Link |
| 2.7.4 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.7.4 | 2.8.0 | 3.11 | 12.8 | ✓ | Link |
| 2.7.4 | 2.8.0 | 3.10 | 12.8 | ✓ | Link |
| 2.7.4 | 2.7.0 | 3.12 | 12.8 | ✗ | Link |
| 2.7.4 | 2.7.0 | 3.11 | 12.8 | ✗ | Link |
| 2.7.4 | 2.7.0 | 3.10 | 12.8 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.12 | 12.6 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.11 | 12.6 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.10 | 12.6 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.12 | 12.4 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.11 | 12.4 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.10 | 12.4 | ✗ | Link |
---
Another library for memory-efficient attention and other optimizations.
- Official Repo: facebookresearch/xformers
- PyTorch Pre-built Index: https://download.pytorch.org/whl/xformers/
Note: PyTorch provides official pre-built wheels for xformers, so `pip install xformers` often works if PyTorch was installed correctly. If that fails, find your matching wheel at the index link above.
ABI3 version, any Python 3.9-3.12
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 0.0.33 | 2.10 | 3.9 | 13.0 | Link |
| 0.0.33 | 2.9 | 3.9 | 13.0 | Link |
| 0.0.32.post2 | 2.8.0 | 3.9 | 12.9 | Link |
| 0.0.32.post2 | 2.8.0 | 3.9 | 12.8 | Link |
| 0.0.32.post2 | 2.8.0 | 3.9 | 12.6 | Link |
---
- Official Repo: thu-ml/SageAttention
- Pre-built Sources: woct0rdho's Releases, Wildminder's HF
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 2.1.1 | 2.8.0 | 3.12 | 12.8 | Link |
| 2.1.1 | 2.7.0 | 3.10 | 12.8 | Link |
| 2.1.1 | 2.6.0 | 3.13 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.12 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.12 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.11 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.10 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.9 | 12.6 | Link |
| 2.1.1 | 2.5.1 | 3.12 | 12.4 | Link |
| 2.1.1 | 2.5.1 | 3.11 | 12.4 | Link |
| 2.1.1 | 2.5.1 | 3.10 | 12.4 | Link |
| 2.1.1 | 2.5.1 | 3.9 | 12.4 | Link |
---
Note: Only supports CUDA >= 12.8, therefore PyTorch >= 2.7.
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 2.2.0.post4 | ≥ 2.9.0 | 3.9 | 13.0 | Link |
| 2.2.0.post4 | ≥ 2.9.0 | 3.9 | 12.8 | Link |
| 2.2.0.post3 | 2.10.0 | 3.13 | 13.0 | Link |
| 2.2.0.post3 | 2.10.0 | 3.12 | 13.0 | Link |
| 2.2.0.post3 | 2.10.0 | 3.13 | 12.8 | Link |
| 2.2.0.post3 | 2.10.0 | 3.12 | 12.8 | Link |
| 2.2.0.post3 | 2.9.0 | 3.13 | 13.0 | Link |
| 2.2.0.post3 | 2.9.0 | 3.13 | 12.9 | Link |
| 2.2.0.post3 | 2.9.0 | ≥ 3.9 | 12.9 | Link |
| 2.2.0.post3 | 2.9.0 | 3.13 | 12.8 | Link |
| 2.2.0.post3 | 2.9.0 | ≥ 3.9 | 12.8 | Link |
| 2.2.0.post3 | 2.8.0 | 3.13 | 12.9 | Link |
| 2.2.0.post3 | 2.8.0 | ≥ 3.9 | 12.9 | Link |
| 2.2.0.post3 | 2.8.0 | 3.13 | 12.8 | Link |
| 2.2.0.post3 | 2.8.0 | ≥ 3.9 | 12.8 | Link |
| 2.2.0.post3 | 2.7.1 | ≥ 3.9 | 12.8 | Link |
| 2.2.0.post3 | 2.6.0 | ≥ 3.9 | 12.6 | Link |
| 2.2.0.post3 | 2.5.1 | ≥ 3.9 | 12.4 | Link |
| 2.2.0.post2 | 2.9.0 | ≥ 3.9 | 12.8 | Link |
| 2.2.0.post2 | 2.8.0 | ≥ 3.9 | 12.8 | Link |
| 2.2.0.post2 | 2.7.1 | ≥ 3.9 | 12.8 | Link |
| 2.2.0.post2 | 2.6.0 | ≥ 3.9 | 12.6 | Link |
| 2.2.0.post2 | 2.5.1 | ≥ 3.9 | 12.4 | Link |
| 2.2.0 | 2.8.0 | 3.13 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.12 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.11 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.10 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.9 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.13 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.12 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.11 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.10 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.9 | 12.8 | Link |
---
- Official Repo: mit-han-lab/nunchaku
| Package Version | PyTorch Ver | Python Ver | Download Link |
|---|---|---|---|
| 1.0.2 | 2.10 | 3.13 | Link |
| 1.0.2 | 2.10 | 3.12 | Link |
| 1.0.2 | 2.10 | 3.11 | Link |
| 1.0.2 | 2.10 | 3.10 | Link |
| 1.0.2 | 2.9 | 3.13 | Link |
| 1.0.2 | 2.9 | 3.12 | Link |
| 1.0.2 | 2.9 | 3.11 | Link |
| 1.0.2 | 2.9 | 3.10 | Link |
| 1.0.2 | 2.8 | 3.13 | Link |
| 1.0.2 | 2.8 | 3.12 | Link |
| 1.0.2 | 2.8 | 3.11 | Link |
| 1.0.2 | 2.8 | 3.10 | Link |
| 1.0.2 | 2.7 | 3.13 | Link |
| 1.0.2 | 2.7 | 3.12 | Link |
| 1.0.2 | 2.7 | 3.11 | Link |
| 1.0.2 | 2.7 | 3.10 | Link |
| 1.0.1 | 2.10 | 3.13 | Link |
| 1.0.1 | 2.10 | 3.12 | Link |
| 1.0.1 | 2.10 | 3.11 | Link |
| 1.0.1 | 2.10 | 3.10 | Link |
| 1.0.1 | 2.9 | 3.13 | Link |
| 1.0.1 | 2.9 | 3.13 | Link |
| 1.0.1 | 2.9 | 3.12 | Link |
| 1.0.1 | 2.9 | 3.12 | Link |
| 1.0.1 | 2.8 | 3.13 | Link |
| 1.0.1 | 2.8 | 3.13 | Link |
| 1.0.1 | 2.8 | 3.12 | Link |
| 1.0.1 | 2.8 | 3.11 | Link |
| 1.0.1 | 2.8 | 3.10 | Link |
| 1.0.1 | 2.7 | 3.13 | Link |
| 1.0.1 | 2.7 | 3.12 | Link |
| 1.0.1 | 2.7 | 3.11 | Link |
| 1.0.1 | 2.7 | 3.10 | Link |
| 1.0.1 | 2.6 | 3.13 | Link |
| 1.0.1 | 2.6 | 3.12 | Link |
| 1.0.1 | 2.6 | 3.11 | Link |
| 1.0.1 | 2.6 | 3.10 | Link |
| 1.0.1 | 2.5 | 3.12 | Link |
| 1.0.1 | 2.5 | 3.11 | Link |
| 1.0.1 | 2.5 | 3.10 | Link |
| 1.0.0 | 2.9 | 3.13 | Link |
| 1.0.0 | 2.9 | 3.12 | Link |
| 1.0.0 | 2.9 | 3.11 | Link |
| 1.0.0 | 2.9 | 3.10 | Link |
| 1.0.0 | 2.8 | 3.13 | Link |
| 1.0.0 | 2.8 | 3.12 | Link |
| 1.0.0 | 2.8 | 3.11 | Link |
| 1.0.0 | 2.8 | 3.10 | Link |
| 1.0.0 | 2.7 | 3.13 | Link |
| 1.0.0 | 2.7 | 3.12 | Link |
| 1.0.0 | 2.7 | 3.11 | Link |
| 1.0.0 | 2.7 | 3.10 | Link |
| 1.0.0 | 2.6 | 3.13 | Link |
| 1.0.0 | 2.6 | 3.12 | Link |
| 1.0.0 | 2.6 | 3.11 | Link |
| 1.0.0 | 2.6 | 3.10 | Link |
| 1.0.0 | 2.5 | 3.12 | Link |
| 1.0.0 | 2.5 | 3.11 | Link |
| 1.0.0 | 2.5 | 3.10 | Link |
| 0.3.2 | 2.9 | 3.12 | Link |
| 0.3.2 | 2.8 | 3.12 | Link |
| 0.3.2 | 2.8 | 3.11 | Link |
| 0.3.2 | 2.8 | 3.10 | Link |
| 0.3.2 | 2.7 | 3.12 | Link |
| 0.3.2 | 2.7 | 3.11 | Link |
| 0.3.2 | 2.7 | 3.10 | Link |
| 0.3.2 | 2.6 | 3.12 | Link |
| 0.3.2 | 2.6 | 3.11 | Link |
| 0.3.2 | 2.6 | 3.10 | Link |
| 0.3.2 | 2.5 | 3.12 | Link |
| 0.3.2 | 2.5 | 3.11 | Link |
| 0.3.2 | 2.5 | 3.10 | Link |
---
Neighborhood Attention Transformer.
- Official Repo: SHI-Labs/NATTEN
- Pre-built Source: lldacing's HF
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 0.17.5 | 2.7.0 | 3.12 | 12.8 | Link |
| 0.17.5 | 2.7.0 | 3.11 | 12.8 | Link |
| 0.17.5 | 2.7.0 | 3.10 | 12.8 | Link |
| 0.17.5 | 2.6.0 | 3.12 | 12.6 | Link |
| 0.17.5 | 2.6.0 | 3.11 | 12.6 | Link |
| 0.17.5 | 2.6.0 | 3.10 | 12.6 | Link |
| 0.17.3 | 2.5.1 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.5.1 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.5.1 | 3.10 | 12.4 | Link |
| 0.17.3 | 2.5.0 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.5.0 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.5.0 | 3.10 | 12.4 | Link |
| 0.17.3 | 2.4.1 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.4.1 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.4.1 | 3.10 | 12.4 | Link |
| 0.17.3 | 2.4.0 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.4.0 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.4.0 | 3.10 | 12.4 | Link |
---
Triton is a language and compiler for writing highly efficient custom deep-learning primitives. Not officially supported on Windows, but a fork provides pre-built wheels.
- Windows Fork: woct0rdho/triton-windows
- Installation:

```shell
pip install -U "triton-windows<3.6"
```
---
A lightweight wrapper around CUDA custom functions, particularly for 8-bit optimizers, matrix multiplication (LLM.int8()), and quantization functions.
- Official Repo: bitsandbytes-foundation/bitsandbytes
---
- Nodes: ComfyUI-RadialAttn
---
- Official Repo: thu-ml/SpargeAttn
- Pre-built Sources: woct0rdho's Releases
| Package Version | PyTorch Ver | CUDA Ver | Download Link |
|---|---|---|---|
| 0.1.0.post1 | 2.8.0 | 12.8 | Link |
| 0.1.0.post1 | 2.7.1 | 12.8 | Link |

---
All wheel information in this repository is managed in the wheels.json file, which serves as the single source of truth. The tables in this README are automatically generated from this file.
This provides a stable, structured JSON endpoint for any external tool or application that needs to access this data without parsing Markdown.
You can access the raw JSON file directly via the following URL:
https://raw.githubusercontent.com/wildminder/AI-windows-whl/main/wheels.json
Example using curl:

```shell
curl -L -o wheels.json https://raw.githubusercontent.com/wildminder/AI-windows-whl/main/wheels.json
```

The file contains a list of packages, each with its metadata and an array of wheels, where each wheel object contains version details and a direct download URL.
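For example, a script could filter that array for wheels matching a specific environment. A minimal sketch follows; the field names used here ("packages", "wheels", "url", and so on) are assumptions based on the description above, so check the actual wheels.json for the exact schema:

```python
import json

# Hypothetical sample mirroring the described structure of wheels.json.
SAMPLE = json.loads("""
{
  "packages": [
    {
      "name": "flash-attn",
      "wheels": [
        {"version": "2.7.4.post1", "torch": "2.7.0", "python": "3.12",
         "cuda": "12.8", "url": "https://example.invalid/flash_attn.whl"}
      ]
    }
  ]
}
""")

def find_wheels(data, package, python, torch, cuda):
    """Return download URLs for wheels matching the given environment."""
    for pkg in data["packages"]:
        if pkg["name"] == package:
            return [w["url"] for w in pkg["wheels"]
                    if (w["python"], w["torch"], w["cuda"]) == (python, torch, cuda)]
    return []
```

In a real tool you would fetch the JSON from the raw URL above instead of embedding a sample, then feed the returned URL straight to pip.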
---
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have found a new pre-built wheel or a reliable source, please fork the repo and create a pull request, or simply open an issue with the link.
This repository is simply a collection of links. Huge thanks to the individuals and groups who do the hard work of building and hosting these wheels for the community, including lldacing, Wildminder, woct0rdho, and mjun0812.