Query and analyze your Weights & Biases data using natural language through the Model Context Protocol.
Example Use Cases (click command to copy)
| Analyze Experiments | Debug Traces | Create Reports | Get Help |
|---|---|---|---|
| "Show me the top 5 runs by eval/accuracy in wandb-smle/hiring-agent-demo-public" | "How did the latency of my hiring agent's prediction traces evolve over the last few months?" | "Generate a W&B report comparing the decisions made by the hiring agent last month" | "How do I create a leaderboard in Weave? Ask SupportBot" |
"Go through the last 100 traces of my last training run in grpo-cuda/axolotl-grpo and tell me why rollout traces of my RL experiment were bad sometimes?"
Available Tools (14 tools)
| Tool | Description | Example Query |
|---|---|---|
| infer_trace_schema_tool | Discover field names, types, and sample values | "What fields are in my traces?" |
| query_weave_traces_tool | Analyze LLM traces with detail_level control | "Show failed traces with full data" |
| count_weave_traces_tool | Count traces and get storage metrics | "How many traces failed?" |
| query_wandb_tool | Query W&B runs, metrics, and experiments | "Show me runs with loss < 0.1" |
| get_run_history_tool | Sampled time-series metric data | "Show loss curve for run abc123" |
| create_wandb_report_tool | Create reports with markdown, charts, and panels | "Create a report with loss plots" |
| log_analysis_to_wandb | Log analysis metrics to W&B as a run | "Log these latency stats to W&B" |
| search_wandb_docs_tool | Search official W&B documentation | "How do I create a Weave scorer?" |
| query_wandb_entity_projects | List projects for an entity | "What projects exist?" |
| list_registries_tool | List model registries in an organization | "What registries are available?" |
| list_registry_collections_tool | List collections within a registry | "What models are in the prod registry?" |
| list_artifact_versions_tool | List versions of an artifact collection | "Show versions of my model artifact" |
| get_artifact_details_tool | Get full details of an artifact version | "What's in model-v2 artifact?" |
| compare_artifact_versions_tool | Diff two artifact versions | "Compare model v1 vs v2" |
Schema-first workflow: Call infer_trace_schema_tool first to discover fields, then query_weave_traces_tool with precise columns and detail_level:
"schema"-- structural fields only (fast browsing)"summary"-- truncated inputs/outputs (default)"full"-- everything untruncated (drill into specific traces)
Chart panels: create_wandb_report_tool accepts a panels parameter for LinePlots, BarPlots, and run comparisons alongside markdown.
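As a rough illustration, a `panels` payload might look like the following. The exact field names are defined by the tool's schema, so treat this shape as a hypothetical sketch only:

```json
{
  "report_title": "Training overview",
  "markdown": "## Loss curves",
  "panels": [
    {"type": "LinePlot", "metrics": ["train/loss"], "title": "Loss"},
    {"type": "BarPlot", "metrics": ["eval/accuracy"], "title": "Accuracy by run"}
  ]
}
```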
Docs search: search_wandb_docs_tool proxies docs.wandb.ai so you get data tools + documentation search from a single MCP connection. Disable with WANDB_MCP_PROXY_DOCS=false if you connect the docs MCP separately.
Usage Tips (best practices)
→ Provide your W&B project and entity name
LLMs are not mind readers: always specify the W&B entity and W&B project in your prompt.
→ Avoid asking overly broad questions
Questions such as "what is my best evaluation?" are usually too broad; you'll get to an answer faster by refining your question to be more specific, such as "which eval had the highest F1 score?"
→ Ensure all data was retrieved
When asking broad, general questions such as "what are my best performing runs/evaluations?", it's a good idea to ask the LLM to confirm that it retrieved all available runs. The MCP tools are designed to fetch the correct amount of data, but LLMs sometimes retrieve only the latest runs or the last N runs.
We recommend using our hosted server at https://mcp.withwandb.com - no installation required!
🔑 Get your API key from wandb.ai/authorize
🌐 To connect to a W&B Dedicated / On-Prem instance, currently only the local MCP configuration can be used, with an additional `WANDB_BASE_URL` env variable (the default is `api.wandb.ai`)
One-click installation
- Click the button above to automatically add the config to Cursor
- Then add your WANDB_API_KEY in the respective field as `Bearer YOUR_API_KEY` and connect
For manual or local installation, see Option 2 below.
Python client setup
```python
from openai import OpenAI
import os

client = OpenAI()
resp = client.responses.create(
    model="gpt-4o",
    tools=[{
        "type": "mcp",
        "server_url": "https://mcp.withwandb.com/mcp",
        "authorization": os.getenv("WANDB_API_KEY"),
        "server_label": "WandB_MCP",
    }],
    input="How many traces are in my project?",
)
print(resp.output_text)
```
Note: OpenAI's MCP is server-side, so localhost URLs won't work. For local servers, see Option 2 with ngrok.
One-command installation
```shell
# run in terminal
claude mcp add --transport http wandb https://mcp.withwandb.com/mcp --scope user --header "Authorization: Bearer <your-api-key-here>"
```
For local installation, see Option 2 below.
One-command installation
```shell
# run in terminal
export WANDB_API_KEY=<your-api-key>
codex mcp add wandb --url https://mcp.withwandb.com/mcp --bearer-token-env-var WANDB_API_KEY
```
For local installation, see Option 2 below.
One-command installation
```shell
# Set your API key
export WANDB_API_KEY="your-api-key-here"

# Install the extension
gemini extensions install https://github.com/wandb/wandb-mcp-server
```
The extension will use the configuration from gemini-extension.json pointing to the hosted server.
For local installation, see Option 2 below.
Settings configuration
```shell
# Open settings
code ~/.vscode/mcp.json  # or global mcp.json file
```
```json
{
  "servers": {
    "wandb": {
      "type": "http",
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY"
      }
    }
  }
}
```
For local installation, see Option 2 below.
Configuration setup
Mistral is currently the best-supported chat assistant for API-key-based authentication. Navigate to "[Connectors](https://mistral.ai/news/le-chat-mcp-connectors-memories)", then 1) paste in the URL `https://mcp.withwandb.com/mcp` and 2) select API Key Authentication and paste your W&B API key.
Configuration setup
Add to your Claude config file. Claude Desktop currently doesn't support adding remote MCPs, so we add the local MCP instead. Be sure to use the full path to uv in the command, because Claude Desktop may not find your uv installation otherwise.
```shell
# macOS
open ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Windows
notepad %APPDATA%\Claude\claude_desktop_config.json
```
```json
{
  "mcpServers": {
    "wandb": {
      "command": "/full/path/to/uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "<your-api-key>"
      }
    }
  }
}
```
Restart Claude Desktop to activate.
We're working on adding OAuth support so that we can integrate with ChatGPT.
Option 1: Hosted Server (Recommended)
The hosted server provides a zero-configuration experience with enterprise-grade reliability. This server is maintained by the W&B team, automatically updated with new features, and scales to handle any workload. Perfect for teams and production use cases where you want to focus on your ML work rather than infrastructure.
The easiest way is using our hosted server at https://mcp.withwandb.com.
Benefits:
- ✅ Zero installation
- ✅ Always up-to-date
- ✅ Automatic scaling
- ✅ No maintenance
Simply use the configurations shown in Quick Start.
Option 2: Local Development (STDIO)
Run the MCP server locally for development, testing, or when you need full control over your data. The local server runs directly on your machine with STDIO transport for desktop clients or HTTP transport for web-based clients. Ideal for developers who want to customize the server or work in air-gapped environments. See below for client specific installation.
Quick Start:
```shell
# Install uv if needed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the server
uv pip install git+https://github.com/wandb/wandb-mcp-server

# Run with STDIO transport (for desktop clients)
export WANDB_API_KEY="your-api-key"
uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
```
📖 For complete command line options and environment variables, see the Command Line Reference in the More Information section.
Add to your MCP client config (for detailed client-specific configs see below):
```json
{
  "mcpServers": {
    "wandb": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "YOUR_API_KEY",
        "WANDB_BASE_URL": "YOUR_BASE_URL"
      }
    }
  }
}
```
`WANDB_BASE_URL` is optional and only needed for dedicated or on-prem installations.

- Open Cursor Settings (`⌘,` or `Ctrl,`)
- Navigate to Features → Model Context Protocol
- Click "Install from Registry" or "Add MCP Server"
- Search for "wandb" or enter:
  - Name: `wandb`
  - URL: `https://mcp.withwandb.com/mcp`
  - API Key: Your W&B API key
Manual hosted config in mcp.json:
"wandb": {
"transport": "http",
"url": "https://mcp.withwandb.com/mcp",
"headers": {
"Authorization": "Bearer YOUR-API_KEY",
"Accept": "application/json, text/event-stream"
}
}
Manual local (dedicated or on-prem) config in mcp.json:
"wandb": {
"command": "uvx",
"args": [
"--from",
"git+https://github.com/wandb/wandb-mcp-server",
"wandb_mcp_server"
],
"env": {
"WANDB_API_KEY": "YOUR-API_KEY",
"WANDB_BASE_URL": "https://your-wandb-instance.example.com", # optional
}
}
```shell
codex mcp add wandb \
  --env WANDB_API_KEY=your_api_key_here \
  --env WANDB_BASE_URL=https://your-wandb-instance.example.com \
  -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
```
Add `--scope user` for global config.
```shell
claude mcp add wandb -e WANDB_API_KEY=your-api-key -e WANDB_BASE_URL=your-base-url -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
```
As above, add `--scope user` for global config.
```shell
# macOS
open ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Windows
notepad %APPDATA%\Claude\claude_desktop_config.json
```
```json
{
  "mcpServers": {
    "wandb": {
      "command": "/full/path/to/uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "<your-api-key>",
        "WANDB_BASE_URL": "https://your-wandb-instance.example.com"
      }
    }
  }
}
```
`WANDB_BASE_URL` is optional. Restart Claude Desktop to activate.
For clients like OpenAI and LeChat that require public URLs:
```shell
# 1. Start HTTP server
uvx wandb_mcp_server --transport http --port 8080

# 2. Expose with ngrok
ngrok http 8080

# 3. Use the ngrok URL in your client configuration
```
Option 3: Self-Hosted HTTP Server (Advanced)
This public repository focuses on the STDIO transport. If you need a fully managed HTTP deployment (Docker, Cloud Run, Hugging Face, etc.), start from this codebase and add your own HTTP entrypoint in a separate repo. The production-grade hosted server maintained by W&B now lives in a private repository built on top of this one.
For lightweight experimentation and testing, you can run the FastMCP HTTP transport directly:
```shell
# Basic HTTP server
uvx wandb_mcp_server --transport http --host 0.0.0.0 --port 8080

# With Weave tracing enabled
uvx wandb_mcp_server \
  --transport http \
  --host 0.0.0.0 \
  --port 8080 \
  --weave_entity your-entity \
  --weave_project mcp-server-logs
```
📖 For all available command line options, see the Command Line Reference in the More Information section.
Note: Clients must continue to provide their own W&B API key via Bearer token per the MCP spec.
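To make the Bearer-token requirement concrete, here is a minimal Python sketch that builds (but does not send) an MCP JSON-RPC request with the required `Authorization` header. The localhost URL and the `tools/list` method are assumptions for illustration; a real MCP client library handles this for you.

```python
import json
import os
import urllib.request

# Placeholder endpoint for a locally self-hosted HTTP server
MCP_URL = "http://localhost:8080/mcp"
api_key = os.environ.get("WANDB_API_KEY", "test-key")

# A standard MCP JSON-RPC request body (tools/list is part of the MCP spec)
payload = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}).encode()

req = urllib.request.Request(
    MCP_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
)
# urllib.request.urlopen(req) would actually send it; omitted so this
# sketch runs without a server.
print(req.get_header("Authorization"))
```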
Option 4: Dedicated / On-Prem Deployment
For W&B Dedicated and On-Prem customers, the MCP server is available as an optional subchart in the operator-wandb Helm chart. Enable it with one line in your WeightsAndBiases CR:
```yaml
mcp-server:
  install: true
```
The server becomes accessible at `https://<your-instance>/mcp`. It automatically connects to your in-cluster Weave trace server and W&B API.
Requirements:
- `weave-trace` must be installed (`weave-trace.install: true`)
- Operator chart version >= 0.42.0
- Image `wandb/mcp-server:0.3.0` or later
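Putting the prerequisite and the subchart flag together, the relevant values fragment in the WeightsAndBiases CR might look like the following; this assumes the keys shown above, so consult the operator chart for the exact structure:

```yaml
# Both subcharts enabled; weave-trace is a prerequisite for mcp-server
weave-trace:
  install: true
mcp-server:
  install: true
```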
Client configuration for dedicated instances:
```json
{
  "mcpServers": {
    "wandb": {
      "url": "https://your-instance.wandb.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY"
      }
    }
  }
}
```
Contact your W&B account team to enable MCP on your dedicated deployment.
When running the server locally, you can customize its behavior with command line arguments:
Note: Arguments use underscores (e.g., `--wandb_api_key`), not dashes.
| Argument | Type | Default | Description |
|---|---|---|---|
| `--transport` | string | `stdio` | Transport type: `stdio` for local MCP client communication or `http` for HTTP server |
| `--host` | string | `localhost` | Host to bind the HTTP server to (only used with `--transport http`) |
| `--port` | integer | `8080` | Port to run the HTTP server on (only used with `--transport http`) |
| `--wandb_api_key` | string | None | Weights & Biases API key for authentication |
| `--weave_entity` | string | None | The W&B entity to log traced MCP server calls to |
| `--weave_project` | string | `weave-mcp-server` | The W&B project to log traced MCP server calls to |
| Variable | Description | Required |
|---|---|---|
| `WANDB_API_KEY` | Your W&B API key (alternative to the `--wandb_api_key` flag) | Yes |
| `WANDB_BASE_URL` | Custom W&B instance URL (for dedicated/on-prem instances) | No |
| `MCP_SERVER_LOG_LEVEL` | Logging verbosity: `DEBUG`, `INFO`, `WARNING`, `ERROR` | No |
| `WANDB_SILENT` | Set to `"False"` to suppress W&B output | No |
| `WEAVE_SILENT` | Set to `"False"` to suppress Weave output | No |
| `WANDB_DEBUG` | Set to `"true"` to enable detailed W&B logging | No |
| `MCP_AUTH_DISABLED` | Disable HTTP authentication (development only) | No |
| `WANDB_MCP_PROXY_DOCS` | Enable/disable docs search proxy (default: `true`) | No |
| `MAX_RESPONSE_TOKENS` | Token budget for response truncation (default: `30000`) | No |
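For example, a local debug run might combine several of these variables; all values below are placeholders:

```shell
# Placeholder environment for a local debug run
export WANDB_API_KEY="your-api-key"
export MCP_SERVER_LOG_LEVEL="DEBUG"
export WANDB_MCP_PROXY_DOCS="true"
export MAX_RESPONSE_TOKENS="30000"
```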
STDIO Transport (default for desktop clients):
```shell
# Basic usage with environment variable
export WANDB_API_KEY="your-api-key"
uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server

# Or with API key as argument
uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server --wandb_api_key your-api-key
```
HTTP Transport (for testing and development):
```shell
# Basic HTTP server on localhost:8080
uvx wandb_mcp_server --transport http --host 127.0.0.1 --port 8080

# Bind to all interfaces with custom port
uvx wandb_mcp_server --transport http --host 0.0.0.0 --port 9090
```
With Weave Tracing (log MCP calls to W&B):
```shell
uvx wandb_mcp_server \
  --transport http \
  --port 8080 \
  --weave_entity my-team \
  --weave_project mcp-monitoring
```
View all options:
```shell
uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server --help
```
- CONTRIBUTING.md -- Development setup, testing, PR process, architecture overview
- RELEASING.md -- Version bumping, release checklist, deployment pipeline
- W&B Docs: docs.wandb.ai
- Weave Docs: weave-docs.wandb.ai
- MCP Spec: modelcontextprotocol.io
Complete OpenAI Example
```python
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
client = OpenAI()

resp = client.responses.create(
    model="gpt-4o",  # Use gpt-4o for larger context window
    tools=[
        {
            "type": "mcp",
            "server_label": "wandb",
            "server_description": "Query W&B data",
            "server_url": "https://mcp.withwandb.com/mcp",
            "authorization": os.getenv("WANDB_API_KEY"),
            "require_approval": "never",
        },
    ],
    input="How many traces are in wandb-smle/hiring-agent-demo-public?",
)
print(resp.output_text)
```
Unit tests run without API keys or network access:
```shell
pip install -e ".[test]"
pytest tests/ -v
```
CI runs automatically on every push and PR via GitHub Actions.
| Repo | Visibility | Contains |
|---|---|---|
| `wandb/wandb-mcp-server` | Public | Tool logic, core server, unit tests |
| `wandb/wandb-mcp-server-internal` | Private | LLM evals, load tests, Dockerfile, Helm, CI/CD |
The internal repo installs the public repo as a pip dependency.