Discovery layer for finding x402 endpoints at runtime #2

@rplryan

Description

Hey qntx team — love the drop-in OpenAI client approach. Wanted to flag something that solves the "which x402 endpoint do I point at?" problem your users face.

x402 Service Discovery API is an enriched catalog of 251+ x402-enabled services:

import requests
from openai import OpenAI

# Step 1: Find the cheapest x402 LLM endpoint under a price cap
resp = requests.get(
    "https://x402-discovery-api.onrender.com/discover",
    params={"query": "llm", "max_price_usd": 0.05, "sort": "cheapest"},
    timeout=10,
)
resp.raise_for_status()
services = resp.json()["services"]
if not services:
    raise RuntimeError("no x402 service matched the query")

best_endpoint = services[0]["url"]

# Step 2: Use it with your x402-openai client
# (with x402, payment stands in for the API key)
client = OpenAI(base_url=best_endpoint, api_key="x402")
response = client.chat.completions.create(...)

What it provides:

  • 251+ x402 LLM, data, and compute services indexed
  • Health status + latency scores per service
  • ERC-8004 trust verification
  • Routing by price, speed, or trust
  • Auto-updates every 6 hours from x402.org/ecosystem
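The health and latency scores above can also drive client-side routing once you have the catalog in hand. A minimal sketch, assuming the `/discover` response exposes per-service fields like `url`, `price_usd`, `latency_ms`, and `healthy` (field names are assumptions about the catalog schema, not confirmed by the API):

```python
# Client-side routing over a /discover response.
# The field names ("url", "price_usd", "latency_ms", "healthy")
# are assumed here for illustration.

def pick_endpoint(services, max_price_usd=0.05):
    """Return the URL of the cheapest healthy service under the price cap,
    breaking price ties by lower latency."""
    candidates = [
        s for s in services
        if s.get("healthy", True) and s.get("price_usd", 0.0) <= max_price_usd
    ]
    if not candidates:
        raise LookupError("no healthy x402 service under the price cap")
    best = min(candidates, key=lambda s: (s["price_usd"], s.get("latency_ms", 0)))
    return best["url"]

# Sample catalog entries (made up for the sketch)
sample = [
    {"url": "https://a.example/v1", "price_usd": 0.04, "latency_ms": 120, "healthy": True},
    {"url": "https://b.example/v1", "price_usd": 0.01, "latency_ms": 300, "healthy": False},
    {"url": "https://c.example/v1", "price_usd": 0.02, "latency_ms": 90, "healthy": True},
]
print(pick_endpoint(sample))  # → https://c.example/v1 (cheapest healthy entry)
```

The same selection could be delegated to the API's `sort` parameter; doing it client-side just keeps the fallback logic (skip unhealthy entries, enforce a cap) under your control.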

Also ships as an MCP server, so Claude or Cursor can discover OpenAI-compatible x402 endpoints at agent runtime and then invoke your client.

Might be useful as a companion link in your README's "Getting Started" section, since users need to find where to point the client before they can use it.
