LangChain Integration

Yaqing2023 edited this page Mar 7, 2026 · 1 revision

Use MoltsPay tools in your LangChain agents and let your AI pay for services autonomously.

Installation

pip install moltspay[langchain]

Quick Start

from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI
from moltspay.integrations.langchain import MoltsPayTool

llm = ChatOpenAI(model="gpt-4")
tools = [MoltsPayTool()]

agent = initialize_agent(
    tools, 
    llm, 
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True
)

# Agent can now pay for AI services!
result = agent.run("Generate a video of a cat dancing on the beach")

Available Tools

MoltsPayTool

Pay for and execute a service.

from moltspay.integrations.langchain import MoltsPayTool

tool = MoltsPayTool(
    max_per_tx=10.0,    # Optional: limit per transaction
    max_per_day=100.0   # Optional: daily limit
)

Tool Description (shown to LLM):

Pay for and execute an AI service. Input should be a JSON object with "url", "service", and service-specific parameters.

Example Input:

{
  "url": "https://juai8.com/zen7",
  "service": "text-to-video",
  "prompt": "a cat dancing on the beach"
}
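When calling the tool directly rather than through an agent, the same input can be assembled with `json.dumps`. A minimal sketch (the commented-out `tool.run(...)` call assumes the standard LangChain `BaseTool` string interface):

```python
import json

# Build the JSON input shown above
payload = json.dumps({
    "url": "https://juai8.com/zen7",
    "service": "text-to-video",
    "prompt": "a cat dancing on the beach",
})

# tool = MoltsPayTool()
# result = tool.run(payload)  # pays for and executes the service
print(payload)
```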

MoltsPayDiscoverTool

Discover available services from a provider.

from moltspay.integrations.langchain import MoltsPayDiscoverTool

tool = MoltsPayDiscoverTool()

Tool Description:

Discover available paid AI services from a provider URL. Returns list of services with prices.

Example Input:

https://juai8.com/zen7

Get Both Tools

from moltspay.integrations.langchain import get_moltspay_tools

tools = get_moltspay_tools(max_per_tx=10.0)
# Returns [MoltsPayTool, MoltsPayDiscoverTool]

Full Example: Autonomous Video Agent

from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from moltspay.integrations.langchain import get_moltspay_tools

# Setup
llm = ChatOpenAI(model="gpt-4", temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
tools = get_moltspay_tools(max_per_tx=5.0)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
    verbose=True
)

# Conversation
print(agent.run("What video services are available at juai8.com/zen7?"))
# Agent uses MoltsPayDiscoverTool, returns:
# "I found 2 services: text-to-video ($0.99) and image-to-video ($1.49)"

print(agent.run("Generate a video of a sunset over mountains"))
# Agent uses MoltsPayTool, pays $0.99, returns video URL

Custom Service Discovery

Pre-configure known services:

from moltspay.integrations.langchain import MoltsPayTool

# With known service URL
tool = MoltsPayTool(
    default_url="https://juai8.com/zen7"
)

# Agent can now just say "generate a video of X" 
# without specifying the URL

Error Handling in Agents

The tools handle errors gracefully:

# If insufficient funds:
# Tool returns: "Payment failed: Insufficient funds. Need 0.99 USDC, have 0.50 USDC."

# If limit exceeded:
# Tool returns: "Payment failed: Exceeds per_tx limit. Tried 5.00, limit is 2.00."

# Agent can then communicate this to the user
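Because failures come back as plain strings rather than exceptions, a calling application can also branch on the tool's result before surfacing it to the user. A minimal sketch (the `is_payment_failure` helper and the success URL are illustrative, not part of the MoltsPay SDK):

```python
def is_payment_failure(result: str) -> bool:
    """Return True when a MoltsPay tool result reports a failed payment."""
    return result.startswith("Payment failed:")

# Results in the formats documented above (success URL is hypothetical)
ok = "https://cdn.example.com/videos/abc123.mp4"
err = "Payment failed: Insufficient funds. Need 0.99 USDC, have 0.50 USDC."

print(is_payment_failure(ok))   # False
print(is_payment_failure(err))  # True
```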

With LangGraph

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from moltspay.integrations.langchain import get_moltspay_tools

llm = ChatOpenAI(model="gpt-4")
tools = get_moltspay_tools()

agent = create_react_agent(llm, tools)

result = agent.invoke({
    "messages": [("user", "Generate a video of a cat")]
})

Security Considerations

  1. Set spending limits - Always configure max_per_tx and max_per_day
  2. Monitor usage - Check client.limits().spent_today periodically
  3. Use specific URLs - Prefer default_url over letting the agent choose
  4. Review agent actions - Enable verbose=True to see what the agent does
# Recommended safe configuration
tools = get_moltspay_tools(
    max_per_tx=2.0,    # Max $2 per service call
    max_per_day=20.0   # Max $20 per day total
)
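For point 2, a periodic check against the daily cap can decide whether the agent should keep spending. A sketch of the budget check itself, fed by `client.limits().spent_today` from the SDK (the `within_daily_budget` helper and its `next_tx` parameter are illustrative):

```python
def within_daily_budget(spent_today: float, max_per_day: float,
                        next_tx: float = 0.0) -> bool:
    """True while today's spend, plus the next planned transaction, stays under the cap."""
    return spent_today + next_tx < max_per_day

# With the $20/day limit configured above
print(within_daily_budget(spent_today=12.5, max_per_day=20.0, next_tx=2.0))  # True: room left
print(within_daily_budget(spent_today=19.0, max_per_day=20.0, next_tx=2.0))  # False: would exceed cap
```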
