Setup Polymarket agents with websocket support #18
Open: chippy101 wants to merge 17 commits into moondevonyt:main from chippy101:claude/setup-polymarket-agents-011CUWKBi3VmZbg7YjCWvwzJ
Conversation
- Installed websocket-client dependency for real-time market monitoring
- Updated requirements.txt with websocket-client==1.9.0
- Created comprehensive POLYMARKET_SETUP.md guide
- Ready to run Polymarket agent with AI swarm analysis

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
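A minimal sketch of how the websocket-client dependency could be used to stream Polymarket market updates; the endpoint URL and subscription payload are assumptions for illustration, not values taken from this PR.

```python
# Minimal sketch using websocket-client to stream market updates.
# The URL and subscription message shape are assumptions, not from this repository.
import json
import websocket

WS_URL = "wss://ws-subscriptions-clob.polymarket.com/ws/market"  # assumed endpoint

def on_open(ws):
    # Subscribe to a market channel (payload shape is hypothetical).
    ws.send(json.dumps({"type": "market", "assets_ids": ["<asset-id>"]}))

def on_message(ws, message):
    print("market update:", message)

def on_error(ws, error):
    print("websocket error:", error)

if __name__ == "__main__":
    ws = websocket.WebSocketApp(
        WS_URL,
        on_open=on_open,
        on_message=on_message,
        on_error=on_error,
    )
    ws.run_forever()
```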
@claude is attempting to deploy a commit to the md777 Team on Vercel. A member of the Team first needs to authorize it.
- Changed USE_SWARM_MODE from True to False
- Set AI_MODEL_PROVIDER to "anthropic" (Claude)
- Set AI_MODEL_NAME to "claude-3-5-sonnet-20241022"
- Added all required dependencies to requirements.txt:
  - websocket-client==1.9.0 (WebSocket support)
  - termcolor==3.2.0 (colored terminal output)
  - python-dotenv==1.2.1 (.env file support)
  - anthropic==0.71.0 (Claude API)
  - groq, openai, google-generativeai (model factory dependencies)
  - pandas, numpy (data processing)

Agent now ready to run with single Claude API key instead of 6-model swarm. Cost-effective setup for testing and learning.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
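For reference, a minimal sketch of a single-model call with the anthropic package and the model name set above; the constant names mirror the commit description rather than the repository's actual config, and the prompt and max_tokens are illustrative.

```python
# Minimal sketch of a single-model call with the anthropic SDK.
# Constant names mirror the commit description; prompt/max_tokens are illustrative.
import os
from anthropic import Anthropic

USE_SWARM_MODE = False
AI_MODEL_PROVIDER = "anthropic"
AI_MODEL_NAME = "claude-3-5-sonnet-20241022"

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

response = client.messages.create(
    model=AI_MODEL_NAME,
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize today's BTC market conditions."}],
)
print(response.content[0].text)
```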
- Changed AI_MODEL_PROVIDER from "anthropic" to "deepseek"
- Changed AI_MODEL_NAME to "deepseek-chat"
- DeepSeek is cost-effective at $0.14 per million tokens
- User has DeepSeek API key configured in .env (not committed)

Note: DeepSeek requires credits but is extremely cheap. Exploring truly free alternatives (Groq, Gemini) for next iteration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Changed AI_MODEL_PROVIDER to "ollama"
- Set AI_MODEL_NAME to "llama3.2"
- Created RUN_POLYMARKET_AGENT.md with detailed instructions
- Agent now uses free local AI via Ollama
- No API keys needed, no costs, no geographic restrictions
- Works completely offline once model is downloaded

Benefits:
- 100% free forever
- No rate limits
- Full privacy (data stays local)
- No account/API key required
- Bypasses geographic restrictions

User needs to run agent on Linux Mint host (not Docker) to connect to Ollama on localhost:11434.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
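A minimal sketch of how the agent could call the local Ollama HTTP API on localhost:11434 with the llama3.2 model; the helper name and prompt are illustrative rather than taken from the repository.

```python
# Minimal sketch of calling the local Ollama HTTP API (llama3.2).
# Helper name and prompt are illustrative, not from the repository.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=180,  # generous timeout for local models on slower hardware
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_ollama("Give a one-line summary of this Polymarket question: ..."))
```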
Trading Agent Changes:
- Set USE_SWARM_MODE to False (single model mode)
- Changed AI_MODEL_TYPE from 'xai' to 'ollama'
- Set AI_MODEL_NAME to 'llama3.2:1b' (fast & lightweight)
- Added comment about free local AI usage

RBI Agent Changes:
- Reconfigured all 4 model configs from GPT-5 to Ollama:
  - RESEARCH_CONFIG: ollama/llama3.2:1b
  - BACKTEST_CONFIG: ollama/llama3.2:1b
  - DEBUG_CONFIG: ollama/llama3.2:1b
  - PACKAGE_CONFIG: ollama/llama3.2:1b

Documentation:
- Created TRADING_AND_RBI_SETUP.md
- Complete setup instructions for both agents
- Configuration options and examples
- Troubleshooting guide
- Performance notes and model comparison

All three agents now running 100% free with local Ollama:
✅ Polymarket Agent - llama3.2 (already configured)
✅ Trading Agent - llama3.2:1b (configured)
✅ RBI Agent - llama3.2:1b (configured)

Total cost: $0.00 forever! No API keys, no geographic restrictions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
HyperLiquid Integration Changes (src/nice_funcs_hyperliquid.py):
- Added USE_TESTNET = True flag for paper trading mode
- Created get_api_url() helper function
- Automatically selects testnet/mainnet API URL based on flag
- Replaced all constants.MAINNET_API_URL with get_api_url()
- Visual indicator shows which mode is active (testnet/mainnet)
- Zero code changes needed to switch between modes

Trading Agent Changes (src/agents/trading_agent.py):
- Changed EXCHANGE from "ASTER" to "HYPERLIQUID"
- Added clear testnet mode documentation in config
- Currently trading BTC with Ollama llama3.2:1b
- Single model mode (fast execution)
- Long-only positions, 90% position size, 9x leverage

Documentation (HYPERLIQUID_TESTNET_SETUP.md):
- Complete testnet setup guide
- How to get testnet wallet and funds
- Step-by-step configuration instructions
- Troubleshooting guide
- Safety notes and testing strategy
- How to switch to mainnet when ready

Safe Testing Environment:
✅ Fake money (testnet USDC)
✅ Real trading logic (same as mainnet)
✅ Free testnet funds (from faucet)
✅ No risk, perfect for learning
✅ Easy switch to mainnet later

All agents now ready:
✅ Polymarket Agent - Ollama llama3.2 (analysis only)
✅ Trading Agent - Ollama llama3.2:1b (testnet paper trading)
✅ RBI Agent - Ollama llama3.2:1b (backtest generation)

Total cost: $0.00 with testnet!

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
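In sketch form, the USE_TESTNET flag and get_api_url() helper described above could look like this, assuming the hyperliquid-python-sdk constants module; the printed mode indicators are illustrative.

```python
# Sketch of a USE_TESTNET flag plus get_api_url() helper as described in the commit;
# assumes the hyperliquid-python-sdk constants module referenced in the message.
from hyperliquid.utils import constants

USE_TESTNET = True  # True = paper trading on testnet, False = real funds on mainnet

def get_api_url() -> str:
    """Return the HyperLiquid API URL matching the USE_TESTNET flag."""
    if USE_TESTNET:
        print("🧪 HyperLiquid TESTNET mode (fake funds)")
        return constants.TESTNET_API_URL
    print("💰 HyperLiquid MAINNET mode (real funds)")
    return constants.MAINNET_API_URL

# Call sites that previously used constants.MAINNET_API_URL can call get_api_url() instead.
```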
Created comprehensive guide for migrating project and Ollama to external drive.

Migration Guide (MIGRATION_TO_EXTERNAL_DRIVE.md):
- Complete step-by-step instructions for Evo Plus 2a drive
- Safe copy-first approach (test before deleting)
- Moves project files (~100 MB) to external drive
- Moves Ollama models (~3.3 GB) to external drive
- Uses symlinks for transparent Ollama integration
- Saves ~3.7 GB on laptop SSD

Features:
- Exact commands for user's specific drive path
- Safety checks at each step
- Testing checklist before cleanup
- Troubleshooting section
- Performance notes (Samsung Evo Plus is fast!)
- Auto-mounting instructions (optional)
- Quick reference for new workflow

Benefits:
- 💾 Saves 3.7+ GB on laptop
- 📦 Everything in one place
- 🔄 Easy to backup
- 🚀 No performance loss
- ✅ All agents work exactly the same

User can follow guide to migrate when ready.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Changed AI_MODEL_NAME from llama3.2:1b to llama3.2:latest
- Provides better accuracy for trading decisions
- Models now running from external SSD (3.2 GB saved on laptop)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changes:
- Swarm Agent: Configure 4 FREE local Ollama models (deepseek-r1:7b, qwen2.5:7b, llama3.2:latest, llama3.2:1b)
- Trading Agent: Enable USE_SWARM_MODE = True for multi-model consensus
- Consensus Reviewer: Using deepseek-r1:7b for final synthesis
- Disabled all paid API models (Claude, GPT-5, Grok, etc.)
- Updated all references from 6-model to 4-model consensus

Benefits:
- 100% FREE - No API costs, all models run locally
- Better decisions - 4 different AI perspectives voting
- More reliable - Consensus catches individual model mistakes
- ~11 GB total on external SSD (0 bytes on laptop)

Models in swarm:
1. DeepSeek-R1 7B - Reasoning specialist
2. Qwen 2.5 7B - Pattern recognition
3. Llama 3.2 2GB - General intelligence
4. Llama 3.2 1B - Fast baseline

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
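A rough sketch of what the 4-model consensus vote could look like; the model list matches the commit, but the voting logic, prompt wrapper, and function names are assumptions for illustration.

```python
# Rough sketch of a multi-model consensus vote over local Ollama models.
# Model list mirrors the commit; voting logic and names are illustrative.
from collections import Counter
import requests

SWARM_MODELS = ["deepseek-r1:7b", "qwen2.5:7b", "llama3.2:latest", "llama3.2:1b"]
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=180,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip().upper()

def swarm_vote(prompt: str) -> str:
    """Ask each model for BUY / SELL / NOTHING and return the majority answer."""
    votes = []
    for model in SWARM_MODELS:
        answer = ask(model, prompt + "\nAnswer with exactly one word: BUY, SELL, or NOTHING.")
        # Keep only recognizable votes; ignore anything else the model says.
        for choice in ("BUY", "SELL", "NOTHING"):
            if choice in answer:
                votes.append(choice)
                break
    if not votes:
        return "NOTHING"
    decision, count = Counter(votes).most_common(1)[0]
    print(f"Consensus: {decision} ({count}/{len(SWARM_MODELS)} models)")
    return decision
```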
Changes:
- Added PAPER_TRADING_MODE flag (default: True)
- Created fake $10,000 starting balance for simulation
- Added get_paper_balance() - tracks cash and positions
- Added simulate_paper_trade() - simulates BUY/SELL without real execution
- Added show_paper_portfolio() - displays current portfolio status
- Modified run_trading_cycle() to skip real trades when in paper mode
- Shows "WOULD BUY/SELL" instead of executing real trades

Benefits:
- Test 4-model consensus safely without funds
- See AI decision-making in real-time
- Track portfolio performance
- Zero risk, zero cost
- Perfect for strategy validation

When PAPER_TRADING_MODE = True:
✅ Analyzes real market data
✅ 4 AI models vote on decisions
✅ Shows what trades WOULD be made
✅ Tracks fake portfolio performance
❌ No real trades executed
❌ No real money at risk

To enable real trading: Set PAPER_TRADING_MODE = False

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
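A condensed sketch of the paper-trading helpers named above (simulate_paper_trade, show_paper_portfolio); the bookkeeping shown is an assumption about how they might work, not the repository's actual implementation.

```python
# Condensed sketch of the paper-trading helpers named in the commit.
# The bookkeeping is an assumption for illustration, not the actual implementation.
PAPER_TRADING_MODE = True
PAPER_STARTING_BALANCE = 10_000.0

paper_portfolio = {"cash": PAPER_STARTING_BALANCE, "positions": {}}  # symbol -> size

def simulate_paper_trade(symbol: str, side: str, usd_amount: float, price: float) -> None:
    """Record a simulated BUY/SELL instead of sending a real order."""
    size = usd_amount / price
    if side == "BUY" and paper_portfolio["cash"] >= usd_amount:
        paper_portfolio["cash"] -= usd_amount
        paper_portfolio["positions"][symbol] = paper_portfolio["positions"].get(symbol, 0) + size
        print(f"📝 WOULD BUY {size:.4f} {symbol} @ {price}")
    elif side == "SELL" and paper_portfolio["positions"].get(symbol, 0) >= size:
        paper_portfolio["cash"] += usd_amount
        paper_portfolio["positions"][symbol] -= size
        print(f"📝 WOULD SELL {size:.4f} {symbol} @ {price}")
    else:
        print(f"📝 Skipping {side} {symbol}: insufficient paper cash/position")

def show_paper_portfolio() -> None:
    print(f"💵 Paper cash: ${paper_portfolio['cash']:,.2f}")
    for symbol, size in paper_portfolio["positions"].items():
        print(f"   {symbol}: {size:.4f}")
```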
Complete guide for setting up risk-free paper testing on Solana devnet:
- Step-by-step wallet creation (Phantom or CLI)
- Free devnet SOL from faucet (unlimited)
- Private key format conversion (base58)
- Trading Agent configuration for devnet
- Devnet token addresses
- Troubleshooting section
- Cost comparison table

Benefits vs HyperLiquid testnet:
- ✅ Actually works (no funding issues)
- ✅ Unlimited free devnet SOL
- ✅ Easy faucet access
- ✅ Well-documented
- ✅ Stable testing environment

Tomorrow's plan:
1. Create Solana devnet wallet
2. Get free SOL from faucet
3. Configure EXCHANGE = "SOLANA"
4. Test 4-model consensus with real devnet trades

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
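For the faucet step, a minimal sketch of requesting free devnet SOL with solana-py and solders; the throwaway keypair is for illustration only, and the guide itself may use the web faucet or the Solana CLI instead.

```python
# Minimal sketch of requesting free devnet SOL with solana-py / solders.
# Generates a throwaway keypair for illustration; in practice you would airdrop
# to the Phantom or CLI wallet created in the guide.
from solana.rpc.api import Client
from solders.keypair import Keypair

DEVNET_RPC = "https://api.devnet.solana.com"

wallet = Keypair()  # throwaway devnet wallet for the example
client = Client(DEVNET_RPC)

# 1 SOL = 1_000_000_000 lamports; the devnet faucet rate-limits large/frequent requests.
resp = client.request_airdrop(wallet.pubkey(), 1_000_000_000)
print("airdrop response:", resp)
```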
Fix: ohlcv_collector.py was importing nice_funcs_aster unconditionally, causing a crash even when using HyperLiquid or Solana exchange.

Changes:
- Wrapped Aster import in try/except block
- Set aster = None if import fails (module isn't used anyway)
- Allows trading agent to run without Aster dependencies

Now works with:
✅ HyperLiquid (doesn't need Aster)
✅ Solana (doesn't need Aster)
✅ Paper Trading Mode (doesn't need Aster)
✅ Aster (if dependencies installed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
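The guarded import described above would look roughly like this (exact import path assumed):

```python
# Guarded Aster import as described in the commit message (import path assumed).
# If Aster dependencies are missing, fall back to None so other exchanges still work.
try:
    from src import nice_funcs_aster as aster
except ImportError:
    aster = None  # Aster not installed; HyperLiquid/Solana/paper trading don't need it
```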
Fix: nice_funcs.py required BIRDEYE_API_KEY even when using HyperLiquid or paper trading mode.

Changes:
- Changed hard error to warning when BIRDEYE_API_KEY missing
- Set placeholder value to prevent crashes
- Added helpful message indicating this is OK for HyperLiquid/Paper trading

Now works with:
✅ HyperLiquid (doesn't need BirdEye)
✅ Paper Trading Mode (doesn't need BirdEye)
✅ Solana (requires BirdEye - will warn if missing)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
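A sketch of the softened check; the placeholder value and message wording are assumptions:

```python
# Sketch of the softer BIRDEYE_API_KEY check; placeholder value and wording are assumptions.
import os

BIRDEYE_API_KEY = os.getenv("BIRDEYE_API_KEY")
if not BIRDEYE_API_KEY:
    print("⚠️ BIRDEYE_API_KEY not set - OK for HyperLiquid/paper trading, required for Solana")
    BIRDEYE_API_KEY = "missing"  # placeholder so downstream code doesn't crash on None
```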
Changes:
- Increased MODEL_TIMEOUT from 90s to 180s (3 minutes)
- Reduced swarm from 4 models to 2 models for speed
- Active models: deepseek-r1:7b (reasoning) + llama3.2:1b (fast baseline)
- Disabled: qwen2.5:7b and llama3.2:latest (can re-enable for 4-model consensus)

Why:
- Laptop hardware was hitting the 90s timeout exactly
- 7B models need more time to generate responses
- 2-model consensus is faster (6 min vs 12+ min) while still effective
- Models were responding successfully but timing out at 90.07-90.09s

Benefits:
- ✅ No more timeouts
- ✅ Faster results (~6 minutes per analysis)
- ✅ Still gets AI consensus (2 different perspectives)
- ✅ Can re-enable all 4 models if desired (just uncomment in swarm_agent.py)

Users with faster hardware can uncomment qwen/llama-large for 4-model consensus.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fix: Ollama client was using a 90s timeout while the swarm expected 180s, causing models to time out at exactly 90.07-90.09s.

Changes:
- Increased requests.post timeout from 90s to 180s
- Now matches swarm_agent.py MODEL_TIMEOUT setting
- Added comment explaining it's for 7B models on slower hardware

This was the final piece - the swarm timeout was set but the HTTP client timeout wasn't updated.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
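In sketch form, the fix keeps the two timeout settings in agreement; MODEL_TIMEOUT is named in the commit, while the request URL and payload shape here are assumptions:

```python
# Keeping both timeouts in agreement, per the commit message.
# MODEL_TIMEOUT is the swarm_agent.py setting; the request below is illustrative.
import requests

MODEL_TIMEOUT = 180  # seconds allowed per model response (was 90)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "deepseek-r1:7b", "prompt": "...", "stream": False},
    timeout=MODEL_TIMEOUT,  # 7B models on slower hardware need up to 180s
)
```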
Fix: Agent was trying to check real exchange positions in paper trading mode, causing a crash and breaking the continuous loop.

Changes:
- Skip position checking when PAPER_TRADING_MODE = True
- Allows agent to loop every 15 minutes without crashing
- Paper trading now runs continuously as intended

Now agent will:
✅ Run analysis
✅ Show paper trading results
✅ Sleep 15 minutes
✅ Repeat forever (until Ctrl+C)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
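A hypothetical guard mirroring this fix; the function and its messages are illustrative, not the repository's code:

```python
# Hypothetical guard mirroring the fix: skip real exchange position checks in paper mode.
def check_positions(paper_trading_mode: bool) -> None:
    if paper_trading_mode:
        print("📝 Paper mode: skipping real exchange position check")
        return
    # Live mode: query the exchange here (call omitted in this sketch).
    print("🔍 Checking open positions on the exchange...")

check_positions(paper_trading_mode=True)
```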
Problem: Agent was too conservative, missing actual trading opportunities that the user was successfully taking manually.

Changes to SWARM_TRADING_PROMPT:
- Changed from "prioritize risk management" to "aggressive trader"
- Added specific technical criteria for BUY/SELL signals:
  * BUY: RSI > 50, MACD positive, price above MAs, volume increasing
  * SELL: RSI < 50, MACD negative, price below MAs, weak volume
- Emphasized 15-minute timeframe = short-term momentum
- Made "Do Nothing" the exception, not the default
- Told AI to be AGGRESSIVE and look for opportunities

Why:
- User manually took 2 profitable trades on ZEC/BTC
- Agent recommended 100% NOTHING for same assets
- Previous prompt was too risk-averse
- Needed specific technical criteria, not vague "strong signals"

Result: Agent should now detect actual trading setups instead of being overly conservative.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
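A sketch of how the revised SWARM_TRADING_PROMPT could read as a constant; the exact wording in the repository will differ, but the criteria follow the commit message:

```python
# Sketch of the revised prompt as a constant; exact wording in the repo will differ,
# but the BUY/SELL criteria follow the commit message.
SWARM_TRADING_PROMPT = """You are an AGGRESSIVE short-term momentum trader on a 15-minute timeframe.
Look for opportunities; "Do Nothing" is the exception, not the default.

Signal criteria:
- BUY: RSI > 50, MACD positive, price above moving averages, volume increasing
- SELL: RSI < 50, MACD negative, price below moving averages, weak volume

Given the market data below, answer with exactly one word: BUY, SELL, or NOTHING.
"""
```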