diff --git a/README.md b/README.md
index 900575b..dd5a792 100644
--- a/README.md
+++ b/README.md
@@ -68,6 +68,8 @@ Get up and running in under 2 minutes:
 > **Prerequisites:** You must have QuantConnect credentials (User ID and API Token) before running the server. The server will not function without proper authentication. See the [Authentication](#-authentication) section for details on obtaining these credentials.
 
 ### **Install with uvx (Recommended)**
+
+#### Core Installation (API Tools Only)
 ```bash
 # Install and run directly from PyPI - no cloning required!
 uvx quantconnect-mcp
@@ -77,6 +79,17 @@ uv pip install quantconnect-mcp
 pip install quantconnect-mcp
 ```
+
+#### Full Installation (with QuantBook Support)
+```bash
+# Install with QuantBook container functionality
+uv pip install "quantconnect-mcp[quantbook]"
+pip install "quantconnect-mcp[quantbook]"
+
+# Requires Docker and lean-cli to be installed
+docker --version  # Ensure Docker is available
+pip install lean  # Install QuantConnect lean-cli
+```
 
 ### One-Click Claude Desktop Install (Recommended)
@@ -95,6 +108,9 @@ pip install quantconnect-mcp
 export QUANTCONNECT_USER_ID="your_user_id"        # Required
 export QUANTCONNECT_API_TOKEN="your_api_token"    # Required
 export QUANTCONNECT_ORGANIZATION_ID="your_org_id" # Optional
+
+# Optional: Enable QuantBook container functionality (default: false)
+export ENABLE_QUANTBOOK="true"  # Requires Docker + quantconnect-mcp[quantbook]
 ```
 
 ### 3. **Launch the Server**
@@ -102,11 +118,53 @@ export QUANTCONNECT_ORGANIZATION_ID="your_org_id" # Optional
 # STDIO transport (default) - Recommended for MCP clients
 uvx quantconnect-mcp
 
+# With QuantBook functionality enabled
+ENABLE_QUANTBOOK=true uvx quantconnect-mcp
+
 # HTTP transport
 MCP_TRANSPORT=streamable-http MCP_PORT=8000 uvx quantconnect-mcp
+
+# Full configuration example
+ENABLE_QUANTBOOK=true \
+LOG_LEVEL=DEBUG \
+MCP_TRANSPORT=streamable-http \
+MCP_PORT=8000 \
+uvx quantconnect-mcp
 ```
 
-### 4. **Interact with Natural Language**
+### 4. **QuantBook Container Functionality (Optional)**
+
+The server supports optional QuantBook functionality that runs research environments in lean-cli-managed Docker containers. This provides:
+
+- **🐳 lean-cli Integration**: Uses the official QuantConnect lean-cli for container management
+- **📔 Jupyter Notebook Environment**: Code executes in `/LeanCLI/research.ipynb` with a pre-initialized `qb`
+- **🔒 Enhanced Security**: Isolated containers with resource limits
+- **⚡ Scalable Sessions**: Multiple concurrent research sessions with automatic cleanup
+- **📊 Interactive Analysis**: Execute Python code with the full QuantConnect research libraries
+
+#### **Requirements**
+- Docker installed and running
+- lean-cli installed: `pip install lean`
+- Install with QuantBook support: `pip install "quantconnect-mcp[quantbook]"`
+- Set the environment variable: `ENABLE_QUANTBOOK=true`
+
+#### **Key Features**
+- QuantBook (`qb`) is pre-initialized in Jupyter notebooks (see the sketch below)
+- Research notebooks are located at `/LeanCLI/research.ipynb`
+- New notebooks must use the `Foundation-Py-Default` kernel for `qb` access
+- Automatic notebook modification for code execution
+- Compatible with QuantConnect's standard research environment
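To give a feel for what runs inside the container, here is a minimal research sketch. It assumes QuantConnect's documented research API (`AddEquity`, `History`, and names like `Resolution` are preloaded by the research kernel); `qb` itself must never be created by hand:

```python
# Runs inside /LeanCLI/research.ipynb - `qb` is already initialized by the environment
spy = qb.AddEquity("SPY", Resolution.Daily)             # subscribe to an equity
history = qb.History(spy.Symbol, 30, Resolution.Daily)  # 30 days of daily bars
print(history.tail())
```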
+### 5. **QuantBook Usage Notes**
+
+When using QuantBook functionality, keep these key points in mind:
+
+#### **📔 Notebook-Based Execution**
+- All QuantBook code executes by modifying `/LeanCLI/research.ipynb`
+- `qb` (QuantBook instance) is **pre-initialized** and ready to use
+- Do not import or instantiate QuantBook; use the pre-initialized `qb` directly. It exists only inside the research environment.
+
+### 6. **Interact with Natural Language**
 
 Instead of calling tools programmatically, you use natural language with a connected AI client (like Claude, a GPT, or any other MCP-compatible interface).
 
@@ -233,24 +291,26 @@ This MCP server is designed to be used with natural language. Below are examples
 | `update_file_content` | Update file content | `project_id`, `name`, `content` |
 | `update_file_name` | Rename file in project | `project_id`, `old_file_name`, `new_name` |
 
-### ◆ QuantBook Research Tools
+### ◆ QuantBook Research Tools (Optional - Requires ENABLE_QUANTBOOK=true)
 
 | Tool | Description | Key Parameters |
 |------|-------------|----------------|
-| `initialize_quantbook` | Create new research instance | `instance_name`, `organization_id`, `token` |
-| `list_quantbook_instances` | View all active instances | - |
-| `get_quantbook_info` | Get instance details | `instance_name` |
-| `remove_quantbook_instance` | Clean up instance | `instance_name` |
+| `initialize_quantbook` | Create new containerized research instance | `instance_name`, `memory_limit`, `cpu_limit`, `timeout` |
+| `list_quantbook_instances` | View all active container instances | - |
+| `get_quantbook_info` | Get container instance details | `instance_name` |
+| `remove_quantbook_instance` | Clean up container instance | `instance_name` |
+| `execute_quantbook_code` | Execute Python code via notebook modification | `code`, `instance_name`, `timeout` |
+| `get_session_manager_status` | Get container session manager status | - |
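As an illustration, an MCP client invoking `execute_quantbook_code` passes arguments matching the parameters above. The values here are hypothetical, and the exact request envelope depends on the MCP client:

```json
{
  "name": "execute_quantbook_code",
  "arguments": {
    "code": "print(qb.History(qb.AddEquity(\"SPY\").Symbol, 30, Resolution.Daily).tail())",
    "instance_name": "qb_main",
    "timeout": 300
  }
}
```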
-### ◆ Data Retrieval Tools
+### ◆ Data Retrieval Tools (Optional - Requires ENABLE_QUANTBOOK=true)
 
 | Tool | Description | Key Parameters |
 |------|-------------|----------------|
-| `add_equity` | Add single equity security | `ticker`, `resolution`, `instance_name` |
-| `add_multiple_equities` | Add multiple securities | `tickers`, `resolution`, `instance_name` |
-| `get_history` | Get historical price data | `symbols`, `start_date`, `end_date`, `resolution` |
-| `add_alternative_data` | Subscribe to alt data | `data_type`, `symbol`, `instance_name` |
-| `get_alternative_data_history` | Get alt data history | `data_type`, `symbols`, `start_date`, `end_date` |
+| `add_equity` | Add single equity security via notebook | `ticker`, `resolution`, `instance_name` |
+| `add_multiple_equities` | Add multiple securities via notebook | `tickers`, `resolution`, `instance_name` |
+| `get_history` | Get historical price data via notebook | `symbols`, `start_date`, `end_date`, `resolution` |
+| `add_alternative_data` | Subscribe to alt data via notebook | `data_type`, `symbol`, `instance_name` |
+| `get_alternative_data_history` | Get alt data history via notebook | `data_type`, `symbols`, `start_date`, `end_date` |
 
 ### ◆ Statistical Analysis Tools
@@ -332,6 +392,7 @@ quantconnect-mcp/
 
 ### Environment Variables
 
+#### Core Server Configuration
 | Variable | Description | Default | Example |
 |----------|-------------|---------|---------|
 | `MCP_TRANSPORT` | Transport method | `stdio` | `streamable-http` |
 | `MCP_PORT` | Server port | `8000` | `3000` |
 | `MCP_PATH` | HTTP endpoint path | `/mcp` | `/api/v1/mcp` |
 | `LOG_LEVEL` | Logging verbosity | `INFO` | `DEBUG` |
+| `LOG_FILE` | Log file path | None | `/var/log/quantconnect-mcp.log` |
+
+#### QuantConnect Authentication
+| Variable | Description | Required | Example |
+|----------|-------------|----------|---------|
+| `QUANTCONNECT_USER_ID` | Your QuantConnect user ID | ◉ Yes | `123456` |
+| `QUANTCONNECT_API_TOKEN` | Your QuantConnect API token | ◉ Yes | `abc123...` |
+| `QUANTCONNECT_ORGANIZATION_ID` | Organization ID | ◦ No | `org123` |
+
+#### QuantBook Container Configuration (Optional)
+| Variable | Description | Default | Example |
+|----------|-------------|---------|---------|
+| `ENABLE_QUANTBOOK` | Enable QuantBook functionality | `false` | `true` |
+| `QUANTBOOK_MEMORY_LIMIT` | Container memory limit | `2g` | `4g` |
+| `QUANTBOOK_CPU_LIMIT` | Container CPU limit | `1.0` | `2.0` |
+| `QUANTBOOK_SESSION_TIMEOUT` | Session timeout (seconds) | `3600` | `7200` |
+| `QUANTBOOK_MAX_SESSIONS` | Maximum concurrent sessions | `10` | `20` |
 
 ### System Resources
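Taken together, the new configuration tables translate into exports like the following (values are placeholders):

```bash
# Authentication (required)
export QUANTCONNECT_USER_ID="123456"
export QUANTCONNECT_API_TOKEN="abc123..."

# QuantBook container tuning (optional; defaults are listed in the table above)
export ENABLE_QUANTBOOK="true"
export QUANTBOOK_MEMORY_LIMIT="4g"
export QUANTBOOK_CPU_LIMIT="2.0"
export QUANTBOOK_SESSION_TIMEOUT="7200"
export QUANTBOOK_MAX_SESSIONS="20"
```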
diff --git a/pyproject.toml b/pyproject.toml
index 2fcf784..68e5754 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -22,7 +22,11 @@ dependencies = [
     "seaborn>=0.13.2",
     "statsmodels>=0.14.4",
     "quantconnect-lean",
-    "quantconnect>=0.1.0",
 ]
+
+[project.optional-dependencies]
+quantbook = [
+    "docker>=7.1.0",
+]
 
 [project.scripts]
diff --git a/quantconnect-mcp.dxt b/quantconnect-mcp.dxt
index 365efaa..0b6ccc8 100644
Binary files a/quantconnect-mcp.dxt and b/quantconnect-mcp.dxt differ
diff --git a/quantconnect_mcp/main.py b/quantconnect_mcp/main.py
index 53a675d..9d3befd 100644
--- a/quantconnect_mcp/main.py
+++ b/quantconnect_mcp/main.py
@@ -3,7 +3,11 @@
 
 import os
 import sys
+import asyncio
+import signal
+import logging
 from pathlib import Path
+from typing import Optional
 
 # Ensure package root is in Python path for consistent imports
 package_root = Path(__file__).parent.parent
@@ -12,8 +16,6 @@
 
 from quantconnect_mcp.src.server import mcp
 from quantconnect_mcp.src.tools import (
-    register_quantbook_tools,
-    register_data_tools,
     register_analysis_tools,
     register_portfolio_tools,
     register_universe_tools,
@@ -26,63 +28,233 @@ from quantconnect_mcp.src.auth import configure_auth
 from quantconnect_mcp.src.utils import safe_print
 
+# Conditional imports for QuantBook functionality
+def check_quantbook_support() -> bool:
+    """Check if QuantBook dependencies are available."""
+    try:
+        import docker
+        return True
+    except ImportError:
+        return False
+
+def import_quantbook_modules():
+    """Conditionally import QuantBook modules."""
+    try:
+        from quantconnect_mcp.src.tools.quantbook_tools import register_quantbook_tools
+        from quantconnect_mcp.src.tools.data_tools import register_data_tools
+        return register_quantbook_tools, register_data_tools
+    except ImportError as e:
+        safe_print(f"⚠️ QuantBook dependencies not available: {e}")
+        return None, None
+
+def import_logging_setup():
+    """Conditionally import logging setup."""
+    try:
+        from quantconnect_mcp.src.adapters.logging_config import setup_logging
+        return setup_logging
+    except ImportError:
+        return None
+
+
+# Global shutdown flag
+_shutdown_requested = False
+_session_manager: Optional[object] = None
+
+
+def setup_signal_handlers() -> None:
+    """Setup signal handlers for graceful shutdown."""
+    global _shutdown_requested
+
+    def signal_handler(signum: int, frame) -> None:
+        global _shutdown_requested
+        if not _shutdown_requested:
+            _shutdown_requested = True
+            safe_print(f"\n🔄 Received signal {signum}, initiating graceful shutdown...")
+
+            # Shutdown session manager if available
+            # (best-effort: create_task requires a running event loop in this thread)
+            if _session_manager and hasattr(_session_manager, 'stop'):
+                try:
+                    asyncio.create_task(_session_manager.stop())
+                except Exception as e:
+                    safe_print(f"⚠️ Error during session manager shutdown: {e}")
+
+    # Register signal handlers
+    try:
+        signal.signal(signal.SIGINT, signal_handler)   # Ctrl+C
+        signal.signal(signal.SIGTERM, signal_handler)  # Termination
+        if hasattr(signal, 'SIGHUP'):
+            signal.signal(signal.SIGHUP, signal_handler)  # Hangup
+    except Exception as e:
+        safe_print(f"⚠️ Could not setup signal handlers: {e}")
+
+
+async def shutdown_cleanup() -> None:
+    """Perform cleanup during shutdown."""
+    global _session_manager
+
+    try:
+        if _session_manager and hasattr(_session_manager, 'stop'):
+            safe_print("🧹 Cleaning up session manager...")
+            await _session_manager.stop()
+            safe_print("✅ Session manager cleaned up")
+    except Exception as e:
+        safe_print(f"⚠️ Error during cleanup: {e}")
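One caveat on the handler above: `asyncio.create_task` only succeeds if an event loop happens to be running in the handler's thread. A more robust variant hands the coroutine to a captured loop reference. A sketch under that assumption (the `_loop` capture and `request_shutdown` helper are illustrative, not part of this patch):

```python
import asyncio
from typing import Optional

_loop: Optional[asyncio.AbstractEventLoop] = None  # captured when the server loop starts

def request_shutdown(session_manager) -> None:
    """Schedule async cleanup from a synchronous signal handler."""
    if _loop is not None and _loop.is_running():
        # Thread-safe handoff of the coroutine into the running loop
        asyncio.run_coroutine_threadsafe(session_manager.stop(), _loop)
    else:
        # No loop running yet: perform cleanup synchronously
        asyncio.run(session_manager.stop())
```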
 
 def main():
     """Initialize and run the QuantConnect MCP server."""
+    global _session_manager
+
+    try:
+        # Setup signal handlers for graceful shutdown
+        setup_signal_handlers()
+
+        # Check QuantBook support and environment configuration
+        enable_quantbook = os.getenv("ENABLE_QUANTBOOK", "false").lower() in ("true", "1", "yes", "on")
+        quantbook_available = check_quantbook_support()
+
+        # Setup logging (basic logging if QuantBook not available)
+        log_level = os.getenv("LOG_LEVEL", "INFO")
+        log_file = os.getenv("LOG_FILE")
+
+        # Setup advanced logging if available
+        setup_logging = import_logging_setup()
+        if setup_logging:
+            setup_logging(
+                log_level=log_level,
+                log_file=Path(log_file) if log_file else None,
+                include_container_logs=True,
+            )
+            safe_print(f"🔧 Advanced logging configured (level: {log_level})")
+        else:
+            # Basic logging setup
+            logging.basicConfig(
+                level=getattr(logging, log_level.upper()),
+                format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
+            )
+            safe_print(f"🔧 Basic logging configured (level: {log_level})")
+
+        if quantbook_available:
+            register_quantbook_tools, register_data_tools = import_quantbook_modules()
+        else:
+            register_quantbook_tools = None
+            register_data_tools = None
+
+        # Auto-configure authentication from environment variables if available
+        user_id = os.getenv("QUANTCONNECT_USER_ID")
+        api_token = os.getenv("QUANTCONNECT_API_TOKEN")
+        organization_id = os.getenv("QUANTCONNECT_ORGANIZATION_ID")
+
+        if user_id and api_token:
+            try:
+                safe_print("🔐 Configuring QuantConnect authentication from environment...")
+                configure_auth(user_id, api_token, organization_id)
+                safe_print("✅ Authentication configured successfully")
+            except Exception as e:
+                safe_print(f"⚠️ Failed to configure authentication: {e}")
+                safe_print(
+                    "💡 You can configure authentication later using the configure_quantconnect_auth tool"
+                )
+
+        # Register core tool modules (always available)
+        safe_print("🔧 Registering QuantConnect API tools...")
+        register_auth_tools(mcp)
+        register_project_tools(mcp)
+        register_file_tools(mcp)
+        register_backtest_tools(mcp)
+        register_analysis_tools(mcp)
+        register_portfolio_tools(mcp)
+        register_universe_tools(mcp)
 
-    # Auto-configure authentication from environment variables if available
-    user_id = os.getenv("QUANTCONNECT_USER_ID")
-    api_token = os.getenv("QUANTCONNECT_API_TOKEN")
-    organization_id = os.getenv("QUANTCONNECT_ORGANIZATION_ID")
+        # Conditionally register QuantBook tools
+        if enable_quantbook:
+            if quantbook_available and register_quantbook_tools and register_data_tools:
+                safe_print("🐳 Registering QuantBook container tools...")
+                register_quantbook_tools(mcp)
+                register_data_tools(mcp)
+                safe_print("✅ QuantBook functionality enabled")
+
+                # Get session manager reference for cleanup
+                try:
+                    from quantconnect_mcp.src.adapters.session_manager import get_session_manager
+                    _session_manager = get_session_manager()
+                except ImportError:
+                    pass
+            else:
+                safe_print("❌ QuantBook functionality requested but dependencies not available")
+                safe_print("💡 Install with: pip install quantconnect-mcp[quantbook]")
+                safe_print("🐳 Ensure Docker is installed and accessible")
+        else:
+            safe_print("⏭️ QuantBook functionality disabled (set ENABLE_QUANTBOOK=true to enable)")
 
-    if user_id and api_token:
+        # Register resources
+        safe_print("📊 Registering system resources...")
+        register_system_resources(mcp)
+
+        safe_print("✅ QuantConnect MCP Server initialized")
+
+        # Determine transport method
+        transport = os.getenv("MCP_TRANSPORT", "stdio")
+
+        # Run server with proper error handling
         try:
-            safe_print("🔐 Configuring QuantConnect authentication from environment...")
-            configure_auth(user_id, api_token, organization_id)
-            safe_print("✅ Authentication configured successfully")
+            if transport == "streamable-http":
+                host = os.getenv("MCP_HOST", "0.0.0.0")
+                port = int(os.getenv("MCP_PORT", os.getenv("PORT", "8000")))
+                safe_print(f"🌐 Starting HTTP server on {host}:{port}")
+                mcp.run(
+                    transport="streamable-http",
+                    host=host,
+                    port=port,
+                    path=os.getenv("MCP_PATH", "/mcp"),
+                )
+            elif transport == "stdio":
+                safe_print("📡 Starting STDIO transport")
+                mcp.run()  # Default stdio transport
+            else:
+                safe_print(f"🚀 Starting with {transport} transport")
+                mcp.run(transport=transport)
+
+        except (BrokenPipeError, ConnectionResetError, EOFError) as e:
+            # Client disconnected - this is normal, not an error
+            safe_print("🔌 Client disconnected")
+            logging.getLogger(__name__).debug(f"Client disconnect: {e}")
+
+        except KeyboardInterrupt:
+            safe_print("\n⏹️ Keyboard interrupt received")
+
+        except OSError as e:
+            if e.errno == 32:  # Broken pipe
+                safe_print("🔌 Client disconnected (broken pipe)")
+                logging.getLogger(__name__).debug(f"Broken pipe: {e}")
+            else:
+                safe_print(f"❌ OS Error: {e}")
+                raise
+
"stdio": - safe_print("๐Ÿ“ก Starting STDIO transport") - mcp.run() # Default stdio transport - else: - safe_print(f"๐Ÿš€ Starting with {transport} transport") - mcp.run(transport=transport) + safe_print(f"โŒ Unexpected error: {e}") + logging.getLogger(__name__).error(f"Server error: {e}", exc_info=True) + raise + + finally: + # Cleanup + if _session_manager: + try: + import asyncio + asyncio.run(shutdown_cleanup()) + except Exception as e: + safe_print(f"โš ๏ธ Error during final cleanup: {e}") + + except KeyboardInterrupt: + safe_print("\nโน๏ธ Startup interrupted") + sys.exit(1) + + except Exception as e: + safe_print(f"โŒ Failed to start server: {e}") + logging.getLogger(__name__).error(f"Startup error: {e}", exc_info=True) + sys.exit(1) + + safe_print("๐Ÿ‘‹ QuantConnect MCP Server shutdown complete") if __name__ == "__main__": diff --git a/quantconnect_mcp/src/adapters/__init__.py b/quantconnect_mcp/src/adapters/__init__.py new file mode 100644 index 0000000..88534ad --- /dev/null +++ b/quantconnect_mcp/src/adapters/__init__.py @@ -0,0 +1,7 @@ +"""Adapter modules for external integrations.""" + +from .research_session_lean_cli import ResearchSession +from .session_manager import SessionManager, get_session_manager, initialize_session_manager +from .logging_config import setup_logging, security_logger + +__all__ = ["ResearchSession", "SessionManager", "get_session_manager", "initialize_session_manager", "setup_logging", "security_logger"] \ No newline at end of file diff --git a/quantconnect_mcp/src/adapters/jupyter_kernel_client.py b/quantconnect_mcp/src/adapters/jupyter_kernel_client.py new file mode 100644 index 0000000..98e47d1 --- /dev/null +++ b/quantconnect_mcp/src/adapters/jupyter_kernel_client.py @@ -0,0 +1,118 @@ +"""Jupyter Kernel Client for executing code in research containers.""" + +import asyncio +import json +import logging +import uuid +from typing import Any, Dict, Optional + +import httpx + +logger = logging.getLogger(__name__) + + +class JupyterKernelClient: + """Client for interacting with Jupyter kernels via REST API.""" + + def __init__(self, base_url: str): + """ + Initialize the client. 
diff --git a/quantconnect_mcp/src/adapters/jupyter_kernel_client.py b/quantconnect_mcp/src/adapters/jupyter_kernel_client.py
new file mode 100644
index 0000000..98e47d1
--- /dev/null
+++ b/quantconnect_mcp/src/adapters/jupyter_kernel_client.py
@@ -0,0 +1,118 @@
+"""Jupyter Kernel Client for executing code in research containers."""
+
+import asyncio
+import json
+import logging
+import uuid
+from typing import Any, Dict, Optional
+
+import httpx
+
+logger = logging.getLogger(__name__)
+
+
+class JupyterKernelClient:
+    """Client for interacting with Jupyter kernels via REST API."""
+
+    def __init__(self, base_url: str):
+        """
+        Initialize the client.
+
+        Args:
+            base_url: Base URL of Jupyter server (e.g., http://localhost:8888)
+        """
+        self.base_url = base_url.rstrip('/')
+        self.client = httpx.AsyncClient(timeout=30.0)
+        self.kernel_id: Optional[str] = None
+
+    async def list_kernels(self) -> list:
+        """List all running kernels."""
+        try:
+            response = await self.client.get(f"{self.base_url}/api/kernels")
+            response.raise_for_status()
+            return response.json()
+        except Exception as e:
+            logger.error(f"Failed to list kernels: {e}")
+            return []
+
+    async def create_kernel(self) -> Optional[str]:
+        """Create a new kernel and return its ID."""
+        try:
+            response = await self.client.post(
+                f"{self.base_url}/api/kernels",
+                json={"name": "python3"}
+            )
+            response.raise_for_status()
+            kernel_info = response.json()
+            self.kernel_id = kernel_info["id"]
+            logger.info(f"Created kernel: {self.kernel_id}")
+            return self.kernel_id
+        except Exception as e:
+            logger.error(f"Failed to create kernel: {e}")
+            return None
+
+    async def get_or_create_kernel(self) -> Optional[str]:
+        """Get existing kernel or create a new one."""
+        # First check if we have a kernel
+        if self.kernel_id:
+            # Verify it's still running
+            kernels = await self.list_kernels()
+            if any(k["id"] == self.kernel_id for k in kernels):
+                return self.kernel_id
+
+        # Check for existing kernels
+        kernels = await self.list_kernels()
+        if kernels:
+            # Use the first available kernel
+            self.kernel_id = kernels[0]["id"]
+            logger.info(f"Using existing kernel: {self.kernel_id}")
+            return self.kernel_id
+
+        # Create new kernel
+        return await self.create_kernel()
+
+    async def execute_code(self, code: str) -> Dict[str, Any]:
+        """
+        Execute code in the kernel.
+
+        Args:
+            code: Python code to execute
+
+        Returns:
+            Dictionary with execution results
+        """
+        kernel_id = await self.get_or_create_kernel()
+        if not kernel_id:
+            return {
+                "status": "error",
+                "error": "Failed to get or create kernel",
+                "output": ""
+            }
+
+        # Create execution request
+        msg_id = str(uuid.uuid4())
+
+        # Connect to WebSocket for kernel communication
+        ws_url = f"{self.base_url.replace('http', 'ws')}/api/kernels/{kernel_id}/channels"
+
+        try:
+            # For now, use a simpler approach - execute via container
+            # This is a placeholder for full WebSocket implementation
+            logger.warning("WebSocket execution not yet implemented, falling back to container exec")
+            return {
+                "status": "error",
+                "error": "Jupyter kernel execution not yet implemented",
+                "output": ""
+            }
+
+        except Exception as e:
+            logger.error(f"Failed to execute code: {e}")
+            return {
+                "status": "error",
+                "error": str(e),
+                "output": ""
+            }
+
+    async def close(self):
+        """Close the client."""
+        await self.client.aclose()
\ No newline at end of file
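A minimal usage sketch of this client (it assumes a Jupyter server is reachable at the given URL, and note that `execute_code` is still a placeholder in this revision):

```python
import asyncio
from quantconnect_mcp.src.adapters.jupyter_kernel_client import JupyterKernelClient

async def demo() -> None:
    client = JupyterKernelClient("http://localhost:8888")
    try:
        kernels = await client.list_kernels()        # [] if the server is unreachable
        print(f"Running kernels: {kernels}")
        kernel_id = await client.get_or_create_kernel()
        print(f"Using kernel: {kernel_id}")
    finally:
        await client.close()

asyncio.run(demo())
```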
diff --git a/quantconnect_mcp/src/adapters/logging_config.py b/quantconnect_mcp/src/adapters/logging_config.py
new file mode 100644
index 0000000..5a8d042
--- /dev/null
+++ b/quantconnect_mcp/src/adapters/logging_config.py
@@ -0,0 +1,118 @@
+"""Logging configuration for QuantConnect MCP Server"""
+
+import logging
+import sys
+from pathlib import Path
+from typing import Optional
+
+
+def setup_logging(
+    log_level: str = "INFO",
+    log_file: Optional[Path] = None,
+    include_container_logs: bool = True,
+) -> None:
+    """
+    Setup logging configuration for the MCP server.
+
+    Args:
+        log_level: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
+        log_file: Optional log file path
+        include_container_logs: Whether to include container-specific logging
+    """
+
+    # Create formatters
+    detailed_formatter = logging.Formatter(
+        fmt='%(asctime)s - %(name)s - %(levelname)s - [%(filename)s:%(lineno)d] - %(message)s',
+        datefmt='%Y-%m-%d %H:%M:%S'
+    )
+
+    simple_formatter = logging.Formatter(
+        fmt='%(asctime)s - %(levelname)s - %(message)s',
+        datefmt='%H:%M:%S'
+    )
+
+    # Setup root logger
+    root_logger = logging.getLogger()
+    root_logger.setLevel(getattr(logging, log_level.upper()))
+
+    # Clear existing handlers
+    root_logger.handlers.clear()
+
+    # Console handler - MUST use stderr to avoid contaminating MCP JSON-RPC on stdout
+    console_handler = logging.StreamHandler(sys.stderr)
+    console_handler.setLevel(logging.INFO)
+    console_handler.setFormatter(simple_formatter)
+    root_logger.addHandler(console_handler)
+
+    # File handler if specified
+    if log_file:
+        log_file.parent.mkdir(parents=True, exist_ok=True)
+        file_handler = logging.FileHandler(log_file)
+        file_handler.setLevel(logging.DEBUG)
+        file_handler.setFormatter(detailed_formatter)
+        root_logger.addHandler(file_handler)
+
+    # Setup specific loggers with appropriate levels
+    loggers = {
+        'quantconnect_mcp': logging.DEBUG,
+        'quantconnect_mcp.adapters': logging.DEBUG,
+        'quantconnect_mcp.adapters.research_session': logging.INFO,
+        'quantconnect_mcp.adapters.session_manager': logging.INFO,
+        'quantconnect_mcp.tools': logging.INFO,
+        'docker': logging.WARNING,   # Reduce noise from Docker client
+        'urllib3': logging.WARNING,  # Reduce noise from HTTP requests
+    }
+
+    for logger_name, level in loggers.items():
+        logger = logging.getLogger(logger_name)
+        logger.setLevel(level)
+
+    # Log startup message
+    root_logger.info(f"Logging initialized - Level: {log_level}, File: {log_file}")
+
+
+def get_container_logger(session_id: str) -> logging.Logger:
+    """Get a logger specific to a container session."""
+    return logging.getLogger(f"quantconnect_mcp.container.{session_id}")
+
+
+class SecurityLogger:
+    """Logger for security-related events."""
+
+    def __init__(self):
+        self.logger = logging.getLogger("quantconnect_mcp.security")
+
+    def log_session_created(self, session_id: str, container_id: str) -> None:
+        """Log session creation."""
+        self.logger.info(
+            f"SECURITY: Session created - ID: {session_id}, Container: {container_id}"
+        )
+
+    def log_session_destroyed(self, session_id: str, reason: str = "normal") -> None:
+        """Log session destruction."""
+        self.logger.info(
+            f"SECURITY: Session destroyed - ID: {session_id}, Reason: {reason}"
+        )
+
+    def log_code_execution(self, session_id: str, code_hash: str, success: bool) -> None:
+        """Log code execution attempts."""
+        status = "SUCCESS" if success else "FAILED"
+        self.logger.info(
+            f"SECURITY: Code execution {status} - Session: {session_id}, Hash: {code_hash}"
+        )
+
+    def log_security_violation(self, session_id: str, violation_type: str, details: str) -> None:
+        """Log security violations."""
+        self.logger.warning(
+            f"SECURITY VIOLATION: {violation_type} - Session: {session_id}, Details: {details}"
+        )
+
+    def log_resource_limit_hit(self, session_id: str, resource: str, limit: str) -> None:
+        """Log when resource limits are hit."""
+        self.logger.warning(
+            f"SECURITY: Resource limit hit - Session: {session_id}, Resource: {resource}, Limit: {limit}"
+        )
+
+
+# Global security logger instance
+security_logger = SecurityLogger()
\ No newline at end of file
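Typical usage from server startup code, mirroring what `main.py` does when this module imports successfully:

```python
from pathlib import Path
from quantconnect_mcp.src.adapters.logging_config import setup_logging, security_logger

# Console logs go to stderr; DEBUG detail is captured in the file handler
setup_logging(log_level="DEBUG", log_file=Path("/tmp/qc-mcp.log"))

# Security events are funneled through the shared SecurityLogger instance
security_logger.log_code_execution("qb_abc123", code_hash="deadbeef", success=True)
```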
diff --git a/quantconnect_mcp/src/adapters/research_session_jupyter.py b/quantconnect_mcp/src/adapters/research_session_jupyter.py
new file mode 100644
index 0000000..3a601c8
--- /dev/null
+++ b/quantconnect_mcp/src/adapters/research_session_jupyter.py
@@ -0,0 +1,337 @@
+"""QuantConnect Research Session with Jupyter Kernel Support"""
+
+import asyncio
+import json
+import logging
+import tempfile
+import uuid
+from datetime import datetime, timedelta
+from pathlib import Path
+from typing import Any, Dict, List, Optional, Union
+
+import docker
+import docker.types
+import pandas as pd
+from docker.models.containers import Container
+
+from .logging_config import get_container_logger, security_logger
+
+logger = logging.getLogger(__name__)
+
+
+class JupyterResearchSession:
+    """
+    Enhanced Research Session that attempts to use Jupyter kernel if available.
+    Falls back to direct Python execution if kernel is not ready.
+    """
+
+    IMAGE = "quantconnect/research:latest"
+    CONTAINER_WORKSPACE = "/Lean"
+    NOTEBOOKS_PATH = "/Lean/Notebooks"
+    TIMEOUT_DEFAULT = 300  # 5 minutes
+    KERNEL_WAIT_TIME = 60  # Maximum time to wait for kernel
+
+    def __init__(
+        self,
+        session_id: Optional[str] = None,
+        workspace_dir: Optional[Path] = None,
+        memory_limit: str = "2g",
+        cpu_limit: float = 1.0,
+        timeout: int = TIMEOUT_DEFAULT,
+    ):
+        """Initialize a new research session."""
+        self.session_id = session_id or f"qb_{uuid.uuid4().hex[:8]}"
+        self.memory_limit = memory_limit
+        self.cpu_limit = cpu_limit
+        self.timeout = timeout
+        self.created_at = datetime.utcnow()
+        self.last_used = self.created_at
+        self.kernel_ready = False
+        self.kernel_name = None
+
+        # Setup workspace
+        if workspace_dir:
+            self.workspace_dir = Path(workspace_dir)
+            self.workspace_dir.mkdir(parents=True, exist_ok=True)
+            self._temp_dir = None
+        else:
+            self._temp_dir = tempfile.TemporaryDirectory(prefix=f"qc_research_{self.session_id}_")
+            self.workspace_dir = Path(self._temp_dir.name)
+
+        # Docker client and container
+        self.client = docker.from_env()
+        self.container: Optional[Container] = None
+        self._initialized = False
+
+        logger.info(f"Created Jupyter research session {self.session_id}")
+
+    async def initialize(self) -> None:
+        """Initialize the Docker container and wait for Jupyter kernel."""
+        if self._initialized:
+            return
+
+        try:
+            # Ensure the image is available
+            try:
+                self.client.images.get(self.IMAGE)
+            except docker.errors.ImageNotFound:
+                logger.info(f"Pulling image {self.IMAGE}...")
+                self.client.images.pull(self.IMAGE)
+
+            # Start container with Jupyter environment
+            volumes = {
+                str(self.workspace_dir): {
+                    "bind": self.NOTEBOOKS_PATH,
+                    "mode": "rw"
+                }
+            }
+
+            environment = {
+                "PYTHONPATH": "/Lean:/Lean/Library",
+                "COMPOSER_DLL_DIRECTORY": "/Lean",
+            }
+
+            # Start the container
+            self.container = self.client.containers.run(
+                self.IMAGE,
+                command=["sleep", "infinity"],  # Keep container running
+                volumes=volumes,
+                environment=environment,
+                working_dir=self.NOTEBOOKS_PATH,
+                detach=True,
+                mem_limit=self.memory_limit,
+                cpu_period=100000,
+                cpu_quota=int(100000 * self.cpu_limit),
+                name=f"qc_jupyter_{self.session_id}",
+                remove=True,
+                labels={
+                    "mcp.quantconnect.session_id": self.session_id,
+                    "mcp.quantconnect.created_at": self.created_at.isoformat(),
+                },
+            )
+
+            # Wait for container to be ready
+            await asyncio.sleep(3)
+
+            # Check for Jupyter kernel availability
+            await self._wait_for_jupyter_kernel()
+
+            self._initialized = True
+
+            # Security logging
+            security_logger.log_session_created(self.session_id, self.container.id)
+            logger.info(f"Jupyter research session {self.session_id} initialized (kernel_ready={self.kernel_ready})")
+
+        except Exception as e:
+            logger.error(f"Failed to initialize Jupyter research session {self.session_id}: {e}")
+            await self.close()
+            raise
+
+    async def _wait_for_jupyter_kernel(self) -> bool:
+        """Wait for Jupyter kernel to be ready."""
+        logger.info("Checking for Jupyter kernel availability...")
+
+        start_time = datetime.utcnow()
+        while (datetime.utcnow() - start_time).total_seconds() < self.KERNEL_WAIT_TIME:
+            try:
+                # Check if Jupyter is available
+                jupyter_check = await asyncio.to_thread(
+                    self.container.exec_run,
+                    "which jupyter",
+                    workdir="/"
+                )
+
+                if jupyter_check.exit_code != 0:
+                    logger.info("Jupyter not found in container, using direct Python execution")
+                    return False
+
+                # List available kernels
+                kernel_list = await asyncio.to_thread(
+                    self.container.exec_run,
+                    "jupyter kernelspec list --json",
+                    workdir="/"
+                )
+
+                if kernel_list.exit_code == 0 and kernel_list.output:
+                    try:
+                        kernels = json.loads(kernel_list.output.decode())
+                        available_kernels = kernels.get("kernelspecs", {})
+
+                        # Look for QuantConnect kernel
+                        for kernel_name, kernel_info in available_kernels.items():
+                            if "python" in kernel_name.lower() or "quant" in kernel_name.lower():
+                                self.kernel_name = kernel_name
+                                self.kernel_ready = True
+                                logger.info(f"Found Jupyter kernel: {kernel_name}")
+                                return True
+                    except json.JSONDecodeError:
+                        pass
+
+                await asyncio.sleep(5)
+
+            except Exception as e:
+                logger.warning(f"Error checking for Jupyter kernel: {e}")
+                await asyncio.sleep(5)
+
+        logger.info("Jupyter kernel not ready after timeout, using direct Python execution")
+        return False
+
+    async def execute_with_kernel(self, code: str, timeout: Optional[int] = None) -> Dict[str, Any]:
+        """Execute code using Jupyter kernel."""
+        execution_timeout = timeout or self.timeout
+
+        try:
+            # Create a temporary notebook file
+            notebook_content = {
+                "cells": [{
+                    "cell_type": "code",
+                    "source": code,
+                    "metadata": {}
+                }],
+                "metadata": {
+                    "kernelspec": {
+                        "name": self.kernel_name or "python3",
+                        "display_name": "Python 3"
+                    }
+                },
+                "nbformat": 4,
+                "nbformat_minor": 5
+            }
+
+            notebook_filename = f"temp_{uuid.uuid4().hex[:8]}.ipynb"
+            notebook_path = f"{self.NOTEBOOKS_PATH}/{notebook_filename}"
+
+            # Write notebook to container
+            write_cmd = f"cat > {notebook_path} << 'EOF'\n{json.dumps(notebook_content)}\nEOF"
+            await asyncio.to_thread(
+                self.container.exec_run,
+                ['/bin/sh', '-c', write_cmd]
+            )
+
+            # Execute notebook
+            exec_cmd = f"jupyter nbconvert --to notebook --execute --inplace --ExecutePreprocessor.timeout={execution_timeout} {notebook_filename}"
+
+            exec_result = await asyncio.wait_for(
+                asyncio.to_thread(
+                    self.container.exec_run,
+                    exec_cmd,
+                    workdir=self.NOTEBOOKS_PATH
+                ),
+                timeout=execution_timeout + 10  # Add buffer for nbconvert overhead
+            )
+
+            if exec_result.exit_code == 0:
+                # Read the executed notebook to get output
+                read_cmd = f"cat {notebook_path}"
+                read_result = await asyncio.to_thread(
+                    self.container.exec_run,
+                    read_cmd
+                )
+
+                if read_result.exit_code == 0 and read_result.output:
+                    executed_nb = json.loads(read_result.output.decode())
+
+                    # Extract output from first cell
+                    outputs = []
+                    if executed_nb["cells"] and "outputs" in executed_nb["cells"][0]:
+                        for output in executed_nb["cells"][0]["outputs"]:
+                            if "text" in output:
+                                outputs.append(output["text"])
+                            elif "data" in output and "text/plain" in output["data"]:
+                                outputs.append(output["data"]["text/plain"])
+
+                    # Clean up notebook
+                    await asyncio.to_thread(
+                        self.container.exec_run,
+                        f"rm -f {notebook_path}"
+                    )
+
+                    return {
+                        "status": "success",
+                        "output": "\n".join(outputs),
+                        "error": None,
+                        "session_id": self.session_id,
+                        "kernel_used": True
+                    }
+
+            # If execution failed, return error
+            error_output = exec_result.output.decode() if exec_result.output else "Unknown error"
+            return {
+                "status": "error",
+                "output": "",
+                "error": f"Kernel execution failed: {error_output}",
+                "session_id": self.session_id,
+                "kernel_used": True
+            }
+
+        except asyncio.TimeoutError:
+            return {
+                "status": "error",
+                "output": "",
+                "error": f"Kernel execution timed out after {execution_timeout} seconds",
+                "session_id": self.session_id,
+                "timeout": True,
+                "kernel_used": True
+            }
+        except Exception as e:
+            logger.error(f"Kernel execution error: {e}")
+            return {
+                "status": "error",
+                "output": "",
+                "error": f"Kernel execution error: {str(e)}",
+                "session_id": self.session_id,
+                "kernel_used": True
+            }
+
+    async def execute(self, code: str, timeout: Optional[int] = None) -> Dict[str, Any]:
+        """Execute Python code in the research container."""
+        if not self._initialized:
+            await self.initialize()
+
+        if not self.container:
+            raise ValueError("Container not available")
+
+        self.last_used = datetime.utcnow()
+
+        # Try kernel execution if available
+        if self.kernel_ready:
+            logger.info("Attempting kernel execution...")
+            result = await self.execute_with_kernel(code, timeout)
+            if result["status"] == "success" or "timeout" not in result:
+                return result
+            logger.warning("Kernel execution failed, falling back to direct execution")
+
+        # Fall back to direct Python execution (from original implementation)
+        # This would use the same approach as the original research_session.py
+        logger.info("Using direct Python execution...")
+
+        # Import the original execute logic here or create a base class
+        # For now, return a placeholder
+        return {
+            "status": "error",
+            "output": "",
+            "error": "Direct execution not implemented in this demo",
+            "session_id": self.session_id,
+            "kernel_used": False
+        }
+
+    async def close(self, reason: str = "normal") -> None:
+        """Clean up the research session."""
+        logger.info(f"Closing Jupyter research session {self.session_id} (reason: {reason})")
+
+        try:
+            if self.container:
+                self.container.stop(timeout=10)
+                self.container = None
+
+            if self._temp_dir:
+                self._temp_dir.cleanup()
+                self._temp_dir = None
+
+            security_logger.log_session_destroyed(self.session_id, reason)
+
+        except Exception as e:
+            logger.error(f"Error during session cleanup: {e}")
+
+        finally:
+            self._initialized = False
\ No newline at end of file
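A sketch of how this session type is meant to be driven (it assumes Docker is running locally and can pull `quantconnect/research:latest`; in this revision the direct-execution fallback still returns a placeholder error):

```python
import asyncio
from quantconnect_mcp.src.adapters.research_session_jupyter import JupyterResearchSession

async def demo() -> None:
    session = JupyterResearchSession(memory_limit="2g", cpu_limit=1.0)
    try:
        await session.initialize()                      # pulls image, starts container
        result = await session.execute("print(1 + 1)")  # kernel exec if ready
        print(result["status"], result["output"])
    finally:
        await session.close()

asyncio.run(demo())
```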
diff --git a/quantconnect_mcp/src/adapters/research_session_lean_cli.py b/quantconnect_mcp/src/adapters/research_session_lean_cli.py
new file mode 100644
index 0000000..0b80393
--- /dev/null
+++ b/quantconnect_mcp/src/adapters/research_session_lean_cli.py
@@ -0,0 +1,920 @@
+"""QuantConnect Research Session using lean-cli."""
+
+import asyncio
+import json
+import logging
+import os
+import re
+import subprocess
+import tempfile
+import uuid
+from datetime import datetime, timedelta
+from pathlib import Path
+from typing import Any, Dict, Optional
+
+import docker
+from docker.models.containers import Container
+
+from .logging_config import get_container_logger, security_logger
+
+logger = logging.getLogger(__name__)
+
+
+class ResearchSessionError(Exception):
+    """Custom exception for research session errors."""
+    pass
+
+
+class ResearchSession:
+    """
+    Research session that uses lean-cli to manage the research environment.
+
+    This approach ensures full compatibility with QuantConnect's setup
+    by delegating all initialization and container management to lean-cli.
+    """
+
+    def __init__(
+        self,
+        session_id: Optional[str] = None,
+        workspace_dir: Optional[Path] = None,
+        port: Optional[int] = None,
+    ):
+        """
+        Initialize a new research session.
+
+        Args:
+            session_id: Unique identifier for this session
+            workspace_dir: Directory for the lean project (temp dir if None)
+            port: Port to run Jupyter on (default: 8888)
+        """
+        self.session_id = session_id or f"qb_{uuid.uuid4().hex[:8]}"
+        self.port = port or int(os.environ.get("QUANTBOOK_DOCKER_PORT", "8888"))
+        self.created_at = datetime.utcnow()
+        self.last_used = self.created_at
+
+        # Setup workspace
+        if workspace_dir:
+            self.workspace_dir = Path(workspace_dir)
+            self._temp_dir = None
+        else:
+            self._temp_dir = tempfile.TemporaryDirectory(prefix=f"qc_research_{self.session_id}_")
+            self.workspace_dir = Path(self._temp_dir.name)
+
+        # Ensure workspace exists
+        self.workspace_dir.mkdir(parents=True, exist_ok=True)
+
+        # Docker client for container management
+        self.client = docker.from_env()
+        self.container: Optional[Container] = None
+        self._initialized = False
+
+        logger.info(f"Created research session {self.session_id} using lean-cli (port: {self.port})")
+
+    async def _check_lean_cli(self) -> bool:
+        """Check if lean-cli is installed and available."""
+        try:
+            result = await asyncio.to_thread(
+                subprocess.run,
+                ["lean", "--version"],
+                capture_output=True,
+                text=True,
+                check=False
+            )
+            if result.returncode == 0:
+                logger.info(f"lean-cli version: {result.stdout.strip()}")
+                return True
+            else:
+                logger.error(f"lean-cli check failed: {result.stderr}")
+                return False
+        except FileNotFoundError:
+            logger.error("lean-cli not found in PATH")
+            return False
+        except Exception as e:
+            logger.error(f"Error checking lean-cli: {e}")
+            return False
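This session hard-depends on a working lean-cli on the host `PATH`. The equivalent manual setup check, matching what `_check_lean_cli` and the error messages below expect, is:

```bash
pip install lean   # QuantConnect lean-cli
lean --version     # what _check_lean_cli runs under the hood
lean login         # cache credentials so `lean init` / `lean research` can authenticate
```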
+
+    async def _init_lean_project(self) -> bool:
+        """Initialize a lean project in the workspace directory."""
+        try:
+            # Check if already initialized (either lean.json or config.json)
+            lean_json = self.workspace_dir / "lean.json"
+            config_json = self.workspace_dir / "config.json"
+
+            if lean_json.exists() or config_json.exists():
+                logger.info("Lean project already initialized")
+                return True
+
+            # Run lean init in the workspace directory
+            logger.info(f"Initializing lean project in {self.workspace_dir}")
+
+            # First, we need to ensure we're logged in
+            # Check if credentials are available
+            if not all([
+                os.environ.get("QUANTCONNECT_USER_ID"),
+                os.environ.get("QUANTCONNECT_API_TOKEN"),
+                os.environ.get("QUANTCONNECT_ORGANIZATION_ID")
+            ]):
+                logger.warning("QuantConnect credentials not fully configured")
+                # Continue anyway - lean init might work with cached credentials
+
+            # Run lean init
+            org_id = os.environ.get("QUANTCONNECT_ORGANIZATION_ID", "")
+            init_cmd = ["lean", "init"]
+            if org_id:
+                init_cmd.extend(["--organization", org_id])
+
+            logger.info(f"Running: {' '.join(init_cmd)}")
+            result = await asyncio.to_thread(
+                subprocess.run,
+                init_cmd,
+                cwd=str(self.workspace_dir),
+                capture_output=True,
+                text=True,
+                check=False
+            )
+
+            if result.returncode != 0:
+                logger.error(f"lean init failed with return code {result.returncode}")
+                logger.error(f"stdout: {result.stdout}")
+                logger.error(f"stderr: {result.stderr}")
+
+                # Check if it's a credentials issue
+                if "Please log in" in result.stderr or "authentication" in result.stderr.lower():
+                    logger.error("Authentication required. Please run 'lean login' first.")
+
+                return False
+
+            logger.info("Lean project initialized successfully")
+            return True
+
+        except Exception as e:
+            logger.error(f"Error initializing lean project: {e}")
+            return False
+
+    async def _find_container(self) -> None:
+        """Try to find the research container."""
+        all_containers = self.client.containers.list()
+        logger.info(f"Looking for container among {len(all_containers)} running containers")
+
+        # Try different name patterns that lean-cli might use
+        name_patterns = [
+            "lean_cli_",
+            "research",
+            str(self.port),
+        ]
+
+        for container in all_containers:
+            container_name_lower = container.name.lower()
+            # Check if any of our patterns match
+            if any(pattern.lower() in container_name_lower for pattern in name_patterns):
+                # Additional check - make sure it's a research container
+                try:
+                    # Check ports
+                    port_bindings = container.ports.get('8888/tcp', [])
+                    for binding in port_bindings:
+                        if binding.get('HostPort') == str(self.port):
+                            self.container = container
+                            logger.info(f"Found research container: {container.name}")
+                            return
+                except Exception as e:
+                    logger.debug(f"Error checking container {container.name}: {e}")
+
+    async def _create_research_notebook(self) -> Path:
+        """Create a default research notebook if it doesn't exist."""
+        notebooks_dir = self.workspace_dir / "Research"
+        notebooks_dir.mkdir(parents=True, exist_ok=True)
+
+        notebook_path = notebooks_dir / "research.ipynb"
+        if not notebook_path.exists():
+            notebook_content = {
+                "cells": [
+                    {
+                        "cell_type": "markdown",
+                        "metadata": {},
+                        "source": [
+                            "# QuantConnect Research Environment\n",
+                            "Welcome to the QuantConnect Research Environment. ",
+                            "QuantBook is automatically available as 'qb'.",
+                            "qb = QuantBook()"
+                        ]
+                    },
+                    {
+                        "cell_type": "code",
+                        "execution_count": None,
+                        "metadata": {},
+                        "source": [
+                            "# QuantBook Analysis\n",
+                            "# Documentation: https://www.quantconnect.com/docs/v2/research-environment\n",
+                            "\n",
+                            "import os\n",
+                            "import glob\n",
+                            "\n",
+                            "# Configure QuantConnect environment properly\n",
+                            "import QuantConnect\n",
+                            "from QuantConnect.Configuration import Config\n",
+                            "\n",
+                            "# Reset config and set required values\n",
+                            "Config.Reset()\n",
+                            "Config.Set('data-folder', '/Lean/Data')\n",
+                            "Config.Set('log-handler', 'ConsoleLogHandler')\n",
+                            "Config.Set('debug-mode', 'false')\n",
+                            "Config.Set('results-destination-folder', '/LeanCLI')\n",
+                            "\n",
+                            "# Initialize QuantBook with proper error handling\n",
+                            "qb = None\n",
+                            "try:\n",
+                            "    qb = QuantBook()\n",
+                            "    print('✅ QuantBook initialized successfully!')\n",
+                            "except Exception as e:\n",
+                            "    print(f'❌ QuantBook initialization failed: {e}')\n",
+                            "    print('Will attempt to continue with limited functionality...')\n",
+                            "\n",
+                            "print('Checking for QuantBook initialization...')\n",
+                            "\n",
+                            "# Look for IPython startup scripts\n",
+                            "startup_paths = [\n",
+                            "    '/root/.ipython/profile_default/startup/',\n",
+                            "    '/opt/miniconda3/etc/ipython/startup/',\n",
+                            "    '/etc/ipython/startup/',\n",
+                            "    '~/.ipython/profile_default/startup/'\n",
+                            "]\n",
+                            "\n",
+                            "for path in startup_paths:\n",
+                            "    expanded_path = os.path.expanduser(path)\n",
+                            "    if os.path.exists(expanded_path):\n",
+                            "        print(f'\\nFound startup directory: {expanded_path}')\n",
+                            "        files = glob.glob(os.path.join(expanded_path, '*.py'))\n",
+                            "        for f in files:\n",
+                            "            print(f'  Startup script: {os.path.basename(f)}')\n",
+                            "            # Read first few lines to see what it does\n",
+                            "            try:\n",
+                            "                with open(f, 'r') as file:\n",
+                            "                    lines = file.readlines()[:10]\n",
+                            "                    for line in lines:\n",
+                            "                        if 'QuantBook' in line or 'qb' in line:\n",
+                            "                            print(f'    -> {line.strip()}')\n",
+                            "            except:\n",
+                            "                pass\n",
+                            "\n",
+                            "# Check if qb is already available\n",
+                            "try:\n",
+                            "    qb\n",
+                            "    print('\\n✓ QuantBook is ALREADY initialized and available as qb!')\n",
+                            "    print(f'Type: {type(qb)}')\n",
+                            "except NameError:\n",
+                            "    print('\\n✗ QuantBook (qb) is NOT available in the current namespace')\n",
+                            "    print('\\nThis suggests we need to run in the Jupyter web interface where startup scripts are executed')\n"
+                        ],
+                        "outputs": []
+                    }
+                ],
+                "metadata": {
+                    "kernelspec": {
+                        "display_name": "Python 3",
+                        "language": "python",
+                        "name": "python3"
+                    }
+                },
+                "nbformat": 4,
+                "nbformat_minor": 4
+            }
+            with open(notebook_path, "w") as f:
+                json.dump(notebook_content, f, indent=2)
+            logger.info(f"Created default research notebook: {notebook_path}")
+
+        return notebooks_dir
+
+    async def initialize(self) -> None:
+        """Initialize the research environment using lean-cli."""
+        if self._initialized:
+            return
+
+        try:
+            # Check if lean-cli is available
+            if not await self._check_lean_cli():
+                raise ResearchSessionError(
+                    "lean-cli is not installed. Please install it with: pip install lean"
+                )
+
+            # Initialize lean project if needed
+            init_success = await self._init_lean_project()
+            if not init_success:
+                logger.warning("Failed to initialize lean project, will try to proceed anyway")
+
+            # Create research notebook directory
+            research_dir = await self._create_research_notebook()
+
+            # Start the research environment using lean-cli
+            logger.info(f"Starting research environment on port {self.port}")
+
+            # Build the lean research command
+            cmd = [
+                "lean", "research",
+                str(research_dir),  # Project directory
+                "--port", str(self.port),
+                "--no-open"  # Don't open browser automatically
+            ]
+
+            # Add detach flag to run in background
+            cmd.append("--detach")
+
+            # Run the command
+            result = await asyncio.to_thread(
+                subprocess.run,
+                cmd,
+                cwd=str(self.workspace_dir),
+                capture_output=True,
+                text=True,
+                check=False
+            )
+
+            if result.returncode != 0:
+                logger.error(f"lean research failed with return code {result.returncode}")
+                logger.error(f"stdout: {result.stdout}")
+                logger.error(f"stderr: {result.stderr}")
+
+                error_msg = result.stderr or result.stdout or "Unknown error"
+
+                # Provide helpful error messages
+                if "Please log in" in error_msg:
+                    raise ResearchSessionError(
+                        "Authentication required. Please run 'lean login' first to authenticate with QuantConnect."
+                    )
+                elif "lean.json" in error_msg or "config.json" in error_msg:
+                    raise ResearchSessionError(
+                        "No Lean configuration found. Please run 'lean init' in your project directory first."
+                    )
+                else:
+                    raise ResearchSessionError(f"Failed to start research environment: {error_msg}")
+
+            # Extract container name from output
+            output = result.stdout
+            logger.info(f"lean research output: {output}")
+
+            # Wait a moment for container to fully start
+            await asyncio.sleep(2)
+
+            # Find the container - lean-cli uses specific naming patterns
+            container_name = None
+            self.container = None
+
+            # First try to extract from output
+            for line in output.split('\n'):
+                if "container" in line.lower() and ("started" in line or "running" in line):
+                    # Try different extraction patterns (re is imported at module level)
+                    # Pattern 1: 'container-name'
+                    match = re.search(r"'([^']+)'", line)
+                    if match:
+                        container_name = match.group(1)
+                        logger.info(f"Extracted container name from output: {container_name}")
+                        break
+                    # Pattern 2: container-name (no quotes)
+                    match = re.search(r"container[:\s]+(\S+)", line, re.IGNORECASE)
+                    if match:
+                        container_name = match.group(1)
+                        logger.info(f"Extracted container name from output (pattern 2): {container_name}")
+                        break
+
+            # Try to get container by extracted name
+            if container_name:
+                try:
+                    self.container = self.client.containers.get(container_name)
+                    logger.info(f"Found container by name: {container_name}")
+                except docker.errors.NotFound:
+                    logger.warning(f"Container {container_name} not found")
+
+            # If not found yet, search by various patterns
+            if not self.container:
+                # List all running containers for debugging
+                all_containers = self.client.containers.list()
+                logger.info(f"All running containers: {[c.name for c in all_containers]}")
+
+                # Try different name patterns that lean-cli might use
+                name_patterns = [
+                    "lean_cli_",
+                    self.session_id,
+                    "research",
+                    str(self.port),  # Sometimes port is in the name
+                ]
+
+                for container in all_containers:
+                    container_name_lower = container.name.lower()
+                    # Check if any of our patterns match
+                    if any(pattern.lower() in container_name_lower for pattern in name_patterns):
+                        # Additional check - make sure it's a research container
+                        if "research" in container_name_lower or str(self.port) in container.ports.get('8888/tcp', [{}])[0].get('HostPort', ''):
+                            self.container = container
+                            logger.info(f"Found research container by pattern matching: {container.name}")
+                            break
+
+                # Last resort - check by port binding
+                if not self.container:
+                    for container in all_containers:
+                        try:
+                            # Check if this container has port 8888 mapped to our port
+                            port_bindings = container.ports.get('8888/tcp', [])
+                            if port_bindings:
+                                for binding in port_bindings:
+                                    if binding.get('HostPort') == str(self.port):
+                                        self.container = container
+                                        logger.info(f"Found container by port {self.port}: {container.name}")
+                                        break
+                        except Exception as e:
+                            logger.debug(f"Error checking container {container.name}: {e}")
+
+                        if self.container:
+                            break
+
+            self._initialized = True
+
+            # Security logging
+            if self.container:
+                security_logger.log_session_created(self.session_id, self.container.id)
+                logger.info(f"Research session {self.session_id} initialized successfully with container {self.container.name}")
+            else:
+                logger.warning(f"Research session {self.session_id} initialized but container not yet found")
+                logger.info("Container may still be starting up. Will retry on first execute.")
+
+            logger.info(f"Jupyter Lab accessible at: http://localhost:{self.port}")
+
+        except Exception as e:
+            logger.error(f"Failed to initialize research session: {e}")
+            await self.close()
+            raise ResearchSessionError(f"Failed to initialize research session: {e}")
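On the container side, everything `initialize()` does boils down to the single lean-cli invocation built in the `cmd` list above; the manual equivalent is (with `<project-dir>` standing in for the session's Research directory):

```bash
lean research <project-dir> --port 8888 --no-open --detach
```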
+
+    async def execute(self, code: str, timeout: int = 300) -> Dict[str, Any]:
+        """
+        Execute code by modifying /LeanCLI/research.ipynb where qb is available.
+        This ensures all code has access to the pre-initialized QuantBook instance.
+        """
+        if not self._initialized:
+            await self.initialize()
+
+        # If container wasn't found during init, try to find it again
+        if not self.container:
+            logger.warning("Container not found during init, attempting to locate it again...")
+            await self._find_container()
+
+        if not self.container:
+            # Return a specific error that helps with debugging
+            return {
+                "status": "error",
+                "output": "",
+                "error": "Container not found. The Jupyter environment may still be starting up.",
+                "session_id": self.session_id,
+                "message": f"Please check http://localhost:{self.port} to see if Jupyter is running."
+            }
+
+        self.last_used = datetime.utcnow()
+
+        try:
+            # ALWAYS use /LeanCLI/research.ipynb
+            notebook_path = "/LeanCLI/research.ipynb"
+
+            # Read the existing notebook
+            read_cmd = f"cat {notebook_path}"
+            read_result = await asyncio.to_thread(
+                self.container.exec_run,
+                read_cmd,
+                demux=False
+            )
+
+            if read_result.exit_code != 0:
+                logger.error(f"Failed to read notebook at {notebook_path}")
+                # Create a basic notebook if it doesn't exist
+                notebook_content = {
+                    "cells": [
+                        {
+                            "cell_type": "markdown",
+                            "metadata": {},
+                            "source": ["# QuantConnect Research\n", "qb is pre-initialized and ready to use"]
+                        },
+                        {
+                            "cell_type": "code",
+                            "execution_count": None,
+                            "metadata": {},
+                            "source": [
+                                "# QuantBook initialization check\n",
+                                "import QuantConnect\n",
+                                "from QuantConnect.Configuration import Config\n",
+                                "\n",
+                                "# Configure QuantConnect properly\n",
+                                "Config.Reset()\n",
+                                "Config.Set('data-folder', '/Lean/Data')\n",
+                                "Config.Set('log-handler', 'ConsoleLogHandler')\n",
+                                "Config.Set('debug-mode', 'false')\n",
+                                "Config.Set('results-destination-folder', '/LeanCLI')\n",
+                                "\n",
+                                "# Initialize QuantBook with error handling\n",
+                                "qb = None\n",
+                                "try:\n",
+                                "    qb = QuantBook()\n",
+                                "    print('✅ QuantBook initialized successfully!')\n",
+                                "except Exception as e:\n",
+                                "    print(f'❌ QuantBook initialization failed: {e}')\n",
+                                "    print('Will attempt to continue with limited functionality...')\n"
+                            ],
+                            "outputs": []
+                        }
+                    ],
+                    "metadata": {
+                        "kernelspec": {
+                            "display_name": "Python 3",
+                            "language": "python",
+                            "name": "python3"
+                        }
+                    },
+                    "nbformat": 4,
+                    "nbformat_minor": 4
+                }
+            else:
+                # Parse existing notebook
+                try:
+                    notebook_content = json.loads(read_result.output.decode('utf-8'))
+                except Exception as e:
+                    logger.error(f"Failed to parse notebook: {e}")
+                    return {
+                        "status": "error",
+                        "output": "",
+                        "error": f"Failed to parse notebook: {e}",
+                        "session_id": self.session_id,
+                    }
+
+            # Add new cell with the code
+            # Properly format the source with newlines preserved
+            if isinstance(code, str):
+                lines = code.split('\n')
+                # Add newline to each line except possibly the last
+                source = [line + '\n' for line in lines[:-1]]
+                if lines[-1]:  # If last line is not empty, add it
+                    source.append(lines[-1] + '\n')
+                if not source:  # If code was empty or just newlines
+                    source = ['']
+            else:
+                source = code
+
+            new_cell = {
+                "cell_type": "code",
+                "execution_count": None,
+                "metadata": {},
+                "source": source,
+                "outputs": []
+            }
+            notebook_content["cells"].append(new_cell)
+
+            # Write the updated notebook back
+            notebook_json = json.dumps(notebook_content, indent=2)
+            write_cmd = f"cat > {notebook_path} << 'EOF'\n{notebook_json}\nEOF"
+            write_result = await asyncio.to_thread(
+                self.container.exec_run,
+                ['/bin/sh', '-c', write_cmd],
+                demux=False
+            )
+
+            if write_result.exit_code != 0:
+                logger.error(f"Failed to write notebook: {write_result.output}")
+                return {
+                    "status": "error",
+                    "output": "",
+                    "error": "Failed to update notebook",
+                    "session_id": self.session_id,
+                }
+
+            # Execute using direct kernel approach since jupyter tools may not be available
+            # Try to execute the last cell directly using the jupyter kernel
+
+            # First, check what execution tools are available
+            check_tools_cmd = "cd /LeanCLI && which jupyter && which python && which ipython && ls -la && echo '=== KERNELS ===' && jupyter kernelspec list"
+            tools_result = await asyncio.to_thread(
+                self.container.exec_run,
+                ['/bin/sh', '-c', check_tools_cmd],
+                demux=False
+            )
+            tools_output = tools_result.output.decode('utf-8', errors='replace') if tools_result.output else ""
+            logger.info(f"Available tools in container: {tools_output}")
+
+            # Use proper Jupyter kernel execution - the key is to communicate with the running kernel
+            exec_commands = [
+                # Method 1: Enhanced Jupyter kernel detection and execution
+                ("jupyter kernel", f"""cd /LeanCLI && python -c "
+import requests
+import json
+import time
+import os
+import subprocess
+
+# Load the notebook to get the code
+with open('research.ipynb') as f:
+    nb = json.load(f)
+code = ''.join(nb['cells'][-1]['source'])
+
+print('=== JUPYTER KERNEL DEBUG ===')
+
+# Check if Jupyter server is running
+try:
+    # Try different endpoints and configurations
+    base_urls = ['http://localhost:8888', 'http://127.0.0.1:8888']
+    token = os.environ.get('JUPYTER_TOKEN', '')
+
+    for base_url in base_urls:
+        print(f'Trying {{base_url}}...')
+
+        # Check server status
+        try:
+            if token:
+                server_response = requests.get(f'{{base_url}}/api/status?token={{token}}', timeout=3)
+            else:
+                server_response = requests.get(f'{{base_url}}/api/status', timeout=3)
+            print(f'Server status: {{server_response.status_code}}')
+        except Exception as e:
+            print(f'Server check failed: {{e}}')
+            continue
+
+        # Get kernels
+        try:
+            if token:
+                kernels_response = requests.get(f'{{base_url}}/api/kernels?token={{token}}', timeout=5)
+            else:
+                kernels_response = requests.get(f'{{base_url}}/api/kernels', timeout=5)
+
+            print(f'Kernels API response: {{kernels_response.status_code}}')
+
+            if kernels_response.status_code == 200:
+                kernels = kernels_response.json()
+                print(f'Found {{len(kernels)}} kernels: {{[k.get(\\\"id\\\", \\\"unknown\\\") for k in kernels]}}')
+
+                if kernels:
+                    kernel_id = kernels[0]['id']
+                    print(f'Using kernel: {{kernel_id}}')
+
+                    # Execute code via kernel API
+                    execute_url = f'{{base_url}}/api/kernels/{{kernel_id}}/execute'
+                    if token:
+                        execute_url += f'?token={{token}}'
+
+                    execute_data = {{
+                        'code': code,
+                        'silent': False,
+                        'store_history': True,
+                        'user_expressions': {{}},
+                        'allow_stdin': False
+                    }}
+
+                    exec_response = requests.post(execute_url, json=execute_data, timeout=30)
+
+                    if exec_response.status_code == 200:
+                        result = exec_response.json()
+                        print('=== KERNEL EXECUTION SUCCESS ===')
+                        print(json.dumps(result, indent=2))
+                        break
+                    else:
+                        print(f'Kernel execution failed: {{exec_response.status_code}}')
+                        print(exec_response.text)
+                else:
print('No running kernels found') + else: + print(f'Kernels API failed: {{kernels_response.status_code}} - {{kernels_response.text}}') + except Exception as e: + print(f'Kernel API failed for {{base_url}}: {{e}}') + + # If API approach fails, try to start a kernel and execute + print('=== TRYING KERNEL START ===') + try: + # Create a new kernel session + kernel_response = requests.post('http://localhost:8888/api/kernels', + json={{'name': 'python3'}}, timeout=10) + if kernel_response.status_code == 201: + kernel_info = kernel_response.json() + kernel_id = kernel_info['id'] + print(f'Created new kernel: {{kernel_id}}') + + # Wait a moment for kernel to start + time.sleep(2) + + # Now execute code + execute_data = {{ + 'code': code, + 'silent': False, + 'store_history': True + }} + + exec_response = requests.post( + f'http://localhost:8888/api/kernels/{{kernel_id}}/execute', + json=execute_data, timeout=30) + + if exec_response.status_code == 200: + result = exec_response.json() + print('=== NEW KERNEL EXECUTION SUCCESS ===') + print(json.dumps(result, indent=2)) + else: + print(f'New kernel execution failed: {{exec_response.status_code}}') + else: + print(f'Failed to create kernel: {{kernel_response.status_code}}') + except Exception as e: + print(f'Kernel creation failed: {{e}}') + +except Exception as e: + print(f'All kernel approaches failed: {{e}}') + +print('=== FALLBACK TO NOTEBOOK EXECUTION ===') +exec(code) +" 2>&1"""), + + # Method 2: Execute using IPython with QuantConnect startup + ("ipython execution", f"""cd /LeanCLI && ipython -c " +import json + +# Load the notebook to get the code +with open('research.ipynb', 'r') as f: + nb = json.load(f) + +# Get the last cell's code +code = ''.join(nb['cells'][-1]['source']) + +print('=== EXECUTING CODE ===') +print(code) +print('=== OUTPUT ===') + +# Execute the code - IPython should have QuantConnect already loaded via startup scripts +exec(code) +" 2>&1"""), + + # Method 3: Use nbconvert with better error handling + ("jupyter nbconvert", f"cd /LeanCLI && jupyter nbconvert --to notebook --execute research.ipynb --output research_executed.ipynb --ExecutePreprocessor.kernel_name=python3 --ExecutePreprocessor.timeout=60 --allow-errors --no-input 2>&1"), + ] + + executed_successfully = False + execution_output = "" + direct_output = "" + + for i, (method_name, exec_cmd) in enumerate(exec_commands): + try: + logger.info(f"Trying execution method {i+1}: {method_name}") + + exec_result = await asyncio.wait_for( + asyncio.to_thread( + self.container.exec_run, + ['/bin/sh', '-c', exec_cmd], + demux=False + ), + timeout=timeout + ) + + execution_output = exec_result.output.decode('utf-8', errors='replace') if exec_result.output else "" + + if exec_result.exit_code == 0: + executed_successfully = True + logger.info(f"Successfully executed with method {i+1} ({method_name})") + + # For direct kernel and ipython execution, the output is already captured + if "jupyter kernel" in method_name or "ipython execution" in method_name: + direct_output = execution_output + + break + else: + logger.error(f"Method {i+1} ({method_name}) failed with exit code {exec_result.exit_code}") + logger.error(f"Full error output: {execution_output}") + + except asyncio.TimeoutError: + logger.error(f"Execution method {i+1} ({method_name}) timed out after {timeout}s") + continue + except Exception as e: + logger.debug(f"Error with execution method {i+1} ({method_name}): {e}") + continue + + if executed_successfully: + # Handle different execution methods + if direct_output: 
+ # Direct python execution - output is already captured + return { + "status": "success", + "output": direct_output.strip() if direct_output else "Code executed successfully (no output)", + "error": None, + "session_id": self.session_id, + } + else: + # Notebook execution - try to read the executed notebook to get the output + try: + read_executed_cmd = "cat /LeanCLI/research_executed.ipynb" + read_exec_result = await asyncio.to_thread( + self.container.exec_run, + read_executed_cmd, + demux=False + ) + + if read_exec_result.exit_code == 0: + executed_notebook = json.loads(read_exec_result.output.decode('utf-8')) + # Get the output from the last cell + last_cell = executed_notebook["cells"][-1] + + # Extract output from the cell + cell_output = "" + if "outputs" in last_cell and last_cell["outputs"]: + for output in last_cell["outputs"]: + if output.get("output_type") == "stream": + cell_output += "".join(output.get("text", [])) + elif output.get("output_type") == "execute_result": + if "data" in output and "text/plain" in output["data"]: + cell_output += "".join(output["data"]["text/plain"]) + elif output.get("output_type") == "display_data": + if "data" in output and "text/plain" in output["data"]: + cell_output += "".join(output["data"]["text/plain"]) + elif output.get("output_type") == "error": + error_msg = f"Error: {output.get('ename', 'Unknown')}: {output.get('evalue', 'Unknown error')}" + traceback_lines = output.get('traceback', []) + full_error = error_msg + "\n" + "\n".join(traceback_lines) + + # Clean up the executed notebook file + await asyncio.to_thread( + self.container.exec_run, + "rm -f /LeanCLI/research_executed.ipynb", + demux=False + ) + + return { + "status": "error", + "output": cell_output, + "error": full_error, + "session_id": self.session_id, + } + + # Clean up the executed notebook file + await asyncio.to_thread( + self.container.exec_run, + "rm -f /LeanCLI/research_executed.ipynb", + demux=False + ) + + return { + "status": "success", + "output": cell_output.strip() if cell_output else "Code executed successfully (no output)", + "error": None, + "session_id": self.session_id, + } + + except Exception as e: + logger.error(f"Failed to parse executed notebook: {e}") + # Fall through to fallback approach + + # Fallback: If notebook execution failed, return a helpful message + # but still indicate the code was added to the notebook + return { + "status": "success", + "output": f"Code added to /LeanCLI/research.ipynb. Executed_successfully output = {executed_successfully}", + "error": None, + "session_id": self.session_id, + "note": f"Notebook execution failed. 
Details: {execution_output[:500] if execution_output else 'No details available'}" + } + + except Exception as e: + logger.error(f"Error executing code: {e}") + return { + "status": "error", + "output": "", + "error": str(e), + "session_id": self.session_id, + } + + def is_expired(self, max_idle_time: timedelta = timedelta(hours=1)) -> bool: + """Check if session has been idle too long.""" + return datetime.utcnow() - self.last_used > max_idle_time + + async def close(self, reason: str = "normal") -> None: + """Stop the research session.""" + logger.info(f"Closing research session {self.session_id} (reason: {reason})") + + try: + if self.container: + try: + # Stop the container + self.container.stop(timeout=10) + logger.info(f"Container {self.container.name} stopped") + except Exception as e: + logger.warning(f"Error stopping container: {e}") + try: + self.container.kill() + except Exception as e2: + logger.error(f"Error killing container: {e2}") + + self.container = None + + # Clean up temp directory if used + if self._temp_dir: + self._temp_dir.cleanup() + self._temp_dir = None + + # Security logging + security_logger.log_session_destroyed(self.session_id, reason) + + except Exception as e: + logger.error(f"Error during session cleanup: {e}") + finally: + self._initialized = False + logger.info(f"Research session {self.session_id} closed") + + def __repr__(self) -> str: + return ( + f"ResearchSession(id={self.session_id}, " + f"initialized={self._initialized}, " + f"port={self.port})" + ) \ No newline at end of file diff --git a/quantconnect_mcp/src/adapters/research_session_old.py.bak b/quantconnect_mcp/src/adapters/research_session_old.py.bak new file mode 100644 index 0000000..0e290f8 --- /dev/null +++ b/quantconnect_mcp/src/adapters/research_session_old.py.bak @@ -0,0 +1,883 @@ +"""QuantConnect Research Session Container Adapter""" + +import asyncio +import hashlib +import json +import logging +import os +import shutil +import tempfile +import uuid +import zipfile +from datetime import datetime, timedelta +from pathlib import Path +from typing import Any, Dict, List, Optional, Union + +import docker +import docker.types +import pandas as pd +import requests +from docker.models.containers import Container +from docker.types import Mount + +from .logging_config import get_container_logger, security_logger + +logger = logging.getLogger(__name__) + + +class ResearchSessionError(Exception): + """Custom exception for research session errors.""" + pass + + +class ResearchSession: + """ + Container-based QuantConnect Research session adapter. + + Manages a Docker container running the quantconnect/research image + and provides methods to execute code and exchange data. + """ + + IMAGE = "quantconnect/research:latest" # Use research image as intended + CONTAINER_WORKSPACE = "/Lean" # Match LEAN_ROOT_PATH + NOTEBOOKS_PATH = "/Lean/Notebooks" + DATA_PATH = "/Lean/Data" + TIMEOUT_DEFAULT = 300 # 5 minutes + + def __init__( + self, + session_id: Optional[str] = None, + workspace_dir: Optional[Path] = None, + memory_limit: str = "2g", + cpu_limit: float = 1.0, + timeout: int = TIMEOUT_DEFAULT, + port: Optional[int] = None, + ): + """ + Initialize a new research session. 
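The constructor, `initialize()`, `execute()`, and `close()` make up the whole lifecycle of a session. A minimal driver sketch, not part of this diff: the import path is assumed from the `session_manager.py` import shown later in this patch, and `qb` is assumed to be pre-initialized inside the container.

```python
import asyncio

from quantconnect_mcp.src.adapters.research_session_lean_cli import ResearchSession

async def demo() -> None:
    # Resource limits and timeout mirror the constructor parameters above.
    session = ResearchSession(memory_limit="2g", cpu_limit=1.0, timeout=300)
    try:
        await session.initialize()                   # pull image, start container
        result = await session.execute("print(qb)")  # qb is pre-initialized
        print(result["status"], result["output"])
    finally:
        await session.close(reason="demo finished")

asyncio.run(demo())
```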
+ + Args: + session_id: Unique identifier for this session + workspace_dir: Local workspace directory (temp dir if None) + memory_limit: Container memory limit (e.g., "2g", "512m") + cpu_limit: Container CPU limit (fraction of CPU) + timeout: Default execution timeout in seconds + port: Local port to expose Jupyter Lab on (default: env var QUANTBOOK_DOCKER_PORT or 8888) + """ + self.session_id = session_id or f"qb_{uuid.uuid4().hex[:8]}" + self.memory_limit = memory_limit + self.cpu_limit = cpu_limit + self.timeout = timeout + self.created_at = datetime.utcnow() + self.last_used = self.created_at + + # Get port from parameter, env var, or default + import os + if port is not None: + self.port = port + else: + self.port = int(os.environ.get("QUANTBOOK_DOCKER_PORT", "8888")) + + # Setup workspace + if workspace_dir: + self.workspace_dir = Path(workspace_dir) + self.workspace_dir.mkdir(parents=True, exist_ok=True) + self._temp_dir = None + else: + self._temp_dir = tempfile.TemporaryDirectory(prefix=f"qc_research_{self.session_id}_") + self.workspace_dir = Path(self._temp_dir.name) + + # Create necessary directories + self.notebooks_dir = self.workspace_dir / "Notebooks" + self.notebooks_dir.mkdir(parents=True, exist_ok=True) + + # Create data directory structure (minimal for research) + self.data_dir = self.workspace_dir / "Data" + self.data_dir.mkdir(parents=True, exist_ok=True) + + # Create temp directory for configs + self.temp_config_dir = self.workspace_dir / "temp" + self.temp_config_dir.mkdir(parents=True, exist_ok=True) + + # Docker client and container + self.client = docker.from_env() + self.container: Optional[Container] = None + self._initialized = False + + logger.info(f"Created research session {self.session_id} (port: {self.port})") + + async def _download_lean_repository(self) -> None: + """Download and extract the Lean repository for config and data files.""" + logger.info("Downloading latest Lean repository for configuration and data...") + + try: + # Download the Lean repository master branch + response = await asyncio.to_thread( + requests.get, + "https://github.com/QuantConnect/Lean/archive/master.zip", + stream=True, + timeout=60 + ) + response.raise_for_status() + + # Save to temporary file + zip_path = self.temp_config_dir / "lean-master.zip" + with open(zip_path, "wb") as f: + for chunk in response.iter_content(chunk_size=8192): + if chunk: + f.write(chunk) + + # Extract the zip file + extract_dir = self.temp_config_dir / "lean-extract" + with zipfile.ZipFile(zip_path, 'r') as zip_ref: + zip_ref.extractall(extract_dir) + + # Copy the config file + source_config = extract_dir / "Lean-master" / "Launcher" / "config.json" + if source_config.exists(): + # Read and clean the config (like lean-cli does) + config_content = source_config.read_text(encoding="utf-8") + lean_config = self._parse_json_with_comments(config_content) + + # Update config with research-specific settings + lean_config["environment"] = "backtesting" + lean_config["algorithm-type-name"] = "QuantBookResearch" + lean_config["algorithm-language"] = "Python" + lean_config["algorithm-location"] = "/Notebooks/research.ipynb" + lean_config["research-object-store-name"] = self.session_id + lean_config["job-organization-id"] = os.environ.get("QUANTCONNECT_ORGANIZATION_ID", "0") + lean_config["job-user-id"] = os.environ.get("QUANTCONNECT_USER_ID", "0") + lean_config["api-access-token"] = os.environ.get("QUANTCONNECT_API_TOKEN", "") + lean_config["composer-dll-directory"] = "/Lean" + 
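Lean ships `Launcher/config.json` with `//` and `/* */` comments, which is why the file is run through `_parse_json_with_comments` (defined just below) rather than fed straight to `json.loads`. A simplified sketch of the idea, assuming comments never appear inside quoted values (the real method tracks quote state character by character):

```python
import json
import re

def parse_json_with_comments_naive(content: str) -> dict:
    # Strip /* ... */ block comments, then whole-line // comments.
    content = re.sub(r"/\*.*?\*/", "", content, flags=re.DOTALL)
    content = re.sub(r"^\s*//.*$", "", content, flags=re.MULTILINE)
    return json.loads(content)

print(parse_json_with_comments_naive(
    '{\n  // engine mode\n  "environment": "backtesting"\n}'
))  # -> {'environment': 'backtesting'}
```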
lean_config["results-destination-folder"] = "/tmp" + lean_config["object-store-name"] = self.session_id + lean_config["data-folder"] = "/Lean/Data" + + # No real limit for the object store by default + lean_config["storage-limit-mb"] = "9999999" + lean_config["storage-file-count"] = "9999999" + + # Save the cleaned config + config_path = self.temp_config_dir / "config.json" + with open(config_path, "w") as f: + json.dump(lean_config, f, indent=2) + + logger.info("Lean configuration downloaded and prepared") + else: + raise ResearchSessionError("Could not find Launcher/config.json in Lean repository") + + # Copy the Data directory + source_data = extract_dir / "Lean-master" / "Data" + if source_data.exists() and source_data.is_dir(): + # Copy essential data files (market hours, symbol properties, etc.) + essential_dirs = ["market-hours", "symbol-properties", "equity/usa/map_files"] + for dir_name in essential_dirs: + source_dir = source_data / dir_name + if source_dir.exists(): + dest_dir = self.data_dir / dir_name + dest_dir.parent.mkdir(parents=True, exist_ok=True) + shutil.copytree(source_dir, dest_dir, dirs_exist_ok=True) + logger.info(f"Copied data directory: {dir_name}") + + logger.info("Essential data files downloaded") + else: + logger.warning("Could not find Data directory in Lean repository") + + # Clean up + zip_path.unlink(missing_ok=True) + shutil.rmtree(extract_dir, ignore_errors=True) + + except Exception as e: + logger.error(f"Failed to download Lean repository: {e}") + raise ResearchSessionError(f"Failed to download Lean repository: {e}") + + def _parse_json_with_comments(self, content: str) -> Dict[str, Any]: + """Parse JSON content that may contain comments.""" + try: + import re + # Remove multi-line and single-line comments + content = re.sub(r'/\*.*?\*/|//[^\r\n"]*[\r\n]', '', content, flags=re.DOTALL) + + # Handle single line comments with double quotes + lines = [] + for line in content.split('\n'): + double_quotes_count = 0 + previous_char = '' + cleaned_line = '' + i = 0 + while i < len(line): + current_char = line[i] + if current_char == '/' and i + 1 < len(line) and line[i + 1] == '/' and double_quotes_count % 2 == 0: + # Found comment start outside quotes + break + else: + if current_char == '"' and previous_char != '\\': + double_quotes_count += 1 + cleaned_line += current_char + previous_char = current_char + i += 1 + lines.append(cleaned_line) + + cleaned_content = '\n'.join(lines) + return json.loads(cleaned_content) + except Exception as e: + logger.error(f"Failed to parse JSON with comments: {e}") + # Fallback to simple JSON parsing + return json.loads(content) + + async def initialize(self) -> None: + """Initialize the Docker container.""" + if self._initialized: + return + + try: + # Ensure the image is available + try: + self.client.images.get(self.IMAGE) + except docker.errors.ImageNotFound: + logger.info(f"Pulling image {self.IMAGE}...") + self.client.images.pull(self.IMAGE) + + # Download and extract Lean repository for config and data (like lean-cli does) + await self._download_lean_repository() + + # Load the full Lean config from the downloaded repository + config_path = self.temp_config_dir / "config.json" + if not config_path.exists(): + raise ResearchSessionError("Failed to download Lean configuration") + + # Create a default research notebook if none exists + default_notebook = self.notebooks_dir / "research.ipynb" + if not default_notebook.exists(): + notebook_content = { + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + 
"source": ["# QuantConnect Research Environment\n", + "Welcome to the QuantConnect Research Environment. ", + "Here you can perform historical research using the QuantBook API."] + }, + { + "cell_type": "code", + "metadata": {}, + "source": ["# QuantBook is automatically available as 'qb'\n", + "# Documentation: https://www.quantconnect.com/docs/v2/research-environment"] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + } + }, + "nbformat": 4, + "nbformat_minor": 4 + } + with open(default_notebook, "w") as f: + json.dump(notebook_content, f, indent=2) + + # Set up mounts exactly like LEAN CLI + mounts = [ + # Mount notebooks directory + Mount( + target=self.NOTEBOOKS_PATH, + source=str(self.notebooks_dir), + type="bind", + read_only=False + ), + # Mount data directory (even if minimal) + Mount( + target=self.DATA_PATH, + source=str(self.data_dir), + type="bind", + read_only=True + ), + # Mount config in root + Mount( + target="/Lean/config.json", + source=str(config_path), + type="bind", + read_only=True + ), + # Also mount config in notebooks directory (like LEAN CLI) + Mount( + target=f"{self.NOTEBOOKS_PATH}/config.json", + source=str(config_path), + type="bind", + read_only=True + ) + ] + + # Add environment variables like lean-cli does + environment = { + "COMPOSER_DLL_DIRECTORY": "/Lean", + "LEAN_ENGINE": "true", + "PYTHONPATH": "/Lean" + } + + # Create the startup script similar to LEAN CLI + shell_script_commands = [ + "#!/usr/bin/env bash", + "set -e", + # Setup Jupyter config + "mkdir -p ~/.jupyter", + 'echo "c.ServerApp.disable_check_xsrf = True\nc.ServerApp.tornado_settings = {\'headers\': {\'Content-Security-Policy\': \'frame-ancestors self *\'}}" > ~/.jupyter/jupyter_server_config.py', + "mkdir -p ~/.ipython/profile_default/static/custom", + 'echo "#header-container { display: none !important; }" > ~/.ipython/profile_default/static/custom/custom.css', + # Start the research environment (look for start.sh or similar) + "if [ -f /start.sh ]; then", + " echo 'Starting research environment with /start.sh'", + " exec /start.sh", + "elif [ -f /opt/miniconda3/bin/jupyter ]; then", + " echo 'Starting Jupyter Lab directly'", + " cd /Lean/Notebooks", + " exec jupyter lab --ip=0.0.0.0 --port=8888 --no-browser --allow-root --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*'", + "else", + " echo 'No Jupyter found, keeping container alive'", + " exec sleep infinity", + "fi" + ] + + # Write the startup script to a temporary file + if self._temp_dir: + startup_script_path = Path(self._temp_dir.name) / "lean-cli-start.sh" + else: + startup_script_path = self.workspace_dir / "lean-cli-start.sh" + + startup_script_path.parent.mkdir(parents=True, exist_ok=True) + with open(startup_script_path, "w", encoding="utf-8", newline="\n") as file: + file.write("\n".join(shell_script_commands) + "\n") + + # Make the script executable + os.chmod(startup_script_path, 0o755) + + # Add the startup script mount + mounts.append(Mount( + target="/lean-cli-start.sh", + source=str(startup_script_path), + type="bind", + read_only=True + )) + + # Create container with the startup script as entrypoint + try: + self.container = self.client.containers.run( + self.IMAGE, + entrypoint=["bash", "/lean-cli-start.sh"], + mounts=mounts, + environment=environment, + working_dir=self.NOTEBOOKS_PATH, + detach=True, + mem_limit=self.memory_limit, + cpu_period=100000, + cpu_quota=int(100000 * self.cpu_limit), + 
name=f"qc_research_{self.session_id}", + remove=True, # Auto-remove when stopped + labels={ + "mcp.quantconnect.session_id": self.session_id, + "mcp.quantconnect.created_at": self.created_at.isoformat(), + }, + ports={"8888/tcp": str(self.port)}, # Expose Jupyter port to local port + ) + + # Explicitly check if container started + self.container.reload() + if self.container.status != "running": + # If not running, get logs to see what went wrong + logs = self.container.logs().decode() + raise ResearchSessionError(f"Container failed to start (status: {self.container.status}). Logs: {logs}") + + logger.info(f"Container {self.container.id} started successfully with status: {self.container.status}") + + except Exception as e: + logger.error(f"Failed to create/start container: {e}") + raise ResearchSessionError(f"Container creation failed: {e}") + + # Wait for Jupyter to start up + logger.info("Waiting for Jupyter kernel to initialize...") + jupyter_ready = False + for i in range(12): # Check for up to 60 seconds + await asyncio.sleep(5) + + # Check if Jupyter is running + jupyter_check = await asyncio.to_thread( + self.container.exec_run, + "ps aux | grep -E 'jupyter-lab|jupyter-notebook' | grep -v grep", + workdir="/" + ) + + if jupyter_check.exit_code == 0 and jupyter_check.output: + logger.info("Jupyter is running") + jupyter_ready = True + break + else: + logger.info(f"Waiting for Jupyter... ({i+1}/12)") + + if jupyter_ready: + logger.info(f"Jupyter kernel is ready on port {self.port}") + # Give it a bit more time to fully initialize the kernel + await asyncio.sleep(5) + else: + logger.warning("Jupyter did not start within timeout, proceeding anyway") + + # Initialize Python environment in the container + # First create the notebooks directory if it doesn't exist + mkdir_result = await asyncio.to_thread( + self.container.exec_run, + f"mkdir -p {self.NOTEBOOKS_PATH}", + workdir="/" + ) + + # Test basic Python functionality + init_commands = [ + ("python3 --version", "Check Python version"), + ("python3 -c \"import sys; print('Python initialized:', sys.version)\"", "Test Python import"), + ("python3 -c \"import pandas as pd; import numpy as np; print('Data libraries available')\"", "Test data libraries"), + ("ls -la /Lean/", "Check LEAN directory"), + (["/bin/bash", "-c", "ls -la /opt/miniconda3/share/jupyter/kernels/ 2>/dev/null || echo 'No Jupyter kernels directory'"], "Check Jupyter kernels"), + ] + + for cmd, description in init_commands: + logger.info(f"Running initialization: {description}") + result = await asyncio.to_thread( + self.container.exec_run, + cmd if isinstance(cmd, list) else cmd, + workdir="/" # Use root for init commands + ) + if result.exit_code != 0: + # Don't fail on non-critical checks + if "Check Jupyter kernels" in description: + logger.warning(f"Non-critical check failed: {description}") + else: + error_msg = result.output.decode() if result.output else "No output" + logger.error(f"Init command failed: {cmd} - {error_msg}") + raise ResearchSessionError(f"Container initialization failed ({description}): {error_msg}") + else: + output = result.output.decode() if result.output else "" + logger.info(f"Init success: {output.strip()}") + + self._initialized = True + + # Security logging + security_logger.log_session_created(self.session_id, self.container.id) + logger.info(f"Research session {self.session_id} initialized successfully") + + container_logger = get_container_logger(self.session_id) + container_logger.info(f"Container {self.container.id} ready for session 
{self.session_id}") + + except Exception as e: + logger.error(f"Failed to initialize research session {self.session_id}: {e}") + await self.close() + raise ResearchSessionError(f"Failed to initialize research session: {e}") + + async def execute( + self, + code: str, + timeout: Optional[int] = None + ) -> Dict[str, Any]: + """ + Execute Python code in the research container with comprehensive error handling. + + Args: + code: Python code to execute + timeout: Execution timeout in seconds (uses default if None) + + Returns: + Dictionary with execution results + """ + if not self._initialized: + await self.initialize() + + if not self.container: + raise ResearchSessionError("Container not available") + + # Security and logging + code_hash = hashlib.sha256(code.encode()).hexdigest()[:16] + container_logger = get_container_logger(self.session_id) + + # Basic security checks + if len(code) > 50000: # 50KB limit + security_logger.log_security_violation( + self.session_id, "CODE_SIZE_LIMIT", f"Code size: {len(code)} bytes" + ) + return { + "status": "error", + "output": "", + "error": "Code size exceeds 50KB limit", + "session_id": self.session_id, + } + + # Check for potentially dangerous operations + dangerous_patterns = [ + "import os", "import subprocess", "import sys", "__import__", + "exec(", "eval(", "compile(", "open(", "file(", + ] + + for pattern in dangerous_patterns: + if pattern in code.lower(): + security_logger.log_security_violation( + self.session_id, "DANGEROUS_CODE_PATTERN", f"Pattern: {pattern}" + ) + container_logger.warning(f"Potentially dangerous code pattern detected: {pattern}") + + self.last_used = datetime.utcnow() + execution_timeout = timeout or self.timeout + + container_logger.info(f"Executing code (hash: {code_hash}, timeout: {execution_timeout}s)") + + try: + # Check container health before execution + try: + container_status = self.container.status + if container_status != "running": + raise ResearchSessionError(f"Container is not running (status: {container_status})") + except Exception as e: + raise ResearchSessionError(f"Failed to check container status: {e}") + + # Execute code directly in container using exec_run (like lean-cli) + # Create a Python script that includes QuantBook initialization + script_content = f"""#!/usr/bin/env python3 +import sys +import traceback +import pandas as pd +import numpy as np + +# Import datetime first +from datetime import datetime, timedelta + +# In QuantConnect Research environment, qb should be pre-initialized by the kernel +# The user's code will have access to qb and other QuantConnect objects + +try: + # Execute the user code with qb available +{chr(10).join(' ' + line for line in code.split(chr(10)))} +except Exception as e: + # Print error to stderr so it doesn't interfere with stdout + print(f"Error: {{e}}", file=sys.stderr, flush=True) + traceback.print_exc(file=sys.stderr) + sys.exit(1) +""" + + # Debug logging + import os + from datetime import datetime as dt + debug_log_path = "/Users/taylorwilsdon/git/quantconnect-mcp/mcp_debug_output.log" + with open(debug_log_path, "a") as debug_file: + debug_file.write(f"\n=== EXECUTION DEBUG {dt.now().isoformat()} ===\n") + debug_file.write(f"Session: {self.session_id}\n") + debug_file.write(f"Code hash: {code_hash}\n") + debug_file.write(f"Script preview: {script_content[:200]}...\n") + + # Test with the low-level API to see if we get output + test_exec = await asyncio.to_thread( + self.container.client.api.exec_create, + self.container.id, + 'echo "Low-level API test"', + 
stdout=True, + stderr=True + ) + test_output = await asyncio.to_thread( + self.container.client.api.exec_start, + test_exec['Id'], + stream=False + ) + + with open(debug_log_path, "a") as debug_file: + debug_file.write(f"Low-level test output: {test_output}\n") + + # Use low-level Docker API with file-based execution + try: + # First, write the script to a file in the container + script_filename = f"quantbook_exec_{code_hash}.py" + script_path = f"{self.NOTEBOOKS_PATH}/{script_filename}" + + # Write script content to file + write_cmd = f"cat > {script_path} << 'EOF'\n{script_content}\nEOF" + write_exec = await asyncio.to_thread( + self.container.client.api.exec_create, + self.container.id, + ['/bin/sh', '-c', write_cmd], + stdout=True, + stderr=True, + workdir=self.NOTEBOOKS_PATH + ) + write_result = await asyncio.to_thread( + self.container.client.api.exec_start, + write_exec['Id'], + stream=False + ) + + # Debug log the write result + with open(debug_log_path, "a") as debug_file: + debug_file.write(f"Script write result: {write_result}\n") + + # Now execute the script file + exec_cmd = f'python3 {script_filename}' + exec_instance = await asyncio.to_thread( + self.container.client.api.exec_create, + self.container.id, + exec_cmd, + stdout=True, + stderr=True, + workdir=self.NOTEBOOKS_PATH + ) + + # Start execution and get output (not streaming) + exec_output = await asyncio.wait_for( + asyncio.to_thread( + self.container.client.api.exec_start, + exec_instance['Id'], + stream=False + ), + timeout=execution_timeout + ) + + # Get exec info for exit code + exec_info = await asyncio.to_thread( + self.container.client.api.exec_inspect, + exec_instance['Id'] + ) + + exit_code = exec_info.get('ExitCode', -1) + + # Process the output + stdout_output = exec_output.decode('utf-8', errors='replace') if exec_output else "" + stderr_output = "" + + # Debug log the raw output + with open(debug_log_path, "a") as debug_file: + debug_file.write(f"Low-level exec_output type: {type(exec_output)}\n") + debug_file.write(f"Low-level exec_output length: {len(exec_output) if exec_output else 0}\n") + debug_file.write(f"Low-level exec_output preview: {repr(exec_output[:500]) if exec_output else 'None'}\n") + debug_file.write(f"Exit code: {exit_code}\n") + debug_file.write(f"Stdout length: {len(stdout_output)}\n") + debug_file.write(f"Stdout preview: {repr(stdout_output[:500])}\n") + + # Clean up the script file + cleanup_exec = await asyncio.to_thread( + self.container.client.api.exec_create, + self.container.id, + f'rm -f {script_filename}', + workdir=self.NOTEBOOKS_PATH + ) + await asyncio.to_thread( + self.container.client.api.exec_start, + cleanup_exec['Id'], + stream=False + ) + + # Combine outputs for return + full_output = stdout_output + if stderr_output and exit_code != 0: + full_output = stdout_output + "\n[STDERR]\n" + stderr_output + + except asyncio.TimeoutError: + security_logger.log_resource_limit_hit( + self.session_id, "EXECUTION_TIMEOUT", f"{execution_timeout}s" + ) + container_logger.error(f"Code execution timed out after {execution_timeout}s") + + with open(debug_log_path, "a") as debug_file: + debug_file.write(f"TIMEOUT after {execution_timeout}s\n") + + return { + "status": "error", + "output": "", + "error": f"Code execution timed out after {execution_timeout} seconds", + "session_id": self.session_id, + "timeout": True, + } + + # Log the output for debugging + container_logger.debug(f"Container output (exit_code: {exit_code}): {repr(full_output[:200])}") + + # Check execution status based 
on exit code + if exit_code == 0: + # Success - return stdout content + security_logger.log_code_execution(self.session_id, code_hash, True) + container_logger.info(f"Code execution successful (hash: {code_hash})") + + return { + "status": "success", + "output": full_output.strip(), # Remove trailing whitespace + "error": None, + "session_id": self.session_id, + } + else: + # Error - output contains both stdout and stderr + security_logger.log_code_execution(self.session_id, code_hash, False) + container_logger.error(f"Code execution failed (hash: {code_hash}, exit_code: {exit_code})") + + return { + "status": "error", + "output": full_output.strip(), + "error": f"Code execution failed with exit code {exit_code}", + "session_id": self.session_id, + "exit_code": exit_code, + } + + except ResearchSessionError: + # Re-raise custom exceptions + raise + except Exception as e: + container_logger.error(f"Unexpected error during code execution: {e}") + security_logger.log_code_execution(self.session_id, code_hash, False) + return { + "status": "error", + "output": "", + "error": f"Unexpected execution error: {str(e)}", + "session_id": self.session_id, + "exception_type": type(e).__name__, + } + + async def save_dataframe( + self, + df: pd.DataFrame, + filename: str, + format: str = "parquet" + ) -> Dict[str, Any]: + """ + Save a pandas DataFrame to the workspace. + + Args: + df: DataFrame to save + filename: Output filename + format: File format (parquet, csv, json) + + Returns: + Operation result + """ + try: + filepath = self.workspace_dir / filename + + if format.lower() == "parquet": + df.to_parquet(filepath) + elif format.lower() == "csv": + df.to_csv(filepath, index=False) + elif format.lower() == "json": + df.to_json(filepath, orient="records", date_format="iso") + else: + raise ValueError(f"Unsupported format: {format}") + + return { + "status": "success", + "message": f"DataFrame saved to {filename}", + "filepath": str(filepath), + "format": format, + "shape": df.shape, + } + + except Exception as e: + return { + "status": "error", + "error": str(e), + "message": f"Failed to save DataFrame to {filename}", + } + + async def load_dataframe( + self, + filename: str, + format: Optional[str] = None + ) -> Dict[str, Any]: + """ + Load a pandas DataFrame from the workspace. 
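Taken together, `save_dataframe` and `load_dataframe` give a simple file-based way to move tabular results in and out of the session workspace. A usage sketch, not part of the diff; it assumes `session` is an initialized `ResearchSession` and that parquet support (e.g. pyarrow) is installed:

```python
import pandas as pd

async def roundtrip(session) -> None:
    prices = pd.DataFrame({"close": [101.2, 102.8], "volume": [1000, 1250]})

    saved = await session.save_dataframe(prices, "prices.parquet", format="parquet")
    print(saved["status"], saved["shape"])        # success (2, 2)

    loaded = await session.load_dataframe("prices.parquet")  # format auto-detected
    print(loaded["columns"], len(loaded["data"])) # ['close', 'volume'], capped at 100 rows
```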
+ + Args: + filename: Input filename + format: File format (auto-detected if None) + + Returns: + Operation result with DataFrame data + """ + try: + filepath = self.workspace_dir / filename + + if not filepath.exists(): + return { + "status": "error", + "error": f"File {filename} not found in workspace", + } + + # Auto-detect format if not specified + if format is None: + format = filepath.suffix.lower().lstrip(".") + + if format == "parquet": + df = pd.read_parquet(filepath) + elif format == "csv": + df = pd.read_csv(filepath) + elif format == "json": + df = pd.read_json(filepath) + else: + return { + "status": "error", + "error": f"Unsupported format: {format}", + } + + return { + "status": "success", + "message": f"DataFrame loaded from {filename}", + "shape": df.shape, + "columns": df.columns.tolist(), + "dtypes": df.dtypes.to_dict(), + "data": df.to_dict("records")[:100], # Limit to first 100 rows + } + + except Exception as e: + return { + "status": "error", + "error": str(e), + "message": f"Failed to load DataFrame from {filename}", + } + + def is_expired(self, max_idle_time: timedelta = timedelta(hours=1)) -> bool: + """Check if session has been idle too long.""" + return datetime.utcnow() - self.last_used > max_idle_time + + async def close(self, reason: str = "normal") -> None: + """Clean up the research session with enhanced logging.""" + logger.info(f"Closing research session {self.session_id} (reason: {reason})") + container_logger = get_container_logger(self.session_id) + + try: + if self.container: + container_id = self.container.id + try: + container_logger.info(f"Stopping container {container_id}") + self.container.stop(timeout=10) + container_logger.info(f"Container {container_id} stopped successfully") + except Exception as e: + container_logger.warning(f"Error stopping container {container_id}: {e}") + try: + container_logger.info(f"Force killing container {container_id}") + self.container.kill() + container_logger.warning(f"Container {container_id} force killed") + except Exception as e2: + container_logger.error(f"Error killing container {container_id}: {e2}") + + self.container = None + + if self._temp_dir: + container_logger.info(f"Cleaning up temporary directory: {self._temp_dir.name}") + self._temp_dir.cleanup() + self._temp_dir = None + + # Security logging + security_logger.log_session_destroyed(self.session_id, reason) + + except Exception as e: + logger.error(f"Error during session cleanup: {e}") + container_logger.error(f"Cleanup failed: {e}") + + finally: + self._initialized = False + logger.info(f"Research session {self.session_id} cleanup completed") + + def __repr__(self) -> str: + return ( + f"ResearchSession(id={self.session_id}, " + f"initialized={self._initialized}, " + f"created_at={self.created_at.isoformat()})" + ) \ No newline at end of file diff --git a/quantconnect_mcp/src/adapters/session_manager.py b/quantconnect_mcp/src/adapters/session_manager.py new file mode 100644 index 0000000..a492829 --- /dev/null +++ b/quantconnect_mcp/src/adapters/session_manager.py @@ -0,0 +1,246 @@ +"""Session Manager for QuantConnect Research Sessions""" + +import asyncio +import logging +from datetime import datetime, timedelta +from typing import Dict, List, Optional + +from .research_session_lean_cli import ResearchSession, ResearchSessionError + +logger = logging.getLogger(__name__) + + +class SessionManager: + """ + Manages multiple ResearchSession instances with lifecycle management, + cleanup, and resource monitoring. 
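A lifecycle sketch for the manager (not part of the diff), using only the methods defined in this file:

```python
import asyncio
from datetime import timedelta

from quantconnect_mcp.src.adapters.session_manager import SessionManager

async def main() -> None:
    manager = SessionManager(
        max_sessions=2,
        session_timeout=timedelta(minutes=30),
        cleanup_interval=60,          # seconds between expiry sweeps
    )
    await manager.start()             # spawns the background cleanup task
    try:
        session = await manager.get_or_create_session("research-1")
        print(await session.execute("print(1 + 1)"))
        print(manager.get_session_count())  # active vs. max vs. available slots
    finally:
        await manager.stop()          # cancels cleanup and closes all sessions

asyncio.run(main())
```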
+ """ + + def __init__( + self, + max_sessions: int = 10, + session_timeout: timedelta = timedelta(hours=1), + cleanup_interval: int = 300, # 5 minutes + ): + """ + Initialize the session manager. + + Args: + max_sessions: Maximum number of concurrent sessions + session_timeout: How long idle sessions are kept alive + cleanup_interval: How often to run cleanup in seconds + """ + self.max_sessions = max_sessions + self.session_timeout = session_timeout + self.cleanup_interval = cleanup_interval + + self._sessions: Dict[str, ResearchSession] = {} + self._cleanup_task: Optional[asyncio.Task] = None + self._running = False + + logger.info(f"SessionManager initialized (max_sessions={max_sessions})") + + async def start(self) -> None: + """Start the session manager and cleanup task.""" + if self._running: + return + + self._running = True + self._cleanup_task = asyncio.create_task(self._cleanup_loop()) + logger.info("SessionManager started") + + async def stop(self) -> None: + """Stop the session manager and clean up all sessions.""" + if not self._running: + return + + self._running = False + + # Cancel cleanup task + if self._cleanup_task: + self._cleanup_task.cancel() + try: + await self._cleanup_task + except asyncio.CancelledError: + pass + + # Clean up all sessions + await self.cleanup_all_sessions() + logger.info("SessionManager stopped") + + async def get_or_create_session( + self, + session_id: str, + **session_kwargs + ) -> ResearchSession: + """ + Get an existing session or create a new one. + + Args: + session_id: Unique session identifier + **session_kwargs: Additional arguments for ResearchSession + + Returns: + ResearchSession instance + + Raises: + ResearchSessionError: If max sessions exceeded or creation fails + """ + # Check if session already exists + if session_id in self._sessions: + session = self._sessions[session_id] + session.last_used = datetime.utcnow() + logger.debug(f"Retrieved existing session {session_id}") + return session + + # Check session limit + if len(self._sessions) >= self.max_sessions: + # Try to clean up expired sessions first + await self._cleanup_expired_sessions() + + if len(self._sessions) >= self.max_sessions: + raise ResearchSessionError( + f"Maximum number of sessions ({self.max_sessions}) reached. " + "Please close unused sessions or wait for them to expire." + ) + + # Create new session + try: + session = ResearchSession(session_id=session_id, **session_kwargs) + await session.initialize() + + self._sessions[session_id] = session + logger.info(f"Created new research session {session_id}") + return session + + except Exception as e: + logger.error(f"Failed to create session {session_id}: {e}") + raise ResearchSessionError(f"Failed to create session: {e}") + + async def get_session(self, session_id: str) -> Optional[ResearchSession]: + """ + Get an existing session without creating a new one. + + Args: + session_id: Session identifier + + Returns: + ResearchSession or None if not found + """ + session = self._sessions.get(session_id) + if session: + session.last_used = datetime.utcnow() + return session + + async def close_session(self, session_id: str) -> bool: + """ + Close and remove a specific session. 
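Sessions can also disappear without an explicit `close_session` call: the cleanup loop reaps any session whose `last_used` timestamp is older than `session_timeout`. The rule reduces to a one-line check (sketch, matching `ResearchSession.is_expired` above):

```python
from datetime import datetime, timedelta

def is_expired(last_used: datetime, max_idle: timedelta = timedelta(hours=1)) -> bool:
    return datetime.utcnow() - last_used > max_idle

print(is_expired(datetime.utcnow() - timedelta(hours=2)))  # True  -> reaped
print(is_expired(datetime.utcnow()))                       # False -> kept
```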
+ + Args: + session_id: Session identifier + + Returns: + True if session was found and closed, False otherwise + """ + session = self._sessions.pop(session_id, None) + if session: + await session.close() + logger.info(f"Closed session {session_id}") + return True + return False + + async def cleanup_all_sessions(self) -> None: + """Close and remove all sessions.""" + session_ids = list(self._sessions.keys()) + for session_id in session_ids: + await self.close_session(session_id) + + logger.info(f"Cleaned up {len(session_ids)} sessions") + + def list_sessions(self) -> List[Dict[str, any]]: + """ + Get information about all active sessions. + + Returns: + List of session information dictionaries + """ + return [ + { + "session_id": session.session_id, + "created_at": session.created_at.isoformat(), + "last_used": getattr(session, 'last_used', session.created_at).isoformat(), + "initialized": session._initialized, + "workspace_dir": str(session.workspace_dir), + "port": getattr(session, 'port', 8888), + } + for session in self._sessions.values() + ] + + def get_session_count(self) -> Dict[str, int]: + """Get session count information.""" + return { + "active_sessions": len(self._sessions), + "max_sessions": self.max_sessions, + "available_slots": max(0, self.max_sessions - len(self._sessions)), + } + + async def _cleanup_expired_sessions(self) -> int: + """Clean up expired sessions and return count of cleaned sessions.""" + expired_sessions = [] + now = datetime.utcnow() + + for session_id, session in self._sessions.items(): + if session.is_expired(self.session_timeout): + expired_sessions.append(session_id) + + # Close expired sessions + for session_id in expired_sessions: + await self.close_session(session_id) + + if expired_sessions: + logger.info(f"Cleaned up {len(expired_sessions)} expired sessions") + + return len(expired_sessions) + + async def _cleanup_loop(self) -> None: + """Background task for periodic session cleanup.""" + logger.info(f"Session cleanup loop started (interval={self.cleanup_interval}s)") + + while self._running: + try: + await asyncio.sleep(self.cleanup_interval) + if self._running: + await self._cleanup_expired_sessions() + except asyncio.CancelledError: + break + except Exception as e: + logger.error(f"Error in cleanup loop: {e}") + + logger.info("Session cleanup loop stopped") + + +# Global session manager instance +_session_manager: Optional[SessionManager] = None + + +def get_session_manager() -> SessionManager: + """Get the global session manager instance.""" + global _session_manager + if _session_manager is None: + _session_manager = SessionManager() + return _session_manager + + +async def initialize_session_manager() -> None: + """Initialize and start the global session manager.""" + manager = get_session_manager() + if not manager._running: + await manager.start() + + +async def shutdown_session_manager() -> None: + """Shutdown the global session manager.""" + global _session_manager + if _session_manager and _session_manager._running: + await _session_manager.stop() + _session_manager = None \ No newline at end of file diff --git a/quantconnect_mcp/src/resources/system_resources.py b/quantconnect_mcp/src/resources/system_resources.py index 8f6741c..1b93639 100644 --- a/quantconnect_mcp/src/resources/system_resources.py +++ b/quantconnect_mcp/src/resources/system_resources.py @@ -39,25 +39,31 @@ async def system_info() -> Dict[str, Any]: @mcp.resource("resource://quantconnect/server/status") async def server_status() -> Dict[str, Any]: """Get 
QuantConnect MCP server status and statistics.""" - from ..tools.quantbook_tools import _quantbook_instances # type: ignore - - # Count active QuantBook instances - active_instances = len(_quantbook_instances) - - # Get instance details + + # Try to get session manager status without causing import issues + active_instances = 0 instance_details = {} - for name, qb in _quantbook_instances.items(): - try: - securities_count = ( - len(qb.Securities) if hasattr(qb, "Securities") else 0 - ) - instance_details[name] = { - "type": str(type(qb).__name__), - "securities_count": securities_count, + + try: + # Only try to import session manager if quantbook is available + from ..adapters.session_manager import get_session_manager + manager = get_session_manager() + sessions = manager.list_sessions() + active_instances = len(sessions) + + for session_info in sessions: + instance_details[session_info["session_id"]] = { + "type": "ResearchSession", "status": "active", + "created_at": session_info["created_at"], + "workspace": session_info["workspace_dir"], } - except Exception as e: - instance_details[name] = {"status": "error", "error": str(e)} + except ImportError: + # QuantBook functionality not available - that's okay + pass + except Exception as e: + # Other errors in session management + instance_details["error"] = str(e) return { "server_name": "QuantConnect MCP Server", @@ -65,8 +71,9 @@ async def server_status() -> Dict[str, Any]: "active_quantbook_instances": active_instances, "instance_details": instance_details, "available_tools": [ - "QuantBook Management", - "Data Retrieval", + "QuantConnect API", + "Project Management", + "Backtesting", "Statistical Analysis", "Portfolio Optimization", "Universe Selection", diff --git a/quantconnect_mcp/src/server.py b/quantconnect_mcp/src/server.py index 8dc6e84..1cd8e62 100644 --- a/quantconnect_mcp/src/server.py +++ b/quantconnect_mcp/src/server.py @@ -5,8 +5,6 @@ from fastmcp import FastMCP from .tools import ( - register_quantbook_tools, - register_data_tools, register_analysis_tools, register_portfolio_tools, register_universe_tools, @@ -23,15 +21,20 @@ mcp: FastMCP = FastMCP( name="QuantConnect MCP Server", instructions=""" - This server provides comprehensive QuantConnect API functionality for: - - Research environment operations with QuantBook - - Historical data retrieval and analysis + This server provides QuantConnect API functionality for: + - Project and backtest management - Statistical analysis (PCA, cointegration, mean reversion) - Portfolio optimization and risk analysis - Universe selection and asset filtering - Alternative data integration + - File management and organization + + Optional QuantBook functionality (requires ENABLE_QUANTBOOK=true): + - Research environment operations with QuantBook in Docker containers + - Historical data retrieval and analysis + - Interactive Jupyter-like code execution - Use the available tools to interact with QuantConnect's research capabilities. + Use the available tools to interact with QuantConnect's capabilities. 
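Since registration moved out of `server.py`, the QuantBook tool modules are only imported when the feature is switched on. A hypothetical sketch of the gating in `main.py` (not shown in this diff) — the `ENABLE_QUANTBOOK` variable and the two registration functions are real, the helper name is illustrative:

```python
import os

def register_optional_tools(mcp) -> None:
    # Import lazily so the core server has no hard Docker/lean-cli dependency.
    if os.getenv("ENABLE_QUANTBOOK", "false").lower() == "true":
        from quantconnect_mcp.src.tools.quantbook_tools import register_quantbook_tools
        from quantconnect_mcp.src.tools.data_tools import register_data_tools

        register_quantbook_tools(mcp)
        register_data_tools(mcp)
```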
""", on_duplicate_tools="error", dependencies=[ @@ -47,63 +50,4 @@ ], ) -def main(): - """Initialize and run the QuantConnect MCP server.""" - - # Auto-configure authentication from environment variables if available - user_id = os.getenv("QUANTCONNECT_USER_ID") - api_token = os.getenv("QUANTCONNECT_API_TOKEN") - organization_id = os.getenv("QUANTCONNECT_ORGANIZATION_ID") - - if user_id and api_token: - try: - safe_print("๐Ÿ” Configuring QuantConnect authentication from environment...") - configure_auth(user_id, api_token, organization_id) - safe_print("โœ… Authentication configured successfully") - except Exception as e: - safe_print(f"โš ๏ธ Failed to configure authentication: {e}") - safe_print( - "๐Ÿ’ก You can configure authentication later using the configure_quantconnect_auth tool" - ) - - # Register all tool modules - safe_print("๐Ÿ”ง Registering QuantConnect tools...") - register_auth_tools(mcp) - register_project_tools(mcp) - register_file_tools(mcp) - register_backtest_tools(mcp) - register_quantbook_tools(mcp) - register_data_tools(mcp) - register_analysis_tools(mcp) - register_portfolio_tools(mcp) - register_universe_tools(mcp) - - # Register resources - safe_print("๐Ÿ“Š Registering system resources...") - register_system_resources(mcp) - - safe_print(f"โœ… QuantConnect MCP Server initialized") - - # Determine transport method - transport = os.getenv("MCP_TRANSPORT", "stdio") - - if transport == "streamable-http": - host = os.getenv("MCP_HOST", "127.0.0.1") - port = int(os.getenv("MCP_PORT", "8000")) - safe_print(f"๐ŸŒ Starting HTTP server on {host}:{port}") - mcp.run( - transport="streamable-http", - host=host, - port=port, - path=os.getenv("MCP_PATH", "/mcp"), - ) - elif transport == "stdio": - safe_print("๐Ÿ“ก Starting STDIO transport") - mcp.run() # Default stdio transport - else: - safe_print(f"๐Ÿš€ Starting with {transport} transport") - mcp.run(transport=transport) - - -if __name__ == "__main__": - main() +# Server configuration is now handled in main.py diff --git a/quantconnect_mcp/src/tools/__init__.py b/quantconnect_mcp/src/tools/__init__.py index 7ff77e8..15faf7e 100644 --- a/quantconnect_mcp/src/tools/__init__.py +++ b/quantconnect_mcp/src/tools/__init__.py @@ -1,7 +1,6 @@ """QuantConnect MCP Tools Package""" -from .quantbook_tools import register_quantbook_tools -from .data_tools import register_data_tools +# Core tools (always available) from .analysis_tools import register_analysis_tools from .portfolio_tools import register_portfolio_tools from .universe_tools import register_universe_tools @@ -10,9 +9,11 @@ from .file_tools import register_file_tools from .backtest_tools import register_backtest_tools +# QuantBook tools are imported conditionally in main.py to avoid Docker dependency +# from .quantbook_tools import register_quantbook_tools +# from .data_tools import register_data_tools + __all__ = [ - "register_quantbook_tools", - "register_data_tools", "register_analysis_tools", "register_portfolio_tools", "register_universe_tools", @@ -20,4 +21,7 @@ "register_project_tools", "register_file_tools", "register_backtest_tools", + # QuantBook tools excluded from __all__ - imported conditionally + # "register_quantbook_tools", + # "register_data_tools", ] diff --git a/quantconnect_mcp/src/tools/analysis_tools.py b/quantconnect_mcp/src/tools/analysis_tools.py index 85eaffd..79df0ce 100644 --- a/quantconnect_mcp/src/tools/analysis_tools.py +++ b/quantconnect_mcp/src/tools/analysis_tools.py @@ -5,7 +5,15 @@ import pandas as pd import numpy as np import json -from 
.quantbook_tools import get_quantbook_instance + +# Conditional import to avoid issues when Docker/QuantBook not available +def get_quantbook_instance(instance_name: str = "default"): + """Get QuantBook instance - always returns None when quantbook unavailable.""" + try: + from .quantbook_tools import get_quantbook_instance as _get_instance + return _get_instance(instance_name) + except ImportError: + return None def register_analysis_tools(mcp: FastMCP): diff --git a/quantconnect_mcp/src/tools/data_tools.py b/quantconnect_mcp/src/tools/data_tools.py index c8854f7..e81d87d 100644 --- a/quantconnect_mcp/src/tools/data_tools.py +++ b/quantconnect_mcp/src/tools/data_tools.py @@ -1,11 +1,14 @@ -"""Data Retrieval Tools for QuantConnect MCP Server""" +"""Data Retrieval Tools for QuantConnect MCP Server (Container-Based)""" from fastmcp import FastMCP from typing import Dict, Any, List, Optional, Union from datetime import datetime import pandas as pd import json -from .quantbook_tools import get_quantbook_instance +import logging +from .quantbook_tools import get_quantbook_session + +logger = logging.getLogger(__name__) def register_data_tools(mcp: FastMCP): @@ -26,40 +29,143 @@ async def add_equity( Returns: Dictionary containing the added security information """ - qb = get_quantbook_instance(instance_name) - if qb is None: + session = await get_quantbook_session(instance_name) + if session is None: return { "status": "error", "error": f"QuantBook instance '{instance_name}' not found", + "message": "Initialize a QuantBook instance first using initialize_quantbook", } try: - from QuantConnect import Resolution # type: ignore + # Validate resolution + valid_resolutions = ["Minute", "Hour", "Daily"] + if resolution not in valid_resolutions: + return { + "status": "error", + "error": f"Invalid resolution '{resolution}'. Must be one of: {valid_resolutions}", + } + + # Execute code to add equity in container + add_equity_code = f""" + from QuantConnect import Resolution + + # Ensure qb is initialized + if 'qb' not in globals() or qb is None: + # Initialize QuantBook - this will use the container's environment + qb = QuantBook() + print("Initialized QuantBook instance") # Map string resolution to enum - resolution_map = { + resolution_map = {{ "Minute": Resolution.Minute, "Hour": Resolution.Hour, "Daily": Resolution.Daily, - } + }} - if resolution not in resolution_map: + try: + # Add equity to QuantBook + security = qb.AddEquity("{ticker}", resolution_map["{resolution}"]) + symbol = str(security.Symbol) + + print(f"Successfully added equity '{ticker}' with {resolution} resolution") + print(f"Symbol: {{symbol}}") + + # Store result for return + result = {{ + "ticker": "{ticker}", + "symbol": symbol, + "resolution": "{resolution}", + "success": True + }} + + # Print result as JSON for MCP to parse + import json + print("=== QUANTBOOK_RESULT_START ===") + print(json.dumps(result)) + print("=== QUANTBOOK_RESULT_END ===") + + except Exception as e: + print(f"Failed to add equity '{ticker}': {{e}}") + result = {{ + "ticker": "{ticker}", + "error": str(e), + "success": False + }} + + # Print error result as JSON + import json + print("=== QUANTBOOK_RESULT_START ===") + print(json.dumps(result)) + print("=== QUANTBOOK_RESULT_END ===") + """ + + execution_result = await session.execute(add_equity_code) + + if execution_result["status"] != "success": return { "status": "error", - "error": f"Invalid resolution '{resolution}'. 
Must be one of: {list(resolution_map.keys())}",
+                "error": execution_result.get("error", "Unknown error"),
+                "message": f"Failed to add equity '{ticker}'",
+                "execution_output": execution_result.get("output", ""),
             }
 
-            symbol = qb.AddEquity(ticker, resolution_map[resolution]).Symbol
+            # Parse the JSON result from container output
+            output = execution_result.get("output", "")
+            parsed_result = None
+
+            try:
+                # Extract JSON result from container output
+                if "=== QUANTBOOK_RESULT_START ===" in output and "=== QUANTBOOK_RESULT_END ===" in output:
+                    start_marker = output.find("=== QUANTBOOK_RESULT_START ===")
+                    end_marker = output.find("=== QUANTBOOK_RESULT_END ===")
+                    if start_marker != -1 and end_marker != -1:
+                        json_start = start_marker + len("=== QUANTBOOK_RESULT_START ===\n")
+                        json_content = output[json_start:end_marker].strip()
+                        parsed_result = json.loads(json_content)
+
+                if parsed_result and parsed_result.get("success"):
+                    # Return successful result with parsed data
+                    return {
+                        "status": "success",
+                        "ticker": ticker,
+                        "symbol": parsed_result.get("symbol", ticker),
+                        "resolution": resolution,
+                        "message": f"Successfully added equity '{ticker}' with {resolution} resolution",
+                        "execution_output": output,
+                        "instance_name": instance_name,
+                    }
+                elif parsed_result and not parsed_result.get("success"):
+                    # Container execution succeeded but equity addition failed
+                    return {
+                        "status": "error",
+                        "error": parsed_result.get("error", "Unknown equity addition error"),
+                        "message": f"Failed to add equity '{ticker}'",
+                        "execution_output": output,
+                        "instance_name": instance_name,
+                    }
+                else:
+                    # Fallback if JSON parsing fails but execution succeeded
+                    return {
+                        "status": "success",
+                        "ticker": ticker,
+                        "resolution": resolution,
+                        "message": f"Successfully added equity '{ticker}' with {resolution} resolution",
+                        "execution_output": output,
+                        "instance_name": instance_name,
+                    }
 
-            return {
-                "status": "success",
-                "ticker": ticker,
-                "symbol": str(symbol),
-                "resolution": resolution,
-                "message": f"Successfully added equity '{ticker}' with {resolution} resolution",
-            }
+            except json.JSONDecodeError as e:
+                return {
+                    "status": "error",
+                    "error": f"Failed to parse container result: {e}",
+                    "message": "Container executed but result parsing failed",
+                    "execution_output": output,
+                    "instance_name": instance_name,
+                }
 
         except Exception as e:
+            logger.error(f"Failed to add equity '{ticker}' in instance '{instance_name}': {e}")
             return {
                 "status": "error",
                 "error": str(e),
@@ -81,52 +187,91 @@ async def add_multiple_equities(
 
         Returns:
             Dictionary containing results for all added securities
         """
-        qb = get_quantbook_instance(instance_name)
-        if qb is None:
+        session = await get_quantbook_session(instance_name)
+        if session is None:
             return {
                 "status": "error",
                 "error": f"QuantBook instance '{instance_name}' not found",
+                "message": "Initialize a QuantBook instance first using initialize_quantbook",
             }
 
         try:
-            from QuantConnect import Resolution  # type: ignore
+            # Validate resolution
+            valid_resolutions = ["Minute", "Hour", "Daily"]
+            if resolution not in valid_resolutions:
+                return {
+                    "status": "error",
+                    "error": f"Invalid resolution '{resolution}'. Must be one of: {valid_resolutions}",
+                }
 
-            resolution_map = {
+            # Convert tickers list to Python code representation
+            tickers_str = str(tickers)
+
+            # Execute code to add multiple equities in container
+            add_multiple_code = f"""
+            from QuantConnect import Resolution
+
+            # Ensure qb is initialized
+            if 'qb' not in globals() or qb is None:
+                # Initialize QuantBook - this will use the container's environment
+                qb = QuantBook()
+                print("Initialized QuantBook instance")
+
+            # Map string resolution to enum
+            resolution_map = {{
                 "Minute": Resolution.Minute,
                 "Hour": Resolution.Hour,
                 "Daily": Resolution.Daily,
-            }
-
-            if resolution not in resolution_map:
-                return {
-                    "status": "error",
-                    "error": f"Invalid resolution '{resolution}'. Must be one of: {list(resolution_map.keys())}",
-                }
+            }}
 
+            tickers = {tickers_str}
+            resolution = "{resolution}"
             results = []
-            symbols = {}
+            symbols = {{}}
 
             for ticker in tickers:
                 try:
-                    symbol = qb.AddEquity(ticker, resolution_map[resolution]).Symbol
-                    symbols[ticker] = str(symbol)
-                    results.append(
-                        {"ticker": ticker, "symbol": str(symbol), "status": "success"}
-                    )
+                    # Add equity to QuantBook
+                    security = qb.AddEquity(ticker, resolution_map[resolution])
+                    symbol = str(security.Symbol)
+                    symbols[ticker] = symbol
+                    results.append({{
+                        "ticker": ticker,
+                        "symbol": symbol,
+                        "status": "success"
+                    }})
+                    print(f"Added equity {{ticker}} with symbol {{symbol}}")
                 except Exception as e:
-                    results.append(
-                        {"ticker": ticker, "status": "error", "error": str(e)}
-                    )
+                    results.append({{
+                        "ticker": ticker,
+                        "status": "error",
+                        "error": str(e)
+                    }})
+                    print(f"Failed to add equity {{ticker}}: {{e}}")
+
+            print(f"Successfully added {{len([r for r in results if r['status'] == 'success'])}} out of {{len(tickers)}} equities")
+            """
+
+            execution_result = await session.execute(add_multiple_code)
+
+            if execution_result["status"] != "success":
+                return {
+                    "status": "error",
+                    "error": execution_result.get("error", "Unknown error"),
+                    "message": "Failed to add multiple equities",
+                    "execution_output": execution_result.get("output", ""),
+                }
 
             return {
                 "status": "success",
                 "resolution": resolution,
-                "symbols": symbols,
-                "results": results,
-                "total_added": len([r for r in results if r["status"] == "success"]),
+                "message": f"Processed {len(tickers)} equities",
+                "execution_output": execution_result.get("output", ""),
+                "instance_name": instance_name,
             }
 
         except Exception as e:
+            logger.error(f"Failed to add multiple equities in instance '{instance_name}': {e}")
             return {
                 "status": "error",
                 "error": str(e),
@@ -156,86 +301,186 @@ async def get_history(
 
         Returns:
             Dictionary containing historical data
         """
-        qb = get_quantbook_instance(instance_name)
-        if qb is None:
+        session = await get_quantbook_session(instance_name)
+        if session is None:
             return {
                 "status": "error",
                 "error": f"QuantBook instance '{instance_name}' not found",
+                "message": "Initialize a QuantBook instance first using initialize_quantbook",
             }
 
         try:
-            from QuantConnect import Resolution  # type: ignore
-            from datetime import datetime
-
-            # Parse dates
-            start = datetime.strptime(start_date, "%Y-%m-%d")
-            end = datetime.strptime(end_date, "%Y-%m-%d")
-
-            # Map resolution
-            resolution_map = {
-                "Minute": Resolution.Minute,
-                "Hour": Resolution.Hour,
-                "Daily": Resolution.Daily,
-            }
-
-            if resolution not in resolution_map:
+            # Validate resolution
+            valid_resolutions = ["Minute", "Hour", "Daily"]
+            if resolution not in valid_resolutions:
                 return {
                     "status": "error",
-                    "error": f"Invalid resolution '{resolution}'. Must be one of: {list(resolution_map.keys())}",
+                    "error": f"Invalid resolution '{resolution}'. Must be one of: {valid_resolutions}",
                 }
 
             # Handle single symbol vs multiple symbols
             if isinstance(symbols, str):
-                symbols = [symbols]
-
-            # Get securities keys for the symbols
-            security_keys = []
-            for symbol in symbols:
-                # Find the security in qb.Securities
-                found = False
-                for sec_key in qb.Securities.Keys:
-                    if str(sec_key).upper() == symbol.upper():
-                        security_keys.append(sec_key)
-                        found = True
-                        break
-                if not found:
-                    return {
-                        "status": "error",
-                        "error": f"Symbol '{symbol}' not found in securities. Add it first using add_equity.",
-                    }
+                symbols_list = [symbols]
+            else:
+                symbols_list = symbols
+
+            # Convert symbols list to Python code representation
+            symbols_str = str(symbols_list)
 
-            # Get historical data
-            history = qb.History(security_keys, start, end, resolution_map[resolution])
+            # Build fields filter if specified
+            fields_filter = ""
+            if fields:
+                fields_str = str(fields)
+                fields_filter = f"""
+        # Filter specific fields if requested
+        if not history.empty:
+            available_fields = [col for col in history.columns if col in {fields_str}]
+            if available_fields:
+                history = history[available_fields]
+"""
+
+            # Execute code to get historical data in container
+            get_history_code = f"""
+from QuantConnect import Resolution
+from datetime import datetime
+import pandas as pd
+
+# Ensure qb is initialized
+if 'qb' not in globals() or qb is None:
+    # Initialize QuantBook - this will use the container's environment
+    qb = QuantBook()
+    print("Initialized QuantBook instance")
+
+# Map string resolution to enum
+resolution_map = {{
+    "Minute": Resolution.Minute,
+    "Hour": Resolution.Hour,
+    "Daily": Resolution.Daily,
+}}
+
+try:
+    # Parse dates
+    start_date = datetime.strptime("{start_date}", "%Y-%m-%d")
+    end_date = datetime.strptime("{end_date}", "%Y-%m-%d")
+
+    symbols_list = {symbols_str}
+    resolution_val = resolution_map["{resolution}"]
+
+    # Get historical data
+    history = qb.History(symbols_list, start_date, end_date, resolution_val)
+
+    print(f"Retrieved history for {{symbols_list}}: {{len(history)}} data points")
+
+    if history.empty:
+        print("No data found for the specified period")
+        result = {{
+            "status": "success",
+            "data": {{}},
+            "message": "No data found for the specified period",
+            "symbols": symbols_list,
+            "start_date": "{start_date}",
+            "end_date": "{end_date}",
+            "resolution": "{resolution}",
+            "shape": [0, 0]
+        }}
+    else:
+        {fields_filter}
+
+        # Convert to JSON-serializable format
+        data = {{}}
+        for col in history.columns:
+            if col in ["open", "high", "low", "close", "volume"]:
+                if len(symbols_list) == 1:
+                    # Single symbol - simpler format
+                    data[col] = history[col].to_dict()
+                else:
+                    # Multiple symbols - unstack format
+                    data[col] = history[col].unstack(level=0).to_dict()
 
-            if history.empty:
+        result = {{
+            "status": "success",
+            "symbols": symbols_list,
+            "start_date": "{start_date}",
+            "end_date": "{end_date}",
+            "resolution": "{resolution}",
+            "data": data,
+            "shape": list(history.shape),
+        }}
+
+    # Print result as JSON for MCP to parse
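+    # NOTE: the MCP layer extracts the text between the first
+    # QUANTBOOK_RESULT_START/END pair and json.loads() it (see the parsing
+    # code below), so the sentinel pair should be emitted exactly once per run.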
{symbols_str}", + }} + +# Print error result as JSON +import json +print("=== QUANTBOOK_RESULT_START ===") +print(json.dumps(result)) +print("=== QUANTBOOK_RESULT_END ===") +""" + + execution_result = await session.execute(get_history_code) + + if execution_result["status"] != "success": return { - "status": "success", - "data": {}, - "message": "No data found for the specified period", + "status": "error", + "error": execution_result.get("error", "Unknown error"), + "message": f"Failed to retrieve history for symbols: {symbols}", + "execution_output": execution_result.get("output", ""), } - # Convert to dictionary format - if fields: - # Filter specific fields - available_fields = [col for col in history.columns if col in fields] - if available_fields: - history = history[available_fields] - - # Convert to JSON-serializable format - data = {} - for col in history.columns: - if col in ["open", "high", "low", "close", "volume"]: - data[col] = history[col].unstack(level=0).to_dict() + # Parse the JSON result from container output + output = execution_result.get("output", "") + parsed_result = None + + try: + # Extract JSON result from container output + if "=== QUANTBOOK_RESULT_START ===" in output and "=== QUANTBOOK_RESULT_END ===" in output: + start_marker = output.find("=== QUANTBOOK_RESULT_START ===") + end_marker = output.find("=== QUANTBOOK_RESULT_END ===") + if start_marker != -1 and end_marker != -1: + json_start = start_marker + len("=== QUANTBOOK_RESULT_START ===\n") + json_content = output[json_start:end_marker].strip() + parsed_result = json.loads(json_content) + + if parsed_result: + # Return the parsed result with additional metadata + result = parsed_result.copy() + result["execution_output"] = output + result["instance_name"] = instance_name + return result + else: + # Fallback if JSON parsing fails + return { + "status": "success", + "symbols": symbols, + "start_date": start_date, + "end_date": end_date, + "resolution": resolution, + "message": f"Successfully executed but no structured result found", + "execution_output": output, + "instance_name": instance_name, + } - return { - "status": "success", - "symbols": symbols, - "start_date": start_date, - "end_date": end_date, - "resolution": resolution, - "data": data, - "shape": list(history.shape), - } + except json.JSONDecodeError as e: + return { + "status": "error", + "error": f"Failed to parse container result: {e}", + "message": f"Container executed but result parsing failed", + "execution_output": output, + "instance_name": instance_name, + } except Exception as e: return { @@ -259,45 +504,21 @@ async def add_alternative_data( Returns: Dictionary containing alternative data subscription info """ - qb = get_quantbook_instance(instance_name) - if qb is None: + session = await get_quantbook_session(instance_name) + if session is None: return { "status": "error", "error": f"QuantBook instance '{instance_name}' not found", + "message": "Initialize a QuantBook instance first using initialize_quantbook", } try: - # Map data types to QuantConnect classes - if data_type == "SmartInsiderTransaction": - from QuantConnect.DataSource import SmartInsiderTransaction # type: ignore - - # Find the symbol in securities - target_symbol = None - for sec_key in qb.Securities.Keys: - if str(sec_key).upper() == symbol.upper(): - target_symbol = sec_key - break - - if target_symbol is None: - return { - "status": "error", - "error": f"Symbol '{symbol}' not found. 
Add it as equity first.", - } - - alt_symbol = qb.AddData(SmartInsiderTransaction, target_symbol).Symbol - - return { - "status": "success", - "data_type": data_type, - "symbol": symbol, - "alt_symbol": str(alt_symbol), - "message": f"Successfully added {data_type} data for {symbol}", - } - else: - return { - "status": "error", - "error": f"Unsupported data type '{data_type}'. Currently supported: SmartInsiderTransaction", - } + # TODO: Convert to container execution like other functions + return { + "status": "error", + "error": "Alternative data functions need to be updated for container execution", + "message": f"add_alternative_data is temporarily disabled pending container execution update", + } except Exception as e: return { @@ -327,64 +548,20 @@ async def get_alternative_data_history( Returns: Dictionary containing alternative data history """ - qb = get_quantbook_instance(instance_name) - if qb is None: + session = await get_quantbook_session(instance_name) + if session is None: return { "status": "error", "error": f"QuantBook instance '{instance_name}' not found", + "message": "Initialize a QuantBook instance first using initialize_quantbook", } try: - from datetime import datetime - - start = datetime.strptime(start_date, "%Y-%m-%d") - end = datetime.strptime(end_date, "%Y-%m-%d") - - if isinstance(symbols, str): - symbols = [symbols] - - # Get alternative data symbols - alt_symbols = [] - for symbol in symbols: - # Find alternative data symbols for this equity - for sec_key in qb.Securities.Keys: - if ( - data_type.lower() in str(sec_key).lower() - and symbol.upper() in str(sec_key).upper() - ): - alt_symbols.append(sec_key) - - if not alt_symbols: - return { - "status": "error", - "error": f"No {data_type} data found for symbols {symbols}. 
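+            # Sketch of the intended container port (illustrative only, not yet
+            # wired up): mirror add_equity by rendering a code template and
+            # parsing the result sentinels, e.g.
+            #     alt_code = f'''
+            #     from QuantConnect.DataSource import SmartInsiderTransaction
+            #     alt_symbol = qb.AddData(SmartInsiderTransaction, "{symbol}").Symbol
+            #     '''
+            #     execution_result = await session.execute(alt_code)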
+            return {
+                "status": "error",
+                "error": "Alternative data functions need to be updated for container execution",
+                "message": "add_alternative_data is temporarily disabled pending container execution update",
+            }
 
         except Exception as e:
             return {
@@ -327,64 +548,20 @@ async def get_alternative_data_history(
 
         Returns:
             Dictionary containing alternative data history
         """
-        qb = get_quantbook_instance(instance_name)
-        if qb is None:
+        session = await get_quantbook_session(instance_name)
+        if session is None:
             return {
                 "status": "error",
                 "error": f"QuantBook instance '{instance_name}' not found",
+                "message": "Initialize a QuantBook instance first using initialize_quantbook",
             }
 
         try:
-            from datetime import datetime
-
-            start = datetime.strptime(start_date, "%Y-%m-%d")
-            end = datetime.strptime(end_date, "%Y-%m-%d")
-
-            if isinstance(symbols, str):
-                symbols = [symbols]
-
-            # Get alternative data symbols
-            alt_symbols = []
-            for symbol in symbols:
-                # Find alternative data symbols for this equity
-                for sec_key in qb.Securities.Keys:
-                    if (
-                        data_type.lower() in str(sec_key).lower()
-                        and symbol.upper() in str(sec_key).upper()
-                    ):
-                        alt_symbols.append(sec_key)
-
-            if not alt_symbols:
-                return {
-                    "status": "error",
-                    "error": f"No {data_type} data found for symbols {symbols}. Add alternative data first.",
-                }
-
-            # Get history
-            from QuantConnect import Resolution  # type: ignore
-
-            history = qb.History(alt_symbols, start, end, Resolution.Daily)
-
-            if history.empty:
-                return {
-                    "status": "success",
-                    "data": {},
-                    "message": "No alternative data found for the specified period",
-                }
-
-            # Convert to JSON format
-            data = {}
-            for col in history.columns:
-                data[col] = history[col].unstack(level=0).to_dict()
-
+            # TODO: Convert to container execution like other functions
             return {
-                "status": "success",
-                "data_type": data_type,
-                "symbols": symbols,
-                "start_date": start_date,
-                "end_date": end_date,
-                "data": data,
-                "shape": list(history.shape),
+                "status": "error",
+                "error": "Alternative data functions need to be updated for container execution",
+                "message": "get_alternative_data_history is temporarily disabled pending container execution update",
             }
 
         except Exception as e:
diff --git a/quantconnect_mcp/src/tools/portfolio_tools.py b/quantconnect_mcp/src/tools/portfolio_tools.py
index 79d952d..820791c 100644
--- a/quantconnect_mcp/src/tools/portfolio_tools.py
+++ b/quantconnect_mcp/src/tools/portfolio_tools.py
@@ -5,7 +5,15 @@
 import pandas as pd
 import numpy as np
 import json
-from .quantbook_tools import get_quantbook_instance
+
+# Conditional import to avoid issues when Docker/QuantBook not available
+def get_quantbook_instance(instance_name: str = "default"):
+    """Get the QuantBook instance; returns None when QuantBook support is unavailable."""
+    try:
+        from .quantbook_tools import get_quantbook_instance as _get_instance
+        return _get_instance(instance_name)
+    except ImportError:
+        return None
 
 
 def register_portfolio_tools(mcp: FastMCP):
diff --git a/quantconnect_mcp/src/tools/quantbook_tools.py b/quantconnect_mcp/src/tools/quantbook_tools.py
index d6bed83..1df2cde 100644
--- a/quantconnect_mcp/src/tools/quantbook_tools.py
+++ b/quantconnect_mcp/src/tools/quantbook_tools.py
@@ -1,11 +1,37 @@
-"""QuantBook Management Tools for QuantConnect MCP Server"""
+"""QuantBook Management Tools for QuantConnect MCP Server (Container-Based)
+
+CRITICAL FOR LLMs - HOW TO EXECUTE QUANTBOOK CODE:
+- DO NOT execute standalone Python scripts - qb won't be available!
+- INSTEAD: Modify the existing /LeanCLI/research.ipynb notebook file
+- qb is ONLY available in Jupyter notebooks with the Foundation-Py-Default kernel
+
+CORRECT WORKFLOW for execute_quantbook_code:
+1. Read the existing /LeanCLI/research.ipynb file
+2. Add your code as a new cell to the notebook
+3. Execute the notebook to get results
+4. qb will be available automatically in the notebook
+
+ENVIRONMENT NOTES:
+- Research notebooks are located at /LeanCLI in the container
+- A default starter notebook 'research.ipynb' is automatically created
+- QuantBook is pre-initialized and available as 'qb' in all notebooks
+- When creating new notebooks, they MUST use the 'Foundation-Py-Default' kernel
+"""
+
+import asyncio
 
 from fastmcp import FastMCP
 from typing import Dict, Any, List, Optional
 import json
+import logging
+
+from ..adapters import (
+    SessionManager,
+    ResearchSession,
+    get_session_manager,
+    initialize_session_manager,
+)
 
-# Global QuantBook instance storage
-_quantbook_instances: Dict[str, Any] = {}
+logger = logging.getLogger(__name__)
 
 
 def register_quantbook_tools(mcp: FastMCP):
@@ -16,42 +42,110 @@ async def initialize_quantbook(
         instance_name: str = "default",
         organization_id: Optional[str] = None,
         token: Optional[str] = None,
+        memory_limit: str = "2g",
+        cpu_limit: float = 1.0,
+        timeout: int = 300,
     ) -> Dict[str, Any]:
         """
-        Initialize a new QuantBook instance for research operations.
+        Initialize a new QuantBook instance in a Docker container for research operations.
+
+        IMPORTANT: Research notebooks are located at /LeanCLI in the container.
+        - A default starter notebook 'research.ipynb' is automatically created
+        - QuantBook is pre-initialized and available as 'qb' in all notebooks
+        - When creating new notebooks, they MUST use the 'Foundation-Py-Default' kernel to have qb access
 
         Args:
             instance_name: Name identifier for this QuantBook instance
-            organization_id: Optional organization ID for QuantConnect
-            token: Optional API token for QuantConnect
+            organization_id: Optional organization ID for QuantConnect (not used in container)
+            token: Optional API token for QuantConnect (not used in container)
+            memory_limit: Container memory limit (e.g., "2g", "512m")
+            cpu_limit: Container CPU limit (fraction of CPU, e.g. 1.0 = 1 CPU)
+            timeout: Default execution timeout in seconds
 
         Returns:
             Dictionary containing initialization status and instance info
         """
         try:
-            # Import QuantConnect modules
-            from QuantConnect.Research import QuantBook  # type: ignore
+            # Initialize session manager if needed
+            await initialize_session_manager()
+            manager = get_session_manager()
 
-            # Create new QuantBook instance
-            qb = QuantBook()
+            # Create or get research session
+            # Note: lean-cli manages memory and CPU limits internally
+            session = await manager.get_or_create_session(
+                session_id=instance_name,
+                # Only pass supported parameters for lean-cli based session
+                port=None,  # Will use default or env var
+            )
 
-            # Store the instance
-            _quantbook_instances[instance_name] = qb
+            # Initialize the session and wait for container to be ready
+            await session.initialize()
 
-            return {
-                "status": "success",
-                "instance_name": instance_name,
-                "message": f"QuantBook instance '{instance_name}' initialized successfully",
-                "available_instances": list(_quantbook_instances.keys()),
-            }
+            # Check if session initialized successfully
+            if not session._initialized:
+                return {
+                    "status": "error",
+                    "error": "Failed to initialize research session",
+                    "message": f"Failed to initialize QuantBook instance '{instance_name}'",
+                }
+
+            # Try a simple test to see if we can detect the container is ready
+            # But don't execute complex QuantBook code during initialization
+            try:
+                test_result = await session.execute("print('Container ready')", timeout=10)
+                container_ready = test_result["status"] == "success"
+            except Exception:
+                container_ready = False
+
+            if container_ready:
+                return {
+                    "status": "success",
+                    "instance_name": instance_name,
+                    "session_id": session.session_id,
+                    "message": f"QuantBook instance '{instance_name}' initialized successfully in container",
+                    "container_info": {
+                        "memory_limit": memory_limit,
+                        "cpu_limit": cpu_limit,
+                        "timeout": timeout,
+                        "workspace": str(session.workspace_dir),
+                        "port": session.port,
+                    },
+                    "usage_instructions": {
+                        "CRITICAL": "qb is pre-initialized in research.ipynb - do not import or re-create QuantBook; only fall back to qb = QuantBook() as the first line of a cell if qb is missing",
+                        "DO": "Use qb directly: equity = qb.AddEquity('AAPL')",
+                        "example": "equity = qb.AddEquity('AAPL')\nhistory = qb.History(equity.Symbol, 10, Resolution.Daily)",
+                        "notebook_location": "/LeanCLI - where research.ipynb is located",
+                        "kernel": "Use 'Foundation-Py-Default' kernel for new notebooks"
+                    }
+                }
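+                # Illustrative first cell under these conventions (ticker is an
+                # example; qb itself comes pre-initialized by the
+                # Foundation-Py-Default kernel):
+                #     spy = qb.AddEquity("SPY")
+                #     print(qb.History(spy.Symbol, 10, Resolution.Daily).head())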
+            else:
+                # Container is still starting up, but session was created successfully
+                return {
+                    "status": "success",
+                    "instance_name": instance_name,
+                    "session_id": session.session_id,
+                    "message": f"QuantBook instance '{instance_name}' is starting up. Jupyter Lab will be available at http://localhost:{session.port}",
+                    "container_info": {
+                        "memory_limit": memory_limit,
+                        "cpu_limit": cpu_limit,
+                        "timeout": timeout,
+                        "workspace": str(session.workspace_dir),
+                        "port": session.port,
+                    },
+                    "note": "Container is still starting. You can check the web interface or try executing code in a few seconds.",
+                    "usage_instructions": {
+                        "CRITICAL": "qb is pre-initialized in research.ipynb - do not import or re-create QuantBook; only fall back to qb = QuantBook() as the first line of a cell if qb is missing",
+                        "DO": "Use qb directly: equity = qb.AddEquity('AAPL')",
+                        "example": "equity = qb.AddEquity('AAPL')\nhistory = qb.History(equity.Symbol, 10, Resolution.Daily)",
+                        "notebook_location": "/LeanCLI - where research.ipynb is located",
+                        "kernel": "Use 'Foundation-Py-Default' kernel for new notebooks"
+                    }
+                }
 
-        except ImportError as e:
-            return {
-                "status": "error",
-                "error": f"Failed to import QuantConnect modules: {str(e)}",
-                "message": "Ensure QuantConnect LEAN is properly installed",
-            }
         except Exception as e:
+            logger.error(
+                f"Failed to initialize QuantBook instance '{instance_name}': {e}"
+            )
             return {
                 "status": "error",
                 "error": str(e),
@@ -66,11 +160,24 @@ async def list_quantbook_instances() -> Dict[str, Any]:
 
         Returns:
             Dictionary containing all active QuantBook instances
         """
-        return {
-            "instances": list(_quantbook_instances.keys()),
-            "count": len(_quantbook_instances),
-            "status": "success",
-        }
+        try:
+            manager = get_session_manager()
+            sessions = manager.list_sessions()
+            session_count = manager.get_session_count()
+
+            return {
+                "instances": [s["session_id"] for s in sessions],
+                "count": len(sessions),
+                "session_details": sessions,
+                "capacity": session_count,
+                "status": "success",
+            }
+        except Exception as e:
+            return {
+                "status": "error",
+                "error": str(e),
+                "message": "Failed to list QuantBook instances",
+            }
 
     @mcp.tool()
     async def get_quantbook_info(instance_name: str = "default") -> Dict[str, Any]:
@@ -83,40 +190,158 @@ async def get_quantbook_info(instance_name: str = "default") -> Dict[str, Any]:
 
         Args:
             instance_name: Name of the QuantBook instance
 
         Returns:
             Dictionary containing instance information
         """
-        if instance_name not in _quantbook_instances:
+        try:
+            manager = get_session_manager()
+            session = await manager.get_session(instance_name)
+
+            if session is None:
+                available_sessions = [s["session_id"] for s in manager.list_sessions()]
+                return {
+                    "status": "error",
+                    "error": f"QuantBook instance '{instance_name}' not found",
+                    "available_instances": available_sessions,
+                }
+
+            # Get QuantBook info from container
+            info_code = """
+        # Ensure qb is initialized
+        if 'qb' not in globals() or qb is None:
+            # Initialize QuantBook - this will use the container's environment
+            qb = QuantBook()
+            print("Initialized QuantBook instance")
+
+        try:
+            # Get securities count
+            securities_count = len(qb.Securities) if hasattr(qb, 'Securities') else 0
+
+            # Get available methods
+            available_methods = [method for method in dir(qb) if not method.startswith('_')]
+
+            print(f"Securities count: {securities_count}")
+            print(f"Available methods: {len(available_methods)}")
+            print(f"QuantBook type: {type(qb).__name__}")
+
+            # Store results for JSON return
+            qb_info = {
+                'securities_count': securities_count,
+                'available_methods': available_methods[:50],  # Limit to first 50 methods
+                'total_methods': len(available_methods),
+                'type': type(qb).__name__
+            }
+
+        except Exception as e:
+            print(f"Error getting QuantBook info: {e}")
+            qb_info = {
+                'error': str(e),
+                'securities_count': 0,
+                'available_methods': [],
+                'total_methods': 0,
+                'type': 'Unknown'
+            }
+        """
+
+            result = await session.execute(info_code)
+
+            # Handle case where container is still starting
+            if result["status"] == "error" and "Container not found" in result.get(
+                "error", ""
+            ):
+                return {
+                    "status": "success",
+                    "instance_name": instance_name,
+                    "session_id": session.session_id,
+                    "container_info": {
+                        "created_at": session.created_at.isoformat(),
+                        "last_used": session.last_used.isoformat(),
+                        "port": session.port,
+                        "workspace": str(session.workspace_dir),
+                        "initialized": session._initialized,
+                        "jupyter_url": f"http://localhost:{session.port}",
+                    },
+                    "message": "Container is still starting up. Jupyter Lab should be available soon.",
+                    "note": result.get("message", ""),
+                }
+
+            return {
+                "status": "success",
+                "instance_name": instance_name,
+                "session_id": session.session_id,
+                "container_info": {
+                    "created_at": session.created_at.isoformat(),
+                    "last_used": session.last_used.isoformat(),
+                    "port": session.port,
+                    "workspace": str(session.workspace_dir),
+                    "initialized": session._initialized,
+                },
+                "execution_result": result,
+            }
+
+        except Exception as e:
+            logger.error(
+                f"Failed to get info for QuantBook instance '{instance_name}': {e}"
+            )
             return {
                 "status": "error",
-                "error": f"QuantBook instance '{instance_name}' not found",
-                "available_instances": list(_quantbook_instances.keys()),
+                "error": str(e),
+                "message": f"Failed to get info for QuantBook instance '{instance_name}'",
             }
 
+    @mcp.tool()
+    async def check_quantbook_container(
+        instance_name: str = "default",
+    ) -> Dict[str, Any]:
+        """
+        Check if the container for a QuantBook instance is running.
+
+        Args:
+            instance_name: Name of the QuantBook instance
+
+        Returns:
+            Dictionary containing container status
+        """
         try:
-            qb = _quantbook_instances[instance_name]
+            manager = get_session_manager()
+            session = await manager.get_session(instance_name)
+
+            if session is None:
+                available_sessions = [s["session_id"] for s in manager.list_sessions()]
+                return {
+                    "status": "error",
+                    "error": f"QuantBook instance '{instance_name}' not found",
+                    "available_instances": available_sessions,
+                }
 
-            # Get basic info about the instance
-            securities_count = len(qb.Securities) if hasattr(qb, "Securities") else 0
+            # Try to find the container
+            await session._find_container()
 
             return {
                 "status": "success",
                 "instance_name": instance_name,
-                "securities_count": securities_count,
-                "type": str(type(qb).__name__),
-                "available_methods": [
-                    method for method in dir(qb) if not method.startswith("_")
-                ],
+                "container_found": session.container is not None,
+                "container_name": session.container.name if session.container else None,
+                "port": session.port,
+                "jupyter_url": f"http://localhost:{session.port}",
+                "message": (
+                    "Container is running"
+                    if session.container
+                    else "Container not yet found - may still be starting"
+                ),
             }
 
         except Exception as e:
+            logger.error(
+                f"Failed to check container for instance '{instance_name}': {e}"
+            )
             return {
                 "status": "error",
                 "error": str(e),
-                "message": f"Failed to get info for QuantBook instance '{instance_name}'",
+                "message": f"Failed to check container for instance '{instance_name}'",
             }
 
     @mcp.tool()
     async def remove_quantbook_instance(instance_name: str) -> Dict[str, Any]:
         """
-        Remove a QuantBook instance from memory.
+        Remove a QuantBook instance and clean up its container.
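+
+        Illustrative call and response (hypothetical values):
+
+            result = await remove_quantbook_instance("research-1")
+            # {"status": "success", "message": "QuantBook instance 'research-1' removed successfully", ...}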
 
         Args:
             instance_name: Name of the QuantBook instance to remove
 
         Returns:
             Dictionary containing removal status
         """
-        if instance_name not in _quantbook_instances:
+        try:
+            manager = get_session_manager()
+            success = await manager.close_session(instance_name)
+
+            if not success:
+                available_sessions = [s["session_id"] for s in manager.list_sessions()]
+                return {
+                    "status": "error",
+                    "error": f"QuantBook instance '{instance_name}' not found",
+                    "available_instances": available_sessions,
+                }
+
+            remaining_sessions = [s["session_id"] for s in manager.list_sessions()]
+            return {
+                "status": "success",
+                "message": f"QuantBook instance '{instance_name}' removed successfully",
+                "remaining_instances": remaining_sessions,
+            }
+
+        except Exception as e:
+            logger.error(f"Failed to remove QuantBook instance '{instance_name}': {e}")
+            return {
+                "status": "error",
+                "error": str(e),
+                "message": f"Failed to remove QuantBook instance '{instance_name}'",
+            }
+
+    @mcp.tool()
+    async def execute_quantbook_code(
+        code: str,
+        instance_name: str = "default",
+        timeout: Optional[int] = None,
+    ) -> Dict[str, Any]:
+        """
+        Execute Python code in a QuantBook container.
+
+        IMPORTANT: The code runs via notebook modification, not as a standalone
+        script: it is added to /LeanCLI/research.ipynb and executed there,
+        because QuantBook (qb) is ONLY available inside Jupyter notebooks with
+        the Foundation-Py-Default kernel.
+
+        Each execution will:
+        1. Read the existing /LeanCLI/research.ipynb file
+        2. Add the code as a new cell to the notebook
+        3. Execute the notebook cell
+        4. Return the results
+
+        Example workflow:
+        1. Read /LeanCLI/research.ipynb
+        2. Add cell with: equity = qb.AddEquity("AAPL")
+        3. Execute the notebook
+        4. Return results
+
+        Args:
+            code: Python code to add to research.ipynb and execute
+            instance_name: Name of the QuantBook instance
+            timeout: Execution timeout in seconds (uses session default if None)
+
+        Returns:
+            Dictionary containing execution results
+        """
+        try:
+            manager = get_session_manager()
+            session = await manager.get_session(instance_name)
+
+            if session is None:
+                available_sessions = [s["session_id"] for s in manager.list_sessions()]
+                return {
+                    "status": "error",
+                    "error": f"QuantBook instance '{instance_name}' not found",
+                    "available_instances": available_sessions,
+                    "message": "Initialize a QuantBook instance first using initialize_quantbook",
+                }
+
+            # Execute the code
+            result = await session.execute(code, timeout=timeout)
+            result["instance_name"] = instance_name
+
+            return result
+
+        except Exception as e:
+            logger.error(
+                f"Failed to execute code in QuantBook instance '{instance_name}': {e}"
+            )
             return {
                 "status": "error",
-                "error": f"QuantBook instance '{instance_name}' not found",
-                "available_instances": list(_quantbook_instances.keys()),
+                "error": str(e),
+                "message": f"Failed to execute code in QuantBook instance '{instance_name}'",
+                "instance_name": instance_name,
             }
 
+    @mcp.tool()
+    async def get_session_manager_status() -> Dict[str, Any]:
+        """
+        Get status information about the session manager.
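+
+        Example return shape (illustrative; keys match the body below):
+
+            {"status": "success", "running": True, "session_count": ...,
+             "sessions": [...], "configuration": {"max_sessions": ...,
+             "session_timeout_hours": ..., "cleanup_interval_seconds": ...}}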
+
+        Returns:
+            Dictionary containing session manager status
+        """
         try:
-            del _quantbook_instances[instance_name]
+            manager = get_session_manager()
+            session_count = manager.get_session_count()
+            sessions = manager.list_sessions()
+
             return {
                 "status": "success",
-                "message": f"QuantBook instance '{instance_name}' removed successfully",
-                "remaining_instances": list(_quantbook_instances.keys()),
+                "running": manager._running,
+                "session_count": session_count,
+                "sessions": sessions,
+                "configuration": {
+                    "max_sessions": manager.max_sessions,
+                    "session_timeout_hours": manager.session_timeout.total_seconds()
+                    / 3600,
+                    "cleanup_interval_seconds": manager.cleanup_interval,
+                },
             }
+
         except Exception as e:
             return {
                 "status": "error",
                 "error": str(e),
-                "message": f"Failed to remove QuantBook instance '{instance_name}'",
+                "message": "Failed to get session manager status",
             }
 
 
+async def get_quantbook_session(
+    instance_name: str = "default",
+) -> Optional[ResearchSession]:
+    """
+    Helper function to get the QuantBook session for other tools.
+
+    Args:
+        instance_name: Name of the QuantBook instance
+
+    Returns:
+        ResearchSession instance or None if not found
+    """
+    try:
+        manager = get_session_manager()
+        return await manager.get_session(instance_name)
+    except Exception as e:
+        logger.error(f"Failed to get QuantBook session '{instance_name}': {e}")
+        return None
+
+
 def get_quantbook_instance(instance_name: str = "default"):
-    """Helper function to get QuantBook instance for other tools."""
-    return _quantbook_instances.get(instance_name)
+    """
+    Legacy compatibility shim for the old synchronous get_quantbook_instance API.
+    Returns None since that API is no longer supported.
+
+    This function exists to prevent import errors but will always return None,
+    causing tools that depend on it to fail gracefully.
+    """
+    logger.warning(
+        "get_quantbook_instance is deprecated and no longer functional. Use get_quantbook_session instead."
+    )
+    return None
diff --git a/quantconnect_mcp/src/tools/universe_tools.py b/quantconnect_mcp/src/tools/universe_tools.py
index fa8e03b..eeac04d 100644
--- a/quantconnect_mcp/src/tools/universe_tools.py
+++ b/quantconnect_mcp/src/tools/universe_tools.py
@@ -5,7 +5,15 @@
 import pandas as pd
 import numpy as np
 from datetime import datetime
-from .quantbook_tools import get_quantbook_instance
+
+# Conditional import to avoid issues when Docker/QuantBook not available
+def get_quantbook_instance(instance_name: str = "default"):
+    """Get the QuantBook instance; returns None when QuantBook support is unavailable."""
+    try:
+        from .quantbook_tools import get_quantbook_instance as _get_instance
+        return _get_instance(instance_name)
+    except ImportError:
+        return None
 
 
 async def _get_etf_constituents_helper(
diff --git a/quantconnect_mcp/src/utils.py b/quantconnect_mcp/src/utils.py
index f998a5b..5b0df7b 100644
--- a/quantconnect_mcp/src/utils.py
+++ b/quantconnect_mcp/src/utils.py
@@ -7,7 +7,7 @@ def safe_print(text):
-    """Print text safely, handling emojis and MCP server context.
+    """Print text safely, handling emojis, broken pipes, and MCP server context.
 
     Don't print to stderr when running as MCP server via uvx to avoid JSON parsing errors.
     Check if we're running as MCP server (no TTY and uvx in process name).
@@ -16,10 +16,28 @@ def safe_print(text):
     # Check if we're running as MCP server (no TTY and uvx in process name)
     if not sys.stderr.isatty():
         # Running as MCP server, suppress output to avoid JSON parsing errors
-        logger.debug(f"[MCP Server] {text}")
+        try:
+            logger.debug(f"[MCP Server] {text}")
+        except Exception:
+            # If logging fails, just ignore silently
+            pass
         return
 
     try:
         print(text, file=sys.stderr)
-    except UnicodeEncodeError:
-        print(text.encode('ascii', errors='replace').decode(), file=sys.stderr)
\ No newline at end of file
+        sys.stderr.flush()  # Ensure immediate output
+    except (UnicodeEncodeError, OSError, BrokenPipeError):
+        try:
+            # Handle broken pipes and encoding errors gracefully
+            if isinstance(text, str):
+                # Try ASCII fallback for encoding issues
+                safe_text = text.encode('ascii', errors='replace').decode()
+                print(safe_text, file=sys.stderr)
+                sys.stderr.flush()
+        except (OSError, BrokenPipeError):
+            # If we still can't print, log instead
+            try:
+                logger.info(f"[Output] {text}")
+            except Exception:
+                # Final fallback - just ignore if nothing works
+                pass
\ No newline at end of file
diff --git a/uv.lock b/uv.lock
index e0b4070..3b522fd 100644
--- a/uv.lock
+++ b/uv.lock
@@ -138,6 +138,41 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/c5/55/51844dd50c4fc7a33b653bfaba4c2456f06955289ca770a5dbd5fd267374/cfgv-3.4.0-py2.py3-none-any.whl", hash = "sha256:b7265b1f29fd3316bfcd2b330d63d024f2bfd8bcb8b0272f8e19a504856c48f9", size = 7249, upload_time = "2023-08-12T20:38:16.269Z" },
 ]
 
+[[package]]
+name = "charset-normalizer"
+version = "3.4.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/e4/33/89c2ced2b67d1c2a61c19c6751aa8902d46ce3dacb23600a283619f5a12d/charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63", size = 126367, upload_time = "2025-05-02T08:34:42.01Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/d7/a4/37f4d6035c89cac7930395a35cc0f1b872e652eaafb76a6075943754f095/charset_normalizer-3.4.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0c29de6a1a95f24b9a1aa7aefd27d2487263f00dfd55a77719b530788f75cff7", size = 199936, upload_time = "2025-05-02T08:32:33.712Z" },
+    { url = "https://files.pythonhosted.org/packages/ee/8a/1a5e33b73e0d9287274f899d967907cd0bf9c343e651755d9307e0dbf2b3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cddf7bd982eaa998934a91f69d182aec997c6c468898efe6679af88283b498d3", size = 143790, upload_time = "2025-05-02T08:32:35.768Z" },
+    { url = "https://files.pythonhosted.org/packages/66/52/59521f1d8e6ab1482164fa21409c5ef44da3e9f653c13ba71becdd98dec3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcbe676a55d7445b22c10967bceaaf0ee69407fbe0ece4d032b6eb8d4565982a", size = 153924, upload_time = "2025-05-02T08:32:37.284Z" },
+    { url = "https://files.pythonhosted.org/packages/86/2d/fb55fdf41964ec782febbf33cb64be480a6b8f16ded2dbe8db27a405c09f/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d41c4d287cfc69060fa91cae9683eacffad989f1a10811995fa309df656ec214", size = 146626, upload_time = "2025-05-02T08:32:38.803Z" },
"sha256:4e594135de17ab3866138f496755f302b72157d115086d100c3f19370839dd3a", size = 148567, upload_time = "2025-05-02T08:32:40.251Z" }, + { url = "https://files.pythonhosted.org/packages/09/14/957d03c6dc343c04904530b6bef4e5efae5ec7d7990a7cbb868e4595ee30/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf713fe9a71ef6fd5adf7a79670135081cd4431c2943864757f0fa3a65b1fafd", size = 150957, upload_time = "2025-05-02T08:32:41.705Z" }, + { url = "https://files.pythonhosted.org/packages/0d/c8/8174d0e5c10ccebdcb1b53cc959591c4c722a3ad92461a273e86b9f5a302/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a370b3e078e418187da8c3674eddb9d983ec09445c99a3a263c2011993522981", size = 145408, upload_time = "2025-05-02T08:32:43.709Z" }, + { url = "https://files.pythonhosted.org/packages/58/aa/8904b84bc8084ac19dc52feb4f5952c6df03ffb460a887b42615ee1382e8/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a955b438e62efdf7e0b7b52a64dc5c3396e2634baa62471768a64bc2adb73d5c", size = 153399, upload_time = "2025-05-02T08:32:46.197Z" }, + { url = "https://files.pythonhosted.org/packages/c2/26/89ee1f0e264d201cb65cf054aca6038c03b1a0c6b4ae998070392a3ce605/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:7222ffd5e4de8e57e03ce2cef95a4c43c98fcb72ad86909abdfc2c17d227fc1b", size = 156815, upload_time = "2025-05-02T08:32:48.105Z" }, + { url = "https://files.pythonhosted.org/packages/fd/07/68e95b4b345bad3dbbd3a8681737b4338ff2c9df29856a6d6d23ac4c73cb/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:bee093bf902e1d8fc0ac143c88902c3dfc8941f7ea1d6a8dd2bcb786d33db03d", size = 154537, upload_time = "2025-05-02T08:32:49.719Z" }, + { url = "https://files.pythonhosted.org/packages/77/1a/5eefc0ce04affb98af07bc05f3bac9094513c0e23b0562d64af46a06aae4/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:dedb8adb91d11846ee08bec4c8236c8549ac721c245678282dcb06b221aab59f", size = 149565, upload_time = "2025-05-02T08:32:51.404Z" }, + { url = "https://files.pythonhosted.org/packages/37/a0/2410e5e6032a174c95e0806b1a6585eb21e12f445ebe239fac441995226a/charset_normalizer-3.4.2-cp312-cp312-win32.whl", hash = "sha256:db4c7bf0e07fc3b7d89ac2a5880a6a8062056801b83ff56d8464b70f65482b6c", size = 98357, upload_time = "2025-05-02T08:32:53.079Z" }, + { url = "https://files.pythonhosted.org/packages/6c/4f/c02d5c493967af3eda9c771ad4d2bbc8df6f99ddbeb37ceea6e8716a32bc/charset_normalizer-3.4.2-cp312-cp312-win_amd64.whl", hash = "sha256:5a9979887252a82fefd3d3ed2a8e3b937a7a809f65dcb1e068b090e165bbe99e", size = 105776, upload_time = "2025-05-02T08:32:54.573Z" }, + { url = "https://files.pythonhosted.org/packages/ea/12/a93df3366ed32db1d907d7593a94f1fe6293903e3e92967bebd6950ed12c/charset_normalizer-3.4.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0", size = 199622, upload_time = "2025-05-02T08:32:56.363Z" }, + { url = "https://files.pythonhosted.org/packages/04/93/bf204e6f344c39d9937d3c13c8cd5bbfc266472e51fc8c07cb7f64fcd2de/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf", size = 143435, upload_time = "2025-05-02T08:32:58.551Z" }, + { url = 
"https://files.pythonhosted.org/packages/22/2a/ea8a2095b0bafa6c5b5a55ffdc2f924455233ee7b91c69b7edfcc9e02284/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e", size = 153653, upload_time = "2025-05-02T08:33:00.342Z" }, + { url = "https://files.pythonhosted.org/packages/b6/57/1b090ff183d13cef485dfbe272e2fe57622a76694061353c59da52c9a659/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1", size = 146231, upload_time = "2025-05-02T08:33:02.081Z" }, + { url = "https://files.pythonhosted.org/packages/e2/28/ffc026b26f441fc67bd21ab7f03b313ab3fe46714a14b516f931abe1a2d8/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c", size = 148243, upload_time = "2025-05-02T08:33:04.063Z" }, + { url = "https://files.pythonhosted.org/packages/c0/0f/9abe9bd191629c33e69e47c6ef45ef99773320e9ad8e9cb08b8ab4a8d4cb/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691", size = 150442, upload_time = "2025-05-02T08:33:06.418Z" }, + { url = "https://files.pythonhosted.org/packages/67/7c/a123bbcedca91d5916c056407f89a7f5e8fdfce12ba825d7d6b9954a1a3c/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0", size = 145147, upload_time = "2025-05-02T08:33:08.183Z" }, + { url = "https://files.pythonhosted.org/packages/ec/fe/1ac556fa4899d967b83e9893788e86b6af4d83e4726511eaaad035e36595/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b", size = 153057, upload_time = "2025-05-02T08:33:09.986Z" }, + { url = "https://files.pythonhosted.org/packages/2b/ff/acfc0b0a70b19e3e54febdd5301a98b72fa07635e56f24f60502e954c461/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff", size = 156454, upload_time = "2025-05-02T08:33:11.814Z" }, + { url = "https://files.pythonhosted.org/packages/92/08/95b458ce9c740d0645feb0e96cea1f5ec946ea9c580a94adfe0b617f3573/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b", size = 154174, upload_time = "2025-05-02T08:33:13.707Z" }, + { url = "https://files.pythonhosted.org/packages/78/be/8392efc43487ac051eee6c36d5fbd63032d78f7728cb37aebcc98191f1ff/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148", size = 149166, upload_time = "2025-05-02T08:33:15.458Z" }, + { url = "https://files.pythonhosted.org/packages/44/96/392abd49b094d30b91d9fbda6a69519e95802250b777841cf3bda8fe136c/charset_normalizer-3.4.2-cp313-cp313-win32.whl", hash = "sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7", size = 98064, upload_time = "2025-05-02T08:33:17.06Z" }, + { url = "https://files.pythonhosted.org/packages/e9/b0/0200da600134e001d91851ddc797809e2fe0ea72de90e09bec5a2fbdaccb/charset_normalizer-3.4.2-cp313-cp313-win_amd64.whl", hash = 
"sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980", size = 105641, upload_time = "2025-05-02T08:33:18.753Z" }, + { url = "https://files.pythonhosted.org/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626, upload_time = "2025-05-02T08:34:40.053Z" }, +] + [[package]] name = "click" version = "8.2.1" @@ -253,6 +288,20 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/91/a1/cf2472db20f7ce4a6be1253a81cfdf85ad9c7885ffbed7047fb72c24cf87/distlib-0.3.9-py2.py3-none-any.whl", hash = "sha256:47f8c22fd27c27e25a65601af709b38e4f0a45ea4fc2e710f65755fa8caaaf87", size = 468973, upload_time = "2024-10-09T18:35:44.272Z" }, ] +[[package]] +name = "docker" +version = "7.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pywin32", marker = "sys_platform == 'win32'" }, + { name = "requests" }, + { name = "urllib3" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/91/9b/4a2ea29aeba62471211598dac5d96825bb49348fa07e906ea930394a83ce/docker-7.1.0.tar.gz", hash = "sha256:ad8c70e6e3f8926cb8a92619b832b4ea5299e2831c14284663184e200546fa6c", size = 117834, upload_time = "2024-05-23T11:13:57.216Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e3/26/57c6fb270950d476074c087527a558ccb6f4436657314bfb6cdf484114c4/docker-7.1.0-py3-none-any.whl", hash = "sha256:c96b93b7f0a746f9e77d325bcfb87422a3d8bd4f03136ae8a85b37f1898d5fc0", size = 147774, upload_time = "2024-05-23T11:13:55.01Z" }, +] + [[package]] name = "exceptiongroup" version = "1.3.0" @@ -920,6 +969,22 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/81/c4/34e93fe5f5429d7570ec1fa436f1986fb1f00c3e0f43a589fe2bbcd22c3f/pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00", size = 509225, upload_time = "2025-03-25T02:24:58.468Z" }, ] +[[package]] +name = "pywin32" +version = "311" +source = { registry = "https://pypi.org/simple" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/e7/ab/01ea1943d4eba0f850c3c61e78e8dd59757ff815ff3ccd0a84de5f541f42/pywin32-311-cp312-cp312-win32.whl", hash = "sha256:750ec6e621af2b948540032557b10a2d43b0cee2ae9758c54154d711cc852d31", size = 8706543, upload_time = "2025-07-14T20:13:20.765Z" }, + { url = "https://files.pythonhosted.org/packages/d1/a8/a0e8d07d4d051ec7502cd58b291ec98dcc0c3fff027caad0470b72cfcc2f/pywin32-311-cp312-cp312-win_amd64.whl", hash = "sha256:b8c095edad5c211ff31c05223658e71bf7116daa0ecf3ad85f3201ea3190d067", size = 9495040, upload_time = "2025-07-14T20:13:22.543Z" }, + { url = "https://files.pythonhosted.org/packages/ba/3a/2ae996277b4b50f17d61f0603efd8253cb2d79cc7ae159468007b586396d/pywin32-311-cp312-cp312-win_arm64.whl", hash = "sha256:e286f46a9a39c4a18b319c28f59b61de793654af2f395c102b4f819e584b5852", size = 8710102, upload_time = "2025-07-14T20:13:24.682Z" }, + { url = "https://files.pythonhosted.org/packages/a5/be/3fd5de0979fcb3994bfee0d65ed8ca9506a8a1260651b86174f6a86f52b3/pywin32-311-cp313-cp313-win32.whl", hash = "sha256:f95ba5a847cba10dd8c4d8fefa9f2a6cf283b8b88ed6178fa8a6c1ab16054d0d", size = 8705700, upload_time = "2025-07-14T20:13:26.471Z" }, + { url = "https://files.pythonhosted.org/packages/e3/28/e0a1909523c6890208295a29e05c2adb2126364e289826c0a8bc7297bd5c/pywin32-311-cp313-cp313-win_amd64.whl", hash = 
"sha256:718a38f7e5b058e76aee1c56ddd06908116d35147e133427e59a3983f703a20d", size = 9494700, upload_time = "2025-07-14T20:13:28.243Z" }, + { url = "https://files.pythonhosted.org/packages/04/bf/90339ac0f55726dce7d794e6d79a18a91265bdf3aa70b6b9ca52f35e022a/pywin32-311-cp313-cp313-win_arm64.whl", hash = "sha256:7b4075d959648406202d92a2310cb990fea19b535c7f4a78d3f5e10b926eeb8a", size = 8709318, upload_time = "2025-07-14T20:13:30.348Z" }, + { url = "https://files.pythonhosted.org/packages/c9/31/097f2e132c4f16d99a22bfb777e0fd88bd8e1c634304e102f313af69ace5/pywin32-311-cp314-cp314-win32.whl", hash = "sha256:b7a2c10b93f8986666d0c803ee19b5990885872a7de910fc460f9b0c2fbf92ee", size = 8840714, upload_time = "2025-07-14T20:13:32.449Z" }, + { url = "https://files.pythonhosted.org/packages/90/4b/07c77d8ba0e01349358082713400435347df8426208171ce297da32c313d/pywin32-311-cp314-cp314-win_amd64.whl", hash = "sha256:3aca44c046bd2ed8c90de9cb8427f581c479e594e99b5c0bb19b29c10fd6cb87", size = 9656800, upload_time = "2025-07-14T20:13:34.312Z" }, + { url = "https://files.pythonhosted.org/packages/c0/d2/21af5c535501a7233e734b8af901574572da66fcc254cb35d0609c9080dd/pywin32-311-cp314-cp314-win_arm64.whl", hash = "sha256:a508e2d9025764a8270f93111a970e1d0fbfc33f4153b388bb649b7eec4f9b42", size = 8932540, upload_time = "2025-07-14T20:13:36.379Z" }, +] + [[package]] name = "pyyaml" version = "6.0.2" @@ -946,15 +1011,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446, upload_time = "2024-08-06T20:33:04.33Z" }, ] -[[package]] -name = "quantconnect" -version = "0.1.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/11/2a/2762a4e3497d35830ac5122fd2bcb7f5fb996f7c8ea47a255c03378466ff/quantconnect-0.1.0.tar.gz", hash = "sha256:9c47411e925141112b40893e0ae1b9364e63b487ce710322cb031d57e022ffd2", size = 921, upload_time = "2020-06-19T21:59:54.104Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/26/30/6624de8b559a496cf34957c4bdb25622713754bcbdfefc06812ed841e92f/quantconnect-0.1.0-py3-none-any.whl", hash = "sha256:c134dcfaf628066932984bc28377c4d717658af108fc70e1f603e5c63856be15", size = 5313, upload_time = "2020-06-19T21:59:52.096Z" }, -] - [[package]] name = "quantconnect-lean" version = "0.1.0" @@ -966,10 +1022,11 @@ wheels = [ [[package]] name = "quantconnect-mcp" -version = "0.1.9" +version = "0.1.11" source = { editable = "." 
} dependencies = [ { name = "arch" }, + { name = "docker" }, { name = "fastmcp" }, { name = "httpx" }, { name = "matplotlib" }, @@ -977,13 +1034,16 @@ dependencies = [ { name = "pandas" }, { name = "psutil" }, { name = "pytest-asyncio" }, - { name = "quantconnect" }, { name = "quantconnect-lean" }, { name = "scikit-learn" }, { name = "scipy" }, { name = "seaborn" }, { name = "statsmodels" }, - { name = "tomlkit" }, +] + +[package.optional-dependencies] +quantbook = [ + { name = "docker" }, ] [package.dev-dependencies] @@ -994,11 +1054,14 @@ dev = [ { name = "pytest" }, { name = "pytest-asyncio" }, { name = "ruff" }, + { name = "tomlkit" }, ] [package.metadata] requires-dist = [ { name = "arch", specifier = ">=7.2.0" }, + { name = "docker", specifier = ">=7.1.0" }, + { name = "docker", marker = "extra == 'quantbook'", specifier = ">=7.1.0" }, { name = "fastmcp", specifier = ">=2.7.1" }, { name = "httpx", specifier = ">=0.28.1" }, { name = "matplotlib", specifier = ">=3.10.3" }, @@ -1006,14 +1069,13 @@ requires-dist = [ { name = "pandas", specifier = ">=2.3.0" }, { name = "psutil", specifier = ">=7.0.0" }, { name = "pytest-asyncio", specifier = ">=1.0.0" }, - { name = "quantconnect", specifier = ">=0.1.0" }, { name = "quantconnect-lean" }, { name = "scikit-learn", specifier = ">=1.7.0" }, { name = "scipy", specifier = ">=1.15.3" }, { name = "seaborn", specifier = ">=0.13.2" }, { name = "statsmodels", specifier = ">=0.14.4" }, - { name = "tomlkit", specifier = ">=0.13.3" }, ] +provides-extras = ["quantbook"] [package.metadata.requires-dev] dev = [ @@ -1023,6 +1085,22 @@ dev = [ { name = "pytest", specifier = ">=8.4.0" }, { name = "pytest-asyncio", specifier = ">=1.0.0" }, { name = "ruff", specifier = ">=0.11.13" }, + { name = "tomlkit", specifier = ">=0.13.3" }, +] + +[[package]] +name = "requests" +version = "2.32.4" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "certifi" }, + { name = "charset-normalizer" }, + { name = "idna" }, + { name = "urllib3" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/e1/0a/929373653770d8a0d7ea76c37de6e41f11eb07559b103b1c02cafb3f7cf8/requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422", size = 135258, upload_time = "2025-06-09T16:43:07.34Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7c/e4/56027c4a6b4ae70ca9de302488c5ca95ad4a39e190093d6c1a8ace08341b/requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c", size = 64847, upload_time = "2025-06-09T16:43:05.728Z" }, ] [[package]] @@ -1284,6 +1362,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/5c/23/c7abc0ca0a1526a0774eca151daeb8de62ec457e77262b66b359c3c7679e/tzdata-2025.2-py2.py3-none-any.whl", hash = "sha256:1a403fada01ff9221ca8044d701868fa132215d84beb92242d9acd2147f667a8", size = 347839, upload_time = "2025-03-23T13:54:41.845Z" }, ] +[[package]] +name = "urllib3" +version = "2.5.0" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/15/22/9ee70a2574a4f4599c47dd506532914ce044817c7752a79b6a51286319bc/urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760", size = 393185, upload_time = "2025-06-18T14:07:41.644Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = 
"sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795, upload_time = "2025-06-18T14:07:40.39Z" }, +] + [[package]] name = "uvicorn" version = "0.34.3"