@AgbAccount

Add AgentBay SDK Integration for Cloud Code Execution

Summary

This PR adds a new CrewAI example that integrates with the wuying-agentbay-sdk to enable cloud-based code execution and full development workflows. It shows how CrewAI agents can orchestrate complex tasks in secure cloud environments, from simple one-off code execution to complete project development pipelines.

The integration demonstrates:

  • Cloud sandbox execution patterns
  • Multi-agent orchestration for development workflows
  • Session lifecycle management
  • Tool integration with external SDKs
  • HTTP service verification patterns

Features

Core Capabilities

  • Cloud Code Execution: Execute Python and JavaScript code in secure cloud sandboxes
  • Full Development Pipeline: Complete workflow from requirements analysis → design → implementation → deployment → execution → analysis
  • Session Management: Support for both temporary (one-off) and persistent (multi-step) sessions
  • File Operations: Upload, download, and manage files in cloud sessions
  • HTTP Service Verification: Verify deployed services with JSON-aware response validation

Two Execution Modes

  1. Simple Code Execution (AgentBayTemporaryCodeCrew)

    • Quick one-off code execution tasks
    • Automatic session lifecycle management
    • Ideal for testing and simple scripts
  2. Full Development Pipeline (AgentBayCodeCrew)

    • Multi-agent collaboration for complete project development
    • Persistent sessions for stateful workflows
    • End-to-end automation from requirements to deployment
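
The difference between the two modes comes down to who owns the session lifecycle. A minimal sketch of the temporary-session pattern, using a stand-in `FakeSession` class (the real wuying-agentbay-sdk API and method names may differ):

```python
from contextlib import contextmanager

# Hypothetical stand-in for an AgentBay session; the real SDK's
# client, method names, and return types may differ.
class FakeSession:
    def __init__(self):
        self.closed = False

    def run_code(self, code, language="python"):
        # In the real SDK this would run in a cloud sandbox.
        return f"ran {language} snippet ({len(code)} chars)"

    def delete(self):
        self.closed = True

@contextmanager
def temporary_session(factory=FakeSession):
    """Create a session, yield it, and always clean it up."""
    session = factory()
    try:
        yield session
    finally:
        session.delete()  # automatic lifecycle management

with temporary_session() as s:
    result = s.run_code("print('hi')")
```

The persistent mode skips the `finally` cleanup and instead hands the live session id across tasks, which is what makes stateful multi-step workflows possible.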

Architecture

Key Components

  • crew.py: Defines two crew classes with specialized agents and tasks
  • api/: Session management wrappers
    • agentbay_temporary_session.py: Temporary session operations
    • agentbay_persistent_session.py: Persistent session operations
  • tools/: CrewAI tools for agent use
    • agentbay_tools.py: AgentBay SDK integration tools
    • local_tools.py: Local filesystem and HTTP verification tools
    • tool_schemas.py: Pydantic schemas for input validation
  • config/: YAML configuration for agents and tasks
  • tests/: Comprehensive test suite
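
To illustrate the role of tool_schemas.py, here is a sketch of what an input schema for a code-execution tool could look like; the class and field names are hypothetical and may not match the PR:

```python
from pydantic import BaseModel, Field, field_validator

# Hypothetical schema in the style of tool_schemas.py; actual
# field names in the PR may differ.
class RunCodeInput(BaseModel):
    """Validated input for a cloud code-execution tool."""
    code: str = Field(..., description="Source code to execute")
    language: str = Field("python", description="python or javascript")

    @field_validator("language")
    @classmethod
    def check_language(cls, v: str) -> str:
        allowed = {"python", "javascript"}
        if v not in allowed:
            raise ValueError(f"language must be one of {sorted(allowed)}")
        return v
```

Validation happens before the tool body runs, so agents get a clear error message instead of a cloud-side failure when they pass a bad argument.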

Agent Roles

  • Code Executor: Handles cloud code execution and session management
  • Framework Designer: Analyzes requirements and designs project structure
  • Implementer: Generates code files locally
  • Result Analyst: Analyzes execution results and verifies services

Usage

Quick Start

  1. Install dependencies:

    cd crews/agentbay_sdk
    poetry install
    # or
    pip install -e .
  2. Configure environment variables:

    cp env.example .env
    # Edit .env and set:
    # - AGENTBAY_API_KEY (required)
    # - OPENAI_API_KEY (required)
    # - OPENAI_API_BASE (optional, for custom LLM providers)
    # - OPENAI_MODEL_NAME (optional, default: gpt-4o-mini)
  3. Run examples:

    python src/agentbay_sdk/main.py

Example: Simple Code Execution

from agentbay_sdk.crew import AgentBayTemporaryCodeCrew

crew = AgentBayTemporaryCodeCrew().crew()
result = crew.kickoff(inputs={
    "code": "print('Hello from AgentBay!')",
    "language": "python"
})

Example: Full Development Pipeline

from agentbay_sdk.crew import AgentBayCodeCrew
from crewai import Crew, Process

crew_base = AgentBayCodeCrew()
dev_crew = Crew(
    agents=[
        crew_base.framework_designer(),
        crew_base.implementer(),
        crew_base.code_executor(),
        crew_base.result_analyst(),
    ],
    tasks=[
        crew_base.design_project(),
        crew_base.generate_project(),
        crew_base.upload_project(),
        crew_base.install_and_run(),
        crew_base.analyze_result(),
    ],
    process=Process.sequential,
)

result = dev_crew.kickoff(inputs={
    "user_requirement": "Build a FastAPI service with /health endpoint"
})

Testing

The project includes a test suite covering both session types:

# Run all tests
pytest tests/

# Run specific test file
python3 tests/test_agentbay_code_flow.py
python3 tests/test_agentbay_persistent_session.py

Test Coverage

  • ✅ Temporary session code execution flow
  • ✅ Persistent session lifecycle management
  • ✅ File operations (read/write/upload/download)
  • ✅ Command execution in sessions
  • ✅ Error handling and edge cases
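
As a rough illustration of the testing style, the file-operation tests might look like the following sketch, which uses an in-process stub instead of a real cloud session (the actual tests in this PR exercise the SDK directly):

```python
# Stub standing in for a cloud session's file operations.
class StubSession:
    def __init__(self):
        self.files = {}

    def write_file(self, path, content):
        self.files[path] = content

    def read_file(self, path):
        if path not in self.files:
            raise FileNotFoundError(path)
        return self.files[path]

def test_file_roundtrip():
    s = StubSession()
    s.write_file("/tmp/app.py", "print('ok')")
    assert s.read_file("/tmp/app.py") == "print('ok')"

def test_missing_file_raises():
    s = StubSession()
    try:
        s.read_file("/tmp/missing.txt")
        assert False, "expected FileNotFoundError"
    except FileNotFoundError:
        pass

if __name__ == "__main__":
    test_file_roundtrip()
    test_missing_file_raises()
    print("all tests passed")
```

The `if __name__ == "__main__"` guard is what lets the real test files run both under pytest and directly via `python3`, as noted below.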

Configuration

Required Environment Variables

  • AGENTBAY_API_KEY: AgentBay cloud execution API key
  • OPENAI_API_KEY: LLM API key (OpenAI/Bailian/Azure compatible)

Optional Environment Variables

  • OPENAI_API_BASE: Custom LLM endpoint (e.g., for Alibaba Cloud Bailian)
  • OPENAI_MODEL_NAME: LLM model name (default: gpt-4o-mini)
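
A sketch of how the entry point might consume these variables; the function name is hypothetical, and in the actual project python-dotenv's `load_dotenv()` would populate `os.environ` from `.env` first:

```python
import os

def load_llm_config() -> dict:
    """Read LLM settings, failing fast on missing required keys."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is required")
    return {
        "api_key": api_key,
        "api_base": os.environ.get("OPENAI_API_BASE"),  # optional
        "model": os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini"),
    }

os.environ.setdefault("OPENAI_API_KEY", "sk-demo")  # for this demo only
cfg = load_llm_config()
```

Failing fast on a missing `OPENAI_API_KEY` surfaces configuration mistakes before any agents start running.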

See env.example for detailed configuration examples for different LLM providers.

Technical Highlights

  1. Clean Architecture: Separation of concerns between temporary and persistent sessions
  2. Type Safety: Full Pydantic validation for all tool inputs
  3. Error Handling: Comprehensive error handling with clear messages
  4. JSON-Aware Verification: HTTP response validation that compares parsed JSON rather than raw strings, so key order and whitespace differences do not cause false failures
  5. Flexible LLM Support: Compatible with OpenAI, Azure, Alibaba Cloud Bailian, and custom endpoints via LiteLLM
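
The JSON-aware verification idea (highlight 4) can be sketched as follows; the helper name is hypothetical and may not match local_tools.py:

```python
import json

# Hypothetical helper in the spirit of local_tools.py: compare an
# HTTP response body against an expected payload as parsed JSON.
def json_response_matches(body: str, expected: dict) -> bool:
    try:
        return json.loads(body) == expected
    except json.JSONDecodeError:
        return False

# Two spellings of the same payload both verify successfully,
# while a non-JSON body is rejected rather than crashing:
assert json_response_matches('{"status": "ok"}', {"status": "ok"})
assert json_response_matches('{ "status" :\n"ok" }', {"status": "ok"})
assert not json_response_matches('not json', {"status": "ok"})
```

Comparing parsed structures instead of raw strings is what lets a `/health` check pass even when the deployed service serializes its response differently from the reference payload.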

Files Added

crews/agentbay_sdk/
├── README.md                          # Comprehensive documentation
├── pyproject.toml                     # Project configuration
├── poetry.lock                        # Dependency lock file
├── env.example                        # Environment variable template
├── src/agentbay_sdk/
│   ├── __init__.py
│   ├── crew.py                        # Crew definitions
│   ├── main.py                        # Example usage script
│   ├── pipeline.py                    # Pipeline utilities
│   ├── api/
│   │   ├── agentbay_temporary_session.py
│   │   └── agentbay_persistent_session.py
│   ├── config/
│   │   ├── agents.yaml
│   │   └── tasks.yaml
│   └── tools/
│       ├── agentbay_tools.py
│       ├── local_tools.py
│       └── tool_schemas.py
└── tests/
    ├── test_agentbay_code_flow.py
    └── test_agentbay_persistent_session.py

Dependencies

  • crewai>=0.152.0
  • wuying-agentbay-sdk>=0.3.0
  • python-dotenv>=1.0.0
  • requests>=2.31.0
  • pytest>=7.0.0

Compatibility

  • Python: >=3.10,<3.14
  • CrewAI: >=0.152.0
  • Tested with Python 3.13.3

Documentation

  • Complete README with usage examples
  • Inline code documentation
  • Configuration examples for multiple LLM providers
  • Test files serve as additional usage examples

Checklist

  • Code follows repository style guidelines
  • All code is in English (no Chinese comments/strings)
  • Comprehensive README with examples
  • Test suite included and passing
  • Environment variable documentation
  • Type hints and Pydantic validation
  • Error handling implemented
  • Follows standard Python project structure (src/ layout)
  • Compatible with CrewAI 0.152.0+
  • No hardcoded secrets or API keys

Notes for Reviewers

  • The project uses a standard src/ layout for proper Python packaging
  • All user-facing strings and documentation are in English
  • The codebase is fully typed with Pydantic schemas
  • Tests can be run directly with python3 or via pytest
  • The integration supports both OpenAI-compatible and custom LLM endpoints
