---
layout: default
title: AutoAgent Tutorial
nav_order: 140
has_children: true
format_version: v2
---

# AutoAgent Tutorial: Zero-Code Agent Creation and Automated Workflow Orchestration

Learn how to use HKUDS/AutoAgent to create and orchestrate LLM agents through natural-language workflows, with support for CLI operations, tool creation, and benchmark-oriented evaluation.


## Why This Track Matters

AutoAgent targets zero-code agent building via natural language and automated orchestration, making it useful for teams exploring dynamic agent creation without deep framework coding.

This track focuses on:

- launching AutoAgent quickly in CLI mode
- understanding user/agent-editor/workflow-editor modes
- configuring tools and model providers safely
- evaluating planning workflows and governance controls
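
The first focus point, launching in CLI mode, typically reduces to a short setup sequence. A minimal sketch, assuming an editable `git`/`pip` install and an OpenAI-compatible provider; the `auto main` entry point and the environment variable name are assumptions here, so verify both against the repository's README before relying on them:

```shell
# Clone and install AutoAgent in editable mode (assumed repo layout).
git clone https://github.com/HKUDS/AutoAgent.git
cd AutoAgent
pip install -e .

# Configure a model provider key (variable name depends on your provider).
export OPENAI_API_KEY="sk-your-key"

# Start the interactive CLI (entry-point name is an assumption; check the docs).
auto main
```

Chapter 3 covers provider configuration and environment setup in detail.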

## Current Snapshot (auto-updated)

## Mental Model

```mermaid
flowchart LR
    A[User natural-language intent] --> B[AutoAgent mode selector]
    B --> C[Agent or workflow generation]
    C --> D[Tool and model orchestration]
    D --> E[Task execution and refinement]
    E --> F[Reusable agent workflows]
```

## Chapter Guide

| Chapter | Key Question | Outcome |
|---|---|---|
| 01 - Getting Started | How do I install and run AutoAgent quickly? | Working baseline |
| 02 - Architecture and Interaction Modes | How do user/agent/workflow modes differ? | Strong usage model |
| 03 - Installation, Environment, and API Setup | How do I configure runtime and model access safely? | Stable setup baseline |
| 04 - Agent and Workflow Creation Patterns | How do I create agents and workflows with NL prompts? | Better creation discipline |
| 05 - Tooling, Python API, and Custom Extensions | How do I extend AutoAgent behavior programmatically? | Extensibility baseline |
| 06 - CLI Operations and Provider Strategy | How do I run reliable daily operations across model providers? | Operational reliability |
| 07 - Benchmarking, Evaluation, and Quality Gates | How do I evaluate AutoAgent output quality? | Evaluation discipline |
| 08 - Contribution Workflow and Production Governance | How do teams adopt and govern AutoAgent safely? | Governance runbook |

## What You Will Learn

- how to operate AutoAgent across its core interaction modes
- how to configure providers and runtime settings for stable execution
- how to extend workflows with custom tools and Python interfaces
- how to evaluate and govern AutoAgent usage in team settings
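
Custom-tool extension (Chapter 5) generally follows a decorator-based registry pattern: a function is registered under a name so an orchestrator can dispatch to it later. Below is a minimal, framework-agnostic sketch of that pattern; the names `register_tool` and `TOOL_REGISTRY` are illustrative, not AutoAgent's actual API, so consult Chapter 5 and the project source for the real interface.

```python
# Illustrative sketch of a decorator-based tool registry -- the general
# pattern behind "custom tool" extension points. Names are hypothetical,
# not AutoAgent's real API.
from typing import Callable, Dict

TOOL_REGISTRY: Dict[str, Callable] = {}

def register_tool(name: str) -> Callable:
    """Register a function under `name` so an agent can look it up later."""
    def decorator(fn: Callable) -> Callable:
        TOOL_REGISTRY[name] = fn
        return fn
    return decorator

@register_tool("word_count")
def word_count(text: str) -> int:
    """Count whitespace-separated words in `text`."""
    return len(text.split())

# An orchestrator would dispatch by registered name rather than by import:
result = TOOL_REGISTRY["word_count"]("zero code agent creation")
print(result)  # 4
```

The registry indirection is what lets natural-language workflows reference tools by name without hard-coding Python imports into the workflow definition.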

## Source References

## Related Tutorials


Start with Chapter 1: Getting Started.

## Navigation & Backlinks

### Full Chapter Map

  1. Chapter 1: Getting Started
  2. Chapter 2: Architecture and Interaction Modes
  3. Chapter 3: Installation, Environment, and API Setup
  4. Chapter 4: Agent and Workflow Creation Patterns
  5. Chapter 5: Tooling, Python API, and Custom Extensions
  6. Chapter 6: CLI Operations and Provider Strategy
  7. Chapter 7: Benchmarking, Evaluation, and Quality Gates
  8. Chapter 8: Contribution Workflow and Production Governance

Generated by AI Codebase Knowledge Builder