akudo7/kudosflow

Kudosflow

Visual, production-ready AI workflows — portable as JSON



Features · Quick Start · Usage · Development · Support

Kudosflow Demo (click to watch the demo video)


What is Kudosflow?

“AI agent workflows keep turning into scattered scripts that nobody can reproduce…” “The prototype worked, but it breaks when handed off to the team…” “I want to build with nodes, but it never lands in a form that's actually operable…”

Kudosflow is a VSCode extension that lets you design and execute node-based AI agent workflows with a drag-and-drop UI, right inside your editor. Workflows are saved as portable JSON, so version control, sharing, and production execution all follow a single path.

What you get

  • Specs live as a visual overview instead of being buried in lines of code
  • Workflows are JSON, making review, diff management, and reuse straightforward
  • The same artifact takes you from prototype to production

Why Kudosflow?

  • Visual First: See your entire workflow at a glance — no more scattered scripts
  • Production Ready: Go from prototype to production with the same JSON — minimize rewrites
  • Portable: Manage and share AI logic as standard JSON with Git
  • Integrated: Everything stays inside VSCode — supports A2A and MCP protocols

Features

  • 🎨 Visual Workflow Editor: Drag-and-drop interface powered by React Flow
  • 🔌 Node-Based Architecture: Connect nodes to build complex AI agent workflows
  • 💾 JSON Storage: Workflows stored as portable JSON files in your workspace
  • 🔄 A2A & MCP Integration: Support for Agent-to-Agent and MCP communication protocols
  • 🔧 System Skills Integration: Comprehensive support for System Skills with visual indicators and centralized management
  • 🤖 Advanced AI Models: Powered by GPT-5.2 for enhanced performance
  • 🎯 Context Menu Integration: Right-click any JSON file to open in workflow editor
  • 🚀 Live Execution: Real-time workflow execution and testing
  • 🧵 State Management: Thread-based conversation persistence across requests
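Because workflows are stored as plain JSON, their shape can be described with ordinary TypeScript types. The interfaces and field names below are illustrative assumptions, not the extension's actual schema; consult a sample in json/ for the real structure:

```typescript
// Hypothetical shape of a Kudosflow workflow file (field names are assumed).
interface WorkflowNode {
  id: string;
  type: string;                       // e.g. "model", "interrupt" (illustrative)
  position: { x: number; y: number }; // canvas coordinates
  params: Record<string, unknown>;    // node-specific configuration
}

interface WorkflowEdge {
  source: string; // node id on the output anchor side
  target: string; // node id on the input anchor side
}

interface Workflow {
  nodes: WorkflowNode[];
  edges: WorkflowEdge[];
}

const example: Workflow = {
  nodes: [
    { id: "in", type: "interrupt", position: { x: 0, y: 0 }, params: {} },
    { id: "llm", type: "model", position: { x: 240, y: 0 }, params: { provider: "openai" } },
  ],
  edges: [{ source: "in", target: "llm" }],
};
```

Since the artifact is plain JSON, a `git diff` on a saved workflow shows node and edge changes directly.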

Quick Start

Prerequisites

  • VSCode 1.96.0 or higher
  • API keys for your AI providers (OpenAI, Anthropic, or Ollama)

Installation

Option 1: From VSIX (Current)

code --install-extension kudosflow2-1.3.0.vsix

Option 2: From VSCode Marketplace (Coming Soon)

Search for "Kudosflow2" in the VSCode extensions marketplace.

Included Folders

The extension package includes the following folders that provide workflows, scripts, and skills:

Folder    Description
json/     Sample workflow JSON files and agent configurations
scripts/  Utility scripts for A2A server and messaging
skills/   Agent skill definitions (e.g., Teams, Arxiv Search)

These folders are located inside the installed extension directory:

~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/
├── json/
├── scripts/
└── skills/

To use them in your project, copy or symlink them to your project root:

Copy:

cp -r ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/json ./json
cp -r ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/scripts ./scripts
cp -r ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/skills ./skills

Symlink (macOS/Linux):

ln -s ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/json ./json
ln -s ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/scripts ./scripts
ln -s ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/skills ./skills

Setup

  1. Configure API Keys

    Create a .env file in your project root:

    # OpenAI (optional)
    OPENAI_API_KEY=your_openai_api_key_here
    
    # Anthropic (optional)
    ANTHROPIC_API_KEY=your_anthropic_api_key_here
    
    # Ollama (optional, local)
    OLLAMA_BASE_URL=http://127.0.0.1:11434
  2. Explore Sample Workflows

    Sample workflows are automatically installed to:

    ~/.vscode/extensions/akirakudo911.kudosflow2-1.3.0/json/
    

    Basic Examples:

    • interrupt.json - Interactive workflow with user interrupts
    • model.json - Career counselor with OpenAI integration

    A2A Examples:

    • a2a/client.json - A2A client workflow
    • a2a/servers/task-creation.json - Task creation server
    • a2a/servers/research-execution.json - Research execution server
    • a2a/servers/quality-evaluation.json - Quality evaluation server

Usage

Opening Workflow Editor

Three ways to open:

  • From Explorer: Right-click any .json file → "Open Workflow Editor"
  • Command Palette: Ctrl+Shift+P (or Cmd+Shift+P) → "Kudosflow: Open Workflow Editor"
  • Create New: Right-click a folder → "Create New Workflow Here"

Building Workflows

  1. Click the + button to add nodes to the canvas
  2. Drag nodes to position them on the canvas
  3. Connect nodes by dragging from output anchors (right) to input anchors (left)
  4. Configure each node by clicking it and editing parameters
  5. Save your workflow using the Save button in the toolbar
  6. Execute your workflow using the Run button

Example: A2A Workflow Pattern

Task Creation → Approval → Research Execution → Approval
  → Report Generation → Report Approval → Quality Evaluation → Complete

Each step can be an independent agent workflow, communicating via A2A protocol.
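The sequential pattern above can be sketched in plain TypeScript as a chain of async steps with an approval gate between stages. This illustrates the pattern only; it is not the extension's A2A implementation:

```typescript
// Each stage is an async function; an approval gate runs after every stage.
type Step = (input: string) => Promise<string>;

async function runPipeline(
  input: string,
  steps: Step[],
  approve: (stage: number, output: string) => Promise<boolean>,
): Promise<string> {
  let result = input;
  for (let i = 0; i < steps.length; i++) {
    result = await steps[i](result);
    if (!(await approve(i, result))) {
      throw new Error(`Stage ${i} rejected at approval gate`);
    }
  }
  return result;
}
```

In Kudosflow each step would instead be an independent agent workflow reached over the A2A protocol.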


State Management & Thread Persistence

Kudosflow supports stateful conversations using thread IDs:

  • thread_id: Optional parameter for maintaining conversation state
  • State Persistence: Same thread_id retrieves previous context
  • Fresh Start: Omit thread_id to start a new conversation

Example: API Usage with Thread Persistence

# Start new conversation (no thread_id)
curl -X POST http://localhost:3000/message/send \
  -H "Content-Type: application/json" \
  -d '{
    "message": {"parts": [{"type": "text", "text": "Research the AI market"}]}
  }'
# Response includes: thread_id: "thread-1234567890-abc123"

# Continue conversation (with thread_id)
curl -X POST http://localhost:3000/message/send \
  -H "Content-Type: application/json" \
  -d '{
    "message": {"parts": [{"type": "text", "text": "Approved"}]},
    "thread_id": "thread-1234567890-abc123"
  }'
# State is preserved, context maintained
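The same two calls can be made from TypeScript. The helper below only builds the request body; the endpoint and payload shape are taken from the curl examples above, and anything beyond them is an assumption:

```typescript
interface SendPayload {
  message: { parts: { type: "text"; text: string }[] };
  thread_id?: string;
}

// Build the /message/send body; omit threadId to start a fresh conversation.
function buildSendPayload(text: string, threadId?: string): SendPayload {
  const payload: SendPayload = {
    message: { parts: [{ type: "text", text }] },
  };
  if (threadId !== undefined) payload.thread_id = threadId;
  return payload;
}

// Example (assumes the server from the curl calls above is running):
// await fetch("http://localhost:3000/message/send", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildSendPayload("Approved", "thread-1234567890-abc123")),
// });
```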

Agent Teams: Fan-out/Fan-in Parallel Execution

The Agent Teams feature dynamically assembles a team of specialist agents at runtime based on any user prompt. Workers run in parallel as native LangGraph nodes (fan-out/fan-in) — no external processes, no port management.

How to Use

  1. Open json/teams/leader.json in the Workflow Editor (right-click → "Open Workflow Editor")
  2. Run the workflow and enter your prompt (any domain — research, writing, analysis, etc.)
  3. Workers execute in parallel and results are integrated automatically
  4. When complete, you will be prompted to confirm the final report

How it Works

User Prompt
    │
    ▼
┌───────────────┐
│ planner_node  │  Analyzes task → outputs JSON array of worker definitions
└──────┬────────┘
       │ Send(worker_A), Send(worker_B), Send(worker_C)  ← fan-out
       ├──────────────────────┬─────────────────────────┐
       ▼                      ▼                         ▼
┌─────────────┐       ┌─────────────┐           ┌─────────────┐
│ worker_node │       │ worker_node │    ...     │ worker_node │  (parallel)
└──────┬──────┘       └──────┬──────┘           └──────┬──────┘
       └──────────────────────┴─────────────────────────┘
                              │ fan-in
                              ▼
                   ┌──────────────────┐
                   │ aggregator_node  │  Merges all worker results → final report
                   └────────┬─────────┘
                            ▼
                   ┌──────────────────┐
                   │  finalize_node   │  Presents report + confirmation prompt
                   └──────────────────┘

Node             Role
planner_node     Analyzes the prompt and outputs a workerPlans array (name, role, task)
worker_node      Executes each worker's task independently and in parallel via LangGraph Send
aggregator_node  Collects all workerResults and synthesizes an integrated finalReport
finalize_node    Presents the final report and prompts user for confirmation
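The fan-out/fan-in flow can be mimicked in plain TypeScript with Promise.all. This sketch stands in for the LangGraph Send mechanism and is not the actual implementation; the planner here is hard-coded, whereas planner_node derives worker plans from the prompt at runtime:

```typescript
interface WorkerPlan { name: string; role: string; task: string; }

// Stand-in for planner_node: returns a fixed team for illustration.
function plan(prompt: string): WorkerPlan[] {
  return [
    { name: "researcher", role: "research", task: prompt },
    { name: "writer", role: "writing", task: prompt },
  ];
}

// Stand-in for worker_node: executes one plan.
async function runWorker(p: WorkerPlan): Promise<string> {
  return `[${p.name}] done: ${p.task}`;
}

// Fan-out with Promise.all, then fan-in into one report (aggregator_node's job).
async function runTeam(prompt: string): Promise<string> {
  const plans = plan(prompt);
  const results = await Promise.all(plans.map(runWorker)); // parallel fan-out
  return results.join("\n");                               // fan-in / aggregation
}
```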

Key Files

File                    Role
json/teams/leader.json  Workflow definition (planner → worker×N → aggregator → finalize)

Testing Agent Teams

To verify that Agent Teams works correctly across different domains, use the following test prompts.

Verification Criteria

Each test case checks the following:

Item                                      How to Verify
Correct number of workers planned         Check workerPlans count in execution logs
Role names are domain-appropriate         Review name field in planner output
Workers executed in parallel              Confirm multiple worker_node entries appear concurrently in logs
Each worker produced a result             Check workerResults array in aggregator input
Final report includes all worker outputs  Review finalReport content

Test Cases

ID Domain Prompt Summary Expected Workers
T-01 Marketing Research Survey Japan's streaming video market (players, pricing, users, forecast) market_researcher, competitor_analyst, user_analyst
T-02 Academic Survey Summarize LLM fine-tuning trends since 2023 (LoRA, QLoRA, DPO) literature_reviewer, technique_comparator, application_analyst
T-03 Travel Planning 5-day Tokyo → Kyoto/Osaka itinerary with transport, lodging, food sightseeing_planner, logistics_coordinator, food_curator
T-04 Content Creation Blog post: "10 ways to boost remote work productivity" with SEO seo_researcher, content_writer, editor
T-05 Data Analysis Design an e-commerce analytics framework (RFM, churn prediction) data_architect, segmentation_specialist, ml_engineer
T-06 Legal / Compliance Explain key components of a SaaS Terms of Service legal_analyst (1 worker expected — simple task)
T-07 Code Generation Implement formatDateJP(date: Date): string in TypeScript with Jest tests implementer, tester

T-07 pass/fail criterion: yarn jest src/formatDateJP.test.ts — all tests must pass.
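As a reference point for T-07, one plausible implementation is sketched below. The exact output format (YYYY年M月D日) is an assumption, since the test prompt leaves it to the model:

```typescript
// Hypothetical solution to T-07; the YYYY年M月D日 output format is assumed.
export function formatDateJP(date: Date): string {
  // getMonth() is zero-based, so add 1 for the human-readable month.
  return `${date.getFullYear()}年${date.getMonth() + 1}月${date.getDate()}日`;
}
```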

Running a Test

# 1. Open leader.json in the Workflow Editor
# 2. Enter the test prompt and run
# 3. Review the final report presented by finalize_node

Development

Build Prerequisites

  • Node.js 20.x or higher
  • Yarn package manager (not npm)
  • VSCode 1.96.0 or higher

Project Setup

# Install all dependencies (extension + webview)
yarn install:all

# Copy environment example
cp .env.example .env
# Edit .env with your API keys

Build Commands

# Compile TypeScript for extension
yarn compile

# Watch mode for extension development
yarn watch

# Start webview development server with hot reload
yarn start:webview

# Build webview for production
yarn build:webview

# Package extension
yarn package

# Run linter
yarn lint

# Run tests
yarn pretest

Development Workflow

  1. Press F5 in VSCode to launch the Extension Development Host
  2. Make changes to extension code → yarn compile → Reload window (Ctrl+R)
  3. For webview changes, run yarn start:webview for hot reload

Architecture Overview

The extension uses a two-part architecture:

1. Extension Side (Node.js context)

  • Entry: src/extension.ts
  • Build: TypeScript → out/ directory
  • Manages VSCode extension lifecycle and webview panel

2. Webview Side (Browser context)

  • UI: React-based workflow canvas powered by React Flow

Communication between the extension and the webview uses message passing via the postMessage API.
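As a sketch of that message passing, the extension side might route incoming webview messages as below. The command names and payload fields are illustrative, not the extension's actual protocol; only postMessage and onDidReceiveMessage themselves are part of the VSCode API:

```typescript
// Hypothetical message protocol between the webview and the extension.
type WebviewMessage =
  | { command: "saveWorkflow"; json: string }
  | { command: "runWorkflow"; path: string };

// Pure router, testable without VSCode. In the extension it would be wired as:
//   panel.webview.onDidReceiveMessage((msg) => handleMessage(msg));
// and replies sent back with panel.webview.postMessage({ ... }).
export function handleMessage(msg: WebviewMessage): string {
  switch (msg.command) {
    case "saveWorkflow":
      return `save:${msg.json.length}`;
    case "runWorkflow":
      return `run:${msg.path}`;
  }
}
```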



Contributing

Contributions are welcome! Please feel free to submit a Pull Request.


License

MIT License - see LICENSE for details.


Author

Hand-crafted by Akira Kudo in Tokyo, Japan

Copyright © 2023-present Akira Kudo