diff --git a/Guide/src/SUMMARY.md b/Guide/src/SUMMARY.md
index 71f85b232e..a9fc85f4fe 100644
--- a/Guide/src/SUMMARY.md
+++ b/Guide/src/SUMMARY.md
@@ -35,6 +35,13 @@
   - [Running Fuzzers](./dev_guide/tests/fuzzing/running.md)
   - [Writing Fuzzers](./dev_guide/tests/fuzzing/writing.md)
   - [Developer Tools / Utilities](./dev_guide/dev_tools.md)
+    - [`flowey`](./dev_guide/dev_tools/flowey.md)
+      - [Flowey Fundamentals](./dev_guide/dev_tools/flowey/flowey_fundamentals.md)
+      - [Steps](./dev_guide/dev_tools/flowey/steps.md)
+      - [Variables](./dev_guide/dev_tools/flowey/variables.md)
+      - [Nodes](./dev_guide/dev_tools/flowey/nodes.md)
+      - [Artifacts](./dev_guide/dev_tools/flowey/artifacts.md)
+      - [Pipelines](./dev_guide/dev_tools/flowey/pipelines.md)
   - [`cargo xtask`](./dev_guide/dev_tools/xtask.md)
   - [`cargo xflowey`](./dev_guide/dev_tools/xflowey.md)
   - [VmgsTool](./dev_guide/dev_tools/vmgstool.md)
diff --git a/Guide/src/dev_guide/dev_tools/flowey.md b/Guide/src/dev_guide/dev_tools/flowey.md
new file mode 100644
index 0000000000..bbbf4bf487
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey.md
@@ -0,0 +1,56 @@
+# Flowey
+
+Flowey is an in-house Rust library for writing maintainable, cross-platform automation. It enables developers to define CI/CD pipelines and local workflows as type-safe Rust code that can generate backend-specific YAML (Azure DevOps, GitHub Actions) or execute directly on a local machine. Rather than writing automation logic in YAML with implicit dependencies, flowey treats automation as first-class Rust code with explicit, typed dependencies tracked through a directed acyclic graph (DAG).
+
+## Why Flowey?
+
+Traditional CI/CD pipelines using YAML-based configuration (e.g., Azure DevOps Pipelines, GitHub Actions workflows) have several fundamental limitations that become increasingly problematic as projects grow in complexity:
+
+### The Problems with Traditional YAML Pipelines
+
+#### Non-Local Reasoning and Global State
+
+- YAML pipelines heavily rely on global state and implicit dependencies (environment variables, file system state, installed tools)
+- Understanding what a step does often requires mentally tracking state mutations across the entire pipeline
+- Debugging requires reasoning about the entire pipeline context rather than isolated units of work
+- Changes in one part of the pipeline can have unexpected effects in distant, seemingly unrelated parts
+
+#### Maintainability Challenges
+
+- YAML lacks type safety, making it easy to introduce subtle bugs (typos in variable names, incorrect data types, etc.)
+- No compile-time validation means errors only surface at runtime
+- Refactoring is risky and error-prone without automated tools to catch breaking changes
+- Code duplication is common because YAML lacks good abstraction mechanisms
+- Testing pipeline logic requires actually running the pipeline, making iteration slow and expensive
+
+#### Platform Lock-In
+
+- Pipelines are tightly coupled to their specific CI backend (ADO, GitHub Actions, etc.)
+- Multi-platform support means maintaining multiple, divergent YAML files
+
+#### Local Development Gaps
+
+- Developers can't easily test pipeline changes before pushing to CI
+- Reproducing CI failures locally is difficult or impossible
+- The feedback loop is slow: push → wait for CI → debug → repeat
+
+### Flowey's Solution
+
+Flowey addresses these issues by treating automation as **first-class Rust code**:
+
+- **Type Safety**: Rust's type system catches errors at compile-time rather than runtime
+- **Local Reasoning**: Dependencies are explicit through typed variables, not implicit through global state
+- **Portability**: Write once, generate YAML for any backend (ADO, GitHub Actions, or run locally)
+- **Reusability**: Nodes are composable building blocks that can be shared across pipelines
+
+## Flowey's Directory Structure
+
+Flowey is architected as a standalone tool with a layered crate structure that separates project-agnostic core functionality from project-specific implementations:
+
+- **`flowey_core`**: Provides the core types and traits shared between user-facing and internal Flowey code, such as the essential abstractions for nodes and pipelines.
+- **`flowey`**: Thin wrapper around `flowey_core` that exposes the public API for defining nodes and pipelines.
+- **`flowey_cli`**: Command-line interface for running flowey - handles YAML generation, local execution, and pipeline orchestration.
+- **`schema_ado_yaml`**: Rust types for Azure DevOps YAML schemas used during pipeline generation.
+- **`flowey_lib_common`**: Ecosystem-wide reusable nodes (installing Rust, running Cargo, downloading tools, etc.) that could be useful across projects outside of OpenVMM.
+- **`flowey_lib_hvlite`**: OpenVMM-specific nodes and workflows that build on the common library primitives.
+- **`flowey_hvlite`**: The OpenVMM pipeline definitions that compose nodes from the libraries above into complete CI/CD workflows.
diff --git a/Guide/src/dev_guide/dev_tools/flowey/artifacts.md b/Guide/src/dev_guide/dev_tools/flowey/artifacts.md
new file mode 100644
index 0000000000..9f0c96dfe2
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey/artifacts.md
@@ -0,0 +1,70 @@
+# Artifacts
+
+Artifacts enable typed data transfer between jobs with automatic dependency management, abstracting away CI system complexities like name collisions and manual job ordering.
+
+## Typed vs Untyped Artifacts
+
+**Typed artifacts (recommended)** provide type-safe artifact handling by defining
+a custom type that implements the `Artifact` trait:
+
+```rust
+#[derive(Serialize, Deserialize)]
+struct MyArtifact {
+    #[serde(rename = "output.bin")]
+    binary: PathBuf,
+    #[serde(rename = "metadata.json")]
+    metadata: PathBuf,
+}
+
+impl Artifact for MyArtifact {}
+
+let (pub_artifact, use_artifact) = pipeline.new_typed_artifact("my-files");
+```
+
+**Untyped artifacts** are plain directory-based artifacts for simpler cases:
+
+```rust
+let (pub_artifact, use_artifact) = pipeline.new_artifact("my-files");
+```
+
+For detailed examples of defining and using artifacts, see the [Artifact trait documentation](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/trait.Artifact.html).
+
+Both `pipeline.new_typed_artifact("name")` and `pipeline.new_artifact("name")` return a tuple of handles: `(pub_artifact, use_artifact)`. When defining a job, you convert them with the job context:
+
+```rust
+// In a producing job:
+let artifact_out = ctx.publish_artifact(pub_artifact);
+// artifact_out: WriteVar<MyArtifact> (typed)
+// or WriteVar<PathBuf> for untyped
+
+// In a consuming job:
+let artifact_in = ctx.use_artifact(use_artifact);
+// artifact_in: ReadVar<MyArtifact> (typed)
+// or ReadVar<PathBuf> for untyped
+```
+
+After conversion, you treat the returned `WriteVar` / `ReadVar` like any other flowey variable (claim them in steps, write/read values).
+
+Key concepts:
+
+- The `Artifact` trait works by serializing your type to JSON in a format that reflects a directory structure
+- Use `#[serde(rename = "file.exe")]` to specify exact file names
+- Typed artifacts ensure compile-time type safety when passing data between jobs
+- Untyped artifacts are simpler but don't provide type guarantees
+- Tuple handles must be lifted with `ctx.publish_artifact(...)` / `ctx.use_artifact(...)` to become flowey variables
+
+## How Flowey Manages Artifacts Under the Hood
+
+During the **pipeline resolution phase** (build-time), flowey:
+
+1. **Identifies artifact producers and consumers** by analyzing which jobs write to vs read from each artifact's `WriteVar`/`ReadVar`
+2. **Constructs the job dependency graph** ensuring producers run before consumers
+3. **Generates backend-specific upload/download steps** in the appropriate places:
+   - For ADO: Uses `PublishPipelineArtifact` and `DownloadPipelineArtifact` tasks
+   - For GitHub Actions: Uses `actions/upload-artifact` and `actions/download-artifact`
+   - For local execution: Uses filesystem copying
+
+At **runtime**, the artifact `ReadVar` and `WriteVar` work just like any other flowey variable:
+
+- Producing jobs write artifact files to the path from `WriteVar`
+- Flowey automatically uploads those files as an artifact
+- Consuming jobs read the path from `ReadVar` where flowey has downloaded the artifact
diff --git a/Guide/src/dev_guide/dev_tools/flowey/flowey_fundamentals.md b/Guide/src/dev_guide/dev_tools/flowey/flowey_fundamentals.md
new file mode 100644
index 0000000000..b6db5dc84d
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey/flowey_fundamentals.md
@@ -0,0 +1,118 @@
+# Flowey Fundamentals
+
+Before diving into how flowey works, let's establish the key building blocks that form the foundation of flowey's automation model. These concepts are flowey's Rust-based abstractions for common CI/CD workflow primitives.
+
+## The Automation Workflow Model
+
+In traditional CI/CD systems, workflows are defined using YAML with implicit dependencies and global state. Flowey takes a fundamentally different approach: **automation workflows are modeled as a directed acyclic graph (DAG) of typed, composable Rust components**. Each component has explicit inputs and outputs, and dependencies are tracked through the type system.
+
+### Core Building Blocks
+
+Flowey's model consists of a hierarchy of components:
+
+**[Pipelines](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/trait.IntoPipeline.html)** are the top-level construct that defines a complete automation workflow. A pipeline specifies what work needs to be done and how it should be organized. Pipelines can target different execution backends (local machine, Azure DevOps, GitHub Actions) and generate appropriate configuration for each.
+
+**[Jobs](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/struct.PipelineJob.html)** represent units of work that run on a specific platform (Windows, Linux, macOS) and architecture (x86_64, Aarch64). Jobs can run in parallel when they don't depend on each other, or sequentially when one job's output is needed by another. Each job is isolated and runs in its own environment.
+
+**[Nodes](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html)** are reusable units of automation logic that perform specific tasks (e.g., "install Rust toolchain", "run cargo build", "publish test results"). Nodes are invoked by jobs and emit one or more steps to accomplish their purpose. Nodes can depend on other nodes, forming a composable ecosystem of automation building blocks.
+
+**Steps** are the individual units of work that execute at runtime. A step might run a shell command, execute Rust code, or interact with the CI backend. Steps are emitted by nodes during the build-time phase and executed in dependency order during runtime.
+
+### Connecting the Pieces
+
+These building blocks are connected through three key mechanisms:
+
+**[Variables (`ReadVar`/`WriteVar`)](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.ReadVar.html)** enable data flow between steps. A `WriteVar<T>` represents a promise to produce a value of type `T` at runtime, while a `ReadVar<T>` represents a dependency on that value. Variables enforce write-once semantics (each value has exactly one producer) and create explicit dependencies in the DAG. For example, a "build" step might write a binary path to a `WriteVar<PathBuf>`, and a "test" step would read from the corresponding `ReadVar<PathBuf>`. This echoes Rust's "shared XOR mutable" ownership rule: a value has either one writer or multiple readers, never both concurrently.
+
+**[Artifacts](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/trait.Artifact.html)** enable data transfer between jobs. Since jobs may run on different machines or at different times, artifacts package up files (like compiled binaries, test results, or build outputs) for transfer. Flowey automatically handles uploading artifacts at the end of producing jobs and downloading them at the start of consuming jobs, abstracting away backend-specific artifact APIs.
+
+**[Side Effects](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/type.SideEffect.html)** represent dependencies without data. Sometimes step B needs to run after step A, but A doesn't produce any data that B consumes (e.g., "install dependencies" must happen before "run tests", even though the test step doesn't directly use the installation output). Side effects are represented as `ReadVar<SideEffect>` and establish ordering constraints in the DAG without transferring actual values.
+
+### Putting It Together
+
+Here's an example of how these pieces relate:
+
+```txt
+Pipeline
+  ├─ Job 1 (Linux x86_64)
+  │    ├─ Node A (install Rust)
+  │    │    └─ Step: Run rustup install
+  │    │         └─ Produces: WriteVar (installation complete)
+  │    └─ Node B (build project)
+  │         └─ Step: Run cargo build
+  │              └─ Consumes: ReadVar (installation complete)
+  │              └─ Produces: WriteVar (binary path) → Artifact
+  │
+  └─ Job 2 (Windows x86_64)
+       └─ Node C (run tests)
+            └─ Step: Run binary with test inputs
+                 └─ Consumes: ReadVar (binary path) ← Artifact
+                 └─ Produces: WriteVar (test results)
+```
+
+In this example:
+
+- The **Pipeline** defines two jobs that run on different platforms
+- **Job 1** installs Rust and builds the project, with step dependencies expressed through variables
+- **Job 2** runs tests using the binary from Job 1, with the binary transferred via an artifact
+- **Variables** create dependencies within a job (build depends on install)
+- **Artifacts** create dependencies between jobs (Job 2 depends on Job 1's output)
+- **Side Effects** represent the "Rust is installed" state without carrying data
+
+## Two-Phase Execution Model
+
+Flowey operates in two distinct phases:
+
+1. **Build-Time (Resolution Phase)**: When you run `cargo xflowey regen`, flowey:
+   - Reads `.flowey.toml` to determine which pipelines to regenerate
+   - Builds the flowey binary (e.g., `flowey-hvlite`) via `cargo build`
+   - Runs the flowey binary with `pipeline <backend> --out <path>` for each pipeline definition
+   - During this invocation, flowey constructs a **directed acyclic graph (DAG)** by:
+     - Instantiating all nodes (reusable units of automation logic) defined in the pipeline
+     - Processing their requests
+     - Resolving dependencies between nodes via variables and artifacts
+     - Determining the execution order
+     - Performing flowey-specific validations (dependency resolution, type checking, etc.)
+   - Generates YAML files for CI systems (ADO, GitHub Actions) at the paths specified in `.flowey.toml`
+
+2. **Runtime (Execution Phase)**: The generated YAML is executed by the CI system (or locally via `cargo xflowey <pipeline>`). Steps (units of work) run in the order determined at build-time:
+   - Variables are read and written with actual values
+   - Commands are executed
+   - Artifacts (data packages passed between jobs) are published/consumed
+   - Side effects (dependencies) are resolved
+
+The `.flowey.toml` file at the repo root defines which pipelines to generate and where. For example:
+
+```toml
+[[pipeline.flowey_hvlite.github]]
+file = ".github/workflows/openvmm-pr.yaml"
+cmd = ["ci", "checkin-gates", "--config=pr"]
+```
+
+When you run `cargo xflowey regen`:
+
+1. It reads `.flowey.toml`
+2. Builds the `flowey-hvlite` binary
+3. Runs `flowey-hvlite pipeline github --out .github/workflows/openvmm-pr.yaml ci checkin-gates --config=pr`
+4. This generates/updates the YAML file with the resolved pipeline
+
+**Key Distinction:**
+
+- `cargo build -p flowey-hvlite` - Only compiles the flowey code to verify it builds successfully. **Does not** construct the DAG or generate YAML files.
+- `cargo xflowey regen` - Compiles the code **and** runs the full build-time resolution to construct the DAG, validate the pipeline, and regenerate all YAML files defined in `.flowey.toml`.
+
+Always run `cargo xflowey regen` after modifying pipeline definitions to ensure the generated YAML files reflect your changes.
+
+### Backend Abstraction
+
+Flowey supports multiple execution backends:
+
+- **Local**: Runs directly on your development machine
+- **ADO (Azure DevOps)**: Generates ADO Pipeline YAML
+- **GitHub Actions**: Generates GitHub Actions workflow YAML
+
+```admonish warning
+Nodes should be written to work across ALL backends whenever possible. Relying on `ctx.backend()` to query the backend or manually emitting backend-specific steps (via `emit_ado_step` or `emit_gh_step`) should be avoided unless absolutely necessary.
Most automation logic should be backend-agnostic, using `emit_rust_step` for cross-platform Rust code that works everywhere. Writing cross-platform flowey code enables testing pipelines locally, which can be invaluable when iterating on CI changes.
+```
+
+If a node only supports certain backends, it should immediately fail fast with a clear error ("`<node>` not supported on `<backend>`") instead of silently proceeding. That failure signals that it's time either to add the missing backend support or to introduce a multi-platform abstraction/meta-node that delegates to platform-specific nodes.
diff --git a/Guide/src/dev_guide/dev_tools/flowey/images/Parameters.png b/Guide/src/dev_guide/dev_tools/flowey/images/Parameters.png
new file mode 100644
index 0000000000..bb4eb5c423
Binary files /dev/null and b/Guide/src/dev_guide/dev_tools/flowey/images/Parameters.png differ
diff --git a/Guide/src/dev_guide/dev_tools/flowey/nodes.md b/Guide/src/dev_guide/dev_tools/flowey/nodes.md
new file mode 100644
index 0000000000..0de6785b26
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey/nodes.md
@@ -0,0 +1,182 @@
+# Nodes
+
+At a conceptual level, a Flowey node is analogous to a strongly typed function: you "invoke" it by submitting one or more Request values (its parameters), and it responds by emitting steps that perform work and produce outputs (values written to `WriteVar`s, published artifacts, or side-effect dependencies).
+
+## The Node/Request Pattern
+
+Every node has an associated **Request** type that defines what operations the node can perform. Requests are defined using the [`flowey_request!`](https://openvmm.dev/rustdoc/linux/flowey_core/macro.flowey_request.html) macro and registered with the [`new_flow_node!`](https://openvmm.dev/rustdoc/linux/flowey_core/macro.new_flow_node.html) or [`new_simple_flow_node!`](https://openvmm.dev/rustdoc/linux/flowey_core/macro.new_simple_flow_node.html) macros.
+
+For complete examples, see the [`FlowNode` trait documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html).
+
+## FlowNode vs SimpleFlowNode
+
+Flowey provides two node implementation patterns with a fundamental difference in their Request structure and complexity:
+
+[**`SimpleFlowNode`**](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.SimpleFlowNode.html) - for straightforward, function-like operations:
+
+- Uses a **single struct Request** type
+- Processes one request at a time independently
+- Behaves like a "plain old function" that resolves its single request type
+- Each invocation is isolated - no shared state or coordination between requests
+- Simpler implementation with less boilerplate
+- Ideal for straightforward operations like running a command or transforming data
+
+**Example use case**: A node that runs `cargo build` - each request is independent and just needs to know what to build.
+
+[**`FlowNode`**](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html) - for complex nodes requiring coordination and non-local configuration:
+
+- Often uses an **enum Request** with multiple variants
+- Receives all requests as a `Vec<Request>` and processes them together
+- Can aggregate, optimize, and consolidate multiple requests into fewer steps
+- Enables **non-local configuration** - critical for simplifying complex pipelines
+
+### The Non-Local Configuration Pattern
+
+The key advantage of FlowNode is its ability to accept configuration from different parts of the node graph without forcing intermediate nodes to be aware of that configuration.
This is the "non-local" aspect:
+
+Consider an "install Rust toolchain" node with an enum Request:
+
+```rust
+enum Request {
+    SetVersion { version: String },
+    GetToolchain { toolchain_path: WriteVar<PathBuf> },
+}
+```
+
+**Without this pattern** (struct-only requests), you'd need to thread the Rust version through every intermediate node in the call graph:
+
+```txt
+Root Node (knows version: "1.75")
+  → Node A (must pass through version)
+    → Node B (must pass through version)
+      → Node C (must pass through version)
+        → Install Rust Node (finally uses version)
+```
+
+**With FlowNode's enum Request**, the root node can send `Request::SetVersion` once, while intermediate nodes that don't care about the version can simply send `Request::GetToolchain`:
+
+```txt
+Root Node → InstallRust::SetVersion("1.75")
+  → Node A
+    → Node B
+      → Node C → InstallRust::GetToolchain()
+```
+
+The Install Rust FlowNode receives both requests together, validates that exactly one `SetVersion` was provided, and fulfills all the `GetToolchain` requests with that configured version. The intermediate nodes (A, B, C) never needed to know about or pass through version information.
+
+This pattern:
+
+- **Eliminates plumbing complexity** in large pipelines
+- **Allows global configuration** to be set once at the top level
+- **Keeps unrelated nodes decoupled** from configuration they don't need
+- **Enables validation** that required configuration was provided (exactly one `SetVersion`)
+
+**Additional Benefits of FlowNode:**
+
+- Optimize and consolidate multiple similar requests into fewer steps (e.g., installing a tool once for many consumers)
+- Resolve conflicts or enforce consistency across requests
+
+For detailed comparisons and examples, see the [`FlowNode`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html) and [`SimpleFlowNode`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.SimpleFlowNode.html) documentation.
+
+## Node Registration
+
+Nodes are automatically registered using macros that handle most of the boilerplate:
+
+- [`new_flow_node!(struct Node)`](https://openvmm.dev/rustdoc/linux/flowey_core/macro.new_flow_node.html) - registers a FlowNode
+- [`new_simple_flow_node!(struct Node)`](https://openvmm.dev/rustdoc/linux/flowey_core/macro.new_simple_flow_node.html) - registers a SimpleFlowNode
+- [`flowey_request!`](https://openvmm.dev/rustdoc/linux/flowey_core/macro.flowey_request.html) - defines the Request type and implements [`IntoRequest`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.IntoRequest.html)
+
+## The imports() Method
+
+The `imports()` method declares which other nodes this node might depend on. This enables flowey to:
+
+- Validate that all dependencies are available
+- Build the complete dependency graph
+- Catch missing dependencies at build-time
+
+```admonish warning
+Flowey does not catch unused imports today as part of its build-time validation step.
+```
+
+**Why declare imports?** Flowey needs to know the full set of potentially-used nodes at compilation time to properly resolve the dependency graph.
+
+For more on node imports, see the [`FlowNode::imports` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html#tymethod.imports).
+
+## The emit() Method
+
+The [`emit()`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html#tymethod.emit) method is where a node's actual logic lives. For [`FlowNode`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html), it receives all requests together and must:
+
+1. Aggregate and validate requests (ensuring consistency where needed)
+2. Emit steps to perform the work
+3. Wire up dependencies between steps via variables
+
+For [`SimpleFlowNode`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.SimpleFlowNode.html), the equivalent [`process_request()`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.SimpleFlowNode.html#tymethod.process_request) method processes one request at a time.
+
+For complete implementation examples, see the [`FlowNode::emit` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.FlowNode.html#tymethod.emit).
+
+## Node Design Philosophy
+
+Flowey nodes are designed around several key principles:
+
+### 1. Composability
+
+Nodes should be reusable building blocks that can be combined to build complex
+workflows. Each node should have a single, well-defined responsibility.
+
+❌ **Bad**: A node that "builds and tests the project"
+✅ **Good**: Separate nodes for "build project" and "run tests"
+
+### 2. Explicit Dependencies
+
+Dependencies between steps should be explicit through variables, not implicit
+through side effects.
+
+❌ **Bad**: Assuming a tool is already installed
+✅ **Good**: Taking a `ReadVar<SideEffect>` that proves installation happened
+
+### 3. Backend Abstraction
+
+Nodes should work across all backends when possible. Backend-specific behavior
+should be isolated and documented.
+
+### 4. Separation of Concerns
+
+Keep node definition (request types, dependencies) separate from step
+implementation (runtime logic):
+
+- **Node definition**: What the node does, what it depends on
+- **Step implementation**: How it does it
+
+## Common Patterns
+
+### Request Aggregation and Validation
+
+When a FlowNode receives multiple requests, it often needs to ensure certain values are consistent across all requests while collecting others. The `same_across_all_reqs` helper function simplifies this pattern by validating that a value is identical across all requests.
+
+**Key concepts:**
+
+- Iterate through all requests and separate them by type
+- Use `same_across_all_reqs` to validate values that must be consistent
+- Collect values that can have multiple instances (like output variables)
+- Validate that required values were provided
+
+For a complete example, see the [`same_across_all_reqs` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/user_facing/fn.same_across_all_reqs.html).
+
+### Conditional Execution Based on Backend/Platform
+
+Nodes can query the current backend and platform to emit platform-specific or backend-specific steps. This allows nodes to adapt their behavior based on the execution environment.
+
+**Key concepts:**
+
+- Use `ctx.backend()` to check if running locally, on ADO, or on GitHub Actions
+- Use `ctx.platform()` to check the operating system (Windows, Linux, macOS)
+- Use `ctx.arch()` to check the architecture (x86_64, Aarch64)
+- Emit different steps or use different tool configurations based on these values
+
+**When to use:**
+
+- Installing platform-specific tools or dependencies
+- Using different commands on Windows vs Unix systems
+- Optimizing for local development vs CI environments
+
+For more on backend and platform APIs, see the [`NodeCtx` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/struct.NodeCtx.html).
diff --git a/Guide/src/dev_guide/dev_tools/flowey/pipelines.md b/Guide/src/dev_guide/dev_tools/flowey/pipelines.md
new file mode 100644
index 0000000000..ea7f9400cc
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey/pipelines.md
@@ -0,0 +1,71 @@
+# Pipelines
+
+Pipelines define complete automation workflows consisting of jobs that run nodes. See the [IntoPipeline trait documentation](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/trait.IntoPipeline.html) for detailed examples.
+
+## Pipeline Jobs
+
+[`PipelineJob`](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/struct.PipelineJob.html) instances are configured using a builder pattern:
+
+```rust
+let job = pipeline
+    .new_job(platform, arch, "my-job")
+    .with_timeout_in_minutes(60)
+    .with_condition(some_param)
+    .ado_set_pool("my-pool")
+    .gh_set_pool(GhRunner::UbuntuLatest)
+    .dep_on(|ctx| {
+        // Define what nodes this job depends on
+        some_node::Request { /* ... */ }
+    })
+    .finish();
+```
+
+### Pipeline Parameters
+
+Parameters allow runtime configuration of pipelines. In Azure DevOps, parameters appear as editable fields in the Run pipeline UI (name, description, default).
+
+![Azure DevOps parameter UI](images/Parameters.png)
+
+```rust
+// Define a boolean parameter
+let verbose = pipeline.new_parameter_bool(
+    "verbose",
+    "Run with verbose output",
+    ParameterKind::Stable,
+    Some(false) // default value
+);
+
+// Use the parameter in a job
+let job = pipeline.new_job(...)
+    .dep_on(|ctx| {
+        let verbose = ctx.use_parameter(verbose);
+        // verbose is now a ReadVar<bool>
+    })
+    .finish();
+```
+
+#### Stable vs Unstable Parameters
+
+Every parameter in flowey must be declared as either **Stable** or **Unstable** using [`ParameterKind`](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/enum.ParameterKind.html). This classification determines the parameter's visibility and API stability:
+
+**Stable Parameters ([`ParameterKind::Stable`](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/enum.ParameterKind.html#variant.Stable))**
+
+Stable parameters represent a **public, stable API** for the pipeline:
+
+- **External Visibility**: The parameter name is exposed as-is in the generated CI YAML, making it callable by external pipelines and users.
+- **API Contract**: Once a parameter is marked stable, its name and behavior should be maintained for backward compatibility. Removing or renaming a stable parameter is a breaking change.
+- **Use Cases**:
+  - Parameters that control major pipeline behavior (e.g., `enable_tests`, `build_configuration`)
+  - Parameters intended for use by other teams or external automation
+  - Parameters documented as part of the pipeline's public interface
+
+**Unstable Parameters ([`ParameterKind::Unstable`](https://openvmm.dev/rustdoc/linux/flowey_core/pipeline/enum.ParameterKind.html#variant.Unstable))**
+
+Unstable parameters are for **internal use** and experimentation:
+
+- **Internal Only**: The parameter name is prefixed with `__unstable_` in the generated YAML (e.g., `__unstable_debug_mode`), signaling that it's not part of the stable API.
+- **No Stability Guarantee**: Unstable parameters can be renamed, removed, or have their behavior changed without notice. External consumers should not depend on them.
+- **Use Cases**:
+  - Experimental features or debugging flags
+  - Internal pipeline configuration that may change frequently
+  - Parameters for development/testing that shouldn't be used in production
diff --git a/Guide/src/dev_guide/dev_tools/flowey/steps.md b/Guide/src/dev_guide/dev_tools/flowey/steps.md
new file mode 100644
index 0000000000..2d2e89f313
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey/steps.md
@@ -0,0 +1,155 @@
+# Steps
+
+**Steps** are units of work that will be executed at runtime. Different
+step types exist for different purposes.
+
+## Types of Steps
+
+### Rust Steps
+
+Rust steps execute Rust code at runtime and are the most common step type in flowey.
+
+**`emit_rust_step`**: The primary method for emitting steps that run Rust code. Steps can claim variables, read inputs, perform work, and write outputs. Returns an optional `ReadVar<SideEffect>` that other steps can use as a dependency.
+
+**`emit_minor_rust_step`**: Similar to `emit_rust_step` but for steps that cannot fail (no `Result` return) and don't need visibility in CI logs. Used for simple transformations and glue logic.
Using minor steps also improves performance, since there is a slight cost to starting and ending a 'step' in GitHub and ADO. During the build stage, adjacent minor steps are merged into a single CI step.
+
+**`emit_rust_stepv`**: Convenience method that combines creating a new variable and emitting a step in one call. The step's return value is automatically written to the new variable.
+
+For detailed examples of Rust steps, see the [`NodeCtx` emit methods documentation](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.NodeCtx.html).
+
+### ADO Steps
+
+**`emit_ado_step`**: Emits a step that generates Azure DevOps Pipeline YAML. Takes a closure that returns a YAML string snippet which is interpolated into the generated pipeline.
+
+For ADO step examples, see the [`NodeCtx::emit_ado_step` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/struct.NodeCtx.html#method.emit_ado_step).
+
+### GitHub Steps
+
+**`emit_gh_step`**: Creates a GitHub Actions step using the fluent `GhStepBuilder` API. Supports specifying the action, parameters, outputs, dependencies, and permissions. Returns a builder that must be finalized with `.finish(ctx)`.
+
+For GitHub step examples, see the [`GhStepBuilder` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/steps/github/struct.GhStepBuilder.html).
+
+### Side Effect Steps
+
+**`emit_side_effect_step`**: Creates a dependency relationship without executing code. Useful for aggregating multiple side effect dependencies into a single side effect. More efficient than emitting an empty Rust step.
+
+For side effect step examples, see the [`NodeCtx::emit_side_effect_step` documentation](https://openvmm.dev/rustdoc/linux/flowey_core/node/struct.NodeCtx.html#method.emit_side_effect_step).
+ +### Isolated Working Directories and Path Immutability + +```admonish warning title="Critical Constraint" +**Each step gets its own fresh local working directory.** This avoids the "single global working directory dumping ground" common in bash + YAML systems. + +However, while flowey variables enforce sharing XOR mutability at the type-system level, **developers must manually enforce this at the filesystem level**: + +**Steps must NEVER modify the contents of paths referenced by `ReadVar`.** +``` + +When you write a path to `WriteVar`, you're creating an immutable contract. Other steps reading that path must treat it as read-only. If you need to modify files from a `ReadVar`, copy them to your step's working directory. + +## Runtime Services + +Runtime services provide the API available during step execution (inside the +closures passed to `emit_rust_step`, etc.). + +### RustRuntimeServices + +[`RustRuntimeServices`](https://openvmm.dev/rustdoc/linux/flowey_core/node/steps/rust/struct.RustRuntimeServices.html) is the primary runtime service available in Rust steps. 
It provides: + +#### Variable Operations + +- Reading and writing flowey variables +- Secret handling (automatic secret propagation for safety) +- Support for reading values of any type that implements [`ReadVarValue`](https://openvmm.dev/rustdoc/linux/flowey_core/node/trait.ReadVarValue.html) + +#### Environment Queries + +- Backend identification (Local, ADO, or GitHub) +- Platform detection (Windows, Linux, macOS) +- Architecture information (x86_64, Aarch64) + +### AdoStepServices + +[`AdoStepServices`](https://openvmm.dev/rustdoc/linux/flowey_core/node/steps/ado/struct.AdoStepServices.html) provides integration with Azure DevOps-specific features when emitting ADO YAML steps: + +**ADO Variable Bridge:** + +- Convert ADO runtime variables (like `BUILD.SOURCEBRANCH`) into flowey vars +- Convert flowey vars back into ADO variables for use in YAML +- Handle secret variables appropriately + +**Repository Resources:** + +- Resolve repository IDs declared as pipeline resources +- Access repository information in ADO-specific steps + +### GhStepBuilder + +[`GhStepBuilder`](https://openvmm.dev/rustdoc/linux/flowey_core/node/steps/github/struct.GhStepBuilder.html) is a fluent builder for constructing GitHub Actions steps with: + +**Step Configuration:** + +- Specifying the action to use (e.g., `actions/checkout@v4`) +- Adding input parameters via `.with()` +- Capturing step outputs into flowey variables +- Setting conditional execution based on variables + +**Dependency Management:** + +- Declaring side-effect dependencies via `.run_after()` +- Ensuring steps run in the correct order + +**Permissions:** + +- Declaring required GITHUB_TOKEN permissions +- Automatic permission aggregation at the job level + +## Secret Variables and CI Backend Integration + +Flowey provides built-in support for handling sensitive data like API keys, tokens, and credentials through **secret variables**. 
Secret variables are treated specially to prevent accidental exposure in logs and CI outputs. + +### How Secret Handling Works + +When a variable is marked as secret, flowey ensures: + +- The value is not logged or printed in step output +- CI backends (ADO, GitHub Actions) are instructed to mask the value in their logs +- Secret status is automatically propagated to prevent leaks + +### Automatic Secret Propagation + +To prevent accidental leaks, flowey uses conservative automatic secret propagation: + +```admonish warning +If a step reads a secret value, **all subsequent writes from that step are automatically marked as secret** by default. This prevents accidentally leaking secrets through derived values. +``` + +For example: + +```rust +ctx.emit_rust_step("process token", |ctx| { + let secret_token = secret_token.claim(ctx); + let output_var = output_var.claim(ctx); + |rt| { + let token = rt.read(secret_token); // Reading a secret + + // This write is AUTOMATICALLY marked as secret + // (even though we're just writing "done") + rt.write(output_var, &"done".to_string()); + + Ok(()) + } +}); +``` + +If you need to write non-secret data after reading a secret, use `write_not_secret()`: + +```rust +rt.write_not_secret(output_var, &"done".to_string()); +``` + +### Best Practices for Secrets + +1. **Never use `ReadVar::from_static()` for secrets** - static values are encoded in plain text in the generated YAML +2. **Always use `write_secret()`** when writing sensitive data like tokens, passwords, or keys +3. 
**Minimize secret lifetime** - read secrets as late as possible and don't pass them through more variables than necessary
diff --git a/Guide/src/dev_guide/dev_tools/flowey/variables.md b/Guide/src/dev_guide/dev_tools/flowey/variables.md
new file mode 100644
index 0000000000..d342496d66
--- /dev/null
+++ b/Guide/src/dev_guide/dev_tools/flowey/variables.md
@@ -0,0 +1,101 @@
+# Variables
+
+Variables are flowey's mechanism for creating typed data dependencies between steps. When a node emits steps, it uses `ReadVar` and `WriteVar` to declare what data each step consumes and produces. This creates explicit edges in the dependency graph: if step B reads from a variable that step A writes to, flowey ensures step A executes before step B.
+
+## Claiming Variables
+
+Before a step can use a [`ReadVar`](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.ReadVar.html) or [`WriteVar`](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.WriteVar.html), it must **claim** it. Claiming serves several purposes:
+
+1. Registers that this step depends on (or produces) this variable
+2. Converts `ReadVar<T, VarNotClaimed>` to `ReadVar<T, VarClaimed>` (and likewise for `WriteVar`)
+3. Allows flowey to track variable usage for graph construction
+
+Variables can only be claimed inside step closures using the `claim()` method.
+
+**Nested closure pattern and related contexts:**
+
+```rust
+// Inside a SimpleFlowNode's process_request() method
+fn process_request(&self, request: Self::Request, ctx: &mut NodeCtx<'_>) {
+    // Assume a single Request provided an input ReadVar and output WriteVar
+    let input_var: ReadVar<String> = /* from one of the requests */;
+    let output_var: WriteVar<i32> = /* from one of the requests */;
+
+    // Declare a step (still build-time). This adds a node to the DAG.
+    ctx.emit_rust_step("compute length", |step| {
+        // step : StepCtx (outer closure, build-time)
+        // Claim dependencies so the graph knows: this step READS input_var, WRITES output_var.
+        let input_var = input_var.claim(step);
+        let output_var = output_var.claim(step);
+
+        // Return the runtime closure.
+        move |rt| {
+            // rt : RustRuntimeServices (runtime phase)
+            let input = rt.read(input_var); // consume value
+            let len = input.len() as i32;
+            rt.write(output_var, &len); // fulfill promise
+            Ok(())
+        }
+    });
+}
+```
+
+**Why the nested closure dance?**
+
+The nested closure pattern is fundamental to flowey's two-phase execution model:
+
+1. **Build-Time (Outer Closure)**: When flowey constructs the DAG, the outer closure runs to:
+    - Claim variables, which registers dependencies in the graph
+    - Determine what this step depends on (reads) and produces (writes)
+    - Allow flowey to determine execution order
+    - Return an inner closure that is invoked during the job's runtime
+2. **Runtime (Inner Closure)**: When the pipeline actually executes, the inner closure runs to:
+    - Read actual values from claimed `ReadVar`s
+    - Perform the real work (computations, running commands, etc.)
+    - Write actual values to claimed `WriteVar`s
+
+Two different context types are involved in this pattern:
+
+- [**`NodeCtx`**](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.NodeCtx.html): Used when emitting steps (during the build-time phase). Provides `emit_*` methods, `new_var()`, `req()`, etc.
+
+- [**`StepCtx`**](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.StepCtx.html): Used inside the outer step closure (still build-time). Provides access to `claim()` for variables, and basic environment info (`backend()`, `platform()`).
+
+The type system enforces this separation: `claim()` requires `StepCtx` (only available in the outer closure), while `read()`/`write()` require `RustRuntimeServices` (only available in the inner closure).
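This compile-time enforcement is an instance of Rust's type-state pattern. The following self-contained sketch uses plain Rust with no flowey types — `Var`, `Claimed`, and `NotClaimed` are hypothetical stand-ins for flowey's variables and claim-state markers — to show how the compiler rejects reads of unclaimed variables:

```rust
use std::marker::PhantomData;

// Hypothetical stand-ins for flowey's claim-state markers (both ZSTs).
struct NotClaimed;
struct Claimed;

// A drastically simplified stand-in for flowey's `ReadVar<T, C>`.
struct Var<T, C = NotClaimed> {
    value: T,
    _state: PhantomData<C>,
}

impl<T> Var<T, NotClaimed> {
    fn new(value: T) -> Self {
        Var { value, _state: PhantomData }
    }

    // Consuming `self` is the only way to obtain a `Var<T, Claimed>`,
    // mirroring how `claim()` is the only path to a claimed variable.
    fn claim(self) -> Var<T, Claimed> {
        Var { value: self.value, _state: PhantomData }
    }
}

impl<T: Clone> Var<T, Claimed> {
    // `read` only exists for claimed vars, so an unclaimed read is a
    // compile error rather than a runtime failure.
    fn read(&self) -> T {
        self.value.clone()
    }
}

fn main() {
    let v = Var::new(42);
    // v.read(); // would NOT compile: no `read` on Var<i32, NotClaimed>
    let v = v.claim();
    assert_eq!(v.read(), 42);

    // The state marker costs nothing at runtime: PhantomData is zero-sized.
    assert_eq!(
        std::mem::size_of::<Var<i32, Claimed>>(),
        std::mem::size_of::<i32>()
    );
    println!("claimed value: {}", v.read());
}
```

The same idea scales to flowey's real types, where the marker is supplied by `claim()` inside the outer closure.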
+
+## ClaimedReadVar and ClaimedWriteVar
+
+These are type aliases for claimed variables:
+
+- [`ClaimedReadVar`](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/type.ClaimedReadVar.html) = `ReadVar<T, VarClaimed>`
+- [`ClaimedWriteVar`](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/type.ClaimedWriteVar.html) = `WriteVar<T, VarClaimed>`
+
+Only claimed variables can be read/written at runtime.
+
+### Implementation Detail: Zero-Sized Types (ZSTs)
+
+The claim state markers [`VarClaimed`](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/enum.VarClaimed.html) and [`VarNotClaimed`](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/enum.VarNotClaimed.html) are zero-sized types (ZSTs) - they exist purely at the type level. This allows Rust to statically verify that all variables used in a runtime block have been claimed by that block.
+
+The type system ensures that `claim()` is the only way to convert from `VarNotClaimed` to `VarClaimed`, and this conversion can only happen within the outer closure where `StepCtx` is available.
+
+## Static Values vs Runtime Values
+
+Sometimes you know a value at build-time:
+
+```rust
+// Create a ReadVar with a static value
+let version = ReadVar::from_static("1.2.3".to_string());
+
+// This is encoded directly in the pipeline, not computed at runtime
+// WARNING: Never use this for secrets!
+```
+
+This can be used as an escape hatch when you have a Request (that expects a value to be determined at runtime), but in a given instance you know the value at build-time.
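As a sketch of that escape hatch (illustrative only: `some_node` and its `Request::Version` variant are hypothetical), a build-time constant can be handed to a node that normally expects a runtime-computed value:

```rust,ignore
// Satisfy a request that takes a ReadVar<String> without emitting a step.
let version = ReadVar::from_static("1.2.3".to_string());
ctx.req(some_node::Request::Version(version));
```

The downstream node only sees a `ReadVar<String>`; it cannot tell whether the value came from an earlier step or was fixed at build-time.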
+
+## Variable Operations
+
+`ReadVar` provides operations for transforming and combining variables:
+
+- **`map()`**: Transform a `ReadVar<T>` into a `ReadVar<U>`
+- **`zip()`**: Combine two ReadVars into `ReadVar<(T, U)>`
+- **`into_side_effect()`**: Convert `ReadVar<T>` to `ReadVar<SideEffect>` when you only care about ordering, not the value
+- **`depending_on()`**: Create a new ReadVar with an explicit dependency
+
+For detailed examples, see the [`ReadVar` documentation](https://openvmm.dev/rustdoc/linux/flowey/node/prelude/struct.ReadVar.html).
diff --git a/Guide/src/dev_guide/dev_tools/xflowey.md b/Guide/src/dev_guide/dev_tools/xflowey.md
index 8fc74b016c..f38dfcc6c9 100644
--- a/Guide/src/dev_guide/dev_tools/xflowey.md
+++ b/Guide/src/dev_guide/dev_tools/xflowey.md
@@ -1,7 +1,7 @@
 # cargo xflowey
 
 To implement various developer workflows (both locally, as well as in CI), the
-OpenVMM project relies on `flowey`: a custom, in-house Rust library/framework
+OpenVMM project relies on [`flowey`](./flowey/flowey.md): a custom, in-house Rust library/framework
 for writing maintainable, cross-platform automation.
 
 `cargo xflowey` is a cargo alias that makes it easy for developers to run
@@ -10,13 +10,17 @@
Some particularly notable pipelines: - `cargo xflowey build-igvm` - primarily dev-tool used to build OpenHCL IGVM files locally -- `cargo xflowey ci checkin-gates` - runs the entire PR checkin suite locally - `cargo xflowey restore-packages` - restores external packages needed to compile and run OpenVMM / OpenHCL -### `xflowey` vs `xtask` +## `xflowey` vs `xtask` In a nutshell: - `cargo xtask`: implements novel, standalone tools/utilities - `cargo xflowey`: orchestrates invoking a sequence of tools/utilities, without doing any non-trivial data processing itself + + +```admonish warning +While `cargo xflowey` technically has the ability to run CI pipelines locally (e.g., `cargo xflowey ci checkin-gates`), this functionality is currently broken and should not be relied upon. Use CI pipelines in their intended environments (Azure DevOps or GitHub Actions). [`GitHub issue tracking this`](https://github.com/microsoft/openvmm/issues/2322) +``` diff --git a/flowey/flowey_core/src/node.rs b/flowey/flowey_core/src/node.rs index cc7c64b27b..41607c2de0 100644 --- a/flowey/flowey_core/src/node.rs +++ b/flowey/flowey_core/src/node.rs @@ -67,6 +67,35 @@ pub mod user_facing { /// Helper method to streamline request validation in cases where a value is /// expected to be identical across all incoming requests. + /// + /// # Example: Request Aggregation Pattern + /// + /// When a node receives multiple requests, it often needs to ensure certain + /// values are consistent across all requests. 
This helper simplifies that pattern:
+    ///
+    /// ```rust,ignore
+    /// fn emit(requests: Vec<Request>, ctx: &mut NodeCtx<'_>) -> anyhow::Result<()> {
+    ///     let mut version = None;
+    ///     let mut ensure_installed = Vec::new();
+    ///
+    ///     for req in requests {
+    ///         match req {
+    ///             Request::Version(v) => {
+    ///                 // Ensure all requests agree on the version
+    ///                 same_across_all_reqs("Version", &mut version, v)?;
+    ///             }
+    ///             Request::EnsureInstalled(v) => {
+    ///                 ensure_installed.push(v);
+    ///             }
+    ///         }
+    ///     }
+    ///
+    ///     let version = version.ok_or(anyhow::anyhow!("Missing required request: Version"))?;
+    ///
+    ///     // ... emit steps using aggregated requests
+    ///     Ok(())
+    /// }
+    /// ```
     pub fn same_across_all_reqs<T: PartialEq>(
         req_name: &str,
         var: &mut Option<T>,
@@ -2539,9 +2568,96 @@ macro_rules! new_flow_node_base {
     };
 }
 
-/// TODO: clearly verbalize what a `FlowNode` encompasses
+/// A reusable unit of automation logic in flowey.
+///
+/// FlowNodes process requests, emit steps, and can depend on other nodes. They are
+/// the building blocks for creating complex automation workflows.
+///
+/// # The Node/Request Pattern
+///
+/// Every node has an associated **Request** type that defines what the node can do.
+/// Nodes receive a vector of requests and process them together, allowing for
+/// aggregation and conflict resolution.
+///
+/// # Example: Basic FlowNode Implementation
+///
+/// ```rust,ignore
+/// use flowey_core::node::*;
+///
+/// // Define the node
+/// new_flow_node!(struct Node);
+///
+/// // Define requests using the flowey_request! macro
+/// flowey_request! 
{
+///     pub enum Request {
+///         InstallRust(String), // Install specific version
+///         EnsureInstalled(WriteVar<SideEffect>), // Ensure it's installed
+///         GetCargoHome(WriteVar<PathBuf>), // Get CARGO_HOME path
+///     }
+/// }
+///
+/// impl FlowNode for Node {
+///     type Request = Request;
+///
+///     fn imports(ctx: &mut ImportCtx<'_>) {
+///         // Declare node dependencies
+///         ctx.import::();
+///     }
+///
+///     fn emit(requests: Vec<Self::Request>, ctx: &mut NodeCtx<'_>) -> anyhow::Result<()> {
+///         // 1. Aggregate and validate requests
+///         let mut version = None;
+///         let mut ensure_installed = Vec::new();
+///         let mut get_cargo_home = Vec::new();
+///
+///         for req in requests {
+///             match req {
+///                 Request::InstallRust(v) => {
+///                     same_across_all_reqs("version", &mut version, v)?;
+///                 }
+///                 Request::EnsureInstalled(var) => ensure_installed.push(var),
+///                 Request::GetCargoHome(var) => get_cargo_home.push(var),
+///             }
+///         }
+///
+///         let version = version.ok_or(anyhow::anyhow!("Version not specified"))?;
+///
+///         // 2. Emit steps to do the work
+///         ctx.emit_rust_step("install rust", |ctx| {
+///             let ensure_installed = ensure_installed.claim(ctx);
+///             let get_cargo_home = get_cargo_home.claim(ctx);
+///             move |rt| {
+///                 // Install rust with the specified version
+///                 // Write to all the output variables
+///                 for var in ensure_installed {
+///                     rt.write(var, &());
+///                 }
+///                 for var in get_cargo_home {
+///                     rt.write(var, &PathBuf::from("/path/to/cargo"));
+///                 }
+///                 Ok(())
+///             }
+///         });
+///
+///         Ok(())
+///     }
+/// }
+/// ```
+///
+/// # When to Use FlowNode vs SimpleFlowNode
+///
+/// **Use `FlowNode`** when you need to:
+/// - Aggregate multiple requests and process them together
+/// - Resolve conflicts between requests
+/// - Perform complex request validation
+///
+/// **Use [`SimpleFlowNode`]** when:
+/// - Each request can be processed independently
+/// - No aggregation logic is needed
 pub trait FlowNode {
-    /// TODO: clearly verbalize what a Request encompasses
+    /// The request type 
that defines what operations this node can perform.
+    ///
+    /// Use the [`crate::flowey_request!`] macro to define this type.
     type Request: Serialize + DeserializeOwned;
 
     /// A list of nodes that this node is capable of taking a dependency on.
diff --git a/flowey/flowey_core/src/pipeline.rs b/flowey/flowey_core/src/pipeline.rs
index 83dbd62765..e71cd843cc 100644
--- a/flowey/flowey_core/src/pipeline.rs
+++ b/flowey/flowey_core/src/pipeline.rs
@@ -1269,6 +1269,90 @@ pub enum PipelineBackendHint {
     Github,
 }
 
+/// Trait for types that can be converted into a [`Pipeline`].
+///
+/// This is the primary entry point for defining flowey pipelines. Implement this trait
+/// to create a pipeline definition that can be executed locally or converted to CI YAML.
+///
+/// # Example
+///
+/// ```rust,no_run
+/// use flowey_core::pipeline::{IntoPipeline, Pipeline, PipelineBackendHint};
+/// use flowey_core::node::{FlowPlatform, FlowPlatformLinuxDistro, FlowArch};
+///
+/// struct MyPipeline;
+///
+/// impl IntoPipeline for MyPipeline {
+///     fn into_pipeline(self, backend_hint: PipelineBackendHint) -> anyhow::Result<Pipeline> {
+///         let mut pipeline = Pipeline::new();
+///
+///         // Define a job that runs on Linux x86_64
+///         let _job = pipeline
+///             .new_job(
+///                 FlowPlatform::Linux(FlowPlatformLinuxDistro::Ubuntu),
+///                 FlowArch::X86_64,
+///                 "build"
+///             )
+///             .finish();
+///
+///         Ok(pipeline)
+///     }
+/// }
+/// ```
+///
+/// # Complex Example with Parameters and Artifacts
+///
+/// ```rust,ignore
+/// use flowey_core::pipeline::{IntoPipeline, Pipeline, PipelineBackendHint, ParameterKind};
+/// use flowey_core::node::{FlowPlatform, FlowPlatformLinuxDistro, FlowArch};
+///
+/// struct BuildPipeline;
+///
+/// impl IntoPipeline for BuildPipeline {
+///     fn into_pipeline(self, backend_hint: PipelineBackendHint) -> anyhow::Result<Pipeline> {
+///         let mut pipeline = Pipeline::new();
+///
+///         // Define a runtime parameter
+///         let enable_tests = pipeline.new_parameter_bool(
+///             "enable_tests",
+///             
"Whether to run tests",
+///             ParameterKind::Stable,
+///             Some(true) // default value
+///         );
+///
+///         // Create an artifact for passing data between jobs
+///         let (publish_build, use_build) = pipeline.new_artifact("build-output");
+///
+///         // Job 1: Build
+///         let build_job = pipeline
+///             .new_job(
+///                 FlowPlatform::Linux(FlowPlatformLinuxDistro::Ubuntu),
+///                 FlowArch::X86_64,
+///                 "build"
+///             )
+///             .with_timeout_in_minutes(30)
+///             .dep_on(|ctx| flowey_lib_hvlite::_jobs::example_node::Request {
+///                 output_dir: ctx.publish_artifact(publish_build),
+///             })
+///             .finish();
+///
+///         // Job 2: Test (conditionally run based on parameter)
+///         let _test_job = pipeline
+///             .new_job(
+///                 FlowPlatform::Linux(FlowPlatformLinuxDistro::Ubuntu),
+///                 FlowArch::X86_64,
+///                 "test"
+///             )
+///             .with_condition(enable_tests)
+///             .dep_on(|ctx| flowey_lib_hvlite::_jobs::example_node2::Request {
+///                 input_dir: ctx.use_artifact(&use_build),
+///             })
+///             .finish();
+///
+///         Ok(pipeline)
+///     }
+/// }
+/// ```
 pub trait IntoPipeline {
     fn into_pipeline(self, backend_hint: PipelineBackendHint) -> anyhow::Result<Pipeline>;
 }