Overview

Innovation Engine is a CLI (ie) that turns Markdown into executable, testable documentation. Every fenced shell block doubles as runnable code, so tutorials, demos, validations, and runbooks stay in sync with the narrative that describes them.

Executable Documentation wraps ordinary Markdown around shell scripts, leveraging whatever tooling is already available in the execution environment. Instead of copying commands into separate scripts, you elevate the documentation itself to be the source of truth.

Key Capabilities

  • Document once, reuse everywhere. Express intent, context, and commands in the same file—render it as Markdown, run it via ie, or publish it to a wiki without duplicating content.
  • Execute safely in multiple modes. Run documents interactively, as unattended scripts, or as tests that halt on validation failures.
  • Onboard with "learn mode." Pause at each heading or code block, review the narrative, and resume execution at your own pace.
  • Continuously validate results. Embed expected output blocks (exact match, fuzzy similarity, or regex) so CI pipelines can assert documentation accuracy automatically.
  • Extract portable scripts. Pull a pure shell script from any executable document when you need to embed the logic somewhere else.

Innovation Engine also powers richer host experiences. For example, Microsoft Azure Learn content and Azure Portal walkthroughs can run the exact same executable document—read-only in the browser, interactive in the portal, or automated in CI/CD.

Documentation Map

  • Start with docs/README.md for a curated index of tutorials, reference guides, and specs.
  • Walk through docs/helloWorldDemo.md to see executable documentation concepts end-to-end.
  • Use docs/Executable-Doc-Quickstart.md as a ready-to-run, timestamp-safe template for new scenarios.

Install Innovation Engine CLI

To install the Innovation Engine CLI, run the following commands. To install a specific version, set VERSION to the desired release number, such as "v0.1.3". You can find all releases on the project's GitHub releases page.

```shell
VERSION="latest"
wget -q -O ie https://github.com/Azure/InnovationEngine/releases/download/$VERSION/ie

# Setup permissions & move to the local bin
chmod +x ie
mkdir -p ~/.local/bin
mv ie ~/.local/bin
```
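Note that ~/.local/bin is not on PATH in every distribution. A small check in plain POSIX shell (nothing ie-specific) that creates the directory and adds it to PATH for the current session only when it is missing:

```shell
# Ensure ~/.local/bin exists and is on PATH for the current session;
# add the export line to your shell profile to make it permanent.
mkdir -p "$HOME/.local/bin"
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "~/.local/bin already on PATH" ;;
  *) export PATH="$HOME/.local/bin:$PATH"; echo "added ~/.local/bin to PATH" ;;
esac
```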

Build Innovation Engine from Source

Paste the following commands into the shell. This clones the Innovation Engine repository, installs its requirements, and builds the Innovation Engine executable.

```shell
git clone https://github.com/Azure/InnovationEngine
cd InnovationEngine
make build-ie
```

Now you can run the Innovation Engine tutorial with the following command:

```shell
./bin/ie execute tutorial.md
```

Install to ~/bin

If you want to install the Innovation Engine CLI directly to your home bin directory (so it is available from any shell on your PATH), use:

```shell
make install-ie-home
```

This will build and install the ie binary to ~/bin/ie. Make sure ~/bin is in your $PATH by adding this to your shell profile:

```shell
export PATH="$HOME/bin:$PATH"
```

Building a Container from Source

```shell
docker build -t ie .
```

Once built you can run the container and connect to it. Innovation Engine will automatically run an introductory document when you execute this command.

```shell
docker run -it ie
```

You can override the start command if you want to take control immediately with:

```shell
docker run -it ie /bin/sh
```

Testing Innovation Engine

Innovation Engine is self-documenting: all of our documentation is written to be executable. Since Innovation Engine can test the results of an execution against the intended results, our documentation is also part of our test suite. Testing against all of our documentation is as easy as:

```shell
make test-docs
```

For a fast smoke test of the core engine behaviors (code blocks, variables, fuzzy matching, streaming output, and the recursive prerequisite workflow), you can run the master testing scenario:

```shell
./bin/ie execute scenarios/testing/test.md
```

If you make any changes to the IE code (see Contributing below), we encourage you to run the full test suite before issuing a PR.

To manually test a document, it is best to run it in interactive mode (see below). This mode provides an interactive console for reading and executing the contents of executable documentation.

How to Use Innovation Engine

The general format to run an executable document is: `ie <MODE_OF_OPERATION> <MARKDOWN_FILE>`

Modes of Operation

Today, executable documentation can be run in three modes of operation:

  • Interactive: displays the descriptive text of the tutorial and pauses at code blocks and headings to allow user interaction: `ie interactive tutorial.md`

  • Test: runs the commands and then verifies that the output is sufficiently similar to the expected results (recorded in the markdown file) to be considered correct: `ie test tutorial.md`

  • Execute: reads the document and executes all of the code blocks without pausing for input or testing output; essentially runs a markdown file as a script: `ie execute tutorial.md`

Use Innovation Engine with any URL

Documentation does not need to be stored locally in order to run IE with it. With v0.1.3 and greater, you can run ie execute, ie interactive, and ie test with any URL that points to a public markdown file, including raw GitHub URLs.

Prerequisite Validation Workflow

Executable documents often reference additional markdown prerequisites. Innovation Engine now validates those prerequisites before it executes their setup commands:

  • The engine scans the Prerequisites section of a document and, for each linked markdown file, runs a verification block (the code under the ## Verification heading) before any body commands.
  • Verification success writes a marker file (/tmp/prereq_<slug>_skip) and surfaces a Validating Prerequisite banner in the console; when the marker exists the prerequisite body is skipped so you do not re-run work a user already satisfied.
  • Verification failure removes the marker, prints Executing Prerequisite, and runs the prerequisite body commands so the document can install or configure the missing dependency.
  • Narrative text that introduced the prerequisites is rendered once—between the scenario title and step headings—so instructions stay visible even when the body is skipped.
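The marker-file flow above can be sketched in plain shell; the slug name here is invented for illustration, and the real logic lives inside the engine:

```shell
# Hypothetical illustration of the prerequisite skip logic described above.
slug="example-prereq"
marker="/tmp/prereq_${slug}_skip"
rm -f "$marker"            # start clean for the illustration

run_prereq() {
  if [ -f "$marker" ]; then
    echo "Validating Prerequisite: body skipped"
  else
    echo "Executing Prerequisite: running body commands"
    touch "$marker"        # verification passed; remember it
  fi
}

run_prereq   # first run: executes the body and writes the marker
run_prereq   # second run: finds the marker and skips the body
```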

Authoring tips:

  • Keep prerequisite intent and checks under a ## Verification heading; the engine reuses the original subheadings in console descriptions so keep them meaningful.
  • Place installation or configuration commands outside the ## Verification section so they run only when required.
  • When you need idempotent shell logic inside the body, wrap it in your own guard clauses; the engine will already prevent execution once verification passes, but defensive scripting keeps the document reliable when run manually.
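For the last tip, a minimal guard-clause sketch (demo-app is a made-up name) that stays safe when the same block is run twice:

```shell
# Create the directory only when it is absent, so re-running is harmless.
target="$HOME/.config/demo-app"
if [ -d "$target" ]; then
  echo "config directory already present"
else
  mkdir -p "$target"
  echo "created config directory"
fi
```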

Use Executable documentation for Automated Testing

One of the core benefits of executable documentation is the ability to run automated testing on markdown files. This can be used to ensure the freshness of content.

To do this, combine Innovation Engine's executable documentation syntax with GitHub Actions.

To test whether a command or action ran correctly, executable documentation needs something to compare the results against. This requirement is met with result blocks.

Result Blocks

Result blocks are distinguished in executable documentation by a custom expected_similarity comment tag followed by a code block. For example:

```markdown
<!--expected_similarity=0.8-->
```

The example above is shown inside a code fence so that the comment is visible; in a rendered document the expected_similarity tag is completely invisible.

The expected similarity value is a floating point number between 0 and 1 which specifies how closely the output needs to match the results block: 0 means no similarity, 1 means an exact match.

Note: it may take a little trial and error to find the right value for expected_similarity.

The numerical similarity value is quick and easy to use, but it can be inaccurate. A more reliable alternative is to use a regular expression. For example, the following will match both hello and goodbye messages:

```markdown
<!-- expected_similarity="^(Hello|Goodbye) World$" -->
```

When you provide a quoted expected_similarity value, the engine treats it as a regular expression. Any environment variables referenced inside the pattern are expanded before the regex runs, and the failure message echoes both the original pattern and the concrete values (for example, ^Hello $GREETING followed by (where GREETING=RegEx World)). Only exported variables (or ones loaded from ie env-config) participate in that expansion—shell-local assignments such as GREETING=value do not escape the subshell and therefore cannot show up in the expectation. See scenarios/testing/fuzzyMatchTest.md for end-to-end samples covering fuzzy thresholds, regexes, and env-aware comparisons.
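Before pasting a pattern into an expected_similarity tag, it can be sanity-checked with plain grep -E; this is ordinary shell, not an ie feature:

```shell
# The same pattern the documentation uses, exercised against sample output.
pattern='^(Hello|Goodbye) World$'
printf 'Hello World\n'   | grep -Eq "$pattern" && echo "greeting matches"
printf 'Goodbye World\n' | grep -Eq "$pattern" && echo "farewell matches"
printf 'Hi World\n'      | grep -Eq "$pattern" || echo "no match, as expected"
```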

Environment Variables

You can pass in variable declarations as an argument to the ie CLI command using the --var parameter. For example:

```shell
ie execute tutorial.md --var REGION=eastus
```

CLI argument variables override environment variables declared within the markdown document, which override preexisting environment variables.

Local variables declared within the markdown document will override CLI argument variables.

Local variables (ex: REGION=eastus) will not persist across code blocks. It is recommended to instead use environment variables (ex: export REGION=eastus).
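The scoping rule can be seen with plain shells; REGION here is just the example variable from above, and each child shell stands in for a code block:

```shell
unset REGION
sh -c 'REGION=eastus'               # local assignment, lost when that shell exits
echo "after local assignment: '${REGION:-unset}'"

export REGION=eastus                # exported, visible to child processes
sh -c 'echo "in child process: $REGION"'
```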

Logging and Verbose Output

Innovation Engine provides two related but distinct controls over runtime insight:

  • --log-level controls what severities are written by the structured logger to the persistent log file ie.log (falling back to stdout if file creation fails). Accepted values: trace, debug, info, warn, error, fatal.
  • --verbose is a convenience flag for richer, human‑facing console output during execution (e.g. working directory context and full command stdout blocks rendered inline).

Why both? They serve different audiences:

  • Use --log-level when you need deeper diagnostics for post‑run analysis or CI logs without cluttering interactive output for end users.
  • Use --verbose when you want more immediate context while stepping through a scenario but don’t need low‑level trace events persisted.

Overlap: Running with --log-level=debug (or trace) implicitly emits some verbose contextual lines even if --verbose is not set. This is intentional for developer diagnostics. If you want concise console output but detailed logs, choose --log-level=info (or higher) without --verbose.

Quick examples:

```shell
# Minimal console, detailed file logging
ie execute tutorial.md --log-level=debug

# Rich interactive console context, default debug logging
ie interactive tutorial.md --verbose

# Max diagnostics (console + file)
ie execute tutorial.md --verbose --log-level=trace
```

Tip: Keep routine automation at --log-level=info without --verbose to reduce noise; escalate only when investigating issues.

You can also redirect the persistent log file with --log-path /some/output/dir/ie.log (or by setting the IE_LOG_PATH environment variable). Innovation Engine rotates the last five log snapshots (ie.log through ie.log.4) automatically, deleting the oldest file before creating a new one, so long-lived sessions and CI runs do not grow the log directory unbounded.

Setting Up GitHub Actions to use Innovation Engine

After documentation is set up to take advantage of automated testing, a GitHub Actions workflow will need to be created to run the tests on a recurring basis. The workflow simply creates a basic Linux container, installs Innovation Engine, and runs executable documentation in Test mode on whatever markdown files are specified.

It is important to note that if you require any specific access or CLI tools not included in standard bash, they will need to be installed in the container. The following example shows how this may be done for a document which runs Azure commands.

```yaml
name: 00-testing

on:
  push:
    branches:
    - main

  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}
    - name: Deploy
      env:
        AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS }}
        GITHUB_SHA: ${{ github.sha }}
      run: |
        cd $GITHUB_WORKSPACE/
        git clone https://github.com/Azure/InnovationEngine
        cd InnovationEngine
        make build-ie
        cp ../articles/quick-create-cli.md README.md
        ./bin/ie test README.md
```

Authoring Documents

Authoring documents for use in Innovation Engine is no different from writing high quality documentation for reading. However, it does force you to follow good practices and can therefore sometimes feel a little too involved. That is, every edge case needs to be accounted for so that automated testing will pass reliably. We are therefore working on tools to help you in the authoring process.

These tools are independent of Innovation Engine; however, if you build a container from source, they will be included in that container. To use them you will need an Azure OpenAI key (or an OpenAI key if you prefer); be sure to add it in the command below.

```shell
docker run -it \
  -e AZURE_OPENAI_API_KEY=$AZURE_OPENAI_API_KEY \
  -e AZURE_OPENAI_ENDPOINT=$AZURE_OPENAI_ENDPOINT \
  ie /bin/sh -c "python AuthoringTools/ada.py"
```

Contributing

This is an open source project. Don't keep your code improvements, features and cool ideas to yourself. Please issue pull requests against our GitHub repo.

Be sure to use our Git pre-commit script to test your contributions before committing. Simply run:

```shell
make test-docs
```

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

About

An experiment in simplicity for complex environments.
