This document defines the required contract for container images used by
west-env and explains how those images relate to the west workspace on the
host.
Containers are an implementation detail used to provide a reproducible, portable build environment for Zephyr. They are not workspaces and must not contain project state.
If an image violates this contract, west-env behavior is undefined.
The container provides the environment. The workspace provides the project.
The container image exists to provide:
- a consistent toolchain
- a complete set of Zephyr build dependencies
- a stable execution environment for CI and developers
The workspace remains host-owned and is mounted into the container at runtime.
The image must include the native build tools required by Zephyr, including:
- CMake
- Ninja
- GNU build tools such as `gcc` and `make`
- Device Tree Compiler (`dtc`)
- common utilities such as `git`, `wget`, and `file`
These are infrastructure dependencies, not workspace concerns.
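A doctor-style check inside the container can verify this tool set. A minimal sketch, assuming a POSIX shell; `check_tools` is a hypothetical helper name, not part of west-env:

```shell
#!/bin/sh
# Sketch: verify the contract's required native tools are on PATH.
# check_tools is a hypothetical helper, not part of west-env.

check_tools() {
  # Prints each missing tool name; prints nothing when all are present.
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "$tool"
  done
}

missing=$(check_tools cmake ninja gcc make dtc git wget file)
if [ -n "$missing" ]; then
  printf 'missing required tools:\n%s\n' "$missing"
else
  echo 'all required tools present'
fi
```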
The image must include a compatible Zephyr SDK and expose it via:
```shell
ZEPHYR_TOOLCHAIN_VARIANT=zephyr
ZEPHYR_SDK_INSTALL_DIR=/opt/zephyr-sdk
```

The SDK version must be:
- explicitly pinned
- documented in the image
- compatible with the Zephyr version used by the workspace
The container is the sole owner of the toolchain.
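One way to satisfy the pinning requirement is to install the SDK during the image build. A sketch of a Dockerfile `RUN` step, not runnable as-is: the version, architecture, and install path are assumptions to adapt to your Zephyr release:

```shell
# Sketch of a Dockerfile RUN step that installs a pinned Zephyr SDK.
# SDK_VERSION and ARCH are assumptions; pin them to match your Zephyr release.
SDK_VERSION=0.16.8   # hypothetical pin; document it in the image
ARCH=x86_64
wget -q "https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${SDK_VERSION}/zephyr-sdk-${SDK_VERSION}_linux-${ARCH}.tar.xz"
tar -xf "zephyr-sdk-${SDK_VERSION}_linux-${ARCH}.tar.xz" -C /opt
mv "/opt/zephyr-sdk-${SDK_VERSION}" /opt/zephyr-sdk
rm "zephyr-sdk-${SDK_VERSION}_linux-${ARCH}.tar.xz"
/opt/zephyr-sdk/setup.sh -t all -c   # register toolchains and the CMake package
```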
The container uses system Python, not a Python virtual environment.
The image must include:
- `python3`
- `pip`
- all Python dependencies required by the supported Zephyr version
The authoritative source of Python dependencies is:
`zephyr/scripts/requirements.txt`
To avoid dependency drift, the image should:
- temporarily clone the matching Zephyr version during image build
- install Python dependencies from `scripts/requirements.txt`
- remove the temporary clone
Runtime installation of Python packages is not allowed.
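The clone-install-remove steps above can be sketched as an image-build fragment (e.g. a Dockerfile `RUN` block); it is not runnable as-is, and the version pin shown is an assumption:

```shell
# Sketch of an image-build step that installs Python dependencies from the
# matching Zephyr release without keeping the clone in the image.
# ZEPHYR_VERSION is an assumption; pin it to your release.
ZEPHYR_VERSION=v3.7.0   # hypothetical pin
git clone --depth 1 --branch "$ZEPHYR_VERSION" \
    https://github.com/zephyrproject-rtos/zephyr.git /tmp/zephyr
pip3 install --no-cache-dir -r /tmp/zephyr/scripts/requirements.txt
rm -rf /tmp/zephyr
```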
The image must have west installed system-wide and available on PATH.
The installed version must meet or exceed the minimum version required by the
target Zephyr release.
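An image build or doctor check might enforce the minimum like this; a sketch only, where `version_ge` is our helper name and the minimum shown is an assumed value, not taken from any Zephyr release:

```shell
#!/bin/sh
# Sketch: check that the installed west meets a minimum version.
# version_ge is a hypothetical helper; MIN_WEST is an assumed value.

version_ge() {
  # Succeeds when version $1 >= version $2 (relies on sort -V).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

MIN_WEST=1.2.0   # assumption: take the real minimum from the Zephyr docs
installed=$(west --version 2>/dev/null | awk '{print $NF}' | sed 's/^v//')
if version_ge "${installed:-0}" "$MIN_WEST"; then
  echo "west $installed OK (>= $MIN_WEST)"
else
  echo "west ${installed:-not found} does not meet minimum $MIN_WEST"
fi
```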
The container image must not:
- contain a west workspace
- contain `.west/`, `zephyr/`, or `modules/`
- permanently clone or pin Zephyr source
- create or activate a Python virtual environment
- modify, generate, or assume workspace layout
- persist build artifacts or mutable state
All workspace state is owned by the host and mounted into the container.
When west-env executes commands in a container:
- the west workspace root, meaning the directory containing `.west/`, is mounted at `/work`
- the container working directory is set to the same relative subdirectory the user was in on the host
This guarantees:
- `west build` always runs inside a valid workspace
- nested workspace layouts are supported
- relative paths behave identically on host and container
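The mount and working-directory rules can be sketched as follows; `rel_to_root` and the example paths are illustrative, not west-env internals:

```shell
#!/bin/sh
# Sketch: derive the container mount and working directory from the host
# location. rel_to_root is a hypothetical helper, not part of west-env.

rel_to_root() {
  # Prints $2 relative to workspace root $1 (POSIX paths, no trailing slash).
  case "$2" in
    "$1")   echo "." ;;
    "$1"/*) echo "${2#"$1"/}" ;;
    *)      return 1 ;;   # caller is outside the workspace
  esac
}

# Example with an assumed workspace root and a nested working directory.
WS_ROOT=/home/dev/ws
rel=$(rel_to_root "$WS_ROOT" "$WS_ROOT/app/boards")
echo "docker run -v $WS_ROOT:/work -w /work/$rel IMAGE west build ..."
```

Because the working directory is re-rooted under `/work` at the same relative offset, relative paths typed on the host resolve identically inside the container.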
The host owns:
- `.west/`
- `west.yml`
- `west-env.yml`
- `.venv/`
- `zephyr/`
- `modules/`
- source changes
- build output
The container owns:
- system Python for build dependencies
- `west`
- the Zephyr SDK or other required toolchains
- CMake, Ninja, `dtc`, Git, and other build tools
This separation is intentional and required.
- Docker Desktop, preferably with a WSL2 backend
- `west-env` installed in the workspace `.venv`
- no requirement for Python, CMake, or an SDK on the host when using container mode
- do not pass Windows paths as container working directories manually
- let `west-env` handle mounting and working-directory selection
- expect native Windows filesystem mounts to be slower than WSL2 filesystem mounts
From the workspace root:
```shell
scripts\bootstrap.cmd
scripts\shell.cmd
west env doctor
west env build -b native_sim zephyr\samples\hello_world
```

- Docker or Podman
- `west-env` installed in the workspace `.venv`
From the workspace root:
```shell
./scripts/bootstrap.sh
source .venv/bin/activate
west env doctor
west env build -b native_sim zephyr/samples/hello_world
```

On POSIX hosts, path translation is usually straightforward, and container execution should closely mirror native workspace-relative behavior.
The container engine is selected via `west-env.yml`:

```yaml
env:
  type: container
  container:
    engine: auto
    image: ghcr.io/bitconcepts/zephyr-build-env:latest
```

Behavior:

- `auto` selects Docker or Podman if available
- Docker is preferred when both are available
- engine detection failures are reported clearly
- missing images produce warnings instead of fatal errors during doctor checks
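The `engine: auto` selection can be approximated with a small detection function; a sketch in which `detect_engine` is our name, not west-env's:

```shell
#!/bin/sh
# Sketch of "engine: auto": prefer Docker, fall back to Podman, and fail
# loudly when neither is on PATH. detect_engine is a hypothetical helper.

detect_engine() {
  if command -v docker >/dev/null 2>&1; then
    echo docker
  elif command -v podman >/dev/null 2>&1; then
    echo podman
  else
    echo 'error: no container engine (docker or podman) found on PATH' >&2
    return 1
  fi
}
```

Running `detect_engine` prints the selected engine, or fails with a clear error, matching the reporting requirement above.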
The most important runtime validation scenarios for container-backed execution are:
- Docker on Windows with a workspace on the Windows filesystem
- Docker in WSL2 with a workspace in the Linux filesystem
- Docker on native Linux
- Podman on native Linux
- Podman rootless workspace mounting behavior
- at least one clean bootstrap + doctor + build flow per environment
This contract enforces:
- reproducible builds across developers and CI
- clear ownership of mutable state
- zero coupling between container images and workspace layout
- alignment with Zephyr and west mental models
- identical workflows on Windows and POSIX hosts
The container provides tools and dependencies only. The workspace provides source and state only.