Rebound is a framework that provides secure version control for data stored in the cloud. It can be integrated into applications or run as a standalone service to provide a trust anchor (i.e., a hardware-rooted, cryptographic source of truth over data integrity and freshness) for arbitrary cloud applications. It is built on Tessera transparency logs.

Rebound targets cloud applications running inside trusted execution environments that require high assurance of data integrity and freshness; Rebound itself is also designed to run in such an environment. For a quick start with our GitLab CI benchmarks, skip to the Quickstart below.

Reference:
```bibtex
@inproceedings{bvs+26,
  title={{It's a Feature, Not a Bug: Secure and Auditable State Rollback for Confidential Cloud Applications}},
  booktitle={{2026 IEEE Symposium on Security and Privacy (S\&P)}},
  author={Burke, Quinn and Vahldiek-Oberwagner, Anjo and Swift, Michael and McDaniel, Patrick},
  month={may},
  year={2026}
}
```
```
rebound/
├── librebound/   # Core library APIs
├── cmd/          # HTTP servers (prod-server, simple-server)
└── tests/        # Unit + end-to-end tests
```
- Docker (for micro-benchmarks) and Docker Compose (for macro-benchmarks)
- After cloning this repo, run `git submodule update --init --recursive` to initialize submodules, then run `go mod tidy` in all subdirectories to clean up Go module dependencies.
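The tidy step can be scripted instead of visiting each subdirectory by hand; a minimal sketch, assuming a POSIX shell and the Go toolchain on `PATH` (the `vendor` exclusion is a common convention, not something this repo requires):

```shell
# Run `go mod tidy` in every subdirectory that contains a go.mod,
# skipping vendored module copies. Run from the repo root.
tidy_all() {
  find . -name go.mod -not -path '*/vendor/*' -execdir go mod tidy \;
}
```
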
Two types:

- `librebound` unit tests: single-leaf updates, verification, heads-only/selective rollback with presence proofs, pruning/deauth gating, and recovery (double-increment reseal)
- `tests/` end-to-end tests: server API workflow and snapshot lifecycle
Run all tests:
```shell
docker run -it -v .:/rebound mcr.microsoft.com/devcontainers/go:1-1.24-bookworm
# (inside the container shell)
REBOUND_HOME=/rebound
cd $REBOUND_HOME && ./setup.sh
mkdir -p o
cd tests && ./test_all.sh
```

Note: This is a research prototype focused on clarity and verifiability; production deployments should add full TPM integration.
Run microbenchmarks by varying parameters such as the number of objects to version, how many updates to do, how many snapshots to take, etc.
```shell
docker run -it -v .:/rebound mcr.microsoft.com/devcontainers/go:1-1.24-bookworm
# (inside the container shell)
REBOUND_HOME=/rebound
cd $REBOUND_HOME && ./setup.sh
mkdir -p o
cd bench/microbench
# Run `go run microbench.go --help` for details on the flags; example:
go run microbench.go --sizes=25 --updates=1 --trials=1 --measure-storage=true --obj-bytes=1 --query-sample=25 --prune-keep=25 --skip-prune-bench=true --work=../../o/tessera --out=../../o/micro
python3 plot_bench.py ../../o/micro --prefix=test --obj-bytes=1 --prune-keep=25
```

This section describes a local macrobenchmark that measures CI/CD overhead using a GitLab CE instance, a Docker executor runner, and a local Rebound service. Everything is orchestrated via Docker Compose in `rebound/bench/macrobench`.
- GitLab CE (HTTP at `http://${GITLAB_EXTERNAL_HOST}:8089` on the host, `http://gitlab:8089` on the compose network), where `GITLAB_EXTERNAL_HOST=<IP_ADDR or localhost>`
- GitLab Runner (Docker executor) attached to the compose network
- Rebound prod-server (`rebound` service) reachable from CI jobs as `http://rebound:8080`
- A sample project with a GitLab CI pipeline that performs state updates and maintenance actions against Rebound
- Start the stack
```shell
cd rebound/bench/macrobench
GITLAB_EXTERNAL_HOST=<IP_ADDR or localhost> docker compose up -d
# Then visit http://${GITLAB_EXTERNAL_HOST}:8089 in a web browser
# Optional: wait for GitLab to be ready
curl -sf http://${GITLAB_EXTERNAL_HOST}:8089/users/sign_in >/dev/null && echo ready
```
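GitLab CE can take several minutes to come up, so the readiness check above is handy to wrap in a retry loop; a sketch (`wait_for_url` is our own helper name, and the 60 × 5 s budget is an arbitrary default):

```shell
# Poll a URL with curl until it answers successfully, then print "ready".
# Usage: wait_for_url <url> [attempts] [sleep_seconds]
wait_for_url() {
  url=$1; attempts=${2:-60}; pause=${3:-5}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -sf "$url" >/dev/null; then
      echo ready
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

For example: `wait_for_url "http://${GITLAB_EXTERNAL_HOST}:8089/users/sign_in"`.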
- Create and register a runner (new GitLab Runner workflow)
  - Open http://${GITLAB_EXTERNAL_HOST}:8089 and sign in as root.
    - Credentials: username `root`, password `xY-7ab_!zPq-R9` (or whatever value you set for `initial_root_password` in the Compose file)
  - Admin Area → CI/CD → Runners → New runner
  - Create an instance (or project) runner
  - Assign the tag `local` to the runner in the UI (Admin → Runners → your runner → Edit).
  - Copy the Runner authentication token (`glrt-…`)
Register and configure the runner via bootstrap (required):
```shell
cd rebound/bench/macrobench
./bootstrap.sh --runner-token glrt-PASTE_TOKEN_HERE
```

Make sure all Docker containers are running:

```shell
docker compose ps
```

Notes on bootstrap:
- The bootstrap script registers the runner inside the existing `gitlab-runner` container, sets `runners.docker.network_mode = "macrobench_ci_net"`, restarts the container, mints a PAT, creates/pushes the sample project, and writes `.macrobench.env`.
- With Runner v18+, tags and other properties are managed on the server side. Make sure to assign the tag `local` to the runner in the UI (Admin → Runners → your runner → Edit).
- It always mints a fresh Personal Access Token for the root user programmatically (via gitlab-rails), validates it against the API, and saves it in `./.macrobench.env` as `export GITLAB_PAT=...`.
- It creates or reuses a `sample-app` project and performs the initial Git push using the PAT (required for Git over HTTP).
- Make sure there isn't a mismatch with 'protected' branch scoping: if the pipeline runs on a protected branch, runners (instance-wide or project-wide) must be allowed to run on protected branches, otherwise they might not pick up the jobs.
- Run the macrobenchmark
```shell
DEBUG={0,1} USE_REBOUND={0,1} TRIALS=n ./run.sh  # finds the project, pushes commits, triggers maintenance jobs, writes results.csv
# You can monitor the active GitLab pipelines for the repo (e.g., http://${GITLAB_EXTERNAL_HOST}:8089/root/sample-app/-/pipelines) to see the run.sh script in action.
```

Plot macrobenchmark results (writes PDFs next to the CSVs, typically under `$REBOUND_HOME/o`):
```shell
cd bench/macrobench
# Outputs go to $REBOUND_HOME/o by default
python3 plot_results.py <macro results dir>
```

Notes on run.sh:
- If a job fails (i.e., the script reports a failure, or you observe a failure in the GitLab web interface), check that all containers are running (`docker compose ps`), check the runner logs (`docker logs gitlab-runner`), and check the job logs in the GitLab web interface for further details.
- It always sources `./.macrobench.env` if present, so values in that file (e.g., `GITLAB_PAT`) override any existing environment variables.
- CI jobs are tagged `local`; ensure your runner has the tag `local`.
```shell
cd rebound/bench/macrobench
docker compose down -v
```

The CI pipeline uses the following variables:

- `DEPLOYMENT_SERVER_URL` (default: `http://rebound:8080`)
- `REPOSITORY` (default: `$CI_PROJECT_PATH`)
- `ACTOR` (default: `$GITLAB_USER_LOGIN`)
- Job-specific variables as noted above
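Inside a job script, the defaults above can be applied with standard shell parameter expansion; a minimal sketch (only the variable names and default values come from the list above; the defaulting pattern itself is ours):

```shell
# Apply the documented defaults when the pipeline does not set the variables.
# Note: $CI_PROJECT_PATH and $GITLAB_USER_LOGIN are only populated inside CI jobs.
DEPLOYMENT_SERVER_URL="${DEPLOYMENT_SERVER_URL:-http://rebound:8080}"
REPOSITORY="${REPOSITORY:-$CI_PROJECT_PATH}"
ACTOR="${ACTOR:-$GITLAB_USER_LOGIN}"
```
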
- Auth errors on git push (HTTP Basic: Access denied): ensure the push URL uses `root:<PAT>@...` and that the PAT is valid and not expired. Bootstrap handles this automatically.
- Runner cannot reach services: ensure `runners.docker.network_mode = "macrobench_ci_net"` and that the runner container was restarted after config changes.
- GitLab URL mismatch: host access uses `http://${GITLAB_EXTERNAL_HOST}:8089`; services on the compose network use `http://gitlab:8089`.
- PAT verification: `curl -sS -o /dev/null -w "%{http_code}\n" --header "PRIVATE-TOKEN: $GITLAB_PAT" http://${GITLAB_EXTERNAL_HOST}:8089/api/v4/user` should return `200`.
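The `network_mode` check above can be scripted; a hedged sketch (`check_network_mode` is our own helper name; `/etc/gitlab-runner/config.toml` is the Runner's default config location):

```shell
# Verify that a gitlab-runner config.toml pins the expected Docker network.
# Usage: check_network_mode /path/to/config.toml
check_network_mode() {
  if grep -q 'network_mode = "macrobench_ci_net"' "$1"; then
    echo "network_mode OK"
  else
    echo "network_mode missing; rerun bootstrap and restart the runner" >&2
    return 1
  fi
}
```

Against the live stack, the same grep can be run in the container, e.g. `docker exec gitlab-runner grep network_mode /etc/gitlab-runner/config.toml`.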
- This benchmark assumes you are running the local Docker Compose stack that includes the `rebound` service on the same Docker network as GitLab Runner, so `http://rebound:8080` is reachable from CI jobs.
- Our examples do not use Sigstore/cosign for image signing, as Rebound already authenticates pipeline outputs (in addition to other things).
- We have a Tessera submodule because we needed to modify `minCheckpointInterval` in `tessera/storage/posix/files.go` to be lower than the default of 1 second.
rebound_api.go, we also make sure to set theintervalwhen callingWithCheckpointInterval(which sets the timer tick for kicking off checkpointing events) and set themaxAgewhen callingWithBatching(which sets the time limit for leaf sequencing, which is related to I/O operations, separate from checkpointing).
