Founder & Research Architect — Waveframe Labs
Governed AI–Human Research · Reproducibility · Scientific Integrity
I design and maintain governed, deterministic AI–human research systems focused on transparent, reproducible, and falsifiable science.
My work centers on three connected layers of the Aurora research ecosystem:
- ARI — Aurora Research Initiative, the institutional governance and metadata framework that defines epistemic legitimacy
- AWO — Aurora Workflow Orchestration, the formal methodology for governed AI–human research
- CRI-CORE — the deterministic execution and constraint-enforcement runtime that implements AWO’s rules
These frameworks provide the backbone for open-science case studies, including Waveframe v4.0 (cosmology) and the Societal Health Simulator (SHS; applied systems-science modeling).
I treat reproducibility and governance as first-class research objects, not afterthoughts.
Practical commitments:
- Replayability: Any published result should be re-runnable from code + metadata alone.
- Determinism: Given the same inputs and environment, workflows should converge to the same artifacts.
- Provenance: Every artifact must carry an auditable trail of decisions, versions, and model interactions.
- Governance before trust: If a process cannot be governed and constrained, its outputs are not scientifically trustworthy.
If research cannot be replayed, audited, and verified — it doesn’t count.
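To make the replayability and provenance commitments above concrete, here is a minimal sketch of the idea, assuming a hypothetical manifest file and artifact layout rather than the actual Aurora or CRI-CORE formats: a published run is reduced to content hashes of its inputs and artifacts, so a re-run can be checked byte-for-byte against the record.

```python
# Hypothetical sketch: verifying that a re-run reproduces published artifacts.
# File names ("run_manifest.json", "artifacts/") are illustrative only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of a single file, used as the identity of an artifact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(inputs: list[Path], artifacts: list[Path]) -> dict:
    """Record what a replay needs: input hashes and artifact hashes.
    A real provenance record would also carry versions, environment details,
    and model-interaction logs."""
    return {
        "inputs": {str(p): sha256_of(p) for p in inputs},
        "artifacts": {str(p): sha256_of(p) for p in artifacts},
    }

def verify_replay(published_manifest: dict, rerun_artifacts: list[Path]) -> bool:
    """Determinism check: the same inputs and environment must reproduce
    byte-identical artifacts."""
    rerun = {str(p): sha256_of(p) for p in rerun_artifacts}
    return rerun == published_manifest["artifacts"]

if __name__ == "__main__":
    manifest = json.loads(Path("run_manifest.json").read_text())
    ok = verify_replay(manifest, sorted(Path("artifacts").glob("*")))
    print("replay verified" if ok else "replay FAILED: artifacts differ")
```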
The Aurora stack is intentionally layered: governance → method → runtime → case studies.
┌─────────────────────────────────────────────┐
│ ARI — Aurora Research Initiative            │
│ Governance, policy, and metadata standards  │
└───────────────────────────────┬─────────────┘
                                │
┌───────────────────────────────┴─────────────┐
│ AWO — Aurora Workflow Orchestration         │
│ Formal method for governed AI–human flows   │
└───────────────────────────────┬─────────────┘
                                │
┌───────────────────────────────┴─────────────┐
│ CRI-CORE — Execution & Enforcement Runtime  │
│ Deterministic runs, constraints, integrity  │
└───────────────────────────────┬─────────────┘
                                │
┌───────────────────────────────┴─────────────┐
│ Case Studies / Applied Systems              │
│ Waveframe v4.0 • Societal Health Simulator  │
└─────────────────────────────────────────────┘
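As an illustration of this layering, and only as a hypothetical sketch (the classes and constraint names below are invented for this example; the real ARI, AWO, and CRI-CORE interfaces live in the repositories linked below), the runtime layer can be pictured as refusing to execute any workflow step whose declared metadata violates a governance constraint:

```python
# Hypothetical sketch of the layering, not the actual ARI/AWO/CRI-CORE APIs:
# governance supplies constraints, the method defines steps, and the runtime
# refuses to execute any step whose declared metadata violates a constraint.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Constraint:
    name: str
    check: Callable[[dict], bool]   # True if the step's metadata complies

@dataclass
class Step:
    name: str
    metadata: dict                  # provenance: versions, inputs, model usage
    run: Callable[[], object]

@dataclass
class GovernedRuntime:
    constraints: list[Constraint] = field(default_factory=list)

    def execute(self, step: Step) -> object:
        # Enforcement happens before execution: an ungoverned step never runs.
        for c in self.constraints:
            if not c.check(step.metadata):
                raise RuntimeError(
                    f"step '{step.name}' rejected by constraint '{c.name}'"
                )
        return step.run()

# Example: require a pinned environment and a declared model-interaction log.
runtime = GovernedRuntime(constraints=[
    Constraint("pinned-environment", lambda m: "environment_hash" in m),
    Constraint("model-interactions-logged", lambda m: "model_log" in m),
])
artifact = runtime.execute(Step(
    name="fit-cosmology-model",
    metadata={"environment_hash": "sha256-of-lockfile", "model_log": "interactions.jsonl"},
    run=lambda: "resulting artifact",
))
```

The point of the sketch is the ordering: constraint checks run before the step does, so "governance before trust" is a property of the execution path rather than a policy statement.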
Institutional governance and metadata framework for reproducible AI–human research.
🔗 https://github.com/Waveframe-Labs/Aurora-Research-Initiative
Concept DOI: 10.5281/zenodo.17743096
Formal methodology for transparent, governed, human-in-the-loop research workflows.
🔗 https://github.com/Waveframe-Labs/Aurora-Workflow-Orchestration
Concept DOI: 10.5281/zenodo.17013612
Deterministic execution and constraint-enforcement engine implementing AWO rules.
🔗 https://github.com/Waveframe-Labs/CRI-CORE
Cosmology case study demonstrating governed reproducibility in scientific modeling.
🔗 https://github.com/Waveframe-Labs/Waveframe-v4.0
Concept DOI: 10.5281/zenodo.16872199
Applied systems-science reproducibility testbed for sociotechnical modeling.
🔗 https://github.com/Waveframe-Labs/Societal-Health-Simulator
Concept DOI: 10.5281/zenodo.17258419
- governed AI–human research workflows
- provenance and metadata architectures
- institutional research governance
- reproducible computational science
- model auditing and verification
- applied cosmology and systems modeling
📧 swright@waveframelabs.org
🌐 https://waveframelabs.org
🧭 ORCID: https://orcid.org/0009-0006-6043-9295


