Adaptive Sensor Testing (AST) UNDER CONSTRUCTION

Worked example: producing team-conforming test artifacts under domain-specific correctness constraints.

IMPORTANT: Better outcomes come less from better prompting than from better specification of priors, skills, constraints, and evaluation.

Separation of Concerns

This repository separates two concerns:

  1. producing team-conforming artifacts (via ACS + AO + ATD)
  2. determining domain correctness (via domain priors and evaluation scenarios)

Priors

  1. adaptive-conformance-specification # foundational (ACS)
  2. adaptive-tool-discovery # domain skill (ATD)
  3. adaptive-onboarding # conventions (AO)
  • ACS: behavioral constraints (how the agent operates)
  • AO: team conventions (how outputs must be structured)
  • ATD: tool capabilities (what can be invoked)

Additional:

  • Domain context: defines correctness (in this system)

Scenarios

Each scenario evaluates whether the agent can detect anomalies that are not captured by existing tests.

  1. Basic: clean batch, generate missing tests
  2. Drift detection: batch with injected calibration drift at sample 400, agent must find it
  3. Multi-sensor: correlated array where one sensor diverges from peers
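
As an illustration of scenario 2, the drift case can be sketched in plain Python. The helper names, thresholds, and signal shape below are hypothetical, not the repo's actual generator or processor API:

```python
import random

def make_readings(n=800, drift_at=400, drift_rate=0.01, seed=1):
    """Simulate PTAT-style readings with calibration drift injected at drift_at."""
    rng = random.Random(seed)
    readings = []
    for i in range(n):
        value = 25.0 + rng.gauss(0, 0.1)          # nominal level plus small noise
        if i >= drift_at:
            value += (i - drift_at) * drift_rate  # slow linear drift after onset
        readings.append(value)
    return readings

def detect_drift(readings, window=100, threshold=0.5):
    """Flag drift when a trailing window's mean departs from the baseline mean."""
    baseline = sum(readings[:window]) / window
    for start in range(window, len(readings) - window):
        mean = sum(readings[start:start + window]) / window
        if abs(mean - baseline) > threshold:
            return start  # first window start where the mean has shifted
    return None

print(detect_drift(make_readings()))  # prints an index near the injected onset at 400
```

A real detector would need tuning, but the scenario's shape is the same: the agent must notice that the batch statistics change partway through, not merely that individual samples are in range.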

Project Organization

adaptive-sensor-testing/
  SKILL.md              # spec for generating tests and anomaly reports
  MANIFEST.toml
  DECISIONS.md
  LICENSE
  .agent/
    ao-config.toml
    ao-config-python.toml
    ao-domain.toml      # PTAT behavior, expected ranges, anomaly definitions
  evaluation/
    rubric.md
    scenarios/
      ptat-basic/       # single sensor, basic test generation
        prompt.md
        notes.md
        .agent/
      drift-detection/  # time-series anomaly across multiple readings
        prompt.md
        notes.md
        .agent/
      multi-sensor/     # correlated readings across sensor array
        prompt.md
        notes.md
        .agent/
    local/              # gitignored; proprietary test cases
  src/
    sensor_sim/
      __init__.py
      generator.py      # PTAT batch data generator
      processor.py      # anomaly detection, drift flagging
      models.py         # SensorReading, BatchResult types
  tests/
    test_generator.py   # partial agent fills gaps
    test_processor.py   # partial agent fills gaps
  data/
    sample_batch.csv    # pre-generated batch for scenarios
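
The actual type definitions live in src/sensor_sim/models.py; as a rough sketch of the shapes implied above (field names here are assumptions, not the repo's real schema):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SensorReading:
    """One sample from one sensor (hypothetical fields)."""
    sensor_id: int
    sample_index: int
    value: float

@dataclass
class BatchResult:
    """A generated batch plus any anomaly findings (hypothetical shape)."""
    readings: list[SensorReading]
    findings: list[str] = field(default_factory=list)

batch = BatchResult(readings=[SensorReading(0, 0, 24.9), SensorReading(0, 1, 25.1)])
print(len(batch.readings), batch.findings)  # → 2 []
```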

Command Reference

The commands below are used in the workflow guide and are collected here for convenience. Follow the guide for full instructions.


In a machine terminal (open in your Repos folder)

After you get a copy of this repo in your own GitHub account, open a machine terminal in your Repos folder:

# Replace username with YOUR GitHub username.
git clone https://github.com/username/adaptive-sensor-testing

cd adaptive-sensor-testing
code .

In a VS Code terminal

uv self update
uv python pin 3.14
uv sync --extra dev --extra docs --upgrade

uvx pre-commit install
git add -A
uvx pre-commit run --all-files
# repeat if pre-commit modified any files
git add -A
uvx pre-commit run --all-files

# generate data
uv run python -m sensor_sim.data_maker

# confirm the CSV files were generated
uv run python -c "from pathlib import Path; [print(p, p.exists(), p.stat().st_size) for p in Path('data').glob('*.csv')]"

uv run python -c "import pandas as pd; print(pd.read_csv('data/sample_batch.csv').head()); print(pd.read_csv('data/sample_batch.csv').columns.tolist())"

# see what the analyzer returns for an injected spike
uv run python -c "from sensor_sim.generator import GeneratorConfig, generate_batch; from sensor_sim.processor import analyze_batch; r=generate_batch('spike', GeneratorConfig(batch_size=800, num_sensors=1, seed=1)); a=analyze_batch(r); print(a.findings)"

uv run python -c "from sensor_sim.generator import GeneratorConfig, generate_batch; from sensor_sim.processor import analyze_batch; r=generate_batch('multi_sensor_divergence', GeneratorConfig(batch_size=800, num_sensors=3, seed=1)); a=analyze_batch(r); print(a.findings)"

# inspect divergence
uv run python -c "from sensor_sim.generator import GeneratorConfig, generate_batch; r=generate_batch('multi_sensor_divergence', GeneratorConfig(batch_size=800, num_sensors=3, seed=1)); print(r[-20:])"
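
A peer-comparison pass is one simple way to implement the multi-sensor check. This sketch is illustrative only and is not the repo's analyze_batch implementation:

```python
import statistics

def divergent_sensors(samples, tolerance=1.0):
    """Given per-sensor value lists, flag sensors whose mean departs from the peer median."""
    means = [statistics.fmean(values) for values in samples]
    median = statistics.median(means)
    return [i for i, m in enumerate(means) if abs(m - median) > tolerance]

# Sensors 0 and 1 agree near 25.0; sensor 2 has diverged upward.
samples = [
    [25.0, 25.1, 24.9],
    [25.1, 25.0, 25.0],
    [27.5, 27.6, 27.4],
]
print(divergent_sensors(samples))  # → [2]
```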

# run pytest
uv run pytest --cov=sensor_sim --cov-report=term-missing --cov-report=xml

uv run ruff format .
uv run ruff check . --fix
uv run zensical build

npx markdownlint-cli2 "**/*.md" "#.venv" "#site" "#dist" "#node_modules"

git add -A
git commit -m "update"
git push -u origin main

When Working with a Chat Agent

At the start of a session:

Read https://raw.githubusercontent.com/adaptive-interfaces/adaptive-conformance-specification/main/SKILL.md
Apply the Adaptive Conformance Specification (ACS).
Follow its workflow and produce a full conformance record.

Then:

Inspect the repository first.
Then generate tests that conform to local patterns.
Produce a full conformance record.

Developer Maintenance

Format Markdown files with the Prettier extension. Then run:

npx markdownlint-cli2 "**/*.md" "#.venv" "#site" "#dist" "#node_modules"

uvx skillcheck SKILL.md --min-desc-score 75

License

MIT © 2026 Adaptive Interfaces
