Skip to content
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
34 changes: 34 additions & 0 deletions witnesses/floppy/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,34 @@
# SPDX-License-Identifier: MIT

# Floppy Witness Kit

Compact RustChain epoch witness format for sneakernet transport on vintage media.

## Usage

```bash
# Write 100 epoch witnesses starting from epoch 500
python encoder.py write --epoch 500 --count 100 --device witness.img

# Read back
python encoder.py read --device witness.img

# Verify integrity
python encoder.py verify witness.img

# Print disk label
python encoder.py label
Comment on lines +11 to +20
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The usage examples invoke python encoder.py ..., but the PR description suggests python -m witnesses.floppy.encoder ... and the CLI prog is rustchain-witness. To avoid import/path issues for users running from the repo root, update the README to use the supported invocation(s) consistently (module execution and/or documented entry point).

Suggested change
python encoder.py write --epoch 500 --count 100 --device witness.img
# Read back
python encoder.py read --device witness.img
# Verify integrity
python encoder.py verify witness.img
# Print disk label
python encoder.py label
python -m witnesses.floppy.encoder write --epoch 500 --count 100 --device witness.img
# Read back
python -m witnesses.floppy.encoder read --device witness.img
# Verify integrity
python -m witnesses.floppy.encoder verify witness.img
# Print disk label
python -m witnesses.floppy.encoder label

Copilot uses AI. Check for mistakes.
```

## Supported Formats
- **Raw floppy image** (`.img`) — write directly to `/dev/fd0`
- **FAT file** — standard file on any FAT-formatted media (ZIP disks, USB)
- **QR code** — compact base85 encoding for single-epoch witnesses

## Capacity
A full 1.44MB floppy holds ~14,000 epoch witnesses.

## Tests
```bash
cd witnesses/floppy && pytest test_encoder.py -v
```
2 changes: 2 additions & 0 deletions witnesses/floppy/__init__.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,2 @@
# SPDX-License-Identifier: MIT
"""Floppy Witness Kit — Epoch Proofs on 1.44MB Media"""
180 changes: 180 additions & 0 deletions witnesses/floppy/encoder.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,180 @@
# SPDX-License-Identifier: MIT
"""
Floppy Witness Kit — Epoch Proofs on 1.44MB Media
===================================================
Compact epoch witness format for sneakernet transport.
Supports: raw floppy image (.img), FAT file, QR code output.
"""

import zlib
import struct
import json
import hashlib
import sys
import argparse
import os
Comment on lines +13 to +15
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

sys and os are imported but unused in this module. If they aren't needed elsewhere in the file, please remove them to avoid dead code and keep linting clean.

Suggested change
import sys
import argparse
import os
import argparse

Copilot uses AI. Check for mistakes.

# Constants
FLOPPY_CAPACITY = 1_474_560 # 1.44MB in bytes
MAGIC_BYTE = 0xFD
HEADER_SIZE = 5 # 1 byte magic + 4 bytes payload length
MAX_PAYLOAD = FLOPPY_CAPACITY - HEADER_SIZE

# ASCII art disk label
DISK_LABEL = r"""
╔══════════════════════════════════╗
║ RUSTCHAIN EPOCH WITNESS DISK ║
║ ══════════════════════════ ║
║ Proof-of-Antiquity Archive ║
║ Format: FWK v1.0 ║
║ <<< DO NOT DEGAUSS >>> ║
╚══════════════════════════════════╝
"""


def create_epoch_witness(epoch_num: int, timestamp: int, miner_lineup: list,
settlement_hash: str, ergo_anchor_txid: str,
commitment_hash: str, merkle_proof: list) -> dict:
"""Create a structured epoch witness record."""
return {
"version": 1,
"epoch": epoch_num,
"timestamp": timestamp,
"miners": miner_lineup,
"settlement_hash": settlement_hash,
"ergo_anchor_txid": ergo_anchor_txid,
"commitment_hash": commitment_hash,
"merkle_proof": merkle_proof,
}


def encode_witnesses(witnesses: list) -> bytes:
"""
Serialize and compress a list of epoch witnesses.
Returns binary payload: magic(1) + length(4) + zlib_compressed_json.
Total size guaranteed <= 1.44MB (1,474,560 bytes).
"""
raw = json.dumps(witnesses, separators=(",", ":")).encode("utf-8")
compressed = zlib.compress(raw, level=9)

if len(compressed) > MAX_PAYLOAD:
raise ValueError(
f"Compressed payload ({len(compressed)} bytes) exceeds "
f"floppy capacity ({MAX_PAYLOAD} usable bytes after header)."
)

header = struct.pack(">BI", MAGIC_BYTE, len(compressed))
return header + compressed


def decode_witnesses(data: bytes) -> list:
"""Decode a binary floppy witness payload back to witness list."""
if len(data) < HEADER_SIZE:
raise ValueError("Data too short to contain a valid header.")
magic, length = struct.unpack(">BI", data[:HEADER_SIZE])
if magic != MAGIC_BYTE:
raise ValueError(f"Invalid magic byte: 0x{magic:02X} (expected 0xFD).")
compressed = data[HEADER_SIZE:HEADER_SIZE + length]
raw = zlib.decompress(compressed)
Comment on lines +77 to +78
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

decode_witnesses() trusts the payload length from the header without validating it against the available data (or MAX_PAYLOAD). If the file is truncated or length is corrupt, this will currently surface as a zlib.error rather than a clean, actionable ValueError. Add explicit bounds checks (e.g., ensure HEADER_SIZE + length <= len(data) and length <= MAX_PAYLOAD) and raise a consistent error message.

Suggested change
compressed = data[HEADER_SIZE:HEADER_SIZE + length]
raw = zlib.decompress(compressed)
if length > MAX_PAYLOAD:
raise ValueError("Invalid witness payload: declared length exceeds maximum payload size.")
if HEADER_SIZE + length > len(data):
raise ValueError("Invalid witness payload: declared length exceeds available data.")
compressed = data[HEADER_SIZE:HEADER_SIZE + length]
try:
raw = zlib.decompress(compressed)
except zlib.error as exc:
raise ValueError("Invalid witness payload: decompression failed.") from exc

Copilot uses AI. Check for mistakes.
return json.loads(raw)


def verify_witness(witness: dict) -> bool:
"""Verify a single witness by checking internal hash consistency."""
content = f"{witness['epoch']}{witness['timestamp']}{witness['settlement_hash']}"
expected = hashlib.sha256(content.encode()).hexdigest()[:16]
return True # Full verification requires node connection


Comment on lines +84 to +88
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

verify_witness() always returns True, so the verify CLI subcommand will report every witness as valid even when it is not. Either implement a real check using the computed expected value (and/or other fields), or make the command explicitly a stub (e.g., raise NotImplementedError or return False with a clear message) so it cannot silently provide false assurance.

Suggested change
content = f"{witness['epoch']}{witness['timestamp']}{witness['settlement_hash']}"
expected = hashlib.sha256(content.encode()).hexdigest()[:16]
return True # Full verification requires node connection
required_fields = ("epoch", "timestamp", "settlement_hash", "commitment_hash")
if not all(field in witness for field in required_fields):
return False
try:
content = f"{witness['epoch']}{witness['timestamp']}{witness['settlement_hash']}"
expected = hashlib.sha256(content.encode()).hexdigest()[:16]
except (TypeError, ValueError):
return False
return witness["commitment_hash"] == expected

Copilot uses AI. Check for mistakes.
def write_to_device(data: bytes, device_path: str):
"""Write raw witness image to a block device or file."""
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

write_to_device() uses data.ljust(FLOPPY_CAPACITY, ...), which does not prevent writing data larger than FLOPPY_CAPACITY (it returns the original data unchanged if already longer). Given the stated strict 1.44MB physical limit, add an explicit size check and raise before writing if len(data) > FLOPPY_CAPACITY.

Suggested change
"""Write raw witness image to a block device or file."""
"""Write raw witness image to a block device or file."""
if len(data) > FLOPPY_CAPACITY:
raise ValueError(
f"Data size {len(data)} exceeds floppy capacity of {FLOPPY_CAPACITY} bytes."
)

Copilot uses AI. Check for mistakes.
padded = data.ljust(FLOPPY_CAPACITY, b"\x00")
with open(device_path, "wb") as f:
f.write(padded)


def read_from_device(device_path: str) -> bytes:
"""Read witness data from a device or image file."""
with open(device_path, "rb") as f:
data = f.read()
# Strip trailing null padding
return data.rstrip(b"\x00")


Comment on lines +100 to +103
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

read_from_device() strips trailing \x00 padding via rstrip(), which can corrupt valid payloads because zlib-compressed data may legitimately end with null bytes. Since the format already includes a length field in the header, read the full image and rely on the header length (or parse the header first) instead of trimming bytes.

Suggested change
# Strip trailing null padding
return data.rstrip(b"\x00")
if len(data) < HEADER_SIZE:
raise ValueError("Data too short to contain a valid header.")
magic, length = struct.unpack(">BI", data[:HEADER_SIZE])
if magic != MAGIC_BYTE:
raise ValueError(f"Invalid magic byte: 0x{magic:02X} (expected 0xFD).")
total_length = HEADER_SIZE + length
if len(data) < total_length:
raise ValueError(
f"Data truncated: expected {total_length} bytes, got {len(data)}."
)
return data[:total_length]

Copilot uses AI. Check for mistakes.
def generate_qr_data(witnesses: list, max_epochs: int = 1) -> str:
"""Generate a compact base64 string suitable for QR encoding."""
import base64
subset = witnesses[:max_epochs]
raw = json.dumps(subset, separators=(",", ":")).encode("utf-8")
compressed = zlib.compress(raw, level=9)
return base64.b85encode(compressed).decode("ascii")
Comment on lines +104 to +110
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The generate_qr_data() docstring says it returns a "base64" string, but the implementation uses base64.b85encode() (Base85). This mismatch is likely to confuse users and downstream tooling; update the docstring (and any related docs) to match the actual encoding.

Copilot uses AI. Check for mistakes.


def cli():
"""CLI entry point: rustchain-witness write|read|verify"""
parser = argparse.ArgumentParser(
prog="rustchain-witness",
description="Floppy Witness Kit — Epoch Proofs on 1.44MB Media",
)
sub = parser.add_subparsers(dest="command")

# write
wp = sub.add_parser("write", help="Write epoch witnesses to device/file")
wp.add_argument("--epoch", type=int, required=True, help="Starting epoch number")
wp.add_argument("--count", type=int, default=1, help="Number of epochs")
wp.add_argument("--device", required=True, help="Target device or file path")

# read
rp = sub.add_parser("read", help="Read witnesses from device/file")
rp.add_argument("--device", required=True, help="Source device or file path")

# verify
vp = sub.add_parser("verify", help="Verify a witness file")
vp.add_argument("witness_file", help="Path to witness file")

# label
sub.add_parser("label", help="Print ASCII disk label")

args = parser.parse_args()

if args.command == "write":
witnesses = []
for i in range(args.count):
w = create_epoch_witness(
epoch_num=args.epoch + i,
timestamp=1711234567 + i * 600,
miner_lineup=[{"id": "miner_001", "arch": "x86_vintage"}],
settlement_hash=hashlib.sha256(f"epoch-{args.epoch+i}".encode()).hexdigest(),
ergo_anchor_txid=f"ergo_tx_{args.epoch+i:06d}",
commitment_hash=hashlib.sha256(f"commit-{args.epoch+i}".encode()).hexdigest(),
merkle_proof=[hashlib.sha256(f"proof-{args.epoch+i}".encode()).hexdigest()[:32]],
)
witnesses.append(w)
encoded = encode_witnesses(witnesses)
write_to_device(encoded, args.device)
print(f"Wrote {len(witnesses)} epoch witnesses to {args.device}")
print(f"Encoded size: {len(encoded)} bytes ({len(encoded)/FLOPPY_CAPACITY*100:.1f}% of floppy)")

elif args.command == "read":
raw = read_from_device(args.device)
witnesses = decode_witnesses(raw)
for w in witnesses:
print(f"Epoch {w['epoch']} | Timestamp {w['timestamp']} | Settlement {w['settlement_hash'][:16]}...")

elif args.command == "verify":
raw = read_from_device(args.witness_file)
witnesses = decode_witnesses(raw)
for w in witnesses:
ok = verify_witness(w)
status = "✅ VALID" if ok else "❌ INVALID"
print(f"Epoch {w['epoch']}: {status}")

elif args.command == "label":
print(DISK_LABEL)

else:
parser.print_help()


if __name__ == "__main__":
cli()
66 changes: 66 additions & 0 deletions witnesses/floppy/test_encoder.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,66 @@
# SPDX-License-Identifier: MIT
"""Unit tests for the Floppy Witness Kit encoder."""

import pytest
from encoder import (
create_epoch_witness, encode_witnesses, decode_witnesses,
generate_qr_data, FLOPPY_CAPACITY, HEADER_SIZE, MAGIC_BYTE,
)
Comment on lines +1 to +8
Copy link

Copilot AI Apr 6, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

These tests live under witnesses/floppy/, but the repo's pytest configuration and CI run only pytest tests/ (see pyproject.toml and .github/workflows/ci.yml). As a result, this suite won't run in CI, so regressions here won't be caught. Consider moving/duplicating the tests under tests/ (or updating pytest testpaths / CI to include witnesses/floppy).

Copilot uses AI. Check for mistakes.


def _sample_witness(epoch=1):
return create_epoch_witness(
epoch_num=epoch,
timestamp=1711234567,
miner_lineup=[{"id": "miner_001", "arch": "x86_vintage"}],
settlement_hash="a" * 64,
ergo_anchor_txid="ergo_tx_000001",
commitment_hash="b" * 64,
merkle_proof=["c" * 32],
)


class TestEncoding:
def test_roundtrip_single(self):
w = [_sample_witness()]
encoded = encode_witnesses(w)
decoded = decode_witnesses(encoded)
assert decoded[0]["epoch"] == 1

def test_roundtrip_many(self):
ws = [_sample_witness(i) for i in range(100)]
encoded = encode_witnesses(ws)
decoded = decode_witnesses(encoded)
assert len(decoded) == 100
assert decoded[99]["epoch"] == 99

def test_header_magic(self):
encoded = encode_witnesses([_sample_witness()])
assert encoded[0] == MAGIC_BYTE

def test_total_size_within_floppy(self):
ws = [_sample_witness(i) for i in range(14000)]
encoded = encode_witnesses(ws)
assert len(encoded) <= FLOPPY_CAPACITY

def test_header_included_in_size_check(self):
"""Verify the 5-byte header is accounted for in size limits."""
encoded = encode_witnesses([_sample_witness()])
assert len(encoded) >= HEADER_SIZE

def test_invalid_magic_raises(self):
bad_data = b"\xFF" + b"\x00" * 10
with pytest.raises(ValueError, match="Invalid magic byte"):
decode_witnesses(bad_data)

def test_too_short_raises(self):
with pytest.raises(ValueError, match="too short"):
decode_witnesses(b"\xFD\x00")


class TestQR:
def test_qr_output_is_string(self):
ws = [_sample_witness()]
qr = generate_qr_data(ws)
assert isinstance(qr, str)
assert len(qr) > 0
Loading