Releases: kacy/yoq

yoq v0.2.0

19 Apr 13:58


What's Changed

  • fix: audit and log remaining silent catch {} error sites by @kacy in #379
  • fix: raft replication match_index regression and stale truncation term by @kacy in #380
  • fix: add accept timeout to test upstream server to prevent mirror test hang by @kacy in #381
  • fix: update version test assertion to match v0.1.8 by @kacy in #382
  • test: deep cluster test coverage for state machine and gossip edge cases by @kacy in #383
  • fix: bug audit round 2 — snapshot ordering, dns race, buffer overflow, metadata validation by @kacy in #384
  • Release hardening for API, gossip, and snapshots by @kacy in #385
  • Add contract and simulation test targets by @kacy in #386
  • Add durability contract tests and harden HTTP fd handling by @kacy in #387
  • Expand raft restart and contract coverage by @kacy in #388
  • Add transport failure simulation coverage by @kacy in #389
  • Add randomized raft simulation seeds by @kacy in #390
  • Harden service recovery and privileged networking tests by @kacy in #391
  • Stabilize privileged networking and DNS tests by @kacy in #392
  • Fix privileged test coverage and deploy follow-ups by @kacy in #393
  • Phase 1: unify app spec and release workflows by @kacy in #394
  • Build shared apply engine for app releases by @kacy in #395
  • Add release orchestration state to local apply flow by @kacy in #396
  • Track pending app release transitions by @kacy in #397
  • Add app release progress and summary views by @kacy in #398
  • Broaden app control plane to all workloads by @kacy in #399
  • Refine app operator UX by @kacy in #400
  • Add rollout strategy controls and recovery by @kacy in #401
  • Refresh app platform documentation by @kacy in #402
  • Add operator-complete integration coverage by @kacy in #403
  • Stabilize operator smoke coverage by @kacy in #404
  • Improve network rollout reliability coverage by @kacy in #405
  • fix: recover fast suite regressions by @kacy in #406

Full Changelog: v0.1.8...v0.2.0

yoq v0.1.8

02 Apr 17:53


What's Changed

  • Stabilize raft leadership timing by @kacy in #316
  • Phase 0: add rollout flags and shadow reconciler sink by @kacy in #317
  • Phase 0: expose shadow rollout status by @kacy in #318
  • Phase 0: expose service rollout limits by @kacy in #319
  • Phase 0: add rollout observability bundle by @kacy in #320
  • Phase 0: centralize legacy service discovery writes by @kacy in #321
  • Phase 0: add source-aware shadow event auditing by @kacy in #322
  • Phase 0: add bridge fault-injection hooks by @kacy in #323
  • Phase 0: expose active bridge fault modes by @kacy in #324
  • Phase 0: finish service discovery guardrails by @kacy in #325
  • Phase 1: add durable service registry schema and store by @kacy in #326
  • Phase 1: add stable service VIP allocation by @kacy in #327
  • Phase 1: persist shadow service registrations by @kacy in #328
  • Phase 1: complete canonical service registry by @kacy in #329
  • Phase 2: make the service reconciler authoritative by @kacy in #330
  • Phase 2: add reconciler audit and resync loop by @kacy in #331
  • Wire node loss signals into service reconciler by @kacy in #332
  • Resync services when network components change by @kacy in #333
  • Quarantine stale service endpoints on bootstrap by @kacy in #334
  • Complete Phase 2 reconciler recovery and audit coverage by @kacy in #335
  • Complete Phase 3 stable VIP DNS and L4 load balancing by @kacy in #336
  • Complete Phase 4 health-gated service discovery by @kacy in #337
  • Phase 5: migration backfill and shadow cutover audit by @kacy in #338
  • Phase 5: add service rollout cutover readiness by @kacy in #339
  • Add Phase 6 HTTP proxy control-plane scaffolding by @kacy in #340
  • Materialize and expose L7 proxy routes by @kacy in #341
  • Add L7 proxy request resolution path by @kacy in #342
  • Build the L7 proxy execution path by @kacy in #343
  • Add L7 proxy observability by @kacy in #344
  • Add L7 proxy circuit breaking by @kacy in #345
  • Add L7 VIP steering control plane by @kacy in #346
  • Refresh L7 control plane after runtime mutations by @kacy in #347
  • Expose L7 steering readiness reasons by @kacy in #348
  • Resync steering on listener state changes by @kacy in #349
  • Harden L7 steering readiness and VIP cutover gating by @kacy in #350
  • Reconcile L7 steering against the real port-mapper state by @kacy in #351
  • Harden L7 control plane repair and steering visibility by @kacy in #352
  • Tighten L7 proxy identity and fallback visibility by @kacy in #353
  • Close out Phase 6 L7 proof gaps by @kacy in #354
  • Add service observability and rollout metrics by @kacy in #355
  • Promote HTTP routing to first-class config by @kacy in #356
  • Add multi-route HTTP service routing by @kacy in #357
  • Finalize canonical service discovery by @kacy in #358
  • Clean up finalized service discovery paths by @kacy in #359
  • Add first-class gRPC health checks and HTTP/2 routing by @kacy in #360
  • Productionize ACME certificate issuance by @kacy in #361
  • Define the operator golden path and recovery drills by @kacy in #362
  • Expand HTTP routing with rewrites, header matching, and weighted backends by @kacy in #363
  • Terminate TLS-routed HTTP/2 with ALPN by @kacy in #364
  • Harden TLS-routed HTTP/2 edge handling by @kacy in #365
  • Finish HTTP/2 stream routing cleanup by @kacy in #366
  • Add HTTP route method matching by @kacy in #367
  • Expose weighted route traffic in JSON status by @kacy in #368
  • Make ACME CLI email optional by @kacy in #369
  • Implement protocol-aware gRPC health checks by @kacy in #370
  • Add best-effort HTTP route mirroring by @kacy in #371
  • add per-route traffic policy configuration by @kacy in #372
  • refactor: improve proxy subsystem readability by @kacy in #373
  • fix: codebase-wide bug sweep by @kacy in #374
  • fix: replace silent catch {} with log warnings by @kacy in #375
  • test: add cluster consensus and gossip edge case coverage by @kacy in #376
  • fix: safety hardening for pointer casts, integer bounds, and magic constants by @kacy in #377
  • docs: update version strings and project stats across documentation by @kacy in #378

Full Changelog: v0.1.7...v0.1.8

yoq v0.1.7

21 Mar 23:44


What's Changed

  • fix: use connect-per-send to prevent stale pooled connections by @kacy in #315

Full Changelog: v0.1.6...v0.1.7

yoq v0.1.6

21 Mar 17:41


Full Changelog: v0.1.5...v0.1.6

yoq v0.1.5

21 Mar 14:28


Bug fixes

  • fix: add mutex to ConnectionPool to prevent concurrent HashMap corruption — the production panic (reached unreachable at posix.zig:269) was caused by two threads hitting the unsynchronized ConnectionPool HashMap concurrently. Added std.Thread.Mutex to protect all map access. Also fixed a pre-existing double-close bug on error paths and replaced the sendto/MSG_NOSIGNAL workaround with a proper SIGPIPE ignore via sigaction.
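The pattern behind this fix generalizes to any shared connection pool. A minimal Python sketch of the same shape (the actual Zig fix used std.Thread.Mutex; the class and method names here are illustrative, not yoq's API):

```python
import threading

class ConnectionPool:
    """Illustrative pool: a shared map guarded by a single mutex.

    Mirrors the shape of the yoq fix: every read and write of the
    underlying hashmap happens while holding the lock, so two threads
    can never mutate it concurrently.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._conns = {}  # addr -> connection object

    def checkout(self, addr, connect):
        with self._lock:
            conn = self._conns.pop(addr, None)
        # Dial outside the lock so slow connects don't serialize the pool.
        return conn if conn is not None else connect(addr)

    def checkin(self, addr, conn):
        with self._lock:
            if addr in self._conns:
                # A connection is already pooled for this addr; close the
                # extra one exactly once (avoids the double-close bug class).
                conn.close()
            else:
                self._conns[addr] = conn
```

The key design point is that the lock covers only map access, not the connect call itself, so contention stays low while the hashmap stays consistent.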

Full Changelog: v0.1.4...v0.1.5

yoq v0.1.4

20 Mar 17:05


yoq 0.1.4

release tag: v0.1.4

artifacts:

  • yoq-linux-amd64-v0.1.4.tar.gz
  • yoq-linux-arm64-v0.1.4.tar.gz
  • yoq-linux-riscv64-v0.1.4.tar.gz

install: curl -fsSL https://raw.githubusercontent.com/kacy/yoq/main/scripts/install.sh | sh

changelog

(no entries)

yoq v0.1.2

16 Mar 15:34


yoq 0.1.2

release tag: v0.1.2

artifacts:

  • yoq-linux-amd64-v0.1.2.tar.gz
  • yoq-linux-arm64-v0.1.2.tar.gz
  • yoq-linux-riscv64-v0.1.2.tar.gz

install: curl -fsSL https://raw.githubusercontent.com/kacy/yoq/main/scripts/install.sh | sh

changelog

CHANGELOG.md not found.

yoq v0.1.1

16 Mar 12:56


yoq 0.1.1

release tag: v0.1.1

artifacts:

  • linux amd64 tarball
  • sha256 checksum

changelog

(no entries)

yoq v0.1.0

15 Mar 13:11


yoq 0.1.0

release tag: v0.1.0

artifacts:

  • linux amd64 tarball
  • sha256 checksum

changelog

added

  • volumes: volume abstraction with local, host, NFS, and parallel filesystem drivers
  • GPU mesh (phases 1-3): GPU detection, passthrough, health monitoring, MIG management
  • gang scheduling: distributed training workload scheduling across GPU nodes
  • InfiniBand: RDMA detection with NCCL topology generation
  • S3: S3-compatible object storage gateway with HEAD/GET/PUT routes
  • training: training job orchestration — [training.*] manifest type, yoq train CLI
  • storage: phase 2 storage layer
  • gossip: SWIM failure detection protocol for scalable membership and health monitoring
  • cluster auth: HMAC-SHA256 authentication on all cluster messages (raft and gossip), derived from join token
  • agent API: role and region fields in agent API JSON responses
  • transport: connection pooling in cluster transport for TCP connection reuse
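The cluster-auth item above describes HMAC-SHA256 tags on all raft and gossip messages, keyed from the join token. A hedged Python sketch of that scheme (the derivation salt, tag placement, and function names are illustrative assumptions, not yoq's actual wire format):

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 digest size

def derive_cluster_key(join_token: bytes) -> bytes:
    # Illustrative derivation: hash the join token into a fixed-size key
    # so every node with the token computes the same HMAC key.
    return hashlib.sha256(b"cluster-auth:" + join_token).digest()

def sign_message(key: bytes, payload: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so peers can verify origin and integrity.
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_message(key: bytes, framed: bytes):
    # Returns the payload if the tag checks out, else None.
    payload, tag = framed[:-TAG_LEN], framed[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # compare_digest runs in constant time, avoiding timing side channels.
    return payload if hmac.compare_digest(tag, expected) else None
```

Because the tag covers the whole payload, a message that is replayed unmodified still verifies; real deployments typically also include a nonce or sequence number in the signed bytes.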

fixed

  • gossip: WireGuard peer cleanup on gossip death
  • volumes: service startup failure on volume creation error
  • runtime: integer overflow panics, overlay dir leak, negative int parsing
  • security: token comparison timing side-channel — constant-time comparison regardless of token length
  • state machine: SQL statement redacted from state machine error logs
  • network hardening: message size limits and connection validation in registry and cluster transport
  • robustness: cgroup resource limit verification and safe integer casts for edge cases
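The timing side-channel fix above hinges on making the comparison take the same time regardless of token length. One common way to do that, sketched in Python (a standard pattern, not necessarily yoq's exact code), is to hash both sides to fixed-length digests before a constant-time compare:

```python
import hashlib
import hmac

def tokens_equal(provided: str, expected: str) -> bool:
    # Hashing first normalizes both inputs to 32 bytes, so the compare
    # below takes constant time regardless of the attacker-supplied
    # token's length; no early exit leaks how many bytes matched.
    a = hashlib.sha256(provided.encode()).digest()
    b = hashlib.sha256(expected.encode()).digest()
    return hmac.compare_digest(a, b)
```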

removed

  • dead code: secureZero, batchMapUpdate, getMigMode, ClusterSettings, gossip.removeMember

added

  • container runtime: full namespace isolation (PID, NET, MNT, UTS, IPC, USER, CGROUP), cgroups v2 resource limits, overlayfs, seccomp filters, rootless containers
  • OCI images: pull/push to any OCI registry, content-addressable blob store, layer deduplication, image inspect and prune
  • networking: bridge + veth networking, eBPF DNS interception, load balancing, per-service metrics, network policy enforcement, WireGuard mesh
  • build engine: Dockerfile parser (all major directives), content-hash caching, multi-stage builds, TOML declarative build format
  • manifest: TOML manifest for multi-service apps, dependency ordering, health checks, readiness probes, rolling updates with automatic rollback
  • workers: one-shot tasks (e.g., database migrations) via run-worker
  • crons: scheduled recurring tasks with every interval syntax
  • selective startup: yoq up <service> starts individual services with their dependencies
  • dev mode: inotify file watching, hot restart, colored log multiplexing
  • clustering: raft consensus, SQLite state replication, bin-packing scheduler, agent join/drain, cross-node service discovery
  • secrets: encrypted storage, rotation, mounted as files or env vars
  • TLS: ACME auto-provisioning, TLS 1.3 handshake, SNI routing, auto-renewal
  • observability: eBPF per-service and per-pair metrics, PSI resource monitoring
  • network policies: eBPF-based allow/deny between services

fixed

  • raft transport: fixed authentication to work with TCP ephemeral ports (previously rejected valid messages due to port mismatch between ephemeral source port and peer's listening port)
  • cluster node initialization: fixed shared_key timing bug where authentication was checked before the key was set
  • cluster node cleanup: fixed double-free bug where raft peers were freed twice (once in raft.deinit, once in node.deinit)
  • cluster test harness: fixed simultaneous startup with proper full peer list configuration