
Non-record: 11L GEPA + 25k Steps + Pure Int6 + Legal TTT (val_bpb=1.0944) - unlimited compute category #644

Open
Christopher-Lee-McClendon wants to merge 1 commit into openai:main from Christopher-Lee-McClendon:submission/11L-gepa-25k-pure-int6-legal-ttt

Conversation

@Christopher-Lee-McClendon

Summary

  • val_bpb = 1.0944 — new personal best with legal score-first TTT
  • 11L GEPA architecture (27M params) trained for 25,000 steps (12,000 peak-LR + 13,000 warmdown)
  • Pure int6 per-row quantization with 15-candidate GPTQ-lite + zstd-22 compression (see the sketch after this list)
  • Legal score-first TTT (SGD, momentum 0.9, 10 epochs): −0.014 bpb gain (a second sketch follows the Key Result table)
  • Artifact: 13.83 MiB (14,496,936 bytes) — smallest in our series
  • Includes model artifact (final_model.int6.ptz) for reproducibility
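
For concreteness, here is a minimal sketch of what the per-row int6 scheme could look like: a symmetric per-row scale chosen from a small grid of clipping candidates (standing in for the 15-candidate GPTQ-lite search, whose column-wise error compensation is omitted), followed by zstd level-22 compression. All names, the clipping grid, and the [-31, 31] range are illustrative assumptions, not the submission's actual code.

```python
import io

import torch
import zstandard as zstd


def quantize_int6_per_row(w: torch.Tensor, n_candidates: int = 15):
    """Symmetric per-row int6 quantization with a small scale search.

    For each row, try n_candidates clipping factors and keep the scale
    that minimizes squared reconstruction error. This stands in for the
    15-candidate GPTQ-lite search; the real pass also compensates
    quantization error across columns, which is omitted here.
    """
    qmax = 31  # illustrative symmetric int6 range: [-31, 31]
    absmax = w.abs().amax(dim=1, keepdim=True).clamp_min(1e-8)
    best_q = best_scale = best_err = None
    for clip in torch.linspace(0.80, 1.00, n_candidates):
        scale = absmax * clip / qmax
        q = (w / scale).round().clamp(-qmax, qmax)
        err = ((q * scale - w) ** 2).sum(dim=1, keepdim=True)
        if best_err is None:
            best_q, best_scale, best_err = q, scale, err
        else:
            better = err < best_err
            best_q = torch.where(better, q, best_q)
            best_scale = torch.where(better, scale, best_scale)
            best_err = torch.minimum(err, best_err)
    return best_q.to(torch.int8), best_scale  # int6 codes stored in int8


def pack_and_compress(q: torch.Tensor, scale: torch.Tensor) -> bytes:
    """Serialize quantized weights + scales, then compress at zstd level 22."""
    buf = io.BytesIO()
    torch.save({"q": q, "scale": scale.half()}, buf)
    return zstd.ZstdCompressor(level=22).compress(buf.getvalue())
```

Storing six-bit codes in int8 wastes two bits per weight before compression, but zstd-22 recovers much of that redundancy; a tighter bit-packing step would be the obvious refinement.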

Key Result

| Metric | Value |
| --- | --- |
| Float base (25k steps) | 1.1088 |
| After legal TTT | 1.0944 |
| Eval time | 2,074 s on 4×A100-40GB |
| Training wallclock | 12,509 s (~3 h 28 m) |
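
The "legal score-first" ordering is the interesting part of the TTT. A hedged sketch of one way to implement it: every chunk contributes to the reported bpb before the optimizer ever updates on it, so no chunk is scored by weights that have already seen it. Only SGD with momentum 0.9 and 10 inner epochs comes from the submission; the chunking, learning rate, loop structure, and the assumption that the model returns raw logits are all illustrative.

```python
import math

import torch
import torch.nn.functional as F


def score_first_ttt(model, chunks, lr=1e-4, epochs=10, momentum=0.9):
    """Test-time training where each chunk is scored before any update on it.

    chunks: ordered list of (input_ids, target_ids) tensors from the
    validation stream (byte-level targets assumed). The loss used for the
    reported score is computed before the optimizer sees that chunk, so the
    evaluation stays "legal" (no train-then-score leakage). lr is
    illustrative; SGD + momentum 0.9 + 10 epochs are from the submission.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    total_nll, total_bytes = 0.0, 0

    for inputs, targets in chunks:
        # 1) Score first: measure loss with the current weights, no grad.
        model.eval()
        with torch.no_grad():
            logits = model(inputs)
            nll = F.cross_entropy(logits.view(-1, logits.size(-1)),
                                  targets.view(-1), reduction="sum")
        total_nll += nll.item()
        total_bytes += targets.numel()

        # 2) Then adapt: a few SGD epochs on the chunk just scored.
        model.train()
        for _ in range(epochs):
            opt.zero_grad()
            logits = model(inputs)
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                                   targets.view(-1))
            loss.backward()
            opt.step()

    # Convert summed nats to bits per byte.
    return total_nll / (math.log(2) * total_bytes)
```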

Scaling Law (5 data points, warmdown is the dominant lever)

| Steps | Peak-LR | Warmdown | Float Base | TTT bpb | Artifact |
| --- | --- | --- | --- | --- | --- |
| 9,000 | 5,000 | 4,000 | 1.135 | 1.116 | 14.94 MB |
| 12,000 | 7,000 | 5,000 | 1.127 | 1.108 | 14.79 MB |
| 15,000 | 9,000 | 6,000 | 1.122 | 1.104 | 14.52 MB |
| 20,000 | 12,000 | 8,000 | 1.115 | 1.098 | 14.22 MB |
| 25,000 | 12,000 | 13,000 | 1.109 | 1.094 | 13.75 MB |

All three metrics improve monotonically with step count: float base bpb, post-TTT bpb, and artifact size.

Key Insight: Warmdown Acceleration

The bpb improvement accelerates over the final warmdown steps even as the cosine LR schedule decelerates (a schedule sketch follows this list):

  • Steps 20k→21k: −7.4 ×10⁻⁴ bpb/kstep
  • Steps 22k→23k: −12.0 ×10⁻⁴ bpb/kstep
  • Steps 22k→25k: −14.0 ×10⁻⁴ bpb/kstep

This suggests fine-grained optimization at low LR is disproportionately effective.
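
For reference, a minimal sketch of the schedule implied above: hold the peak LR for the first 12,000 steps, then decay it over the 13,000 warmdown steps along a cosine curve. The peak value and floor are illustrative assumptions; only the 12k/13k split and the cosine shape come from the text.

```python
import math


def lr_at_step(step, peak_lr=3e-3, peak_steps=12_000,
               warmdown_steps=13_000, min_lr=0.0):
    """Hold peak LR, then cosine-warmdown to min_lr.

    peak_lr and min_lr are illustrative; the 12k peak-LR / 13k warmdown
    split is from the submission. (Any warmup phase is omitted.)
    """
    if step < peak_steps:
        return peak_lr
    # Cosine decay over the warmdown phase (steps 12,000..25,000).
    t = (step - peak_steps) / warmdown_steps  # progress in [0, 1]
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))
```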

Non-record unlimited-compute submission (4×A100-40GB, ~3.5 hours).

Prior Submissions in This Series

Acknowledgments

Builds on techniques from: @signalrush (PR #414, GPTQ-lite/EMA), @jfprincz (PRs #287/#315, XSA/Partial RoPE/LN Scale), @unnir (PR #265, Efficient XSA), @raahilshah (PR #162, SmearGate/BigramHash), @aruniyer (PR #86, Int6 QAT), @samacqua (LoRA TTT), @abaybektursun (PR #549, LeakyReLU²), and the OpenAI baseline.

