
Record: BackoffNgramMixer + Drift-Free TTT (3-seed mean val_bpb=0.6683)#779

Closed
deanbrr wants to merge 2 commits into openai:main from deanbrr:submission/backoff-ttt-0.6683

Conversation


@deanbrr deanbrr commented Mar 25, 2026

Record: BackoffNgramMixer + Drift-Free TTT (3-seed mean val_bpb=0.6683)

3-seed mean val_bpb: 0.6683 (std 0.0024), all artifacts under 16 MB, 8xH100 SXM, 600s training + 371s eval.

Results:
Seed 1337: 0.6663 BPB, 15.63 MB artifact
Seed 42: 0.6710 BPB, 15.78 MB artifact
Seed 2024: 0.6675 BPB, 15.48 MB artifact

Background:
I introduced the first n-gram eval cache in this competition (PR #659, val_bpb=1.0920, March 22 2026). That approach used a 5-gram cache with an oracle safety gate ruled illegal by organizers. This submission replaces the oracle gate with entropy-adaptive mixing and multi-order backoff, combined with a drift-free TTT configuration.

Technique:

  1. Multi-order n-gram backoff (orders 2-7). Try highest order first, cascade down on miss. Each order uses 4M hash buckets. Counts accumulated from already-scored tokens only.

  2. Entropy-adaptive alpha: alpha = 0.05 + 0.55 * sigmoid(2 * (H - 4.0)), where H is model entropy. High entropy trusts n-gram more, low entropy trusts the model. Depends only on the model's own output distribution, never on the true target. Mixed probability always applied, no oracle gate.

  3. Drift-free TTT: Q projections only (QTTT=1), eta=0.02, LR=3e-5, 1M token chunks, 1 epoch, no adaptive LR, no Polyak. Produces monotonic BPB improvement through all 60 chunks with no late-chunk reversal.
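A minimal sketch of items 1 and 2 (the backoff lookup and the entropy-adaptive mix). The formula and the bucket count come from the description above; the hashing scheme and table layout are assumptions, not the submission's code:

```python
import numpy as np

NUM_BUCKETS = 4_000_000  # 4M hash buckets per order, as described above

def entropy_adaptive_alpha(model_probs):
    """alpha = 0.05 + 0.55 * sigmoid(2 * (H - 4.0)), H = model entropy in bits.

    Depends only on the model's own output distribution, never on the target.
    """
    p = np.clip(model_probs, 1e-12, 1.0)
    H = -np.sum(p * np.log2(p))
    return 0.05 + 0.55 / (1.0 + np.exp(-2.0 * (H - 4.0)))

def backoff_probs(context, tables):
    """Try the highest order first; cascade down on a context miss."""
    for n in range(7, 1, -1):                       # orders 7, 6, ..., 2
        bucket = hash(tuple(context[-(n - 1):])) % NUM_BUCKETS
        counts = tables[n].get(bucket)              # tables[n]: bucket -> counts
        if counts is not None and counts.sum() > 0:
            return counts / counts.sum()
    return None                                     # miss at every order

def mix(model_probs, context, tables):
    ng = backoff_probs(context, tables)
    if ng is None:
        return model_probs                          # fall back to the model alone
    a = entropy_adaptive_alpha(model_probs)         # high entropy -> trust n-gram
    return (1.0 - a) * model_probs + a * ng
```

Since both inputs are valid distributions and alpha is in (0, 1), the mixed output always sums to 1 without any renormalization step.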

Ablation (seed 1337):
Base model (no mixer, no TTT): 1.1363
TTT only (no mixer): 1.1369
Mixer only (no TTT): 0.6712
Full system: 0.6663

The BackoffNgramMixer contributes 99% of the improvement. It is a pure eval-time technique requiring no architectural changes or retraining.

Compliance:
Score-first TTT: each chunk scored under inference_mode before training on it. Backward-looking n-gram: counts from already-scored tokens only. No oracle selection. No training data access at eval (naive int5 quantization, no GPTQ). Token count verified: ratio_scored = 1.000000.
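The score-first ordering can be sketched as a small loop invariant. This is a minimal sketch: `score_chunk`, `update_cache`, and `ttt_step` are placeholder callables, not the submission's API.

```python
def eval_with_ttt(chunks, score_chunk, update_cache, ttt_step):
    """Score-first TTT: nothing learns from a chunk until it has been scored."""
    total_bits = 0.0
    total_tokens = 0
    for chunk in chunks:
        total_bits += score_chunk(chunk)   # 1. score under inference mode
        total_tokens += len(chunk)
        update_cache(chunk)                # 2. n-gram counts: scored tokens only
        ttt_step(chunk)                    # 3. one TTT pass on the scored chunk
    return total_bits / total_tokens       # mean bits per scored token
```

The compliance claim above amounts to this ordering holding for every chunk: scoring strictly precedes both the cache update and the weight update.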

Credits:
PR #700 RoyiRa (base architecture, TTT framework), PR #606 gowtham0992 (int5 + Soft-Round QAT), PR #727 Asukabot0 (backoff concept, entropy-adaptive alpha formula), PR #461 Christopher-Lee-McClendon (TTT recipe), PR #518 sofiabod (LeakyReLU, cosine TTT). Dean Barr (original n-gram eval cache concept first in competition PR #659, drift-free TTT discovery, BackoffNgramMixer implementation).

@newjordan

awesome


deanbrr commented Mar 25, 2026

> awesome

Thank you. It's causing a big stir; some are calling it gaming.

@newjordan

It was definitely a gamer move, but I don't think it's gaming. This is my night: studying and testing...

deanbrr force-pushed the submission/backoff-ttt-0.6683 branch from 611612e to bd5e1b9 on March 26, 2026 at 00:35
travispchen added a commit to travispchen/parameter-golf that referenced this pull request Mar 26, 2026
…5466, 3-seed mean)

Adds order-adaptive entropy gating on top of PR openai#779's BackoffNgramMixer + Drift-Free TTT.
Per-order entropy centers replace single threshold: higher n-gram orders trusted at lower entropy.
3-seed validation: 0.5478, 0.5458, 0.5463 (mean 0.5466, std 0.0010).
All artifacts strictly under 16,000,000 bytes.

Co-Authored-By: Travis Chen <travispchen@gmail.com>
Asukabot0 added a commit to Asukabot0/parameter-golf that referenced this pull request Mar 26, 2026
All GPUs iterate all chunks (4M tokens each), share full 32M cache.
Replaces per-GPU partition that limited cache to 4M tokens/GPU.

Changes (eval_val_sliding only, no training changes):
- Add _bulk_cache_update: vectorized np.bincount (replaces np.add.at)
- Chunk-level iteration: windows interleaved rank::world_size per chunk
- Delete pre-fill loop (chunk-sync makes it unnecessary)
- Trim to orders 2-7 (was 2-28), per-order entropy centers for 2-7
- Upfront timing go/no-go: abort n-gram if est > 550s after 2 chunks
- Fix double-counting bug: all_reduce once at end, not per-chunk
- Default NGRAM_ALPHA=0.40, NGRAM_ENT_BASE=0.05, NGRAM_ENT_RANGE=0.55

Expected: 0.97 → ~0.30 BPB on 8xH100 (matching PR openai#779/809).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
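The `_bulk_cache_update` change can be sketched as follows. This is a hypothetical reconstruction: the real function's signature is not shown in this thread, and `bucket_ids` is assumed to be already reduced modulo the table size.

```python
import numpy as np

def bulk_cache_update(cache, bucket_ids):
    """Add 1 to cache[b] for each b in bucket_ids, duplicates included."""
    # np.add.at(cache, bucket_ids, 1) is duplicate-safe but slow for large
    # inputs; np.bincount performs the same accumulation in vectorized form.
    cache += np.bincount(bucket_ids, minlength=cache.size).astype(cache.dtype)
    return cache
```

A plain `cache[bucket_ids] += 1` would silently drop repeated indices, which is exactly why the slower `np.add.at` was there in the first place; `np.bincount` keeps the duplicate-counting semantics while staying vectorized.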
AnirudhRahul pushed a commit to AnirudhRahul/parameter-golf that referenced this pull request Mar 26, 2026
… + Backoff TTT

Replaces the heuristic entropy-adaptive alpha with a learned 7-expert gate
(Linear 512→7) that routes between the neural model and n-gram orders 2-7.
The gate is trained end-to-end during the main training loop using a frozen
n-gram oracle pre-computed from training data (counted within wallclock).

3-seed results (8xH100 SXM, 600s):
  seed 1337: val_bpb=0.1661 (15.74 MB)
  seed 42:   val_bpb=0.1663 (15.76 MB)
  seed 2024: val_bpb=0.1666 (15.25 MB)
  mean:      val_bpb=0.1663 (std=0.0003)

Based on PR openai#779 (deanbrr) BackoffNgramMixer + DriftFreeTTT architecture.

Made-with: Cursor
AnirudhRahul pushed a commit to AnirudhRahul/parameter-golf that referenced this pull request Mar 26, 2026
… + Backoff TTT

Replaces the heuristic entropy-adaptive alpha with a learned 7-expert gate
(Linear 512→7) that routes between the neural model and n-gram orders 2-7.
The gate is trained end-to-end during the main training loop using a frozen
n-gram oracle pre-computed from training data (counted within wallclock).

3-seed results (8xH100 SXM, 600s):
  seed 1337: val_bpb=0.1661 (15.74 MB)
  seed 42:   val_bpb=0.1663 (15.76 MB)
  seed 2024: val_bpb=0.1666 (15.25 MB)
  mean:      val_bpb=0.1663 (std=0.0003)

Cleanup: removed dead code (adaptive LR, Polyak averaging, scalar mixer
path, unused function params). Added detailed order-of-operations to
README proving legality of the training and evaluation procedure.

Based on PR openai#779 (deanbrr) BackoffNgramMixer + DriftFreeTTT architecture.

Made-with: Cursor
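The gate described in the commit can be sketched in a few lines. This is hypothetical: the hidden size 512 and the seven experts (model distribution plus n-gram orders 2-7) come from the message; everything else is an assumption.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def gate_mix(h, W, b, expert_probs):
    """h: [512] hidden state; W: [512, 7], b: [7] (the Linear 512->7);
    expert_probs: [7, V] rows = (model dist, n-gram orders 2..7).
    Returns a valid distribution over the vocabulary V."""
    w = softmax(h @ W + b)       # [7] mixture weights: nonnegative, sum to 1
    return w @ expert_probs      # convex combination of valid distributions
```

Because the gate outputs a softmax over experts and each expert row is itself a distribution, the mixture is always properly normalized by construction.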
@MatoTeziTanka

Great submission — the entropy-adaptive alpha design is elegant, and the drift-free TTT configuration solving the late-chunk reversal problem is a solid engineering contribution. The ablation data is also really valuable for the community to understand what's driving improvements in this space.

One small thing I noticed while reviewing the eval loop: the cache update at the end of each chunk passes val_tokens[chunk_start_tok:chunk_end_tok + 1], where chunk_end_tok is already (ci + 1) * ttt_chunk_tokens. The + 1 means the n-gram cache receives one token beyond the chunk boundary — the first token of the next unscored chunk. For the highest-position n-grams, that extra token ends up as a target in the count tables before it's been scored.

The practical impact is almost certainly negligible on the final BPB, and it's likely there to handle the boundary condition for forming complete n-grams at the chunk edge. Just flagging it since the compliance section states "counts from already-scored tokens only" — might be worth a quick check to confirm it's intentional.
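A tiny reproduction of the slice in question (variable names follow the comment above; the token values are made up):

```python
# Toy setup: 10 validation tokens, chunks of 4, first chunk (ci = 0).
val_tokens = list(range(10))
ttt_chunk_tokens = 4
ci = 0
chunk_start_tok = ci * ttt_chunk_tokens        # 0
chunk_end_tok = (ci + 1) * ttt_chunk_tokens    # 4

leaky = val_tokens[chunk_start_tok:chunk_end_tok + 1]  # one token too many
fixed = val_tokens[chunk_start_tok:chunk_end_tok]      # exactly one chunk

# The extra element is the first token of the NEXT, not-yet-scored chunk.
assert leaky[-1] == val_tokens[chunk_end_tok]
assert len(leaky) == ttt_chunk_tokens + 1
assert len(fixed) == ttt_chunk_tokens
```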

Nice work overall.


deanbrr commented Mar 26, 2026

@MatoTeziTanka Good catch, thank you. The +1 was inherited from the base code and leaked one unscored token per chunk boundary into the n-gram counts. Fixed in c58742a.

@MatoTeziTanka

@deanbrr Quick fix — nice. Glad it was useful. Clean submission overall.

sofiabod added a commit to sofiabod/parameter-golf that referenced this pull request Mar 26, 2026
…ual hash tables, per-window score-first, entropy-adaptive alpha, tc>0 check)
@valerio-oai
Contributor

Thanks for your submission! Unfortunately, it's disallowed due to the use of hashed n-gram caches, which do not correctly renormalize or reweight the LM's token distribution, and which look ahead to the target token when mixing probabilities, thereby leaking eval tokens. Please refer to the long discussion about this under the issues tab for more details, and please submit more runs in the future!


deanbrr commented Mar 28, 2026

Revisionist history, in my opinion, and I don't think you are correct. That is not how entropy-estimated n-gram mixing works, and you are penalizing the fact that the dataset can be pseudo-memorized.

You encouraged this and now have been influenced by the gang.

I was awarded the first ML patent in the US and while I respect different viewpoints, I disagree with this.

I think you would be better off looking at token train/test overlap.

My humble opinion

@valerio-oai
Copy link
Copy Markdown
Contributor

I don't think the EvalCache concept as a whole is wrong, and I think the concept is interesting, don't get me wrong, but I am of the opinion that if done, it needs to act as a reweighting of the probability distribution.

Currently, this implementation only reweights the correct token by hashing the (n-1)-grams and n-grams: consider the simplest possible case, with a single hash bucket. In the limit, the n-gram probability of the correct token goes to 1 (since everything hashes to the same bucket). That boosts the probability of the correct token arbitrarily, regardless of the dataset, giving arbitrarily low BPB values; it doesn't actually compress the data.
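The single-bucket limit is easy to demonstrate (a deliberately pathological toy, not the submission's configuration):

```python
NUM_BUCKETS = 1     # pathological: every hash collides into bucket 0
count_ngram = {}    # bucket of hash(context + target) -> count
count_ctx = {}      # bucket of hash(context)          -> count

def ngram_prob(context, target):
    b_ng = hash(tuple(context) + (target,)) % NUM_BUCKETS   # always 0
    b_ctx = hash(tuple(context)) % NUM_BUCKETS              # always 0
    return count_ngram.get(b_ng, 0) / max(count_ctx.get(b_ctx, 0), 1)

# Feed any token stream at all: both tables increment the same bucket in
# lockstep, so the ratio is 1 for every query after the first update.
stream = [3, 1, 4, 1, 5, 9, 2, 6]
for i in range(2, len(stream)):
    count_ngram[0] = count_ngram.get(0, 0) + 1
    count_ctx[0] = count_ctx.get(0, 0) + 1

# Now every "predicted" token, even one never seen, gets probability 1.0,
# i.e. -log2(p) = 0 bits: arbitrarily low BPB with no actual compression.
```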

Reweighting all tokens instead of just the ground-truth one, and making sure the resulting distribution was a valid probability distribution might be legal depending on the implementation (as well as not hashing). Sorry if closing the PR came across as harsh.
