Record: 0.4311 BPB - Complementary Training + Backoff N-gram Mixer + TTT#1033
Naazimsnh02 wants to merge 2 commits into openai:main from
Conversation
This submission violates Condition 2 as defined in #1017. The n-gram scoring method receives the realized target token as an argument and evaluates the n-gram probability exclusively at that token: neither the hash-based higher-order lookup nor the unigram fallback ever constructs a distribution over the full vocabulary. The mixture of neural and n-gram components therefore operates on scalars rather than on committed probability vectors.

No full distribution is defined before the realized token is observed, which means no codebook exists from which a message could actually be decompressed, and the quantity being reported is not a prequential code length. The entropy-adaptive mixing weight compounds this: it is a scalar functional of the neural distribution used to modulate the contribution of a component that was itself never normalized over the vocabulary, which is the exact pattern listed in Section VI of #1017. You can also check out #995 for more information on this.

The remaining machinery (causal cache updates, score-first TTT) appears sound, but Condition 2 is load-bearing and it is not satisfied here.
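To make the reviewer's distinction concrete, here is a minimal sketch (hypothetical function names, not the submission's code) contrasting a mixture that commits to a full-vocabulary distribution before observing the target with the scalar pattern described above:

```python
import numpy as np

def mix_compliant(neural_probs, ngram_probs, lam, target):
    """Commit to a full-vocabulary distribution BEFORE observing the target.

    neural_probs, ngram_probs: length-V probability vectors (each sums to 1).
    The convex mixture is itself a valid distribution, so -log2(p[target])
    is a decodable prequential code length.
    """
    mixed = (1.0 - lam) * neural_probs + lam * ngram_probs
    assert np.isclose(mixed.sum(), 1.0)  # a real codebook exists
    return -np.log2(mixed[target])

def mix_violating(neural_probs, ngram_score_at_target, lam, target):
    """Scalar mixing of the kind the review describes: the n-gram component
    is evaluated only at the realized target and is never normalized over
    the vocabulary, so the 'mixture' is not a probability distribution and
    the returned number is not a code length."""
    mixed_scalar = (1.0 - lam) * neural_probs[target] + lam * ngram_score_at_target
    return -np.log2(mixed_scalar)
```

In the compliant version a decoder could reconstruct the token from the committed distribution; in the violating version there is nothing to decode against.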
Summary
val_bpb: 0.4311 (3-seed mean, std < 0.0001) | ~15.9 MB | 8xH100 SXM | 600s train + ~562s eval
Key Innovation: Complementary Training
Standard approach: train model on uniform cross-entropy, bolt on n-gram cache at eval.
Our approach: during training, downweight tokens that a bigram predictor would get right (COMPLEMENT_ALPHA=0.5). The model learns to focus its 27M parameters on tokens that statistical caches can't predict — novel word choices, long-range dependencies, semantic surprises. This creates a natural division of labor between the neural model and the n-gram cache at eval time.
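A minimal sketch of the loss weighting this describes, assuming the bigram predictor exposes an argmax prediction per position (the `bigram_pred` interface is hypothetical; only `COMPLEMENT_ALPHA=0.5` is stated in the PR):

```python
import numpy as np

COMPLEMENT_ALPHA = 0.5  # value reported in the submission

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def complementary_loss(logits, targets, bigram_pred):
    """Per-token cross-entropy, downweighted where a bigram predictor
    already gets the token right.

    logits: (T, V) float array; targets, bigram_pred: (T,) int arrays.
    bigram_pred is the bigram table's argmax next-token guess (hypothetical
    helper; the submission's exact predictor is not shown in the PR text).
    """
    probs = softmax(logits)
    ce = -np.log(probs[np.arange(len(targets)), targets])
    # weight 1.0 where the bigram is wrong, 1 - alpha where it is right
    weight = 1.0 - COMPLEMENT_ALPHA * (bigram_pred == targets)
    return (weight * ce).mean()
```

Tokens the cache will handle anyway contribute half as much gradient, pushing capacity toward the statistically hard ones.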
3-Seed Results
Training stopped at 600s (~6976 steps). Full eval (diag + q_rt + q_sw + TTT + ngram) completes in ~562s ≈ 9.37 min.
Architecture
11L 512d GQA 8/4, MLP 3.0x, XSA-4, LeakyReLU(0.5)², BigramHash(2048), Int6 + LZMA.
VRL (Value Residual Learning), SmearGate, Partial RoPE (16 dims), U-Net skip connections, EMA + SWA.
Eval Stack
λ(H) = 0.20 + 0.55 · sigmoid(2 · (H − 3.0)), so the n-gram component gets 20–75% weight depending on model uncertainty H.
Compliance
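A minimal sketch of the Eval Stack's entropy-adaptive weight, assuming H is the entropy of the neural distribution measured in bits (the PR does not state the base):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ngram_weight(neural_probs):
    """Entropy-adaptive mixing weight from the Eval Stack formula:
        lam = 0.20 + 0.55 * sigmoid(2 * (H - 3.0))
    where H is the neural distribution's entropy (assumed bits here).
    lam ranges over (0.20, 0.75): a confident model keeps most of the
    weight; an uncertain one hands up to 75% to the n-gram cache.
    """
    p = np.clip(neural_probs, 1e-12, 1.0)
    H = -(p * np.log2(p)).sum()
    return 0.20 + 0.55 * sigmoid(2.0 * (H - 3.0))
```

A near-uniform distribution drives λ toward the 0.75 ceiling; a near-one-hot distribution pins it just above the 0.20 floor.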
Credits
This builds on community work:
Our contribution: validating Complementary Training as a first-class technique that meaningfully improves n-gram mixer performance by specializing the neural model on statistically hard tokens.