Non-record: 2026-03-22_SuperchunkBPE_SP1024 #506
Open
eshansinghal14 wants to merge 1 commit into openai:main from
# Non-Record Submission: Superchunk BPE (vocab 1024)
Short run checking a superchunk-trained Rust BPE (`tokenizer.pkl` + re-exported `fineweb10B_superchunk1024` shards) against the stock 1024-vocab training recipe on 8×H100, 600 s wall clock, non-record / 16 MB track.

## What superchunking is
Standard GPT-style BPE (here: rustbpe + tiktoken) learns merges inside regex-defined chunks (words, numbers, etc.). Superchunk BPE adds a second phase: it builds sequences in which each chunk is represented as a single phase-1 token, then learns cross-chunk merges; those merges are interleaved by frequency with the phase-1 merges into one merge table (a minimal sketch follows). At inference there is no separate "superchunk mode": behavior is whatever the combined table encodes.
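To make the two phases concrete, here is a toy Python sketch of the training flow. The function names, the stand-in split regex, and the greedy pair-counting loop are illustrative assumptions, not the rustbpe implementation:

```python
import regex as re
from collections import Counter

# Stand-in for the real GPT-style split pattern (assumption; the actual
# regex also handles whitespace and contractions).
GPT_SPLIT = re.compile(r"\p{L}+|\p{N}+|[^\s\p{L}\p{N}]+")

def learn_merges(seq_counts, num_merges):
    """Greedy BPE over counted token sequences.

    seq_counts: Counter mapping tuple-of-tokens -> frequency.
    Returns [(pair, frequency_when_merged), ...] in merge order.
    """
    merges = []
    for _ in range(num_merges):
        # Count all adjacent pairs across the counted sequences.
        pairs = Counter()
        for seq, n in seq_counts.items():
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += n
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        merges.append(((a, b), freq))
        # Apply the winning merge to every sequence.
        new_counts = Counter()
        for seq, n in seq_counts.items():
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                    out.append((a, b)); i += 2
                else:
                    out.append(seq[i]); i += 1
            new_counts[tuple(out)] += n
        seq_counts = new_counts
    return merges

def train_superchunk_bpe(text, n_phase1, n_phase2):
    chunks = GPT_SPLIT.findall(text)
    # Phase 1: ordinary BPE restricted to the inside of each regex chunk.
    phase1 = learn_merges(Counter(tuple(c.encode()) for c in chunks), n_phase1)
    # Phase 2: each chunk collapses to one atomic token, so merges learned
    # here cross chunk boundaries (toy: the chunk string stands in for its
    # phase-1 token id).
    phase2 = learn_merges(Counter({tuple(chunks): 1}), n_phase2)
    # Interleave both lists by the frequency at which each merge fired,
    # yielding the single combined table used for ordinary encoding.
    combined = sorted(phase1 + phase2, key=lambda m: -m[1])
    return [pair for pair, _ in combined]
```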
## Data and setup
- `fineweb10B_superchunk1024` shards exported by `export_fineweb_custom_bins.py` from `docs_selected.jsonl`.
- `train_gpt.py` run with `TOKENIZER_PATH` pointing at the Rust BPE directory (`tokenizer_kind=rust_bpe` in the log); a loading sketch follows this list.
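For reference, a minimal sketch of how a merge table trained this way can be wrapped for inference with `tiktoken`. The `tokenizer.pkl` layout (`pattern` and `mergeable_ranks` keys) is an assumption for illustration, not the repo's actual serialization:

```python
import os
import pickle

import tiktoken

# Assumed layout: TOKENIZER_PATH is a directory containing tokenizer.pkl
# with the split regex and the byte-sequence -> rank merge table.
tok_dir = os.environ["TOKENIZER_PATH"]
with open(os.path.join(tok_dir, "tokenizer.pkl"), "rb") as f:
    blob = pickle.load(f)

enc = tiktoken.Encoding(
    name="superchunk_bpe_1024",
    pat_str=blob["pattern"],                  # chunking regex (assumed key)
    mergeable_ranks=blob["mergeable_ranks"],  # dict[bytes, int] (assumed key)
    special_tokens={},
)

ids = enc.encode("hello world")
assert enc.decode(ids) == "hello world"
```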
## Results (from `train.log` tail + `submission.json`)

- The run hit the wall-clock cap (`stopping_early: wallclock_cap`).
- Reported fields: `val_bpb` and `val_loss` at the last eval (step 9131), `bytes_total` (int8 + zlib + code), and `bytes_model_int8_zlib`.

Validation checkpoints in the log show `val_bpb` trending down through the run (e.g. 1.3844 @ step 1000 → 1.2294 @ step 9131).
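For orientation, here is one plausible way the two headline numbers relate to the raw quantities; the bits-per-byte conversion is the standard nats-to-bits rescaling, while the int8 + zlib sizing is a hedged sketch (the actual submission tooling may quantize and pack differently):

```python
import math
import zlib

import numpy as np

def bits_per_byte(val_loss_nats, n_tokens, n_bytes):
    """Convert mean cross-entropy (nats/token) to bits/byte.

    Standard rescaling: nats -> bits via ln(2), then per-token -> per-byte
    via the token/byte ratio of the eval set (assumed convention).
    """
    return val_loss_nats / math.log(2) * (n_tokens / n_bytes)

def int8_zlib_bytes(weights):
    """Rough `bytes_model_int8_zlib`-style size: symmetric int8 + zlib."""
    total = 0
    for w in weights:  # iterable of float numpy arrays
        scale = float(np.abs(w).max()) / 127.0 or 1.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        total += len(zlib.compress(q.tobytes(), level=9))
    return total
```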
## Included files

- `train_gpy.py`: training script snapshot for this run (filename as stored).
- `train.log`: full stdout (includes pasted source + per-step metrics).
- `submission.json`: leaderboard-style metadata for this entry.