Verified Erdős-Straus conjecture to 10^14, pushing to 10^17 on free-tier cloud compute. Zero budget.
The Erdős-Straus conjecture (1948) asks: for every integer n ≥ 2, can 4/n be written as a sum of three unit fractions?
4/n = 1/x + 1/y + 1/z
This is Egyptian fraction decomposition — the same math documented in the Rhind Papyrus (1650 BCE). Nobody has proven it for all n in 78 years.
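For intuition, here is a brute-force search that finds one decomposition for a small n. This is purely illustrative, nothing like the sieve, and far too slow for large n; the bounds on x and y follow from each term being the largest remaining one.

```python
from fractions import Fraction

def decompose(n):
    """Brute-force search for 4/n = 1/x + 1/y + 1/z with x <= y <= z.
    Illustrative only; hopeless for large n."""
    target = Fraction(4, n)
    # 1/x is the largest of three terms, so 4/(3n) <= 1/x < 4/n.
    for x in range(n // 4 + 1, (3 * n) // 4 + 1):
        r1 = target - Fraction(1, x)   # what 1/y + 1/z must sum to
        # 1/y is the larger remaining term, so r1/2 <= 1/y < r1.
        for y in range(max(x, int(1 / r1) + 1), int(2 / r1) + 1):
            r2 = r1 - Fraction(1, y)   # what 1/z must equal
            if r2 > 0 and r2.numerator == 1:
                return (x, y, r2.denominator)
    return None

print(decompose(5))  # → (2, 4, 20): 4/5 = 1/2 + 1/4 + 1/20
```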
Used a modular sieve based on Salez (2014) to verify the conjecture to 10^14 (100 trillion). Zero counterexamples among primes. The sieve left 515,710 composite survivors, every one of them n ≡ 1 (mod 24) — 100%, not a tendency.
Swett (1999) verified to 10^14 using one modular equation on dedicated hardware, ~150 hours, estimated $2,500–$5,000. We matched that result at zero budget on free-tier Kaggle.
- All survivors are n ≡ 1 (mod 24). The sieve eliminates every other residue class completely.
- Zero-survivor batches occur only for k=0 through k=153. From k=154 onward, every batch has survivors.
- Survivor count decreases with k. The sieve gets more effective as n grows.
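The residue-class eliminations come from modular identities: each one hands an explicit decomposition to an entire residue class. As a hedged sketch, here is one classical identity covering n ≡ 2 (mod 3); it is not necessarily among the exact filters this sieve uses, but it shows the mechanism by which every class except n ≡ 1 (mod 24) gets killed.

```python
from fractions import Fraction

def decompose_2_mod_3(n):
    """Classical identity: for n = 3k + 2,
    4/n = 1/n + 1/(k+1) + 1/(n*(k+1)).
    One identity of this shape per covered residue class is how a
    modular sieve eliminates whole classes at once."""
    assert n % 3 == 2
    k = (n - 2) // 3
    return (n, k + 1, n * (k + 1))

x, y, z = decompose_2_mod_3(17)        # 17 ≡ 2 (mod 3)
assert Fraction(1, x) + Fraction(1, y) + Fraction(1, z) == Fraction(4, 17)
print((x, y, z))  # → (17, 6, 102)
```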
We built an optimized sieve with three improvements over the 10^14 run:
- Sierpiński filter hierarchy: Sort 148,923 prime filters by kill rate. The first 100 reduce 2.1M candidates to 5. After 500, zero survive. Keep top 5,000. 148x fewer filter checks.
- Miller-Rabin primality: Replaces trial division (~316M iterations at 10^17) with 7 deterministic witnesses. ~45Mx faster per test.
- Skip dead batches: k < 154 proven empty at 10^14.
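A minimal sketch of a deterministic Miller-Rabin test for this range. The seven-witness set below ({2, 325, 9375, 28178, 450775, 9780504, 1795265022}) is a published base set known to be deterministic for all n < 3.317×10^24, comfortably covering 10^17; whether it is the exact witness set the solver uses is an assumption.

```python
def is_prime(n):
    """Deterministic Miller-Rabin for n < 3.317e24 using a published
    seven-base witness set. Replaces trial division's ~sqrt(n)
    iterations with at most 7 witness rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 325, 9375, 28178, 450775, 9780504, 1795265022):
        a %= n
        if a == 0:          # base is a multiple of n; skip it
            continue
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False    # a witnessed compositeness
    return True
```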
The fleet uses a decentralized architecture inspired by physarum (slime mold) foraging:
- No fixed ranges. Every node can reach every batch. One universal notebook, any hardware.
- Interpolation-weighted priority. Power law fit to the 10^14 data: survivors = 5424 × k^(-0.51). Low-k batches (73% of all survivors) are checked first. If a counterexample exists, we find it ~3x faster than random search.
- Auto-detects hardware. CuPy CUDA for GPU (36 batch/s on T4), JAX for TPU, multiprocessing for CPU.
- Cross-platform. Same notebook runs on Kaggle and Colab. Colab nodes share a Drive trail for coordination. Kaggle nodes use randomized seeds for natural partitioning.
- 12x speedup over the original static-range architecture (46.6 batch/s combined vs 4 batch/s effective).
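The interpolation-weighted scheduling can be sketched as follows. Only the fitted power law and the k < 154 cutoff come from the text above; the function names, batch range, and bookkeeping are illustrative assumptions.

```python
def batch_priority(k):
    """Expected survivor count for batch k, from the power law fitted
    to the 10^14 data: survivors ≈ 5424 * k^(-0.51)."""
    return 5424 * k ** -0.51

def schedule(k_min=154, k_max=1000):
    """Return batch indices in priority order, highest expected
    survivor count first. Batches with k < 154 are known empty and
    skipped; k_max is an illustrative horizon."""
    return sorted(range(k_min, k_max), key=batch_priority, reverse=True)

order = schedule()
print(order[:5])  # → [154, 155, 156, 157, 158]: low k first
```

Because the fitted law is monotonically decreasing in k, the priority order is simply ascending k; the weighting matters when nodes claim batches concurrently and gaps open up in the verified range.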
Fleet: Kaggle CPU ×4 + Colab GPU (T4) + Colab CPU. All free tier. Zero budget.
Public: https://github.com/Commencethescourge/erdos-straus-solver
Dataset (sieve data): https://www.kaggle.com/datasets/commencethescourge/erdos-straus-sieve-data
Before you run: The solver is a single Python file (~300 lines) with no dependencies beyond the standard library. No pip installs required. Read the code before executing — it's short enough to audit in a few minutes. Resource usage: each worker loads ~800 MB of filter data into RAM. One worker is fine on most machines. The sieve will use 100% of allocated CPU cores while running.
If you'd rather not run code locally, you can fork the Kaggle notebook and run it in Kaggle's sandboxed environment — no local execution needed.
The same sieve architecture drives the Guinea Pig Trench portal — 40 browser games, 129 beats, raymarched worlds, lenticular 3D, P2P multiplayer. The prime sieve game inside the portal lets you fly through the modular filter gates as a number.
Guinea Pig Trench LLC