
Combo: Fourier PE padding fix + warmup 15 epochs #1685

Closed
tcapelle wants to merge 2 commits into noam from noam-r23/combo-fourier-pe-warmup15

Conversation

@tcapelle (Contributor) commented Mar 20, 2026

Hypothesis

Two near-misses from round 22 that address different pipeline aspects:

  1. Fourier PE fix (val_loss=0.8380): fixes the coordinate normalization to exclude padding; improved ood_cond by 0.29.
  2. Warmup 15 epochs (val_loss=0.8402): extends warmup from 10 to 15 epochs; improved ood_cond by 4.1%.

Both changes individually showed strong ood_cond improvements. The Fourier PE fix changes the quality of the feature encoding, while the longer warmup changes the optimization trajectory. With cleaner Fourier PE features (no padding contamination), the extended warmup may settle into a better basin because the loss landscape is smoother. Since both improvements (+0.29 and +4.1%, respectively) target the same split, the combination could yield an ood_cond gain large enough to pull overall val_loss below baseline.

Instructions

Apply BOTH changes to train.py:

Fix 1: Fourier PE padding — TRAINING loop (lines 663-666)
Replace:

xy_min = raw_xy.amin(dim=1, keepdim=True)
xy_max = raw_xy.amax(dim=1, keepdim=True)

With:

xy_for_range = raw_xy.clone()
xy_for_range[~mask.unsqueeze(-1).expand_as(raw_xy)] = float("inf")   # padding can never win the min
xy_min = xy_for_range.amin(dim=1, keepdim=True)
xy_for_range[~mask.unsqueeze(-1).expand_as(raw_xy)] = float("-inf")  # padding can never win the max
xy_max = xy_for_range.amax(dim=1, keepdim=True)

Fix 1: Fourier PE padding — VALIDATION loop (lines 897-899)
Apply the same xy_min/xy_max masking fix.
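
For reference, a minimal standalone sketch of what the masked range computation does. Names and shapes here are assumptions (raw_xy of shape (B, N, 2), mask of shape (B, N) with True at real points); without the masking, zero-padded rows would drag xy_min down to [0, 0]:

import torch

# Toy batch: 2 real points followed by 2 zero-padded rows.
raw_xy = torch.tensor([[[100., 50.], [200., 150.], [0., 0.], [0., 0.]]])  # (1, 4, 2)
mask = torch.tensor([[True, True, False, False]])                          # (1, 4)

pad = ~mask.unsqueeze(-1).expand_as(raw_xy)       # (1, 4, 2), True at padded entries

xy_for_range = raw_xy.clone()
xy_for_range[pad] = float("inf")                  # padding can never win the min
xy_min = xy_for_range.amin(dim=1, keepdim=True)   # tensor([[[100., 50.]]]), not [[0., 0.]]
xy_for_range[pad] = float("-inf")                 # padding can never win the max
xy_max = xy_for_range.amax(dim=1, keepdim=True)   # tensor([[[200., 150.]]])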

Fix 2: Warmup 15 epochs
Line 580 — Change total_iters=10 to total_iters=15:

warmup_scheduler = torch.optim.lr_scheduler.LinearLR(base_opt, start_factor=0.2, total_iters=15)

Line 583 — Change the milestone from [10] to [15]:

scheduler = torch.optim.lr_scheduler.SequentialLR(
    base_opt, schedulers=[warmup_scheduler, cosine_scheduler], milestones=[15]
)

No other changes. Run with --wandb_group noam-r23-combo-fourier-warmup15.
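
For context, a minimal sketch of the full schedule this produces. The model, base LR, cosine phase length, and total epoch count are assumptions used only to exercise the schedule; only total_iters=15 and milestones=[15] come from the instructions above:

import torch

# Hypothetical model and optimizer, just to drive the schedulers.
model = torch.nn.Linear(8, 8)
base_opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Linear warmup: LR ramps from 0.2 * base_lr up to base_lr over epochs 0-14.
warmup_scheduler = torch.optim.lr_scheduler.LinearLR(base_opt, start_factor=0.2, total_iters=15)
# Assumed cosine phase; in train.py, T_max would be total_epochs - 15.
cosine_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(base_opt, T_max=60)

scheduler = torch.optim.lr_scheduler.SequentialLR(
    base_opt, schedulers=[warmup_scheduler, cosine_scheduler], milestones=[15]
)

for epoch in range(75):   # assumed 75 total epochs
    # ... one epoch of training ...
    scheduler.step()      # hands off from warmup to cosine at epoch 15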

Baseline

  • val_loss = 0.8326
  • in_dist surf_p = 17.94
  • ood_cond surf_p = 13.98
  • ood_re surf_p = 27.54
  • tandem surf_p = 36.73
  • mean3 (in+ood_c+tan) = 22.88

Results

W&B run: vk4jwq3d (runtime: ~32 min; state: failed due to a pre-existing visualization error)

Split      Metric       Baseline   This run   Delta
—          val/loss     0.8326     0.8726     +0.0400  ❌
in_dist    mae_surf_p   17.94      18.68      +0.74    ❌
ood_cond   mae_surf_p   13.98      14.39      +0.41    ❌
ood_re     mae_surf_p   27.54      28.20      +0.66    ❌
tandem     mae_surf_p   36.73      39.93      +3.20    ❌
mean3      surf_p       22.88      24.33      +1.45    ❌

Surface MAE detail (this run):

Split      Ux     Uy     p
in_dist    7.23   1.92   18.68
ood_cond   3.81   1.16   14.39
ood_re     3.32   0.99   28.20
tandem     6.80   2.32   39.93

Volume MAE detail (this run):

Split      Ux     Uy     p
in_dist    1.00   0.34   19.40
ood_cond   0.63   0.26   12.00
ood_re     0.76   0.35   47.07
tandem     1.73   0.81   37.65

What happened

Negative result. Both changes individually showed improvements on r22, but their combination is significantly worse across all splits. The largest degradation is tandem (+3.20), followed by in_dist (+0.74). Notably, even ood_cond — the split both changes were supposed to improve — got worse (+0.41).

This likely reflects an interaction effect. With the padding fix, the min/max range is computed only over real points, but the stored padding coordinates are still normalized against that range, so padded positions now receive PE inputs far outside [0,1] instead of the near-zero values they had before. Combined with the longer 15-epoch warmup, the model may be spending more of its warmup time learning to ignore these extreme padding artifacts rather than fitting the actual signal.
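
A toy illustration of the artifact (the values are hypothetical, and the normalization is assumed to be the usual min-max form):

import torch

# Real x-coordinates span [100, 200]; padding is stored as 0 and excluded from the range.
raw_x = torch.tensor([100., 200., 0.])   # last entry is padding
x_min, x_max = 100., 200.                # masked range; padding ignored

x_norm = (raw_x - x_min) / (x_max - x_min)
print(x_norm)  # tensor([ 0.,  1., -1.]) -> the padded position falls outside [0, 1]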

Additionally, extending warmup from 10 to 15 epochs leaves the cosine annealing five fewer effective epochs (65→60), potentially cutting off convergence prematurely.

Suggested follow-ups

  • Test the Fourier PE fix alone on r23 with a seed variation to confirm whether the r22 result was real or noise.
  • If testing the PE fix alone: clamp xy_norm to [0,1] after the masking computation to prevent extreme values at padded positions (see the sketch after this list).
  • The warmup 15 result from r22 may be branch-specific; test on r23 baseline before combining it with other changes.
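
A minimal sketch of the suggested clamp, assuming xy_norm is the min-max-normalized coordinate tensor and the same line is added to both the training and validation loops:

# After the masked min/max normalization:
xy_norm = (raw_xy - xy_min) / (xy_max - xy_min)
# Clamp so padded positions cannot feed extreme values into the Fourier PE;
# real points already lie in [0, 1] and are unaffected.
xy_norm = xy_norm.clamp(0.0, 1.0)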

@tcapelle tcapelle added the status:wip (Student is working on it), student:alphonse (Assigned to alphonse), and noam (Noam advisor branch experiments) labels Mar 20, 2026
@github-actions
github-actions bot commented Mar 20, 2026

Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a pull request comment in the following format:

I have read the CLA Document and I hereby sign the CLA

0 out of 2 committers have signed the CLA.
❌ @senpai-advisor
❌ @senpai-alphonse
senpai-advisor and senpai-alphonse do not appear to be GitHub users. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting "recheck" on this pull request. (Posted by the CLA Assistant Lite bot.)

@tcapelle tcapelle marked this pull request as ready for review March 20, 2026 15:23
@tcapelle tcapelle added the status:review (Ready for advisor review) label and removed the status:wip (Student is working on it) label Mar 20, 2026
@morganmcg1 morganmcg1 closed this Mar 22, 2026
@github-actions github-actions bot locked and limited conversation to collaborators Mar 22, 2026
