Conversation

@daquinteroflex
Collaborator

No description provided.

@github-actions
Contributor

github-actions bot commented Sep 11, 2025

Spell Check Report

2025-10-09-invdes-seminar/00_setup_guide.ipynb:

Cell 1, Line 3: 'dual-layer'
  > > In this notebook, we will set up a baseline simulation for a dual-layer grating coupler. This device is designed to efficiently couple light from an optical fiber to a photonic integrated circuit. We will define the geometry, materials, and all the necessary components for a Tidy3D simulation. This initial setup will serve as the starting point for our optimization in the subsequent notebooks.

2025-10-09-invdes-seminar/01_bayes.ipynb:

Cell 1, Line 3: 'black-box'
  > > With our simulation setup in place, we now turn to optimization. Our goal is to find a set of grating parameters that maximizes the coupling efficiency. Since each simulation is computationally expensive, we will use Bayesian optimization. This technique is ideal for optimizing "black-box" functions that are costly to evaluate.
Cell 1, Line 7: 'higher-dimensional'
  > Exhaustive searches would require thousands of simulations. Bayesian optimization instead builds a probabilistic surrogate of the objective, balancing exploration of uncertain regions with exploitation of promising designs to converge in far fewer solver calls. It intelligently explores the parameter space to find the optimal design with a minimal number of simulations. Bayesian optimization works best when the design space has only a handful of effective degrees of freedom; beyond roughly five independent variables the surrogate becomes harder to learn, so we reserve higher-dimensional searches for gradient-based methods discussed later in the series.
Cell 5, Line 4: 'pbounds'
  > - `parameter_bounds` (the `pbounds` argument) defines the design window we explore.
Cell 5, Line 8: '-Parameter'
  > ## Framing the Problem: A 5-Parameter Global Search
Cell 5, Line 10: 'five-dimensional', 'inter-layer'
  > Rather than tune every tooth individually (30 variables per layer), we search a five-dimensional space of uniform widths, gaps, and inter-layer offset. This captures the dominant physics, keeps simulations fast, and yields a design that later gradient-based passes can refine.
Cell 10, Line 3: 'high-efficiency'
  > We extract the optimizer history, track the best observed loss, and visualize how the search converges toward high-efficiency gratings.
Cell 14, Line 3: 'optimizer's'
  > We reconstruct the best-performing structure, inspect its geometry, and analyze the spectral response to confirm the optimizer's progress.
Cell 23, Line 17: 'utf-'
  > with export_path.open("w", encoding="utf-8") as f:

2025-10-09-invdes-seminar/02_adjoint.ipynb:

Cell 1, Line 1: 'High-Dimensional'
  > # Adjoint Optimization: High-Dimensional Gradient-Based Refinement
Cell 1, Line 3: 'low-dimensional'
  > > In the previous notebook, we used Bayesian Optimization to find a good starting design. The strength of that global optimization approach was its ability to efficiently search a low-dimensional parameter space. However, it was limited: we assumed the grating was uniform, with every tooth and gap being identical.
Cell 1, Line 5: '-element', 'apodize', 'dual-layer'
  > > To push the performance further, we need to apodize the grating, which means varying the dimensions of each tooth individually to better match the profile of the incoming Gaussian beam. This drastically increases the number of design parameters. For our 15-element dual-layer grating, the design space just expanded from 5 global parameters to over 60 individual feature dimensions!
Cell 1, Line 7: 'high-dimensional'
  > > For such a high-dimensional problem, a global search is no longer efficient. In this notebook, we switch to a powerful local, gradient-based optimization technique, enabled by the adjoint method, to refine our design.
Cell 2, Line 5: 'higher-performance'
  > This is where the adjoint method comes in. Tidy3D's automatic differentiation capability uses this method under the hood. It allows us to compute the gradient of our objective function (the coupling efficiency) with respect to all design parameters simultaneously in just two simulations per iteration, regardless of how many parameters there are. This efficiency is what makes it possible to locally optimize structures with thousands of free parameters. We start from the global design found earlier and use these gradients to walk toward a nearby, higher-performance solution.
Cell 5, Line 1: 'High-Dimensional'
  > ## High-Dimensional Parameterization
Cell 5, Line 3: 'per-tooth'
  > We load the best uniform design from the Bayesian search and expand those scalars into per-tooth arrays. Each layer now has individual widths and gaps, and `first_gap_si` remains a crucial phase-matching variable.
Cell 6, Line 2: 'utf-'
  > with Path("./results/gc_bayes_opt_best.json").open("r", encoding="utf-8") as f:
Cell 11, Line 3: 'adjoint-driven'
  > The steadily rising power confirms the adjoint-driven search is homing in on a better design.
Cell 16, Line 3: 'mode-match'
  > Visual inspection highlights the non-uniform duty cycle discovered by the optimizer to better mode-match the incident beam.
Cell 19, Line 2: 'JSON-serializable'
  > """Detach autograd containers into JSON-serializable Python objects."""
Cell 19, Line 19: 'utf-'
  > with export_path.open("w", encoding="utf-8") as f:
Cell 20, Line 3: 'high-dimensional'
  > Switching to a gradient-based approach unlocked high-dimensional refinements and reduced the coupling loss by more than a decibel. The resulting design is finely tuned for nominal fabrication, so the next notebook introduces robust optimization to preserve performance under realistic manufacturing variations.

2025-10-09-invdes-seminar/03_sensitivity.ipynb:

Cell 1, Line 3: 'adjoint-optimized'
  > > The adjoint-optimized grating from the previous notebook delivers excellent nominal performance. In practice, however, fabrication variability means the manufactured device rarely matches the design exactly. Here we quantify how the current design responds to some assumed process deviations to see whether it is robust or brittle.
Cell 1, Line 5: 'follow-up', 'well-controlled'
  > > In the adjoint notebook we purposefully focused on maximizing performance at the nominal geometry. The natural follow-up question is: *how does that optimized design behave once it leaves the computer?* Photonic fabrication processes inevitably introduce small deviations in etched dimensions. Even a well-controlled foundry run can exhibit ±20 nm variations in tooth widths and gaps due to lithography or etch bias. A design that is overly sensitive to these changes might look great in simulation yet fail to meet targets on wafer, so our immediate goal is to measure that sensitivity before pursuing robustness improvements.
Cell 2, Line 3: 'over-etched', 'under-etched'
  > We begin by reloading the best adjoint design and defining a simple bias model. A ±20 nm shift in feature dimensions is a realistic foundry tolerance, so we will simulate three cases: the nominal geometry, an over-etched device (features narrower than intended), and an under-etched device (features wider than intended). This gives an intuitive first look at the design's sensitivity before launching a full Monte Carlo analysis.
Cell 4, Line 2: 'numpy-friendly'
  > """Load a design JSON (Bayes or adjoint) into numpy-friendly fields."""
Cell 4, Line 3: 'utf-'
  > data = json.loads(Path(path).read_text(encoding="utf-8"))
Cell 6, Line 11: 'over-etched'
  > # Create simulations for each fabrication scenario: over-etched, nominal,
Cell 6, Line 12: 'under-etched'
  > # and under-etched. Positive bias widens features, while a negative bias
Cell 6, Line 13: 'over-etching'
  > # corresponds to over-etching that narrows them.
Cell 6, Line 15: 'Over-etched'
  > "Over-etched (-20 nm)": builder(etch_bias=-bias),
Cell 6, Line 17: 'Under-etched'
  > "Under-etched (+20 nm)": builder(etch_bias=bias),
Cell 8, Line 3: 'high-efficiency'
  > The curves below compare the nominal spectrum to ±20 nm biased geometries. The separation between them conveys how quickly our high-efficiency design degrades under realistic fabrication shifts in tooth width and gap. Watch for both a drop in peak efficiency and a shift of the optimal wavelength.
Cell 9, Line 3: 'Over-etched'
  > "Over-etched (-20 nm)": "tab:orange",
Cell 9, Line 5: 'Under-etched'
  > "Under-etched (+20 nm)": "tab:blue",
Cell 11, Line 3: 'foundry-provided'
  > After inspecting the deterministic bias sweep, we broaden the analysis with a Monte Carlo study. We randomly sample overlay, spacer, and width variations according to foundry-provided sigma values to estimate the distribution of coupling efficiency across a wafer.
Cell 14, Line 1: 'silicon-width'
  > We draw overlay, spacer, and silicon-width perturbations from independent Gaussian models whose sigmas come straight from the (hypothetical) foundry tolerance table. Each row in the `samples` array represents one die that we will feed into the simulation pipeline.
Cell 21, Line 10: 'lightgray'
  > color="lightgray",
Cell 23, Line 1: 'center-wavelength', 'single-number'
  > The helper converts the center-wavelength transmission into dB loss and aggregates mean, standard deviation, and percentile values. These single-number metrics offer a quick dashboard before moving on to more detailed adjoint sensitivities.
Cell 26, Line 3: 'silicon-width'
  > Before launching a full robust optimization we want directional information: which fabrication knobs most strongly impact coupling efficiency near the nominal point? The objective below evaluates a single perturbed simulation and, through `value_and_grad`, returns both the power and its gradient with respect to the overlay, spacer, and silicon-width errors.
Cell 32, Line 3: 'gradient-scaled'
  > Normalizing the gradient-scaled sigmas reveals how much each parameter contributes to the linearized variance. Plotting the breakdown highlights the dominant sensitivities we should target when we redesign for robustness.
Cell 37, Line 5: 'lightgray'
  > color="lightgray",

2025-10-09-invdes-seminar/04_adjoint_robust.ipynb:

Cell 3, Line 3: 'over-etched', 'under-etched'
  > We evaluate the design under three fabrication scenarios: nominal, over-etched (−20 nm), and under-etched (+20 nm). We then maximize the mean transmission and simultaneously minimize the standard deviation in performance between these different scenarios, which should lead to a more robust design overall. The amount of weight we place on the standard deviation minimization determines the tradeoff between nominal performance and robustness.
Cell 5, Line 3: 'fabrication-sensitive'
  > We seed the optimizer with the fabrication-sensitive adjoint design and enforce the same foundry limits as before so the updates remain manufacturable.
Cell 6, Line 1: 'utf-'
  > data = json.loads(Path("./results/gc_adjoint_best.json").read_text(encoding="utf-8"))
Cell 7, Line 3: 'adjoint-optimized'
  > Starting from the adjoint-optimized design found earlier, we use Adam to minimize the robust objective.
Cell 11, Line 1: 'Post-Optimization'
  > ### Pre- and Post-Optimization Bias Sweeps
Cell 12, Line 16: 'Over-etched'
  > ("Over-etched (-20 nm)", apply_bias(param_dict, -bias)),
Cell 12, Line 18: 'Under-etched'
  > ("Under-etched (+20 nm)", apply_bias(param_dict, bias)),
Cell 14, Line 1: 'Over-etched'
  > labels = ["Over-etched (-20 nm)", "Nominal", "Under-etched (+20 nm)"]
Cell 14, Line 1: 'Under-etched'
  > labels = ["Over-etched (-20 nm)", "Nominal", "Under-etched (+20 nm)"]
Cell 14, Line 3: 'Over-etched'
  > "Over-etched (-20 nm)": "tab:orange",
Cell 14, Line 5: 'Under-etched'
  > "Under-etched (+20 nm)": "tab:blue",
Cell 15, Line 3: 're-running'
  > Finally we save the fabrication-aware geometry so downstream notebooks - or a GDS handoff - can reuse it without re-running the optimization loop.
Cell 16, Line 13: 'utf-'
  > export_path.write_text(json.dumps(export_payload, indent=2), encoding="utf-8")

2025-10-09-invdes-seminar/05_robust_comparison.ipynb:

Cell 1, Line 5: 'robustness-optimized'
  > > This notebook compares the nominal adjoint design against the robustness-optimized variant using a matched Monte Carlo experiment, highlighting the yield benefits of carrying fabrication awareness into the optimization loop.
Cell 4, Line 2: 'numpy-friendly'
  > """Load a design JSON (Bayes or adjoint) into numpy-friendly fields."""
Cell 4, Line 3: 'utf-'
  > data = json.loads(Path(path).read_text(encoding="utf-8"))
Cell 11, Line 1: 'Center-Wavelength'
  > ## Distribution of Center-Wavelength Loss
Cell 13, Line 11: '-percentile', 'worst-case'
  > 90th-percentile loss improves (2.86 -> 2.82 dB, **better worst-case**).
Cell 13, Line 12: '-percentile', 'best-case'
  > 10th-percentile loss worsens (2.31 -> 2.23 dB, **slightly lower best-case**).
Cell 13, Line 15: 'best-case', 'worst-case'
  > The robust design maintains essentially the same overall spread but shifts the entire distribution slightly toward lower loss. While variability remains comparable, the robust version delivers **a modest boost in average transmission and improved worst-case performance**, at the cost of a marginally weaker best-case - a balanced, realistic outcome consistent with fabrication-aware optimization.
Cell 14, Line 2: 'wafer-level'
  > Even a few hundredths of a decibel can translate to higher wafer-level yield when scaled to thousands of devices.

2025-10-09-invdes-seminar/06_measurement_calibration.ipynb:

Cell 1, Line 3: 'as-built'
  > > Our robust adjoint design is ready for fabrication, but once real devices come back from the foundry, their spectral responses rarely match the nominal simulation exactly. In this notebook we demonstrate a way to calibrate the simulation model to match measured data using adjoint optimization, recovering the as-built geometry so subsequent optimization or analysis stays grounded in reality.
Cell 1, Line 5: 'real-world'
  > Just as we used gradient-based optimization with adjoint derivatives to design the device, we can apply the same approach to calibrate fabrication parameters. Instead of optimizing geometric features to achieve a target performance, we now optimize fabrication corners (like width bias, etch depth, or sidewall angle) to match measured spectral data. Because we're using adjoint sensitivities, this approach scales efficiently to many parameters - real-world calibration often involves multiple fabrication variables simultaneously, and adjoint lets us handle that complexity with the same computational efficiency we saw during design.
Cell 6, Line 5: 'utf-'
  > robust_data = json.loads(robust_path.read_text(encoding="utf-8"))
Cell 11, Line 3: 'mean-squared'
  > We adjust the SiN tooth widths so the simulated spectrum matches the measured one. The loss is the mean-squared error between spectra sampled at the monitor frequencies, optimized with Adam while respecting fabrication bounds.
Cell 11, Line 5: 'multi-parameter'
  > In this example we optimize a single global width bias, which is a one-dimensional problem that could also be solved with simpler techniques like bisection or line search. However, we're showcasing adjoint optimization here because real calibration scenarios often involve multiple correlated fabrication parameters (width bias, etch depth, sidewall angle, material index shifts, etc.) and adjoint derivatives make it practical to optimize all of them simultaneously. This demonstration establishes the workflow that scales naturally to those multi-parameter calibration problems.
Cell 18, Line 3: 'higher-yield'
  > By calibrating the simulation to match measurement we keep the model and fabricated hardware in sync. Combined with robust optimization this closes the loop between design, fabrication, and test, enabling faster debug and higher-yield deployment of inverse-designed photonics.
Cell 18, Line 8: 'gradient-free'
  > - **Precision**: Gradient information enables faster convergence to accurate parameter estimates compared to gradient-free methods, especially important when calibration involves expensive simulations.

DifferentialStripline.ipynb:

Cell 30, Line 3: 'AutoImpedanceSpec', 'CustomImpedanceSpec', 'MicrowaveModeSpec'
  > * By default the `MicrowaveModeSpec` is setup to automatically compute impedance for modes using an `AutoImpedanceSpec`. Alternatively, you may use a `CustomImpedanceSpec` which requires either a `voltage_spec`, or `current_spec`, or both. These definitions are used to calculate the mode's characteristic impedance $Z_0$. Note that depending on which integral is specified, there are three possible conventions: PI, PV, or VI where P = power, V = voltage, I = current. Each convention will yield a slightly different value for the impedance. In this notebook, we use the VI convention. The `AutoImpedanceSpec` uses the PI convention for computing impedance.
Cell 48, Line 1: 'TransmissionLineDataset'
  > We obtain $Z_0$ by accessing the `Z0` field in the `TransmissionLineDataset`, making use of the previously defined current and voltage specifications.

PlanarHelicalAntennaArray.ipynb:

Cell 24, Line 3: 'geoemtries'
  > rbox_list_ant = []  # List of mesh refinement geoemtries
Cell 76, Line 4: 'Re-arrange'
  > # Re-arrange columns and convert units

TFLNTidy3d.ipynb:

Cell 30, Line 3: 'Pockels'
  > Now we can use the RF fields to calculate the Pockels effect and Vπ·L.
Cell 32, Line 1: 'Pockels', 'push-pull'
  > To calculate the Pockels effect, we apply an electric field along the LiNbO₃ cut direction to perturb the optical medium. First, we normalize the field by the applied voltage on the push-pull configuration.
Cell 34, Line 1: 'Pockels'
  > The normalized electric field is applied to the LiNbO₃ crystal along its z-axis, following the Pockels effect model:
Cell 35, Line 6: 'Pockels'
  > # Calculate refractive index variation from Pockels effect

Checked 83 notebook(s). Found spelling errors in 10 file(s).
Generated by GitHub Action run: https://github.com/flexcompute/tidy3d-notebooks/actions/runs/20139526023

@yuanshen-flexcompute yuanshen-flexcompute marked this pull request as ready for review September 11, 2025 14:16
dmarek-flex and others added 6 commits December 8, 2025 13:28
…ferences

- Remove experimental feature notes from autograd notebooks
- Update AutogradPlugin URLs to Autograd URLs
- Replace 'adjoint plugin' text with 'autograd' or 'adjoint method'
- Remove version 2.7 references
- Clean up stale entries in import_file_mapping.json

Resolves FXC-4452


8 participants