
Conversation

@j-fletcher (Contributor) commented Jun 21, 2025

Description

Adds capability to bias univariate and multivariate distributions for source biasing. This capability will ultimately be useful for the addition of automated CADIS weight-windowing using the adjoint random ray solver.

For some probability density function $p(x)$, a bias distribution $b(x)$ may be specified that more frequently samples the phase-space region(s) of interest, provided $\operatorname{supp}(p) = \operatorname{supp}(b)$. Each sample $x_0$ drawn from $b(x)$ is then assigned the weight $\frac{p(x_0)}{b(x_0)}$.
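
The weighting above is standard importance sampling. As a minimal self-contained illustration (plain Python, not OpenMC code; the distributions here are chosen for the example), one can draw samples from a bias distribution $b(x) = 2x$ on $[0, 1]$ and reweight them to estimate a moment of the uniform distribution $p(x) = 1$ on the same support:

```python
import math
import random

# p(x) = 1 on [0, 1] is the distribution we want to represent;
# b(x) = 2x on [0, 1] is the bias distribution. Both share support [0, 1].

def sample_biased(n, seed=1):
    """Draw n samples from b and weight each by p(x)/b(x) = 1/(2x)."""
    rng = random.Random(seed)
    samples, weights = [], []
    for _ in range(n):
        x = math.sqrt(rng.random())      # inverse CDF of b: B(x) = x^2
        samples.append(x)
        weights.append(1.0 / (2.0 * x))  # w = p(x)/b(x)
    return samples, weights

samples, weights = sample_biased(100_000)
# Weighted estimate of E_p[x^2] = 1/3, even though x was drawn from b
second_moment = sum(w * x * x for w, x in zip(weights, samples)) / len(samples)
```

Because $b$ oversamples the upper end of the interval, the weights near 1 are small and those near 0 are large, which is exactly the mechanism the PR uses to preserve an unbiased estimate while concentrating samples where they matter.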

This PR adds a bias attribute to each Univariate distribution, with the exception of Mixture. The bias attribute can be any univariate distribution, although a Discrete distribution may only be biased by another Discrete with the same number of elements. An error is raised if a bias distribution is itself biased by another distribution; eventually, this validation will also check that the parent and bias distributions share common support.

Likewise, multivariate independent distributions (PolarAzimuthal, CartesianIndependent, SphericalIndependent, CylindricalIndependent) can be biased by applying a bias distribution to any of their constituent distributions. Similarly, MeshSpatial and PointCloud can be biased by specifying a second vector of strengths, which achieves the same effect as a biased Discrete. Isotropic, however, is biased by a PolarAzimuthal, and Monodirectional and Point may not be biased.
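
To illustrate the two-strength-vector idea (a generic sketch, not the OpenMC implementation; the function name and signature here are invented for the example): indices are sampled from the bias strengths, and each sample is weighted by the ratio of its normalized source strength to its normalized bias strength.

```python
import random

def sample_biased_discrete(strengths, bias_strengths, n, seed=1):
    """Sample n indices from bias_strengths; weight each by
    (p_i / sum(p)) / (b_i / sum(b)), the discrete analog of p(x)/b(x)."""
    rng = random.Random(seed)
    p_total = sum(strengths)
    b_total = sum(bias_strengths)
    indices = rng.choices(range(len(strengths)), weights=bias_strengths, k=n)
    weights = [(strengths[i] / p_total) / (bias_strengths[i] / b_total)
               for i in indices]
    return indices, weights

# Element 0 is sampled 4x more often than its true strength warrants,
# so it receives a weight below unity to compensate.
indices, weights = sample_biased_discrete([1, 1, 1, 1], [4, 1, 1, 1], n=40_000)
```

The mean weight over many samples tends to 1, and the weighted frequency of each element recovers its normalized source strength.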

Below is an example of creating two biased distributions.

import openmc.stats as stats

# Univariate distributions can be sampled using the Python API
bias_dist = stats.PowerLaw(a=0, b=3, n=3)
biased_dist = stats.PowerLaw(a=0, b=3, n=2, bias=bias_dist)
sample_vec, wgt_vec = biased_dist.sample(n_samples=100, seed=1)

# Bias distributions are exported to XML along with the parent distribution
sphere_dist = stats.spherical_uniform(r_outer=3)
sphere_dist.r.bias = bias_dist
elem = sphere_dist.to_xml_element()

This PR introduces a breaking change: sampling these distributions now returns a weight along with the usual sample type. In Python, sampling returns a tuple of the samples and their weights; in C++, it returns a std::pair of the usual return type (e.g., Position, Direction, or double) and the associated weight (a double). If no bias distribution is specified, analog sampling occurs and a unit weight is returned. Documentation and tests for this change are pending.

Checklist

  • I have performed a self-review of my own code
  • I have run clang-format (version 15) on any C++ source files (if applicable)
  • I have followed the style guidelines for Python source files (if applicable)
  • I have made corresponding changes to the documentation (if applicable)
  • I have added tests that prove my fix is effective or that my feature works (if applicable)

j-fletcher and others added 30 commits April 23, 2025 11:13
…in Python. Added bias, Sample object and XML reading on cpp.
…emoved function fragment from DiscreteIndex.
…; added missing dereference for sampling distributions in PolarAzimuthal
@j-fletcher j-fletcher mentioned this pull request Jan 6, 2026
@paulromano (Contributor) left a comment

@j-fletcher Thanks so much for putting this together. I went through all the code and did quite a bit of refactoring, but this was mostly to clean up and simplify the internals. The basic user-facing API is preserved, where a bias distribution/array can be specified. To summarize the refactoring that I did:

  • The logic for handling biased sampling (sample bias distribution and assign weight as p(x)/b(x)) was common among the different classes. I moved this into the base class (for both C++ and Python) so that the individual subclasses don't need to worry about the biasing logic.
  • Similarly, I changed the unbiased sampling functions to simply return a single value with no weights. That way, if someone introduces a new distribution type, again they don't really need to be concerned about how biasing works since it is handled in the base class.
  • The normalization constant for the Watt distribution was incorrect, which is why that test was failing. The constant (and hence the test) have been fixed.
  • Taking multiple samples for Discrete on the Python side was very slow ($O(NM^2)$ where $N$ is the number of samples and $M$ is the number of discrete points). That has been reduced to $O(N)$.
  • Similar performance issues were seen with biased sampling on the Python side in general. By vectorizing the calls to evaluate, I'm seeing much better performance.
  • On the C++ side, I found the biasing logic with DiscreteIndex and Discrete to be quite confusing because it was intermixed in both classes. To simplify matters, I moved all of the bias-related logic to Discrete so that DiscreteIndex has a single purpose (sample an alias table). While doing this, I also optimized memory usage: for the biased case it only keeps one extra vector for the weights (no need to store original probabilities), and in the unbiased case there is no additional memory usage as the weights vector is kept empty.
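
For context on the "single purpose (sample an alias table)" point above, the alias-table technique can be sketched generically in Python (Vose's method; this is an illustrative standalone version, not the OpenMC C++ code in DiscreteIndex):

```python
import random

def build_alias_table(probs):
    """Build an alias table for a discrete distribution in O(M) time."""
    n = len(probs)
    total = sum(probs)
    scaled = [p * n / total for p in probs]  # rescale so the mean is 1
    prob_table = [0.0] * n
    alias = [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s = small.pop()
        l = large.pop()
        prob_table[s] = scaled[s]
        alias[s] = l                   # excess probability spills to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:            # leftovers are exactly full columns
        prob_table[i] = 1.0
    return prob_table, alias

def sample_alias(prob_table, alias, rng):
    """Draw one index in O(1): pick a column, then accept it or its alias."""
    i = rng.randrange(len(prob_table))
    return i if rng.random() < prob_table[i] else alias[i]
```

Keeping this logic isolated makes the biased case a thin layer on top: sample an index from the (bias) alias table, then look up its precomputed weight.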

One comment below I'd appreciate your thoughts on but otherwise I'm happy with merging this. I'll leave it open over the weekend in case others want to chime in.

Comment on lines +237 to +239
reduction. Consistency must also be enforced between the source particle
weights and the window into which they are born, so that unnecessary
splitting and roulette do not occur at the start of each history.
@paulromano (Contributor):

I think this is no longer true thanks to #3459

@j-fletcher (Contributor, Author):

I believe you're correct! Thanks for looking over this PR!

