Source biasing capabilities #3460
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: develop
Conversation
Commits:
- …in Python. Added bias, Sample object and XML reading on cpp.
- …emoved function fragment from DiscreteIndex.
- …implementation for Distribution::evaluate()
- …; added missing dereference for sampling distributions in PolarAzimuthal
- …py) bias setter and XML export
paulromano left a comment
@j-fletcher Thanks so much for putting this together. I went through all the code and did quite a bit of refactoring, but this was mostly to clean up and simplify the internals. The basic user-facing API is preserved, where a bias distribution/array can be specified. To summarize the refactoring that I did:
- The logic for handling biased sampling (sample bias distribution and assign weight as p(x)/b(x)) was common among the different classes. I moved this into the base class (for both C++ and Python) so that the individual subclasses don't need to worry about the biasing logic.
- Similarly, I changed the unbiased sampling functions to simply return a single value with no weights. That way, if someone introduces a new distribution type, again they don't really need to be concerned about how biasing works since it is handled in the base class.
- The normalization constant for the Watt distribution was incorrect, which is why that test was failing. The constant (and hence the test) have been fixed.
- Taking multiple samples for `Discrete` on the Python side was very slow ($O(NM^2)$, where $N$ is the number of samples and $M$ is the number of discrete points). That has been reduced to $O(N)$.
- Similar performance issues were seen with biased sampling on the Python side in general. By vectorizing the calls to `evaluate`, I'm seeing much better performance.
- On the C++ side, I found the biasing logic with `DiscreteIndex` and `Discrete` to be quite confusing because it was intermixed in both classes. To simplify matters, I moved all of the bias-related logic to `Discrete` so that `DiscreteIndex` has a single purpose (sample an alias table). While doing this, I also optimized memory usage: for the biased case it only keeps one extra vector for the weights (no need to store original probabilities), and in the unbiased case there is no additional memory usage as the weights vector is kept empty.
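The base-class pattern described above (sample the bias distribution, assign weight $p(x)/b(x)$) can be sketched in plain Python. The `Distribution` base class, `sample_unbiased` method, and `Uniform` subclass below are hypothetical stand-ins for illustration, not the actual OpenMC implementation:

```python
import random


class Distribution:
    """Sketch of a distribution base class that owns all biasing logic."""

    def __init__(self, bias=None):
        self.bias = bias  # optional bias distribution b(x)

    def evaluate(self, x):
        """Return the probability density p(x); implemented by subclasses."""
        raise NotImplementedError

    def sample_unbiased(self):
        """Draw a single unweighted sample; implemented by subclasses."""
        raise NotImplementedError

    def sample(self):
        """Return (sample, weight); subclasses never see the biasing logic."""
        if self.bias is None:
            # Analog sampling: unit weight
            return self.sample_unbiased(), 1.0
        x = self.bias.sample_unbiased()          # draw from b(x)
        return x, self.evaluate(x) / self.bias.evaluate(x)  # w = p(x)/b(x)


class Uniform(Distribution):
    """Uniform distribution on [a, b], as an example subclass."""

    def __init__(self, a, b, bias=None):
        super().__init__(bias)
        self.a, self.b = a, b

    def evaluate(self, x):
        return 1.0 / (self.b - self.a) if self.a <= x <= self.b else 0.0

    def sample_unbiased(self):
        return random.uniform(self.a, self.b)
```

For instance, a `Uniform(0, 10)` biased by `Uniform(0, 5)` yields samples only in $[0, 5]$, each carrying weight $0.1/0.2 = 0.5$, while the unbiased version returns unit weights; a new subclass only has to implement `evaluate` and `sample_unbiased`.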
There's one comment below I'd appreciate your thoughts on, but otherwise I'm happy with merging this. I'll leave it open over the weekend in case others want to chime in.
> reduction. Consistency must also be enforced between the source particle
> weights and the window into which they are born, so that unnecessary
> splitting and roulette do not occur at the start of each history.
I think this is no longer true thanks to #3459
I believe you're correct! Thanks for looking over this PR!
Description
Adds capability to bias univariate and multivariate distributions for source biasing. This capability will ultimately be useful for the addition of automated CADIS weight-windowing using the adjoint random ray solver.
For some probability density function $p(x)$, a bias distribution $b(x)$ may be specified that more frequently samples the phase-space region(s) of interest, provided $\operatorname{supp}(p) = \operatorname{supp}(b)$. Each sample $x_0$ drawn from $b(x)$ is then weighted according to $\frac{p(x_0)}{b(x_0)}$.
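This weighting rule preserves expectation values, which is why biased sampling leaves tallies unbiased: for any tally function $f$,

$$\mathbb{E}_b\!\left[f(x)\,\frac{p(x)}{b(x)}\right] = \int f(x)\,\frac{p(x)}{b(x)}\,b(x)\,dx = \int f(x)\,p(x)\,dx = \mathbb{E}_p\!\left[f(x)\right],$$

where the matching-support condition guarantees the ratio $p(x)/b(x)$ is well defined wherever $p$ is nonzero.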
This PR adds a `bias` attribute to each `Univariate` distribution, with the exception of `Mixture`. The `bias` attribute can be any univariate distribution, although `Discrete` distributions may only be biased by another `Discrete` with the same number of elements. An error is raised if a bias distribution is itself biased by another distribution; eventually this process will also verify whether the parent and bias distributions share common support.

Likewise, multivariate independent distributions (`PolarAzimuthal`, `CartesianIndependent`, `SphericalIndependent`, `CylindricalIndependent`) can be biased by applying a bias distribution to any of their constituent distributions. Similarly, `MeshSpatial` and `PointCloud` can be biased by specifying a second vector of strengths, which achieves the same effect as a biased `Discrete`. `Isotropic`, however, is biased by a `PolarAzimuthal`, and `Monodirectional` and `Point` may not be biased.

Below is an example of creating two biased distributions.
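The original example code block did not survive extraction. As a stand-in, here is a self-contained sketch in plain Python of two biased discrete distributions; the `Discrete` class here is a hypothetical mock mirroring the behavior described above (same-length check, $p/b$ weights), not the actual OpenMC class:

```python
import random


class Discrete:
    """Mock discrete distribution with an optional same-length Discrete bias."""

    def __init__(self, x, p, bias=None):
        if bias is not None and len(bias.x) != len(x):
            # Mirrors the constraint that a Discrete may only be biased
            # by another Discrete with the same number of elements.
            raise ValueError("bias must have the same number of elements")
        total = sum(p)
        self.x = list(x)
        self.p = [pi / total for pi in p]  # normalized probabilities
        self.bias = bias

    def evaluate(self, xi):
        return self.p[self.x.index(xi)]

    def sample(self):
        """Return (sample, weight); weight is p(x)/b(x) when biased."""
        if self.bias is None:
            return random.choices(self.x, weights=self.p)[0], 1.0
        xi, _ = self.bias.sample()
        return xi, self.evaluate(xi) / self.bias.evaluate(xi)


# Two biased distributions: an energy spectrum biased toward the rare
# high-energy point, and a position distribution biased uniformly.
energy = Discrete([1.0, 2.0], [0.9, 0.1],
                  bias=Discrete([1.0, 2.0], [0.5, 0.5]))
position = Discrete([0.0, 5.0, 10.0], [0.2, 0.6, 0.2],
                    bias=Discrete([0.0, 5.0, 10.0], [1.0, 1.0, 1.0]))
```

With the `energy` distribution above, the high-energy point is sampled half the time instead of 10% of the time, and the weights $0.9/0.5 = 1.8$ and $0.1/0.5 = 0.2$ keep the weighted mean at the analog value of $1.1$.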
This PR does introduce a major change in that sampling these distributions will now return a weight along with the usual sample type. Where available, in Python this will be a tuple of the sample and its weight, while in C++, sampling returns a `std::pair` of the usual return type (e.g. `Position`, `Direction`, or `double`) and its associated weight (a `double`). If no bias distribution is specified, analog sampling will occur and return unit weight. Documentation and tests are pending for this change.

Checklist