add model-specific tests to comparisons? #6

@iantaylor-NOAA

Description

The new Simple_Lorenzen model discussed in #5 includes quantities related to M at age and Dynamic B0 under derived quantities. A proper test of these optional outputs would look for changes in the uncertainty of these quantities across model versions. These quantities are only calculated as output, so even if they break, the other model results should be unchanged (unless the model crashes).

If the current comparison test still depends on a small set of quantities compared across all models (I haven't looked lately), it doesn't make sense to test quantities that aren't output by some models. But making the test flexible enough to allow model-specific comparisons could be a good way to ensure that the features of interest for each model are tested appropriately.
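The model-specific testing idea above could be sketched roughly as follows. This is only an illustration of the structure, not code from this repository: the model names, quantity names, and result format are all hypothetical, with each model declaring any extra derived quantities (such as M at age or Dynamic B0) on top of a common set compared for every model.

```python
# Hypothetical sketch: per-model lists of derived quantities to compare
# across model versions. Models that don't output a quantity simply
# never have it checked.

# Quantities compared for every model (names are placeholders).
COMMON_QUANTITIES = ["SSB", "Recruitment"]

# Extra quantities only produced by particular models.
MODEL_SPECIFIC_QUANTITIES = {
    "Simple_Lorenzen": ["M_at_age", "Dyn_Bzero", "Dyn_Bzero_SD"],
}

def quantities_for(model_name):
    """Full list of quantities to compare for one model."""
    return COMMON_QUANTITIES + MODEL_SPECIFIC_QUANTITIES.get(model_name, [])

def compare_versions(model_name, old_results, new_results, rel_tol=1e-6):
    """Compare derived quantities (including uncertainties) across versions.

    old_results / new_results map quantity name -> float.
    Returns a list of (quantity, old, new) tuples differing beyond rel_tol.
    """
    mismatches = []
    for q in quantities_for(model_name):
        old, new = old_results[q], new_results[q]
        denom = max(abs(old), abs(new), 1e-12)  # guard against divide-by-zero
        if abs(old - new) / denom > rel_tol:
            mismatches.append((q, old, new))
    return mismatches
```

With this shape, adding a new model's optional outputs to the test is just one new entry in `MODEL_SPECIFIC_QUANTITIES`, and a change in the uncertainty of Dynamic B0 (e.g. `Dyn_Bzero_SD`) would be flagged only for models that report it.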

Metadata

Assignees: no one assigned
Labels: enhancement (New feature or request)
