I know discussions on this have already started, but for multiple reasons it would be good to be able to attach a scaling factor (with its own scaling uncertainty) to a dataset/array.
Doing so would allow:
- the application of simple scaling operations to the scaling factor first, rather than to the array itself, followed by a later "apply scaling" step that resets the scaling factor back to 1 while still retaining a meaningful scaling uncertainty
- the clear separation of scaling uncertainties from inter-datapoint uncertainties. This helps because scaling uncertainties can easily be on the order of 10%, whereas inter-datapoint uncertainties can be much smaller.
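To make the idea concrete, here is a minimal sketch of what such an attached scaling factor could look like. All names here (`ScaledArray`, `scaled_by`, `apply_scaling`) are hypothetical, and the sketch assumes independent uncertainties combined in quadrature:

```python
import numpy as np


class ScaledArray:
    """Array with a deferred scaling factor and its own scaling uncertainty.

    `scale_err` is the absolute 1-sigma uncertainty of `scale`; it is kept
    separate from any per-point uncertainties on `values`.
    (Hypothetical sketch, not an existing API.)
    """

    def __init__(self, values, scale=1.0, scale_err=0.0):
        self.values = np.asarray(values, dtype=float)
        self.scale = float(scale)
        self.scale_err = float(scale_err)

    def scaled_by(self, factor, factor_err=0.0):
        """Multiply the scaling factor, not the data; combine the relative
        uncertainties in quadrature (assumes the factors are independent)."""
        new_scale = self.scale * factor
        rel = np.hypot(self.scale_err / self.scale if self.scale else 0.0,
                       factor_err / factor if factor else 0.0)
        return ScaledArray(self.values, new_scale, abs(new_scale) * rel)

    def apply_scaling(self):
        """Fold the scale into the data and reset it to 1; the relative
        scaling uncertainty is preserved so it stays meaningful."""
        rel = self.scale_err / self.scale if self.scale else 0.0
        return ScaledArray(self.values * self.scale, 1.0, rel)


# Example: two 10% scalings combine to ~14% before being applied.
a = ScaledArray([1.0, 2.0], scale=2.0, scale_err=0.2)
b = a.scaled_by(3.0, factor_err=0.3)
c = b.apply_scaling()
```

After `apply_scaling`, `c.values` is `[6.0, 12.0]`, `c.scale` is back to 1, and `c.scale_err` still carries the combined ~14% relative scaling uncertainty, cleanly separated from any per-point error bars.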