Dev/add caching for comp ref inference #167
Draft
(NOT INTENDED TO BE MERGED IN CURRENT STATE)
Add some form of caching to both the comparative regression KDMA estimation and the action parameter filling, with the intention of running through many alignment targets without needing to re-run inference. As set up, we have to specify the dependencies for a cache entry manually, but that gives us a bit more power/flexibility; the other common ways of caching in Python are either in-memory only (the stdlib approach) or require pure functions rather than methods. Let me know if I've missed any "dependencies" for each function.
I'm on the fence about whether to clean this up and try to merge it into main; I'm concerned that we might not cover every case well here and end up using the cache when we didn't intend to.