I'd like to add the ability to run a prediction on a trained model and validate that the model has actually learned what we want it to. It would be good to verify that a trainer not only runs, but also trains effectively.
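Roughly, the per-test-case flow might look like the sketch below. Everything in it is a hypothetical placeholder: `run_training`, `push_prediction_model`, `run_prediction`, `check_match`, and `default_destination` don't exist in the codebase; they stand in for whatever the tool already does (or would need to do) internally.

```python
# Hypothetical sketch of the proposed validation flow, not real API.

def validate_training(train_config: dict) -> None:
    for case in train_config["test_cases"]:
        # Run the training itself with the case's inputs.
        weights = run_training(case["inputs"])

        # Push the trained weights to the destination prediction model
        # (defaulting to '<test model>-dest' if no destination is set).
        destination = train_config.get("destination") or default_destination()
        model = push_prediction_model(
            destination, weights, hardware=train_config["destination_hardware"]
        )

        # Run a prediction on the freshly trained model...
        output = run_prediction(model, case["post-training inputs"])

        # ...and verify it against the post-training match conditions.
        for name, condition in case["post-training match"].items():
            check_match(output, name, condition)
```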
I think the validation would have to live in the context of an individual training. The config design would probably look something like this:
```yaml
train:
  destination: <generated prediction model, e.g. andreasjansson/test-predict. leave blank to append '-dest' to the test model>
  destination_hardware: <hardware for the created prediction model, e.g. cpu>
  test_cases:
    - exact_string: <exact string match>
      inputs:
        <input1>: <value1>
      # shiny and new
      post-training inputs:
        <input1>: <value1>
      post-training match:
        <output1>: <condition1>
```
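For concreteness, here's what a filled-in test case might look like. The model name, inputs, and outputs below are all illustrative, and the post-training condition syntax is deliberately left as a placeholder since it isn't settled:

```yaml
train:
  destination: andreasjansson/test-predict
  destination_hardware: cpu
  test_cases:
    # existing behavior: run the training and match on its output
    - exact_string: "weights saved"
      inputs:
        train_data: https://example.com/training-data.zip
      # shiny and new: predict with the trained model and match its output
      post-training inputs:
        prompt: a photo of TOK
      post-training match:
        output: <some condition on the prediction output>
```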
Unsure if we'd want to move training matching underneath inputs; that would break compatibility, but it would also be a more reasonable partition.