
Hyper-parameters optimization between each active-learning iteration #9

@MelanieLu

Description


At the moment, the hyper-parameters (number of epochs, data augmentation, batch size, etc., or even the size of the network) are not re-optimized as new labeled patches are added to the training set.
We defined conservative hyper-parameters optimized for the initial training set size; therefore, they might not be optimal as the training set grows.

We could consider re-optimizing the hyper-parameters after each active learning iteration.
Question: How can we fairly compare against the baseline if the hyper-parameters change between iterations?
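A minimal sketch of what re-optimization between iterations could look like, assuming a scikit-learn-style setup (the classifier, grid, and uncertainty-sampling strategy here are illustrative placeholders, not the project's actual pipeline). Re-running the same small search with the same budget at every iteration, for both the active-learning runs and the baseline, is one way to keep the comparison fair:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the labeled/unlabeled patch pools.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
labeled = list(range(20))                       # initial labeled set
pool = [i for i in range(len(X)) if i not in labeled]

# Fixed search space and budget, reused at every iteration so the
# re-optimization protocol itself stays constant across runs.
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}

for iteration in range(3):
    # Re-optimize hyper-parameters on the current labeled set.
    search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=3)
    search.fit(X[labeled], y[labeled])
    model = search.best_estimator_

    # Uncertainty sampling: query the pool points closest to the
    # decision boundary (predicted probability nearest 0.5).
    probs = model.predict_proba(X[pool])[:, 1]
    order = np.argsort(np.abs(probs - 0.5))
    picked = [pool[i] for i in order[:10]]

    labeled += picked
    pool = [i for i in pool if i not in picked]
```

The same loop, with random sampling instead of uncertainty sampling but an identical per-iteration search, would serve as the matched baseline.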

Metadata


    Labels

enhancement (New feature or request)
