
training doesn't properly reset state #207

@Sopel97

Description


Running

setoption name SkipLoadingEval value true
setoption name Threads value 4
setoption name Use NNUE value pure
learn targetdir data_x epochs 1 batchsize 100000 use_draw_in_training 1 use_draw_in_validation 1 lr 1 lambda 1 eval_limit 32000 nn_batch_size 1000 newbob_decay 0.99 eval_save_interval 1000000 loss_output_interval 100000 set_recommended_uci_options

(using epoch-based training from https://github.com/Sopel97/Stockfish/tree/cyclic_reader to stop learning after a reasonable time)
twice in a row results in weird activations:

INFO: observed 23356 (out of 43979) features
INFO: (min, max) of pre-activations = -0.0883445, 1.06719 (limit = 258.008)
INFO: largest min activation = 3.40282e+38, smallest max activation = -3.40282e+38

INFO: largest min activation = 0.316678, smallest max activation = 0.675339
INFO: largest min activation = 0.149582, smallest max activation = 0.653783

It may be a display-only issue, but I'm not sure.


Labels: question (further information is requested)
