Update default main net to nn-baff1edbea57.nnue
Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher
  than estimated match results from training data position scores
```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28
training-dataset:
# https://github.com/official-stockfish/Stockfish/pull/4782
- /data/S6-1ee1aba5ed.binpack
- /data/test80-aug2023-2tb7p.v6.min.binpack
- /data/test80-sep2023-2tb7p.binpack
- /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
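The start-lambda and end-lambda options control how the training target
blends the engine's search score with the actual game result, and
early-fen-skipping: 28 drops positions from the first 28 plies of each game.
A minimal sketch of the lambda interpolation, assuming nnue-pytorch's
convention that lambda = 1.0 targets the score only and lower values mix in
the game outcome, with a linear schedule over epochs (names here are
illustrative, not the trainer's actual code):
```python
import torch

def lambda_for_epoch(epoch: int, num_epochs: int,
                     start_lambda: float = 1.0, end_lambda: float = 0.7) -> float:
    # Assumed linear schedule from start-lambda to end-lambda over training.
    t = epoch / max(num_epochs - 1, 1)
    return start_lambda + (end_lambda - start_lambda) * t

def training_target(score_wdl: torch.Tensor, outcome: torch.Tensor,
                    lam: float) -> torch.Tensor:
    # lam = 1.0 trains purely on the engine's search score (in WDL space);
    # lam = 0.7 mixes in 30% of the actual game result.
    return lam * score_wdl + (1.0 - lam) * outcome
```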
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Training loss was increased by 10% for positions where the predicted win
rate was higher than the win rate suggested by the model fitted to the
training data, by multiplying the loss with: ((qf > pt) * 0.1 + 1). This
was a variant of experiments from Sopel's NNUE training & experimentation
log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
- Experiment 302 - increase loss when prediction too high, vondele's idea
- Experiment 309 - increase loss when prediction too high, normalize in a batch
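A minimal sketch of that multiplier in PyTorch; qf and pt are the predicted
and target win rates from the text above, while the pow-2.6 base loss is an
assumption modeled on nnue-pytorch's loss shape, not the exact trainer code:
```python
import torch

def loss_with_overprediction_penalty(qf: torch.Tensor,
                                     pt: torch.Tensor) -> torch.Tensor:
    # qf: predicted win rate, pt: target win rate, both per position.
    base = torch.abs(pt - qf) ** 2.6  # assumed base loss shape
    # Multiply by ((qf > pt) * 0.1 + 1): +10% loss wherever the
    # prediction exceeds the target win rate, as described above.
    scale = (qf > pt).float() * 0.1 + 1.0
    return (base * scale).mean()
```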
Passed STC:
https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 148320 W: 37960 L: 37475 D: 72885
Ptnml(0-2): 542, 17565, 37383, 18206, 464
Passed LTC:
https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 55188 W: 13955 L: 13592 D: 27641
Ptnml(0-2): 34, 6162, 14834, 6535, 29
closes https://github.com/official-stockfish/Stockfish/pull/4972
Bench: 1219824