Update default main net to nn-b1a57edbea57.nnue
Created by retraining the previous main net `nn-baff1edbea57.nnue` with:
- some of the same options as before: the ranger21 optimizer, more WDL skipping
- the addition of T80 data from Nov and Dec 2023
- increasing the loss by 15% when the prediction is too high, up from 10%
- the use of torch.compile to speed up training by over 25%
```yaml
experiment-name: 2560--S9-514G-T80-augtodec2023-more-wdl-skip-15p-more-loss-high-q-sk28
training-dataset:
# https://github.com/official-stockfish/Stockfish/pull/4782
- /data/S6-514G-1ee1aba5ed.binpack
- /data/test80-aug2023-2tb7p.v6.min.binpack
- /data/test80-sep2023-2tb7p.binpack
- /data/test80-oct2023-2tb7p.binpack
- /data/test80-nov2023-2tb7p.binpack
- /data/test80-dec2023-2tb7p.binpack
early-fen-skipping: 28
start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-torch-compile
num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
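In this config, `gamma` is the multiplicative decay applied to the learning rate after each epoch. A small sketch under that assumption (the exact scheduler details live in the nnue-pytorch branch above):

```python
# Exponential LR decay implied by lr and gamma: lr_n = lr * gamma ** n.
lr0, gamma = 4.375e-4, 0.995
for epoch in (0, 400, 819):
    print(epoch, lr0 * gamma ** epoch)
# By epoch 819 (the net ultimately selected) the rate is down to ~7.2e-6.
```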
The net from epoch 819 of training with the above config led to this PR.
Use of torch.compile decorators in nnue-pytorch model.py was found to speed up
training by at least 25% on Ampere GPUs when using a recent PyTorch build
compiled with CUDA 12:
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
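As a minimal sketch of the technique (not the actual nnue-pytorch code, which is a far larger model with its own training loop), decorating the hot forward path is enough to trigger compilation:

```python
import torch
import torch.nn as nn

class TinyEval(nn.Module):
    """Stand-in model; the real NNUE architecture is much larger."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

    # torch.compile as a decorator traces this method on first call and
    # fuses kernels via Inductor; that fusion is where the speedup on
    # Ampere GPUs comes from.
    @torch.compile
    def forward(self, x):
        return self.net(x)

model = TinyEval()
out = model(torch.randn(64, 768))  # first call compiles, later calls are fast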
See recent main net PRs for more info on:
- ranger21 and more WDL skipping: https://github.com/official-stockfish/Stockfish/pull/4942
- increasing loss when Q is too high (sketched below): https://github.com/official-stockfish/Stockfish/pull/4972
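A rough illustration of that asymmetric penalty, assuming a plain squared error for readability (the real nnue-pytorch loss works on WDL-space sigmoids and interpolates with lambda, so treat this only as the shape of the change):

```python
import torch

def asymmetric_mse(pred, target, extra=0.15):
    # Penalize overestimates (pred > target) by `extra` more than
    # underestimates; this run uses 15%, up from the previous net's 10%.
    err = pred - target
    per_sample = err.pow(2)
    per_sample = torch.where(err > 0, per_sample * (1.0 + extra), per_sample)
    return per_sample.mean()
```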
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Passed STC:
https://tests.stockfishchess.org/tests/view/65cd76151d8e83c78bfd2f52
LLR: 2.98 (-2.94,2.94) <0.00,2.00>
Total: 78336 W: 20504 L: 20115 D: 37717
Ptnml(0-2): 317, 9225, 19721, 9562, 343
Passed LTC:
https://tests.stockfishchess.org/tests/view/65ce5be61d8e83c78bfd43e9
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 41016 W: 10492 L: 10159 D: 20365
Ptnml(0-2): 22, 4533, 11071, 4854, 28
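For a quick sense of scale, the raw W/L/D totals translate into a logistic Elo estimate with the standard formula (ignoring the pentanomial pairing that fishtest actually uses for its confidence bounds):

```python
import math

def elo_estimate(w, l, d):
    score = (w + 0.5 * d) / (w + l + d)   # expected score of the patch
    return -400.0 * math.log10(1.0 / score - 1.0)

print(elo_estimate(20504, 20115, 37717))  # STC: about +1.7 Elo
print(elo_estimate(10492, 10159, 20365))  # LTC: about +2.8 Elo
```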
closes https://github.com/official-stockfish/Stockfish/pull/5056
Bench: 1351997