Dev Builds » 20210815-1005


NCM plays each Stockfish dev build 20,000 times against Stockfish 15. This yields an approximate Elo difference and establishes confidence in the strength of the dev builds.
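The "Standard Elo" figures in the tables below are the usual logistic conversion of the overall score fraction. A minimal sketch of that conversion (the formula is the standard Elo model; the helper name is mine, not NCM's code):

```python
import math

def standard_elo(wins, losses, draws):
    """Convert a W/L/D record into a logistic Elo difference."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games  # score fraction from the dev build's side
    return -400 * math.log10(1 / score - 1)

# Aggregate record from the summary table: 2980 wins, 7204 losses, 9816 draws.
print(round(standard_elo(2980, 7204, 9816), 2))  # ≈ -74.5
```

The reported ± values are confidence bounds on this estimate; how NCM computes them is not stated on the page.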

Summary

Host       | Duration | Avg Base NPS | Games | W    | L    | D    | Standard Elo  | Ptnml(0-2)            | Gamepair Elo
ncm-dbt-01 | 06:56:28 | 582718       | 3992  | 603  | 1428 | 1961 | -72.85 ± 5.07 | 11, 953, 883, 148, 1  | -150.64 ± 11.44
ncm-dbt-02 | 06:57:10 | 586409       | 4014  | 614  | 1415 | 1985 | -70.27 ± 5.12 | 13, 939, 893, 160, 2  | -144.56 ± 11.38
ncm-dbt-03 | 06:57:07 | 585237       | 4010  | 608  | 1464 | 1938 | -75.32 ± 4.98 | 19, 944, 917, 124, 1  | -154.68 ± 11.18
ncm-dbt-04 | 06:56:09 | 568360       | 3998  | 607  | 1424 | 1967 | -72.01 ± 5.16 | 15, 946, 884, 149, 5  | -148.72 ± 11.44
ncm-dbt-05 | 06:57:22 | 581278       | 3986  | 548  | 1473 | 1965 | -82.12 ± 5.08 | 22, 1007, 838, 126, 0 | -169.75 ± 11.75
Total      |          |              | 20000 | 2980 | 7204 | 9816 | -74.5 ± 2.27  | 80, 4789, 4415, 707, 9 | -153.57 ± 5.11
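The "Gamepair Elo" column is consistent with scoring each game pair as a single win/draw/loss result (a pair counts as a win if it scored more than 1 of its 2 points, a loss if less, a draw at exactly 1) and then applying the same logistic conversion. A sketch under that assumption (the function name is mine):

```python
import math

def gamepair_elo(ptnml):
    """Elo from pentanomial counts [0, 0.5, 1, 1.5, 2 points per pair],
    treating each game pair as one W/D/L result."""
    pairs = sum(ptnml)
    wins = ptnml[3] + ptnml[4]   # pairs scoring more than 1 point
    draws = ptnml[2]             # pairs scoring exactly 1 point
    score = (wins + 0.5 * draws) / pairs
    return -400 * math.log10(1 / score - 1)

# Aggregate Ptnml(0-2) from the Total row: 80, 4789, 4415, 707, 9.
print(round(gamepair_elo([80, 4789, 4415, 707, 9]), 2))  # ≈ -153.57
```

This reproduces the -153.57 in the Total row, which is why the gamepair figure is roughly double the standard one: each "result" spans two games.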

Test Detail

ID     | Host       | Base NPS | Games | W  | L   | D   | Standard Elo   | Ptnml(0-2)         | Gamepair Elo
436698 | ncm-dbt-03 | 583656   | 10    | 1  | 3   | 6   | -70.36 ± 79.51 | 0, 2, 3, 0, 0      | -147.07 ± 211.27
436697 | ncm-dbt-02 | 584285   | 14    | 2  | 4   | 8   | -49.96 ± 94.42 | 0, 3, 3, 1, 0      | -102.11 ± 234.06
436696 | ncm-dbt-05 | 577889   | 486   | 65 | 172 | 249 | -77.77 ± 14.28 | 0, 124, 102, 17, 0 | -164.21 ± 33.85
436695 | ncm-dbt-01 | 581194   | 492   | 71 | 178 | 243 | -76.78 ± 14.43 | 1, 123, 104, 18, 0 | -160.17 ± 33.52
436694 | ncm-dbt-04 | 566099   | 498   | 83 | 180 | 235 | -68.55 ± 14.59 | 2, 113, 115, 18, 1 | -141.25 ± 31.76
436693 | ncm-dbt-02 | 586097   | 500   | 79 | 185 | 236 | -74.79 ± 14.76 | 5, 113, 116, 15, 1 | -150.51 ± 31.54
436692 | ncm-dbt-03 | 585759   | 500   | 68 | 183 | 249 | -81.36 ± 14.25 | 3, 124, 108, 15, 0 | -167.53 ± 32.81
436691 | ncm-dbt-01 | 581402   | 500   | 71 | 184 | 245 | -79.9 ± 14.21  | 2, 125, 107, 16, 0 | -165.8 ± 33.0
436690 | ncm-dbt-05 | 581860   | 500   | 61 | 187 | 252 | -89.48 ± 14.44 | 3, 135, 97, 15, 0  | -187.16 ± 34.76
436689 | ncm-dbt-04 | 568713   | 500   | 82 | 163 | 255 | -56.78 ± 14.99 | 0, 111, 109, 30, 0 | -116.78 ± 32.73
436688 | ncm-dbt-02 | 586435   | 500   | 78 | 173 | 249 | -66.82 ± 13.48 | 1, 108, 126, 15, 0 | -137.37 ± 29.98
436687 | ncm-dbt-03 | 586435   | 500   | 83 | 182 | 235 | -69.71 ± 14.73 | 2, 116, 112, 19, 1 | -143.89 ± 32.25
436686 | ncm-dbt-01 | 580945   | 500   | 75 | 155 | 270 | -56.07 ± 15.22 | 0, 112, 106, 32, 0 | -115.23 ± 33.15
436685 | ncm-dbt-05 | 582152   | 500   | 68 | 186 | 246 | -83.57 ± 13.87 | 1, 130, 105, 14, 0 | -176.33 ± 33.3
436684 | ncm-dbt-04 | 567324   | 500   | 69 | 185 | 246 | -82.1 ± 14.13  | 4, 121, 112, 13, 0 | -167.53 ± 32.08
436683 | ncm-dbt-02 | 585169   | 500   | 71 | 171 | 258 | -70.43 ± 14.48 | 2, 116, 112, 20, 0 | -143.89 ± 32.25
436682 | ncm-dbt-03 | 583656   | 500   | 91 | 175 | 234 | -58.93 ± 13.43 | 1, 99, 133, 17, 0  | -119.89 ± 29.04
436681 | ncm-dbt-01 | 583321   | 500   | 77 | 186 | 237 | -76.98 ± 14.41 | 4, 116, 116, 13, 1 | -157.24 ± 31.47
436680 | ncm-dbt-05 | 584117   | 500   | 73 | 187 | 240 | -80.63 ± 14.38 | 1, 130, 101, 18, 0 | -169.27 ± 34.05
436679 | ncm-dbt-02 | 588472   | 500   | 83 | 178 | 239 | -66.83 ± 14.62 | 0, 118, 110, 21, 1 | -140.62 ± 32.58
436678 | ncm-dbt-04 | 567126   | 500   | 75 | 173 | 252 | -68.99 ± 13.71 | 0, 115, 118, 17, 0 | -143.89 ± 31.26
436677 | ncm-dbt-03 | 587792   | 500   | 75 | 192 | 233 | -82.83 ± 14.29 | 4, 123, 109, 14, 0 | -169.27 ± 32.62
436676 | ncm-dbt-01 | 585421   | 500   | 77 | 179 | 244 | -71.88 ± 14.1  | 2, 115, 116, 17, 0 | -147.19 ± 31.57
436675 | ncm-dbt-04 | 567284   | 500   | 76 | 184 | 240 | -76.25 ± 14.67 | 3, 120, 110, 16, 1 | -157.24 ± 32.52
436674 | ncm-dbt-05 | 581194   | 500   | 64 | 192 | 244 | -90.96 ± 14.32 | 3, 136, 97, 14, 0  | -190.85 ± 34.75
436673 | ncm-dbt-02 | 588132   | 500   | 70 | 174 | 256 | -73.34 ± 14.15 | 0, 123, 108, 19, 0 | -153.86 ± 32.88
436672 | ncm-dbt-03 | 585211   | 500   | 68 | 177 | 255 | -76.97 ± 13.83 | 2, 119, 115, 14, 0 | -158.93 ± 31.63
436671 | ncm-dbt-01 | 583992   | 500   | 74 | 188 | 238 | -80.63 ± 13.94 | 1, 127, 107, 15, 0 | -169.27 ± 32.98
436670 | ncm-dbt-04 | 570909   | 500   | 73 | 168 | 259 | -66.82 ± 14.75 | 1, 116, 111, 21, 1 | -138.99 ± 32.43
436669 | ncm-dbt-05 | 582987   | 500   | 70 | 184 | 246 | -80.63 ± 13.79 | 1, 126, 109, 14, 0 | -169.27 ± 32.62
436668 | ncm-dbt-02 | 585464   | 500   | 89 | 179 | 232 | -63.23 ± 15.4  | 1, 118, 101, 30, 0 | -129.35 ± 33.93
436667 | ncm-dbt-03 | 586266   | 500   | 78 | 192 | 230 | -80.63 ± 13.33 | 0, 126, 112, 12, 0 | -171.02 ± 32.04
436666 | ncm-dbt-01 | 582778   | 500   | 75 | 171 | 254 | -67.54 ± 14.37 | 1, 115, 113, 21, 0 | -138.99 ± 32.11
436665 | ncm-dbt-04 | 571270   | 500   | 71 | 188 | 241 | -82.83 ± 14.86 | 3, 129, 101, 16, 1 | -172.78 ± 34.05
436664 | ncm-dbt-05 | 580821   | 500   | 81 | 176 | 243 | -66.82 ± 15.16 | 6, 105, 117, 22, 0 | -129.35 ± 31.52
436663 | ncm-dbt-02 | 587409   | 500   | 68 | 174 | 258 | -74.79 ± 14.77 | 3, 120, 107, 20, 0 | -152.18 ± 33.05
436662 | ncm-dbt-03 | 584369   | 500   | 68 | 181 | 251 | -79.9 ± 13.92  | 3, 120, 114, 13, 0 | -164.07 ± 31.75
436661 | ncm-dbt-04 | 568156   | 500   | 78 | 183 | 239 | -74.06 ± 14.74 | 2, 121, 108, 18, 1 | -153.86 ± 32.88
436660 | ncm-dbt-01 | 582694   | 500   | 83 | 187 | 230 | -73.34 ± 13.72 | 0, 120, 114, 16, 0 | -153.86 ± 31.85
436659 | ncm-dbt-05 | 579207   | 500   | 66 | 189 | 245 | -87.26 ± 14.54 | 7, 121, 110, 12, 0 | -174.55 ± 32.38
436658 | ncm-dbt-02 | 586223   | 500   | 74 | 177 | 249 | -72.61 ± 14.27 | 1, 120, 110, 19, 0 | -150.51 ± 32.56
436657 | ncm-dbt-03 | 583992   | 500   | 76 | 179 | 245 | -72.61 ± 14.83 | 4, 115, 111, 20, 0 | -145.54 ± 32.41

Commit

Commit ID d61d38586ee35fd4d93445eb547e4af27cc86e6b
Author Tomasz Sobczyk
Date 2021-08-15 10:05:43 UTC
New NNUE architecture and net

Introduces a new NNUE network architecture and associated network parameters.

The summary of the changes:

* Position for each perspective mirrored such that the king is on the e..h files. This cuts the feature transformer size in half while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is feasible largely because the feature transformer update code is now very well optimized.
* The number of neurons after the second layer is reduced from 16 to 8 to reduce the speed impact. Perhaps surprisingly, this doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq.

The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Additional zero padding is added to the output for some archs that cannot process inputs in groups of <= 8 (SSE2, NEON). Thanks to the reduced number of output neurons, VNNI uses an implementation that can keep all outputs in registers while reducing the number of loads by 3 for each 16 inputs. However, GCC is particularly bad at optimization here (perhaps why the current way the affine transform is done even passed SPRT; see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details), and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143.

The training utilized two datasets:

* dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
* dataset B - as described in https://github.com/official-stockfish/Stockfish/commit/ba01f4b95448bcb324755f4dd2a632a57c6e67bc

The training process was as follows:

1. Train on dataset A for 350 epochs; take the best net in terms of Elo at 20k nodes per move (it's fine to take anything from later stages of training).
2. Convert the .ckpt to .pt.
3. With --resume-from-model pointing at the .pt file, train on dataset B for <600 epochs and take the best net. Lambda=0.8, applied before the loss function.

The first training command:

    python3 train.py \
        ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
        ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
        --gpus "$3," \
        --threads 1 \
        --num-workers 1 \
        --batch-size 16384 \
        --progress_bar_refresh_rate 20 \
        --smart-fen-skipping \
        --random-fen-skipping 3 \
        --features=HalfKAv2_hm^ \
        --lambda=1.0 \
        --max_epochs=600 \
        --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

    python3 serialize.py \
        --features=HalfKAv2_hm^ \
        ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
        ../nnue-pytorch-training/experiment_$1/base/base.pt
    python3 train.py \
        ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
        ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
        --gpus "$3," \
        --threads 1 \
        --num-workers 1 \
        --batch-size 16384 \
        --progress_bar_refresh_rate 20 \
        --smart-fen-skipping \
        --random-fen-skipping 3 \
        --features=HalfKAv2_hm^ \
        --lambda=0.8 \
        --max_epochs=600 \
        --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
        --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4
LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
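The king-mirroring idea behind the halved feature transformer can be illustrated with a small sketch. This is an assumption-laden toy, not the actual Stockfish code: it assumes squares are encoded 0..63 with a1 = 0 and h1 = 7, so XOR with 7 flips a square's file. Whenever the perspective's king is on files a..d, the whole board is mirrored horizontally so the king always ends up on files e..h, halving the number of king placements the feature transformer must distinguish.

```python
def mirror_if_needed(king_sq, piece_sq):
    # Hypothetical 0..63 encoding: file = sq & 7, with a1 = 0, h1 = 7.
    # XOR with 7 flips the file: a<->h, b<->g, c<->f, d<->e.
    if king_sq & 7 < 4:  # king on files a..d -> mirror the whole board
        return king_sq ^ 7, piece_sq ^ 7
    return king_sq, piece_sq

# King on d1 (sq 3) is mirrored to e1 (sq 4); a knight on b1 (sq 1) goes to g1 (sq 6).
print(mirror_if_needed(3, 1))  # (4, 6)
# King already on e1 (sq 4): nothing changes.
print(mirror_if_needed(4, 1))  # (4, 1)
```

Because a chess position and its mirror image are strategically equivalent, the net loses essentially no information while its largest layer shrinks by half.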
Copyright 2011–2025 Next Chess Move LLC