Dev Builds » 20210815-1005

You are viewing an old NCM Stockfish dev build test. The most recent dev build tests, which use Stockfish 15 as the baseline, are listed on the Dev Builds index.

NCM plays each Stockfish dev build 20,000 games against Stockfish 7. Measuring against a fixed baseline yields an approximate Elo difference and a confidence interval for each dev build's strength.
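The Elo figures below follow from the raw win/loss/draw counts via the standard logistic rating model, with a normal-approximation error bar on the score. A minimal Python sketch; the function names and the 95% interval (z = 1.96) are our assumptions rather than NCM's documented method, though they reproduce the published figures:

import math

def logistic_elo(score: float) -> float:
    # Elo model: expected score = 1 / (1 + 10^(-elo/400)), inverted.
    return -400.0 * math.log10(1.0 / score - 1.0)

def elo_from_wld(wins: int, losses: int, draws: int, z: float = 1.96):
    """Elo difference and error bar from game counts (win=1, draw=0.5, loss=0)."""
    n = wins + losses + draws
    score = (wins + 0.5 * draws) / n
    # Per-game score variance, then the standard error of the mean score.
    var = (wins + 0.25 * draws) / n - score ** 2
    se = math.sqrt(var / n)
    # Map the score interval through the Elo curve and take its half-width.
    margin = (logistic_elo(score + z * se) - logistic_elo(score - z * se)) / 2.0
    return logistic_elo(score), margin

# The totals row below (16972 W / 49 L / 2979 D over 20,000 games)
# gives +431.67 +/- 6.23, matching the summary table.
print(elo_from_wld(16972, 49, 2979))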

Summary

Host       Duration   Avg Base NPS   Games   Wins    Losses   Draws   Elo
ncm-et-3   08:21:23   1961204        3325    2814    9        502     +428.58 ± 15.21
ncm-et-4   08:21:27   1965483        3350    2819    8        523     +423.22 ± 14.89
ncm-et-9   08:21:26   1964257        3352    2914    9        429     +458.42 ± 16.47
ncm-et-10  08:21:26   1955691        3294    2768    8        518     +421.80 ± 14.97
ncm-et-13  08:21:32   1963410        3349    2836    4        509     +431.03 ± 15.09
ncm-et-15  08:21:16   1959535        3330    2821    11       498     +428.87 ± 15.28
Total                                20000   16972   49       2979    +431.67 ± 6.23

Test Detail

ID Host Started (UTC) Duration Base NPS Games Wins Losses Draws Elo
150770 ncm-et-10 2021-08-15 20:15 00:46:09 1942648 294 252 1 41 +441.17 ± 54.54
150769 ncm-et-15 2021-08-15 20:12 00:48:41 1961559 330 266 1 63 +384.64 ± 43.43
150768 ncm-et-3 2021-08-15 20:11 00:49:59 1956941 325 272 0 53 +420.67 ± 47.48
150767 ncm-et-13 2021-08-15 20:09 00:51:51 1963073 349 297 0 52 +437.68 ± 48.01
150766 ncm-et-4 2021-08-15 20:09 00:52:24 1964471 350 293 1 56 +417.64 ± 46.26
150765 ncm-et-9 2021-08-15 20:08 00:53:15 1971750 352 303 1 48 +446.64 ± 50.21
150764 ncm-et-10 2021-08-15 18:58 01:16:26 1945867 500 415 2 83 +408.38 ± 37.78
150763 ncm-et-15 2021-08-15 18:56 01:16:04 1957865 500 422 3 75 +421.93 ± 39.85
150762 ncm-et-4 2021-08-15 18:54 01:13:43 1977072 500 431 1 68 +449.35 ± 41.89
150761 ncm-et-3 2021-08-15 18:54 01:16:24 1957710 500 437 1 62 +466.04 ± 43.97
150760 ncm-et-13 2021-08-15 18:54 01:14:58 1968646 500 427 0 73 +441.5 ± 40.29
150759 ncm-et-9 2021-08-15 18:53 01:14:35 1964455 500 440 0 60 +477.98 ± 44.67
150758 ncm-et-10 2021-08-15 17:41 01:16:04 1965689 500 430 0 70 +449.35 ± 41.19
150757 ncm-et-15 2021-08-15 17:39 01:16:19 1953589 500 417 3 80 +410.58 ± 38.54
150756 ncm-et-9 2021-08-15 17:38 01:14:14 1961383 500 439 1 60 +471.92 ± 44.73
150755 ncm-et-13 2021-08-15 17:37 01:15:48 1954690 500 429 2 69 +441.5 ± 41.61
150754 ncm-et-4 2021-08-15 17:37 01:16:36 1955113 500 419 2 79 +417.32 ± 38.77
150753 ncm-et-3 2021-08-15 17:37 01:16:23 1957097 500 426 1 73 +436.43 ± 40.36
150752 ncm-et-10 2021-08-15 16:26 01:14:05 1961549 500 416 2 82 +410.58 ± 38.02
150751 ncm-et-15 2021-08-15 16:22 01:15:27 1955110 500 416 2 82 +410.58 ± 38.02
150750 ncm-et-13 2021-08-15 16:22 01:14:27 1966020 500 414 0 86 +410.58 ± 36.97
150749 ncm-et-9 2021-08-15 16:22 01:15:11 1960044 500 433 2 65 +452.04 ± 42.92
150748 ncm-et-4 2021-08-15 16:21 01:15:01 1971877 500 425 1 74 +433.94 ± 40.08
150747 ncm-et-3 2021-08-15 16:20 01:15:52 1955433 500 420 2 78 +419.61 ± 39.03
150746 ncm-et-10 2021-08-15 15:10 01:15:47 1961690 500 412 0 88 +406.2 ± 36.53
150745 ncm-et-15 2021-08-15 15:07 01:14:55 1963087 500 437 0 63 +468.95 ± 43.54
150744 ncm-et-9 2021-08-15 15:07 01:14:41 1965545 500 443 1 56 +484.25 ± 46.39
150743 ncm-et-13 2021-08-15 15:06 01:15:08 1966815 500 412 1 87 +404.05 ± 36.82
150742 ncm-et-4 2021-08-15 15:05 01:15:33 1960002 500 416 0 84 +415.04 ± 37.43
150741 ncm-et-3 2021-08-15 15:05 01:14:49 1965689 500 424 2 74 +429.05 ± 40.11
150740 ncm-et-10 2021-08-15 13:54 01:15:37 1956793 500 430 2 68 +444.09 ± 41.92
150739 ncm-et-9 2021-08-15 13:51 01:14:44 1966317 500 426 2 72 +433.94 ± 40.69
150738 ncm-et-13 2021-08-15 13:51 01:14:42 1965080 500 427 1 72 +438.95 ± 40.66
150737 ncm-et-15 2021-08-15 13:51 01:15:17 1963845 500 434 1 65 +457.52 ± 42.89
150736 ncm-et-3 2021-08-15 13:50 01:14:11 1967849 500 412 2 86 +401.92 ± 37.09
150735 ncm-et-4 2021-08-15 13:50 01:14:24 1968313 500 409 1 90 +397.72 ± 36.17
150734 ncm-et-10 2021-08-15 12:36 01:17:18 1955604 500 413 1 86 +406.2 ± 37.04
150733 ncm-et-13 2021-08-15 12:36 01:14:38 1959550 500 430 0 70 +449.35 ± 41.19
150732 ncm-et-15 2021-08-15 12:36 01:14:33 1961692 500 429 1 70 +444.09 ± 41.26
150731 ncm-et-9 2021-08-15 12:36 01:14:46 1960308 500 430 2 68 +444.09 ± 41.92
150730 ncm-et-3 2021-08-15 12:36 01:13:45 1967712 500 423 1 76 +429.05 ± 39.52
150729 ncm-et-4 2021-08-15 12:36 01:13:46 1961537 500 426 2 72 +433.94 ± 40.69

Commit

Commit ID d61d38586ee35fd4d93445eb547e4af27cc86e6b
Author Tomasz Sobczyk
Date 2021-08-15 10:05:43 UTC
New NNUE architecture and net

Introduces a new NNUE network architecture and associated network parameters.

The summary of the changes:

* The position for each perspective is mirrored such that the king is on the e..h files (a sketch of this mirroring follows the list). This cuts the feature transformer size in half while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer is increased two-fold, to 1024x2. This is possible mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8 to limit the speed impact. Perhaps surprisingly, this doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq.
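The e..h mirroring in the first bullet can be sketched in a few lines of Python (0-based square indexing with a1 = 0; the function name is illustrative, and this ignores the perspective handling in the real feature set):

def orient(king_sq: int, sq: int) -> int:
    # File index 0..3 means the king stands on files a..d, so mirror
    # every square horizontally (a<->h, b<->g, c<->f, d<->e).
    if (king_sq & 7) < 4:
        return sq ^ 7
    return sq

Because only 32 king squares remain distinct after this orientation, the king-square dimension of the feature set halves, which is where the feature transformer size reduction comes from.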
The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Additional zero padding is also added to the output for some architectures that cannot process inputs in chunks of <= 8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (which is perhaps why the current way the affine transform is done even passed SPRT; see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details), and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.

The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143.

The training utilized 2 datasets:

dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
dataset B - as described in https://github.com/official-stockfish/Stockfish/commit/ba01f4b95448bcb324755f4dd2a632a57c6e67bc

The training process was as follows:

1. Train on dataset A for 350 epochs and take the best net in terms of Elo at 20k nodes per move (it's fine to take anything from the later stages of training).
2. Convert the .ckpt to .pt.
3. Using --resume-from-model with the .pt file, train on dataset B for <600 epochs and take the best net. Lambda=0.8, applied before the loss function.

The first training command:

python3 train.py \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    ../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --max_epochs=600 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

The second training command:

python3 serialize.py \
    --features=HalfKAv2_hm^ \
    ../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
    ../nnue-pytorch-training/experiment_$1/base/base.pt

python3 train.py \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    ../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
    --gpus "$3," \
    --threads 1 \
    --num-workers 1 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --smart-fen-skipping \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=0.8 \
    --max_epochs=600 \
    --resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4
LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128

LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6

closes https://github.com/official-stockfish/Stockfish/pull/3646

bench: 5189338
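For context on the Lambda=0.8 used in the second training stage: in the nnue-pytorch trainer, lambda blends the eval-based training target with the actual game result before the loss is applied (--lambda=1.0 in the first command is a pure eval target). A minimal illustration; the function name and the centipawn-to-score scaling constant are assumptions on our part, not the trainer's exact code:

import math

def training_target(teacher_eval_cp: float, game_result: float,
                    lam: float = 0.8, scale: float = 410.0) -> float:
    """Blend an eval-based target with the game outcome, as lambda does.

    game_result is 1.0 / 0.5 / 0.0 for a win / draw / loss;
    `scale` is an assumed logistic scaling constant for illustration.
    """
    eval_score = 1.0 / (1.0 + math.exp(-teacher_eval_cp / scale))
    # lambda = 1.0 trains purely on the search eval, 0.0 purely on
    # game results; the second stage above uses 0.8.
    return lam * eval_score + (1.0 - lam) * game_result

Training first with lambda=1.0 and then resuming with lambda=0.8 thus shifts the net from purely imitating search evaluations toward also predicting game outcomes.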
Copyright 2011–2024 Next Chess Move LLC