Update default main net to nn-1111cefa1111.nnue
Created from two distinct SPSA tunes of the latest main net (nn-31337bea577c.nnue),
with the tuned params applied to the prior main net (nn-e8bac1c07a5a.nnue). This
effectively reverts the modifications to the output weights and biases made in
https://github.com/official-stockfish/Stockfish/pull/5509
SPSA:
A: 6000, alpha: 0.602, gamma: 0.101
1st - 437 feature transformer biases with absolute values < 25
54k / 120k games at 180+1.8
https://tests.stockfishchess.org/tests/view/66af98ac4ff211be9d4edad0
nn-808259761cca.nnue
2nd - 208 L2 weights where values are zero
112k / 120k games at 180+1.8
https://tests.stockfishchess.org/tests/view/66b0c3074ff211be9d4edbe5
nn-a56cb8c3d477.nnue
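For reference, A, alpha, and gamma above are the usual SPSA gain-schedule
constants. A minimal sketch of the generic schedule they control (Spall's
formulation, with hypothetical initial sizes a0/c0; fishtest's exact internals
may differ):
```
# Generic SPSA gain schedules; only an illustration of what A, alpha, gamma do.
A, alpha, gamma = 6000, 0.602, 0.101
a0, c0 = 1.0, 1.0  # hypothetical initial step / perturbation sizes

def gains(k):
    a_k = a0 / (A + k + 1) ** alpha  # step size of the parameter update at iteration k
    c_k = c0 / (k + 1) ** gamma      # size of the +/- perturbation probes at iteration k
    return a_k, c_k

for k in (0, 10_000, 60_000, 120_000):
    print(k, gains(k))
```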
When creating the above 2 nets (nn-808259761cca.nnue, nn-a56cb8c3d477.nnue),
SPSA params were unintentionally applied to nn-e8bac1c07a5a.nnue rather
than nn-31337bea577c.nnue due to an issue in a script that creates nets
by applying SPSA results to base nets.
Since both tuned nets passed STC and were neutral or slightly positive at LTC,
they were combined to see whether the Elo gains from each set of params were additive.
The 2 nets can be merged on top of nn-e8bac1c07a5a.nnue with:
https://github.com/linrock/nnue-tools/blob/90942d3/spsa/combine_nnue.py
```
python3 combine_nnue.py \
  nn-e8bac1c07a5a.nnue \
  nn-808259761cca.nnue \
  nn-a56cb8c3d477.nnue
```
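combine_nnue.py is linked rather than described here; a plausible reading of
"merged on top of" the base net is that each tuned net differs from
nn-e8bac1c07a5a.nnue only in the parameters its SPSA run touched, so the
combined net is the base plus both sets of deltas. A toy sketch of that
arithmetic (stand-in arrays, not the real script):
```
import numpy as np

# Toy stand-ins for the three nets' quantized parameters; in reality these
# come from the .nnue files (combine_nnue.py handles the actual format).
base   = np.array([10, 20, 30, 40], dtype=np.int32)  # nn-e8bac1c07a5a.nnue
tune_a = np.array([12, 20, 30, 40], dtype=np.int32)  # FT-bias tune applied to base
tune_b = np.array([10, 20, 27, 40], dtype=np.int32)  # L2-weight tune applied to base

# Each tune changes a disjoint subset of parameters, so adding both deltas
# onto the base applies both sets of SPSA results at once.
combined = base + (tune_a - base) + (tune_b - base)
print(combined)  # [12 20 27 40]
```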
Merging yields nn-87caa003fc6a.nnue, which was renamed to nn-1111cefa1111.nnue
with an updated nnue-namer that is around 10x faster than before by:
- using a prefix trie for efficient prefix matches
- modifying 4 non-functional bytes near the end of the file instead of 2
https://github.com/linrock/nnue-namer
Thanks to @MinetaS for pointing out in #nnue-dev what the non-functional bytes are:
L3 is 32: 4 bytes for biases, 32 bytes for weights (fc_2).
So -38 and -37 are technically -2 and -1 of fc_1 (type AffineTransform<30, 32>).
And since InputDimensions is padded to 32, there are in total 32 pairs of 2 adjacent padding bytes.
So yes, it's non-functional whatever values are there.
It's possible to tweak the bytes at -38 - 32 * N and -37 - 32 * N for N = 0 ... 31.
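Putting the above together, renaming amounts to a brute-force search over those
padding bytes: set candidate values, hash the file, and check whether the first
12 hex chars of the SHA-256 form an acceptable name. A rough sketch of the idea
(a plain set of target names stands in for nnue-namer's prefix trie, and the
byte offsets assume the layout quoted above):
```
import hashlib
from itertools import product

# Toy stand-in for nnue-namer's prefix trie: a set of acceptable 12-hex-char names.
# With only 4 free bytes (2^32 candidates), one fixed 48-bit name is rarely
# reachable; matching against many candidate names via a trie is what makes
# hits practical.
TARGETS = {"1111cefa1111"}

def rename_search(data: bytes, pairs: int = 2):
    buf = bytearray(data)
    # Padding byte pairs sit at offsets -38 - 32*N and -37 - 32*N for N = 0..31
    # (per the note above); tweak `pairs` of them here, i.e. 2*pairs bytes.
    positions = [off for n in range(pairs) for off in (-38 - 32 * n, -37 - 32 * n)]
    for values in product(range(256), repeat=len(positions)):
        for pos, val in zip(positions, values):
            buf[pos] = val
        prefix = hashlib.sha256(buf).hexdigest()[:12]
        if prefix in TARGETS:
            return bytes(buf), prefix
    return None
```
The exhaustive loop is only to show which bytes can change without affecting
evaluation; the real tool linked above is much smarter about the search.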
The net renamed with the new method passed non-regression STC vs. the original net:
https://tests.stockfishchess.org/tests/view/66c0f0a821503a509c13b332
To print the SPSA params with nnue-pytorch:
```
import features
from serialize import NNUEReader

feature_set = features.get_feature_set_from_name("HalfKAv2_hm")
with open("nn-31337bea577c.nnue", "rb") as f:
    model = NNUEReader(f, feature_set).model

# feature transformer biases with small absolute quantized values
c_end = 16
for i, ft_bias in enumerate(model.input.bias.data[:3072]):
    value = int(ft_bias * 254)
    if abs(value) < 25:
        print(f"ftB[{i}],{value},-1024,1024,{c_end},0.0020")

# L2 weights that quantize to zero
c_end = 6
for i in range(8):
    for j in range(32):
        for k in range(30):
            value = int(model.layer_stacks.l2.weight.data[32 * i + j, k] * 64)
            if value == 0:
                print(f"twoW[{i}][{j}][{k}],{value},-127,127,{c_end},0.0020")
```
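Each printed line is in the format fishtest expects for SPSA tuning parameters:
name, start value, min, max, c_end, r_end.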
New params found with the same method as:
https://github.com/official-stockfish/Stockfish/pull/5459
Passed STC:
https://tests.stockfishchess.org/tests/view/66b4d4464ff211be9d4edf6e
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 136416 W: 35753 L: 35283 D: 65380
Ptnml(0-2): 510, 16159, 34416, 16597, 526
Passed LTC:
https://tests.stockfishchess.org/tests/view/66b76e814ff211be9d4ee1cc
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 159336 W: 40753 L: 40178 D: 78405
Ptnml(0-2): 126, 17497, 43864, 18038, 143
closes https://github.com/official-stockfish/Stockfish/pull/5534
bench 1613043