Errata #1
Comments
[DONE] page 54: Figure 2.15: Baikinman, appylied sobel operator and thresholding -> applied
[DONE] page 219: Figure 4.23: White to move. There is no reasonable alternative to Bb3 that any
p7 [DONE] "This book is brief"
[DONE] "romantic"
"referring"
p8 [DONE] "a chess players perspective"
[DONE] "44... Rd1" p9 [DONE] "According to Hsiu"
p11 [DONE] "and Ananad,"
p12 [DONE] "computer scientists perspective,"
p13 [DONE] "Based these evaluations,"
p14 [DONE] "•find a number of candidate moves. Then for each of them sequentially"
[DONE] "focus on one specific lines, then decide by instinct"
p15 [DONE] "have computers plagued"
[DONE] "to processes and judge"
p16 [DONE] "But how do become good player so good at chess?"
I hope that's helpful. Marek Soszynski
page 31: dnet1/dw1 = ... = 1. And thank you very much for the book!
(all fixed in v1.2)
Page 12, 14: Hendrik's = Hendriks' (Willie's surname is Hendriks, see also https://grammar.yourdictionary.com/punctuation/apostrophe-rules.html for more info on the use of the apostrophe)
Page 18: HexapwanZero = HexapawnZero
Very interesting and informative book on the use of neural nets in Computer Chess.
page 256: "instead we desire to select the index 0 only in about 10 percent of" should read: index 0, ... index 1, ... index 2
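(For context, sampling an index in proportion to given probabilities could look like the sketch below; the probabilities here are illustrative, not the book's exact values.)

```python
import random

# Illustrative move probabilities; index 0 should come up in about 10 percent of cases.
probs = [0.1, 0.3, 0.6]

# random.choices draws an index with probability proportional to its weight.
index = random.choices(range(len(probs)), weights=probs, k=1)[0]
print(index)
```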
page 262: it’s that alpha-beta searcher[s]! will prevail
(all fixed in v1.3)
page 71, line 3
page 62, last line of paragraph three
page 28: should be (0.95 − 0)^2 instead of (0.95 − 1)^2 |
page 30: E_global should be E_total. |
page 79: of tactical threads. -> threats |
page 97: If you do not implement chess knowledge in the implementation function -> evaluation function |
page 99: 37.Be4 (space!) |
p148 In contrary, the rather universal gradient policy rein- |
p212: agrees while also searching with around 7,500,000 nodes per
p213: Let’s again check how
p215: about a move he got the replay "Nah you just don’t play like that". The idea to use neural networks to automatically construct neural networks -> construct evaluation functions
p218: we are playing with an handicap of say
p226: It’s a very basic mate threat that most human -> There is a very basic....
p. 65, par. 4, line 3: could be that
p. 69, 4th line above bottom: common themes
p. 70, third line from bottom: all required network elements, there.
p. 73, par. 3, line 2: none instead of noone? I am not sure
p. 96, line 2, and in many other places in the text: w.r.t. to
p. 155: the two formulas that give 1.2 and 1.0 as the result use wrong values (not the values used in Figure 4.8). The first formula should be 0.6 * (sqrt(1+1)/2) = 0.42, and 0.5 + 0.42 = 0.92. Since 0.92 is still greater than 0.89, the reasoning doesn't change.
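(To double-check that arithmetic, the snippet below recomputes the corrected term, assuming an AlphaZero-style PUCT rule with prior 0.6, a parent visit count of 1, and a child visit count of 1; the variable names are mine, not the book's.)

```python
import math

q = 0.5        # value estimate of the child node
prior = 0.6    # network prior probability for the move
n_parent = 1   # visit count of the parent node
n_child = 1    # visit count of the child node

# Exploration bonus: prior * sqrt(n_parent + 1) / (1 + n_child) = 0.6 * sqrt(2) / 2
u = prior * math.sqrt(n_parent + 1) / (1 + n_child)
print(round(u, 2))       # 0.42
print(round(q + u, 2))   # 0.92 -> still greater than 0.89
```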
(all fixed up to here), #todo: release new pdf
P204 & P205 |
Everywhere else, as far as I can tell, the "E" in the acronym NNUE stands for the word "efficiently" (example). But in this book, the word "effectively" is used. |
(all fixed in v1.5) |
page 185, "will be": "These games will naturally of very poor quality initially and it wi" |
p185: ... rethink it ... instead of ... summarize what they investigated and re-think of it
p205: ... have think carefully should be have to think carefully
p204: ... on rather low-end office computer -> on a rather ...
p216: cite official-stockfish/Stockfish#2916
p232: Hexapawn is a solved game,
p219: the difference
Excellent book! Found in version 1.5:
page 24: I think it should be: w_0 = -10
page 31: I think the general rule of thumb for updating weights needs index i for the denominator:
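(Presumably the intended rule, with the index written out, is something like the following; my rendering, the book's notation may differ.)

```latex
w_i \leftarrow w_i - \alpha \, \frac{\partial E_{\text{total}}}{\partial w_i}
```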
p8: DeepBlue should be Deep Blue
First of all: GREAT STUFF! Thank you very much!! What about page 198, "enemy king on e8, own knight on c3 from 1 to 0"?! I am not sure, but
I would expect "enemy king on e8, own knight on c3 from 0 to 1". Same for "own king on e8, enemy knight on c3 from 1 to 0"?!
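(If that reading is correct, the incremental NNUE update for a knight moving away from c3 would switch the feature for the vacated square off and the feature for the destination square on. A minimal sketch of that bookkeeping, with toy sizes and a made-up indexing scheme, purely for illustration and not the book's code:)

```python
import numpy as np

HIDDEN = 4                 # toy hidden-layer width
N_FEATURES = 64 * 10 * 64  # (king square, piece type, piece square) combinations

rng = np.random.default_rng(0)
weights = rng.standard_normal((N_FEATURES, HIDDEN))  # first-layer weight rows
accumulator = np.zeros(HIDDEN)  # sum of the weight rows of all active features

def feature_index(king_sq, piece, piece_sq):
    # Toy indexing; real NNUE (HalfKP) indexing is more involved.
    return (king_sq * 10 + piece) * 64 + piece_sq

# Knight moves c3 -> d5 while the relevant king stays on e8 (example squares):
KING_E8, KNIGHT, C3, D5 = 60, 1, 18, 35
accumulator -= weights[feature_index(KING_E8, KNIGHT, C3)]  # c3 feature: 1 -> 0
accumulator += weights[feature_index(KING_E8, KNIGHT, D5)]  # d5 feature: 0 -> 1
```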
Thanks for this great resource! I went through Ch. 2 (Back-Propagation and Gradient Descent), and I think there might be an issue with the partial derivatives (p. 30). It should be (for [...]). The corresponding calculations might need to be re-worked as well.
p62: There is one large and deep neural neural network. (neural twice)
p172: formatting of nxf6 in bullet point (should use \mathrm)
Figure 4.12: "convolution" on the left: font is too small
(all fixed in version 1.6)
Page 118. In the formula, Vi must be at the bottom and Vparent at the top, shouldn't it?
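(That matches the usual UCT formulation, with the parent's visit count inside the logarithm on top and the child's visit count below; my rendering with generic symbols, not the book's exact notation:)

```latex
UCT_i = \text{value}_i + C \sqrt{\frac{\ln V_{\text{parent}}}{V_i}}
```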
Page 198:
This should rather read: without really bad car analogies
Page 39. The formula for the softmax function does not have an e in the denominator.
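(For reference, the standard softmax has e in both the numerator and the denominator:)

```latex
\operatorname{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}}
```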
Page 16:
S/B "If we listen to Grandmasters' advice"
S/B "Look carefully at the mistakes" Page 23:
S/B "and lose." |
Page 30: In the chain rule for computing partial derivatives, the last term has delta-w-sub-i in the denominator. S/B "delta-w-sub-1"
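(Spelled out, the corrected chain for w_1 would presumably read as follows; my rendering, the book's symbols may differ slightly:)

```latex
\frac{\partial E_{\text{total}}}{\partial w_1}
= \frac{\partial E_{\text{total}}}{\partial \text{out}_1}
\cdot \frac{\partial \text{out}_1}{\partial \text{net}_1}
\cdot \frac{\partial \text{net}_1}{\partial w_1}
```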
several "loose" and "looses" that I think should be "lose" or "loses" on pages 23, 40, 96, 140, 210, 217 |
p.121 and mcts.py sample code: In In
First of all, thanks a lot for writing this great book. A couple of comments regarding section 4.6 about NNUEs:
Besides, it would maybe make sense to add some more remarks to the code. For example, chapter 5, listing 5.18: it would make sense to remark that the
page 83: As we can see, the evaluation function give_s_ a
page 226/227: no mention of black's 4th move.
page 11: "Silman explains how to characterizes a position by imbalances, i.e. advantages and disadvantages that characterize the position" should be: Silman explains how to characterize
If you spot mistakes, please leave a note as a reply to this issue.