Leela promotes to SuperFinal of TCEC! She will face Stockfish.

Leela has just made it into the Superfinal of the TCEC tournament!
There, she will face Stockfish in a 100-game match for the title of TCEC champion.

As always with Leela, it was a dramatic promotion at the last moment, in the very last game, where Stockfish missed a win, to the relief of Leela's fans. The win, of course, was not easy to find.

So, approximately 10 months after the first Leela nets were born, Leela has managed to break the dominance of the so-called "big 3" of the computer chess world (Stockfish, Houdini and Komodo), taking 2nd place ahead of Komodo and Houdini and advancing to the Superfinal.


Leela in December

Quick recap

Remember to consult the glossary if you find some terms confusing.

With Test20 being suspended on November 16th, we started December with Test30 as the only game in town. Remember that Test30 was "Test10 without the bugs" and "Test20 with policy sharpening". Test20's high CPUCT value (5.0) had never really worked, and Test10's low setting was deemed too low. CPUCT is a parameter that influences how likely the search is to try something new vs. something it knows works, and it was one of the crucial details missing from the original DeepMind paper. Test30 also used 5.0, but with a technique called policy sharpening to counteract the negative effects of a high CPUCT.

At the beginning of December Test30 had been stable for a while, and the devs agreed to experiment with parameters, starting with CPUCT. For the nitty-gritty details, consult the #dev-log channel on Discord, where every parameter change is recorded together with a short reasoning behind it. These experiments would lead to some weeks of lots of new knowledge at the cost of only small Elo gains.

Notable new functionality

DeepMind paper

Then, on December 6th, DeepMind released a new version of their paper. This sparked frantic activity amongst the devs. The most important new information was:

  • The CPUCT used was 2.5, plus more details on the formula used.
  • DeepMind set temperature to 0 after 15 moves (from both players), ensuring only the best alternative was selected from then on. Leela had used a constant temperature throughout the game, trying to find something that gave diversity during the opening without blundering too much in the endgame. Temperature settings are the main culprit behind Leela's ... sub-optimal ... endgame play.
  • First Play Urgency, FPU, was revealed to be "assume any move you haven't evaluated is losing". Leela had until now tried to estimate this value based on the parent node's evaluation.
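The FPU difference can be sketched like this. This is an illustrative toy, with a made-up `reduction` constant; Lc0's real implementation differs:

```python
def fpu_value(parent_q, strategy="absolute", reduction=0.2):
    # Value assigned to a child node that has never been visited.
    # "absolute": the AlphaZero behaviour revealed by the paper --
    #   treat any unexplored move as lost (-1).
    # "reduction": roughly Leela's older approach -- start from the
    #   parent's eval and subtract a small penalty. The value 0.2 here
    #   is purely illustrative, not an Lc0 default.
    if strategy == "absolute":
        return -1.0
    return parent_q - reduction
```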

The paper launched a range of experiments lasting roughly until December 17th. We learned that:

  • "Policy sharpening is bad"
  • "AlphaZero parameters are good"
  • Changing parameters mid-run gives results that might be hard to interpret.

A more in-depth blog post on the paper and its impact was published earlier.

Stockfish 9

Cscuile reported in our forum that net 32406 is able to beat Stockfish 9. The post links to a spreadsheet showing 32406 with a higher Elo estimate than SF9 on 4 cores. 32425 is reported as having an even higher Elo.


Parallel to all of this, TCEC was going on. 11248 had been cruising through all the lower divisions, and Leela was certain to qualify for Premier Division around December 24th. (Premier Division is still ongoing, with Leela in 2nd place after 25/42 rounds.) 11248 is an old net; from a make-the-best-chess-AI viewpoint, Leela had not made any progress for almost six months. This was beginning to dampen morale, and we saw a drop in the number of contributors. The decision was made to hold off on further experimentation, make the best Test30 net possible, and hopefully send it to TCEC.

A solid indicator of how far into a training run one has come is the number of Learning Rate drops (LR drops). Each training run typically has 3-4, and Test30's first LR drop was November 1st. The second would have happened already if not for all the experimentation. Thus the experimentation phase ended with the second LR drop on Dec 19th. The race was on to produce a new best net before the deadline. The drop gave immediate results, and eventually 32194 was sent to Premier Division after a community vote on Discord. The number of contributors started to rise again.


On December 10th Tilps started Test35, a small net with 10 blocks, to test whether the new SE implementation works, which it seems to do. Test35 is not expected to produce a new best net. The self-Elo graph can be seen at http://lczero.org/training_run/1 .

Status right now

Test35 and Test30 are going in parallel. Test30 is not going away, even though Test35 is receiving the bulk of contributions. Test35 will eventually give way to Test40, so that both Test30 and Test40 will continue training. A contributor can choose which test net to contribute to, or be auto-assigned by the devs (which is the default), by using the '-run <num>' parameter: 0 means auto-assign, 1 is Test35 and 2 is Test30.

Want to contribute?

Great! Please start with our guides and remember that both the forum and Discord channel #help are eager to help.


Lc0 vs GM Adam Tukhaev on Lichess

Not everyone knows, but recently there was a match between Lc0 and GM Daniel Naroditsky on Lichess.
For those who missed it, here is a recording of the stream on Twitch and a Lichess blog entry about the event.

In two days, Leela is playing with another grandmaster, this time it's GM Adam Tukhaev!

When: Jan 6th 19:00 UTC (see your local time here).
Time control: 3+2, with a mix of bullet at 1+1 and 1+0.
Lichess handles: almostadams and LeelaChess.

Leela will be running on a CPU (i5-6600K @ 4.1 GHz, without a GPU) and will give Adam piece odds. If Adam finds it too easy, Leela will play with equal pieces.


Leela versus Stockfish on Lichess is coming....

Lichess.org will host a match between the mighty Stockfish 10 and Leela. It will be a 6-game match with a time control of 5'+2", with ChessNetwork commentary.
The games will be played on December 15th at 17:00 UTC.

Stockfish 10 will run on a 64-core 2.3 GHz Xeon, while Leela will use the latest Lc0 v0.19.1 with the 11248 network, running on one GTX 1080 Ti plus one RTX 2080 GPU.

It will be played with the @LeelaChess and @Stockfish10Chess accounts so follow one of these to see the match.

The official announcement.

The games of the match and more details will be available in an updated post here.


TCEC Season 14. Leela promoted from 3rd division to div2....

Leela's big journey towards the Premier Division of TCEC has started!
TCEC season 14 has been running for the last couple of weeks. Leela participated in its 3rd division, easily finishing in first place, and now plays in the 2nd division, trying to promote to the 1st division.


AlphaZero paper, and Lc0 v0.19.1

As everyone has already heard, DeepMind has published a detailed paper on AlphaZero!

The announcement can be found here. Scroll down the announcement to get links to the full paper text as well as supplementary materials (including PGNs of games and training pseudocode).

The paper contains additional details that were missing from the original preprint from one year before. Some aspects were implemented in Leela differently from AlphaZero, and I'm sure we'll find some more.

Differences found

So, what differences have we found so far? Here is the list!
  • In training games, only the first 15 moves (30 ply) are generated with temperature randomness.
    To explore more possibilities during training games, randomness (including random blunders) was added to the training. The paper preprint said this happens for all moves. The final paper also says so, but if you look into the pseudocode, it turns out that it's only applied during the first 15 moves!
    Training new networks with the 15-move temperature setting will possibly help us improve endgame play: Leela will no longer wait for the opponent to blunder while holding too high an eval in drawn positions.
  • When playing against Stockfish, AlphaZero used a new technique to ensure game diversity.
    For the first 15 moves, AlphaZero picked a random move with an eval within 1% of the best move's eval. Surprisingly, that improved AlphaZero's winrate in those games.
    We can try that too!
  • The action space turned out to be 0..1, not -1..1.
    That's more of a technical detail than something that changes the algorithm. In the AlphaGo paper, a loss was encoded as 0 and a win as 1. When the AlphaZero preprint came out, they wrote that they changed MCTS action values to -1 for a loss, 0 for a draw and 1 for a win. But in the end it turned out that this understanding wasn't correct: a loss is still 0, and a draw is 0.5.
    As I mentioned, it doesn't change the algorithm. However, it changes the meaning of some constants from the paper.
  • Cpuct is not a constant.
    CPUCT indicates what the balance should be between exploration and exploitation in the search algorithm. It turns out that this "constant" is not constant at all: the value grows as the search progresses!
    We had plans to do something along those lines, as there were problems seemingly caused by a constant Cpuct. Namely, at large node counts Leela would often get stuck on one move and never switch.
  • The First Play Urgency value is known now. It's -1!
    FPU is a fancy name for the eval of a node that has never been visited. We used a value based on the parent node (assuming that the eval of the children is roughly the same as the parent's). It turns out that AlphaZero just considered unvisited nodes as lost (though with very little confidence).
  • When training a new network, positions from the last 1 000 000 games are used.
    We had used the last 500 000 games so far, as that was the number mentioned in previous papers.
  • DeepMind generated new networks 4 times less frequently than we do.
    We had worried that we generated them too rarely, but it turns out we were fine; in fact, it's fine to have 4 times fewer networks per day.
  • The network architecture has differences.
    See here for the context.
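The growing Cpuct can be sketched directly from the paper's formula, C(s) = c_init + log((1 + N(s) + c_base) / c_base), shown here with the extra multiplier Lc0 later exposes as cpuct-factor. The default values below are illustrative; whether Lc0 applies the formula exactly this way internally is not guaranteed by this snippet:

```python
import math

def growing_cpuct(parent_visits, cpuct_init=2.5, cpuct_base=19652.0,
                  cpuct_factor=2.0):
    # C(s) = c_init + factor * log((N(s) + c_base + 1) / c_base).
    # For small searches this is essentially c_init; it only starts
    # to grow noticeably once the visit count reaches the tens of
    # thousands, pushing the search to explore more.
    return cpuct_init + cpuct_factor * math.log(
        (parent_visits + cpuct_base + 1) / cpuct_base)
```

The large cpuct-base (19652) is why the growth only matters in long searches, exactly the regime where Leela used to get stuck on one move.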


What do those findings mean for us?

We want to experiment with new settings in play and training, so we are urgently releasing a new version, Lc0 v0.19.1 (as a release candidate today; the full release will happen in the next days), where we add the missing parameters. There are lots of parameters, and many of them are expected to be renamed/rethought for version v0.20. So, please welcome the new parameters:

  • --temp-cutoff-move=X
    After move number X, temperature will be fixed to what is set in the --temp-endgame flag.
    To reproduce the a0 vs sf8 match, set this to 16.
  • --temp-endgame
    See above for the meaning. This parameter is mostly exposed for training experiments. Default is 0, and it makes sense to keep it like that for play.
  • --temp-value-cutoff=X
    Only moves with an eval within X percentage points of the best move are considered during the temperature pick.
    Set to 1.0 to reproduce the a0 vs sf8 match.
  • --temperature
    This is an old flag, but set it to 10.0 to reproduce the settings of the a0 vs sf8 match.
  • --fpu-strategy
    Default is "reduction", old way of handling first play urgency. Set to "absolute" to play like AlphaZero!
  • --fpu-value=X
    Only used in "absolute" FPU mode. -1.0 is the default, and that's what DeepMind used.
  • --cpuct
    This used to be a constant, and it was equal to 3.4 for quite a long time in Lc0.
    The correct value from AlphaZero is 2.5, but it slows down nps (we will investigate why), so for now the default is 3.0.
  • --cpuct-base
    That's the factor which defines how Cpuct grows. The value from the DeepMind paper is 19652, and that's now the default.
  • --cpuct-factor
    That's the multiplier of the growing part of Cpuct. The default value is now 2, which is what DeepMind used (well, they didn't have that factor, but as our action space is 2 times larger, we have to scale this parameter).
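To illustrate how --temperature and --temp-value-cutoff interact, here is a rough sketch of the selection step. The data layout is hypothetical, Lc0's real sampling code differs, and a temperature of exactly 0 would simply mean "always pick the most-visited move":

```python
import random

def pick_with_temperature(moves, temperature=1.0, value_cutoff=1.0):
    # moves: list of (visit_count, eval_percent) pairs for the root moves.
    # Step 1: keep only moves whose eval is within `value_cutoff`
    # percentage points of the best move (the --temp-value-cutoff idea).
    best = max(e for _, e in moves)
    cands = [(n, e) for n, e in moves if best - e <= value_cutoff]
    # Step 2: sample among the survivors with probability proportional
    # to visits^(1/T). A high T flattens the distribution; as T -> 0
    # the pick tends towards the most-visited move.
    max_n = max(n for n, _ in cands)
    weights = [(n / max_n) ** (1.0 / temperature) for n, _ in cands]
    r = random.random() * sum(weights)
    for cand, w in zip(cands, weights):
        r -= w
        if r <= 0:
            return cand
    return cands[-1]
```

With the a0-vs-sf8 settings (cutoff 1.0, high temperature), near-equal moves are picked almost uniformly while clearly worse moves are never considered.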

Those parameters will appear in today's release candidate v0.19.1-rc2, which will be available for download here. (Yesterday there was already v0.19.1-rc1 which had one new parameter, but rc2 will have more!)

Note that most of those parameters probably won't have an immediate useful effect. For them to be useful, new networks have to be trained using those parameters.

Also, all those parameters were added into RC2 in a bit of a hurry. It's very probable there will be an RC3 with fixes for bugs that I just introduced. If you see a bug, please report it!


Lc0 v0.19.0 has been released.

v0.19.0 is finally out of "release candidate" status and is now fully released!
It has been quite a long bugfixing run with 5 release candidates, but now all known issues seem to be resolved.

It can be downloaded here.

For the list of differences relative to v0.18, see the post for v0.19.0-rc1.

For people contributing training games, there's no need to rush to upgrade, it's fine to use v0.18.


Where to play Leela online?

The play.lczero.org website, where everyone could quickly play Lc0 online, has been down pretty often recently.

But even while it doesn't work, there are some options to play Leela online.

The easiest way is to play on Lichess.
There is, for example, a bot called LeelaChess; it is the very first Lichess bot.
There are also other bots of different configurations and strengths. Check the all-the-leelas Lichess team and pick one that is online. You are also welcome to host your own Leela and join that team.

If you know other ways to play Leela online (FICS, etc.), please reply in the comments and I'll add them to this post.

And of course you can always download Leela and set it up to play locally. This blog post describes how to do that.

UPD: Kontrachess has a way to play with LCZero. It seems to be a very nice-looking site! I have not tried it myself though. (Initially I thought it was a paid site, but one of the site representatives said in the comments below that it's actually free.)

UPD2: NextChessMove also has a number of options, including different networks of Lc0. From what I can see, it is similar to what play.lczero.org was. It takes some time to get a move from the free version, but the paid version is probably faster (again, I don't know anyone who has tried that).


Lc0 v0.19.0-rc1 (UPD: rc2) has been released.

A release candidate of the new Leela version has been released:

UPD: we are releasing v0.19.0-rc2 immediately, as due to a mistake in the release procedure rc1 reported its version as v0.19.0-dev rather than v0.19.0-rc1.

We expect the testing phase to last around 7-10 days, after which the proper v0.19.0 will be released.

Download here. Please test it thoroughly and report any bugs that you find.
Note: cuDNN builds for Windows are now compiled with CUDA 10. You may need to update your GPU driver to run them.

Please don't use release candidates to generate training games. We only use stable versions for that.

What's new:

Search algorithm changes

When visiting terminal nodes and collisions, instead of counting each as one visit, Leela estimates how many subsequent visits will also go to the same node and does a batch update.
That should slightly improve nps near terminal nodes and in multithreaded configurations. The command line parameters that control this:
  • --max-collision-events – number of collision events allowed per batch. Default is 32. This parameter is roughly equivalent to --allowed-node-collisions in v0.18.
  • --max-collision-visits – total number of estimated collisions per NN batch. Default is 9999.

Time management

  • Multiple changes have been made so that Leela tracks used time more precisely (in particular, the moment the timer starts is now much closer to the moment GUIs start their timer).
  • For smart pruning, Leela's timer only starts when the first batch comes back from NN eval. That should help against instamoves, especially on uneven GPUs.
  • Leela also stops the search more quickly now when it sees that time is up (it could previously continue the search for hundreds of milliseconds, which caused time trouble if the opponent moved very fast).
Those changes should help a lot in ultra-bullet configurations.

Better logging

Much more information is now written to the log file. That will allow us to diagnose problems more easily when they occur. To have a debug file written, pass the short command line option "-l /path/to/logfile" (or the corresponding UCI option "LogFile").
It's recommended to always have logging on, to make it easier to report bugs when they happen.

Configuration parameters change

A large part of the parameter handling has been reworked. As a result:
  • All UCI parameters have been changed to have a more "classical" look.
    E.g. "Network weights file path" became "WeightsFile".
  • Much more detailed help than before is shown when you run
    ./lc0 --help
  • Some flags have been renamed, e.g.
    is renamed back to
  • After setting a parameter (via a command line parameter or the uci "setoption" command), the "uci" command shows the updated result. That way you can check the current option values.
  • Some command-line and UCI options are hidden now. Use --show-hidden command line parameter to unhide them. E.g.
    ./lc0 --show-hidden --help

Also, in selfplay mode the per-player configuration format has been changed (although probably no one knew about it anyway):
Was: ./lc0 selfplay player1: --movetime=14
Became: ./lc0 selfplay --player1.movetime=14


  • The "go depth X" uci command now causes the search to stop when the depth information in the uci info line reaches X. Not that it makes much sense for it to work this way, but at least it's better than nothing.
  • Network file size can now be larger than 64MB.
  • There is now an experimental flag --ramlimit-mb. The engine tries to estimate how much memory it uses and stops the search when the tree size (plus cache size) reaches the RAM limit. The estimation is very rough. We'll see how it performs and improve the estimation later.
    In situations when the search cannot be stopped (`go infinite` or ponder), `bestmove` is not automatically output. Instead, the search stops progressing and outputs a warning.
  • Benchmark mode has been implemented. To run it, use the following command line:
    ./lc0 benchmark
    This feature is pretty basic in the current version, but will be expanded later.
  • As Leela plays much weaker in positions without history, it is now able to synthesize history so that it does not blunder in custom FEN positions. There is a --history-fill flag for this. Setting it to "no" disables the feature, setting it to "fen_only" (the default) enables it for all positions except the chess start position, and setting it to "always" enables it even for startpos.
  • Instead of outputting the current win estimation as an approximate centipawn score, Leela can now show its raw score. The flag that controls this is --score-type. Possible values:
    • centipawn (default) – approximate the win rate in centipawns, like Leela always did.
    • win_percentage – a value from 0 to 100.0 which represents the expected score as a percentage.
    • Q – the same, but scales from -100.0 to 100.0 rather than from 0 to 100.0
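Since win_percentage and Q are the same quantity on different scales, converting between them is a straight linear rescale. A tiny sketch (the centipawn display is a separate, engine-specific approximation not reproduced here):

```python
def q_to_win_percentage(q):
    # Map a raw Q score in [-100.0, 100.0] to the expected score
    # in [0, 100.0]: -100 -> 0%, 0 -> 50%, 100 -> 100%.
    return (q + 100.0) / 2.0
```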


Lc0 training.

If you are new to Leela (Lc0) chess and have begun contributing games, either using Google Cloud or some other online service or your own home computer, you may be wondering where all those games go and how the training of Leela happens.


Leela beats Fire promoting to Semi-Final of TCEC Cup!

Leela, in classic drama style, has promoted to the TCEC Cup Semi-Finals and will face Stockfish today!
Meanwhile, in the CCCC blitz tournament she is still in 3rd place, ahead of Komodo, Ethereal and Fire, and behind Stockfish and Houdini.