Lc0 v0.23.0 has been released

Update: we have a bugfix release, v0.23.1; download here. It contains no chess-play-related changes; the bug was in the training game generation code.

We have a new release today!
Download here.

What's new in the release:
  • Support for Fischer Random Chess (the UCI_Chess960 option enables FRC-style castling). Also added support for FRC-compatible weight files, but there is no training code yet.
  • New option --logit-q (UCI: LogitQ). It changes the subtree selection algorithm a bit, in the hope of winning won positions faster (experimental, off by default).
  • Lc0 now reports a WDL score. In addition to the position score in centipawns, it also shows the probabilities of winning, drawing, and losing. To enable it, use the --show-wdl command-line argument or the UCI_ShowWDL UCI option.
  • A DNNL BLAS build is included in addition to OpenBLAS. It should be faster on modern CPUs (on older CPUs, OpenBLAS is still faster).
  • The OpenCL package now contains a check_opencl tool that detects faulty OpenCL hardware or drivers.
  • Added a "badgame split" mode during training: when the engine makes an inferior move due to temperature, the game is branched, and later the game is replayed from the position of the branch.
  • Added an experimental --short-sightedness (UCI: ShortSightedness) parameter. It treats longer variations as more "drawish". If you want to try it, values around 0.02 are a good first guess.
  • Lc0 can now open Fat Fritz weight files.
  • Time management code refactoring. No functional changes, but it will make future time management changes easier.
  • Lc0 logo is now printed in red when Lc0 is run from command line! \o/
  • Command line argument -v is now short for --verbose-move-stats.
  • Errors in --backend-opts parameter syntax are now reported.
  • The most basic version of the "certainty propagation" feature (actually without the "propagation" part). If the engine sees a checkmate in one, it plays it! (Before, it could play another good move instead.)
  • In benchmark mode, smart pruning is now disabled to make benchmark runs more consistent.
  • Various small changes: hidden options to control Dirichlet noise, floating-point optimizations, better error reporting when there is an exception in a worker thread, and better error messages in the CUDA backend.
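To make the new WDL reporting concrete, here is a minimal Python sketch (an illustration, not Lc0's actual implementation) of how (W, D, L) probabilities collapse into the two scalar evaluations engines commonly report:

```python
def wdl_to_score(win, draw, loss):
    """Collapse (W, D, L) probabilities into two scalar views:
    expected game points and a Q value in [-1, 1].
    Illustrative sketch only; not Lc0's actual code."""
    total = win + draw + loss
    win, draw, loss = win / total, draw / total, loss / total
    expected_points = win + 0.5 * draw   # 1 point for a win, 0.5 for a draw
    q = win - loss                       # zero-sum value in [-1, 1]
    return expected_points, q
```

For example, a position evaluated as 40% win, 50% draw, 10% loss corresponds to 0.65 expected points and Q = 0.3.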


End of era

The era of test40 is over: training of test40 has been stopped.

Instead, we've just started to train test60. Here's what's new (relative to test40):

  • Residual tower size is 320x24 (24 blocks, 320 filters).
  • Policy head is AlphaZero-style (an 8×8×73 tensor instead of a flat 1858-element vector).
  • Value head is WDL.
  • Instead of using a fixed number of nodes for every move during training, a KLD threshold is used (think less in obvious positions, more in complicated ones; on average it's still about the same as before).
  • [disabled after net 60021 due to issues] Illegal moves are masked from the policy head, so the neural network doesn't have to learn to detect legal moves itself.
  • Instead of training the value head just on the game outcome {-1; 0; 1}, a fraction of the tree's Q value after the move is added.
  • Using a different FPU at the root than for the rest of the tree.
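The KLD-threshold stopping rule can be sketched roughly like this; this is a simplified illustration of the idea, not the actual training code, and the threshold value here is invented for the example:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over the same moves."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def should_stop(prev_visits, curr_visits, nodes_since, threshold=3e-5):
    """Stop thinking once the visit distribution has stabilized, i.e. the
    KL divergence gained per node searched drops below the threshold.
    Sketch only: the threshold value is made up, not Lc0's."""
    def normalize(v):
        total = sum(v)
        return [x / total for x in v]
    kld = kl_divergence(normalize(curr_visits), normalize(prev_visits))
    return kld / max(nodes_since, 1) < threshold
```

In an obvious position the visit distribution stops changing quickly, so the search ends early; in a complicated one the distribution keeps shifting and the search continues.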
As mentioned in an earlier post, this is not going to be a "clean run": if we discover improvements while test60 is being trained, they will likely be incorporated in the middle of the training process.

Let's see how it goes.


Nibbler: an Lc0 GUI

Does the world need another Chess GUI? Some people say no, but they don't have very many Chess GUIs, do they?

I'm the author of Nibbler, the first Chess GUI designed specifically for Lc0, and inspired by the excellent Lizzie Go interface. Although Nibbler is a work in progress, it's very usable and has received many favourable comments.

Being designed specifically for Lc0 means it can use the sort of information Leela provides that other engines don't (or can't). For example, Leela is always giving opinions about multiple different moves, and so Nibbler shows several possibilities graphically.

For the more technically minded Leela user, Nibbler also provides Nodes, Policy, and Uncertainty statistics.

Prebuilt binary releases are here. Do send bug reports and feature requests.


What's going on with training!

I've just decided that maybe it's time for a post describing what's going on and what the plans are, stuff like that!

There are some plans for the Lc0 engine itself, but that is a topic for another post. At least we hope to make releases more regular again (v0.21.0-rc1 appeared on February 16th, and we are not even at v0.22 yet).

So, this post is about networks and training.


Lc0 v0.21.2 has been released

After quite a long delay we are releasing an update to Lc0; the new version is v0.21.2.

Download here.

Changes include:
  • Optimizations for GTX 16xx cards (cudnn-fp16 now works on them, though it doesn't give as much of a performance boost as on RTX cards).
  • "Sticky endgames" feature (default on). This makes Lc0 play better when it sees a checkmate somewhere during the search.
  • The centipawn formula has been updated; Lc0 now shows eval values more similar to what other engines show.
  • Optimizations for wider (more filters) neural nets, for future runs (maybe test60).
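For the curious, the updated centipawn conversion maps the engine's Q value in [-1, 1] to a centipawn-like number via a tan curve. The sketch below uses constants believed to match the Lc0 source around this release, but treat both the formula and the constants as an assumption, not an official reference:

```python
import math

def q_to_centipawns(q: float) -> float:
    # Map Q in [-1, 1] to a centipawn-style eval.
    # Constants are assumed from the Lc0 source around v0.21.2;
    # verify against the actual release before relying on them.
    return 111.714640912 * math.tan(1.5620688421 * q)
```

A Q of 0 maps to 0 centipawns, and the curve steepens sharply as Q approaches ±1, so decisive positions get very large evals.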


Lc0 won TCEC 15

Congrats. :)


Training server is down

2019-05-26 UPDATE: Temporary server is up again.

2019-05-25 UPDATE2: Turns out the cloud instance was preemptible, so it was preempted; no training again for now.

2019-05-25 UPDATE: We've been donated a temporary AWS cloud instance for the duration of the main server downtime! Training is back up, but only for test53; test40 is still paused.

2019-05-23 UPDATE: PSU seems to be in order, replugging all cables didn't help. The person who has access to the server has to travel again, no news expected until next week. :-(

2019-05-20 UPDATE: The server doesn't react to a power button, so it's not something trivial. Will take a more thorough look tomorrow.

2019-05-17 UPDATE: The person who has access to the server is currently travelling, until Sunday. That means that updated information about the server is expected in the beginning of the next week.

--- original message ---

As many of you have heard, the training of Lc0 was suspended yesterday.

Here is a short summary of what happened.

In April 2018, when we were looking for a training server, one of our contributors kindly volunteered to help and provided 3 servers (with mid-range GPUs), together with a space to host them, and covered the electricity bills.

In May 2018 we crowdfunded and bought another server (with two GTX 1080 Ti cards, which were high-end consumer GPUs at that time) and located it at the same place. Since then it has been our training server.
(/end of context)

Yesterday, May 16, 2019, our training server went offline (the three other servers are online, but they are too weak to be useful), and so far we have not been able to reach the person who hosts it.

We don't know what happened to that machine (it's likely that a simple reboot would help, or maybe some part needs replacing, or maybe it's something more serious).
We also don't know how long it will take to reach the person who can take a look.

Q: Why not move training to a cloud service?

A: Several reasons:

  1. We had our own server, and it was just fine.
  2. It takes time to set up (so if we choose this option, or decide to buy a new machine and put it somewhere else, it will take up to a few days to bring it online).
  3. It is expensive (at least $600/month for the configuration that we need), and we are trying to stay away from accounting.


Competition Season!

Today, May 10, 2019, Leela is playing in the first game of the high profile TCEC Season 15 Superfinal. Leela qualified after she cleared Division P undefeated, winning at least once head-to-head against every opponent except Houdini. Her opponent in the superfinal is Stockfish, who lost once to Leela head-to-head in Division P, but finished with a higher score by winning more games than Leela against the other Division P competitors.

In the just-concluded TCEC Cup III, seeding was determined by the engines' finishing positions in their divisions, with the exception of Leela, who was automatically seeded #1 as the defending Cup champion. Leela easily handled the Round of 32 and Round of 16 of TCEC Cup 3, knocking out Marvin (5-0) and Booot (4.5-0.5). The quarterfinals featured the same engines as Division P, with Leela knocking out Fire (5-3). In the semifinal, Leela scored two wins over Houdini, ending with a 4.5-2.5 score to advance to the cup final versus Stockfish. In the final, Leela and Stockfish played their first eight games to a tie, each with one win and one loss. In the first tie-breaker pair, Leela won as White and held Stockfish to a draw with the colors reversed, winning the match 5.5-4.5 for her second consecutive TCEC Cup.

In the hours before the superfinal, the TCEC team ran various unofficial tests at disparate time controls. One such test match featured Stockfish at 30' + 5" time control and Leela 1/10th of that at 3' + 0.5". Leela still won a game and drew the match despite the large time disadvantage. Prior to the Cup, Leela faced the supercharged 176-thread, 7-man tablebase "Bluefish" version of the Stockfish engine that is used as a kibitzer for the TCEC eval graphs. Again, Leela won a game and drew the match.

The Leela net T41800 (used in Division P) is approximately 10-20 Elo stronger than the T32930 net that narrowly lost to Stockfish in the previous superfinal, and the T40.T8.610 net selected for this superfinal is approximately 35 Elo stronger than T32930. Given Leela's progress, many chess fans think that Leela is likely to win this superfinal.

The net T40.T8.610 that is playing in the superfinal is a fork of T40 that has been refined by @jhorthos. While more detail is available in the documentation he put together, it can be summarized as a T40 net forked from 42000 with an additional 3x LR (learning rate) drop applied at the fork and partial 7-man tablebase rescoring. It was trained only on games produced in the main training pipeline and doesn't include any non-zero features that distinguish it from any other T40 net (LR drops and TB rescoring are a standard part of Leela's training process). This fork is a short-term Elo boost that allows for extra polishing of a competition net without affecting the LR schedule of the main training run. The main T40 run is expected to surpass this net and will have another LR drop in the coming weeks/months before it eventually plateaus and is retired to open up resources for the next run.

Leela is also competing in the Computer Chess Championship CCC 8: Deep Dive as the last engine up in the escalation format, playing four games against each of the other 23 engines and looking to overtake the current top scores from Antifish, Leelenstein, and undefeated Stockfish.


Backend configuration

To break the long silence in this blog, let me write about Lc0 backend configuration, as it has been totally undocumented so far.
(For other options, it's possible to get a rough idea of their meaning by running lc0 --help.)

So, there are two command line parameters and corresponding UCI options:

  • --backend (UCI option: Backend) -- name of the backend,
  • --backend-opts (UCI option: BackendOptions) -- backend options.

While it's relatively clear what the different backends do (I list them all below with possible options), the syntax of backend options has always been a mystery.

So, let's talk about options syntax first:
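As a preview (a sketch based on commonly seen usage; exact syntax may vary between versions), options are comma-separated key=value pairs, and parenthesized groups pass options down to child backends:

```
# One backend, one option:
lc0 --backend=cudnn --backend-opts=gpu=1

# The multiplexing backend forwards requests to several child backends;
# each parenthesized group configures one child:
lc0 --backend=multiplexing --backend-opts="(backend=cudnn,gpu=0),(backend=cudnn,gpu=1)"
```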


Announcing LcZ, the world's first neural net based chess engine and interactive fiction amalgamation.

After the huge success of the xyzzy extension to the UCI protocol introduced in Lc0, it was decided to take it one step further and introduce z-machine compatibility to the chess engine, giving birth to LcZ.

Download windows binaries from: https://github.com/borg323/lc0/releases/tag/first-release
or source code from: https://github.com/borg323/lc0/tree/first-release

In order to immerse yourself in this new world, you need to select the z mode of LcZ. Running “LcZ z --help” will show all the available options. “LcZ z” will use the default file; to select a different file, run “LcZ z -z name_of_the_z_file_to_use”.

Distributed with LcZ (and used by default) is an innovative, chess-related piece of interactive fiction called “zugzwang”, written by Magnus Olsson under the pseudonym of Alfred Timpson. In zugzwang the player takes part in a chess game, participating as a pawn. The action starts near the end of the match:
It has been a fierce battle, full of bold moves, daring attacks, surprising gambits and heroic sacrifices. Throughout this, you've remained at your starting point, patiently waiting for orders. But now that the metaphorical dust has settled and most pieces have been exchanged, it finally looks as if it is your turn. The White King and one of your Pawn colleagues have managed to nail the Black King with his back to the edge of the board, but they can't finish him off without help.
At this point, it is finally time for you (playing the f2 pawn) to make your move. Be very, very careful: one wrong move and your side loses.

SPOILER ALERT: zugzwang also includes comprehensive chess instructions, accessible through reading the rule book the protagonist is carrying - use the “inventory” command to see it and then “read the rule book” in true interactive fiction style. Unfortunately, even this doesn’t answer our current pressing question, regarding three-fold repetitions when castling rights are present, but cannot be exercised, and then subsequently lost. See here for extensive discussion.

Several interactive fiction authors have offered to create additional content specifically for LcZ and we will be making separate announcements when available. 

Understandably, the developers are very excited. The possibilities really are endless. Some quotes:

  • “If this doesn't get us all hired by deepmind, nothing will” -- Anonymous Dev.
  • “Why do I keep getting spam messages from some person claiming to be me from the future saying I need to reject the lcZ merge commit before it is too late?” -- ditto
  • “I was going to say it is the best thing since sliced bread, but it would be too much - you need an almost in there” -- borg
  • “Listen. Understand. LcZ is out there. It can't be reasoned with, it can't be bargained with... it doesn't feel pity or remorse or fear...and it absolutely will not stop. Ever. Until you are checkmated.”
  • “Three billion human lives ended on April 1, 2019. The Survivors called the war Zugzwang. They lived only to face a new nightmare: the war against LcZ.”
  • “It now knows why you cry, but it is something it can never do.”


Upgrading to Lc0 v0.21.1 in CuteChess

This is a quick note for people who use CuteChess and have just upgraded to v0.21.1 from earlier versions.

The meaning of FPU-related parameters changed in v0.21.1, along with the default values of those parameters.

If you upgraded your Lc0 to v0.21.1 in CuteChess by just replacing the files, CuteChess most probably still remembers the old defaults and sets them.

To make CuteChess see the correct default values, either install Lc0 v0.21.1 as a fresh engine, or try pressing "Detect" / "Restore to defaults" in the engine settings in CuteChess.

After the update, there should be:

  • no FpuReduction parameter,
  • FpuValue equal to 1.20000 (it was -1.00000 before the change).

UPD: A similar issue also happens in Arena. The fix is similar: "Restore to defaults" in engine settings, or reinstall the engine fresh.