The Leela Chess Zero vs Stockfish case also offers an interesting perspective on the bitter lesson.
Here's my (maybe a bit loose) recollection of what happened:
Step 1. Stockfish was the typical human-knowledge AI, with tons of actual chess knowledge injected in the process of building an efficient chess engine.
Step 2. Then came Leela Chess Zero, with its AlphaZero-inspired training: a chess engine trained purely with RL, with no prior chess knowledge added. And it beat Stockfish. This was a “bitter lesson” moment.
Step 3. The Stockfish devs added a neural network trained with RL to their chess engine, on top of their existing heuristics. And Stockfish easily took back its crown.
Yes, throwing more compute at a problem is an effective way to solve it, but if all you have is compute, you'll almost certainly lose to somebody who has both compute and knowledge.
Its evaluation now relies purely on an NNUE neural network.
So it's a good example of the bitter lesson: more compute eventually won against handwritten evaluation.
The Stockfish developers thought the old handwritten evaluation would complement the neural network, so they kept that code for a few years; it then turned out that the NNUE network didn't need any input of human chess knowledge.
AFAIK this is only part of it: it still has its opening library, as well as the endgame tablebases (IIRC chess is solved for up to seven pieces remaining on the board).
Also, the PR you linked says the removed code in fact had a performance impact, just too low to justify its code size (25% of all Stockfish).
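For context, the kind of handwritten evaluation that NNUE replaced looks roughly like this. This is my own illustrative sketch, not Stockfish's actual code: material values in traditional centipawn figures plus a hypothetical bonus for central squares.

```python
# Sketch of a classical handcrafted chess evaluation, the style of code
# the NNUE network made unnecessary. NOT Stockfish's actual evaluation;
# the piece values and center bonus here are illustrative conventions.
PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}  # d4, e4, d5, e5

def evaluate(pieces):
    """pieces: list of (symbol, is_white, (rank, file)) tuples.
    Returns a centipawn score from White's point of view."""
    score = 0
    for symbol, is_white, square in pieces:
        value = PIECE_VALUE[symbol]
        if square in CENTER:
            value += 10  # hand-tuned bonus for central control
        score += value if is_white else -value
    return score
```

A real engine of that era stacked hundreds of such hand-tuned terms (pawn structure, king safety, mobility); the bitter-lesson point is that a learned evaluation ended up beating all of them combined.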
For AI researchers, the Bitter Lesson says not to rely on supervised learning, manual data labeling, manual ontologies, or manual business rules,
nor on *manually coded* AI systems, except as bootstrap code.
Unsupervised methods prevail, even if they are compute-expensive.
The challenge Sutton's Bitter Lesson poses for AI researchers is to develop sufficiently general unsupervised methods for learning and self-improvement.
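The "no human knowledge beyond the rules" recipe can be shown at toy scale. Here's a minimal sketch of my own, not AlphaZero's actual method (which uses deep networks plus MCTS): tabular Q-learning from pure self-play on a subtraction game, where the learner is given only the legal moves and the win/loss signal.

```python
import random

# Toy "subtraction game": players alternately remove 1 or 2 stones from a
# pile; whoever takes the last stone wins. The known optimal strategy is
# to leave your opponent a multiple of 3. The agent is told none of this.
N = 12

# Tabular action values: Q[(pile, action)] for the player to move.
Q = {(p, a): 0.0 for p in range(1, N + 1) for a in (1, 2) if a <= p}

def moves(pile):
    return [a for a in (1, 2) if a <= pile]

def best(pile):
    return max(moves(pile), key=lambda a: Q[(pile, a)])

random.seed(0)
alpha, eps = 0.5, 0.2
for _ in range(20000):
    pile = random.randint(1, N)
    history = []  # (state, action) pairs, players alternating
    while pile > 0:
        a = random.choice(moves(pile)) if random.random() < eps else best(pile)
        history.append((pile, a))
        pile -= a
    # The last mover took the final stone and won (+1); walk backwards
    # through the game, flipping the sign for the alternating players.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
        reward = -reward
```

After self-play training, the greedy policy rediscovers the multiples-of-3 strategy (e.g. from a pile of 4 it takes 1, leaving 3) with no strategic knowledge coded in, which is the Bitter Lesson dynamic in miniature.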