
~2000 Elo Python Chess Engine that implements: Negamax, PeSTO’s Evaluation, Null Move, Quiescence Search, Lazy SMP.


moonfish

Moonfish is a didactic Python chess engine designed to showcase parallel search algorithms and modern chess programming techniques. Built with code readability as a priority, it makes advanced concepts easily accessible and offers a more approachable alternative to C++ engines.

The engine plays at approximately 2000 Elo against Lichess Stockfish bots (it beats level 5 and loses to level 6) and ships with comprehensive test suites, including the Bratko-Kopec tactical test positions.

Play Online

Play against Moonfish in your browser - No installation required!

Quickstart

Requirements

  • Python 3.10

Installation and usage

Install the Python library:

pip install moonfish

From Python:

$ python
>>> import chess
>>> import moonfish
>>> board = chess.Board()
>>> moonfish.search_move(board)
Move.from_uci('g1f3')

You can also call the CLI, which works as a UCI-compatible engine:

$ moonfish --mode=uci
uci # <- user input
id name Moonfish
id author luccabb
uciok

You can also run it as an API:

moonfish --mode=api

Then send a request:

$ curl "http://localhost:5000/?fen=rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR%20w%20KQkq%20-%200%201&depth=4&quiescence_search_depth=3&null_move=True&null_move_r=2&algorithm=alpha_beta"
{
  "body": {
    "move": "e2e4"
  },
  "headers": {
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "OPTIONS,GET",
    "Access-Control-Allow-Origin": "*"
  },
  "statusCode": 200
}

Features

Search Algorithms

  • Alpha-Beta Pruning - Negamax with α-β cutoffs
  • Lazy SMP - Shared memory parallel search utilizing all CPU cores
  • Layer-based Parallelization - Distributing work at specific search depths
  • Null Move Pruning - Passes the turn to cheaply prune positions that stay strong even after doing nothing (applied with care around zugzwang, where the heuristic fails)
  • Quiescence Search - Extended search for tactical positions
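The core idea behind the first bullet can be shown on a toy game tree. This is a minimal sketch of negamax with alpha-beta cutoffs, not moonfish's actual implementation; the tree shape and leaf scores are made up for illustration:

```python
# Toy game tree: internal nodes map to children, leaves map to a score
# from the side-to-move's perspective at that leaf.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF = {"a1": 3, "a2": -2, "b1": 5, "b2": 1}

def negamax(node, alpha, beta):
    if node in LEAF:
        return LEAF[node]
    best = float("-inf")
    for child in TREE[node]:
        # Negate and swap the window: the child's best is our worst.
        score = -negamax(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # beta cutoff: the opponent avoids this line
            break
    return best

print(negamax("root", float("-inf"), float("inf")))
```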

Evaluation & Optimization

  • PeSTO Evaluation - Piece-square tables (PST) with tapered evaluation. Using Rofchade's PST.
  • Transposition Tables - Caching to avoid redundant calculations
  • Move Ordering - MVV-LVA (Most Valuable Victim - Least Valuable Attacker)
  • Syzygy Tablebase support for perfect endgame play
  • Opening Book integration (Cerebellum format)
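Tapered evaluation (the PeSTO approach above) blends a middlegame and an endgame score by remaining material. A minimal sketch, assuming the common phase weights knight=bishop=1, rook=2, queen=4 (24 total with all such pieces on the board); the two scores would come from the piece-square tables:

```python
# Phase contribution per non-pawn, non-king piece type (a common convention).
PHASE_WEIGHTS = {"knight": 1, "bishop": 1, "rook": 2, "queen": 4}

def tapered_score(mg_score: int, eg_score: int, phase: int) -> int:
    """Blend middlegame and endgame scores by the game-phase counter."""
    phase = min(phase, 24)  # promotions can push the raw phase above 24
    return (mg_score * phase + eg_score * (24 - phase)) // 24
```

With all pieces on the board (`phase == 24`) the middlegame score dominates entirely; as material comes off, the evaluation slides smoothly toward the endgame tables.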

Engine Interfaces

  • UCI Protocol - Compatible with popular chess GUIs
  • Web API - RESTful interface for online integration
  • Lichess Bot - Ready for deployment on Lichess.org
  • RL Environment - OpenEnv-compatible environment for reinforcement learning

Configuration Options

| Parameter | Description | Default | Options |
|---|---|---|---|
| `--mode` | Engine mode | `uci` | `uci`, `api` |
| `--algorithm` | Search algorithm | `alpha_beta` | `alpha_beta`, `lazy_smp`, `parallel_alpha_beta_layer_1` |
| `--depth` | Search depth | `3` | 1-N |
| `--null-move` | Whether to use null move pruning | `False` | `True`, `False` |
| `--null-move-r` | Null move reduction factor | `2` | 1-N |
| `--quiescence-search-depth` | Max depth of quiescence search | `3` | 1-N |
| `--syzygy-path` | Tablebase directory | `None` | Valid path |

Reinforcement Learning Environment

Moonfish includes an OpenEnv-compatible RL environment for training chess agents. OpenEnv is a framework for RL environments that supports both local and remote (HTTP) execution.

Installation

pip install moonfish[rl]

Local Usage

from moonfish.rl import ChessEnvironment, ChessAction

# Create environment with moonfish as opponent
env = ChessEnvironment(opponent="moonfish", opponent_depth=2)

# Reset and play
obs = env.reset()
while not obs.done:
    move = select_move(obs.legal_moves)  # Your policy here
    obs, reward, done = env.step(ChessAction(move=move))

env.close()
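`select_move` above stands in for your policy. The simplest drop-in is a uniform random policy, assuming `obs.legal_moves` is a list of UCI move strings:

```python
import random

def select_move(legal_moves):
    """Uniform random policy: pick any legal move."""
    return random.choice(legal_moves)
```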

Configuration Options

from moonfish.rl import ChessEnvironment, RewardConfig

# Custom reward shaping
config = RewardConfig(
    win=1.0,              # Reward for winning
    loss=-1.0,            # Penalty for losing
    draw=0.0,             # Reward for draw
    illegal_move=-0.1,    # Penalty for illegal moves
    use_evaluation=True,  # Enable position-based intermediate rewards
    evaluation_scale=0.001,
)

# Environment options
env = ChessEnvironment(
    reward_config=config,
    opponent="moonfish",   # "moonfish", "random", or None (self-play)
    opponent_depth=2,      # Search depth for moonfish opponent
    agent_color=True,      # True=White, False=Black, None=alternate
    max_moves=500,         # Max half-moves before draw
)
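To make the shaping concrete, here is an illustrative sketch of how a per-step reward with the parameters configured above might be computed. This is a hypothetical shape suggested by the config fields, not moonfish's exact formula:

```python
def shaped_reward(outcome, eval_delta_cp, win=1.0, loss=-1.0, draw=0.0,
                  use_evaluation=True, evaluation_scale=0.001):
    """Terminal outcomes dominate; otherwise scale the centipawn swing."""
    if outcome == "win":
        return win
    if outcome == "loss":
        return loss
    if outcome == "draw":
        return draw
    # Non-terminal step: intermediate reward from the evaluation change.
    if use_evaluation:
        return eval_delta_cp * evaluation_scale
    return 0.0
```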

OpenEnv Server Mode

For distributed training or integration with OpenEnv-compatible frameworks:

# Start the server locally
python -m uvicorn moonfish.rl.server.app:app --port 8000
# Connect via HTTP client
from moonfish.rl import make_env, ChessAction

client = make_env("http://localhost:8000")
obs = client.reset()
result = client.step(ChessAction(move="e2e4"))
print(result.observation.fen)

API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/metadata` | GET | Environment configuration |
| `/reset` | POST | Start new game (optional: `fen`, `seed`) |
| `/step` | POST | Make a move (`{"move": "e2e4"}`) |
| `/state` | GET | Current game state |
| `/engine-move` | POST | Get best move for position |

Hosted on Hugging Face

A hosted version is available on Hugging Face Spaces - use it for training without running your own server:

from moonfish.rl import make_env, ChessAction

client = make_env("https://luccabb-moonfish-chess.hf.space")
obs = client.reset()
result = client.step(ChessAction(move="e2e4"))

Space URL: https://huggingface.co/spaces/luccabb/moonfish_chess

Deploy Your Own

# Install OpenEnv CLI
pip install openenv

# Clone and deploy to Hugging Face Spaces
cd moonfish/rl
openenv validate  # Check environment structure
openenv push      # Deploy to HF Spaces

Contributing

We welcome contributions; feel free to open PRs and issues! Areas of interest:

  • New search algorithms
  • Improved evaluation functions
  • Time-constrained search (e.g. find the best move in 40s)
  • Additional test positions
  • GitHub CI testing
  • Neural network integration
  • Performance benchmarking on different hardware
  • Improving caching

License

MIT License - see LICENSE file for details.
