This repository contains the implementation of the project described in this document.
This repository also includes the implementation of the learning-based synthesis algorithm described in this article, which was developed by Alper Kamil Bozkurt and adapted from this repository.
The video rendering and recording are based on this gridworld repository.
- Python: (>=3.5)
- Rabinizer 4: ltl2ldba must be in PATH (ltl2ldra is optional)
- NumPy: (>=1.15)
The examples in this repository also require the following optional libraries for visualization:
- Matplotlib: (>=3.0.3)
- JupyterLab: (>=1.0)
- ipywidgets: (>=7.5)
- Spot Library
To install the current release of the CSRL codebase:
```shell
git clone https://github.com/IngyN/macsrl.git
cd macsrl
pip3 install .
```
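After installation, a quick sanity check along the following lines can confirm that the package imports and that ltl2ldba is visible on PATH (a minimal sketch; the installed package name csrl is an assumption):

```python
# Post-install sanity check: a minimal sketch.
# Assumption: the package installs under the name `csrl`.
import shutil

import csrl  # raises ImportError if the installation failed

if shutil.which('ltl2ldba') is None:
    print('Warning: ltl2ldba was not found in PATH; LTL-to-LDBA translation will fail.')
else:
    print('csrl imported and ltl2ldba found in PATH.')
```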
The main class in this repository is MultiControlSynthesis. It takes a list of ControlSynthesis objects (one per agent), a GridMDP object, and an OmegaAutomaton object representing the shared automaton (with sharedoa=True for our method); see the sketch below.
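The following is a minimal sketch of how such an object might be assembled; the module paths, the constructor signatures, and the placement of the sharedoa flag are assumptions based on the description above rather than the definitive interface:

```python
# Minimal sketch of composing a two-agent MultiControlSynthesis problem.
# Module paths, constructor signatures, and the placement of the sharedoa
# flag are assumptions based on the description above.
import numpy as np
from csrl.mdp import GridMDP
from csrl.oa import OmegaAutomaton
from csrl import ControlSynthesis, MultiControlSynthesis

oa = OmegaAutomaton('F a & F b', sharedoa=True)   # shared LDBA for all agents (assumed flag placement)

shape = (4, 4)
structure = np.full(shape, 'E')                    # all cells empty ('E' as in the CSRL examples)
label = np.empty(shape, dtype=object)
for idx in np.ndindex(shape):
    label[idx] = ()                                # no atomic propositions by default
label[0, 0] = ('a',)                               # label two cells with 'a' and 'b'
label[3, 3] = ('b',)
grid_mdp = GridMDP(shape=shape, structure=structure, label=label)

agents = [ControlSynthesis(grid_mdp, oa) for _ in range(2)]   # one ControlSynthesis per agent
masynth = MultiControlSynthesis(agents, grid_mdp, oa)         # multi-agent problem over the shared automaton
```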
Graphing is done by loading the saved episode returns into the Graphing notebook. For the video rendering, we use the Annotation and Plotter classes in annotation.py and plotter.py.
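As an illustration, a plot of the saved returns might be produced along these lines (the file name episode_returns.npy and the array layout are hypothetical; the Graphing notebook in the repository is the reference):

```python
# Plot saved per-episode returns, one curve per agent.
# The file name and the (agents x episodes) layout are assumptions.
import numpy as np
import matplotlib.pyplot as plt

returns = np.load('episode_returns.npy')          # hypothetical file of saved episode returns
for i, agent_returns in enumerate(np.atleast_2d(returns)):
    plt.plot(agent_returns, label='agent %d' % i)
plt.xlabel('Episode')
plt.ylabel('Return')
plt.legend()
plt.show()
```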
The package consists of three main classes: GridMDP, OmegaAutomaton and ControlSynthesis. The class GridMDP constructs a grid-world MDP using the parameters shape, structure and label. The class OmegaAutomaton takes an LTL formula ltl and translates it into an LDBA. The class ControlSynthesis can then be used to compose a product MDP of the given GridMDP and OmegaAutomaton objects, and its method q_learning can be used to learn a control policy for the given objective. For example:
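The example below is a minimal end-to-end sketch under the conventions described above; the module layout (csrl.mdp, csrl.oa, csrl), the 'E' structure code, and the q_learning keyword arguments T (episode length) and K (number of episodes) are assumptions carried over from the upstream CSRL implementation rather than guaranteed parts of this repository's API:

```python
# End-to-end sketch: build an LDBA from an LTL formula and a grid-world MDP,
# compose the product MDP, and learn a policy with Q-learning.
# Module paths, structure codes, and q_learning arguments are assumptions.
import numpy as np
from csrl.mdp import GridMDP
from csrl.oa import OmegaAutomaton
from csrl import ControlSynthesis

ltl = '(G F a) & (G !b)'                  # repeatedly visit 'a' while always avoiding 'b'
oa = OmegaAutomaton(ltl)                  # translated into an LDBA via ltl2ldba

shape = (3, 3)
structure = np.full(shape, 'E')           # every cell is an ordinary empty cell
label = np.empty(shape, dtype=object)
for idx in np.ndindex(shape):
    label[idx] = ()                       # no atomic propositions by default
label[2, 2] = ('a',)                      # goal region
label[2, 0] = ('b',)                      # region to avoid
grid_mdp = GridMDP(shape=shape, structure=structure, label=label)

csrl = ControlSynthesis(grid_mdp, oa)     # product MDP of grid_mdp and oa
Q = csrl.q_learning(T=100, K=100000)      # T: episode length, K: number of episodes (assumed)
policy = np.argmax(Q, axis=-1)            # greedy policy from the learned Q-values
```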
The repository contains a couple of example IPython notebooks:
Animations of the case studies:
- Buttons Experiment Video
- Buttons Step by Step Images
- Flag Collection Step by Step Images
- Rendez-vous Step by Step Images
HTML representations of the automata: