Logic Diffusion is a neuro-symbolic generative architecture designed to address bias in the distributions learned by deep generative models. Unlike standard diffusion models, which blindly approximate the training data distribution, Logic Diffusion injects differentiable logical constraints into both training and sampling, steering generations toward rule-compliant outputs. The repository is organized as follows:
```
logic-diffusion-v0/
│
├── logic_diffusion/       # The Core Framework
│   ├── __init__.py        # Package initialization
│   ├── config.py          # Hyperparameters & Configuration
│   ├── logic.py           # Differentiable Logic (T-Norms) & Constraints
│   ├── modeling.py        # Lightweight U-Net Architecture
│   └── pipeline.py        # Logic-Guided Diffusion Sampling Loop
│
├── train.py               # Main training script (Joint Optimization)
├── app.py                 # Interactive Gradio Web Demo
├── requirements.txt       # Project Dependencies
└── README.md              # Documentation
```
Clone the repository and install the dependencies.
```bash
git clone https://github.com/your-username/logic-diffusion-v0.git
cd logic-diffusion-v0
pip install -r requirements.txt
```
Run the training script to initialize the U-Net and train it on synthetic data. The script uses a joint loss function: $$ \mathcal{L}_{total} = \mathcal{L}_{MSE} + \lambda \cdot \mathcal{L}_{Logic} $$
```bash
python train.py
```
- Output: The script prints loss metrics to the console and saves the trained model weights to logic_diffusion_v0.pt.
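For orientation, here is a minimal sketch of how such a joint objective can be computed in PyTorch. The names unet and logic_truth, the noise schedule, and the LAMBDA weight are illustrative stand-ins, not the exact code in train.py, modeling.py, or logic.py:

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)
LAMBDA = 0.5  # hypothetical weight for the logic term

def joint_loss(unet, logic_truth, x0):
    """Compute L_total = L_MSE + lambda * L_Logic for a clean image batch x0."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    ab = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise              # forward diffusion q(x_t | x_0)
    pred_noise = unet(x_t, t)                                      # U-Net predicts the added noise
    mse = F.mse_loss(pred_noise, noise)                            # L_MSE
    x0_hat = (x_t - (1.0 - ab).sqrt() * pred_noise) / ab.sqrt()    # denoised estimate of x_0
    logic = (1.0 - logic_truth(x0_hat)).mean()                     # L_Logic: 1 - truth value
    return mse + LAMBDA * logic                                    # L_total
```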
Launch the Gradio interface to generate samples and tweak the "Logic Strictness" parameter in real-time.
```bash
python app.py
```
- Click the local URL (e.g., http://127.0.0.1:7860) to open the app in your browser.
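For reference, a rough sketch of how a demo like this can be wired up in Gradio; the sample import and its logic_strictness argument are assumptions for illustration, not necessarily the real pipeline.py API:

```python
import gradio as gr

from logic_diffusion.pipeline import sample  # assumed entry point; the real API may differ

def generate(logic_strictness: float):
    # Higher strictness = stronger logic guidance during sampling (assumed semantics)
    return sample(logic_strictness=logic_strictness)

demo = gr.Interface(
    fn=generate,
    inputs=gr.Slider(0.0, 1.0, value=0.5, label="Logic Strictness"),
    outputs=gr.Image(label="Generated Sample"),
)
demo.launch()  # serves at http://127.0.0.1:7860 by default
```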
Standard Boolean logic (True/False) has no gradients, making it unusable for deep learning training. Logic Diffusion uses Fuzzy Logic (T-Norms) to relax these rules into continuous functions.
- AND Operator (Product T-Norm): $a \wedge b = a \cdot b$
- IMPLIES Operator (Reichenbach Implication): $a \Rightarrow b = 1 - a + a \cdot b$
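As a concrete sketch (not necessarily the exact code in logic.py), both connectives are one-line, fully differentiable operations on tensors of truth values in $[0, 1]$:

```python
import torch

def fuzzy_and(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Product T-Norm: differentiable AND over truth values in [0, 1]."""
    return a * b

def fuzzy_implies(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Reichenbach implication: 1 - a + a * b."""
    return 1.0 - a + a * b
```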
In logic.py, we define constraints that calculate a "Truth Value" (0 to 1) for a generated batch. The model minimizes the violation of this truth value:
```python
# Pseudo-code example
violation = 1.0 - truth_value(generated_image)  # how far the batch is from satisfying the rule
loss = violation.mean()                         # logic loss term
loss.backward()                                 # gradients update the image pixels to be "truer"
```

- v0: Core implementation of Differentiable Logic and Simple U-Net.
- v0.1: Integration with Latent Diffusion (Stable Diffusion).
- v0.2: API for defining First-Order Logic rules via natural language.
Contributions are welcome! We are looking for help with:
- Implementing new T-Norms (Gödel, Łukasiewicz); a starting sketch follows this list.
- Optimizing the logical gradient calculation.
- Adding support for RGB datasets (CIFAR-10, CelebA).
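For anyone picking up the T-Norm item, both connectives have standard definitions; a minimal sketch (function names are placeholders, not part of the current codebase):

```python
import torch

def godel_and(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Gödel (minimum) T-Norm."""
    return torch.minimum(a, b)

def lukasiewicz_and(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Łukasiewicz T-Norm: max(0, a + b - 1)."""
    return torch.clamp(a + b - 1.0, min=0.0)
```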
This project is licensed under the MIT License - see the LICENSE file for details.