promptmesh

Composable prompt graphs for LLM orchestration in Python.

Python 3.11+ · License: MIT · Code style: black


🌟 Overview

promptmesh is a Python library for building composable prompt graphs. Unlike traditional linear prompt chains, promptmesh treats prompts as nodes in a directed graph, enabling:

  • Conditional routing based on LLM outputs or semantic similarity
  • Parallel execution of independent nodes
  • Multi-layer caching for optimized performance
  • Local execution without external infrastructure
  • YAML-based graph definitions for non-programmers

πŸš€ Quick Start

Installation

# Basic installation
pip install promptmesh

# With OpenAI support
pip install promptmesh[openai]

# With all providers
pip install promptmesh[all]

Hello World Example

from promptmesh import Graph, LLMNode
from promptmesh.adapters import OpenAIAdapter

# Create adapter
adapter = OpenAIAdapter(api_key="your-api-key")

# Build graph
graph = Graph("hello", "Hello World")
graph.add_node(
    LLMNode(
        id="greeter",
        prompt_template="Say hello to {{name}} in a friendly way",
        model="gpt-4"
    )
)
graph.set_entry_nodes(["greeter"])
graph.set_exit_nodes(["greeter"])

# Execute
result = graph.run(
    inputs={"name": "VEDA"},
    adapters={"llm": adapter}
)

print(result["greeter"])
# Output: "Hello VEDA! It's wonderful to meet you! How are you doing today?"

YAML Example

# chat.yaml
id: simple_chat
name: Simple Chat

nodes:
  - id: responder
    type: llm
    prompt_template: "Respond to: {{message}}"
    model: gpt-4
    temperature: 0.7

entry_nodes:
  - responder

exit_nodes:
  - responder

Load and run it from Python:

from promptmesh.loaders import YAMLLoader

graph = YAMLLoader.load("chat.yaml")
result = graph.run({"message": "Hello!"})

✨ Features

Conditional Routing

Route execution based on LLM outputs, confidence scores, or semantic similarity:

from promptmesh import Edge

# DSL-based condition
graph.add_edge(Edge(
    source_id="classifier",
    target_id="technical_handler",
    condition_type="dsl",
    condition="output.category == 'technical'"
))

# Semantic similarity
graph.add_edge(Edge(
    source_id="query_encoder",
    target_id="relevant_handler",
    condition_type="semantic",
    semantic_threshold=0.85
))

Multi-Layer Caching

Optimize performance with automatic caching:

from promptmesh.cache import MemoryCache, RedisCache

# In-memory only
cache = MemoryCache(max_size_mb=100)

# Two-level: memory + Redis
cache = RedisCache(
    redis_url="redis://localhost:6379",
    l1_enabled=True
)

graph.set_cache(cache)

Streaming

Stream tokens as they're generated:

async for chunk in graph.run_stream(inputs={"query": "Explain AI"}):
    print(chunk.token, end="", flush=True)
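
Because run_stream is asynchronous (hence the async for), the snippet above needs to run inside an event loop; a minimal harness using only the standard asyncio module:

import asyncio

async def main():
    # Consume tokens from the graph's streaming API as they arrive
    async for chunk in graph.run_stream(inputs={"query": "Explain AI"}):
        print(chunk.token, end="", flush=True)

asyncio.run(main())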

Visualization

Generate visual diagrams of your graphs:

from promptmesh.visualization import render_mermaid

mermaid_code = render_mermaid(graph)
print(mermaid_code)
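
The returned string is plain Mermaid markup, so it can be pasted into any Mermaid renderer (for example mermaid.live) or dropped into a mermaid code block in GitHub-flavored Markdown.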

πŸ“– Documentation


πŸ—οΈ Core Concepts

Graphs

A prompt graph is a directed graph with four parts, combined in the sketch after this list:

  • Nodes: Operations (LLM calls, embeddings, transforms)
  • Edges: Connections with optional conditions
  • Entry nodes: Starting points
  • Exit nodes: Final outputs
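
The Quick Start showed a single node; a minimal sketch combining all four parts, using only the Graph, LLMNode, and Edge calls shown earlier:

from promptmesh import Graph, LLMNode, Edge

# Two nodes connected by a conditional edge
graph = Graph("triage", "Triage Example")
graph.add_node(LLMNode(
    id="classifier",
    prompt_template="Classify: {{query}}",
    model="gpt-4"
))
graph.add_node(LLMNode(
    id="handler",
    prompt_template="Handle: {{query}}",
    model="gpt-4"
))
graph.add_edge(Edge(
    source_id="classifier",
    target_id="handler",
    condition_type="dsl",
    condition="output.category == 'technical'"
))
graph.set_entry_nodes(["classifier"])
graph.set_exit_nodes(["handler"])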

Node Types

  • LLM Node: Execute prompts against language models
  • Embedding Node: Generate text embeddings
  • Retrieval Node: Vector similarity search
  • Transform Node: Custom Python functions (see the sketch after this list)
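
For the transform type, a sketch of wrapping a plain function as a node; the TransformNode name and its fn parameter are illustrative assumptions, as this README doesn't show the exact constructor:

# Illustrative only: TransformNode and its `fn` argument are assumed,
# not confirmed by this README.
from promptmesh import TransformNode

def to_upper(text: str) -> str:
    return text.upper()

graph.add_node(TransformNode(id="shout", fn=to_upper))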

Execution Context

State maintained across node executions:

  • User variables
  • Node outputs
  • Execution trace
  • Cache statistics

πŸ› οΈ Development

Setup

# Clone repository
git clone https://github.com/abhineeshpriyam/promptmesh.git
cd promptmesh

# Install with Poetry
poetry install

# Or with pip (editable)
pip install -e ".[dev]"

Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=promptmesh --cov-report=html

# Run specific test
pytest tests/unit/test_graph.py

Code Quality

# Format code
black promptmesh tests

# Lint
ruff check promptmesh

# Type check
mypy promptmesh

🀝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Quick Contribution Guide

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Run tests (pytest)
  5. Commit your changes (git commit -m 'Add amazing feature')
  6. Push to the branch (git push origin feature/amazing-feature)
  7. Open a Pull Request

πŸ“ Examples

Customer Support Bot

from promptmesh import GraphBuilder  # import path assumed, matching Graph/LLMNode above

graph = (
    GraphBuilder("support")
    .add_llm_node("classifier", prompt="Classify: {{query}}")
    .add_llm_node("tech_response", prompt="Technical answer: {{query}}")
    .add_llm_node("general_response", prompt="General answer: {{query}}")
    .add_conditional_edge("classifier", "tech_response", 
                         condition="output.type == 'technical'")
    .add_conditional_edge("classifier", "general_response",
                         condition="output.type == 'general'")
    .build()
)

RAG Pipeline

graph = (
    GraphBuilder("rag")
    .add_embedding_node("embed_query", model="text-embedding-3-small")
    .add_retrieval_node("search", vector_store="qdrant", top_k=5)
    .add_llm_node("generate", prompt="Answer using context: {{docs}}")
    .add_edge("embed_query", "search")
    .add_edge("search", "generate")
    .build()
)
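
Executing the pipeline follows the same pattern as the Quick Start. A sketch, assuming adapters configured as in the Configuration section below (the qdrant vector store referenced above would also need to be reachable):

result = graph.run(
    inputs={"query": "What does promptmesh do?"},
    adapters=adapters,  # the "llm"/"embedding" adapters from the Configuration section
)
print(result["generate"])  # output of the exit node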

More examples are available in the examples/ directory.


πŸ”§ Configuration

Environment Variables

# LLM Provider Keys
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...

# Optional: Redis for L2 caching
export REDIS_URL=redis://localhost:6379/0

Provider Configuration

from promptmesh.adapters import OpenAIAdapter, AnthropicAdapter

adapters = {
    "llm": OpenAIAdapter(
        api_key="sk-...",
        default_model="gpt-4",
        timeout=30
    ),
    "embedding": OpenAIAdapter(
        api_key="sk-...",
        default_model="text-embedding-3-small"
    )
}
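
At execution time this dict is passed to graph.run(adapters=...), exactly as in the Hello World example; the "llm" and "embedding" keys are presumably resolved by the node types that consume them.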

πŸ“Š Performance

  • Latency: <10ms per node (cached), <200ms (cache miss)
  • Cache Hit Rate: 70-90% typical
  • Memory: ~100MB base + ~10MB per cached result
  • Throughput: Limited by LLM provider rate limits

πŸ—ΊοΈ Roadmap

  • Visual graph editor (web UI)
  • More LLM providers (Google, Cohere, etc.)
  • Built-in observability dashboard
  • Graph versioning and A/B testing
  • Distributed execution support
  • Plugin system for custom nodes

πŸ“œ License

MIT License - see LICENSE file for details.

Copyright (c) 2025 Abhineesh Priyam


πŸ™ Acknowledgments

Inspired by:

  • LangChain's composable approach
  • LlamaIndex's data-centric design
  • Apache Airflow's DAG architecture

πŸ“§ Contact

Abhineesh Priyam


⭐ Star History

If you find promptmesh useful, please star the repository!
