"We are the stories we tell about ourselves." — Paul Ricoeur
"Now let's implement that." — This repository
This. This happens.
Traditional AI agents treat memory as storage. Every experience is data to be retrieved. But human identity doesn't work that way. We don't remember everything—we remember what matters to who we're becoming.
This repository implements narrative identity theory (Ricoeur, 1992) as agent architecture. The result? Agents that:
- Form different identities from identical experiences
- Make decisions based on who they've become, not just what they know
- Use 70% less memory while maintaining more coherent behavior
- Are explainable through narrative rather than statistics
```python
# Three agents experience identical events
experiences = ["error"] * 5 + ["success"] * 3 + ["neutral"] * 2

# Traditional agent after processing:
#   "I have 10 memories in storage"

# Narrative learner agent after processing:
#   "I am a resilient learner who grows through failure"

# Narrative performer agent after processing:
#   "I am struggling but persistent, defined by my successes"

# When facing a new challenge:
learner.decide()    # "This is an opportunity to grow"
performer.decide()  # "This might harm my performance metrics"
```

Same input. Different beings. Different decisions.
```bash
# Clone the repository
git clone https://github.com/michalvalco/narrative-agents.git
cd narrative-agents

# Install dependencies
pip install -r requirements.txt

# Run the identity formation experiment
python examples/identity_formation.py

# Watch agents diverge from identical experiences
python examples/decision_divergence.py

# Run memory efficiency benchmark
python examples/memory_efficiency.py
```

| Traditional Agent | Narrative Agent |
|---|---|
| Stores everything | Selectively remembers |
| Memory = Database | Memory = Story |
| Retrieval by query | Retrieval by relevance |
| Identity = Sum of data | Identity = Interpretation of experience |
| Forgets by overflow | Forgets by irrelevance |
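
As a concrete gloss on the "retrieval by relevance" row, here is a minimal sketch of how such a lookup might be scored. The `Memory` fields and the scoring formula are illustrative assumptions, not the repository's actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Memory:
    content: str
    telos_alignment: float   # how strongly the memory serves the agent's purpose (0..1)
    emotional_weight: float  # how formative the experience felt (0..1)

def retrieve(memories: List[Memory], k: int = 3) -> List[Memory]:
    # Rank by narrative relevance (alignment weighted by salience),
    # not by query-string match against a database.
    return sorted(
        memories,
        key=lambda m: m.telos_alignment * m.emotional_weight,
        reverse=True,
    )[:k]
```

The point of the table row is the ranking criterion: what a memory means for the agent's story, not how well it matches a query.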
```python
from collections import deque

class NarrativeAgent:
    def __init__(self):
        self.narrative_core = deque()  # Core identity memories
        self.peripheral = deque()      # Incidental details
        self.telos = None              # Purpose/goal (shapes interpretation)
        self.virtues = {}              # Character traits (Aristotelian hexis)
```

```bash
python examples/identity_formation.py
```

Watch three agents experience the same events but become entirely different entities based on their teleological orientation.
```bash
python examples/decision_divergence.py
```

See how agents with different narrative identities make opposite decisions when faced with identical situations.
```bash
python examples/memory_efficiency.py
```

Benchmark showing narrative agents use 70% less memory while maintaining higher behavioral coherence.
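
To make the architecture above concrete, here is a minimal, hypothetical sketch of how an experience might be routed into `narrative_core` versus `peripheral`. The `relevance_to_telos` heuristic and the bounded deque sizes are assumptions for illustration, not the repository's actual logic:

```python
from collections import deque

class SketchAgent:
    def __init__(self, telos: str):
        self.telos = telos
        self.narrative_core = deque(maxlen=50)  # identity-defining memories
        self.peripheral = deque(maxlen=20)      # incidental details, quick to fade

    def relevance_to_telos(self, event: str) -> float:
        # Stand-in heuristic: keyword overlap with the telos. A real
        # implementation might use embeddings or a learned appraiser.
        words = set(self.telos.lower().split())
        return len(words & set(event.lower().split())) / len(words)

    def remember(self, event: str) -> None:
        # Events that speak to the agent's purpose join its story;
        # everything else is peripheral and eventually forgotten.
        if self.relevance_to_telos(event) > 0:
            self.narrative_core.append(event)
        else:
            self.peripheral.append(event)

learner = SketchAgent(telos="growth through learning")
learner.remember("made an error while learning the maze")  # joins narrative_core
learner.remember("the lab lights flickered")               # stays peripheral
```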
This isn't arbitrary. It's based on:
- Ricoeur's Narrative Identity: We constitute ourselves through the stories we tell
- Aristotelian Teleology: Purpose shapes perception and memory
- Heideggerian Thrownness: the distinction between what merely happens *to* us and what defines us
- Virtue Ethics: Character (hexis) emerges from repeated intentional action
See docs/philosophical_foundations.md for the full argument.
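
If you want to see what hexis buys you in code, here is a toy sketch (the trait names and tallying scheme are invented for illustration): character as a settled disposition that strengthens through repeated intentional action.

```python
from collections import Counter

class Hexis:
    """Character as settled disposition: strengthened by repetition."""

    def __init__(self):
        self.actions = Counter()

    def practice(self, virtue: str) -> None:
        self.actions[virtue] += 1  # each intentional act reinforces the trait

    def disposition(self, virtue: str) -> float:
        # Strength of a trait relative to the agent's whole history of action
        total = sum(self.actions.values())
        return self.actions[virtue] / total if total else 0.0

character = Hexis()
for _ in range(5):
    character.practice("perseverance")
character.practice("caution")
print(round(character.disposition("perseverance"), 2))  # 0.83 — a persevering agent
```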
- Selective Forgetting: irrelevance detection automatically prunes memories (see the sketch below)
- Interpretive Layer: same event → different meaning, depending on the agent's telos
- Character Emergence: dispositions form from patterns of action and shape future decisions
- Narrative Coherence: memories are connected by meaning, not by timestamp
See docs/technical_implementation.md for details.
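
Below is a rough, non-authoritative sketch of the first two bullets. The decay constant, threshold, and keyword tests are invented for illustration; docs/technical_implementation.md describes the actual mechanism:

```python
from typing import List, Tuple

def interpret(event: str, telos: str) -> str:
    # Interpretive layer: the same event yields different meaning per telos.
    if "error" in event:
        return "an opportunity to grow" if "learning" in telos else "a threat to my metrics"
    return "a neutral moment"

def prune(memories: List[Tuple[float, float, str]], now: float,
          half_life: float = 3600.0, threshold: float = 0.1) -> List[Tuple[float, float, str]]:
    # Selective forgetting: each memory is (timestamp, relevance, content).
    # Relevance decays exponentially with age; anything that falls below
    # the threshold is judged irrelevant and silently forgotten.
    kept = []
    for ts, relevance, content in memories:
        decayed = relevance * 0.5 ** ((now - ts) / half_life)
        if decayed >= threshold:
            kept.append((ts, decayed, content))
    return kept

print(interpret("error in trial 7", telos="growth through learning"))
# -> "an opportunity to grow"
```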
- Conversational AI: Agents that maintain narrative consistency across interactions
- Game NPCs: Characters with believable memory and personality formation
- Robotic Learning: Robots that form identity through experience
- Therapeutic Bots: AI that understands narrative identity in mental health contexts
We're building minds. Or something like minds. Or something we'll treat like minds.
The question isn't whether AI can be conscious. It's whether we can build systems that behave coherently, explainably, and efficiently. Narrative identity offers a framework that's both philosophically rigorous and computationally practical.
Found a bug in the philosophy? Submit a PR.
Disagree with the metaphysics? Open an issue.
Want to add Kant? ...please don't.
- Python 3.8+
- A willingness to read docstrings referencing dead philosophers
- Patience with variable names like `telos` and `hexis`
- Optional: Philosophy degree (or at least Wikipedia)
```bash
pip install -r requirements.txt
```

Or for development:
```bash
git clone https://github.com/michalvalco/narrative-agents.git
cd narrative-agents
pip install -e .
```

If you use this in academic work:
```bibtex
@software{narrative_agents,
  author = {Valčo, Michal},
  title  = {Narrative Agents: When Philosophy Drives Architecture},
  year   = {2024},
  url    = {https://github.com/michalvalco/narrative-agents},
  note   = {Yes, the code comments reference Aquinas. Deal with it.}
}
```

MIT License - Because philosophy should be free, even if it's pretentious.
Michal Valčo
Professor of AI Ethics & Philosophy
"Making Aristotle compile since 2024"
- LinkedIn: /in/michalvalco
- GitHub: @michalvalco
- Email: michal.valco@uniba.sk
- Paul Ricoeur (for narrative identity theory)
- Aristotle (for everything, really)
- My debugging rubber duck (for listening to my Heideggerian rants)
- Stack Overflow (for when philosophy fails and you just need it to work)
"The unexamined code is not worth running." — Socrates, probably