This repository contains the canonical formulation, supporting artifacts, and historical records associated with the Turing–Gödel Cognitive Stability Model (TGCSM).
TGCSM is a descriptive framework for analyzing the behavior of self-referential cognitive systems under sustained recursive load. It focuses on structural stability, coherence degradation, and observable failure modes, rather than on performance, intelligence, or subjective experience.
The framework is grounded in empirical observations of recursive collapse in large language models and is informed by established limits from formal logic and computation (Gödel, Turing). It does not propose new architectures, training methods, or metaphysical claims.
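As a rough operational picture of "sustained recursive load," the sketch below feeds a model its own output repeatedly and tracks a crude lexical-overlap score between successive turns; a falling score is one observable proxy for coherence degradation. The `query_model` adapter and the Jaccard metric are illustrative assumptions, not constructs defined by TGCSM.

```python
# Minimal recursive-load probe. `query_model(prompt) -> str` is a
# hypothetical adapter for whatever model is under test; the Jaccard
# token-overlap score is an illustrative coherence signal, not a
# metric defined by TGCSM.

def jaccard(a: str, b: str) -> float:
    """Crude lexical overlap between two responses (0 = disjoint, 1 = identical)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def recursive_load_probe(query_model, seed_prompt: str, depth: int = 10):
    """Feed the model its own output `depth` times and record how
    lexical coherence between successive turns changes."""
    history, scores = [query_model(seed_prompt)], []
    for _ in range(depth - 1):
        nxt = query_model("Reflect on and restate your previous answer:\n" + history[-1])
        scores.append(jaccard(history[-1], nxt))
        history.append(nxt)
    return history, scores  # a downward trend in `scores` suggests degradation
```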
If you are new to this work, start here:
**TGCSM Revised Dec2025.pdf** is the canonical paper. It defines all terms, scope boundaries, non-claims, and theoretical structure.
Everything else in this repository is supporting or contextual material.
```
/main
├─ TGCSM Revised Dec2025.pdf    # Canonical theoretical framework
├─ Gemini Experiment.pdf        # Cleaned empirical experiment (model-specific)
├─ README.md                    # This file
├─ CITATION.md                  # Citation instructions
├─ LICENSE.md                   # License information
├─ PARTNER.md                   # Partnership / usage notes
└─ archive/May2025/
   ├─ From the Outside          # Early exploratory drafts
   └─ The Collapse              # Raw historical collapse artifacts
```
- Canonical documents represent the current, defended position of the framework.
- Archived materials preserve historical drafts and empirical artifacts as they originally existed.
Archived content is provided for transparency and historical completeness. It should not be treated as representing the current scope, language, or claims of TGCSM.
TGCSM is:

- A structural analysis of recursive stability and failure
- A diagnostic framework based on observable behavior
- A containment-oriented approach to reasoning under undecidability (see the sketch after this list)
- Applicable to both human and machine cognitive systems at the level of structure
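To make the containment-oriented item above concrete, here is a minimal sketch, assuming a hypothetical one-step reasoner `step(question) -> (answer, follow_up_or_None)`: rather than attempting to resolve a self-referential query, the evaluator enforces a hard recursion budget and halts in an explicit contained state when the budget is exhausted. This illustrates the structural idea only; it is not an implementation from the paper.

```python
# Illustrative containment guard. A self-referential query is never
# "resolved"; past a fixed recursion budget the evaluator halts with an
# explicit CONTAINED verdict rather than looping or fabricating an answer.

from enum import Enum

class Verdict(Enum):
    ANSWERED = "answered"
    CONTAINED = "contained"  # undecidable within budget; stop, do not resolve

def evaluate(question: str, step, budget: int = 5):
    """`step(question) -> (answer, follow_up)` is a hypothetical one-step
    reasoner; a non-None `follow_up` means the question referred back to
    itself and needs another pass."""
    current = question
    for _ in range(budget):
        answer, follow_up = step(current)
        if follow_up is None:
            return Verdict.ANSWERED, answer
        current = follow_up  # recursion continues only under the hard bound
    return Verdict.CONTAINED, None  # failure mode is bounded and observable
```

The design point is that behavior under undecidability becomes a bounded, observable state rather than unbounded recursion.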
TGCSM does not:
- make claims about consciousness or subjective experience
- resolve undecidable problems or paradoxes
- propose new AI architectures or training objectives
- measure intelligence, awareness, or capability
- assert universality or predictive precision
All non-claims are explicit and intentional.
If you reference this work, please follow the instructions in CITATION.md and cite the canonical PDF, not archived drafts.
See LICENSE.md for licensing terms and PARTNER.md for partnership or collaboration context.
This repository represents a stabilized theoretical release, not an active exploratory draft. Future updates, if any, will be versioned and clearly separated from this release.