

Vol. 1, Issue 1.1, 2026
LLM Model Autophagy
Researchers have long observed that language models degrade when trained on their own outputs, but no quantitative framework existed to predict decay rates or to evaluate whether governance interventions could arrest the damage. This paper presents the first empirically validated model of epistemic decay in recursive AI training, formalizing the phenomenon through 21 coupled state-transition equations across four temporal scales and testing the framework through 110 controlled GPT-2 retraining observations. The central finding is that provenance-based governance not only prevents collapse but partially reverses integrity loss: corpus integrity under governance reached 0.894, versus 0.489 under uncontrolled conditions (p = 0.004), and the calibrated stability condition (FIF × BRF = 0.179 < 1.0) provides the first empirical threshold for sustainable AI retraining. The work bridges a critical gap between AI safety research, which has documented model collapse, and AI governance practice, which lacks operational metrics for when and how to intervene.
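Because the stability condition is a simple product bound, it can be checked numerically. The Python sketch below is illustrative only: the threshold FIF × BRF < 1.0 and the calibrated value 0.179 come from the abstract, but the geometric-decay dynamics in integrity_trajectory, the example inputs, and all function names are assumptions made here to show why a product below 1.0 implies recovery rather than collapse.

def is_stable(fif: float, brf: float) -> bool:
    """Stability condition from the paper: sustainable retraining
    requires FIF * BRF < 1.0 (calibrated value: 0.179)."""
    return fif * brf < 1.0

def integrity_trajectory(i0: float, fif: float, brf: float,
                         generations: int) -> list[float]:
    """Hypothetical geometric-decay reading (an assumption, not the
    paper's 21-equation model): each retraining generation scales the
    gap between current integrity and perfect integrity (1.0) by
    FIF * BRF. A product below 1.0 shrinks the gap, so integrity
    recovers toward 1.0; a product at or above 1.0 grows it."""
    gap = 1.0 - i0
    trajectory = [i0]
    for _ in range(generations):
        gap *= fif * brf
        trajectory.append(max(0.0, 1.0 - gap))
    return trajectory

if __name__ == "__main__":
    # Example factors chosen so their product equals the calibrated 0.179.
    fif, brf = 0.5, 0.358
    print(is_stable(fif, brf))                       # True: 0.179 < 1.0
    print(integrity_trajectory(0.489, fif, brf, 5))  # climbs toward 1.0

Under this reading, a product of 0.179 compounds away the integrity gap each generation, while any product at or above 1.0 compounds the loss instead; how closely this toy dynamic tracks the paper's full state-transition model is not something the abstract establishes.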

