What Is This Tool?
The Anti-Autophagy Monitor is an interactive dashboard that demonstrates how large language models (LLMs) degrade when iteratively trained on their own outputs, a phenomenon called model autophagy. The simulator projects the trajectory of six key state variables across multiple retraining generations, comparing an uncontrolled baseline against a governed scenario with user-adjustable intervention parameters.
The simulator is built on a formal 21-equation mathematical model validated through controlled GPT-2 retraining experiments. The corpus integrity decay trajectory (the primary metric) is empirically calibrated (R² = 0.98). Governance efficacy parameters are theoretical projections pending validation in Phase 3b.
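The core idea the dashboard visualizes can be sketched in a few lines. This is a minimal illustrative model, not the tool's actual 21-equation system: it assumes corpus integrity decays by a fixed fraction each retraining generation, with a hypothetical `governance` parameter scaling that decay down. All names and parameter values here are assumptions for illustration only.

```python
def project_corpus_integrity(generations, decay_rate=0.15, governance=0.0):
    """Project corpus integrity across retraining generations.

    Illustrative sketch only. governance in [0, 1] scales down the
    per-generation decay (0 = uncontrolled baseline, 1 = full mitigation).
    """
    integrity = 1.0  # start from a fully human-authored corpus
    trajectory = [integrity]
    effective_decay = decay_rate * (1.0 - governance)
    for _ in range(generations):
        # each generation, a fraction of remaining integrity is lost
        integrity *= (1.0 - effective_decay)
        trajectory.append(integrity)
    return trajectory

baseline = project_corpus_integrity(10)                  # uncontrolled scenario
governed = project_corpus_integrity(10, governance=0.6)  # with intervention
```

Comparing the two trajectories mirrors the dashboard's baseline-versus-governed view: the governed run retains more integrity at every generation because the intervention reduces the effective decay rate.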