

At The Center for Ethical AI, we believe that knowledge should be both accessible and accountable. The works published here are part of an open-access initiative designed to advance ethical AI research, governance, and education. Each publication is made available to support academic collaboration, informed dialogue, and responsible innovation in artificial intelligence.
Monographs
Licensing and Use:
All open-access publications are released under a Creative Commons Attribution–NonCommercial (CC BY-NC) license.
This means you are free to download, share, and cite the materials with proper attribution, but commercial reuse or modification is prohibited without express written permission from the author or publisher.
Authors:
If you have a monograph you would like to submit for posting to this repository, please contact us for submission guidelines and instructions. If the work has not been previously published, we can assign an ISBN or DOI for reference and discoverability.

Ethical AI Integration articulates a lifecycle-governance approach for deploying AI systems with epistemic integrity. It synthesizes bias, misinformation, and error diagnostics with standards-aligned auditing, continuous monitoring, and human oversight to translate ethical intent into measurable, operational AI accountability across real-world deployments.

SymPrompt+ is a structured, standards-aligned prompting framework for governing human–AI interactions. It operationalizes ethical constraint, epistemic clarity, and auditability through tagged prompt syntax, multi-turn evaluation, and lifecycle integration, reducing bias, misinformation, and error amplification in large language model outputs.

Academic’s Guide to Ethical AI and Prompt Engineering synthesizes ethical theory, governance standards, and applied prompt design to guide responsible use of large language models. It offers lifecycle-aware methods to reduce bias, misinformation, and error while preserving transparency, accountability, and scholarly integrity in human–AI interaction.
Whitepapers & Policy Briefs
Call to Engagement:
We invite educators, researchers, and industry practitioners to engage with and contribute to this evolving collection of frameworks, toolkits, and applied guidance on ethical AI.
Article Submissions:
We welcome unsolicited whitepapers, research briefs, and policy analyses aligned with the mission of this repository. Prospective contributors are encouraged to contact us to discuss scope, fit, and submission guidelines.

Adaptive AI Governance as a Lifecycle Control System
DOI: 10.5281/zenodo.19453393
This policy brief presents AI governance as a lifecycle control system rather than static compliance. It outlines an integrated, standards-aligned framework that governs AI from data sourcing through deployment using continuous monitoring, feedback loops, and enforceable constraints to ensure accountability, resilience, and trustworthy AI operations.

LLM Model Autophagy
DOI: 10.5281/zenodo.19453508
This paper presents the first empirically validated framework for quantifying and governing epistemic decay in recursive AI training ecosystems. Through 110 controlled GPT-2 retraining observations across 10 generations, we demonstrate that synthetic data contamination produces exponential integrity decay (α = 1.93, R² = 0.98) and that provenance-based governance can partially reverse this damage, maintaining corpus integrity at 0.894 versus 0.489 under uncontrolled conditions (p = 0.004). The calibrated framework bridges AI safety research and operational governance, providing measurable thresholds aligned with ISO/IEC 42001 and NIST AI RMF standards.
Methods, Tools, & Artifacts

Demo Model
The Bias Cascade Visualizer is an interactive educational tool that demonstrates a critical but often overlooked phenomenon in artificial intelligence: how small biases in data can become catastrophically large biases in deployed AI systems through a process of multiplicative amplification.
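The core idea behind multiplicative amplification can be sketched in a few lines. This is an illustrative toy model, not the Visualizer's actual implementation; the stage names and factor values are hypothetical.

```python
# Hypothetical sketch of multiplicative bias amplification across
# pipeline stages. Stage factors and values are illustrative only.
def amplified_bias(initial_bias, stage_factors):
    """Compound a small initial bias through successive pipeline stages."""
    bias = initial_bias
    for factor in stage_factors:
        bias *= factor  # each stage multiplies, not adds, the distortion
    return bias

# A 2% sampling bias passing through four stages that each amplify it
# (e.g. labeling, training, ranking, feedback loop):
stages = [1.5, 2.0, 1.8, 2.2]
final = amplified_bias(0.02, stages)
print(f"{final:.3f}")  # prints 0.238 — roughly 12x the input bias
```

Because the stages compound rather than add, a bias too small to notice at ingestion can dominate behavior by deployment, which is the phenomenon the tool visualizes.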

Demo Model
This interactive demo operationalizes ALAGF as an evidence-first AI governance system, showing how AI augments but never replaces human auditors. It visualizes lifecycle audits, authority-tagged decisions, and a regulator-ready ledger, making AI governance transparent, auditable, and defensible.

Demo Model
The Anti-Autophagy Monitor is an interactive dashboard that demonstrates how large language models (LLMs) degrade when iteratively trained on their own outputs, a phenomenon called model autophagy. The simulator projects the trajectory of six key state variables across multiple retraining generations, comparing an uncontrolled baseline against a governed scenario with user-adjustable intervention parameters.
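The baseline-versus-governed comparison can be sketched with a single state variable. This is a simplified illustration, not the Monitor's actual model: the decay rate, the intervention parameter, and the "provenance filtering" interpretation are all assumptions for the sake of the example.

```python
# Illustrative sketch (not the Monitor's actual model): project one
# state variable, "corpus integrity", over retraining generations.
# Decay rate and intervention strength are assumed parameters.
def project_integrity(generations, decay=0.25, intervention=0.0):
    """Integrity under recursive retraining. `intervention` in [0, 1]
    models provenance controls that offset synthetic contamination."""
    integrity = 1.0
    trajectory = [integrity]
    for _ in range(generations):
        effective_decay = decay * (1.0 - intervention)
        integrity *= (1.0 - effective_decay)  # compounding loss per generation
        trajectory.append(integrity)
    return trajectory

baseline = project_integrity(10)                    # uncontrolled retraining
governed = project_integrity(10, intervention=0.8)  # strong provenance controls
print(round(baseline[-1], 3), round(governed[-1], 3))
```

Even this toy version reproduces the qualitative pattern the dashboard displays: uncontrolled retraining collapses integrity within a few generations, while governed retraining degrades far more slowly.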
