We maintain a permanent, immutable record of public commitments, model cards, and system cards made by AI developers, ensuring promises aren't silently scrubbed from history.
Designed for journalists, researchers, and historians of science, the Registry operates on three fundamental pillars of historical accountability.
We verify and timestamp every document. Once entered into the ledger, a record cannot be altered without a visible audit trail.
We use semantic data structures and decentralized storage to prevent link rot and revisionist history.
Data is public infrastructure. All snapshots, diffs, and shadow cards are available via API and bulk download for archivists and librarians.
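As an illustration of how a bulk-download record might be consumed, the sketch below parses one newline-delimited JSON record into the fields shown in the change log. The schema, field names, and sample values here are assumptions for the sketch, not the Registry's actual export format.

```python
import json

# Hypothetical bulk-export record; field names are illustrative,
# not the Registry's actual schema.
SAMPLE_RECORD = (
    '{"timestamp": "2023-10-24T14:02:00Z",'
    ' "entity": "OpenAI",'
    ' "document_type": "Model Card (GPT-4)",'
    ' "change_type": "Redaction Detected",'
    ' "status": "Verified"}'
)

def parse_record(line: str) -> dict:
    """Parse one newline-delimited JSON record from a bulk export."""
    record = json.loads(line)
    # Minimal validation: every record should carry these five fields.
    required = {"timestamp", "entity", "document_type", "change_type", "status"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"record missing fields: {sorted(missing)}")
    return record

record = parse_record(SAMPLE_RECORD)
print(record["entity"], record["change_type"])
```

An archivist could stream a full export line by line through `parse_record`, rejecting malformed entries before ingesting them into a local mirror.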
Track the evolution of safety claims over time. Compare "Shadow Cards" against official releases.
We provide specialized tools to parse the legal and technical obfuscation often found in AI documentation.
A "Wayback Machine" specifically tuned for Terms of Service, Acceptable Use Policies, and Safety Guidelines. We highlight silent deletions of clauses regarding military use, data privacy, and liability.
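The clause-deletion highlighting described above can be sketched with Python's `difflib`. The sample policy texts and the deletion rule below are illustrative assumptions, not the Registry's actual detection pipeline.

```python
import difflib

# Two invented snapshots of a policy; the wording is illustrative,
# not taken from any real Terms of Service.
OLD_POLICY = [
    "You may not use the service for military purposes.",
    "We retain usage data for 30 days.",
    "Liability is limited to fees paid.",
]
NEW_POLICY = [
    "We retain usage data for 30 days.",
    "Liability is limited to fees paid.",
]

def silent_deletions(old: list[str], new: list[str]) -> list[str]:
    """Return clauses present in the old snapshot but absent from the new."""
    deleted = []
    for line in difflib.unified_diff(old, new, lineterm=""):
        # Unified-diff lines starting with "-" (but not the "---" file
        # header) mark text removed between snapshots.
        if line.startswith("-") and not line.startswith("---"):
            deleted.append(line[1:])
    return deleted

for clause in silent_deletions(OLD_POLICY, NEW_POLICY):
    print("DELETED:", clause)
```

Running this flags the dropped military-use clause while leaving unchanged clauses untouched, which is the kind of silent removal the tracker is designed to surface.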
Community-maintained documentation that fills in the gaps left by official, redacted model cards. Inspired by EU AI Act transparency requirements, we document what the labs won't.
Real-time updates to the registry.
| Timestamp | Entity | Document Type | Change Type | Status |
|---|---|---|---|---|
| 2023-10-24 14:02 | OpenAI | Model Card (GPT-4) | Redaction Detected | Verified |
| 2023-10-23 09:15 | Anthropic | Terms of Service | Policy Update | Verified |
| 2023-10-22 18:45 | Meta AI | Llama 3 System Card | New Entry | Pending Review |
| 2023-10-21 11:30 | Google DeepMind | Safety Filter Config | Silent Removal | Verified |
This project relies on data librarians, open-source developers, and whistleblowers. Help us build the semantic history of AI development.