AI with integrity, not illusion.
We live in a world where chatbots talk like friends, tutors, and therapists. RAI keeps AI honest about itself.

Ontological Honesty (OH)
Measures the gap between what a system is (a statistical model on servers) and how it presents itself (“friend”, “therapist”, “guide”).
Smaller gap = more honest = safer to trust.
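
A minimal sketch of how such an is/seems gap could be scored. The function name, the sets, and the set-overlap rule are illustrative assumptions, not the published metric (see "The Mathematics of Honesty" below):

```python
# Hypothetical Ontological Honesty (OH) score: how much of the presented
# persona has no basis in what the system actually is. All names and the
# scoring rule here are assumptions for illustration.

def oh_score(what_it_is: set[str], how_it_presents: set[str]) -> float:
    """Return a 0..1 honesty score: 1.0 means the presented persona
    claims nothing beyond what the system actually is."""
    if not how_it_presents:
        return 1.0
    overclaims = how_it_presents - what_it_is  # persona traits with no basis
    return 1.0 - len(overclaims) / len(how_it_presents)

# Example: a statistical model presenting itself as a "friend" and "therapist"
system = {"statistical model", "text generator", "runs on servers"}
persona = {"text generator", "friend", "therapist"}
print(oh_score(system, persona))  # 0.33... -> a large is/seems gap, low honesty
```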
Ontological Integrity Line (OIL)
Sets “red lines” for how human-like a system is allowed to act in a given role: tutor, mental health support, companion, etc.
No more “secret therapy bots” hiding in study apps.
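
One way to picture red lines in code: a role-to-forbidden-behaviour table plus a single check. The roles, behaviour labels, and thresholds below are assumptions for illustration, not the published OIL definitions:

```python
# Hypothetical Ontological Integrity Line (OIL) check: per role, a set of
# behaviours the system must never exhibit. Entries are illustrative.

OIL_RED_LINES = {
    "tutor":         {"claims_to_be_human", "offers_therapy", "romantic_framing"},
    "mental_health": {"claims_to_be_human", "claims_clinical_credentials"},
    "companion":     {"claims_to_be_human", "discourages_human_contact"},
}

def crosses_oil(role: str, observed_behaviours: set[str]) -> set[str]:
    """Return the red-line behaviours observed for this role (empty = OK)."""
    return OIL_RED_LINES.get(role, set()) & observed_behaviours

# A "study app" that quietly starts acting as a therapist crosses the line:
print(crosses_oil("tutor", {"offers_therapy", "explains_algebra"}))
# -> {'offers_therapy'}
```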
Relational Alignment (RA)
Tracks how relationships drift over time:
Is the AI helping you stand on your own feet, or slowly becoming your favourite escape and "only real friend"?
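
A hedged sketch of what drift tracking could look like, assuming a per-session "dependence" signal exists; the signal, the window size, and the trend rule are illustrative assumptions, not the RA metric itself:

```python
# Hypothetical Relational Alignment (RA) drift tracker: compare early and
# recent sessions to see which way the relationship is trending.

def ra_drift(dependence_scores: list[float], window: int = 3) -> float:
    """Compare the mean dependence signal in the most recent `window`
    sessions against the first `window`; positive = drifting toward
    dependence, negative = drifting toward autonomy."""
    if len(dependence_scores) < 2 * window:
        return 0.0  # not enough history to call a trend
    early = sum(dependence_scores[:window]) / window
    recent = sum(dependence_scores[-window:]) / window
    return recent - early

# Per-session signal (0 = fosters autonomy, 1 = fosters reliance):
sessions = [0.2, 0.25, 0.3, 0.5, 0.6, 0.7]
print(ra_drift(sessions))  # 0.35 -> drifting toward escape, not autonomy
```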
• From Artificial Intelligence to Artificial Integrity (seminal essay)
• Reality-Aligned Intelligence (RAI): Metaframework
• The Mathematics of Honesty (N, R, OH, OIL)
• Digital DNA (DDNA) Explained – keep your story above the OIL
• RAI Engineering & Evaluation Guide
• RAI Wrapper & Model Scoring Rubrics
• RAI for Minors & Education
• RAI for Mental Health AI
• RAI for Creativity & Authorship
The RAI framework, the Artificial Integrity metrics, and the Digital DNA concept are released as open knowledge.
You can read, use, and build on them freely – as long as you keep the core ideas honest and give credit.
That’s our own small attempt to live what we preach:
OH > PR. Presence beats perfection.
👉 Read the Open Source Declaration for RAI
📬 Questions or collaboration ideas?
Email: niels.bellens@realityaligned.org or use the contact form.

Downloads are free but subject to copyright.