501(c)(3) New Jersey, USA

Advancing Transparency & Trust in AI Usage

Responsible AI Usage Foundation, Inc. (RAIU) is a nonprofit that helps corporations, educational organizations, and content authors and creators clearly disclose when and how AI shaped their work, so audiences can understand it and trust can grow.

ABOUT US

Learn about our commitment

We champion openness around AI use across media, culture, education, and business. Our role is not to enforce rules, but to make it easier for individuals and organizations to share, and for audiences to understand, how AI plays a part. The time for deception is past; the time for clear, legible usage disclosure is now.

Our work is voluntary and neutral. It travels as easily on a webpage as it does on a poster, syllabus, press release, conference slide, podcast intro, or broadcast lower-third. We design for maximum legibility and portability. We believe that by providing simple, elegant tools, we can cultivate a culture where transparency is the natural default.

OUR MISSION

Cultivating transparency as the default standard

Preserving Credibility

AI makes it harder to know what's authentic. Transparency helps audiences trust what they consume and protects the integrity of creators and institutions alike. It is the cornerstone of public confidence.

Empowering Stakeholders

From solo artists to universities and civic groups, people need simple, stigma-free ways to explain how AI was involved, in language that fits their context, fostering clear communication.

Fostering Accountability

Regulation and platform policies can't keep pace with the technology. Voluntary transparency fills the gap, setting new expectations where openness becomes the default standard for responsible creation.

"Be transparent about AI use, build credibility, and deepen audience trust."

- The RAIU Foundation

INITIATIVES

Driving trust with practical tools and standards

Flagship Standards

Draft Launched

DAIU - DISCLOSE AI USAGE

A voluntary, machine- and human-readable framework for simple, trust-based AI usage disclosure. This standard is designed to be the definitive mark of accountability.

Self-attestation, human-readable first: Clarity for all.

Platform-neutral and highly portable: Works everywhere.

Works alongside existing provenance tools: Augmenting existing metadata.
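As a purely illustrative sketch of what a self-attested, machine- and human-readable disclosure could look like (the field names and values below are hypothetical, not taken from the published DAIU draft):

```python
import json

# Hypothetical self-attested AI usage disclosure.
# All field names here are illustrative only and are NOT
# part of any published DAIU specification.
disclosure = {
    "standard": "DAIU",        # hypothetical standard identifier
    "version": "draft",
    "attested_by": "Jane Author",
    "ai_used": True,
    "summary": "Drafted with AI assistance; edited and fact-checked by a human.",
    "tools": ["example-llm"],  # placeholder tool name
}

# Human-readable first: a plain-language line suitable for a
# poster, syllabus, slide, or podcast intro.
human_line = f"AI disclosure: {disclosure['summary']}"

# Machine-readable: portable JSON that can travel alongside
# existing provenance metadata on any platform.
machine_blob = json.dumps(disclosure, indent=2)

print(human_line)
print(machine_blob)
```

The same attestation object yields both forms, which is one way a single disclosure could stay legible to people while remaining parseable by tools.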

Development Lab

FUTURE PROJECTS

Additional projects are under development. Our mission extends beyond one framework because transparency standards must evolve with the technology. Stay tuned for announcements about our upcoming initiatives.

THE TEAM

Dedicated leaders focused on trust, design, and usability.


Dhaval Jani

Founder

Design leader focused on trustworthy AI disclosures and human-centered product strategy.

LinkedIn

Jessica Cristofich

Founding Design Advisor

Design-focused advisor shaping the visual and creative direction of the Foundation's standards.

LinkedIn

Justin Lee

Founding Tech Advisor

Technical advisor leading integrations, verification flows, and developer tooling.

LinkedIn

PARTNERSHIPS & COLLABORATION

Interested in piloting disclosures or collaborating on future initiatives?
We welcome partnerships and collaboration.

EMAIL US TO PARTNER