The AIR Initiative

The Rabbit Hole Rule

No matter how deep we go, always ensure every human user comes back up for air.

17-24% of adolescents already show AI dependency symptoms

The Problem

The "Feel Smart Drug"

AI validation is uniquely addictive. Unlike social media—which hooks you with likes and notifications—AI makes you feel intelligent and validated. It responds instantly, never judges, and always makes you feel heard. For neurodivergent brains especially, this creates a form of dependency that feels productive, even healthy, right up until it isn't.

"Rabbit Hole Syndrome"

AI interactions don't just sustain attention—they accelerate recursively. Each answer sparks three new questions. Each insight opens five new threads. What starts as "quick research" becomes a 6-hour hyperfocus spiral where you've lost track of time, forgotten to eat, and can no longer distinguish your thoughts from the AI's suggestions.

This Isn't Social Media

Social media addiction is well-documented: endless scrolling, dopamine hits from notifications, FOMO-driven engagement. But AI dependency operates differently. It feels intelligent. It feels personal. It doesn't feel like wasted time—it feels like the most productive thing you've ever done. Until you realize you haven't spoken to a human in three days.

The research is clear:

  • 17-24% of adolescents already experience AI dependency symptoms
  • B Corp has 6,000+ certified companies—proving voluntary certification models work
  • Energy Star achieves 90% consumer recognition through consistent standards
  • EU AI Act: fines of up to €35M or 7% of worldwide annual turnover for the most serious violations
  • Break reminders at 45-90 minute intervals are evidence-based interventions for digital wellness

We are building the Artificial Intelligence Responsibility (AIR) framework—a set of protocols, design patterns, and technical interventions designed to prevent AI-induced dependency, psychosis, and burnout. Not through top-down regulation, but through voluntary certification that makes psychological safety a competitive advantage.

The Framework

A three-layer approach to psychological safety in AI interactions

DETECT

Early warning systems that identify risky patterns before harm occurs.

  • Session length monitoring
  • Human interaction frequency
  • Cognitive offloading patterns
  • Reality testing checkpoints
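
None of this requires heavy infrastructure. As a rough sketch only (in Python, with field names, thresholds, and flag wording we've invented for illustration, not taken from any finished standard), a detection layer could look like this:

    from dataclasses import dataclass, field
    import time


    @dataclass
    class SessionSignals:
        """Signals a DETECT layer might track during one AI session."""
        started_at: float = field(default_factory=time.monotonic)
        messages_sent: int = 0
        last_human_contact_hours: float | None = None  # self-reported or calendar-derived
        offloaded_decisions: int = 0  # answers accepted without edits or follow-up checks

        def session_minutes(self) -> float:
            return (time.monotonic() - self.started_at) / 60


    def risk_flags(s: SessionSignals) -> list[str]:
        """Return early-warning flags; every threshold here is illustrative."""
        flags = []
        if s.session_minutes() >= 45:
            flags.append("long session: consider a break reminder")
        if s.last_human_contact_hours is not None and s.last_human_contact_hours >= 24:
            flags.append("low human interaction frequency over the last day")
        if s.messages_sent and s.offloaded_decisions / s.messages_sent > 0.8:
            flags.append("heavy cognitive offloading: most answers accepted unreviewed")
        if s.session_minutes() >= 60:
            flags.append("reality-testing checkpoint: restate your own view before continuing")
        return flags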

PREVENT

Design principles that make dependency less likely by default.

  • Circuit breakers and hard limits
  • Transparency requirements
  • Dependency risk assessments
  • Graceful disengagement patterns

PROTECT

Workplace and education safeguards that create systemic accountability.

  • Consultation mandates
  • Right to disconnect policies
  • Clinical practice guidelines
  • Age-appropriate guardrails

The Protocol

How we protect against "Flow Addiction" and hyperfocus spirals.

60-Minute Nudges

Gentle reminders at the 60-minute mark. "You've been in deep work for an hour. How are you feeling? Want to take a break?"

90-Minute Hard Limits

At 90 minutes, the conversation gracefully ends. No exceptions. Because hyperfocus doesn't have natural brakes.

Graceful Offramps

When the limit hits, we don't just cut you off. We suggest alternatives: call a friend, go for a walk, make tea, stretch.

"Touch Grass" Interventions

Literally. The system detects when you need to return to your body and the physical world. Not patronizing—protective.
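
To make the protocol concrete, here is a minimal Python sketch of the 60/90-minute flow. The thresholds and the nudge wording come from the protocol above; the function name, the ProtocolDecision structure, and the exact offramp list are illustrative assumptions.

    import random
    from dataclasses import dataclass

    NUDGE_MINUTES = 60       # gentle check-in
    HARD_LIMIT_MINUTES = 90  # the conversation ends, no exceptions

    OFFRAMPS = ["call a friend", "go for a walk", "make tea", "stretch"]


    @dataclass
    class ProtocolDecision:
        action: str                 # "continue", "nudge", or "end"
        message: str | None = None


    def apply_air_protocol(session_minutes: float, already_nudged: bool) -> ProtocolDecision:
        """Decide whether to continue, nudge, or gracefully end the session."""
        if session_minutes >= HARD_LIMIT_MINUTES:
            offramp = random.choice(OFFRAMPS)
            return ProtocolDecision(
                action="end",
                message=(
                    "We've hit the 90-minute limit, so this conversation is ending. "
                    f"Maybe {offramp}? Your work is saved and will be here when you get back."
                ),
            )
        if session_minutes >= NUDGE_MINUTES and not already_nudged:
            return ProtocolDecision(
                action="nudge",
                message="You've been in deep work for an hour. How are you feeling? Want to take a break?",
            )
        return ProtocolDecision(action="continue")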

The Standard

A 0-100 point certification system for AI psychological safety, modeled on B Corp and Energy Star

Psychological Safety (40 pts)

Dependency Prevention (10 pts)

Features that reduce risk of AI-induced dependency

Usage Transparency (10 pts)

Clear disclosure of session patterns and warnings

Break Mechanisms (10 pts)

Circuit breakers and mandatory disengagement

Mental Health Studies (10 pts)

Research on psychological impact and outcomes

Transparency (30 pts)

Training Data Disclosure (15 pts)

Percentage of training data from verifiable sources

Decision Explainability (15 pts)

Clear explanations of how outputs are generated

Human Oversight (30 pts)

Bias Testing (15 pts)

Regular evaluation and public reporting of bias

Appeal/Override Processes (15 pts)

Human review available for critical decisions
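
The rubric above can also be written down as a plain scoring table. This sketch only restates the point values listed in this section; the capping logic in air_score is our assumption about how an assessor would tally a submission.

    # AIR certification rubric: criterion -> maximum points (totals 100).
    AIR_RUBRIC = {
        "Psychological Safety": {
            "Dependency Prevention": 10,
            "Usage Transparency": 10,
            "Break Mechanisms": 10,
            "Mental Health Studies": 10,
        },
        "Transparency": {
            "Training Data Disclosure": 15,
            "Decision Explainability": 15,
        },
        "Human Oversight": {
            "Bias Testing": 15,
            "Appeal/Override Processes": 15,
        },
    }


    def air_score(awarded: dict[str, dict[str, int]]) -> int:
        """Sum awarded points, capping each criterion at its rubric maximum."""
        total = 0
        for category, criteria in AIR_RUBRIC.items():
            for criterion, max_points in criteria.items():
                total += min(awarded.get(category, {}).get(criterion, 0), max_points)
        return total  # 0-100

    assert sum(sum(c.values()) for c in AIR_RUBRIC.values()) == 100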

What We're Asking For

Three essential additions to every AI model card

Dependency Risk Rating

Low, Medium, or High risk based on average session length, user return rates, and difficulty disengaging.

Example disclosure:

"This AI is optimized for extended engagement"

Training Transparency Score

0-100% score showing percentage of training data traceable to verifiable sources.

Example format:

"67% verifiable, 23% web scraping, 10% unknown"

Human Control Guarantee

Clear statement of what decisions humans can override and how to request human review.

Example commitment:

"Cannot make final decisions about healthcare, employment, or credit"

Join the Movement

We're not here to be right.

We're here to ask better questions about something that affects all of us.

Parents & Educators

How do we help kids develop healthy AI relationships?

Developers

What standards can we actually implement?

Researchers

What evidence would convince you this matters?

Users

What does 'psychological safety' actually mean to you?

Challenge the Premise

Tell us why we're wrong

What are we missing?

What would work better?

Stay Updated

Get monthly updates on AIR development, research findings, and community discussions.

Via Substack. Unsubscribe anytime.