It’s 9:00 PM, and your final therapy note still isn’t finished.
You’re mentally replaying a complex session from earlier when a new AI assistant in your EHR offers a draft summary and diagnostic suggestion. It’s surprisingly on-point—and yet… it feels unsettling. Can you trust this? Should you?
As artificial intelligence (AI) becomes more deeply embedded in mental health care—from session note generation to risk prediction and treatment recommendations—clinicians face challenges that are not only technical but also ethical. How do we ensure these tools align with the values of therapeutic practice: empathy, consent, cultural humility, and clinical judgment? Can AI support shared decision-making—or does it risk replacing it?
Let’s explore the ethical dimensions of AI in mental health—and the frameworks that can help ensure it serves providers and patients with care.
The Ethical Promise and Peril of Mental Health AI
AI tools are increasingly used to transcribe therapy sessions, assess risk, screen for symptoms, and even predict patient dropout or crisis. While this holds promise for improving care and reducing burnout, it also raises critical concerns:
- Opaque algorithms may undermine trust and informed consent.
- Bias in training data can reproduce cultural or systemic discrimination.
- Overreliance on machines might diminish clinical intuition and human presence.
- Undefined accountability blurs legal and ethical responsibility.
In therapy, where human connection and nuance matter deeply, the implications of using black-box tools are especially pronounced.
Four Ethical Models for AI in Mental Health
In a widely cited ethical framework, Gundersen and Bærøe outlined four models for responsible AI integration—each with relevance for mental health:
1. Ordinary Evidence Model
Treats AI like any clinical tool—therapists interpret and apply results using their own judgment.
- Pros: Respects clinician autonomy; fits existing workflows.
- Cons: Assumes clinicians can evaluate an AI tool's validity on their own, overlooking hidden design flaws or bias.
2. Ethical Design Model
AI developers embed ethical principles like privacy, fairness, and transparency into the software itself.
- Pros: Addresses harm at the design level (e.g., data security or exclusion bias).
- Cons: May limit clinician input and create misplaced trust in “ethical” tech.
3. Collaborative Model
Brings therapists, ethicists, and AI developers into a shared design and evaluation process.
- Pros: Ensures tools align with therapeutic goals and foster trust.
- Cons: Requires institutional buy-in and sustained cross-disciplinary effort.
4. Public Deliberation Model
Calls for broader societal input on whether and how AI should play roles that affect core therapeutic values.
- Pros: Centers lived experience and collective ethics.
- Cons: Difficult to structure and scale across diverse mental health settings.
Six Ethical Domains in Mental Health AI
Building on work by Abdullah et al., this section highlights six crucial areas where ethical vigilance is needed:
1. Machine Training Ethics
- Who owns and governs the clinical data used to train AI tools?
- Are datasets representative of different mental health conditions, communities, and demographics? (See the sketch after these questions.)
- Have clients meaningfully consented to their therapy data being used?
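What counts as "representative" can be made concrete. The sketch below compares the demographic makeup of a hypothetical training dataset with the population a tool is meant to serve; every group name and figure in it is an illustrative assumption, not data from any real product.

```python
# Minimal sketch: comparing the demographic makeup of a hypothetical
# training dataset with a reference population. All figures are assumptions.
from collections import Counter

# Number of clients from each demographic group in the training data (assumed).
training_counts = Counter({"Group A": 640, "Group B": 210, "Group C": 150})

# Share of each group in the population the tool will actually serve (assumed).
population_share = {"Group A": 0.45, "Group B": 0.30, "Group C": 0.25}

total = sum(training_counts.values())
for group, expected in population_share.items():
    observed = training_counts[group] / total
    flag = "  <-- under-represented" if observed < expected - 0.05 else ""
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population{flag}")
```

Even a rough comparison like this can tell a clinic whether a vendor's training data looks anything like the clients who will actually be screened with the tool.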
2. Machine Accuracy Ethics
- Is the tool clinically validated across age, race, gender, and cultural contexts? (See the sketch after these questions.)
- Can therapists and clients understand how it arrives at its recommendations?
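To make the validation question concrete, here is a minimal sketch of a subgroup accuracy check for a hypothetical screening tool. The data, column names, and groups are assumptions for illustration only.

```python
# Minimal sketch: checking a screening tool's accuracy separately for each
# demographic subgroup. All data and labels below are hypothetical.
import pandas as pd
from sklearn.metrics import recall_score, precision_score

# Hypothetical evaluation set: the tool's prediction, the clinician-confirmed
# outcome, and a demographic label for each client.
results = pd.DataFrame({
    "predicted": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "actual":    [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    "subgroup":  ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Report sensitivity (recall) and precision for each subgroup separately.
for name, group in results.groupby("subgroup"):
    sensitivity = recall_score(group["actual"], group["predicted"])
    precision = precision_score(group["actual"], group["predicted"])
    print(f"Subgroup {name}: sensitivity={sensitivity:.2f}, precision={precision:.2f}")
```

Large gaps in sensitivity or precision between subgroups are exactly the kind of evidence clinicians can ask vendors to publish before adopting a tool.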
3. Client-Related Ethics
- Does the use of AI respect client autonomy and confidentiality?
- Are clients informed when AI plays a role in their care?
4. Clinician-Related Ethics
- Will AI augment therapeutic judgment—or deskill it over time?
- Can it enhance therapeutic presence, or does it interfere?
5. Shared Ethics
- Who is responsible if an AI-generated recommendation leads to harm?
- Can liability be distributed without eroding clinician accountability?
6. Regulatory Oversight
- Are mental health-specific regulators equipped to monitor AI tools?
- Can existing standards adapt to AI systems that evolve over time?
Toward Responsible AI in Mental Health Practice
Using AI ethically in mental health isn’t just about avoiding harm—it’s about reinforcing the core values of care: empathy, consent, fairness, and trust. This requires:
- Clinicians to build AI literacy: Understanding how these systems work and when to challenge them.
- Designers to cultivate ethical literacy: Embedding transparency and equity from the outset.
- Organizations to support responsible integration: Adopting tools like Therassist that follow a collaborative model and provide AI note support and session coaching that prioritize clinician judgment and patient agency.
In the end, AI should support—not reshape—the ethical fabric of mental health care.