
Wellstone Center for AI & Relational Intelligence

Advancing AI through attachment science, ethical design, strategic innovation, and human alignment.


AI in the Therapy Office: Control, Compliance, and What It Means for Care

Updated: Aug 20, 2025




As AI becomes more integrated into behavioral healthcare, we need to pause and ask: What happens when tech attempts to script and preordain the therapeutic process?


I’ve seen a growing number of tech/AI companies promising to enhance 'treatment fidelity' and adherence to practice standards through session monitoring, intervention tracking, progress dashboards, and prescribed clinical content. While the goal of improving outcomes and ensuring the quality of care is admirable, this level of prescriptive oversight raises serious clinical and ethical concerns.


Like all human dynamics, mental health conditions exist in a broader systemic context, shaped by complex macro- and micro-level forces that influence how people feel, behave, and relate to the world. Treating socio-emotional and psychological concerns requires more than canned cognitive-behavioral interventions; practitioners must help clients process experiences and heal through reflection, the creation of meaning and narrative, and the cultivation of restorative experiences. A wide range of factors, including adverse childhood dynamics and systemic inequities, contribute to sustained distress and maladaptive patterns, and effective care is contingent on responding to the unique needs, history, worldview, and resources of each client.

Thus, 'treatment' is not a linear or uniform process that can be predetermined by AI scripts. Best-practice standards can help inform the provision of quality care, but when industry leaders begin to advocate for the use of AI to dictate how sessions unfold in the name of efficiency and standardization, we risk several things, including:


  1. Dampening openness and clinical disclosure by projecting clients’ sensitive, subjective experiences onto dashboards in ways that may feel misrepresentative to the client, ultimately undermining trust, reinforcing damaging power differentials, eroding privacy, and risking the retraumatization of vulnerable populations


  2. Compromising the foundational client-practitioner alliance through rigid protocols that reduce authenticity, attunement, and responsivity


  3. Prioritizing algorithmic compliance over expert clinical judgment and client needs


Helping people navigate mental health challenges is a dynamic, collaborative, and deeply human process that cannot be mass-produced without significant trade-offs. AI will play a role in shaping the future of behavioral healthcare, but it must be built to enhance rather than control the therapeutic process, allowing clinical care to center on safety, privacy, dignity, and autonomy while honoring the individualized nature of healing.


I am passionate about AI and believe it is the future of population health, but we must ensure that innovation in this space protects and empowers those it is meant to serve. 


What guardrails do you think we need to ensure that AI supports, rather than supplants, ethical and effective treatment?



Brittney Stanley, PhD

 
 
 
