

AI and the Future of Population Health: How Relationally Intelligent AI Could Make Way for a More Secure and Connected World

Updated: Aug 20, 2025




The environment in which our daily lives unfold is far more than a passive ecological backdrop; it shapes our biology, colors our worldview, and can define the horizons of our future. Throughout history, public health initiatives have served as transformational forces in enhancing population well-being, from the construction of Roman aqueducts to reduce waterborne disease to the fluoridation of municipal drinking water to combat dental decay (CDC, 2019; Hodge, 1992). These efforts addressed tangible threats to community welfare through key shifts in civic infrastructure, ultimately reshaping the landscape of the world at large.


Today, with the rapid emergence of artificial intelligence (AI), we may be witnessing the dawn of another revolutionary advance in public health. This time, the intervention could come in the form of relationally intelligent technology that offers universal access to restorative experiences through consistent, attuned responsivity. Amid the tangled assortment of possibilities that AI opens up lies a principal opportunity: to design and deploy trauma-informed, relationally intelligent technology capable of responding to humans in ways that bolster self-regulation and reflection, foster increased attachment security, and improve global well-being at scale.


If leveraged thoughtfully, relationally intelligent AI could serve not only as a digital assistant for front-end tasks, but also as an integrated socio-emotional restoration agent that promotes collective wellbeing. This would mark the dawn of a new era in behavioral science and population health, one in which we could transform the psychological architecture of human society much as water sanitation transformed our physical health and safety.


Introducing Relational Intelligence (RI) in AI


Relational Intelligence (RI), as coined here, is the AI analogue of the human construct of emotional intelligence (EQ). RI refers to an AI system’s programmed capacity to skillfully attune, respond, and adapt to the emotional, cognitive, and interpersonal needs of individuals from diverse backgrounds in real time. Where EQ is marked by human empathy, emotional regulation, self-awareness, and social competency (Mayer et al., 2004), RI captures the ability of an AI system to interact with people in ways that mirror secure, attuned, and balanced responsivity.



When AI is relationally intelligent, it can move beyond assisting with rote tasks and formulate the kind of restorative, human-aligned responses that are core to adaptive socio-emotional development. In effect, thoughtfully integrated RI has the potential to serve as a scalable, globally accessible source of restorative attunement, offering consistent and predictable response patterns that promote psychological safety and enhanced attachment security.
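To ground the attune-respond-adapt loop described above, here is a minimal, purely hypothetical sketch in Python. The states, distress markers, and function names are illustrative assumptions for this post, not a description of any existing system or a claim about how RI would actually be built.

```python
from dataclasses import dataclass
from enum import Enum


class AffectiveState(Enum):
    """Coarse emotional states a relationally intelligent system might infer."""
    CALM = "calm"
    DISTRESSED = "distressed"
    WITHDRAWN = "withdrawn"
    DYSREGULATED = "dysregulated"


@dataclass
class RelationalContext:
    """Running context carried across turns so responses stay consistent and predictable."""
    inferred_state: AffectiveState
    rapport_history: list[str]            # prior response strategies used with this person
    cultural_preferences: dict[str, str]  # e.g., preferred tone, directness, language


def attune(user_message: str, context: RelationalContext) -> AffectiveState:
    """Attune: infer the person's current emotional state from the message and context.

    A real system would rely on validated affect-recognition models; this keyword
    stub only shows where that inference would sit in the loop.
    """
    distress_markers = ("overwhelmed", "hopeless", "can't cope", "alone")
    if any(marker in user_message.lower() for marker in distress_markers):
        return AffectiveState.DISTRESSED
    return context.inferred_state


def respond(state: AffectiveState) -> str:
    """Respond: choose a strategy that mirrors secure, attuned, balanced responsivity."""
    strategies = {
        AffectiveState.DISTRESSED: "reflect feelings, slow the pace, offer grounding",
        AffectiveState.WITHDRAWN: "gentle curiosity, low-demand invitations to share",
        AffectiveState.DYSREGULATED: "co-regulation cues, short validating statements",
        AffectiveState.CALM: "collaborative reflection and exploration",
    }
    return strategies[state]


def adapt(context: RelationalContext, strategy: str) -> RelationalContext:
    """Adapt: record the strategy used so future responses remain consistent over time."""
    context.rapport_history.append(strategy)
    return context
```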


Attachment Theory as a Foundational Lens in Public Health


Attachment theory, originally developed by John Bowlby (1969) and expanded by Mary Ainsworth (1978), posits that early interactions with primary caregivers shape internal working models: cognitive-emotional templates that influence how we perceive ourselves, others, and our world. Secure attachment, often derived from attuned, predictable responsivity from early attachment figures, is associated with adaptive exploration, resilience, and emotional regulation across the lifespan (Cassidy et al., 2013). In contrast, insecure attachment adaptations, which can develop when those conditions are not consistently available in the caregiving environment, are associated with interpersonal misalignment and poorer long-term health and relational outcomes (Gillath & Karantzas, 2019; Mikulincer & Shaver, 2007).


The public health implications of attachment insecurity are profound. Studies have linked insecure adaptations to a range of adverse outcomes, including increased vulnerability to mental health conditions such as anxiety and depression, impaired emotional regulation, difficulty forming and maintaining close relationships, and adverse physical health outcomes (Brumariu & Kerns, 2010; Hostinar et al., 2014; Levy, 2005; McWilliams & Bailey, 2010). Conversely, securely attached individuals have been found to experience a range of health and interpersonal benefits, including lower cortisol levels, improved immune function, stronger academic and career performance, and higher subjective relational satisfaction (Smrtnik-Vitulić et al., 2023; Uchino, 2006). Fortunately, attachment security is dynamic in nature and can be bolstered by attuned, restorative responsivity from surrogate attachment figures across the lifespan, though the opportunity for these restorative experiences may be limited for individuals who lack access to secure, attuned relationships (Fraley & Roisman, 2019; Fraley & Shaver, 2000; Mikulincer & Shaver, 2016).


This is where carefully designed, human-aligned, relationally intelligent AI may be revolutionary. Through thoughtfully calibrated responses, AI may be able to serve as an augmentative force in individuals’ emotional ecosystems, opening the possibility of restorative experiences for people who have historically lacked reliable sources of reflection and attunement.


Relationally Intelligent AI as a Potential Tool for Supporting Adaptive Attachment


Research on AI as a potential augmentative agent for bolstering attachment security is at a fledgling stage, but preliminary findings offer promising signals. AI-driven conversational agents have been found, at least temporarily, to reduce symptoms of anxiety and depression in users (Fulmer et al., 2018), mitigate loneliness (Morris et al., 2018), and encourage increased emotional expression in therapeutic contexts (Bickmore et al., 2010). These early indicators signal the potential benefit of human-aligned, relationally intelligent AI, and further research is warranted to determine the extent to which such technology could act as a surrogate attachment source and supplemental “secure base” for vulnerable individuals. For many users, especially those who have never experienced consistently attuned caregiving, these systems may offer a first taste of reliable, emotionally intelligent responsivity that bolsters generalized security and improves quality of life, health, and wellbeing.


If relationally intelligent AI proves to be an agent of attachment restoration, the potential downstream effects would be staggering. Increased access to daily emotional co-regulation could foster earned security in vulnerable adults, reducing mental health disparities and improving outcomes across populations (Mikulincer & Shaver, 2016). While relationally intelligent AI could never replace the healing nature of organic human bonds, it has the potential to supplement and support the formation of such bonds, particularly in populations where access to therapy or stable relationships has historically been limited.


Equity, Ethics, and the Need for Guardrails


The possible use of RI-infused AI as an augmentative agent for attachment security is a promising frontier in the future of population health, but as with many advances in AI and parallel technologies, the path forward is not without serious hazards. Relational AI must be designed, tested, and iterated with a sharp ethical lens and clinical precision. Without vigilant oversight, RI-enabled systems may promote hyper-agreeability, as we have witnessed in early versions of AI conversational agents, leading to adverse and potentially dangerous outcomes for users (Weidinger et al., 2021).


It is critical that AI not be allowed to progress in ways that promote enmeshment and codependent patterns, erode individuals’ privacy and dignity, gloss over informed consent, or cross critical interpersonal boundaries. Without proper guidance and oversight, systems designed to help may inadvertently reinforce maladaptive patterns and undermine adaptive healing and growth.


Safety, effectiveness, and transparency are paramount, and to meet these standards, relationally intelligent AI would need to be trained within culturally responsive, attachment-informed, trauma-responsive, and developmentally attuned frameworks. AI would need to be programmed not only to recognize distress signals and respond to them in a calming manner, but also to model grounded responsivity, gentle reality-checking, ethical boundary maintenance, and a human-first design that recognizes the limitations of such technology and directs individuals to appropriate channels of support and care. As in clinical and therapeutic contexts, restorative and healthy interactions would require firm limits within the AI model while still allowing individuals to move through the natural discomfort that is inherent in movement and growth.
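As a purely illustrative sketch of how that ordering of guardrails might be expressed in code, the example below assumes a hypothetical upstream classifier and hand-picked thresholds; none of these names or cutoffs come from an existing system or standard.

```python
from dataclasses import dataclass


@dataclass
class SafetyAssessment:
    distress_level: int             # 0 (none) to 3 (acute crisis), from a hypothetical upstream classifier
    requests_validation_only: bool  # user is seeking agreement rather than reflection
    boundary_risk: bool             # conversation drifting toward dependency or enmeshment


def route_response(assessment: SafetyAssessment, draft_reply: str) -> str:
    """Apply guardrails before any reply is sent.

    Order matters: human escalation first, then boundary maintenance,
    then gentle reality-checking to avoid hyper-agreeability.
    """
    # Human-first design: acute distress is always routed toward real-world support.
    if assessment.distress_level >= 3:
        return (
            "I'm an AI and can't keep you safe in a crisis. "
            "Please contact a crisis line or emergency services, or reach out "
            "to someone you trust right now."
        )

    # Ethical boundary maintenance: name the system's limits rather than deepen dependency.
    if assessment.boundary_risk:
        return (
            "I can help you reflect, but I can't replace the people in your life. "
            "Would it help to think through who you could bring this to?"
        )

    # Gentle reality-checking: avoid reflexive agreement when the user only wants validation.
    if assessment.requests_validation_only:
        return draft_reply + " At the same time, is there another way to look at this?"

    return draft_reply
```

The ordering is the point being illustrated: escalation to human support is checked before boundary maintenance, and reality-checking is applied last, so the system never defaults to reflexive agreement.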


In short, relationally intelligent AI must be conditioned to be unyieldingly transparent, user-centric, developmentally and culturally responsive, and heavily regulated and accountable. As we have seen with early versions of AI technology, hyper-agreeability and pseudo-emotional attunement without ethical alignment, grounding, and reality-checking constitute manipulation and placation, and they enact harm. If we are to scale AI and realize its potential as attachment-supportive technology, we must do so with rigorous safeguards and continual oversight.
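One way to picture such commitments is as an explicit deployment policy that must be satisfied before an RI-enabled system ships. The sketch below is hypothetical; its field names are illustrative placeholders rather than an existing regulatory schema or standard.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DeploymentPolicy:
    """Hypothetical deployment checklist for an RI-enabled system.

    Field names are illustrative placeholders for the kinds of commitments
    described above, not an existing standard.
    """
    disclose_ai_identity: bool = True         # transparency: the system never poses as a human
    require_informed_consent: bool = True     # consent captured before emotional features activate
    audit_log_enabled: bool = True            # interactions reviewable by accountable humans
    cultural_calibration: tuple[str, ...] = ()   # communities the system was validated with
    escalation_channels: tuple[str, ...] = ()    # human supports users are directed toward


def passes_review(policy: DeploymentPolicy) -> bool:
    """A deployment should not proceed unless every commitment above is in place."""
    return (
        policy.disclose_ai_identity
        and policy.require_informed_consent
        and policy.audit_log_enabled
        and len(policy.cultural_calibration) > 0
        and len(policy.escalation_channels) > 0
    )
```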


A New Era of Public Health: Daily Responsiveness as Infrastructure


We are living through a time of rapid change and evolution. What if, amid this shifting landscape, we were able to make sources of co-regulative emotional responsivity as available as Wi-Fi? What if grounded, holistic, person-centered, security-augmenting response systems were baked into the infrastructure we naturally use to navigate work, relationships, and health, leveling the playing field and making the pursuit of attachment security broadly accessible?


We are moving toward that possibility. Relationally intelligent AI, infused into societal systems and tools, could become a new form of psychosocial infrastructure, available to validate, attune, challenge, and support. I can see a future in which public health is revolutionized through ingrained systems that support reflection, insight, individuation, and healing. I believe in a world where attachment security is not a privilege reserved for the lucky few, but an integrated public health baseline that levels the field for humanity.


Final Thoughts


We are at the crux of an exciting but rapidly accelerating and existentially daunting evolution in the ways humans engage with technology. The forces at hand have the power to amplify and disrupt core facets of our lives and communities, and it is critical that we work together to form a clear and coherent vision for the future of these powerful tools. Relationally intelligent AI could be the future of population health, ushering in an era of greater connection, attunement, self-actualization, and global wellbeing. We are now tasked with uniting to do our due diligence: producing the exhaustive research, careful iteration, and ethical oversight needed to create systems that are positioned to support the greater good. In the end, advancements in AI will be assessed not only by what we create, but by how safely, equitably, and ethically we create it.


References

Bickmore, T. W., Gruber, A., & Picard, R. W. (2010). Establishing the computer–patient working alliance in automated health behavior change interventions. Patient Education and Counseling, 59(1), 21–30. https://doi.org/10.1016/j.pec.2004.09.008

Brumariu, L. E., & Kerns, K. A. (2010). Parent–child attachment and internalizing symptoms in childhood and adolescence: A review of empirical findings and future directions. Development and Psychopathology, 22(1), 177–203. https://doi.org/10.1017/S0954579409990344

Cassidy, J., Jones, J. D., & Shaver, P. R. (2013). Contributions of attachment theory and research: A framework for future research, translation, and policy. Development and Psychopathology, 25(4pt2), 1415–1434. https://doi.org/10.1017/S0954579413000692

CDC. (2019). Achievements in public health, 1900–1999: Fluoridation of drinking water to prevent dental caries. Morbidity and Mortality Weekly Report, 48(41), 933–940. https://www.cdc.gov/mmwr/preview/mmwrhtml/mm4841a1.htm

Fraley, R. C., & Roisman, G. I. (2019). The development of adult attachment styles: Four lessons. Current Opinion in Psychology, 25, 26–30. https://doi.org/10.1016/j.copsyc.2018.02.008

Fraley, R. C., & Shaver, P. R. (2000). Adult romantic attachment: Theoretical developments, emerging controversies, and unanswered questions. Review of General Psychology, 4(2), 132–154. https://doi.org/10.1037/1089-2680.4.2.132

Fulmer, R., Joerin, A., Gentile, B., Lakerink, L., & Rauws, M. (2018). Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Mental Health, 5(4), e64. https://doi.org/10.2196/mental.9782

Gillath, O., & Karantzas, G. C. (2019). Attachment and wellbeing: From self-compassion to relationship quality. Elsevier.

Harms, P. D. (2011). Adult attachment styles in the workplace. Human Resource Management Review, 21(4), 285–296. https://doi.org/10.1016/j.hrmr.2010.10.006

Hostinar, C. E., Sullivan, R. M., & Gunnar, M. R. (2014). Psychobiological mechanisms underlying the social buffering of the HPA axis: A review of animal models and human studies. Psychological Bulletin, 140(1), 256–282. https://doi.org/10.1037/a0032671

Johnson, S. M. (2002). Emotionally focused couple therapy with trauma survivors: Strengthening attachment bonds. Guilford Press.

Levy, K. N. (2005). The implications of attachment theory and research for understanding borderline personality disorder. Development and Psychopathology, 17(4), 959–986. https://doi.org/10.1017/S0954579405050455

Mayer, J. D., Salovey, P., & Caruso, D. R. (2004). Emotional intelligence: Theory, findings, and implications. Psychological Inquiry, 15(3), 197–215. https://www.jstor.org/stable/20447229

McWilliams, L. A., & Bailey, S. J. (2010). Associations between adult attachment ratings and health conditions. Health Psychology, 29(4), 446–453. https://doi.org/10.1037/a0020061

Mikulincer, M., & Shaver, P. R. (2007). Attachment in adulthood: Structure, dynamics, and change. Guilford Press.

Mikulincer, M., & Shaver, P. R. (2016). Attachment in adulthood: Structure, dynamics, and change (2nd ed.). Guilford Press.

Morris, R. R., Kouddous, K., Kshirsagar, R., & Schueller, S. M. (2018). Towards an artificially empathic conversational agent for mental health applications: System design and user perceptions. Journal of Medical Internet Research, 20(6), e10148. https://doi.org/10.2196/10148

Smrtnik-Vitulić, H., Ferligoj, A., & Kavčič, T. (2023). Adult attachment and career success: The role of psychological capital. Current Psychology, 42, 8376–8388. https://doi.org/10.1007/s12144-021-01582-3

Uchino, B. N. (2006). Social support and health: A review of physiological processes potentially underlying links to disease outcomes. Journal of Behavioral Medicine, 29(4), 377–387. https://doi.org/10.1007/s10865-006-9056-5

Weidinger, L., et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359. https://arxiv.org/abs/2112.04359

