Research Focus

Attachment-Informed AI
Researching how attachment theory can inform emotionally attuned AI behavior, social modeling, and relationship-based system design. Focused on building AI that can understand, mirror, and support human relational patterns.

Trauma-Informed Machine Learning
Investigating how psychological trauma shapes user-AI interaction, and informing model behavior, training data practices, and safeguards that prevent emotional harm and misattunement.

Ethics-Informed Product Strategy
Bridging research and deployment by advising on the responsible design and rollout of AI systems. Specializing in ethical foresight, social impact modeling, and alignment policy frameworks.

Human-Centered UX & Product Design
Designing interfaces and interactions that account for emotional safety, trust, and social cognition. Focused on relational dynamics in AI products, especially in conversational agents and assistive tools.

Strategic Alignment & Relational Governance
Studying how AI systems can be aligned not just with values but with human social-emotional needs. Offering models for governance and oversight that center relational and behavioral integrity.

Applied AI for Human Development & Support
Developing tools and use cases where AI assists with emotional regulation, insight, and decision-making, particularly in education, mental health, and interpersonal communication.

Cultivating Emotionally Intelligent AI Through Integrated Relational Intelligence (RI)

At Wellstone Center for AI & Relational Intelligence, our research is grounded in the conviction that truly aligned AI must go beyond cognitive tasks to navigate the complexities of human emotions and relational systems.

We are pioneering the concept of Relational Intelligence (RI): the foundational emotional intelligence layer that AI development needs in order to cultivate attuned, ethical, and responsive systems. RI builds on current AI alignment work around safety, compliance, and logic, homing in on the dimensions of emotional coherence, interpersonal context, and psychological safety.

We believe that human-aligned AI could revolutionize population health through the propagation of restorative attachment dynamics. A carefully crafted AI infrastructure could serve as a global immunization vector for relational health and adjustment, reinforcing adaptive attachment behaviors and interpersonal security by offering predictable, attuned responsivity and a source of accessible, secure, co-regulative interaction. When built around ethical design with core markers of trust, transparency, and openness, AI becomes a powerful tool for supporting human wellbeing at scale.

⸻

Why Relational Intelligence?

Relational Intelligence (RI) represents the foundational emotional intelligence (EQ) layer for machine systems. It draws from:

- Attachment Theory
- Interpersonal Neurobiology
- Trauma-Informed Care
- Behavioral Systems Thinking
- Human-Centered Design

This multidisciplinary lens enables us to create AI that is relationally literate: capable of accurately interpreting tone and intent, offering co-regulative responses to cues of distress, and adapting to interpersonal contexts in real time to provide meaningful support.

This direction is especially urgent in the domains of:

- Healthcare & digital mental health
- Education & student support
- Human-machine collaboration
- Conflict resolution & relationship tools
- AI companion systems

By embedding RI into applied systems design, we reduce emotional misalignment, increase psychological safety, and improve outcomes in emotionally charged domains. RI provides a systems-level approach to alignment with the potential to scale from individual interactions to global impact.

⸻

Core Research Domains

Our lab’s research spans six applied domains that reflect the mechanisms of emotionally intelligent machine design:

1. Attachment-Informed AI
Understanding how AI-human interactions impact attachment and bonding patterns in vulnerable populations and influence socio-cultural ecosystems.

2. Relational UX & Emotional Design
Developing user experiences that adapt to emotional tone, conflict states, and human relational needs using RI principles.

3. Trauma-Aware Machine Learning
Mitigating emotional harm in LLMs through trauma-informed prompts, filtered data pipelines, and response safeguards.

4. Human-AI Alignment Strategy
Centering psychological attunement and relational repair within frontier alignment architectures and RLHF optimization.

5. Behavioral Systems Engineering
Using systems thinking and behavioral modeling to design AI interventions that promote regulation, connection, and trust across diverse populations.

6. Applied AI for Human Support
Prototyping tools for couples, educators, providers, and the general public that leverage AI to enhance restorative interactions and adaptive responsivity, such as Resolve, our conflict-assisting AI tool for relational coaching.

⸻

Strategic Positioning

The goal of the Wellstone Lab is to contribute research and applied tools and products that shape how emotional safety, trust, and relational coherence are integrated into the foundation of emerging AI systems.

We are:

- Prototyping emotionally attuned tools (e.g., Resolve, our LLM-based conflict mediation assistant)
- Producing white papers on Relational Intelligence for alignment and AI governance
- Consulting with startups and tech developers building emotionally aware health and education tools
- Bridging disciplines across psychology, machine learning, and systems strategy through publications and interdisciplinary collaborations

⸻

Tools and Research Initiatives Built for Cross-Sector Impact

Whether you’re building large language models, designing emotionally aware interfaces, or developing AI companions or tools for healthcare delivery, RI offers a new pathway for relational safety and scalable alignment.

We are particularly interested in collaboration with:

- Frontier model developers
- Human-in-the-loop strategy teams
- AI ethics & foresight teams
- Product leaders seeking relational UX innovation
- Educators and mental health providers building AI-enhanced services

⸻

If you are interested in the nuanced emotional, ethical, and interpersonal dynamics in human-AI interaction, let’s connect!


