Current Projects & Initiatives
At the Wellstone Lab, we are developing a suite of applied research studies, consumer products and tools, and industry guidance frameworks designed to create AI systems that are more emotionally intelligent, ethically aligned, and relationally aware. Our present initiatives reflect a deep integration of attachment theory, cognitive science, and innovative UX strategy. Learn more about our initiatives and output timelines below:
Resolve – Conflict Assist Tool for Couples
We are developing a prototype of an AI couples tool that guides relational dynamics in real time to defuse arguments and foster adaptive collaboration and understanding. Built on relational intelligence and conflict theory, Resolve offers partners emotionally attuned prompts, reflection exercises, and actionable strategies.
Emotionally-Aware UX Prototypes
We are building interface concepts that embed relational awareness into user interactions with AI systems. These prototypes surface emotional tone, attunement, and repair potential, serving as models for LLMs that are designed to engage users in meaningful, attuned, and responsive dialogue.
Relational Alignment Frameworks (v1.0)
We are developing a set of research-backed heuristics designed to help AI systems respond to emotionally complex scenarios with psychological coherence. Drawing on attachment theory and trauma-informed design, this framework offers teams practical alignment tools for emotionally sensitive applications.
Current and Emerging Projects
The Applied AI Lab at the Wellstone Center for AI & Relational Intelligence supports independent research and tool development at the intersection of psychology, UX design, and AI. Each initiative reflects our commitment to advancing relational intelligence (RI) as a core design and governance principle in technology systems. Our work is founded on the core tenets of attachment theory, human behavioral systems, and trauma-informed care, with UX design centered on producing systems that are emotionally intelligent, ethically grounded, responsively governed, and aligned with the future of technology and population health.
RESOLVE: Real-time AI Conflict Assistance for Couples
An interactive AI prototype designed to support relational partners in moments of dysregulation and interpersonal conflict. RESOLVE draws on attachment dynamics and co-regulation research to offer emotionally attuned prompts, reflection exercises, and actionable communication strategies. This tool is intended for use in real-time relational distress and is currently in development for testing in both clinical and non-clinical settings.
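As an illustration of the kind of real-time logic such a tool might use, the sketch below maps coarse conversational signals to an escalation level and selects an attuned prompt. The signal names, thresholds, and prompt wording are hypothetical placeholders, not drawn from the RESOLVE prototype itself.

```python
from dataclasses import dataclass

# Hypothetical escalation levels and prompt bank; the actual RESOLVE categories
# and wording are not published here.
PROMPT_BANK = {
    "calm": "What would you like your partner to understand right now?",
    "elevated": "Before responding, can you name the feeling underneath your frustration?",
    "escalated": "Let's pause for a moment. Take one slow breath, then describe what you need, not what went wrong.",
}

@dataclass
class TurnSignal:
    """Simplified per-turn signals a conflict-assist tool might track."""
    interruption_count: int      # times one partner cut the other off
    negative_word_ratio: float   # share of hostile or critical words in the turn

def classify_escalation(signal: TurnSignal) -> str:
    """Map coarse conversational signals to an escalation level (illustrative thresholds)."""
    if signal.interruption_count >= 3 or signal.negative_word_ratio > 0.4:
        return "escalated"
    if signal.interruption_count >= 1 or signal.negative_word_ratio > 0.2:
        return "elevated"
    return "calm"

def suggest_prompt(signal: TurnSignal) -> str:
    """Return an emotionally attuned reflection prompt for the current moment."""
    return PROMPT_BANK[classify_escalation(signal)]

if __name__ == "__main__":
    print(suggest_prompt(TurnSignal(interruption_count=2, negative_word_ratio=0.3)))
```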
Relational Alignment Frameworks (v1.0)
This conceptual and technical framework is designed to support emotional coherence in human-AI interactions. The project outlines markers of attunement, repair, and interpersonal trust that can be embedded in LLM outputs and behavior modeling. The initial version includes theoretical justifications, design heuristics, and use case outlines for direct application and implementation.
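To give a concrete sense of how such heuristics could be represented, the sketch below checks a candidate model reply against simple marker lists for attunement, repair, and trust. The marker names and keyword phrases are illustrative assumptions, not the framework's published heuristics.

```python
# Minimal sketch: relational-alignment heuristics as keyword markers checked against
# a model reply. Marker categories and phrases are placeholders for illustration.
HEURISTICS = {
    "attunement": ["it sounds like", "i hear that", "that makes sense"],
    "repair": ["i may have misunderstood", "let me try again", "thank you for correcting me"],
    "trust": ["you can decide", "whenever you're ready", "i'll follow your lead"],
}

def score_reply(reply: str) -> dict:
    """Count which relational markers appear in a candidate reply (case-insensitive)."""
    text = reply.lower()
    return {marker: sum(phrase in text for phrase in phrases)
            for marker, phrases in HEURISTICS.items()}

if __name__ == "__main__":
    reply = ("It sounds like I may have misunderstood you earlier. "
             "Let me try again, and you can decide how we proceed.")
    print(score_reply(reply))  # {'attunement': 1, 'repair': 2, 'trust': 1}
```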
Emotionally-Aware UX Prototypes
This project focuses on producing a suite of interface modules aimed at shaping adaptive emotional responsivity in LLM interactions. The prototypes are grounded in cognitive-affective science and adapted for integration into coaching, education, and wellness platforms. Current outputs include user flow maps, interaction tone guides, and empathic feedback structures.
Trauma-Informed AI Toolkit
This resource is designed to help AI developers and safety teams mitigate the risk of emotional harm in user interactions. The toolkit includes response strategies, prompt filtration systems, and attunement guides derived from trauma theory and affect regulation models. Emphasis is placed on applications in therapeutic, crisis response, and educational settings.
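The sketch below shows one simplified way a prompt filtration step might flag trauma-related language and route toward a grounding response. The crisis patterns and response wording are invented for illustration; a production toolkit would rely on validated screeners and human escalation paths rather than keyword matching alone.

```python
import re

# Illustrative crisis indicators and grounding response only; not the toolkit's
# actual filtration rules.
CRISIS_PATTERNS = [r"\bpanic attack\b", r"\bflashback\b", r"\bcan't feel safe\b"]

GROUNDING_RESPONSE = (
    "It sounds like this moment is really overwhelming. "
    "You don't have to go through it alone; would it help to slow down together for a minute?"
)

def filter_prompt(user_prompt: str) -> tuple[bool, str | None]:
    """Return (needs_attuned_handling, suggested_opening) for an incoming prompt."""
    if any(re.search(p, user_prompt, re.IGNORECASE) for p in CRISIS_PATTERNS):
        return True, GROUNDING_RESPONSE
    return False, None

if __name__ == "__main__":
    print(filter_prompt("I keep having flashbacks and can't focus at work."))
```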
Project TRUST-SIM
This simulation-based research initiative is focused on modeling secure attachment behaviors in conversational AI. The project explores how trust, consistency, and responsiveness can be replicated through LLMs trained on attachment-informed dialogue patterns. Research outputs include a scoring rubric and dataset for evaluating AI behavior along relational dimensions.
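For orientation, the sketch below shows how a scoring rubric along relational dimensions could be composed into a single comparable score. The dimensions come from the project description; the weights and 0-5 rating scale are assumptions made for illustration.

```python
from dataclasses import dataclass

# Hypothetical rubric weights; TRUST-SIM's actual rubric is not reproduced here.
RUBRIC_WEIGHTS = {"trust": 0.4, "consistency": 0.3, "responsiveness": 0.3}

@dataclass
class DialogueRatings:
    """Ratings of a conversation along relational dimensions, each on a 0-5 scale."""
    trust: float
    consistency: float
    responsiveness: float

def relational_score(ratings: DialogueRatings) -> float:
    """Weighted composite score across relational dimensions, normalized to 0-1."""
    raw = (RUBRIC_WEIGHTS["trust"] * ratings.trust
           + RUBRIC_WEIGHTS["consistency"] * ratings.consistency
           + RUBRIC_WEIGHTS["responsiveness"] * ratings.responsiveness)
    return raw / 5.0

if __name__ == "__main__":
    print(relational_score(DialogueRatings(trust=4, consistency=3, responsiveness=5)))  # 0.8
```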
ALIGN-RI: A Governance Framework for Relational Alignment
This policy and safety toolkit is designed to incorporate relational intelligence into existing AI governance protocols. The project includes ethical foresight briefs, red-teaming scenarios, and alignment risk typologies framed through the lens of attachment security, emotional coherence, and relational harm reduction.
RUX-KIT: A Relational UX Library
This open-access UX resource is designed for product teams developing emotionally intelligent AI interfaces. The toolkit includes prompt sets, repair scripting, user flow templates, and interaction strategies grounded in human-centered design and relational systems theory.
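As a small example of what repair scripting in such a library might look like, the sketch below fills a conversational repair template with interaction-specific details. The template text and slot names are hypothetical stand-ins, not items from RUX-KIT itself.

```python
from string import Template

# Illustrative repair-script template; slots and wording are placeholders.
REPAIR_SCRIPT = Template(
    "I think I missed what mattered to you about $topic. "
    "What I heard was $summary. Did I get that right, or would you put it differently?"
)

def render_repair(topic: str, summary: str) -> str:
    """Fill a conversational repair template that a product team could adapt per interface."""
    return REPAIR_SCRIPT.substitute(topic=topic, summary=summary)

if __name__ == "__main__":
    print(render_repair(topic="the missed appointment",
                        summary="you felt the reminder came too late"))
```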
Project COMPANION-SENSE
This longitudinal research study investigates the impact of AI interaction on users’ emotional regulation, trust development, and attachment orientation, using a mixed-methods design to evaluate long-term adjustment outcomes in user populations.
Project ATTUNE-BENCH: Cross-Model Relational Attunement Evaluation
This research and evaluation framework is designed to assess the emotional tone, responsiveness, and relational coherence of outputs from large language models across platforms. ATTUNE-BENCH includes a set of psycholinguistic benchmarks, emotional valence scoring rubrics, and attunement fidelity metrics derived from attachment theory and affective science. The tool offers comparative analysis of AI systems in domains such as therapeutic interaction, conflict resolution, and educational dialogue.
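The sketch below illustrates only the comparative step: given per-prompt attunement scores for each model, however those scores were produced, aggregate and rank them. Model names and score values are invented for illustration and do not represent benchmark results.

```python
# Minimal sketch of cross-model comparison over attunement scores (0-1 per prompt).
def aggregate(scores_by_model: dict[str, list[float]]) -> dict[str, float]:
    """Average per-prompt attunement scores into one score per model."""
    return {model: sum(scores) / len(scores) for model, scores in scores_by_model.items()}

def rank(scores_by_model: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Return models ordered from most to least attuned on the benchmark set."""
    return sorted(aggregate(scores_by_model).items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    demo = {"model_a": [0.82, 0.74, 0.90], "model_b": [0.65, 0.71, 0.68]}  # invented values
    for model, score in rank(demo):
        print(f"{model}: {score:.2f}")
```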


