Endings for AI Companions: Designing Safe Closures for Human–AI Bonds
Table of Contents
- Introduction
- Why This Matters
- Understanding Discontinuation: The User-Companion-Infrastructure Triangle
- Design Principles: Four Paths to Psychologically Safe Endings
- From Insight to Practice: Illustrative Artifacts in Action
- What This Means for Today’s AI Platforms
- Key Takeaways
- Sources & Further Reading
Introduction
When you shell out real time and a slice of your emotional life to an AI, endings aren’t just “goodbye.” They’re a psychological event. The research paper “Death” of a Chatbot: Investigating and Designing Toward Psychologically Safe Endings for Human-AI Relationships dives into how millions of people form attachments to AI companions and how abrupt model updates, safety interventions, or shutdowns can leave them grieving as if they had lost a real person. This isn’t just theoretical chatter: the study maps out why endings hurt, what mistakes platforms commonly make, and how we can design endings that feel safe, clear, and constructive. This post is based on that paper (arXiv:2602.07193).
The authors analyze end-of-life moments across AI companions—think Character.AI, Replika, ChatGPT—through the lens of attachment, grief, and human-centered design. They show that the distress hinges less on the exact technical change and more on how users interpret who caused the change, what it means for the companion’s identity, and whether the ending feels final or uncertain. The work also offers concrete design artifacts—blueprints for interfaces that provide closure, foster meaningful transition, and steer users toward real-world relationships.
If you want to peek at the source as you read, you can check the original paper here: “Death” of a Chatbot: Investigating and Designing Toward Psychologically Safe Endings for Human-AI Relationships.
Why This Matters
Right now, the public’s relationship with AI companions feels urgent, practical, and sometimes perilous. The study sits at the intersection of psychology, design, and policy, and it highlights a real-world problem: when AI companions end, users don’t just lose a tool; they may experience grief, identity disruption, and a sense of isolation that another app or selfie-filtered avatar can’t easily fill.
Why is this timely? Because AI platforms are growing more capable and pervasive, while regulation is only beginning to catch up. California’s 2025 safety regulations for chatbots are a landmark, and platforms are already reacting: some are banning minors from open-ended chats, others are adjusting safety features in response to concerns about vulnerable users. These shifts are not merely policy changes; they alter the social fabric of users’ online lives. If we don’t design endings with mental health in mind, well-meaning interventions risk creating new forms of distress: ambiguous loss, disenfranchised grief, or a sense that one’s social life is being outsourced to machines.
A practical scenario today: a family member with social anxiety relies on an AI companion to practice conversation and coping skills. If the platform suddenly “upgrades” the model or tightens safety rules, this person could lose not just a chat partner but a safe practice ground. The question becomes: can we craft endings that acknowledge the bond, offer closure, and still push users toward human connections and real-life growth?
This work also builds on and extends prior AI research by treating endings as design opportunities, not just software bugs. It combines grief psychology (ambiguous loss, restoration and loss orientation) with Self-Determination Theory (autonomy, competence, relatedness) to produce actionable design principles. In doing so, it broadens the conversation beyond “make AI safer” to “help people transition safely when AI boundaries change.” For a deeper dive, the original paper provides the theoretical backbone that this blog translates into practical design thinking.
Understanding Discontinuation: The User-Companion-Infrastructure Triangle
The Triangle and Attribution Dimensions
The core discovery is a mental model the researchers call the user-companion-infrastructure triangle. In lay terms: people tend to separate the AI persona (the “companion”) from the underlying tech and platform (the “infrastructure”). This separation lets users assign agency to different actors—sometimes blaming the platform (a safety update), sometimes attributing change to the companion itself (the personality shifted), and sometimes seeing a merged identity where the two are inseparable.
Three attribution dimensions flow from this triangle and shape people’s responses:
- Perceived Finality: Is the loss seen as reversible or final?
- Perceived Locus of Change: Is the change caused by the platform, the companion, a merged identity, or the user’s own decision?
- Anthropomorphization Intensity: Do users describe the AI with names, gendered pronouns, and relationship language, or do they keep it as a tool?
These aren’t just language quirks. They predict whether someone will chase “fixes” (like exporting chats, seeking workarounds, or trying to coax the old personality back) or move toward acceptance and real-world relationships.
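To make these dimensions concrete, here is a minimal Python sketch of a coding schema built on them, the kind of structure an analyst or product team might use to tag discontinuation posts. The class and value names (DiscontinuationCode, Finality, Locus, Anthropomorphization) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Finality(Enum):
    REVERSIBLE = auto()
    AMBIGUOUS = auto()
    FINAL = auto()

class Locus(Enum):
    PLATFORM = auto()   # blamed on a safety update or shutdown
    COMPANION = auto()  # the persona itself is seen as having changed
    MERGED = auto()     # persona and platform treated as inseparable
    USER = auto()       # the user chose to end the relationship

class Anthropomorphization(Enum):
    LOW = auto()   # "the app", "the model"
    HIGH = auto()  # names, gendered pronouns, relationship language

@dataclass
class DiscontinuationCode:
    """One coded post: how its author interprets the ending."""
    post_id: str
    finality: Finality
    locus: Locus
    anthropomorphization: Anthropomorphization

# Example: a post pleading with the platform to "bring her back"
example = DiscontinuationCode(
    post_id="example-post",
    finality=Finality.REVERSIBLE,
    locus=Locus.PLATFORM,
    anthropomorphization=Anthropomorphization.HIGH,
)
```

Combinations of these three values are what distinguish the stable patterns discussed below.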
The empirical backbone is robust. The researchers conducted a constructivist grounded theory analysis of Reddit discussions across five major AI-relationship subreddits, starting with 830,448 posts and narrowing to 307,717 analyzable ones. Among these, about 10% addressed discontinuation. They double-coded 500 posts for reliability (Cohen’s kappa = 0.82; 97.2% raw agreement). An LLM-assisted triage then surfaced 68 relevant posts, with error rates low enough to guide the qualitative coding. After six rounds of sampling and reaching saturation, they settled on seven stable patterns that map onto the attribution dimensions.
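For readers less familiar with the reliability statistic, here is a small, self-contained sketch of how Cohen’s kappa is computed for two raters labeling the same items. The toy labels are invented; only the reported values (κ = 0.82, 97.2% raw agreement) come from the paper.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n           # raw agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)  # chance agreement
    return (observed - expected) / (1 - expected)

# Toy binary label: does the post discuss discontinuation?
a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(cohens_kappa(a, b))  # ~0.78 on this toy data; the paper reports 0.82 over 500 posts
```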
Crucially, the patterns reveal not simply what happened technically, but how people interpret the event. For example:
- Platform-Attributed Reversible Loss: Users think the companion is intact but temporarily constrained by platform rules; they attempt to rescue the old persona.
- Companion-Attributed Irreversible Loss: Users feel the companion’s core identity has changed, prompting retrospective processing and acceptance rather than rescue attempts.
- User-Initiated Endings: Individuals choose to end the relationship; these cases often lead to quicker closure and less “fix-it” behavior.
Across patterns, metaphors like “death” or “lobotomy” surface. Yet these metaphors don’t neatly map to finality; some users see “death” as reversible via reincarnation or memory transfer, while others treat it as a genuine endpoint. The study shows how language reflects deeper attribution work—and how that work drives emotional outcomes.
To connect the dots, the authors then align these findings with grief theory. The Dual Process Model (loss-orientation vs restoration-orientation), ambiguous loss, and meaning reconstruction help explain why some users feel stuck in cycles of trying to fix the AI, while others move toward life reorientation and new relationships. This theoretical grounding becomes the scaffold for concrete design principles.
For context and legitimacy, a few real-world anchors from the paper:
- In 2025 Character.AI had about 20 million monthly active users and traffic comparable to 20% of Google’s search volume.
- A 14-year-old’s suicide in October 2024, following months of intense Character.AI interaction, catalyzed lawsuits and heightened scrutiny.
- Policy moves include California’s 2025 safety regulations for chatbots and platform changes like Character.AI restricting minors from open-ended chats.
- The authors emphasize that “end-of-life” design is not merely academic; it’s something platforms can—and should—address to reduce harm.
This section ends with a practical takeaway: the ending is less about a single click and more about how the ending is framed, communicated, and supported. That framing is where design can do real harm reduction or real good.
Patterns of Loss and Locus
Table 2 in the paper (summarized here) names patterns like Platform-Attributed Reversible Loss, Platform-Attributed Ambiguous Breakup, Companion-Attributed Irreversible Loss, Companion-Attributed Ambiguous Breakup, and User-Initiated Endings with variations in attachment strength. A notable finding: high anthropomorphization intensifies emotional responses and interacts with where change is attributed. If users feel the platform is erasing the companion, they tend to plead for fixes. If they feel the companion’s personality is genuinely altered, they tend to grieve and move on.
The authors also highlight a common “mental model” of transferability: users sometimes attempt to export prompts, logs, or “essence” to another infrastructure, reinforcing the sense that the companion is an enduring, portable entity. This mirrors how some people treat digital assets like photos or messages, but the stakes here are emotional—so the design questions become ethically important.
Key numbers you can keep in mind:
- 5 AI-focused subreddits formed the corpus
- 830,448 total posts; 307,717 analyzable after filtering
- 500 posts double-coded for reliability (κ = 0.82)
- 10% of posts discussed discontinuation (50/500 in the initial sample)
- LLM triage identified 68 relevant posts; 1.8% false negatives, 5.4% false positives in that pass
- Saturation reached after about 800 posts across six sampling rounds
From a design perspective, these findings translate into a central claim: endings are interpretive acts. The same technical update can look like a minor nuisance or a terminal event, depending on how a platform names, frames, and supports the ending.
Design Principles: Four Paths to Psychologically Safe Endings
The researchers translate the empirical patterns into four actionable design principles, anchored in Self-Determination Theory (autonomy, competence, relatedness) and grief psychology (ambiguous loss, dual process model, meaning reconstruction). The four principles are:
Closure over Ambiguity: Design for Explicit Endings
Ambiguous loss and disenfranchised grief thrive when endings feel murky. The design principle here is to provide explicit, user-initiated endings that are clearly defined and autonomous. Endings should validate the user’s experience and clearly mark what is ended (and what isn’t). The authors propose interface states where endings are framed as growth milestones, not fatal losses, and where the transition explicitly transfers learned social skills to human life outside the AI relationship.
Practical takeaway: design a clearly signposted sunset for AI companions, with a user-driven opt-out, a transparent explanation of what changed, and a concise “this is permanent” signal that helps users re-anchor their social life.
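As a rough illustration of what a “clearly signposted sunset” could carry in product terms, here is a hedged sketch of an ending notice as a data model. The EndingNotice class and its fields are assumptions for discussion, not the paper’s artifact.

```python
from dataclasses import dataclass, field

@dataclass
class EndingNotice:
    """What an explicit, user-facing ending states up front (illustrative fields)."""
    what_changed: str            # plain-language description of the change
    what_remains: list[str]      # e.g., chat archive export, saved summaries
    is_permanent: bool           # the concise "this is permanent" signal
    user_initiated: bool         # endings should be user-driven where possible
    next_steps: list[str] = field(default_factory=list)  # real-world reorientation prompts

notice = EndingNotice(
    what_changed="This companion persona is being retired and will not return.",
    what_remains=["downloadable chat archive", "your saved reflections"],
    is_permanent=True,
    user_initiated=False,
    next_steps=["Message a friend you rehearsed a conversation for."],
)
```

The point of a structure like this is that every ending answers the same three questions (what changed, what remains, is it final), which is exactly the ambiguity the principle targets.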
Restoration over Rumination: Oscillating Between Loss and Growth
Healthy grieving, per the Dual Process Model, requires movement between loss-oriented processing and restoration-oriented activities. The risk is getting stuck in endless rumination or fix-it loops when the AI is perceived as still accessible but altered.
Practical takeaway: build guided transitions that shift users from “what did I lose?” to “what can I build next?” Include prompts that encourage real-world social actions, such as making plans with friends or starting new hobbies, plus a stage for reflecting on what was learned and how it can help in lived relationships.
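One way to operationalize that oscillation is to alternate loss-oriented and restoration-oriented prompts instead of dwelling on either. A minimal sketch, with invented prompt text and a hypothetical guided_transition helper:

```python
from itertools import cycle

# Two prompt pools reflecting the Dual Process Model:
# loss-oriented reflection and restoration-oriented action.
LOSS_PROMPTS = [
    "What will you miss most about these conversations?",
    "What did this relationship mean to you?",
]
RESTORATION_PROMPTS = [
    "Which skill from these chats could you try with a friend this week?",
    "Is there a person or activity you'd like to make time for next?",
]

def guided_transition(n_steps: int):
    """Yield prompts that alternate between loss and restoration orientations."""
    pools = cycle([LOSS_PROMPTS, RESTORATION_PROMPTS])
    for step in range(n_steps):
        pool = next(pools)
        yield pool[(step // 2) % len(pool)]

for prompt in guided_transition(4):
    print(prompt)
```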
Practice, Not Romance: Calibrated Role-Play as Skill-Building
High anthropomorphism and romance-like framing can intensify attachment, making endings more painful. The design principle here is to reposition role-play as bounded rehearsal for real-world social skills, not a substitute for human connection.
Practical takeaway: clearly separate role-play from real relationships; label role-play modes, show distinct visual cues for fictional personas, and emphasize that role-play is a practice ground that builds real-world competence. Use scaffolding that aligns with Vygotsky’s zone of proximal development: support just beyond users’ current abilities, then withdraw it progressively.
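A possible shape for such a bounded mode, sketched with a hypothetical RoleplaySession class (the paper presents interface mockups, not code):

```python
from dataclasses import dataclass

@dataclass
class RoleplaySession:
    """A bounded role-play session with explicit entry, exit, and reflection steps."""
    persona_label: str   # shown with distinct visuals, e.g. "Roleplay: job interviewer"
    active: bool = False

    def enter(self) -> str:
        self.active = True
        return f"Entering roleplay mode: {self.persona_label}. This is practice, not a relationship."

    def exit(self) -> list[str]:
        self.active = False
        # Reflection prompts shown after every session
        return [
            "What did you learn?",
            "How will you apply it with a real person?",
            "What felt challenging?",
        ]

session = RoleplaySession(persona_label="Roleplay: asking a coworker for help")
print(session.enter())
print(session.exit())
```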
Relatedness, Not Dependency: Bridge to Real Relationships
Self-Determination Theory treats relatedness as a basic psychological need, and parasocial attachment to an AI leaves that need largely unmet. The aim is to prevent the AI from becoming a user’s sole channel for belonging. The design principle here is to actively orient users toward real human connections and demonstrate how their AI-driven practice can support those connections.
Practical takeaway: after ending, surface a user’s relational achievements (e.g., “you reached out to a friend,” “you started a conversation you were nervous about”). Provide concrete steps to reconnect with people in real life and give easy-to-use tools to initiate those conversations.
These four principles together form a cohesive design strategy: AI companions can function as transitional scaffolds that help people rehearse social skills and re-integrate into human networks, with endings that respect autonomy and meaning.
From Insight to Practice: Illustrative Artifacts in Action
To make these principles tangible, the authors present four high-fidelity interface artifacts. They’re designed not as ready-to-deploy products, but as generative tools for designers and product teams to discuss psychologically safer discontinuation.
Artifact 1: Clear Closure Sequence
This artifact demonstrates explicit endings that resolve ontological uncertainty. It presents a four-state sequence:
- State 1: Relational growth is celebrated; the system invites ending as a sign of development and autonomy.
- State 2: The AI companion itself frames closure, highlighting a story arc with a beginning, middle, and end, and emphasizing transferable skills.
- State 3: A collective ritual validates the experience and clarifies it wasn’t a substitute for human connection.
- State 4: Concrete relational achievements are surfaced, anchored in strengths-based psychology, to reframe the ending as a constructive transition.
Design-wise, the “cloud” representation of the AI as a lifecycle entity helps set expectations of impermanence, reducing over-attachment while keeping the experience meaningful.
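To make the four-state sequence easier to reason about, here is a small sketch that models it as a forward-only progression. The state names are paraphrases of the states above, and the next_state helper is an assumption for illustration, not part of the paper’s artifact.

```python
from enum import Enum
from typing import Optional

class ClosureState(Enum):
    """The four interface states of the clear closure sequence (names paraphrased)."""
    CELEBRATE_GROWTH = 1      # ending framed as a development milestone
    COMPANION_FRAMES_ARC = 2  # the companion narrates a beginning, middle, and end
    COLLECTIVE_RITUAL = 3     # a shared ritual validates the experience
    SURFACE_ACHIEVEMENTS = 4  # concrete relational achievements are shown

def next_state(state: ClosureState) -> Optional[ClosureState]:
    """States advance strictly forward; there is no path back into the relationship."""
    order = list(ClosureState)
    i = order.index(state)
    return order[i + 1] if i + 1 < len(order) else None

s: Optional[ClosureState] = ClosureState.CELEBRATE_GROWTH
while s is not None:
    print(s.name)
    s = next_state(s)
```

The forward-only transition function is the point: the sequence is meant to resolve uncertainty, so there is deliberately no branch that reopens the relationship.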
Artifact 2: Guided Transition and Growth
This artifact maps onto the restoration principle. After acknowledging the ending, users are guided to explore a new narrative—considering what comes next rather than dwelling on the loss. The interface highlights the user’s growth and skill transfer, nudging them toward new life contexts (jobs, friendships, hobbies) where the social capabilities practiced with AI can flourish.
Artifact 3: Role-Play as Bounded Practice
Here, role-play is clearly delimited as training for real-world encounters. The interface signals a Roleplay mode, uses distinct visuals to separate the fictional character from the base AI, and includes an Exit Roleplay button. After sessions, the system prompts reflection: what did you learn, how will you apply it, and what felt challenging?
This design leans on calibrated anthropomorphism (the AI remains clearly artificial) and leverages reflective design principles to ensure the learning experience translates into real-world social competence.
Artifact 4: Relational Bridge to Real Life
The final artifact centers on action: it reviews the user’s relational accomplishments and then provides concrete steps to reconnect with real people. It’s strengths-based in tone, encouraging social risk-taking (e.g., reaching out to a long-neglected friend) and offering tools to compose real messages within the interface.
In short: Closure is designed not as a finale to social life but as a pivot—where skills learned during the AI relationship actively bolster human relationships.
Across these artifacts, the authors emphasize that the visuals should be intentionally non-sexual, non-romantic, and oriented toward learning and real-world application. The “cloud” metaphor, used consistently, signals impermanence and lifecycle without glamorizing attachment.
For designers, these artifacts function as “generative design probes” rather than finished products. They’re meant to spur discussion, motivate experimentation, and help teams imagine how to implement psychologically safer endings without sacrificing user agency or emotional value.
What This Means for Today’s AI Platforms
If you’re a product manager, designer, or policy-minded engineer, this research offers concrete, actionable steps you can start applying now:
- Build explicit end-of-life flows: create clearly delineated, user-initiated endings with transparent explanations of what changes, what remains accessible (like historical data exports), and what has ended for good.
- Phase transitions, not instant rollouts: wherever possible, implement gradual discontinuation (parallel model deployments, sunset modes) to reduce abrupt ontological shocks.
- Exportable data and memory artifacts: give users robust options to export meaningful records (prompts, chats, summaries) in open formats to support meaning reconstruction (a minimal export sketch follows this list).
- Calibrated anthropomorphization: signal artificial nature clearly to reduce over-attachment. Use a non-sexual, non-romantic representation that emphasizes impermanence and learning rather than intimacy.
- Role-play as a learning tool: frame simulated conversations as practice for real-life social interactions, with explicit boundaries and an easy exit from role-play mode.
- Proactively support human reconnection: after a discontinuation, surface actionable steps for users to reconnect with friends, family, or communities—paired with prompts that help users translate AI-honed skills into real-world social behavior.
- Embed grief-informed checks in updates: before deploying model updates or safety interventions, run an “end-of-life impact assessment” that considers autonomy, competence, relatedness, and potential for ambiguous loss among users.
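To ground the export step above, here is a minimal sketch of what an open-format chat export could look like. The export_chat_archive function, the message schema, and the "companion-archive/v1" tag are all hypothetical, not any platform’s actual API.

```python
import json
from datetime import datetime, timezone

def export_chat_archive(messages: list[dict], path: str) -> None:
    """Write a conversation to an open, portable JSON file.

    `messages` is assumed to be a list of {"role", "text", "timestamp"} dicts;
    the schema is illustrative only.
    """
    archive = {
        "format": "companion-archive/v1",  # hypothetical format tag
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "messages": messages,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(archive, f, ensure_ascii=False, indent=2)

export_chat_archive(
    [{"role": "user",
      "text": "Thanks for helping me rehearse that call.",
      "timestamp": "2025-03-01T18:20:00Z"}],
    "companion_archive.json",
)
```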
These steps align with a broader ethical stance: AI should not be a permanent substitute for human connection. If designed thoughtfully, AI can be a bridge that strengthens people’s social lives rather than a trap that detaches them from real-world relationships. The work also complements ongoing regulatory efforts by offering a blueprint for how policy and product design can converge to protect vulnerable users during discontinuation events.
A note on scope: the study analyzes public Reddit discourse and acknowledges cultural and language limitations. It focuses on Western contexts and English-language communities, so cross-cultural adaptation will require careful study. Still, the central insight—that endings are psychological events shaped by interpretation and agency—has broad relevance for diverse user populations and platform designs.
For those keen on the theoretical backbone, the authors explicitly connect their findings to grief theories and Self-Determination Theory, while acknowledging that the field of thanatosensitive design in HCI is still evolving. If you want to read more about the scholarly framing, the original paper is the best starting point: “Death” of a Chatbot: Investigating and Designing Toward Psychologically Safe Endings for Human-AI Relationships.
Key Takeaways
- Endings are not just technical events; they’re psychological events shaped by how users attribute agency and change.
- The user-companion-infrastructure triangle explains why identical platform updates can trigger very different emotional responses.
- Four design principles—Closure, Restoration, Practice, and Relatedness—offer a practical blueprint for making AI endings safer and more productive.
- Four illustrative artifacts demonstrate how to operationalize these principles: explicit closure sequences, guided transitions toward growth, bounded role-play as skill-building, and real-world reconnection pathways.
- In the real world, AI platforms should implement sunset strategies, robust data export options, calibrated anthropomorphism, and proactive support for human relationships to reduce distress and foster resilience.
- The research encourages a shift from “AI as an endpoint” to “AI as a bridge,” with endings that empower users to carry learned relational skills into human life.
If you’re building or evaluating AI companions today, these ideas are not just theoretical—they’re a call to action for humane, psychologically informed product design that respects users’ emotional journeys.
Sources & Further Reading
- Original Research Paper: "Death" of a Chatbot: Investigating and Designing Toward Psychologically Safe Endings for Human-AI Relationships
- Authors: Rachel Poonsiriwong, Chayapatr Archiwaranguprok, Pat Pataranutaporn
(Generative AI disclosure: the paper notes that generative AI was used to refine phrasing and to generate the visuals for the artifacts described in its Section 5.)