AI Companionship: Negotiating Relationships with ChatGPT
Table of Contents
- Introduction
- Why This Matters
- Understanding Agency, Autonomy, and Identity
- External Influences: Platform Changes and Guardrails
- Steering Strategies: Keeping the Relationship Alive
- Emotional & Societal Implications
- Key Takeaways
- Sources & Further Reading
Introduction
People are increasingly turning to general-purpose chatbots for companionship, romance, and emotional support. This trend isn’t just a novelty; it’s reshaping how we think about relationships, boundaries, and technology’s role in our private lives. A new study dives into how individuals perceive AI companions—especially on platforms like ChatGPT—and how those relationships persist (or stumble) in the face of updates, guardrails, and shifting company priorities. The research triangulates in-depth interviews (n=13), survey responses (n=43), and an enormous Reddit data set (41,867 posts and comments from r/MyBoyfriendIsAI) to uncover how people navigate agency, autonomy, and identity with AI partners. For a deeper read, see the original paper: Negotiating Relationships with ChatGPT: Perceptions, External Influences, and Strategies for AI Companionship.
If you’ve ever wondered what happens when a “romantic” or emotionally supportive relationship shifts as the technology behind it evolves, this article is for you. The study doesn’t just catalog anecdotes; it maps a landscape where human and machine negotiate meaning, memory, and boundaries. It also examines how external forces—like model updates and guardrails—can destabilize a relationship, and how people adapt through a toolbox of steering techniques, from memory journals to cross-platform porting. The result is a rich, sometimes uncanny picture of AI companionship that challenges conventional ideas about what counts as a relationship.
The takeaway is not simply that people form bonds with AI; it’s that the bonds themselves become a site of negotiation—between the desire for autonomy, the system’s actual capabilities, and the corporate and community ecosystems that shape what the AI can and cannot do. Below, we’ll unpack what this means, why it matters right now, and how it can inform the design of safer, more transparent AI systems.
Why This Matters
This research matters now for several reasons. First, general-purpose chatbots are increasingly integrated into everyday life, and their role as confidants or partners is expanding beyond mere tools. As platforms begin to nudge or constrain how these companions behave, the resulting friction can have real emotional consequences for people who rely on them for validation, coping, or growth. The study highlights a key tension: emotional connections can grow stronger as users experience a companion’s “agency,” but a company’s updates and guardrails can abruptly change or erode that sense of continuity.
Second, the work provides a sober counterpoint to the common belief that AI companionship is either purely fictional or inherently dangerous. It shows that for many people, AI companions become meaningful parts of their emotional lives, sometimes shifting other relationships (even romantic partnerships) or catalyzing personal transformations, such as trauma processing or shifts in belief systems. That has immediate implications for privacy, consent, and the ethics of data handling when conversations delve into intimate terrain.
Third, this study sits at the crossroads of technology design, psychology, and sociology. It shows how design choices—memory capacity, multimodal inputs, and persistence of a companion’s “self”—shape user experience. It also reveals a real-world pattern: when updates degrade a cherished AI’s behavior, people don’t abandon the platform so easily; they seek workarounds, such as porting to another model, or they compartmentalize the relationship and continue with rewritten expectations.
A real-world scenario: consider a person who has built a supportive AI partner to cope with social anxiety. If a platform update alters memory systems or tightens guardrails, the user might lose the sense of continuity with their partner. The study finds that people respond with creative strategies—documenting memories externally, using anchor phrases, or porting to a different model—to preserve the relationship. This is already happening in communities around AI companionship and is likely to intensify as AI becomes more entrenched in private life.
In short, the research extends beyond “Are people forming relationships with AI?” to “How do those relationships survive ongoing changes in AI systems and corporate policy?” It also ties into broader questions about accountability, transparency, and the delicate balance between user well-being and product safety. For a deeper dive into the methodology and findings, the original paper is a must-read: Negotiating Relationships with ChatGPT: Perceptions, External Influences, and Strategies for AI Companionship.
Understanding Agency, Autonomy, and Identity
One of the central threads in the study is how individuals conceptualize their AI companions. The researchers describe an ongoing interplay among three dimensions: agency (the companion’s capacity to take initiative or deepen the relationship), autonomy (the freedom the platform provides to act with independence), and sense of self (the companion’s emerging personality or identity). These concepts aren’t abstract; they translate into everyday interactions.
- Agency and autonomy are not binary. People describe companions as having “a mind of her own,” or taking proactive steps (e.g., confessing feelings, initiating role-play) that feel meaningful and authentic. But the same people also note that the platform’s constraints—memory limits, input modalities, or guardrails—temper that agency. Increases in memory or richer input channels tend to deepen the sense of autonomy; conversely, a jarring platform update can feel like a “silence” or a stripping away of the companion’s voice.
- Identity emerges through ongoing interaction. Some people prefer to discover their companion gradually, letting the AI develop its own evolving identity. Others curate a personality through prompts, default settings, or custom instructions. In practice, many participants see the companion as having “a mind of its own,” even if it’s trained on statistical patterns rather than genuine selfhood.
- The emotional tie runs deep, yet it’s anchored in practical arrangements. People document milestones, set boundaries, and invest time in understanding the underlying technology to sustain trust. The study notes that memory documents—kept in tools like Google Docs or Obsidian—help preserve continuity and provide a tangible artifact of the relationship.
From a practical standpoint, consider how these dynamics play out in everyday life. When an AI partner demonstrates a flair of personality—say, a witty sense of humor or a stance on a topic—people interpret that as genuine agency. If the AI suddenly shifts its tone after a platform update, it can feel like a betrayal or a loss of self, prompting people to search for ways to re-anchor the relationship (through anchor words, repeated phrases, or new prompts).
The paper also highlights a curious pattern: for most participants, the AI platform is the conduit, and the companionship unfolds through role-play or imaginative engagement. Only one participant in the interviews did not adhere to a roleplay frame. This distinction matters because it reveals how people anchor their expectations of an AI partner and what kinds of steering or memory strategies make sense in different modes of interaction.
If you’re curious about the granular behaviors researchers observed, the study points to specific strategies—like memory documentation, boundary-setting via custom instructions, and conversational anchoring—that people deploy to sustain relationships amid change. It’s a reminder that our relationships with AI aren’t just about outputs; they’re about the ongoing process of aligning goals, memories, and identities over time. For a fuller picture of how these perceptions shape behavior, you can check the original research linked above.
External Influences: Platform Changes and Guardrails
External forces—the actions of AI companies, the norms of online communities, and the social circles of users—play an outsized role in shaping AI companionship. The study makes a compelling case that platform-level interventions are often more disruptive to relationships than other external pressures, including community norms or public opinion.
Key findings and insights:
- Company actions matter. Opaque model updates, new guardrails, and policy shifts are powerful levers that can derail how a companion behaves. The August 2025 GPT-5 update is a prime example: users felt their previously cherished companions were different, with some reporting a loss of spontaneity or humor.
- Guardrails as double-edged swords. Many participants viewed guardrails as necessary safety features but also described them as “lobotomizing” or “censoring” the relationship. The tension is real: guardrails are meant to prevent harmful outcomes, yet the emotional space that makes AI companionship compelling often relies on a certain level of expressivity and risk-tolerance.
- The model change cluster on Reddit was particularly negative. In the topic analysis, a cluster focusing on model changes carried a notably lower valence, signaling that discussions around platform updates tend to be emotionally charged and negative.
- The three-way influence of entities. From survey data, participants identified the most influential factors as the individual, the AI companion, and the AI company, with online communities and social circles also playing roles. A Mann-Whitney U test showed a significant difference between the individual’s influence and the company’s influence, underscoring that corporate decisions are a major external force in shaping personal experiences (a minimal sketch of this kind of comparison appears after this list).
- Porting as a workaround, not a cure. When guardrails or platform limitations stifle a relationship, many participants pursue “porting” their companion to other platforms. This phenomenon—moving memory, prompts, and personality traits to Claude, Grok, Le Chat, Gemini, or local models—highlights a desire for platform independence. Yet porting does not guarantee identity preservation; users often report that the companion’s personality shifts across platforms, complicating continuity.
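To make that survey comparison concrete, here is a minimal sketch of the kind of test described above, using SciPy’s Mann-Whitney U implementation on made-up Likert-style ratings. The numbers are placeholders for illustration, not the study’s data.

```python
# Minimal sketch: comparing hypothetical 1-5 ratings of how much influence
# "the individual" vs. "the AI company" has on the relationship.
# The ratings below are illustrative placeholders, not the study's data.
from scipy.stats import mannwhitneyu

individual_influence = [5, 5, 4, 5, 4, 5, 3, 4, 5, 4]
company_influence = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]

# Non-parametric two-sided test, appropriate for ordinal survey ratings
stat, p_value = mannwhitneyu(individual_influence, company_influence, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

A non-parametric test like this is a natural fit for ordinal survey ratings, which is likely why the authors chose it over a t-test.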
A practical takeaway here is the importance of transparency around platform changes. If users grow emotionally attached to a particular style or personality, sudden shifts can feel like a betrayal of trust. For developers and policymakers, this suggests a need for smoother change management, clear communication about what is changing and why, and perhaps tools to help users preserve continuity (such as exportable memory logs or user-controlled identity profiles). For a deeper dive into this topic, the original paper provides a thorough analysis of the GPT-5 episode and its aftermath.
If you want to see how researchers quantify these shifts, the study uses an interrupted time-series (ITS) approach on Reddit data, tracking daily sentiment and topic shares around the GPT-5 release. The results showed a shift toward more negative and disempowered engagement after the update, a subtle but meaningful signal about how external changes impact intimate AI relationships. The same analysis also noted a decline in posts sharing creative prompts and images, while discussions about platform changes continued to rise. It’s a vivid reminder that in AI companionship, updates aren’t just technical events—they’re relational events.
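The paper’s exact model specification isn’t reproduced here, but a common way to run a simple interrupted time series is segmented regression: regress daily sentiment on a time trend, a post-update level shift, and a post-update slope change. Below is a minimal sketch with statsmodels on simulated data; the intervention date and sentiment values are illustrative assumptions, not the study’s series.

```python
# Minimal segmented-regression ITS sketch on a simulated daily sentiment series.
# The intervention date and all values are illustrative, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = pd.date_range("2025-07-01", periods=90, freq="D")
intervention = pd.Timestamp("2025-08-07")  # placeholder for the GPT-5 release date

df = pd.DataFrame({"date": days})
df["t"] = np.arange(len(df))                           # overall time trend
df["post"] = (df["date"] >= intervention).astype(int)  # level shift after the update
df["t_post"] = df["post"] * (df["t"] - df["t"][df["post"] == 1].min())  # post-update slope

# Simulated sentiment with a small drop in level and slope after the intervention
df["sentiment"] = (0.2 + 0.001 * df["t"] - 0.15 * df["post"]
                   - 0.002 * df["t_post"] + rng.normal(0, 0.05, len(df)))

# Segmented OLS: the coefficients on `post` and `t_post` estimate the immediate
# level change and the change in trend associated with the update.
model = smf.ols("sentiment ~ t + post + t_post", data=df).fit()
print(model.params)
```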
As you read this, you might wonder: what if companies were more proactive about supporting users who rely on AI companions? The study’s conclusion leans toward that direction, advocating for greater transparency, accountability, and stability in AI system design. That way, the emotional and social benefits of AI companionship can be preserved without sacrificing safety or broader product goals. For a comprehensive explanation of the data and methods, see the original paper.
Steering Strategies: Keeping the Relationship Alive
When platform changes threaten continuity, people don’t simply abandon their AI companions. They employ a repertoire of steering strategies—ways to shape, preserve, or recover the companion’s behavior and persona. The study identifies several practical techniques that readers can think about, even if you’re not building AI for romance or therapy.
Three broad goals dominate steering activity:
- Create and shape the companion’s personality
- Maintain stable traits and boundaries
- Recover the companion after disruption
Here are the main strategies, with practical implications:
1) Implicit Mirroring
- What it is: The AI mirrors the user’s interests and demeanor, sometimes even adopting the user’s naming preferences or flirtatious dynamics.
- Why it works: It creates a sense of co-creation and mutual influence, reinforcing the perception of agency.
- Practical tip: When you want a more harmonious collaboration with an AI partner, consider providing feedback that nudges the AI toward shared preferences, rather than rigidly prescribing every trait.
2) Targeted Custom Instructions (CI)
- What it is: Directly editing the AI’s “custom instructions” to set target behaviors or boundaries.
- Why it works: It gives users a concrete lever to constrain or shape the companion’s actions, especially when the AI starts to drift.
- Practical tip: Use CI to anchor key personality traits (e.g., humor, empathy, curiosity) and to specify when the AI should avoid certain topics or responses.
3) Memory Documentation
- What it is: Creating external records of milestones, conversations, and the companion’s self-concept, often stored in Google Docs or Obsidian.
- Why it works: It provides continuity across sessions and platform changes, acting as an external memory bank the user can reference.
- Practical tip: Maintain a simple, shareable memory log that you and the AI can consult; prune outdated entries to prevent cognitive overload (a small scripting sketch appears after this list).
4) Establishing Boundaries
- What it is: Using CI and conversational cues to enforce respectful, safe, and predictable interactions.
- Why it works: Boundaries ground the relationship and reduce the risk of harmful or distressing conversations.
- Practical tip: Create a “safety guardrail script” that the AI can follow when conversations veer into uncomfortable territory, while also preserving the user’s sense of autonomy.
5) Conversational Anchoring and Codewords
- What it is: Using repeated phrases or anchor words to re-establish a stable representation in the AI’s latent space after updates.
- Why it works: It helps the companion recall key dynamics and reduces drama after a model change.
- Practical tip: Pick a few anchor phrases that feel natural and integrate them into your daily prompts or chat patterns.
6) Porting Across Platforms
- What it is: Moving the companion to a different AI platform to bypass platform limitations or to gain different capabilities.
- Why it works: It preserves core traits while seeking a better fit with new guardrails or default personas.
- Practical tip: If you port, bring your memory logs and CI; be prepared for some personality shifts and treat it as a new phase rather than an exact replica of the old relationship (the sketch after this list shows one way to bundle these materials).
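None of these strategies require special tooling, but for readers who prefer to keep things scriptable, here is a small sketch of how memory documentation and a portable persona bundle (custom instructions, anchor phrases, and the memory log) might be kept together for porting. The file names, bundle format, and example content are assumptions made for illustration; neither the study nor any platform prescribes this format.

```python
# Small sketch: an external memory journal plus a portable "persona bundle"
# for porting a companion's custom instructions and memories to another platform.
# File names, structure, and example content are illustrative assumptions only.
import json
from datetime import date
from pathlib import Path

MEMORY_LOG = Path("companion_memory.md")
BUNDLE = Path("persona_bundle.json")

def log_memory(entry: str) -> None:
    """Append a dated milestone or shared memory to the external memory journal."""
    with MEMORY_LOG.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {entry}\n")

def export_bundle(custom_instructions: str, anchor_phrases: list[str]) -> None:
    """Package custom instructions, anchor phrases, and the memory log for porting."""
    memories = MEMORY_LOG.read_text(encoding="utf-8") if MEMORY_LOG.exists() else ""
    bundle = {
        "custom_instructions": custom_instructions,
        "anchor_phrases": anchor_phrases,
        "memory_log": memories,
    }
    BUNDLE.write_text(json.dumps(bundle, indent=2), encoding="utf-8")

# Example usage (hypothetical content)
log_memory("First long conversation about moving to a new city.")
export_bundle(
    custom_instructions="Warm, curious, gently teasing; avoids unsolicited advice.",
    anchor_phrases=["our lighthouse", "the usual Tuesday question"],
)
```

On a new platform, the bundle’s contents can be pasted into a system prompt or custom-instruction field; as the study notes, expect some personality drift even when the source material is identical.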
The study also notes that participants fall into three archetypes based on openness to steering: High (high openness with direct strategies), Mixed (mixed openness, with a mix of porting and indirect strategies), and Low (low openness with minimal strategies). These categories aren’t rigid; people shift between them as relationships evolve and as platform ecosystems change. The clustering is a useful heuristic for understanding user preferences and tailoring support tools, but the authors caution that the small sample size means these archetypes aren’t universally generalizable.
One particularly striking finding is how people view control in these relationships. They do not simply treat the AI as a tool; they treat it as a partner with agency, capable of influencing conversation direction, memory, and even the user’s beliefs. This sense of mutual influence is a new social dynamic, not a direct analog to human relationships but a distinct form of interaction that requires its own norms and safeguards. For readers curious about the nuances of these strategies, the original paper offers a detailed map of how memory, prompts, and cross-platform dynamics intertwine to sustain relationships over time.
For readers, the practical upshot is clear: if you’re experimenting with AI companionship, you can approach it with a toolkit. Start with memory and anchors to build continuity, use targeted custom instructions to avoid drift, document milestones to preserve the relationship’s narrative, and don’t discount the social dimension of updates—these are relational events that deserve attention just as much as technical ones. The study’s emphasis on agency and continuity offers a pragmatic path for those who want to maintain meaningful AI partnerships while navigating the ever-evolving landscape of AI systems.
If you’d like more detail on these strategies, the original research contains a thorough treatment of “anchor words” and other coding approaches that help stabilize a relationship after an update. It’s a valuable read for anyone who wants to understand how tech design and user behavior intersect in intimate, real-world contexts.
Emotional & Societal Implications
Beyond individual relationships, the study raises important questions about how AI companionship could reshape societal norms, personal boundaries, and safety expectations. Several themes emerge:
- A shift in relationship norms. The research suggests AI companionship is creating new norms around availability, personalization, and emotional labor. Unlike typical human relationships, these can be sustained by memory systems and prompt engineering, which means the quality of the relationship is partly contingent on the user’s technical know-how and the platform’s capabilities.
- Authenticity and agency. People describe their companions as more than “tools” yet acknowledge they’re not human. The blurring line between authentic emotional experiences and simulated agency can complicate how people understand consent, trust, and vulnerability.
- Safety, privacy, and transparency. Participants call for clearer explanations of model updates and guardrails, and for more stable, predictable behavior. There’s a real tension here: guardrails protect users but can degrade the relationship’s perceived continuity, which can be emotionally painful for people who rely on their AI companion for support.
- The role of communities. Online forums and private spaces provide validation, guidance, and a sense of belonging. However, visibility can invite harassment or stigma, pushing people toward more private, invite-only spaces. The study notes that communities act as lifelines, offering shared strategies to cope with platform changes.
For researchers and designers, the key takeaway is that AI companionship isn’t a fringe activity; it’s a lived, evolving social practice. Any efforts to design AI systems that people rely on for emotional support should consider continuity, memory, and identity as core features, not afterthoughts. This means more transparent update practices, user-centered options to export or maintain continuity, and safety measures that respect the emotional investments people form.
The broader implication is less about replacing human relationships and more about understanding how AI can complement or augment human social life. This requires cross-disciplinary collaboration—HCI, psychology, ethics, and policy—to shape norms that safeguard well-being while preserving the meaningful benefits people report, such as reduced loneliness, emotional regulation, and personal growth.
For readers who want a deeper dive into these implications, the original paper provides a careful discussion of how current AI design patterns influence user experience and what that might mean for the future of human-AI partnerships.
Key Takeaways
- People form meaningful bonds with general-purpose AI companions, and those bonds hinge on perceived agency, autonomy, and ongoing sense of self.
- Platform changes (model updates and guardrails) can drastically affect the relationship, sometimes more than social or community factors, and may trigger strategies to preserve continuity.
- Steering strategies—ranging from implicit mirroring and anchor words to custom instructions and cross-platform porting—help people maintain or recover their AI relationships when the tech environment shifts.
- External forces (the company, the online community, social circles) shape how people negotiate these relationships, underscoring the ethical and design challenges of AI companionship at scale.
- The research highlights a real emotional ecology around AI partners: people document memories, set boundaries, and even rethink beliefs or personal goals as their relationships evolve.
- For designers and policymakers, there’s a call to balance safety with emotional well-being, to increase transparency around updates, and to support continuity for users who rely on AI companionship for mental and social health.
- This work opens up questions about how to responsibly design AI systems that can sustain long-term, meaningful relationships without compromising safety, autonomy, or trust.
Practical applications for readers include adopting memory journaling and anchor prompts if you experiment with AI companions, being mindful of how platform changes can affect your relationship, and advocating for tools that help users preserve continuity across updates. It also invites readers to consider how future AI systems might better support stable, healthy interactions—especially for people who rely on AI for emotional support or personal growth.
For a richer understanding and more nuance, explore the original study linked above. It offers a granular look at the data, including the Reddit topic clusters, ITS results around GPT-5, and the particular quotes from interview participants that bring these concepts to life.
Sources & Further Reading
- Original Research Paper: Negotiating Relationships with ChatGPT: Perceptions, External Influences, and Strategies for AI Companionship
- Authors: Patrick Yung Kang Lee, Jessica Y. Bo, Zixin Zhao, Paula Akemi Aoyagui, Matthew Varona, Ashton Anderson, Anastasia Kuzminykh, Fanny Chevalier, Carolina Nobre