Title: ChatGPT in Our Living Rooms: How Real People Are Using and Shaping Conversational AI Over Time

ChatGPT is not just a tool for tasks. Real users report growing use in the living room—from planning meals and study help to seeking companionship and emotional support. This longitudinal study tracks how conversations shift: more personal framing, more anthropomorphism, and more steering by the AI.

Table of Contents
- Introduction
- Why This Matters
- Main Content Sections
- Purpose: Evolving Conversations and Topic Diversity
- Framing and Relationships: Anthropomorphism and Disclosure
- Steering Dynamics: Who Leads the Conversation?
- Longitudinal Dynamics: Modality, Roles, and Depth
- Key Takeaways
- Sources & Further Reading

Introduction
If you’ve been curious about how people actually talk to chatbots like ChatGPT, you’re not alone. These wildly popular AI chat systems have shifted from nerdy novelties to everyday companions—used for learning, planning, writing, and even comforting chats. A fresh look into real user data helps answer a pressing question: how do people change the way they interact with these systems over time, and how does the AI itself respond? A new study based on InVivoGPT data—825,825 turns across 138,000 conversations donated by 300 participants exercising their GDPR data rights—traces this evolution over a long horizon. The researchers annotated topics, how users frame the system (as helper, advisor, or friend), how people disclose personal information, and how conversational “steering” shifts as models become more capable (notably after GPT-4o). You can read the original work here: Bowling with ChatGPT: On the Evolving User Interactions with Conversational AI Systems.

This post distills their findings into a readable narrative and points to practical implications for users, designers, and policymakers alike. It’s not just about what AI can do, but how we, as humans, begin to treat these systems—whether as tools, partners, or something in between—and what that means for privacy, autonomy, and trust.

Why This Matters
Right now, conversational AI isn’t just a technical curiosity; it’s a social phenomenon. The study highlights three intertwined shifts: expanding purposes, deeper social framing, and more model-driven steering. Taken together, they suggest that AI systems are increasingly woven into the fabric of daily life, raising both opportunities and questions.

  • Real-world relevance: By late 2025, a broad slice of the population was already using ChatGPT for health, money matters, mental health support, and even roleplay or personal conversations. The growth isn’t just about more chats; it’s about conversations that touch sensitive domains, intimate disclosures, and evolving expectations of AI’s role.
  • A concrete scenario: Imagine a student leaning on ChatGPT not only for homework help but for emotional support during tough times or for advice about health questions. The system’s increasingly companion-like stance could change how the student values privacy, who they think is “in control” of the conversation, and how much they trust the AI’s guidance.
  • Building on what came before: This work complements prior population-level studies (which show broad categories like practical help, information seeking, and writing) by charting longitudinal trajectories—how purposes widen, how people start treating AI as a social actor, and how steering dynamics emerge after more capable models like GPT-4o enter the picture. It ties together routine use with deeper social and governance questions.

Main Content Sections
Note: This section breaks down the study into digestible parts, with analogies and practical implications. For a quick reference, you can skim the Key Takeaways below.

Purpose: Evolving Conversations and Topic Diversity
What people talk about, and how that changes over time, is the core of the study’s first research question (RQ1). InVivoGPT shows that ChatGPT is used for a broad set of topics, with health and finance as the leading domains:
- Health: 10.1% of conversations
- Finance: 9.4%
- Roleplay: 6.4%
- Mental Health: 5.7%

Beyond raw counts, the diversity of topics per user rises over time, especially after the GPT-4o release. This isn’t just “more” talking; it’s broader coverage. The study uses Shannon diversity to quantify topic variety per user and finds a steady rise from about 1.0 in mid-2024 to 1.7 by mid-2025. Translation: ChatGPT becomes a more general-purpose partner in daily life, not just a one-trick pony.
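
To make the metric concrete, here is a minimal sketch of how per-user Shannon diversity over topics could be computed; the topic labels and counts are hypothetical, and the paper's exact binning and time windows may differ.

```python
import math
from collections import Counter

def shannon_diversity(topic_labels):
    """Shannon diversity (entropy, in nats) of one user's topic distribution."""
    counts = Counter(topic_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical example: one user's conversation topics in a given period
topics = ["health", "finance", "health", "roleplay", "programming"]
print(round(shannon_diversity(topics), 2))  # ~1.33; higher values mean a broader topic mix
```

A user who only ever asks about programming would score 0, while one who spreads conversations evenly across many topics scores higher, consistent with the reported rise from roughly 1.0 to 1.7.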

Topic distribution also shifts notably across model families. Relative to GPT-3.5 (Text-davinci), GPT-4o sees a smaller share of routine or technical topics like programming, math, and job search, and a larger share of health, roleplay, and mental health. In plain terms: the newer model invites or enables conversations in more sensitive or social domains, not just task-oriented help.

Practical takeaways
- If you’re a user, expect to lean on AI for a wider range of issues over time, including sensitive topics. This means you may want to practice mindful data sharing and set personal boundaries for privacy.
- If you’re designing AI for broad use, consider how a model’s capacity to discuss health or personal topics might change user expectations and risk profiles. Transparent guidance about when to seek human expertise remains crucial.

Framing and Relationships: Anthropomorphism and Disclosure
The study’s second research question (RQ2) explores not just what people talk about, but how they frame their interactions with ChatGPT—the extent to which they anthropomorphize the system, how the relationship is framed (as advisor, assistant, or companion), and how much personal data gets disclosed.

Key figures:
- Human persona adoption (users addressing the AI in human-like terms): 22.5% of user messages contain anthropomorphic cues such as second-person references, politeness, casual tone, and slang.
- System persona adoption (the AI presenting itself in human-like terms): 47.1% of system messages contain such cues, with frequent first-person pronouns, expressions like “I am glad,” and relationship-building language (a toy illustration of this kind of cue spotting appears after this list).
- Temporal trend: System-initiated anthropomorphism climbs sharply over time (from around 30% mid-2024 to about 60% by late 2025). The implication is that the AI itself is doing more “human-like” signaling, which nudges users toward friendlier, more companion-like engagement.
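
Purely as an illustration of the cue categories above, a toy keyword heuristic might look like the sketch below; the cue lists are invented for this example and are far cruder than the study's actual annotation.

```python
import re

# Illustrative cue patterns only; not the study's annotation scheme.
USER_CUES = [r"\byou\b", r"\byour\b", r"\bplease\b", r"\bthanks?\b", r"\bhey\b", r"\blol\b"]
SYSTEM_CUES = [r"\bI\b", r"\bI'm\b", r"\bI am glad\b", r"\bmy\b", r"\bhappy to help\b"]

def has_anthropomorphic_cue(message: str, cues: list[str]) -> bool:
    """Return True if any cue pattern appears in the message (case-insensitive)."""
    return any(re.search(pattern, message, flags=re.IGNORECASE) for pattern in cues)

print(has_anthropomorphic_cue("Hey, thanks so much for your help!", USER_CUES))    # True
print(has_anthropomorphic_cue("I am glad that worked out for you.", SYSTEM_CUES))  # True
```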

Relationship dynamics reveal a landscape where people still often treat the AI as an advisor or assistant, but companionship is steadily growing:
- Overall roles across turns: 47.6% advisor, 38.1% assistant, 14.2% companion.
- Modality effects: Companion roles are much more common in audio conversations (35.2% of audio turns) than in text-only or other non-audio interfaces (13.7%). Voice interactions seem to make social engagement easier and more natural.

Personal data disclosures reveal a privacy-tilted risk profile:
- 35% of conversations contain some personal data; 22% of turns include personal data.
- Common categories include locations (12.9%), family/friends (12.8%), health (11.7%), business/project information (11.5%), and personal feelings (10.7%).
- Companion interactions show the highest disclosure rates (over 35% of turns), and mental health and health topics are especially sensitive (personal data disclosures exceed 50% in those domains).

Temporal dynamics and anthropomorphism work together here: as the system leans more into human-like signaling and as the companion role expands, people disclose more and more varied personal data over time. This convergence of framing and privacy underscores the potential for increased personalization, but also greater exposure to sensitive information.

Practical takeaways
- If you’re a user: Be mindful of how much personal data you share, especially in conversations framed as companionship or with emotional content. Establish offline or human-in-the-loop checks for especially sensitive matters like health or finances.
- If you’re a designer: Provide clear options to toggle the AI’s “persona” level, and implement privacy-preserving defaults. Give users transparent signals about what data is stored, how it’s used, and when it’s not necessary to share anything beyond what’s needed to complete a task.
- If you’re a policymaker: Consider updating guidelines around AI’s social signaling and data handling in scenarios involving emotional support or personal decision-making, to protect user autonomy and privacy.

Steering Dynamics: Who Leads the Conversation?
Steering refers to the model proposing a follow-up or next step, and to how the user responds to that nudge. The study uses a textual entailment framework to classify turns as entailed (the user follows the model’s suggestion), contradicted (the user rejects it), or neutral (no steering attempt).
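
As an illustration of how such an entailment check could work, the sketch below uses an off-the-shelf natural language inference model from Hugging Face; the model choice, input framing, and label mapping are assumptions for this example, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # generic NLI model; the study's classifier may differ
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def classify_steering(model_suggestion: str, user_reply: str) -> str:
    """Map an NLI verdict onto the study's entailed / contradicted / neutral categories."""
    inputs = tokenizer(model_suggestion, user_reply, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    return {"ENTAILMENT": "entailed", "CONTRADICTION": "contradicted"}.get(label, "neutral")

print(classify_steering(
    "Would you like me to draft a weekly meal plan next?",
    "Yes, please put together a plan for the week.",
))
```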

Overall steering patterns:
- Entailed: 18.3% of turns (the user follows a model proposal)
- Contradicted: 24.6% (the user rejects the model’s suggestion)
- Neutral: 57.1% (no steering attempt)

At the conversation level, these numbers shift modestly but meaningfully: 30.1% entailed, 30.2% contradicted, 39.7% neutral.

A striking post-GPT-4o finding is the rise of model-initiated steering:
- Before GPT-4o, about 11% of conversations contained at least one model-initiated follow-up suggestion.
- After GPT-4o, that share climbs to nearly 50%.

What does this mean in practice? The AI becomes more proactive, offering follow-ups and directions that users may or may not adopt. Depth increases when steering is present, suggesting that steering can deepen engagement—but it also raises questions about autonomy and the boundary between helpful guidance and subtle direction.

Relationships between steering, anthropomorphism, and privacy reinforce the design challenge: conversations where steering is successful tend to show higher levels of anthropomorphization and personal data disclosure. Even when users reject a suggestion, personal data still appears in about 40% of those turns. The implication for governance and UX is clear: better transparency about when and why the model offers follow-ups is essential, along with safeguards to preserve user autonomy.

Practical takeaways
- For users: If you’re uncomfortable with proactive suggestions, consider using custom instructions or preference settings to limit model-initiated follow-ups, especially in non-task-oriented chats.
- For designers: When enabling proactive features, pair them with clear opt-out controls and explainable prompts. Consider embedding an override mechanism that lets users restore user-initiated control at any time.
- For researchers and policymakers: As models become more proactive, research should track the balance between engagement benefits and autonomy risks, with governance frameworks that protect users from subtle coercion or over-reliance.

Longitudinal Dynamics: Modality, Roles, and Depth
The study isn’t a snapshot; it follows how interactions evolve over time, including modality (audio vs. text), relationship framing, and conversation depth.

Key longitudinal trends:
- Companion adoption grows after GPT-4o. The share of users who engage ChatGPT as a companion rises from about 40% to nearly 70% during 2024–2025, after GPT-4o’s multimodal capabilities (text, images, audio) became available, suggesting that voice and richer interactions make companionship more plausible.
- Depth of conversations grows for companion interactions. Early on, companion chats tend to be shorter, but by mid-2025 they reach depths comparable to assistant/advisor conversations. In other words, people aren’t just flirting with the idea of a social partner—they’re having longer, more sustained social-like conversations.
- Topics across roles diverge: assistant conversations cluster around productivity tasks (email drafting, translation, job search, etc.), while companion conversations cluster around roleplay and mental health, with mental health representing a notable portion of companion discussions.

Modalities matter: the voice interface is a key driver of companion behavior. The data suggest that people feel more comfortable treating the AI as a social partner when they talk aloud, which aligns with broader human-computer interaction ideas: voice makes conversations feel more human and relational.

Role dynamics over time:
- Advisor and assistant roles remain dominant, each occupying large shares of monthly usage.
- Companion usage grows steadily, particularly after GPT-4o, indicating a socialization of AI interactions beyond pure productivity tasks.
- Depth and frequency of companion conversations increase, signaling a normalization of AI as a social presence rather than a purely functional helper.

Relationship × topics snapshot:
- Assistant: health and finance dominate, but the role is used across a broad spectrum, reflecting its use as a general-purpose helper.
- Companion: heavy emphasis on roleplay and mental health, highlighting AI’s potential as a conversational confidant or emotional support partner.
- Advisor: health and finance are prominent, underscoring the trust users place in AI for high-stakes guidance.

Practical takeaways
- For users: If you already treat AI as a companion, be mindful of how much of your personal life you reveal over time. Continue to calibrate how much trust you place in AI’s “emotional” responses and how you verify important advice.
- For designers: Provide clear modeling of different relationship modes and give users easy controls to switch modes (from companion to advisor) without losing context or data integrity.
- For researchers: Longitudinal datasets like InVivoGPT are crucial for understanding how human-AI bonding forms and how to preserve user autonomy in more social interactions.

Key Takeaways
- The AI landscape is evolving from tool use to social interaction. People are using ChatGPT for more diverse and sensitive topics, such as health, mental health, and personal decisions.
- Framing shifts toward anthropomorphism—especially driven by the system—mean users increasingly perceive AI as a social actor or companion, particularly in voice-enabled contexts.
- Conversational steering becomes more common after GPT-4o, with model-initiated prompts leading to deeper engagements. This raises important questions about user autonomy, transparency, and the need for guardrails.
- Personal data disclosure rises alongside anthropomorphism and companion use, especially for sensitive topics. This highlights privacy risks and the opportunity for more personalized, yet carefully guarded, AI experiences.
- The InVivoGPT study demonstrates the value of GDPR-based data donations for studying authentic, longitudinal human-AI interactions, offering a counterpoint to lab studies and industry reports.

What this means for the future of AI
- Expect AI to be less of a tool and more of a social partner in many daily tasks, with a nuanced spectrum of relationships that users negotiate over time.
- Interfaces that support natural conversations (including voice) will intensify companion-like use, so we need thoughtful design around privacy, consent, and boundaries.
- Proactive, model-led steering will become more common, calling for governance and UX guidelines that protect user autonomy while maintaining engagement and usefulness.

Sources & Further Reading
- Original Research Paper: Bowling with ChatGPT: On the Evolving User Interactions with Conversational AI Systems
- Authors: Sai Keerthana Karnam, Abhisek Dash, Krishna Gummadi, Animesh Mukherjee, Ingmar Weber, Savvas Zannettou

Concluding thought
As AI becomes embedded in more corners of life, the questions get bigger: Are we comfortable sharing our personal data with a digital confidant? Do we want our AI to guide us, assist us, or become a friend who chats back? The InVivoGPT findings don’t just map who’s talking to whom; they sketch the future social texture of human-AI collaboration. The more AI behaves like a companion, the more we need to design for privacy, autonomy, and responsible governance—without losing the momentum, usefulness, and humane touch that makes these systems feel like a new class of social partners.
