When a Chatbot Feels Human: What Our Love for Conversational Search Means for How We Find Stuff
If you’ve tried a chatbot like ChatGPT for a quick answer, you’ve probably noticed two things: the flow feels easy and friendly, and you might end up trusting what it says a little more than you intended. A recent study dives into why that happens—and what it means for how we search online. The short version: people aren’t just choosing between “faster answers” and “reliable links.” They’re weighing personality, ease of use, and how human-like the tool feels. And that mix can tip us toward overtrust, especially when the information isn’t perfect.
In this post, I’ll unpack the core ideas from the study “Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search” in plain language. I’ll spotlight the two big user groups the researchers found, explain how trust and “human-likeness” shape our preferences, and offer practical takeaways for designers, educators, and everyday users who want smarter, safer search experiences.
Introduction: Why conversational search now?
The core appeal of conversational search is simple: it’s interactive, context-aware, and tailors responses to you. Instead of a long list of links, you get a chat where the system can reformulate queries, ask clarifying questions, retrieve information, and generate a direct answer. Big players in tech are leaning into this approach, with Google and others weaving generative AI into search so that results feel more like a helpful conversation than a string of keyword links.
But there’s a catch. As these tools become more “human” in style and more adept at sounding confident, people can overtrust them. They’re designed to be quick, fluent, and user-friendly, which can make inaccuracies sneak past our skepticism. The study digs into what drives people to adopt conversational interfaces, how much they trust them, and how the sense of being treated like a conversational partner (anthropomorphism or “human-likeness”) affects both acceptance and risk.
Two user groups, two trust stories
One of the study’s most actionable findings is the emergence of two distinct user groups based on how people use the two main tools—ChatGPT and Google:
- Daily Users of Both (DUB): People who actively use ChatGPT and Google every day. They tend to trust ChatGPT more, see it as more human-like, and are more willing to trade factual accuracy for personalization and a smoother conversational flow.
- Daily Users of Google (DUG): People who rely on Google daily but use ChatGPT only occasionally (weekly or monthly). They generally trust Google more than ChatGPT and may value ad-free experiences and the speed of responses, but they still appreciate the conversational vibe of ChatGPT.
A big takeaway here is that ChatGPT isn’t replacing Google for most people. Even among the “daily users of both,” Google still has a foothold. The real story is about relationships: how people feel about the tool (trust and human-likeness) and how that feeling nudges them to tolerate downsides (like occasional factual hiccups) for a better experience.
Trust, human-likeness, and the willingness to trade off facts
Trust isn’t just a dry metric; it’s shaped by how the interface talks to you and how natural the interaction feels. The study found:
- DUB respondents tended to trust ChatGPT more than the DUG group did. They also perceived ChatGPT as more human-like.
- Both groups cited personalization as a key reason to prefer ChatGPT over Google. The conversational flow and interaction ease were especially valued by the daily users of both.
- When asked whether they’d trade off factual accuracy for certain benefits, the groups diverged. DUB folks were more open to sacrificing some factual precision in exchange for ease of use, conversational flow, and personalization. DUG respondents were less inclined to sacrifice factuality, even if they appreciated aspects like an ad-free experience.
Analogy time: thinking in terms of a “friendly tour guide” vs. a “fast librarian”
- DUB users are like people who hire a personable tour guide who knows the city well. You get context, stories, and a tailored itinerary, and you’re willing to overlook a few wrong turns if the trip feels smoother and more enjoyable overall.
- DUG users are more like people who want a fast, reliable librarian who gives you exact, sourced information. They’re less swayed by the chat’s warmth, more by accuracy and transparency.
In practice, this means that for some users, a warm, human-like chat helps them accomplish tasks faster and with less cognitive effort. For others, that same warmth can blur the line between helpful and incorrect when accuracy matters most.
Design, trust, and human-likeness: what people actually like
Beyond the big groups, the study looks at what design features people find appealing in ChatGPT versus Google. Some of the standout preferences:
- Personalization and flow: The ability to tailor responses and maintain a natural, continuous conversation is repeatedly cited as a major advantage of ChatGPT.
- Interaction feel: The sense of a smooth, flowing dialogue—almost like talking to a helpful human—resonates with users, especially those who use both tools regularly.
- Ad-free experiences: The absence of ads in ChatGPT’s interface is appreciated, particularly by Google-heavy users who value a clean, distraction-free experience.
- Transparency and sources: The study notes that users see Google as advantageous for source transparency and detailed factual checks. This is a reminder that many people still want verifiable information, especially when accuracy is critical.
So, while ChatGPT’s conversational style is a win for engagement, the practical edge often comes from a cleaner experience that avoids ad clutter and supports a natural back-and-forth. On the flip side, Google’s strength lies in clarity about sources and faster direct answers for certain tasks.
Overtrust and why it matters
Overtrust is the risk that arises when you rely on a tool that sometimes errs but feels reliable and human. The study highlights several dynamics at play:
- Anthropomorphism increases trust but can mask risk. The human-like vibe makes people more forgiving of errors, which can be dangerous for topics where accuracy is non-negotiable (like medical or safety information).
- People may not realize they’re overtrusting. Some users fall into a trap where they trust the assistant’s tone and flow, even when the platform has produced questionable outputs.
- Demographics matter. Middle-aged adults (roughly 30–55) tend to trust ChatGPT more and may be more susceptible to overtrust, especially if they’re using it for personalization. Older adults (55+) show lower usage but aren’t necessarily the biggest believers in ChatGPT’s accuracy. Younger adults (18–30) use it more but aren’t the most trusting, possibly because they’re more aware of the pitfalls of AI.
Practical implications: what this means for real-world use
If you’re building a conversational search tool, or you’re a curious consumer trying to decide when to rely on a chat vs. traditional search, here are some takeaways:
- Design for the human-like edge, but don’t forget accuracy. A friendly tone and smooth conversational flow can dramatically improve user experience, but that should come with explicit safeguards for factual accuracy.
- Use ChatGPT as a starting point, then verify. The study’s discussion of “starting with a chat and then fact-checking with traditional search” is a useful pattern. Treat the chat response as a draft or a curated overview, and consult primary sources for critical decisions (a minimal code sketch of this pattern follows this list).
- Offer transparent sourcing. Users want to know where information comes from. Where possible, include citations or a quick summary of sources so users can assess reliability themselves.
- Tailor experiences by user type. Recognize that two big groups exist: DUB and DUG. Different interfaces or prompts could be recommended based on a user’s typical habits, balancing personalization with a guardrail for accuracy.
- Mitigate overtrust with clear expectations. Set expectations about when the model might be confident but wrong, especially in sensitive domains. Include prompts that remind users to double-check important facts.
- Address aging and digital literacy gaps. The study points to potential vulnerability among older adults. Gentle, clear explanations and easily accessible checks can help reduce misinformation risk for these users.
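To make the “draft, then verify” pattern concrete, here is a minimal Python sketch. It is not the study’s method, just one way to wire up the guardrails from the list above; `ask_chatbot`, `search_sources`, and the topic tags are hypothetical placeholders for whatever chat backend, search API, and topic classifier a real product would use.

```python
# Minimal sketch of the "draft with a chatbot, verify with search" pattern.
# Everything here is hypothetical: ask_chatbot and search_sources stand in
# for whatever chat and search backends a real product would call.

from dataclasses import dataclass, field

# Topics where overtrust is most costly; answers touching these stay
# flagged as unverified until the user checks primary sources.
SENSITIVE_TOPICS = {"medical", "safety", "financial", "legal"}

@dataclass
class DraftAnswer:
    text: str                     # the chatbot's conversational draft
    topics: set[str]              # rough topic tags for the query
    citations: list[str] = field(default_factory=list)  # sources attached later
    verified: bool = False        # stays False until cross-checked

def ask_chatbot(query: str) -> DraftAnswer:
    """Hypothetical stand-in for a chat-completion call."""
    return DraftAnswer(text=f"Draft answer for: {query}", topics={"medical"})

def search_sources(query: str) -> list[str]:
    """Hypothetical stand-in for a traditional search lookup."""
    return ["https://example.org/primary-source"]

def answer_with_guardrails(query: str) -> DraftAnswer:
    draft = ask_chatbot(query)
    # Always attach sources so users can judge reliability themselves.
    draft.citations = search_sources(query)
    # Sensitive topics are never auto-verified; remind the user to check.
    if draft.topics & SENSITIVE_TOPICS:
        draft.text += "\n[Check the cited sources before acting on this.]"
    else:
        draft.verified = bool(draft.citations)
    return draft

if __name__ == "__main__":
    result = answer_with_guardrails("Is ibuprofen safe with this medication?")
    print(result.text)
    print("Sources:", result.citations, "| verified:", result.verified)
```

The design choice worth noting: the answer object carries an explicit `verified` flag alongside its citations, so the interface can visibly distinguish a conversational draft from a source-checked answer rather than letting a confident tone do the persuading.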
Real-world applications: how to use these ideas today
For everyday users:
- Treat ChatGPT as a first-pass tool for brainstorming, outlines, or quick summaries.
- Always double-check critical facts against reliable sources, especially if the topic matters for health, safety, or finances.
- If you value source transparency, actively look for tools that show citations or enable quick source checks.
For educators and students:
- Use conversational tools to draft study guides or summarize readings, then verify with textbooks or scholarly sources.
- Teach critical thinking about AI responses: how to spot potential hallucinations and how to verify claims.
For product designers and developers:
- Build source citations and easy-to-use verification checklists into the UI.
- Consider a two-mode approach: a chat mode for exploration and a traditional search mode for precision tasks.
- Provide customization knobs: allow users to adjust how much “personality” the tool should display, balancing engagement with conservatism in factual output (see the sketch after this list).
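As a hedged illustration of those last two ideas, the sketch below turns a two-mode setting and a “personality” knob into instructions for an underlying model. The config fields, thresholds, and prompt wording are assumptions for illustration, not any real product’s API.

```python
# Illustrative sketch of a two-mode design plus a persona "warmth" knob.
# Field names and prompt wording are assumptions, not a real product's API.

from dataclasses import dataclass

@dataclass
class AssistantConfig:
    mode: str = "chat"        # "chat" for exploration, "precision" for facts
    warmth: float = 0.5       # 0.0 = terse librarian, 1.0 = chatty tour guide
    require_citations: bool = True

def build_system_prompt(cfg: AssistantConfig) -> str:
    """Translate the UI knobs into instructions for the underlying model."""
    parts = []
    if cfg.mode == "precision":
        parts.append("Prioritize factual accuracy over conversational flow.")
        parts.append("Say 'I am not sure' rather than guessing.")
    else:
        parts.append("Keep the conversation natural and helpful.")
    if cfg.warmth < 0.3:
        parts.append("Use a neutral, concise tone.")
    elif cfg.warmth > 0.7:
        parts.append("Use a warm, friendly tone.")
    if cfg.require_citations:
        parts.append("Cite a source for every factual claim.")
    return " ".join(parts)

if __name__ == "__main__":
    # Precision mode with the persona dialed down, per the guidance above.
    print(build_system_prompt(AssistantConfig(mode="precision", warmth=0.2)))
```

Exposing warmth as an explicit, user-visible setting doubles as an overtrust mitigation: when accuracy matters, dialing the persona down is a one-step action rather than a redesign.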
For policymakers and researchers:
- Acknowledge and plan for the overtrust risk in citizen-facing AI systems.
- Prioritize user education about the limits of AI-generated information and the importance of cross-checking.
Demographics and what they tell us about adoption
The study’s nuanced look at age and gender reveals interesting patterns:
- Age matters more than gender in some respects. Adults aged 30–55 showed the strongest appetite for trade-offs—willing to sacrifice factuality for a better user experience. Those over 55 used ChatGPT less but didn’t always trust it more than younger folks, suggesting different risk perceptions and information needs across life stages.
- Gender didn’t show significant differences in trust or preferences in this study, but the researchers caution that more demographic variables (like education or profession) could reveal additional nuance.
A note on limitations and future directions
The authors were careful to frame this as exploratory research. They used a relatively small, US-based sample and single-item measures for trust and human-likeness to keep the survey concise. They also suggest that future work could develop multi-item scales for these constructs and test predictive models with larger, more diverse samples. The big-picture takeaway is to treat this as a starting point for understanding how real users think about personalization, trust, and the human-like feel of AI assistants.
Key takeaways for readers who want to improve their own prompting and usage
- Start with a clear goal, then choose your tool accordingly. If you need precision and quick source checks, Google-like search with citations might be best. If you want a guided exploration, brainstorming, or a conversational plan, ChatGPT can be a strong ally—just plan to verify critical facts.
- Use a two-step approach when accuracy matters. Ask ChatGPT for a draft answer or a summary, then cross-check with traditional sources. This leverages the best of both worlds: the efficiency of conversational AI and the rigor of direct sources.
- Demand transparency. When possible, prefer tools that reveal sources or offer a simple, one-click way to verify information.
- Be mindful of the chat’s “personality” settings. A warmer, more human-like tone can boost engagement, but it may also increase overtrust. If you’re dealing with important decisions, consider dialing down the persona and increasing the emphasis on accuracy.
- Educate yourself about hallucinations. Recognize that even confident-sounding AI can produce incorrect information. Treat uncertain or high-stakes answers as provisional until verified.
Conclusion: Embracing the human touch while staying grounded
The study paints a nuanced picture of modern search behavior. We’re not simply choosing between a chatty assistant and a meticulous search engine. We’re navigating a spectrum where personality, ease of use, and personalization influence our trust and our willingness to trade fact-checking for a better experience. The two user groups, DUB and DUG, show that both approaches can coexist, with each group finding value in different aspects of conversational interfaces.
For designers, researchers, and everyday users, the takeaway is clear: human-likeness and customization matter, but so do safeguards around factuality. The future of search could well be a hybrid, where a friendly conversational interface acts as a smart starting point that invites verification rather than discouraging it. By balancing engaging design with transparent sources and strong fact-checking mechanisms, we can enjoy the benefits of conversational search—without slipping into the trap of overtrust.
Key Takeaways
- Two user profiles dominate conversational search adoption: Daily Users of Both (DUB) who trust ChatGPT more and see it as human-like, and Daily Users of Google (DUG) who still rely on Google’s clarity and speed.
- Personalization and smooth interaction flow are the biggest attractions of ChatGPT across both groups.
- People willing to trade off factual accuracy for ease of use and a more human-like conversation are more common among DUB than DUG, highlighting a risk of overtrust in the more engaged group.
- Anthropomorphism boosts trust but can lead to underestimating inaccuracies; this is especially risky in sensitive domains.
- Many users prefer ChatGPT as a starting point for exploration and then verify facts with traditional sources, suggesting a practical two-step workflow.
- Design implications: emphasize transparency, source citations, and easy fact-checking, while preserving the engaging, human-like interaction that users crave.
- Demographics matter: age patterns show mid-life users may exhibit higher trust and greater openness to trade-offs, while older users may be more cautious.
- For safer adoption, promote a mindset of verification and provide tools that clearly indicate when information should be double-checked.
If you’re curious about how you fit into these patterns, try reflecting on your own search habits: Do you value a warm, chatty assistant even if it means occasionally checking facts later, or do you prefer a crisp, source-backed answer you can trust right away? Understanding your own balance can help you tailor your prompts and use AI tools in a way that boosts productivity while keeping accuracy in sight.