Title: AI-Generated Content and Expert Decisions: How AI Shapes Choices on Specialized Topics
Table of Contents
- Introduction
- Why This Matters
- Domain-Specific Knowledge and Decision-Making
  - How domain know-how shifts reasoning
- AI-Generated Content vs Human-Written Content in Decision-Making
  - Trust, signals, and the illusion of expertise
- What the Experiment Found
  - Quantitative insights: opinion changes and confidence
  - Qualitative insights: thinking aloud and interviews
- Practical Implications for Today
- Linking Back to the Original Paper
- Key Takeaways
- Sources & Further Reading
Introduction
Decision-making on topics that require real expertise is hard enough in the best of times. When AI-generated content (AIGC) enters the mix, the dynamics get even trickier. A new lab-based study by Shangqian Li, Tianwa Chen, and Gianluca Demartini dives into exactly this intersection: how domain-specific knowledge interacts with AI-generated information during online decision-making. The study compares AI-generated content with carefully selected human-written sources across topics that range from general to chemistry-heavy, probing how people change their minds and how confident they feel as new information arrives. If you’ve ever wondered whether AI can truly help or subtly mislead when the stakes are professional, this research has something important to say. For context and further details, you can read the original paper here: The Impact of AI Generated Content on Decision Making for Topics Requiring Expertise.
Why This Matters
We’re living in an era where AI copilots are increasingly part of how we gather information, reason, and decide. The paper tackles questions many of us are facing in real life:
- When you’re not an expert, how much should you rely on AI-generated summaries or arguments versus human expert sources?
- Does labeling something as AI-generated change how we treat the information?
- How do signals like author, source credibility, and even the presence of social cues (likes, shares) color our judgments?
This research is timely because it speaks to both (a) the practical use of AI tools for decision support in professional domains and (b) the broader question of how AI shapes opinion formation in information-rich environments. The findings go beyond “is AI good or evil?” to offer concrete patterns about when AI helps and when it may introduce risk, especially for topics that require domain familiarity. It also ties into established theories in human-computer interaction, such as the Technology Acceptance Model (TAM) and the Elaboration Likelihood Model (ELM), offering a bridge between cognitive processing and real-world behavior.
Domain-Specific Knowledge and Decision-Making
The study used an explanatory sequential design: first, a quantitative survey with eight tasks, followed by qualitative, in-depth interviews. Half of the topics were general and easy to grasp, and the other half demanded substantial domain knowledge (think chemical engineering and chemistry-related issues). For decision support, participants saw five pieces of information per topic, each piece being either AI-generated (AIGC) or human-written (user-generated content, UGC, from vetted sources). The researchers also manipulated how information sources were presented: a source-sensitive pattern (where mislabeling could mislead) and a source-insensitive pattern (where the source signals were balanced).
This setup mirrors real-world information ecosystems where people juggle content from AI assistants and human experts, often without knowing who authored what. It also allowed the researchers to examine how people with different levels of domain knowledge react to AI-assisted information.
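To make that setup concrete, here is a minimal, purely illustrative sketch of how one task and its five information pieces could be encoded. The names (Task, InfoPiece, shown_source, and so on) are assumptions made for this post, not the authors' materials.

```python
# Hypothetical encoding of the study design described above.
# All names and values are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import List, Literal

@dataclass
class InfoPiece:
    text: str
    true_source: Literal["AIGC", "UGC"]    # who actually authored the piece
    shown_source: Literal["AIGC", "UGC"]   # label presented to participants

@dataclass
class Task:
    topic: str
    difficulty: Literal["general", "domain"]                     # half of the eight topics each
    pattern: Literal["source-sensitive", "source-insensitive"]   # labeling condition
    items: List[InfoPiece]                                        # five pieces per topic

# In the source-sensitive pattern, shown_source can differ from true_source,
# which is how mislabeled cues could mislead participants.
```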
How domain know-how shifts reasoning
Key takeaway: domain-specific knowledge shapes decision-making in two broad ways:
- Contextual stance: People with relevant background tend to ground their judgments in domain-specific reasoning. In-domain PhD students, for example, spent more time articulating their full reasoning, while those with less domain experience tended to latch onto conclusions they felt were supported by the text.
- Information evaluation: Those with solid background are more capable of independent fact-checking and evaluating the quality of arguments. Non-experts often rely more on the perceived credibility of sources and the overall impression of the text, which makes them more susceptible to persuasive cues—even if those cues are misleading or mislabeled.
Analytically, domain knowledge mattered for both how willing people were to adjust their views and how confident they felt afterward. The larger the knowledge gap between a person and a topic, the more their decision hinged on the information they were shown (and the less they engaged in independent reasoning). This dovetails with the intuitive idea that expertise acts like a mental “compass” in informational waters that can feel murky to non-experts.
AI-Generated Content vs Human-Written Content in Decision-Making
A central thread in the paper is the comparison between AI-generated content and human-written content when people make decisions on topics that require expertise.
Trust, signals, and the illusion of expertise
- Distinguishing AI from human authors proved hard. Many participants couldn’t reliably tell AI-generated content from human-written text, especially when signals were misleading or when the AI was framed to appear trustworthy.
- About half of the participants (roughly 54.55%) perceived AI-generated information as equivalent in utility to human-written content for decision-making. A notable minority (13.63%) expressed explicit distrust or negative attitudes toward AIGC, citing concerns like lack of pipeline transparency, potential misinformation, and extra effort needed to fact-check AI outputs.
- A sizable share (nearly 60%) used AI intensively for information retrieval, mainly to scan and understand controversial topics or to obtain quick explanations of unfamiliar concepts. Only a small fraction (about 9.5%) said they would specifically ask AI for decision-support input.
Practical takeaways here: AIGC is already a mainstream information tool for students. When signals convey trustworthiness (credible sources, proper citations, or a familiar author name), people tend to rely on it. When signals are murky or misleading, trust wanes, and with it the willingness to lean on the material.
What the Experiment Found
Quantitative insights: opinion changes and confidence
- A striking 87.27% of participants reported at least one change in opinion across the eight tasks. That shows how malleable online opinions can be when new information lands, even in a setting designed to isolate the effect of information content.
- The study used generalized linear mixed-effects models (GLMM) to tease apart what predicts opinion change and confidence shifts; a minimal illustrative sketch of such a model appears after this list. Several factors mattered:
- Education level: PhD participants and those still pursuing undergraduate studies were more likely to change opinions than participants at other stages (graduates not currently in a PhD track showed different patterns). This suggests that educational stage shapes how people engage with new evidence.
- Domain familiarity: Higher self-reported familiarity with a topic reduced the odds of changing opinion. In other words, the more you think you know about a subject, the less likely you are to flip your stance based on new information.
- Perceived usefulness of the information: When participants rated the decision-support material as highly helpful, they were more likely to change their minds.
- Source-sensitive vs source-insensitive patterns: When the design included misleading labeling about sources (e.g., AI-generated content presented as human-written or vice versa), participants were less likely to change their opinions. In other words, misleading source cues can anchor people and dampen openness to new information.
- Confidence dynamics: Confidence before seeing the sources tended to be negatively associated with changing opinions, while confidence after receiving the information (and having sources revealed) was positively associated with opinion change. In short, knowing the sources can nudge people to adjust their confidence in light of the evidence.
- Other factors: pattern (the information-source labeling scheme) and “midconfidence” (a measure around the middle of the confidence spectrum) were also statistically significant predictors of opinion change.
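For readers who want a feel for the modeling, here is a minimal sketch, assuming a hypothetical responses.csv with one row per participant-task observation, of how a comparable binomial GLMM could be fit in Python with statsmodels. The column names and model formula are illustrative, not the authors' actual specification.

```python
# Illustrative only: a binomial mixed-effects model in the spirit of the
# paper's GLMM analysis. File, columns, and formula are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Expected (hypothetical) columns: opinion_changed (0/1), education,
# familiarity, usefulness, pattern, pre_confidence, participant_id.
df = pd.read_csv("responses.csv")

# Fixed effects mirror the predictors discussed above; the variance
# component gives each participant a random intercept, accounting for
# repeated measures across the eight tasks.
model = BinomialBayesMixedGLM.from_formula(
    "opinion_changed ~ C(education) + familiarity + usefulness"
    " + C(pattern) + pre_confidence",
    {"participant": "0 + C(participant_id)"},
    df,
)
result = model.fit_vb()  # variational Bayes approximation to the GLMM fit
print(result.summary())
```

Other toolchains (for instance, lme4's glmer in R) would work just as well; the point is simply the structure: a binary outcome, a participant-level random effect, and the fixed-effect predictors listed above.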
Qualitative insights: thinking aloud and interviews
- The qualitative data echo the role of domain knowledge. In-domain participants (especially PhD students) tended to spend more time explaining their reasoning, sometimes out loud, highlighting the depth of domain-specific processing. Non-domain experts often anchored on what the text concluded rather than mapping out the step-by-step logic behind those conclusions.
- Attitudes toward AIGC varied. Some participants expressed skepticism about AI, particularly around the need to fact-check and the lack of transparency about how AI generates content. Yet even among those who disliked relying on AI for decisions, most still acknowledged its usefulness as a decision-support aid in at least some contexts.
- A striking finding: several participants couldn’t reliably tell whether content was AI-generated or human-written, especially when information was presented with convincing arguments and evidence. This underscores a key risk: AIGC’s ability to pose as credible, expert-level content can shape decisions even when the user cannot confidently verify its provenance.
- Information quality still dominates. Participants emphasized that trustworthy sources, objective facts, traceability, and external validation mattered most when deciding whether to rely on a given piece of information—whether AI-generated or human-authored. In many cases, people preferred content that aligned with their existing beliefs or appeared to come from an authoritative source.
Taken together, the quantitative and qualitative results paint a nuanced picture: AI-generated information can be as influential as human-authored content, but the impact hinges on domain knowledge, source signals, perceived usefulness, and how information is framed.
Practical Implications for Today
- For educators and researchers: When teaching or evaluating critical thinking around AI-assisted sources, emphasize the epistemology of sources. Teach students to interrogate not just what is said, but who says it, how it is framed, and what evidence backs it up. The study’s finding that domain familiarity lowers the likelihood of changing a stance suggests curricula should foreground robust evidence evaluation, especially for students who are new to an area and therefore lean more heavily on surface signals than seasoned experts do.
- For policy and platform design: The source-sensitive pattern results highlight a potential pitfall in user interfaces that mislabel AI-generated content. If a system tricks users into thinking content comes from a trusted human expert, it can unduly sway opinions. Transparent labeling and easy access to sourcing information can help users make more informed judgments.
- For AI developers and information providers: AIGC shows real promise as a knowledge facilitator, especially in domains that require heavy expertise. The key is to balance usefulness with guardrails: accurate science communication, explicit sourcing, and pathways for users to verify claims quickly. The Elaboration Likelihood Model (ELM) suggests designing AI outputs that encourage deeper, central-route processing: clear, evidence-backed explanations rather than shallow summaries that only appeal to peripheral cues.
- For individuals: Be aware of your own domain familiarity. If you’re entering a topic where you’re not an expert, approach AI-generated content with structured critical thinking: check sources, compare with human-authored expert material, and look for explicit evidence and methodology behind conclusions. And remember that confidence can be a double-edged sword: feeling confident after seeing AI-provided information doesn’t guarantee accuracy.
Linking Back to the Original Paper
If you want to dive deeper into the mechanics, the full experimental design, specific topic examples, and the statistical modeling details are all laid out in the original study: The Impact of AI Generated Content on Decision Making for Topics Requiring Expertise. The authors provide a thorough account of topics, information prompts used for AI generation, and the two-pattern condition that tested how source signals influence decision dynamics.
Key Takeaways
- Domain knowledge matters a lot. People with stronger domain-specific knowledge approach decision-support content more critically, and non-experts tend to rely more on surface signals and perceived conclusions.
- AI-generated content is increasingly trusted as a decision-support resource. AIGC can be as helpful as human-written material when properly sourced and clearly presented, but it also raises questions about misinformation risk if labeling is misleading or absent.
- Source framing and information quality are powerful levers. Mislabeling (source-sensitive patterns) reduced the likelihood of opinion change, while high-quality, well-supported AI content increased the odds that people would adjust their views.
- Education level and gender both played a role. PhD and undergraduate participants showed different propensities to change opinions than other groups, and gender differences appeared in how opinion shifts and confidence changes unfolded.
- The study highlights a path forward for AI-assisted decision-making: build systems that promote central-route processing by offering transparent reasoning, credible evidence, and verifiable sourcing, while implementing safeguards to curb misinformation.
Sources & Further Reading
- Original Research Paper: The Impact of AI Generated Content on Decision Making for Topics Requiring Expertise
- Authors: Shangqian Li, Tianwa Chen, Gianluca Demartini
In today’s information landscape, AI-generated content is not just a novelty—it’s a decision-making partner for a growing share of students, professionals, and everyday readers. The Li, Chen, and Demartini study gives us a nuanced map of when AI helps, when it could mislead, and how domain knowledge and signaling shape our judgments. As AI tooling continues to evolve, both users and designers will benefit from keeping these dynamics in mind: seek evidence, interrogate sources, and recognize that the most robust decisions usually come from a careful blend of domain expertise and thoughtful AI-assisted reasoning.