How Novice Programmers View ChatGPT: Performance, Risk, and Decision-Making in Coding Education

This post summarizes a study of how first-year programmers perceive ChatGPT in coding tasks, examining performance expectancy, risk appraisal, and decision-making, and how these beliefs shape their intention to use the tool, along with practical implications for programming education.

Introduction

If you’ve ever taught a beginner to code or watched a student wrestle with their first programming language, you know how pivotal the right set of tools can be. The big question is: how do novice programmers actually perceive AI helpers like ChatGPT when they’re tackling real programming tasks? A new study dives into this, examining how beginners’ beliefs about ChatGPT’s performance, risks, and their own decision-making shape their intention to use the tool in programming tasks. The work, titled Assessing novice programmers’ perception of ChatGPT: performance, risk, decision-making, and intentions, uses a PLS-SEM approach with 413 first-year undergraduates to unpack these dynamics. For readers curious about the practical impact of AI in education, this is one to watch. You can check the original paper here: https://arxiv.org/abs/2601.06044

What the study gives us is a structured look at four key ideas: performance expectancy (PE: does ChatGPT improve programming performance?), risk-reward appraisal (RRA: are the benefits worth the potential downsides?), decision-making (DM: how does ChatGPT influence the way students choose among options while solving problems?), and intention to use (IU: will they actually rely on it in the future?). The results aren’t just academic; they hint at how AI tools might be integrated into intro programming courses in ways that support learning while keeping expectations balanced.
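
To make the modeling concrete, here is a minimal sketch of the study's three structural paths. It is not the authors' code: real PLS-SEM estimates latent constructs from multi-item survey scales with an iterative weighting scheme, whereas this illustration simply generates synthetic construct scores wired to path strengths near the paper's reported coefficients and recovers them with standardized least squares.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 413  # matches the study's sample of first-year undergraduates

# Synthetic construct scores. In the study these are latent variables measured
# by multi-item scales; the noise terms are scaled so each construct has
# roughly unit variance, keeping the recovered coefficients comparable.
pe = rng.standard_normal(n)                                     # performance expectancy
rra = 0.3 * pe + rng.standard_normal(n)                         # risk-reward appraisal
dm = 0.533 * pe + 0.247 * rra + 0.755 * rng.standard_normal(n)  # decision-making
iu = 0.662 * dm + 0.75 * rng.standard_normal(n)                 # intention to use

def standardized_paths(y, predictors):
    """Least-squares coefficients after z-scoring everything: a rough
    analogue of PLS-SEM structural path weights, not the real procedure."""
    z = lambda a: (a - a.mean()) / a.std()
    X = np.column_stack([z(p) for p in predictors])
    coef, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return coef

print("PE, RRA -> DM:", standardized_paths(dm, [pe, rra]))  # near (0.533, 0.247)
print("DM -> IU:     ", standardized_paths(iu, [dm]))       # near 0.662
```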

In the rest of this post, we’ll translate the findings into relatable takeaways, explain what they mean for students and educators, and connect the dots to the broader landscape of AI in education. And yes, we’ll keep things practical: what to try in classrooms today, what to watch for, and how this builds on what we already know about AI-assisted learning.


Why This Matters

This research arrives at a moment when AI copilots and chat-based assistants are becoming everyday tools in education and coding practice. The study’s focus on novice programmers—students who are just learning basics like syntax, simple program design, and debugging—matters because this group is precisely where AI can either accelerate growth or foster overreliance if not integrated thoughtfully.

From a real-world perspective, imagine a first-year programming course where students routinely turn to ChatGPT for explanations, debugging tips, and quick code examples. If educators understand how PE, RRA, and DM interplay with IU, they can design assessments, scaffolds, and guidelines that help students use AI tools responsibly—capitalizing on the benefits (faster learning, clearer explanations, quicker feedback) while mitigating risk (misconceptions, brittle problem-solving, or overdependence).

This study also builds on a broader thread of AI-in-education research. Prior work has highlighted both the potential of large language models to aid understanding and their challenges: hallucinations, accuracy concerns, integrity issues, and the need for trustworthy usage patterns. By measuring how novice programmers actually perceive ChatGPT and how those perceptions translate into intention to use, the paper adds a concrete behavioral lens to the conversation. It complements theoretical models like UTAUT (for performance expectancy) and the theory of planned behavior (for intention formation) with education-specific insights.


Key Findings

Performance Expectancy and Decision-Making

Performance expectancy (PE) is the belief that using ChatGPT will improve your programming performance. In this study, PE was strongly tied to better decision-making (DM) during programming tasks. Put simply: when beginners expect that ChatGPT can help them code more effectively, they’re more likely to rely on it as part of the problem-solving process and to make more confident, informed choices.

Key takeaways to apply in practice:
- Emphasize tangible benefits in teaching: when introducing AI tools, pair them with concrete demonstrations of how ChatGPT can speed up debugging, clarify complex concepts, or reveal alternative approaches to a problem.
- Use guided exploration: allow students to use ChatGPT for a small task, then compare the AI-assisted approach with a manual solution. This makes the perceived benefits concrete and helps calibrate DM.
- Expect improved DM: higher PE predicted better DM in programming tasks, meaning students who trust the tool tend to deliberate more effectively, assess options with greater clarity, and reach higher-quality solutions.

In the paper’s results, the path from PE to DM was significant (original sample coefficient about 0.533, p < 0.001). In other words, those who believed ChatGPT could meaningfully boost their programming performance tended to engage in more effective decision-making when solving problems. This aligns with broader tech-acceptance research: when people anticipate real benefits, their usage decisions and problem-solving approaches become more purposeful.

Practical implication: teachers can frame AI tooling as a deliberate cognitive aid—something that supports, not replaces, thinking. Provide prompts or templates that help students ask better questions, verify code, and weigh trade-offs, reinforcing the idea that PE translates into smarter DM.
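
One way to make that concrete is a structured prompt template. The helper below is a hypothetical illustration (the function name and its fields are ours, not the paper's) that makes students articulate expected versus observed behavior, and what they already tried, before asking ChatGPT for help:

```python
def debugging_prompt(code: str, expected: str, observed: str, attempted: str) -> str:
    """Build a structured debugging prompt so the student must articulate
    the problem (and their own effort) before handing it to ChatGPT."""
    return (
        "I am a beginner programmer. Here is my code:\n"
        f"--- code ---\n{code}\n--- end code ---\n"
        f"Expected behavior: {expected}\n"
        f"Observed behavior: {observed}\n"
        f"What I already tried: {attempted}\n"
        "Please explain the likely cause before suggesting a fix, and point "
        "out anything in your answer that I should verify myself."
    )

# Example usage with a classic beginner confusion about range():
print(debugging_prompt(
    code="total = sum(range(10))",
    expected="total should be 55 (the sum of 1 through 10)",
    observed="total is 45",
    attempted="printed list(range(10)) and saw it stops at 9",
))
```

The sample prompt bakes in a common off-by-one misconception, so a class can also discuss what a good AI answer ought to address.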

Risk-Reward Appraisals and Decision-Making

No tool is perfect, and beginners are especially sensitive to risks and benefits as they decide whether to lean on assistance like ChatGPT. Risk-reward appraisal (RRA) captures this calculation: do the potential gains (faster learning, new techniques, quicker feedback) outweigh the risks (inaccurate code, overreliance, misconceptions)? The study found that a favorable RRA is associated with enhanced DM in programming tasks.

What this means in classrooms:
- Teach risk awareness explicitly: help students identify common pitfalls of AI-assisted coding, such as misleading explanations, code that looks correct but is inefficient (see the sketch after this list), or dependence on the tool for routine tasks.
- Balance is key: encourage students to use ChatGPT for guidance but require them to justify solutions without AI help in a follow-up, ensuring they can translate AI-provided ideas into their own reasoning.
- Encourage critical evaluation: have students compare AI-suggested solutions with reference materials or peer solutions, highlighting where ChatGPT’s suggestions align with good practice and where they don’t.
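
To make the "looks correct but is inefficient" pitfall tangible, here is a hypothetical exercise artifact (ours, not the study's): both functions pass the same small test, but one hides a quadratic cost that a careful reviewer should catch.

```python
def join_lines_naive(lines):
    """An AI answer might look like this: correct output, but repeated
    string += can copy the accumulator each time, O(n^2) in the worst case."""
    out = ""
    for line in lines:
        out += line + "\n"
    return out

def join_lines_better(lines):
    """Same result in linear time using str.join."""
    return "\n".join(lines) + "\n" if lines else ""

sample = ["alpha", "beta", "gamma"]
assert join_lines_naive(sample) == join_lines_better(sample)
print("Both agree on small inputs; only profiling or review reveals the cost gap.")
```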

In numbers, H2 (RRA → DM) was significant with a coefficient around 0.247 (p < 0.001). While smaller than the PE effect, it’s a meaningful driver: if students feel the payoff is worth the risk, they’re more capable of using the tool to navigate programming challenges confidently.

Practical implication: design course activities that surface the costs and benefits of AI use—not to scare students away, but to cultivate disciplined, reflective usage. For example, a “ChatGPT in practice” assignment could require students to present both an AI-generated solution and a justification analyzing its strengths and weaknesses.

Decision-Making and Intentions to Use ChatGPT

The third pillar ties the DM experience to actual behavioral intention (IU) to continue using ChatGPT. The study found a robust link: a positive DM experience with ChatGPT translates into a higher intention to use it for programming tasks. In other words, when ChatGPT helps students make better coding decisions and they trust the results, they’re more likely to integrate the tool into their ongoing workflow.

Supporting theory here: this aligns with the theory of planned behavior, where positive attitudes toward a behavior (using ChatGPT for DM) predict stronger intentions to perform that behavior. In practical terms, it suggests a feedback loop: good DM with the tool reinforces trust and reliance, which in turn sustains use.

Key practical ideas:
- Build trust through reliability and transparency: encourage students to phrase prompts clearly, explain how ChatGPT arrived at a suggestion, and annotate limitations of the output (a small verification harness is sketched after this list).
- Normalize AI-assisted workflows: weave AI use into regular coursework so it becomes a natural, integrated part of problem-solving rather than a novelty. The more students see ChatGPT consistently aiding their DM, the more likely they are to intend continued use.
- Scaffold the learning path: sequence activities so students experience initial successes with AI-supported DM, followed by tasks that require independent reasoning, ensuring balance.
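
One low-friction way to build that habit is a tiny harness students run against every AI-suggested function before adopting it; below is a minimal sketch (our own illustration, with a deliberately buggy "AI suggestion"):

```python
def check_suggestion(func, cases):
    """Run an AI-suggested function against student-written test cases
    and report mismatches instead of silently trusting the output."""
    failures = []
    for args, expected in cases:
        got = func(*args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

# A deliberately buggy "AI suggestion" for the median of a list:
def ai_median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # wrong for even-length lists (no averaging)

cases = [
    (([1, 3, 2],), 2),       # odd length: passes
    (([1, 2, 3, 4],), 2.5),  # even length: fails, exposing the bug
]
for args, expected, got in check_suggestion(ai_median, cases):
    print(f"FAIL: ai_median{args} returned {got}, expected {expected}")
```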

In the study, the DM → IU path had a sizable coefficient (about 0.662, p < 0.001), underscoring how a positive DM stance strongly predicts willingness to keep using ChatGPT for programming tasks.

Practical Implications for Programming Education

Beyond the specific hypothesis tests, the study’s design and results carry actionable implications for educators designing intro programming courses or AI-augmented learning experiences:

  • Measuring the right levers: PE, RRA, and DM are not just abstract concepts; they map onto concrete teaching interventions. For example, a pre-course framing that highlights ChatGPT’s capabilities (PE) and a post-activity reflection on what ChatGPT did well and where it fell short (RRA) can shape students’ DM and IU in productive ways.
  • Validity and trust: the research used robust validity checks (e.g., AVE, composite reliability, Fornell-Larcker discriminant validity, HTMT ratios) to verify that the constructs captured distinct facets of perception; a sketch computing two of these metrics follows this list. In practice, educators should likewise treat perceptions of AI tools as multifaceted: trust, perceived reliability, perceived value, and risk awareness all matter.
  • Demographics and context: the sample skewed toward younger, predominantly male students in public institutions, with Windows as the primary OS and varying levels of prior AI exposure. While this provides useful insight, educators should be mindful of diversity in their own classes and consider how different contexts might shape PE, RRA, and DM.
  • Use as a learning aid, not a shortcut: the paper emphasizes how AI can support novice learners, especially in areas like explaining concepts, assisting with debugging, and offering guidance on code design. The practical push is to use these tools to complement traditional instruction—office hours, pair programming, and code reviews—so students develop both AI literacy and solid foundational skills.
  • Link to the broader AI education literature: the authors discuss a wide range of related research on AI’s role in education, concerns about integrity, and the balance between automation and human learning. For instructors, it’s worth following up with readings on AI ethics, responsible AI use in classrooms, and best practices for integrating large language models into curricula.
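
For readers unfamiliar with the metrics named above, the sketch below computes AVE and composite reliability from standardized indicator loadings; the loadings shown are placeholders for illustration, not values from the paper.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: the mean squared standardized loading.
    Rule of thumb: AVE > 0.5 suggests adequate convergent validity."""
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardized indicators.
    Rule of thumb: CR > 0.7 suggests internal consistency."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1 - lam ** 2)))

pe_loadings = [0.82, 0.79, 0.85, 0.77]  # placeholder loadings for illustration
print(f"AVE = {ave(pe_loadings):.3f}, CR = {composite_reliability(pe_loadings):.3f}")
```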

If you’re curious how these results map onto a real course, the study’s discussion and conclusions anchor their claims in the data, yet they also invite educators to tailor the approach to their students’ needs. And for anyone who wants to dig deeper, you can revisit the original paper for the full methodology and statistical details: Assessing novice programmers’ perception of ChatGPT: performance, risk, decision-making, and intentions.


Key Takeaways

  • Novice programmers who expect ChatGPT to improve their programming performance tend to make better, more confident decisions during coding tasks.
  • A favorable risk-reward assessment of using ChatGPT is linked to more effective decision-making, indicating that when students believe the benefits outweigh the risks, they engage more productively with AI guidance.
  • Positive decision-making experiences with ChatGPT strongly predict the intention to continue using it for programming tasks, suggesting a self-reinforcing adoption pattern in early learners.
  • The study validates a model explaining a substantial share of variation in DM (47.4%) and IU (43.9%) among first-year programmers, reinforcing the potential value of AI tools in introductory education when used thoughtfully.
  • For educators, the takeaway is not to push AI as a crutch but to design learning experiences that cultivate PE, manage RRA, and support DM in ways that build trust and sustainable use.

Practical applications you can try now:
- In class, pair AI-assisted activities with explicit reflection prompts that surface benefits, risks, and decision-making processes.
- Create rubrics that reward transparent reasoning alongside AI-generated solutions.
- Provide prompts or templates to help students ask better questions and critique AI outputs, fostering stronger DM skills.

This balanced approach helps harness the strengths of AI copilots like ChatGPT while maintaining the core goal of programming education: developing independent, capable learners who can reason through problems, with or without AI, and with a critical eye for quality and reliability.


Sources & Further Reading

If you want to explore a broader landscape of AI in education and programming, the references in the study point to multiple related lines of inquiry, from safety and ethics to tool-assisted learning and cognitive factors in technology adoption. The conversation about how beginners interact with AI tools is ongoing, but this work provides a solid, data-driven snapshot of what happens when novices start using ChatGPT as part of their programming journey.
