Title: AI Literacy in Action: How Students Learn with ChatGPT Through Everyday Practice

AI literacy is not a one-time skill; it's a learn-by-doing process. This post distills a year-long study of 36 undergrads interacting with ChatGPT, showing five use genres through which students learn, negotiate roles with the AI, and develop repair literacy as breakdowns become teachable moments.

Table of Contents
- Introduction
- Why This Matters
- Use Genres: Five Ways Students Domesticate ChatGPT
- Academic Workhorse
- Emotional Companion
- Metacognitive Partner
- Repair and Negotiation
- Trust Calibration
- Repair Literacy and Trust: Learning Through Breakdowns
- Implications for Teaching and Policy
- Key Takeaways
- Sources & Further Reading

Introduction
Artificial intelligence is not just a tool in a lab or a classroom—it's becoming a familiar companion in student life. The study behind “Learning to Live with AI: How Students Develop AI Literacy Through Naturalistic ChatGPT Interaction” digs into how real undergrads learn to work with generative AI through everyday practice, not just formal coursework. Over a year, 36 students generated 10,536 ChatGPT messages, revealing five distinct use genres that shape how they learn to collaborate with AI. This isn’t a one-and-done adoption; it’s a dynamic, ongoing relationship where students negotiate roles, test boundaries, and build what the researchers call “repair literacy.” If you’re curious about how AI literacy actually unfolds among learners, this study offers a view grounded in real conversations rather than classroom tutorials. For a complete read, you can check the original paper here: Learning to Live with AI: How Students Develop AI Literacy Through Naturalistic ChatGPT Interaction.

What makes this work compelling is its move from normative lists of AI literacy competencies to a portrait of literacy as sociomaterial practice. The researchers draw on domestication theory and a growing set of AI-literacy frameworks to show that students don’t just acquire skills; they negotiate relationships with a smart system, curate a personal repertoire of interaction styles, and continually reframe what counts as “useful” AI assistance in the context of their own courses and goals. In short, AI literacy emerges from doing, conversing, and repairing together with the machine.

Why This Matters
This research lands at a moment when classrooms are experimenting with AI as a learning partner rather than a mere gadget. The study’s practical punchlines are timely for three big reasons:

  • Realistic AI literacy needs: Schools have long asked for better AI literacy frameworks (knowledge, use, evaluation, and ethics). This work extends them by showing how students actually develop competencies like repair literacy and epistemic vigilance—skills that aren’t well captured by traditional prompts-and-solutions models.
  • Teacher-friendly implications: Rather than prescribing one-size-fits-all prompt techniques, educators can design environments that support diverse use genres (from efficient task completion to metacognitive planning) and foster students’ ability to shift genres as needed.
  • Responsible AI in practice: The research foregrounds relational AI literacy—how students manage trust, negotiate limits, and navigate ethical considerations in day-to-day use. That aligns with calls for more participatory, human-centered approaches to AI in education.

A real-world scenario where this matters today: imagine a university course adopting AI-enabled study sessions. Rather than forcing students to use AI only for quick answers, instructors could encourage them to experiment with multiple genres—using AI to draft outlines, test understanding through “co-struggling” dialogue, or practice metacognitive planning for exams. The approach would acknowledge students’ existing strengths (and their need to regulate reliance) while guiding them toward more deliberate, self-directed learning. The study’s findings suggest this kind of nuanced integration can enhance both academic performance and autonomy.

In relation to prior work, this study builds on AI-literacy frameworks that emphasize knowledge, use, evaluation, and ethics, but it also captures the social and emotional dimensions of human-AI interaction that earlier work often left implicit. It harmonizes ideas from self-regulated learning, trust calibration, and user collaboration with AI, offering a grounded, practice-based lens that complements theoretical models and intervention-based evaluations.

Use Genres: Five Ways Students Domesticate ChatGPT
A core insight from the paper is that students don’t just learn how to “use” ChatGPT; they develop a portfolio of interaction patterns—use genres—that fit different learning needs and contexts. The authors identify five major genres that emerge from students’ routines, rhythms, and social expectations within their academic lives. Think of these as distinct personas students wear when working with AI.

Academic Workhorse
- The most common use genre positions ChatGPT as a productivity engine: a fast, on-demand helper that can draft, summarize, or solve routine problems to meet deadlines.
- Practical flavor: students use highly targeted, direct prompts to retrieve solutions or perform routine tasks. They learn that precise prompts tend to yield clearer results.
- Implication: This is where students practice efficiency and procedural fluency—great for getting through homework and basic problem-solving, but it can risk shallow conceptual engagement if over-relied upon.

Emotional Companion
- ChatGPT becomes a low-risk, non-judgmental space to vent, rehearse, or ease anxiety around tough material or upcoming exams.
- Practical flavor: students anthropomorphize the AI, using phrases that signal social closeness and emotional validation.
- Implication: The tool supports motivation and emotional resilience, an often-overlooked dimension of learning that can influence long-term persistence.

Metacognitive Partner
- Here, the AI helps students plan, monitor, and reflect on their learning—co-designing study strategies and gauging understanding.
- Practical flavor: goal-setting prompts, self-assessment checks, and requests for clarifications that help students calibrate their own knowledge.
- Implication: This genre directly supports self-regulated learning, turning AI into a learning-design assistant rather than a content generator.

Repair and Negotiation
- When responses falter, students engage in targeted repair work: rephrasing, adding context, and testing alternative prompts to coax better outputs.
- Practical flavor: a suite of “continuation strategies” and iterative prompts to repair broken threads (for example, memory or output cut-offs tied to context-window limits).
- Implication: Repair literacy emerges here—a critical, often invisible form of labor where students master troubleshooting, maintain momentum, and learn about AI limitations.

Trust Calibration
- Students continuously evaluate when to trust AI outputs, challenge dubious claims, and selectively disengage when warranted.
- Practical flavor: explicit verification prompts, paraphrasing to ensure proper attribution, and boundary-setting around sensitive or high-stakes material.
- Implication: Epistemic vigilance and decision-making about AI reliance are at the heart of responsible use, aligning with the broader push for trustworthy AI in education.

Across the dataset, students often carried multiple genres at once and shifted among them as tasks changed. A single student might lean on the workhorse mode for code, switch to metacognitive planning for study schedules, and dip into emotional companionship during exam stress—all within the same semester. They also blended genres creatively, giving rise to hybrid patterns like “academic therapy” (emotion + learning support) or “co-struggling” (AI as a fellow learner).

For educators, the big takeaway is not to police AI use but to support students’ genre portfolio management. Encouraging students to articulate when they are in a particular genre, and designing activities that require them to switch genres strategically, could cultivate more sophisticated AI literacy and better learning outcomes.

Repair Literacy and Trust: Learning Through Breakdowns
One of the standout concepts in the paper is repair literacy—the ability to diagnose failures and recover learning momentum when AI goes off-script. The study highlights several practical repair behaviors:

  • Breakpoint recognition: students detect when outputs are too generic, wrong, or missing crucial context. They don’t shrug and move on; they diagnose and reframe.
  • Prompt refinement as craft: rephrasing, adding clarifying details, or directing the AI with more precise goals become normal, repeatable practices.
  • Continuation strategies: when outputs stall, students ask for “continue” or request missing sections, turning a lag into a productive extension (see the sketch after this list).
  • Emotional regulation: expressing frustration but continuing to engage with the tool. The AI’s apologetic responses and self-corrections can act as affective scaffolding, helping students stay motivated through rough patches.
  • Context-window awareness: longer sessions reveal issues with memory or recall, prompting strategies to keep the AI aligned with the student’s evolving task.
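
To make the continuation strategy concrete, here is a minimal Python sketch of the same pattern applied programmatically. It assumes the OpenAI Python SDK and a gpt-4o model; the loop, the finish_reason check, and the “please continue” prompt are illustrative choices, not part of the study.

```python
# Minimal sketch of a "continuation strategy": if the model stops because it
# hit its output limit, keep the partial answer in the thread and ask it to
# continue. Assumes the OpenAI Python SDK (`pip install openai`) and an API
# key in the environment; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask_with_continuation(question: str, max_rounds: int = 3) -> str:
    messages = [{"role": "user", "content": question}]
    answer_parts = []
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            max_tokens=512,  # deliberately small to make cut-offs visible
        )
        choice = response.choices[0]
        answer_parts.append(choice.message.content or "")
        if choice.finish_reason != "length":
            break  # the model finished on its own; no repair needed
        # Mirror what students do by hand: keep the partial output in the
        # conversation and explicitly ask the model to pick up where it stopped.
        messages.append({"role": "assistant", "content": choice.message.content})
        messages.append({"role": "user", "content": "Please continue from where you stopped."})
    return "".join(answer_parts)

if __name__ == "__main__":
    print(ask_with_continuation("Summarize the causes of World War I in detail."))
```

The same move students make conversationally (noticing a cut-off and nudging the AI forward) becomes, in code, a check on why generation stopped and an explicit follow-up request.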

The repair-work lens also raises an ethical question: students are performing a kind of algorithmic labor—training, guiding, and validating AI systems in real time. This is valuable learning, but it happens in an environment where students aren’t compensated and where the boundary between learning and training a commercial tool becomes blurry. Recognizing this labor helps educators design AI-enabled learning experiences that acknowledge students’ contributions and provide appropriate support or credit.

Trust dynamics emerge as a central theme. The study shows that trust is not a simple yes-or-no verdict on accuracy. It hinges on how the AI communicates about its limits. Apologies, admissions of uncertainty, and transparent explanations tend to sustain engagement, while confident errors without acknowledgement erode trust quickly. In practice, this suggests that AI systems used in education should be designed to model accountable behavior—acknowledging limits, showing sources, and inviting critical review—so students can practice epistemic self-regulation in a safe, instructive way.

Implications for Teaching and Policy
What does all this mean for classrooms, curricula, and institutional policy? The findings point to several actionable directions:

  • Design for relational AI literacy: Build learning activities that cultivate emotional regulation, repair strategies, and metacognitive reflection, not just prompt-writing finesse. Teach students to manage their AI relationship as part of their study skills.
  • Encourage genre portfolio development: Instead of a single “best prompt” approach, help students compose a repertoire of interaction patterns tailored to tasks, subjects, and personal goals. Create opportunities to document and share which genres work best for different kinds of problems.
  • Promote transparent AI use policies: Develop guidelines that reflect epistemic vigilance and trust calibration. Encourage students to paraphrase outputs, verify claims, and attribute AI contributions appropriately. This aligns with responsible AI principles like transparency, autonomy, and accountability.
  • Support participatory design: Involve students in shaping AI-enabled learning environments. Student input can help balance benefits with risks to autonomy, privacy, and integrity—exactly the tensions highlighted by the broader AI-education literature.
  • Extend to multiple platforms: The study focuses on one AI system, but real-world use often spans several tools. Future classroom design should account for cross-platform domestication, guiding students to compare, combine, and critically evaluate different AI assistants.

For those implementing AI in education, the concept of “genre portfolio management” can be a practical framing. Instead of teaching prompt engineering in isolation, educators can help students orchestrate when to deploy AI for efficiency, when to lean on it for reflective practice, and when to switch to a more human-centered approach that preserves agency and deep learning. The study also reinforces the idea that AI literacy is not just about knowing what AI can do; it’s about learning to live with AI in a way that preserves learning integrity and personal growth.

You can read more about these insights in the original paper, which grounds these ideas in extensive data from 10,536 messages across 1,631 chats involving 36 undergraduates. The authors explicitly argue for viewing AI literacy as sociomaterial practice—a dynamic dance between human learners and algorithmic partners. For those who want to dive deeper into the data and methodology, the paper is available here: Learning to Live with AI: How Students Develop AI Literacy Through Naturalistic ChatGPT Interaction.

Key Takeaways
- AI literacy develops through practice, not just instruction: Students cultivate five use genres that let them adapt AI to different academic tasks and personal learning styles.
- Repair literacy is central: Breakdown moments teach students how to diagnose, fix, and learn from AI errors—an essential but often overlooked skill.
- Trust is a negotiation, not a verdict: Students practice epistemic vigilance, asking for verification and using apologies or corrections as affordances to keep collaboration productive.
- Genre portfolio management shows sophisticated agency: Learners don’t default to one mode; they strategically switch among workhorse, metacognitive, emotional, and repair roles as needed.
- Implications go beyond prompts: Designing AI-enabled learning environments that support emotional regulation, ethical use, and transparent interactions is more productive than focusing solely on prompt quality.

Sources & Further Reading
- Original Research Paper: Learning to Live with AI: How Students Develop AI Literacy Through Naturalistic ChatGPT Interaction
- Authors: Tawfiq Ammari, Meilun Chen, S M Mehedi Zaman, Kiran Garimella

Notes on the research context
- Dataset: 10,536 messages across 1,631 chats from 36 undergraduates; analyzed with a mixed-methods approach combining qualitative coding and GPT-4o-assisted labeling (a hypothetical sketch of such a labeling step follows these notes).
- Core concepts: five use genres (academic workhorse, emotional companion, metacognitive partner, repair and negotiation, trust calibration); repair literacy; epistemic vigilance; domestication and use genres; relational AI literacy.
- Methodology highlights: grounding in Jarrahi’s “interviewing AI” approach, with an explicit emphasis on “AI in situated action”; a careful ethics-forward stance around data privacy and anonymization.
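
As an illustration of what GPT-4o-assisted labeling of chat messages might look like in practice, here is a hypothetical Python sketch. The prompt wording, label set, and single-message scope are assumptions for illustration; the authors’ actual codebook and pipeline may differ.

```python
# Hypothetical sketch of GPT-4o-assisted labeling: ask the model to tag a
# student message with one of the five use genres named in the paper. The
# prompt and output handling are illustrative, not the authors' pipeline.
from openai import OpenAI

GENRES = [
    "academic workhorse",
    "emotional companion",
    "metacognitive partner",
    "repair and negotiation",
    "trust calibration",
]

client = OpenAI()

def label_message(message_text: str) -> str:
    prompt = (
        "Classify the following student message to ChatGPT into exactly one "
        f"of these use genres: {', '.join(GENRES)}.\n"
        "Reply with the genre name only.\n\n"
        f"Message: {message_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep labels as stable as possible across runs
    )
    label = (response.choices[0].message.content or "").strip().lower()
    # If the model strays from the label set, route the message to a human
    # coder instead of forcing a genre.
    return label if label in GENRES else "needs human review"

if __name__ == "__main__":
    print(label_message("Can you quiz me on chapter 3 so I know what to review before the exam?"))
```

In a mixed-methods setup like the one described, model-assigned labels of this kind would typically be spot-checked against human qualitative coding rather than taken at face value.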

Closing thought
If AI literacy is a set of checklists, we miss the bigger picture: learning to live with AI means learning to navigate a living, evolving partnership. Students aren’t just manipulating a tool; they are co-constructing their own learning paths with an AI that learns—through them, with them, and sometimes because of them. The five use genres capture this nuanced dance, and repair literacy turns breakdowns into breakthroughs. For educators and students alike, the takeaway is clear: design and engage with AI in education as a relational craft, not just a technical task.
