AI Literacy as Graduate Education Infrastructure

In every movie in The Fast & The Furious franchise, there’s a moment in the last third of the final race where a driver hits the NOS, a nitrous oxide injection that gives the engine a surge of extra power, letting the driver accelerate past the speedometer’s limits to win the race.

AI is the NOS of research.

What I mean by that: AI can take an already smart, creative, and productive scholar and supercharge them. But it’s not a tool you deploy when the car is in park or hasn’t warmed up. We still need to teach our students how to write, how to analyze data, how to think, how to argue. And then we can teach them how to use AI to boost those skills.

This is the central insight I keep returning to as our department’s Director of Graduate Studies. AI literacy has become as fundamental to graduate training in psychology as statistical software proficiency. Rather than treating AI as an optional add-on, we need to systematically integrate it throughout our curriculum and faculty development to prepare students for a competitive job market where AI-augmented productivity is becoming the baseline expectation.

The Case for Urgency

I spend a lot of time thinking about how we can help students find jobs after they complete their degrees. It’s not an especially fun task: the job market is a source of angst for nearly every student I talk to. And of course, you won’t be surprised to hear me say that we produce more Ph.D.s than we have academic roles available to fill.

What is new is the productivity that generative AI makes possible, far beyond what we thought students and early-career researchers were capable of. These tools allow researchers to code, write, and produce faster than ever before. As competition for tenure-track research positions grows fiercer, it seems inevitable to me that these coveted jobs will go to the scholars who can marry critical and innovative thinking with the supercharged speed of LLMs.

In my department, at least half of our students are actively training for non-academic careers, where the expectation for AI use is even stronger. Some companies now mandate AI use without offering guidelines for how to use it well, leaving untrained workers producing worse output more slowly.

Meanwhile, AI adoption is highly variable across labs, meaning a graduate student’s exposure can range from intensive to nonexistent. Few faculty have attempted to incorporate AI into their courses; many have opted to ban it altogether.

This variability isn’t fair to our students. It’s our responsibility to make sure every one of them graduates prepared for a market that increasingly treats AI-augmented productivity as the baseline.

Collaboration, Not Replacement

Let me tell you how I actually use these tools. I was an early adopter, playing around with ChatGPT in early 2023. I didn’t start using it in earnest until a year later, at which point I picked up a subscription. Today, I float between ChatGPT, Gemini, Cursor, NotebookLM, Elicit, v0, Grammarly, Granola, and—my personal favorite—Claude.

I treat these models like collaborators, not replacements. I once heard a colleague say, “I always get the first word and I always get the last word,” and I think that’s a great way to work with AI. I often start by turning off Agent mode or explicitly telling the model not to build or write on my behalf, at least not at the beginning. These models are my mirrors, bouncing my ideas back at me, asking me questions, organizing my thoughts, suggesting frameworks.

And it’s been very successful. Tasks that used to take hours or days take a fraction of the time. I feel less inhibited when writing because editing is easier. I love that I can dictate my stream-of-consciousness thoughts and end a session with a coherent outline. I enjoy writing more than I used to.

I’ve seen it benefit my students, too. Across the board, they’re producing better work, especially in the areas that used to be their biggest struggles. The student who wrote too concisely and lacked narrative flow? Their latest piece was clear and detailed. The student who needed more coding support? They’ve been rocking complex models independently.

This is not to say it’s always been a great experience. In fact, we’ve had some pretty awful output, and it’s almost always come when our own thinking or knowledge was lacking. We once tried writing a review paper without a clear, specific thesis: the result was overly broad and muddled.

That’s the productivity paradox: AI boosts output quality when used strategically. It exposes your weaknesses when you don’t know what you’re doing.

A Three-Pronged Curriculum Approach

The key here is thinking of AI as a tool, not a substitute. Learning how to use R is not a substitute for learning statistics. Learning PowerPoint is not a substitute for learning to give a presentation. Yet we expect competence in both the skill and the tool. AI should be no different.

I propose a three-pronged approach to AI literacy in graduate education.

Integration Throughout Existing Courses

The first step in boosting graduate education with AI is to embed these tools into our existing coursework.

All our statistics courses use R. I used to explicitly teach strategies for getting R help via StackOverflow. Now I teach the same lessons using Claude: writing good questions (now prompts), evaluating whether the answer actually fits your question, testing it against your own code and data, diagnosing failures, and iterating.

I now include homework questions that ask students to practice using AI for real-world tasks: explaining a collaborator’s code, diagnosing errors, and providing detailed comments for long-term usability.
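To make that concrete, here’s a hypothetical homework item of the kind I have in mind; the data and the bug are invented for illustration. Students paste the snippet into an LLM, ask it to explain the behavior, and then verify the suggested fix against their own data.

```r
# Hypothetical homework item: ask an LLM why the first call returns NA,
# then test its suggested fix yourself before trusting it.
df <- data.frame(
  score = c(10, 12, NA, 15),
  group = c("a", "a", "b", "b")
)

mean(df$score)                # NA: mean() propagates missing values by default
mean(df$score, na.rm = TRUE)  # the fix a good answer should propose; verify it
```

The exercise isn’t really about the bug. It’s about the loop: prompt, evaluate, test, iterate.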

The same principles can extend to writing courses, where students learn to use AI as a revision partner rather than a ghostwriter. In research methods courses, AI can help students develop literature search strategies, identify gaps in existing research, and refine their research questions. Even content courses in areas like social or personality psychology can incorporate AI for generating study materials, exploring theoretical frameworks, or analyzing case studies.

A Standalone Course: AI as Method and Subject

Beyond integration, I believe there’s value in a dedicated course that treats AI as both a research methodology and a subject of psychological inquiry.

On the methodology side, students would learn to use AI for semantic similarity analysis, text processing, and other computational approaches to behavioral data. Psychology has always been interested in what people say and write. AI opens new avenues for understanding these language-based behaviors at scale.
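As a toy sketch of the underlying idea (my illustration here, not course material): semantic similarity boils down to representing texts as vectors and comparing them. The word-count vectors below are a stand-in for the model-based embeddings you’d use in real work.

```r
# Toy sketch: cosine similarity between two open-ended survey responses,
# using word-count vectors as a stand-in for model embeddings.
responses <- c(
  "I felt anxious before the exam but calmer afterward",
  "The exam made me nervous at first, then I relaxed"
)

# Tokenize on non-letter characters after lowercasing
tokens <- lapply(tolower(responses), function(x) strsplit(x, "[^a-z]+")[[1]])
vocab  <- unique(unlist(tokens))

# Word-count vector for each response over the shared vocabulary
vecs <- sapply(tokens, function(t) as.numeric(table(factor(t, levels = vocab))))

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine(vecs[, 1], vecs[, 2])  # closer to 1 means more overlap
```

The arithmetic is trivial; the point is that once language becomes vectors, questions about meaning become questions we can quantify at scale.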

On the subject side, students would explore the psychology of AI and cognition: How do people interact with AI systems? When does AI help versus hinder learning? What does AI reveal about human reasoning and decision-making? Students would also learn to critically evaluate the growing body of AI impact studies—a skill that’s increasingly important as claims about AI proliferate faster than the evidence.

Assessment That Maintains Standards

Here’s what I want to be clear about: integrating AI doesn’t mean lowering our standards. Our evaluation criteria remain the same: clear thinking, good arguments, effective writing. The tool changes; the expectations don’t.

That said, I believe in strategic “no AI” moments. Oral exams, for instance, reveal whether a student truly understands the material or has been relying on AI to paper over gaps in their knowledge. (Spoiler alert for anyone thinking of joining our department: this is the final exam for my first-year statistics course!) These aren’t punitive—they’re diagnostic. They help students recognize where they need to develop their own skills.

The key is explaining the reasoning. When students understand why certain assessments prohibit AI—to build foundational competencies they’ll need when the AI isn’t available or when they need to evaluate AI output—they’re more likely to embrace the constraint as part of their development.

The Faculty Development Challenge

Here’s the hard truth: we can’t solve this through coursework alone.

The apprenticeship model that defines graduate education means faculty are the primary teachers of methods. Students learn how to conduct research by working alongside their advisors, absorbing practices and approaches through observation and collaboration. If faculty aren’t using AI effectively, students won’t learn to either—regardless of what we teach in class.

The curriculum is already packed. We can’t simply add more courses. What we can do is help faculty develop their own AI literacy so they can model effective use in their labs and mentoring relationships.

This doesn’t require everyone to become an AI expert. It requires curiosity. Experiment with these tools. Share what works and what doesn’t. Be open about your own learning process. Students benefit enormously from seeing faculty grapple with new technologies rather than pretending to have all the answers.

AI as Methodological Innovation

Beyond productivity gains, AI opens genuinely new research possibilities for psychology.

I’ve seen students use AI for semantic similarity analysis in ways that would have been prohibitively time-consuming just a few years ago. Text-based behavioral data—social media posts, therapy transcripts, open-ended survey responses—can now be analyzed with sophisticated tools that capture meaning and nuance.

This isn’t just about efficiency. It’s about asking questions we couldn’t ask before. The intersection of AI and psychological research is fertile ground for methodological innovation, and our students should be prepared to explore it.

Infrastructure, Not Trend

AI literacy isn’t a buzzword—it’s infrastructure.

Just as we don’t debate whether students should learn statistics or word processing, we shouldn’t debate whether they should learn to work with AI. The question is how to integrate it thoughtfully throughout our training programs so students graduate as skilled practitioners who can leverage AI to enhance their already strong research, analytical, and writing abilities.

The NOS only works when the engine is already running. Our job is to build strong engines and then teach our students how to use every tool at their disposal to win the race.