How I Actually Use AI: A Researcher's Honest Account
The discourse around AI tends to oscillate between two poles: breathless enthusiasm about how it will revolutionize everything, and dire warnings about how it will destroy expertise and render us all incompetent. I find neither particularly useful.
I’m a psychology researcher working in academia, which means my days are a mix of research, teaching, administration, and service. I use AI regularly: multiple platforms, multiple times a day. But I’ve developed some habits that seem to differ from how most people talk about using these tools, and I think those differences matter. This isn’t a guide to prompt engineering or a list of clever use cases. It’s a description of how I’ve learned to make AI genuinely useful without outsourcing my judgment.
What I Do
I explicitly ask for critical feedback
This sounds obvious, but it’s not. AI’s default mode is agreeable. Ask it to review something and you’ll get praise with light suggestions. Ask it if your idea is good and it will tell you why it’s good.
I don’t want that. I’ve built “tell me what’s wrong with this” into my standard workflow, and I mean it. When I’m drafting a research plan, I want to know where the logic is weak. When I’m writing a cover letter, I want to know where I’m being vague or where my framing doesn’t land. This requires actually being willing to hear it—which, honestly, is harder than it sounds when you’re already stressed about a deadline.
The payoff is that I catch problems earlier. The cost is that sometimes I have to sit with uncomfortable feedback about work I thought was good. That’s the point. And frankly, I’d rather have my work torn apart by the model than by peer reviewers and colleagues; the AI won’t stop collaborating with me if I give it a terrible draft!
I use AI to learn, not just to produce
This is maybe the best way for students to use AI. I’m not trying to get answers to copy. I’m trying to build understanding I can transfer to new problems.
Here’s what this looks like in practice: I’ve been building my SQL skills. When I work through practice problems, I don’t ask AI for the solution when I get stuck. I ask it to explain why my approach isn’t working. What am I misunderstanding about window functions? Why does this subquery return unexpected results? The goal isn’t to solve this specific problem—it’s to build a mental model I can use next time.
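To make that concrete, here’s a toy version of a window-function confusion I might bring to AI (the table and column names are invented for illustration):

```sql
-- Toy schema: exam_scores(student_id, score) is made up.
-- Tempting but broken: WHERE is evaluated before window
-- functions, so score_rank doesn't exist yet when the
-- filter runs.
SELECT student_id, score,
       RANK() OVER (ORDER BY score DESC) AS score_rank
FROM exam_scores
WHERE score_rank <= 10;

-- The fix: compute the window function in a CTE, then
-- filter on the result in an outer query.
WITH ranked AS (
  SELECT student_id, score,
         RANK() OVER (ORDER BY score DESC) AS score_rank
  FROM exam_scores
)
SELECT student_id, score
FROM ranked
WHERE score_rank <= 10;
```

The fixed query is the least interesting part. The transferable piece is understanding why the first version fails: the logical order in which SQL evaluates clauses. That’s what I ask AI to explain.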
This is slower. Obviously. If I just wanted the answer, I could have it in seconds. But I’d learn nothing, and the next time I hit a similar problem, I’d be starting from scratch. I’m optimizing for understanding, not output. That compounds over time in ways that optimizing for output doesn’t.
I document my AI sessions systematically
Most people treat AI conversations as ephemeral. You ask a question, you get an answer, you move on. The conversation disappears into the void. I don’t do that. For complex projects—especially multi-session work like developing a course or refactoring a codebase—I create session summaries, development reports, and SOPs (standard operating procedures). This serves a few purposes: it forces me to consolidate what I learned, it gives me something to reference later, and it means I’m not starting from zero the next time I pick up the project. This is particularly valuable for work that spans weeks or months. Future me doesn’t remember what past me was thinking, but she’ll be grateful I took the extra time to document it.
I treat AI as a process scaffold, not a shortcut
AI accelerates work I already know how to do. It cannot make me competent in areas where I’m not.
This framing changes what I ask for. I don’t ask AI for answers; I ask for structure. Translation. Pressure-testing. I’ll draft something messy and ask AI to help me organize it. I’ll describe a research question in plain language and ask for help translating it into a statistical model. I’ll lay out my reasoning and ask AI to poke holes in it. The intellectual content is mine. AI helps me refine and communicate it. Most importantly, if I can’t evaluate whether the output is correct, I don’t use it. (I’ve developed a specific method for turning scattered expertise into structured outlines using AI’s voice feature, which I’ll write about separately.)
My Standard Workflow
Start with context, not questions. I bring constraints, background, what I’ve already tried. “Garbage in, garbage out” is a phrase many of us apply to data (cough, meta-analysis, cough), but it applies to prompting too. If the problem is still fuzzy, I ask AI to help clarify the question, not solve it. Instead of “help me design a survey,” I describe the construct, the population, the constraints, and what I’ve drafted so far.
Treat output as draft, not answer. AI produces proposals, starting points, first passes. Iteration is the norm. I push back, ask for revisions, add constraints. The quality of AI output depends heavily on how much thinking I’ve already done before I start prompting. My rule of thumb: I get the first word and the last word.
Use AI for structure and translation. The recurring pattern in my work is translation: conceptual research questions into statistical models, plain language into code, rough drafts into polished prose, policy language into implementable logic. As I said before, the intellectual content is mine. AI helps me organize it and move it between levels of abstraction. (A concrete sketch of the last of these follows this list.)
Verify and own the result. If I can’t explain or defend it, I don’t use it. Generated code is always a draft to be validated.
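Here’s what the “policy language into implementable logic” pattern can look like. This is a hypothetical sketch; the rule, tables, and columns are all invented. A rule like “participants qualify for the bonus if they completed both sessions and passed the attention check” might become:

```sql
-- Hypothetical eligibility rule translated into a query
-- I can inspect row by row and hand-verify against the
-- written policy. Schema is invented: participants(
-- participant_id, passed_attention_check) and sessions(
-- participant_id, session_number, completed).
SELECT p.participant_id
FROM participants AS p
JOIN sessions AS s
  ON s.participant_id = p.participant_id
WHERE p.passed_attention_check = 1
  AND s.completed = 1
GROUP BY p.participant_id
HAVING COUNT(DISTINCT s.session_number) = 2;
```

The value isn’t that AI wrote it faster than I could have; it’s that the translation is checkable. I can trace a few participants through the logic by hand and confirm it matches the policy’s intent.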
The Honest Limits
I want to be clear about what I don’t do. I don’t outsource hard thinking. I don’t treat AI output as neutral or objective. I don’t use AI for things I can’t verify. If AI is doing the thinking, I’m not learning—and I can’t vouch for the result. AI is a thinking partner, a translator, a process scaffold. It’s not a shortcut, and it’s not a replacement for expertise. This approach requires that I bring real knowledge to the table. If you’re hoping AI will make you competent in areas where you’re not, I’d gently suggest that’s not how it works.
Develop your own SOP. Don’t copy mine. The goal is intentional use that fits your work and your values. My practices will evolve as the tools change—yours should too.
