Using AI for More Than Coding
I know, I know, you’re probably exhausted hearing about AI. It’s everywhere, it’s in every conversation, every product announcement, every LinkedIn post. But I want to talk about it anyway, because my experience with it has gotten genuinely interesting and maybe a little uncomfortable, and I think it’s worth sharing.
I use AI every single day. It’s just part of the workflow now: Claude Code is open, Copilot is open, and I don’t really think twice about it anymore. That’s kind of the whole point of this post, though. Because the question I’ve started asking myself isn’t “is AI impressive?” It absolutely is. The question is whether it’s actually making me work better, or whether it just makes me feel faster. There’s a difference, and I’m not sure I’ve always been on the right side of it.
The Sparring Partner vs. the Answering Machine
The most useful framing I’ve landed on is thinking of AI as a sparring partner rather than an answering machine. There’s a big difference between those two things.
On the coding side, one of the best things Claude Code has done for me is kill blank-page paralysis. Starting a new project is often the hardest part: you don’t know where to begin, the scope is fuzzy, and it’s easy to just stall. Being able to sit down and have it walk me through building out a PRD, asking me “what problem are you trying to solve, who’s the target persona, what does success look like”, that kind of back-and-forth is genuinely valuable. It’s the rubber-duck effect; sometimes you just need something to talk at while you figure out what you actually think.
Beyond coding, it’s helped me move faster on the admin stuff that I honestly just don’t want to do. Meetings are all recorded and transcribed these days, so instead of watching a one-hour recording I’ll just grab the transcript, hand it to AI, and get the key points in two minutes. Summarising emails, rewording documents, drafting the skeleton of something I need to write: it all adds up.
But here’s where I started catching myself.
The Transformer Moment
My son was playing with one of his Transformers the other day and I was trying to figure out what character it was. Rather than just Googling it, which is what I would have done without thinking a year ago, I took a photo of the toy and sent it to ChatGPT. It came back almost instantly: that’s Boulder, he’s in this particular Transformers series, and it’s streaming on Paramount+ if you want to watch it.
Mind-blowing, right? But I caught myself afterwards and thought, hang on. I didn’t think at all there. I just outsourced a ten-second task to AI because it was slightly more convenient than Googling. And then I started noticing how often I was doing that. Recipes, random questions, things I used to just look up or figure out: I was defaulting to AI for all of it. And I was trusting it completely, not because I’d verified anything, but because it sounded confident and it was telling me what I wanted to hear.
I tried a recipe it gave me. It didn’t work. And the thing is, if I’d gone to an actual recipe site, a real person would have tested that recipe, iterated on it, and published it knowing it worked. ChatGPT just… generated something that sounded right.
That’s the double-edged sword. The confidence of the output has nothing to do with its accuracy.
The Kubernetes Example
This came up directly in a project I was working on. I was using Claude Code to set up some containers, and it led me down a whole rabbit hole of doing things a particular way, routing network traffic in a manner that, the more I looked at it, just didn’t feel right. So I pushed back. I said, look, I think there’s a simpler, more secure way to do this; we shouldn’t be routing things this way. And it basically agreed: yeah, actually you’re right, let’s look at the Kubernetes documentation again and approach the ingress rules differently.
The thing is, I was only able to challenge it because I’d spent the previous few years actually learning Docker and Kubernetes. If I’d just been leaning on AI from the start without building that foundation, I wouldn’t have had the domain knowledge to know the answer sounded wrong. I would have just trusted it and shipped something broken.
That’s the piece I keep coming back to: AI is brilliant at accelerating momentum, but you need the knowledge to validate what it’s doing. Otherwise you’re not going faster, you’re just getting to the wrong destination faster.
The Human in the Loop
I’m old school on this one, and people argue with me about it, but I don’t like autonomous agents just running free. I want a human in the loop. Whether that’s me approving what it’s done, reviewing the output, or just actually reading the code it generated, there has to be that checkpoint. AI is going to take shortcuts sometimes. It will give you an answer that sounds great and is wrong. It’s not all-knowing, and treating it like it is will bite you.
One thing I’ve started doing that I really like: using multiple AI tools against each other. Have one raise a GitHub issue, have another do the actual coding, and have a third do the security review. Let them argue it out a bit, and then I’ll go through and verify everything myself. It’s a good way to get different perspectives and catch things that a single model might confidently miss.
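That three-role setup can be sketched as a tiny pipeline. Everything here is a stub I’m inventing for illustration: `call_model` and `human_approves` are placeholders, not real APIs, and in practice each role would be a separate tool or CLI. The point is just the shape: distinct roles passing artifacts along, then a mandatory human checkpoint before anything counts as done.

```python
def call_model(role, prompt):
    """Placeholder standing in for whichever AI tool fills this role.
    Just echoes the request so the sketch is runnable."""
    return f"[{role}] response to: {prompt}"

def human_approves(artifact):
    """The checkpoint: a person reads everything before it ships.
    Stubbed to approve; in real use this is an interactive review."""
    print(f"REVIEW NEEDED:\n{artifact}")
    return True

def pipeline(task):
    # Each role only sees the previous role's output, so they can
    # genuinely disagree rather than rubber-stamping each other.
    issue = call_model("issue-writer", f"Draft a GitHub issue for: {task}")
    code = call_model("coder", f"Implement the issue:\n{issue}")
    review = call_model("security-reviewer", f"Review this change:\n{code}")
    # Nothing is merged until the human has read all three artifacts.
    if human_approves("\n\n".join([issue, code, review])):
        return code
    return None

result = pipeline("add ingress rate limiting")
```

The useful property of this shape is that the security reviewer never sees the coder’s reasoning, only the output, which is exactly where a single model tends to confidently miss its own mistakes.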
What This Means for My Blogs
This is something I’ve thought about specifically for how I use AI with this blog. My how-to posts take a long time: spin up the lab, run through everything, take screenshots, write it all up. That’s easily four hours. Proofreading on top of that is another hour or two that I honestly just dread.
So AI has become my proofreader. Does the syntax make sense? Is the vocabulary right? Is the punctuation correct? Go be my editor. But it is not the writer. The ideas, the voice, the actual experience I’m writing about: that’s still me, and I want to keep it that way. If AI is writing the post, what are you actually reading? At some point it’s just AI talking to AI, and the whole long-form content ecosystem falls apart.
I genuinely had that conversation recently: if everyone stops creating long-form content because AI can just answer any question instantly, what is AI going to train on next? But that’s a topic for another episode.
Where I’ve Landed
Trust the AI, but verify everything it does. Keep the human in the loop. Use it to accelerate what you’re already doing, not to replace the thinking behind it. And maybe the most personal one: don’t let it flatten your voice. Use it to sharpen ideas, not to generate them.
That’s kind of where I’m at. I’d love to know where you’re at with it too; I think everyone’s working through this in their own way right now, and there’s no single right answer. Let’s just make sure we’re using it in ways that actually make us better, rather than just making us faster.