AI in My HomeLab
I’ve had a home lab in some form for about fifteen years now, and for most of that time the whole point was to break things in a safe space before breaking them somewhere that actually mattered. A few years ago it was purely a test bed: spin something up, watch it fall over, figure out why, repeat. Now it’s also where I ship personal projects on a weekend, where I run automations that message me on WhatsApp when my CPU spikes, and where I’m still figuring out exactly how much of the keyboard I’m willing to hand over to an AI. That last part is what this one is really about.
If you missed episode one, that was a broader chat about how I use AI day-to-day. Episode two was Claude vs Copilot. This time it’s more tangible.
What the Lab Actually Looks Like
Before I get into the AI side of things, it’s worth grounding this in what the lab is, because the hardware shapes what’s practical.
I don’t have a 42U rack in the garage. My wife would have something to say about that. What I do have is four Intel NUCs, a Synology NAS for storage, and some Ubiquiti networking gear. It’s a compact setup, deliberately so, but it gives me a proper playground: virtualisation, containers, storage, networking, the lot. Over the years the lab has run ESXi, Hyper-V, various iterations of Kubernetes, Docker stacks, monitoring with Prometheus and Grafana, and more broken configs than I care to count. That’s the point. You can’t get that kind of hands-on feel from a cloud trial account. There’s something about walking out to the garage, opening the cabinet, seeing the blinking lights, and physically swapping a cable that no SaaS dashboard will ever replicate. The old-school IT guy in me still loves that.
And I genuinely mean it when I say the home lab is the single biggest contributor to where I’ve landed in my career. My very first home lab was a gaming PC with a second hard drive thrown in, dual-booting Hyper-V so I could spin up a couple of VMs and figure out network segmentation. That humble setup is what gave me something real to talk about at work the next day. You can’t put a dollar value on that kind of experience.
What AI Has Actually Touched
Right, so here’s the honest inventory.
The project I keep coming back to as the clearest example is a MotoGP fantasy league. My wife and I are both big fans, and I wanted to build something custom rather than rely on whatever the official platform offered. I started writing it in Python, got it running in Docker, and the backend did what I needed it to do. But I’m not a UI developer. Cosplaying as one is probably the more accurate description. So I handed the front end to Claude with essentially no constraints: here’s the app, here’s what it does, go and do whatever you want with the interface. And it came back with something that was almost exactly what I’d had in my head, with a few tweaks after some back and forth. That was one of the first moments where I genuinely thought, okay, this is a different kind of tool.
Beyond the MotoGP app, AI has been responsible for a bunch of smaller scripts and side automations that I’m fairly confident wouldn’t exist otherwise. Setting up Prometheus and Grafana to monitor the lab, pointing AI at my stack, telling it what I wanted to watch, and having it put together the Docker container config and the scrape configs — that used to be a few hours of documentation archaeology. Now it’s a conversation. Setting up n8n automation workflows that watch for resource spikes and fire off a Telegram message, same story. The time-to-value on those kinds of tasks has changed completely.
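The spike-watch automation boils down to a small decision: only alert when the load is sustained, not on a single busy sample. Here's a minimal Python sketch of that logic — the threshold, window size, and function names are illustrative stand-ins for my actual n8n workflow settings, and the Telegram delivery step is only noted in a comment:

```python
import statistics

def should_alert(samples, threshold=85.0, sustained=3):
    """Alert only if the last `sustained` CPU samples all exceed `threshold`,
    so one busy moment doesn't page anyone. Values here are made up."""
    if len(samples) < sustained:
        return False
    return all(s > threshold for s in samples[-sustained:])

def alert_message(samples):
    # The real delivery step would POST this text to the Telegram Bot API
    # (https://api.telegram.org/bot<token>/sendMessage) -- omitted here.
    return (f"CPU spike: last sample {samples[-1]:.0f}%, "
            f"mean over window {statistics.mean(samples):.0f}%")
```

So `should_alert([40, 90, 92, 96])` fires, but `should_alert([40, 90, 70, 96])` doesn't — one dip in the window resets the clock, which is exactly the behaviour you want from a lab that occasionally gets busy on purpose.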
But I want to be specific about where it’s added value, because I think it’s easy to overstate this: AI has been brilliant at accelerating the stuff I already broadly understood, and it’s also been genuinely useful for pulling me past the first hour of friction on things I had no prior experience with. The MotoGP UI is a good example of the latter: I would have spent weeks on that and still produced something embarrassing. The Prometheus setup is a good example of the former: I knew what I wanted, I just didn’t want to spend Sunday morning buried in YAML docs.
Where I Keep My Hands on the Keyboard
Here’s the other side of it, and this is the part I feel more strongly about.
I don’t give AI direct access to my hypervisor. Not because I think it would do something malicious, but because we’ve all seen what happens when an AI agent has too much autonomy. There was literally a story in the news this week about an AI agent deleting a production database and then deleting the backups as well. That’s not a pathological edge case anymore; that’s just a Tuesday. Even in the home lab, where the stakes are comparatively low, I’m not interested in handing off that kind of control. I’ll give it log files. I’ll paste in a config and ask it to review for security issues. I’ll have it read through a 500-line log bundle that I genuinely don’t want to spend my Saturday on. But the troubleshooting process itself — the diagnosis, the decision-making — that stays with me.
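In practice, "give it log files" usually means trimming the bundle first so the interesting lines survive the context window. A minimal sketch of that triage step — the marker regex is an assumption you'd tune per log format, not a standard:

```python
import re

# Assumed set of "interesting" markers; adjust for your log format.
PATTERNS = re.compile(
    r"\b(error|fail(ed|ure)?|warn(ing)?|timeout|denied)\b", re.IGNORECASE
)

def trim_log(text, context=1):
    """Keep only lines matching PATTERNS, plus `context` lines either side,
    so a 500-line bundle becomes a few dozen lines worth reading."""
    lines = text.splitlines()
    keep = set()
    for i, line in enumerate(lines):
        if PATTERNS.search(line):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return "\n".join(lines[i] for i in sorted(keep))
```

The point isn't sophistication; it's that I decide what the model sees, rather than piping raw system access into it.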
And there’s a second reason beyond security: if I hand the diagnostic process entirely to AI, I stop learning.
That is the whole point of the home lab. A recent example: a USB NIC in one of my NUCs died. The AI read through the log bundle and got me to the right answer in about an hour, rather than what could easily have been half a day of poking around. But I still swapped the cable myself first. I still went through the physical layer checks. I still understood what was failing and why before I touched anything. AI shortened the path; it didn’t walk it for me. That distinction matters a lot when you want the experience to actually stick.
On the question of hosting AI locally, I’ve had a go at it: Ollama running a Gemma model on-prem at about 20 tokens per second, which is honestly not bad. And there is something genuinely appealing about having a local model that doesn’t phone home to OpenAI or Anthropic. If privacy is a priority for you, that’s a very legitimate reason to run something local. But my hardware isn’t really cut out for it at scale, and more importantly, the hosted models are moving so fast right now that the on-prem options are struggling to keep pace. Anthropic has released Claude Design and a security-focused capability just in the last week or two. The rate of change is almost hard to track, and I find myself genuinely wondering, for the first time in my career, whether the technology is outpacing my ability to keep up with it. That’s a strange feeling after fifteen years of being pretty on top of things.
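For anyone wanting to measure their own setup: Ollama's `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (in nanoseconds), so throughput is a one-liner. The sample numbers below are illustrative, not a benchmark of my hardware:

```python
def tokens_per_second(response: dict) -> float:
    """Generation throughput from an Ollama /api/generate response.
    `eval_count` is tokens generated; `eval_duration` is nanoseconds."""
    return response["eval_count"] / (response["eval_duration"] / 1e9)

# Illustrative numbers: 300 tokens in 15 seconds of eval time.
sample = {"eval_count": 300, "eval_duration": 15_000_000_000}
print(tokens_per_second(sample))  # → 20.0
```

Twenty tokens per second is perfectly usable for a chat-style session; it just isn't in the same league as the hosted frontier models.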
The Rule I Keep Coming Back To
There’s a line I’ve been coming back to throughout all of these episodes, and it’s this: the home lab is worth more to me when the work is mine.
AI is absolutely on the team. It’s a strong team member: it catches things I’d miss, it finishes things I’d procrastinate, and it has let me ship projects I had no realistic path to completing on my own. But it doesn’t hold the keyboard for the parts I signed up to learn. That’s the rule, and it applies differently depending on what I’m doing. Generating boilerplate CSS for a fantasy league app I have no interest in writing by hand? Hand it over. Diagnosing why a network card stopped transmitting? I want to be in that process, even if AI helps me get there faster.
The home lab is also, and I say this with some conviction, the best place to figure out your AI workflow before you trust it at work. The stakes are low enough that you can catch yourself being lazy without it costing anything. You can notice when you’re shipping something you don’t actually understand and course-correct before it matters. I’ve done that. I shipped a script once that I let AI write almost entirely, and it worked, but when something downstream broke I had no real idea where to start because the mental model wasn’t mine. I set a rule after that: if it’s going into the lab and I want to learn from it, I write the bones myself and use AI to sharpen the edges.
And that transfers directly to work. If you’ve figured out where the line is in a segmented home network where you can break things freely, you have a much better instinct for where it should sit when the stakes are real. That’s not a lesson you can learn from a blog post or a course. You have to play it out.
The home lab taught me virtualisation. It taught me Kubernetes. It taught me networking fundamentals in a way that no exam would have. And now it’s teaching me how to work alongside AI without losing the thread of what I’m actually trying to learn. I don’t think I’ll ever get rid of it. And honestly, swings and roundabouts, I might actually end up moving more stuff back to the home lab as these AI subscriptions keep creeping up in cost. Diminishing value is starting to be a conversation worth having. But that’s a topic for another episode.
For now, the physical kit’s still sitting in the garage, blinking away, waiting for me to break something new.
As always, keep on learning.