2026-05-06, Ballroom
A year ago I wouldn't have imagined I'd be running my own fiber through my walls, self-hosting AI models, and managing it all with declarative infrastructure — at home. In the age of AI, having a consequence-free environment to build, break, and learn has become a real career advantage.
This talk walks through how I approached building a homelab from scratch: networking, hardware, DevOps principles applied at home, and running open source AI tools that directly sharpen my skills at an AI startup. Along the way, I reclaimed something that's easy to forget when everything lives in someone else's cloud — my family's data is ours again.
You don't need a big budget or a rack full of servers. You need a framework, some curiosity, and a willingness to pull some cable.
This is a story-driven talk built around four stages of my homelab journey, each mapping to skills that matter in real DevOps and AI engineering work.
Networking: Surveying router platforms from open source to prosumer to enterprise and picking something that made sense for home use. Then deciding to run OM4 fiber through my house — which, honestly, was intimidating. Cutting into walls and fishing cable is a different kind of scary than a bad deploy. But I learned about SFP/SFP+ transceivers, how fiber compares to traditional copper, and how to securely access everything when I'm not home. Worth it.
Hardware: Working with what you have (10+ year-old desktop PCs) while intentionally designing a new media/storage server — a GPU for local model inference, enough compute for self-hosted apps, and real storage architecture you never think about in a cloud-only world. We'll also be honest about cost — it's less than you think to get started!
"DevOps at home": How do you manage a homelab with real rigor without over-engineering it? Part of this is tool selection — NixOS vs Ansible vs Docker vs systemd, declarative vs traditional — but part of it is being honest about what you actually need at home versus what's appropriate for enterprise production. Not everything needs to be HA. Some things just need to work.
Self-hosted AI in practice: A practical look at what I'm actually running and how it feeds into my day job at an AI startup. Local LLMs, self-hosted tooling, and owning your family's data rather than handing it to a dozen SaaS platforms.
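For a taste of what "local LLM" looks like in practice, here is a minimal Python sketch that queries a self-hosted model over an Ollama-style HTTP API. The endpoint, model name, and payload shape are assumptions about a typical setup, not the exact stack from the talk:

```python
import json
import urllib.request

# Assumed Ollama-style endpoint on the homelab; adjust host/port/model to taste.
LOCAL_LLM_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for a single, non-streaming completion."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_LLM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its text response."""
    req = build_request(model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a model server actually running on the LAN):
# print(ask("llama3", "Summarize why self-hosting matters, in one sentence."))
```

The point isn't the ten lines of Python — it's that the prompt, the model, and the response never leave your house.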
The goal isn't to show off a build, or what router I bought. What matters most to me is giving listeners a framework for making intentional decisions that empower their learning — and inspiring the confidence to start a homelab of their own!
As a Platform Engineer, Matthew builds scalable, resilient systems and works to instill DevOps culture into the teams he embeds with (SLI, SLO, SLA, anyone?!). Previous roles have included DevOps Engineer, Linux Systems Administrator, and Site Reliability Engineer — oh, and professional Classical musician.
Originally from Columbus, OH, Matthew holds degrees from The Ohio State University and Carnegie Mellon University. He currently lives in Austin, TX, where he enjoys working with cloud native technologies in the age of AI.
Outside of work, you'll find him spending time with his family, training for a marathon, eating a whole-food plant-based diet, and talking or listening to all things Classical music.