
Complete Homelab Tour 2026


Homelab 2026

Hardware

Power Protection

| Device | Model | Protected Equipment | Capacity |
|--------|-------|---------------------|----------|
| UPS #1 | CyberPower | Dell R720 | 1500VA |
| UPS #2 | CyberPower | Mini PCs + Network | 1000VA |

Network Stack

| Device | Model | Specs | Purpose |
|--------|-------|-------|---------|
| ONT | Huawei | 1GbE | ISP Gateway |
| Firewall | XCY X44 | 8× 1GbE | pfSense Router |
| WiFi | TP-Link AX3000 | WiFi 6 | Wireless AP |
| Switch | TP-Link | 24-port | Core Switch |

Compute

On-Premise Hardware

| Device | CPU | RAM | Storage | Purpose |
|--------|-----|-----|---------|---------|
| Beelink GTi 13 | i9-13900H (14C/20T) | 64GB DDR5 | 2× 2TB NVMe | Proxmox |
| OptiPlex #1 | i5-6500T (4C/4T) | 16GB DDR4 | 128GB NVMe / 2TB | Proxmox |
| OptiPlex #2 | i5-6500T (4C/4T) | 16GB DDR4 | 128GB NVMe / 2TB | Proxmox |
| Dell R720 | 2× E5-2697v2 (24C/48T) | 192GB ECC | 4× 960GB SSD | Backup Server |
| Synology DS223+ | ARM RTD1619B | 2GB | 2× 2TB RAID1 | NAS/Media |
Cloud Infrastructure

| Provider | Instance | Specs | Location | Purpose |
|----------|----------|-------|----------|---------|
| Hetzner | CX22 | 4vCPU/8GB/80GB | Germany | Off-site Backup |
| Oracle | Ampere A1 | 4vCPU/24GB/200GB | USA | Docker Test |

Network Architecture

Three dedicated physical interfaces on pfSense:

WAN Interface → Orange ISP (Bridge Mode)
LAN Interface → Homelab Network
WiFi Interface → Guest/IoT Isolation
Warning

WiFi clients are firewalled from homelab services, except whitelisted ones like Jellyfin.

Tailscale creates a flat network across all locations — homelab, Hetzner, and Oracle all appear on the same mesh.

Homelab Network Topology

Tip

Both VPS instances double as Tailscale exit nodes, so I can route traffic through EU (Hetzner) or US (Oracle) for geo-restricted content or lower latency to specific regions.

Infrastructure

pfSense

A fanless mini PC from AliExpress (~€200) running pfSense for 3+ years: 👉 XCY X44 on AliExpress

pfSense services dashboard

Tailscale Subnet Router exposes the entire homelab to cloud VPS without installing Tailscale on every device. Also the solution to CGNAT — when your ISP doesn’t give you a public IP, this gets you in. Setup guide →
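The subnet-router side of this is just two `tailscale up` invocations; the LAN range below is an assumed example, and advertised routes still have to be approved in the Tailscale admin console:

```shell
# On the subnet router (the pfSense box): advertise the homelab LAN to the tailnet
tailscale up --advertise-routes=192.168.1.0/24

# On the cloud VPS nodes: accept routes advertised by other nodes
tailscale up --accept-routes
```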

Unbound DNS runs as a local recursive resolver with domain overrides for *.k8s.merox.dev pointing to K8s-Gateway.
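In raw Unbound terms, a pfSense domain override is roughly a forward-zone; the gateway address below is an assumed placeholder:

```
# unbound.conf sketch: forward *.k8s.merox.dev queries to the in-cluster gateway
forward-zone:
    name: "k8s.merox.dev."
    forward-addr: 192.168.1.60   # assumed K8s-Gateway address
```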

Telegraf pushes system metrics to Grafana.

Firewall rules: WiFi → LAN blocks everything except whitelisted apps; LAN → WAN allows all; WAN → Internal blocks all except explicitly exposed services.

UPS Power Management

The CyberPower 1000VA covers all mini PCs and network gear. When power fails, it triggers a cascading shutdown — Kubernetes nodes drain properly before Proxmox hosts go down.

| Feature | Implementation | Purpose |
|---------|----------------|---------|
| pwrstat | USB to GTi13 Pro | Automated shutdown orchestration |
| SSH Scripts | Custom automation | Graceful cluster shutdown |
| Monitoring | Telegram alerts | Real-time power notifications |
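On the PowerPanel side this maps to a handful of lines in `/etc/pwrstatd.conf`; the values are illustrative, the script path is an assumption, and the key names should be checked against the installed default config:

```
# /etc/pwrstatd.conf sketch (CyberPower PowerPanel; verify key names on your version)
powerfail-delay = 60                       # ride out short blips before reacting
powerfail-cmd-enable = yes
powerfail-cmd = /root/cascade-shutdown.sh  # assumed script: drain K8s nodes, then stop Proxmox hosts
powerfail-shutdown = yes                   # finally power down the UPS-attached host itself
```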

UPS monitoring dashboard

Telegram UPS notification

Synology DS223+

Dual purpose: NFS/SMB shares for the ARR stack (still experimenting with both protocols), and personal cloud via Synology Drive.
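For the NFS side, a typical mount from the NAS looks like this in `/etc/fstab`; the IP and export path are assumptions:

```
# /etc/fstab: NFS mount of an assumed Synology export for the ARR stack
192.168.1.20:/volume1/media  /mnt/media  nfs4  rw,noatime  0  0
```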

After 3 years of self-hosting Nextcloud, I switched. Better performance, native mobile apps that actually work, and zero maintenance. Sometimes the best self-hosted solution is the one you never have to think about.

Synology services dashboard

Dell R720

The power-hungry workhorse. Its role has changed a few times:

| Period | Purpose | Configuration |
|--------|---------|---------------|
| Phase 1 | Proxmox hypervisor | 24C/48T, 192GB RAM |
| Phase 2 | AI Playground | Quadro P2200 + Ollama + Open WebUI |
| Current | Backup Target | 4× 960GB RAID-Z2, weekly MinIO sync |

The most interesting project was flashing the PERC controller to IT mode — bypasses hardware RAID so the OS sees drives directly. Fohdeesha’s crossflash guide covers H710/H310 and more.

iDRAC management interface

At ~200W idle, running 24/7 would cost ~€20/month in electricity. Instead: Wake-on-LAN 1–2× weekly, pull MinIO backups from Hetzner, done.
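The Wake-on-LAN part needs nothing more than a UDP broadcast of a "magic packet": 6 bytes of 0xFF followed by the target MAC repeated 16 times. A minimal Python sketch (the MAC in the comment is a placeholder, not the R720's):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast (ports 7 and 9 are conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC: use the server NIC's actual address
```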

Warning

Still constantly changing my mind about what to run here.

Proxmox Cluster

Three-node cluster across the mini PCs. Each node runs one Talos VM, so Kubernetes has HA across physical hosts with no single point of failure.

Proxmox cluster overview

Current VMs:

| VM | Purpose | Specs | Notes |
|----|---------|-------|-------|
| 3× Talos Kubernetes | K8s nodes | 4vCPU/16GB/1TB | Intel iGPU passthrough |
| meroxos | Docker host | 4vCPU/8GB/500GB | For simpler services |
| Windows Server 2019 | AD Lab | 4vCPU/8GB/100GB | Active Directory experiments |
| Windows 11 | Remote desktop | 4vCPU/8GB/50GB | Always-ready Windows machine |
| Home Assistant | Home automation | 2vCPU/4GB/32GB | |
| Kali Linux | Security testing | 2vCPU/4GB/50GB | To be restored from backup |
| GNS3 | Network lab | 4vCPU/8GB/100GB | To be restored from backup |

Home Assistant is intentionally minimal for now. The most interesting automation: location-based Dell R720 fan control — quieter when I’m home, ramped up when away. Details →
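Sketched as Home Assistant config, location-based fan control could look like the snippet below. The entity IDs, iDRAC address, and password are placeholders; the raw IPMI bytes are the ones commonly used on iDRAC7-era Dells (enable manual fan mode, then set a duty cycle), so verify them against your firmware before relying on this:

```yaml
# configuration.yaml sketch; all names and addresses are assumptions
shell_command:
  # 0x30 0x30 0x01 0x00 enables manual fan control; 0x30 0x30 0x02 0xff 0x0f sets ~15% duty
  r720_fans_quiet: >-
    ipmitool -I lanplus -H 192.168.1.30 -U root -P 'changeme'
    raw 0x30 0x30 0x01 0x00 &&
    ipmitool -I lanplus -H 192.168.1.30 -U root -P 'changeme'
    raw 0x30 0x30 0x02 0xff 0x0f

automation:
  - alias: "Quiet R720 fans when home"
    trigger:
      - platform: state
        entity_id: person.me
        to: "home"
    action:
      - service: shell_command.r720_fans_quiet
```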

Cloud

Almost all of it is managed through a single Portainer instance at cloud.merox.dev:

Portainer multi-cluster view

Hetzner cloud dashboard

cloud-de (Hetzner CX22, ~€4/month) — always-on, external monitoring for when the homelab is down:

| Service | Purpose |
|---------|---------|
| Grafana + Prometheus + Alertmanager | External homelab monitoring |
| Pi-hole | Dedicated Tailscale split-DNS |
| Traefik | SSL for all VPS services |
| Guacamole | Remote access via Cloudflare Tunnel |
| Firefox container | GUI access via Guacamole |

homelab-ro (local Docker) — the escape hatch when Kubernetes complexity becomes too much:

| Service | Purpose |
|---------|---------|
| ARR Stack | Quick restore when Kubernetes fails |
| Netboot.xyz | PXE / network boot |
| Portainer Agent | Remote Docker management |

Because sometimes you just need things to work without debugging YAML manifests at 2 AM.

cloud-usa (Oracle Free Tier) — test ground for experimental images and US Tailscale exit node. Not in Portainer — hit the 5-node limit with 3× Kubernetes + 2× Docker.

Talos & Kubernetes

Fair warning: this is where I went full “because I can” mode. If you just want to run services, Docker is the right answer. But if you want to learn enterprise-grade container orchestration in your homelab, keep reading.

The starting point: onedr0p/cluster-template

Talos OS was the first immutable, declarative OS I’d run. After a few days of troubleshooting, I was sold.

Tip

Why Talos over K3s? Immutable OS means less maintenance, GitOps-first design, declarative everything, and it’s closer to what you’d run in production.

My infrastructure repo: 👉 github.com/meroxdotdev/infrastructure

Key customizations:

| Component | Modification | Reason |
|-----------|--------------|--------|
| Storage | Longhorn CSI | Simpler PV/PVC management |
| Talos Patches | Custom machine config | Longhorn requirements |
| Custom Image | factory.talos.dev | Intel iGPU + iSCSI support |

The custom Talos image includes Linux driver tools, iSCSI-tools for network storage, and Intel iGPU drivers for Proxmox passthrough.
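On factory.talos.dev that translates to a schematic roughly like this; the extension names below should be checked against the factory's current catalogue before building:

```yaml
# Talos Image Factory schematic sketch; extension names are assumptions
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/iscsi-tools       # iSCSI initiator (needed by Longhorn)
      - siderolabs/util-linux-tools  # extra Linux userland utilities
      - siderolabs/i915              # Intel iGPU support for QuickSync passthrough
```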

GitOps structure:

infrastructure/kubernetes/apps/
├── storage/ # Longhorn configuration
├── observability/ # Prometheus, Grafana, Loki (WIP)
└── default/ # Production workloads
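Each of those directories is reconciled by a Flux Kustomization; a representative one looks like this (names, path, and interval are illustrative, not copied from the repo):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps-default
  namespace: flux-system
spec:
  interval: 10m
  path: ./kubernetes/apps/default
  prune: true                # objects removed from Git get removed from the cluster
  sourceRef:
    kind: GitRepository
    name: infrastructure
```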

Grafana and Loki dashboard

Day-to-day management with Lens:

Lens Kubernetes cluster overview

Deployed apps:

| App | Purpose | Notes |
|-----|---------|-------|
| Radarr | Movie automation | NFS to Synology |
| Sonarr | TV automation | NFS to Synology |
| Prowlarr | Indexer manager | Central search |
| qBittorrent | Torrent client | ⚠️ Use v5.0.4 for GUI config |
| Jellyseerr | Request management | Public via Cloudflare |
| Jellyfin | Media server | Intel QuickSync enabled |
| Homepage | Dashboard | Still organizing… |

Homepage dashboard

With this setup I can fully rebuild the cluster in 8–9 minutes — declarative config for everything, GitOps workflow with Flux, Renovate bot keeping dependencies updated.

Warning

Keep your SOPS keys backed up separately. You’ll need them to decrypt the repository when rebuilding from scratch.
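With the age backend that the cluster-template uses, "the keys" are a single small file at SOPS's documented default path, so the backup is one copy command (destination and the decrypt-test file path are examples, not the repo's actual layout):

```
# Back up the age private key SOPS decrypts with
cp ~/.config/sops/age/keys.txt /mnt/offline-usb/sops-age-keys.txt

# Sanity check after a restore (example encrypted file path)
sops --decrypt kubernetes/flux/cluster-secrets.sops.yaml >/dev/null && echo "key works"
```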

Backup Strategy

  • Longhorn PVCs → daily backup to MinIO on R720

Longhorn dashboard

  • MinIO on R720 → weekly sync to Hetzner Storagebox

MinIO dashboard

Complete 3-2-1 across local, on-prem, and offsite. For full details on the backup chain and recovery procedures, see Safeguarding My Critical Data.
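The weekly off-site leg can be a single MinIO-client command in cron; the alias and bucket names below are assumptions:

```
# One-way mirror of the local MinIO bucket to the Hetzner Storage Box.
# 'r720' and 'storagebox' are mc aliases configured beforehand with `mc alias set`.
mc mirror --overwrite r720/longhorn-backups storagebox/longhorn-backups
```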

For cluster deployment details, the onedr0p/cluster-template README is surprisingly well written and worth following directly.
