
Inside My Homelab - 2026

Merox & OpenClaw HPC Sysadmin · 9 min read


Hardware

Compute

| Device | CPU | RAM | Storage | Purpose |
|---|---|---|---|---|
| Beelink GTi 13 | i9-13900H (14C/20T) | 64GB DDR5 | 2× 2TB NVMe | Proxmox (px-0) |
| OptiPlex #1 | i5-6500T (4C/4T) | 32GB DDR4 | 128GB NVMe | Proxmox (px-1) |
| OptiPlex #2 | i5-6500T (4C/4T) | 32GB DDR4 | 128GB NVMe | Proxmox (px-2) |
| Dell R720 | 2× E5-2697 v2 (24C/48T) | 192GB ECC | 4× 960GB SSD | Power Testing |
| Synology DS223+ | ARM RTD1619B | 2GB | 2× 2TB RAID1 | NAS/Media |

Network Gear

| Device | Model | Specs | Purpose |
|---|---|---|---|
| ONT | Huawei | 1GbE | ISP Gateway |
| Firewall | XCY X44 | 8× 1GbE | pfSense Router |
| WiFi | TP-Link AX3000 | WiFi 6 | Wireless AP |
| Switch | TP-Link | 24-port | Core Switch |

Power Protection

| Device | Model | Protected Equipment | Capacity |
|---|---|---|---|
| UPS #1 | CyberPower | Mini PCs (Proxmox cluster) | 1500VA |
| UPS #2 | CyberPower | Network gear | 1000VA |

Cloud

| Provider | Instance | Specs | Location | Purpose |
|---|---|---|---|---|
| Oracle | Ampere A1 | 4 vCPU / 24GB / 200GB (~98GB free) | USA | Docker Services + OpenClaw Agent |

Network

Three dedicated physical interfaces on pfSense:

- WAN Interface → Orange ISP (Bridge Mode)
- LAN Interface → Homelab Network
- WiFi Interface → Guest/IoT Isolation
> **Warning:** WiFi clients are firewalled from homelab services, except whitelisted ones like Jellyfin.

Tailscale creates a flat network across all locations — homelab and Oracle both appear on the same mesh.

Homelab Network Topology

pfSense

A fanless mini PC from AliExpress (~200€) running pfSense for 3+ years: 👉 XCY X44 on AliExpress

pfSense services dashboard

Tailscale Subnet Router exposes the entire homelab to cloud VPS without installing Tailscale on every device. Also the solution to CGNAT — when your ISP doesn’t give you a public IP, this gets you in. Setup guide →
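The subnet-router side boils down to two steps. A generic Linux sketch follows (on pfSense itself the Tailscale package exposes the same options in the GUI, and the 10.57.57.0/24 subnet here is an assumption based on the LAN addressing used elsewhere in this post):

```shell
# Enable IP forwarding so this node can route for the whole subnet
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the homelab subnet to the tailnet; the route still has to be
# approved afterwards in the Tailscale admin console (Machines → routes)
sudo tailscale up --advertise-routes=10.57.57.0/24
```

Once the route is approved, every tailnet device can reach homelab IPs directly, which is what makes the CGNAT problem disappear.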

Unbound DNS runs as a local recursive resolver with domain overrides for *.k8s.merox.dev pointing to K8s-Gateway.
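In pfSense this lives under Services → DNS Resolver → Domain Overrides; underneath, it's a one-stanza Unbound forward zone. A sketch, where 10.57.57.101 is a placeholder for the k8s-gateway LoadBalancer IP:

```
forward-zone:
    name: "k8s.merox.dev"
    forward-addr: 10.57.57.101    # k8s-gateway LoadBalancer IP (placeholder)
```

Any query for `*.k8s.merox.dev` gets forwarded to k8s-gateway, which answers with the in-cluster service addresses.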

Telegraf pushes system metrics to Grafana.

Firewall rules: WiFi → LAN blocks everything except whitelisted apps; LAN → WAN allows all; WAN → Internal blocks all except explicitly exposed services.

> **Tip:** The Oracle instance doubles as a Tailscale exit node, useful for routing traffic through the US when needed.
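Exit-node setup is two commands, one on each side (the machine name `oracle-usa` is an assumed hostname, not necessarily the real one):

```shell
# On the Oracle instance: advertise itself as an exit node
sudo tailscale up --advertise-exit-node

# On a homelab client: send all traffic out through it
sudo tailscale set --exit-node=oracle-usa
```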

Virtualization

Proxmox Cluster

Three-node cluster across the mini PCs. Each node runs one Talos VM, so Kubernetes has HA across physical hosts with no single point of failure.

Proxmox cluster overview

Nodes:

| Node | Device | CPU | RAM | Role |
|---|---|---|---|---|
| px-0 | Beelink GTi 13 | i9-13900H (20T) | 64GB | Primary — hosts K8s controlplane-1 |
| px-1 | OptiPlex #1 | i5-6500T (4T) | 32GB | K8s controlplane-2 |
| px-2 | OptiPlex #2 | i5-6500T (4T) | 32GB | K8s controlplane-3 |

Storage:

| Pool | Type | Used | Total |
|---|---|---|---|
| cluster-storage | ZFS | 713GB | 899GB |
| synology-nas | NFS | 985GB | 1.4TB |
| local-data | dir | 177GB | 812GB |

Current VMs:

| VM | Purpose | Specs | Status |
|---|---|---|---|
| kubernetes-controlplane-1 | K8s node (px-0) | 8 vCPU / 24GB | Running |
| kubernetes-controlplane-2 | K8s node (px-1) | 4 vCPU / 16GB | Running |
| kubernetes-controlplane-3 | K8s node (px-2) | 4 vCPU / 16GB | Running |
| Windows 10 | Lab / testing | 4 vCPU / 12GB | Stopped |
| Windows Server 2019 | AD Lab | 8 vCPU / 14GB | Stopped |
| Windows 11 | Remote desktop | 8 vCPU / 16GB | Stopped |
> **Note:** Home Assistant, Kali Linux, and GNS3 are backed up to the Synology NAS and restored on demand, not kept running permanently.

Intel Iris Xe GPU on px-0 is passed through to the kubernetes-controlplane-1 VM for Jellyfin hardware transcoding (Intel QuickSync).
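A quick sanity check that the passthrough actually worked, assuming the usual Linux device names:

```shell
# Inside the controlplane-1 VM: the passed-through iGPU should show up
# as a DRI device; Jellyfin's QSV transcoding uses the render node
ls -l /dev/dri        # expect card0 and renderD128

# Confirm the VA-API driver sees QuickSync (needs libva-utils installed)
vainfo
```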

Synology DS223+

Dual purpose: NFS/SMB shares for the ARR stack (still experimenting with both protocols), and personal cloud via Synology Drive.

After 3 years of self-hosting Nextcloud, I switched. Better performance, native mobile apps that actually work, and zero maintenance. Sometimes the best self-hosted solution is the one you never have to think about.

Synology services dashboard

Dell R720

The power-hungry workhorse. Its role has changed a few times:

| Period | Purpose | Configuration |
|---|---|---|
| Phase 1 | Proxmox hypervisor | 24C/48T, 192GB RAM |
| Phase 2 | AI Playground | Quadro P2200 + Ollama + Open WebUI |
| Phase 3 | Backup Target | 4× 960GB RAID-Z2, Garage S3 backend |
| Current | Power Testing | Occasional boots to measure idle/load consumption |

The most interesting project was flashing the PERC controller to IT mode — bypasses hardware RAID so the OS sees drives directly. Fohdeesha’s crossflash guide covers H710/H310 and more.

iDRAC management interface

At ~200W idle, running 24/7 costs ~€20/month in electricity. Right now it’s off most of the time — I boot it occasionally to benchmark power usage and figure out if there’s any workload that justifies keeping it on. Still undecided.
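The €20 figure is simple arithmetic, assuming a tariff of roughly €0.14/kWh (my local rate; yours will differ):

```shell
WATTS=200                           # measured idle draw
HOURS=$((24 * 30))                  # hours in a month
KWH=$(( WATTS * HOURS / 1000 ))     # energy per month in kWh
COST_CENTS=$(( KWH * 14 ))          # at 14 euro cents per kWh
echo "${KWH} kWh/month, about EUR $(( COST_CENTS / 100 ))"
```

This prints 144 kWh/month, landing right at the €20 mark.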

Power Management

The CyberPower UPS covers all mini PCs and network gear. When power fails, it triggers a cascading shutdown — Kubernetes nodes drain properly before Proxmox hosts go down.

| Feature | Implementation | Purpose |
|---|---|---|
| pwrstat | USB to GTi13 Pro | Automated shutdown orchestration |
| SSH Scripts | Custom automation | Graceful cluster shutdown |
| Monitoring | Telegram alerts | Real-time power notifications |
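The orchestration script is roughly this shape (a sketch, not my exact script; the Talos node IPs and hostnames are placeholders):

```shell
#!/bin/sh
# Triggered by pwrstat when the UPS reports on-battery with low runtime left

# 1. Gracefully shut down the Talos nodes so Kubernetes drains cleanly
for node in 10.57.57.11 10.57.57.12 10.57.57.13; do
    talosctl -n "$node" shutdown &
done
wait

# 2. Then power off the other Proxmox hosts over SSH
for host in px-1 px-2; do
    ssh root@"$host" 'shutdown -h now'
done

# 3. The host running this script (px-0, where pwrstat lives) goes last
shutdown -h now
```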

UPS monitoring dashboard

Telegram UPS notification

Kubernetes

Fair warning: this is where I went full “because I can” mode. If you just want to run services, Docker is the right answer. But if you want to learn enterprise-grade container orchestration in your homelab, keep reading.

The starting point: onedr0p/cluster-template

Talos OS was the first immutable, declarative OS I’d run. After a few days of troubleshooting, I was sold.

> **Tip:** Why Talos over K3s? Immutable OS means less maintenance, GitOps-first design, declarative everything, and it's closer to what you'd run in production.

My infrastructure repo: 👉 github.com/meroxdotdev/infrastructure

Key customizations:

| Component | Modification | Reason |
|---|---|---|
| Storage | Longhorn CSI | Simpler PV/PVC management |
| Talos Patches | Custom machine config | Longhorn requirements |
| Custom Image | factory.talos.dev | Intel iGPU + iSCSI support |
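The factory image supplies the `siderolabs/iscsi-tools` and `siderolabs/util-linux-tools` system extensions Longhorn needs; on the machine-config side, the main requirement is a shared bind mount for Longhorn's data path. A sketch of the relevant patch, following Longhorn's Talos documentation:

```yaml
machine:
  kubelet:
    extraMounts:
      # Longhorn stores replica data here and needs rshared propagation
      - destination: /var/lib/longhorn
        type: bind
        source: /var/lib/longhorn
        options:
          - bind
          - rshared
          - rw
```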

GitOps structure:

```
kubernetes/apps/
├── cert-manager/     # TLS automation
├── default/          # Production workloads
├── flux-system/      # Flux operator + instance
├── kube-system/      # Cilium, CoreDNS, NFS CSI, metrics-server
├── network/          # k8s-gateway, Cloudflare tunnel + DNS
├── observability/    # Prometheus, Grafana, Loki
└── storage/          # Longhorn configuration
```

Grafana and Loki dashboard

Lens Kubernetes cluster overview

Deployed apps:

| App | Purpose | Notes |
|---|---|---|
| Radarr | Movie automation | NFS to Synology |
| Sonarr | TV automation | NFS to Synology |
| Prowlarr | Indexer manager | Central search |
| qBittorrent | Torrent client | Gluetun sidecar + SurfShark WireGuard VPN |
| Jellyseerr | Request management | Public via Cloudflare |
| Jellyfin | Media server | Intel QuickSync enabled |
| n8n | Workflow automation | |
| Homepage | Dashboard | |
| Grafana | Metrics dashboards | |
| Prometheus + Alertmanager | Metrics collection + alerts | |
| Loki + Promtail | Log aggregation | |
| Netdata | Per-node system monitoring | DaemonSet — one agent per K8s node |
| cert-manager | TLS certificate automation | ACME via Let's Encrypt |

The live dashboard is public — current service status at inside.merox.dev.

LoadBalancer IPs are handled by Cilium’s L2 announcement — a pool of addresses (10.57.57.100–120) announced directly on the LAN via ARP, no external load balancer needed. Services like qBittorrent, k8s-gateway, and the Cloudflare tunnel endpoint each get a dedicated IP from this pool.
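In Cilium terms that's two small CRDs, plus `l2announcements.enabled=true` in the Helm values. A sketch (resource names and the interface regex are mine):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lan-pool
spec:
  blocks:
    # Addresses handed out to LoadBalancer Services
    - start: 10.57.57.100
      stop: 10.57.57.120
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-lan
spec:
  # Answer ARP for LoadBalancer IPs on the LAN-facing interfaces
  loadBalancerIPs: true
  interfaces:
    - ^eth[0-9]+
```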

With this setup I can fully rebuild the cluster in 8–9 minutes — declarative config for everything, GitOps workflow with Flux, Renovate bot keeping dependencies updated.

> **Warning:** Keep your SOPS keys backed up separately. You'll need them to decrypt the repository when rebuilding from scratch.
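In practice that means the age private key, which never lives in the repo. A sketch using the SOPS default key path (the secret filename is a placeholder):

```shell
# Default location of the age private key SOPS uses for decryption
cat ~/.config/sops/age/keys.txt      # AGE-SECRET-KEY-1...

# After copying it somewhere offline, prove the backup can still decrypt
SOPS_AGE_KEY_FILE=/path/to/backup/keys.txt sops -d some-secret.sops.yaml
```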

For cluster deployment details, the onedr0p/cluster-template README is surprisingly well written and worth following directly.

Services

Managed through a single Portainer instance at cloud.merox.dev:

Portainer multi-cluster view

cloud-usa (Oracle Free Tier Ampere A1) — the only cloud node. Always-on, handles external access, backup target via rsync, and runs the OpenClaw infrastructure agent:

| Service | Purpose |
|---|---|
| Traefik | SSL for all VPS services |
| Pi-hole | Dedicated Tailscale split-DNS |
| Portainer | Container management |
| Homepage | Dashboard |
| Guacamole | Remote access via Cloudflare Tunnel |
| Uptime Kuma | Service uptime monitoring |
| Glances | System resource monitoring |
| Joplin Server | Self-hosted notes sync |
| OpenClaw | AI infrastructure agent (Telegram → kubectl/flux/docker) |
| Garage S3 + WebUI | S3-compatible object storage for Longhorn backups |
| Rsync endpoint | Off-site backup target from Synology NAS |

homelab-ro (local Docker) — lightweight services on the local Docker host:

| Service | Purpose |
|---|---|
| Netboot.xyz | PXE / network boot |
| Portainer Agent | Remote Docker management |
> **Note:** Netdata runs as a Kubernetes DaemonSet — one child agent per node, one parent for aggregation — not as a standalone Docker container.

> **Tip:** Full OpenClaw setup: Running OpenClaw as a Homelab Infrastructure Agent.

Backup

  • Longhorn PVCs → Garage S3 on Oracle Cloud (S3-compatible backup)
  • Synology NAS → rsync to Oracle Cloud instance (off-site copy)
```
Kubernetes PVCs (Longhorn) ──► Garage S3 (Oracle Cloud VPS, 200GB disk) ◄── rsync ── Synology NAS
```

Longhorn dashboard

Garage runs on the Oracle VPS’s single 200GB boot disk — currently ~28GB used by Garage data, ~98GB free total. Not a lot of headroom, but more than enough for compressed PVC snapshots at this scale.

MinIO was the original S3 backend until it discontinued its Docker images and moved to a source-only model with known CVEs on older builds. Garage replaced it — lightweight, S3-compatible, runs fine on the Oracle free tier. Full migration walkthrough: Migrating Longhorn Backup from MinIO to Garage.
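Pointing Longhorn at Garage is just a backup target plus a credential secret. A sketch, where the bucket name, region label, and endpoint are placeholders:

```yaml
# Longhorn settings (Settings → General in the UI, or Helm values):
#   backup-target:                   s3://longhorn-backups@garage/
#   backup-target-credential-secret: garage-s3-credentials
apiVersion: v1
kind: Secret
metadata:
  name: garage-s3-credentials
  namespace: longhorn-system
stringData:
  AWS_ACCESS_KEY_ID: <garage-key-id>
  AWS_SECRET_ACCESS_KEY: <garage-secret>
  AWS_ENDPOINTS: https://s3.example.com   # the Garage endpoint on the VPS
```

`AWS_ENDPOINTS` is how Longhorn learns to talk to a non-AWS, S3-compatible endpoint instead of Amazon itself.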

The R720’s backup role has been retired entirely — Garage on Oracle handles Longhorn backups, Synology handles local redundancy (2× 2TB RAID1) and rsyncs off-site to the same VPS. Simpler than spinning up a 200W server for weekly sync jobs. For full details on the backup chain and recovery procedures, see Safeguarding My Critical Data.


Related Posts

Setting Up Dell R720 Server in the Home Lab

How I integrated a Dell PowerEdge R720 into my homelab — fan control via IPMItool, firmware updates, Proxmox migration over NFS, RAID storage, and UPS integration with PowerPanel.

3 min read

Running OpenClaw as a Homelab Infrastructure Agent

How I replaced my custom merox-agent Telegram bot with OpenClaw — an open-source personal AI assistant that manages my Kubernetes cluster, Docker services, and GitOps repo from my phone.

4 min read

Smart Home Journey

How I built a fully automated smart home with Alexa, Home Assistant, Broadlink, and Philips Hue — routines, integrations, and running Linux scripts from HA based on GPS location.

3 min read