Restoring Your Kubernetes Applications from Longhorn Backups
When disaster strikes your Kubernetes cluster, having a solid backup strategy isn’t enough—you need to know how to restore your applications quickly and reliably. Recently, I had to rebuild my entire K8S cluster from scratch and restore all my applications from Longhorn backups stored in MinIO. Here’s the complete process that got my media stack and observability tools back online.
The Situation
After redeploying my K8S cluster with Flux GitOps, I found myself with:
- ✅ Fresh cluster with all applications deployed via Flux
- ✅ Longhorn storage configured and connected to MinIO backend
- ✅ All backup data visible in Longhorn UI
- ❌ Empty volumes for all applications
- ❌ Lost configurations, dashboards, and media metadata
The challenge? Restore 6 critical applications to their backup state without losing the current Flux-managed infrastructure.
Applications to Restore
Here’s what needed restoration:
- Prometheus (45GB) - Monitoring metrics and configuration
- Loki (20GB) - Log aggregation and retention
- Jellyfin (10GB) - Media library and metadata
- Grafana (10GB) - Dashboards and data sources
- QBittorrent (5GB) - Torrent client configuration
- Sonarr (5GB) - TV show management settings
Prerequisites
Before starting, ensure you have:
- Kubernetes cluster with kubectl access
- Longhorn installed and configured
- Backup storage backend accessible (MinIO/S3)
- Applications deployed (it doesn't matter whether they are currently scaled up or down)
- Longhorn UI access for backup management
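It can also save time to confirm up front that Longhorn can actually reach the backup target, rather than discovering a misconfiguration mid-restore. A minimal sketch, assuming a default install in the `longhorn-system` namespace (the `backup-target` setting name comes from Longhorn's documented settings and may differ between versions):

```bash
# Show which backup target (MinIO/S3 endpoint) Longhorn is configured to use.
kubectl -n longhorn-system get settings.longhorn.io backup-target -o yaml
```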
Step 1: Assess Current State
First, let’s understand what we’re working with:
```bash
# Check current deployments and statefulsets
kubectl get deployments -A
kubectl get statefulsets -A

# Review current PVCs
kubectl get pvc -A -o wide

# Verify Longhorn storage class
kubectl get storageclass
```

This gives you a complete picture of your current infrastructure and identifies which PVCs need replacement.
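Since the next steps delete PVCs, it's also worth saving the current PVC-to-PV mapping and requested sizes to a file first, so you can cross-check names and capacities when recreating them later. A small sketch (the output filename is just an example):

```bash
# Record the current PVC -> PV mapping and requested sizes for later reference.
kubectl get pvc -A -o custom-columns=NAMESPACE:.metadata.namespace,PVC:.metadata.name,PV:.spec.volumeName,SIZE:.spec.resources.requests.storage \
  > pvc-inventory.txt
```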
Step 2: Scale Down Applications
Danger
Critical: Before touching any storage, scale down applications to prevent data corruption:
```bash
# Scale down deployments
kubectl scale deployment jellyfin --replicas=0 -n default
kubectl scale deployment qbittorrent --replicas=0 -n default
kubectl scale deployment sonarr --replicas=0 -n default
kubectl scale deployment grafana --replicas=0 -n observability

# Scale down statefulsets
kubectl scale statefulset loki --replicas=0 -n observability
kubectl scale statefulset prometheus-kube-prometheus-stack --replicas=0 -n observability
kubectl scale statefulset alertmanager-kube-prometheus-stack --replicas=0 -n observability
```

Wait for all pods to terminate before proceeding.
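Rather than polling `kubectl get pods` by hand, you can let kubectl block until the pods are actually gone. A minimal sketch, assuming the workloads carry the labels shown below (these selectors are assumptions; check yours with `kubectl get pods --show-labels`):

```bash
# Block until the scaled-down pods have terminated before touching storage.
# Label selectors are examples only; adjust them to match your charts.
kubectl wait --for=delete pod -l app=jellyfin -n default --timeout=180s
kubectl wait --for=delete pod -l app.kubernetes.io/name=grafana -n observability --timeout=180s
```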
Step 3: Remove Current Empty PVCs
Since the current PVCs contain only empty data, we need to remove them:
```bash
# Delete PVCs in default namespace
kubectl delete pvc jellyfin -n default
kubectl delete pvc qbittorrent -n default
kubectl delete pvc sonarr -n default

# Delete PVCs in observability namespace
kubectl delete pvc grafana -n observability
kubectl delete pvc storage-loki-0 -n observability
kubectl delete pvc prometheus-kube-prometheus-stack-db-prometheus-kube-prometheus-stack-0 -n observability
```

Step 4: Restore Backups via Longhorn UI
This is where the magic happens. Access your Longhorn UI and navigate to the Backup tab.
For each backup, click the ⟲ (restore) button and configure:
Prometheus Backup
- Name: `prometheus-restored`
- Storage Class: `longhorn`
- Access Mode: `ReadWriteOnce`

Loki Backup
- Name: `loki-restored`
- Storage Class: `longhorn`
- Access Mode: `ReadWriteOnce`

Jellyfin Backup
- Name: `jellyfin-restored`
- Storage Class: `longhorn`
- Access Mode: `ReadWriteOnce`
Continue for all other backups…
Warning
Important: Wait for all restore operations to complete before proceeding. You can monitor progress in the Longhorn UI.
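If you'd rather keep an eye on the restores from a terminal, the volumes Longhorn is rebuilding also show up as custom resources. A minimal sketch, assuming a default install in `longhorn-system`:

```bash
# Watch restore progress from the CLI; each restored volume should eventually
# report a healthy (or detached) state before you move on to the next step.
kubectl -n longhorn-system get volumes.longhorn.io -w
```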
Step 5: Create PersistentVolumes
Once restoration completes, the restored Longhorn volumes need PersistentVolumes to be accessible by Kubernetes:
```yaml
# Example for Jellyfin - repeat for every application you want to restore
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-restored-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
    volumeHandle: jellyfin-restored
```

Apply this pattern for all restored volumes, adjusting the storage capacity and volumeHandle to match your backups.
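To put a manifest like this in place, apply it and confirm the PV registers as Available before creating the claim. A quick sketch, assuming you saved the manifest above as `jellyfin-restored-pv.yaml` (the filename is just an example):

```bash
# Create the PersistentVolume and confirm it shows up as Available.
kubectl apply -f jellyfin-restored-pv.yaml
kubectl get pv jellyfin-restored-pv
```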
Step 6: Create PersistentVolumeClaims
Now create PVCs that bind to the restored PersistentVolumes:
```yaml
# Example for Jellyfin
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: longhorn
  volumeName: jellyfin-restored-pv
```

The key here is using volumeName to bind the PVC to the specific PV we created.
Step 7: Verify Binding
Check that all PVCs are properly bound:
```bash
# Check binding status
kubectl get pvc -n default | grep -E "(jellyfin|qbittorrent|sonarr)"
kubectl get pvc -n observability | grep -E "(grafana|storage-loki|prometheus)"

# Verify Longhorn volume status
kubectl get volumes -n longhorn-system | grep "restored"
```

You should see all PVCs in Bound status and Longhorn volumes as attached and healthy.
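If a claim sits in Pending instead of Bound, the events usually explain why (a size mismatch, the wrong storageClassName, or a typo in volumeName). A quick way to check, using Jellyfin as the example:

```bash
# Inspect binding events for a PVC and its target PV.
kubectl describe pvc jellyfin -n default
kubectl describe pv jellyfin-restored-pv
```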
Step 8: Scale Applications Back Up
With storage properly restored and connected, bring your applications back online:
```bash
# Scale deployments back up
kubectl scale deployment jellyfin --replicas=1 -n default
kubectl scale deployment qbittorrent --replicas=1 -n default
kubectl scale deployment sonarr --replicas=1 -n default
kubectl scale deployment grafana --replicas=1 -n observability

# Scale statefulsets back up
kubectl scale statefulset loki --replicas=1 -n observability
kubectl scale statefulset prometheus-kube-prometheus-stack --replicas=1 -n observability
kubectl scale statefulset alertmanager-kube-prometheus-stack --replicas=1 -n observability
```
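Rather than eyeballing pod lists, you can let kubectl confirm that each workload actually comes back up. A minimal sketch for two of the workloads; repeat for the rest:

```bash
# Block until the restored workloads report ready.
kubectl rollout status deployment/jellyfin -n default --timeout=300s
kubectl rollout status statefulset/loki -n observability --timeout=300s
```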
Step 9: Final Verification
Confirm everything is working correctly:
```bash
# Check pod status
kubectl get pods -A | grep -v Running | grep -v Completed

# Verify Longhorn volumes are healthy
kubectl get volumes -n longhorn-system | grep "restored"

# Test application functionality
kubectl get pods -n default -o wide
kubectl get pods -n observability -o wide
```

Alternative: CLI-Based Restoration
For automation or when UI access isn’t available, you can restore via Longhorn’s CRD:
```yaml
apiVersion: longhorn.io/v1beta1
kind: Volume
metadata:
  name: jellyfin-restored
  namespace: longhorn-system
spec:
  size: "10737418240" # Size in bytes
  restoreVolumeRecurringJob: false
  fromBackup: "s3://your-minio-bucket/backups/backup-name"
```
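The fiddly part of this route is getting fromBackup right; the URL above is a placeholder. The easiest option is to copy the real value from the backup's details in the Longhorn UI, but you can also list the backup objects Longhorn tracks in the cluster. A sketch, assuming a default install (these CRD names exist in recent Longhorn releases but may differ on older ones):

```bash
# List the backup volumes and individual backups Longhorn knows about; the full
# backup URL for fromBackup can then be read from the matching object's details or the UI.
kubectl -n longhorn-system get backupvolumes.longhorn.io
kubectl -n longhorn-system get backups.longhorn.io
```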
Conclusion
Restoring Kubernetes applications from Longhorn backups requires careful orchestration of scaling, PVC management, and volume binding. The process took about 30 minutes for 6 applications, but the result was a complete restoration to the previous backup state.
Having a solid backup strategy is crucial, but knowing how to restore efficiently under pressure is what separates good infrastructure management from great infrastructure management.
Your future self will thank you when disaster strikes again. 😆
Update (October 2025)
Since writing this guide, I’ve migrated my backup storage from MinIO to Garage due to MinIO’s decision to discontinue Docker images. The restoration process described above works identically with Garage as the backup target. If you’re interested in making the same migration, check out my guide on Migrating Longhorn Backup from MinIO to Garage.