After redeploying my K8s cluster with Flux GitOps, I had a fresh cluster with all apps deployed but empty volumes — no configurations, no dashboards, no media metadata. Here’s the full process I used to restore 6 applications from Longhorn backups stored in MinIO.
What needed restoring:
- Prometheus (45GB)
- Loki (20GB)
- Jellyfin (10GB)
- Grafana (10GB)
- qBittorrent (5GB)
- Sonarr (5GB)
Prerequisites: kubectl access, Longhorn installed and connected to MinIO/S3, Longhorn UI accessible.
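Before starting, it's worth a quick sanity check that those prerequisites actually hold. A sketch (the `backuptargets.longhorn.io` CRD exists in recent Longhorn releases; on older versions the backup target lives in the `backup-target` setting instead — adjust to your version):

```shell
# Longhorn components should all be Running
kubectl get pods -n longhorn-system

# The longhorn StorageClass should exist
kubectl get storageclass longhorn

# The backup target should point at your MinIO/S3 endpoint
# (CRD name is an assumption based on recent Longhorn releases)
kubectl get backuptargets.longhorn.io -n longhorn-system
```

If the backup target isn't configured, the Backup tab in the UI will be empty and nothing below will work.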
Step 1: Assess Current State
```bash
kubectl get deployments -A
kubectl get statefulsets -A
kubectl get pvc -A -o wide
kubectl get storageclass
```

Step 2: Scale Down Applications
**Danger:** Scale down everything before touching storage to prevent data corruption.
```bash
kubectl scale deployment jellyfin --replicas=0 -n default
kubectl scale deployment qbittorrent --replicas=0 -n default
kubectl scale deployment sonarr --replicas=0 -n default
kubectl scale deployment grafana --replicas=0 -n observability

kubectl scale statefulset loki --replicas=0 -n observability
kubectl scale statefulset prometheus-kube-prometheus-stack --replicas=0 -n observability
kubectl scale statefulset alertmanager-kube-prometheus-stack --replicas=0 -n observability
```

Wait for all pods to terminate before continuing.
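Rather than polling by hand, you can block until the pods are actually gone. A sketch — the label selectors are assumptions, so substitute whatever labels your charts really use (and note that older kubectl versions error on `--for=delete` when the selector matches nothing):

```shell
# Block until each scaled-down app's pods have terminated (3 min timeout each).
# Label selector app.kubernetes.io/name is an assumption; adjust to your charts.
for app in jellyfin qbittorrent sonarr; do
  kubectl wait pod -n default -l app.kubernetes.io/name="$app" \
    --for=delete --timeout=180s
done
kubectl wait pod -n observability -l app.kubernetes.io/name=grafana \
  --for=delete --timeout=180s
```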
Step 3: Delete Empty PVCs
```bash
kubectl delete pvc jellyfin -n default
kubectl delete pvc qbittorrent -n default
kubectl delete pvc sonarr -n default

kubectl delete pvc grafana -n observability
kubectl delete pvc storage-loki-0 -n observability
kubectl delete pvc prometheus-kube-prometheus-stack-db-prometheus-kube-prometheus-stack-0 -n observability
```

Step 4: Restore Backups via Longhorn UI
In the Longhorn UI, go to the Backup tab. For each backup, click the ⟲ restore button and configure:
- Name: `<app>-restored` (e.g. `jellyfin-restored`)
- Storage Class: `longhorn`
- Access Mode: `ReadWriteOnce`
Repeat for all six applications.
**Warning:** Wait for all restore operations to complete before moving on. Monitor progress in the Longhorn UI.
Step 5: Create PersistentVolumes
For each restored volume, create a PV pointing to it. Adjust `storage` and `volumeHandle` per app — the `volumeHandle` must exactly match the name you gave the restored Longhorn volume:
```yaml
# Repeat for each application
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-restored-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
    volumeHandle: jellyfin-restored
```

Step 6: Create PersistentVolumeClaims
Bind each PVC to the specific PV using `volumeName`:
```yaml
# Repeat for each application
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: longhorn
  volumeName: jellyfin-restored-pv
```

Step 7: Verify Binding
```bash
kubectl get pvc -n default | grep -E "(jellyfin|qbittorrent|sonarr)"
kubectl get pvc -n observability | grep -E "(grafana|storage-loki|prometheus)"
kubectl get volumes -n longhorn-system | grep "restored"
```

All PVCs should show `Bound`, and the Longhorn volumes should show as attached and healthy.
Step 8: Scale Applications Back Up
```bash
kubectl scale deployment jellyfin --replicas=1 -n default
kubectl scale deployment qbittorrent --replicas=1 -n default
kubectl scale deployment sonarr --replicas=1 -n default
kubectl scale deployment grafana --replicas=1 -n observability

kubectl scale statefulset loki --replicas=1 -n observability
kubectl scale statefulset prometheus-kube-prometheus-stack --replicas=1 -n observability
kubectl scale statefulset alertmanager-kube-prometheus-stack --replicas=1 -n observability
```

Step 9: Verify
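To confirm each workload actually comes back up after the scale-up, `kubectl rollout status` blocks until the pods report ready (standard kubectl, nothing Longhorn-specific; workload names as used in the scale commands above):

```shell
# Block until each restored workload's pods are ready (5 min timeout each).
for app in jellyfin qbittorrent sonarr; do
  kubectl rollout status deployment/"$app" -n default --timeout=300s
done
kubectl rollout status deployment/grafana -n observability --timeout=300s
kubectl rollout status statefulset/loki -n observability --timeout=300s
```

A pod stuck in `ContainerCreating` here usually means its PVC didn't bind — recheck Step 7 before digging into the app itself.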
```bash
kubectl get pods -A | grep -v Running | grep -v Completed
kubectl get volumes -n longhorn-system | grep "restored"
kubectl get pods -n default -o wide
kubectl get pods -n observability -o wide
```

Alternative: CLI Restore via CRD
If the UI isn’t available or you want to automate:
```yaml
apiVersion: longhorn.io/v1beta1
kind: Volume
metadata:
  name: jellyfin-restored
  namespace: longhorn-system
spec:
  size: "10737418240"
  restoreVolumeRecurringJob: false
  fromBackup: "s3://your-minio-bucket/backups/backup-name"
```

The full process for 6 applications took about 30 minutes.
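With the CRD approach you'd apply that manifest and watch the volume until the restore finishes, then create the PV and PVC exactly as in Steps 5–6. A sketch — the filename is hypothetical, and the restore-status columns vary a little between Longhorn versions:

```shell
# Create the restore volume, then watch it until the restore completes.
# jellyfin-restored-volume.yaml is a hypothetical filename for the
# Volume manifest shown above.
kubectl apply -f jellyfin-restored-volume.yaml
kubectl get volumes.longhorn.io jellyfin-restored -n longhorn-system -w
```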
Update (October 2025)
Since writing this, I’ve migrated backup storage from MinIO to Garage after MinIO discontinued their Docker images. The restoration process above works identically with Garage as the backend. See Migrating Longhorn Backup from MinIO to Garage for details.