
Restoring from Longhorn Backups


After redeploying my K8s cluster with Flux GitOps, I had a fresh cluster with all apps deployed but empty volumes — no configurations, no dashboards, no media metadata. Here’s the full process I used to restore 6 applications from Longhorn backups stored in MinIO.

What needed restoring:

  • Prometheus (45GB)
  • Loki (20GB)
  • Jellyfin (10GB)
  • Grafana (10GB)
  • qBittorrent (5GB)
  • Sonarr (5GB)

Prerequisites: kubectl access, Longhorn installed and connected to MinIO/S3, Longhorn UI accessible.

Step 1: Assess Current State

kubectl get deployments -A
kubectl get statefulsets -A
kubectl get pvc -A -o wide
kubectl get storageclass

Step 2: Scale Down Applications

Danger

Scale down everything before touching storage to prevent data corruption.

kubectl scale deployment jellyfin --replicas=0 -n default
kubectl scale deployment qbittorrent --replicas=0 -n default
kubectl scale deployment sonarr --replicas=0 -n default
kubectl scale deployment grafana --replicas=0 -n observability
kubectl scale statefulset loki --replicas=0 -n observability
kubectl scale statefulset prometheus-kube-prometheus-stack --replicas=0 -n observability
kubectl scale statefulset alertmanager-kube-prometheus-stack --replicas=0 -n observability

Wait for all pods to terminate before continuing.
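Rather than polling `kubectl get pods` by hand, you can block until the scaled-down pods are actually gone. A sketch — the label selectors below (`app=...`, `app.kubernetes.io/name=...`) are assumptions; adjust them to whatever labels your deployments actually carry:

```shell
# Block until the scaled-down pods are deleted.
# --timeout guards against pods stuck in Terminating; if it trips,
# investigate the pod before forcing anything.
kubectl wait pods -n default -l 'app in (jellyfin,qbittorrent,sonarr)' \
  --for=delete --timeout=120s
kubectl wait pods -n observability -l 'app.kubernetes.io/name in (grafana,loki,prometheus)' \
  --for=delete --timeout=120s
```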

Step 3: Delete Empty PVCs

kubectl delete pvc jellyfin -n default
kubectl delete pvc qbittorrent -n default
kubectl delete pvc sonarr -n default
kubectl delete pvc grafana -n observability
kubectl delete pvc storage-loki-0 -n observability
kubectl delete pvc prometheus-kube-prometheus-stack-db-prometheus-kube-prometheus-stack-0 -n observability

Step 4: Restore Backups via Longhorn UI

In the Longhorn UI, go to the Backup tab. For each backup, click the ⟲ restore button and configure:

  • Name: <app>-restored (e.g. jellyfin-restored)
  • Storage Class: longhorn
  • Access Mode: ReadWriteOnce

Repeat for all six applications.

Warning

Wait for all restore operations to complete before moving on. Monitor progress in the Longhorn UI.

Step 5: Create PersistentVolumes

For each restored volume, create a PV pointing to it. Adjust the name, storage, and volumeHandle per app:

# Repeat for each application
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-restored-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeHandle: jellyfin-restored
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
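Writing six near-identical manifests by hand invites typos. A small generator script can emit them all — a sketch, assuming the sizes from the list at the top and the `<app>-restored` naming scheme from Step 4; review the output before piping it to `kubectl apply -f -`:

```shell
#!/bin/sh
# Sketch: emit one PersistentVolume manifest per restored Longhorn volume.
# Entries are app:size; "<app>-restored" assumes the Step 4 naming scheme.
gen_pvs() {
  for spec in prometheus:45Gi loki:20Gi jellyfin:10Gi grafana:10Gi qbittorrent:5Gi sonarr:5Gi; do
    app=${spec%%:*} size=${spec##*:}
    cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${app}-restored-pv
spec:
  capacity:
    storage: ${size}
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeHandle: ${app}-restored
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
EOF
  done
}

gen_pvs    # review the output, then: gen_pvs | kubectl apply -f -
```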

Step 6: Create PersistentVolumeClaims

Bind each PVC to the specific PV using volumeName:

# Repeat for each application
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: longhorn
  volumeName: jellyfin-restored-pv
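The same generator approach works for the PVCs. The claim names must match what each chart expects (the loki and prometheus names come from Step 3); the PV names in the last column assume the `<app>-restored-pv` convention from Step 5 — adjust both to your cluster:

```shell
#!/bin/sh
# Sketch: emit one PVC per app, bound to its restored PV via volumeName.
# List fields: claim-name : namespace : size : pv-name
gen_pvcs() {
  while IFS=: read -r pvc ns size pv; do
    cat <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${pvc}
  namespace: ${ns}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: ${size}
  storageClassName: longhorn
  volumeName: ${pv}
EOF
  done <<'LIST'
jellyfin:default:10Gi:jellyfin-restored-pv
qbittorrent:default:5Gi:qbittorrent-restored-pv
sonarr:default:5Gi:sonarr-restored-pv
grafana:observability:10Gi:grafana-restored-pv
storage-loki-0:observability:20Gi:loki-restored-pv
prometheus-kube-prometheus-stack-db-prometheus-kube-prometheus-stack-0:observability:45Gi:prometheus-restored-pv
LIST
}

gen_pvcs   # review the output, then: gen_pvcs | kubectl apply -f -
```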

Step 7: Verify Binding

kubectl get pvc -n default | grep -E "(jellyfin|qbittorrent|sonarr)"
kubectl get pvc -n observability | grep -E "(grafana|storage-loki|prometheus)"
kubectl get volumes -n longhorn-system | grep "restored"

All PVCs should show Bound and Longhorn volumes should show attached and healthy.

Step 8: Scale Applications Back Up

kubectl scale deployment jellyfin --replicas=1 -n default
kubectl scale deployment qbittorrent --replicas=1 -n default
kubectl scale deployment sonarr --replicas=1 -n default
kubectl scale deployment grafana --replicas=1 -n observability
kubectl scale statefulset loki --replicas=1 -n observability
kubectl scale statefulset prometheus-kube-prometheus-stack --replicas=1 -n observability
kubectl scale statefulset alertmanager-kube-prometheus-stack --replicas=1 -n observability

Step 9: Verify

kubectl get pods -A | grep -v Running | grep -v Completed
kubectl get volumes -n longhorn-system | grep "restored"
kubectl get pods -n default -o wide
kubectl get pods -n observability -o wide

Alternative: CLI Restore via CRD

If the UI isn’t available or you want to automate:

apiVersion: longhorn.io/v1beta1
kind: Volume
metadata:
  name: jellyfin-restored
  namespace: longhorn-system
spec:
  size: "10737418240"
  restoreVolumeRecurringJob: false
  fromBackup: "s3://your-minio-bucket/backups/backup-name"
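One gotcha with the CRD route: `spec.size` is a plain byte count in a string, not a Kubernetes quantity like `10Gi`. A quick shell helper converts GiB to the byte string (10 GiB = 10737418240, matching the example above):

```shell
# Convert GiB to the byte count Longhorn's Volume CRD expects in spec.size.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 10   # 10737418240  (jellyfin, grafana)
gib_to_bytes 45   # 48318382080  (prometheus)
gib_to_bytes 5    # 5368709120   (qbittorrent, sonarr)
```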

The full process for 6 applications took about 30 minutes.

Update (October 2025)

Since writing this, I’ve migrated backup storage from MinIO to Garage after MinIO discontinued their Docker images. The restoration process above works identically with Garage as the backend. See Migrating Longhorn Backup from MinIO to Garage for details.
