
Deploying a Kubernetes-Based Media Server

Merox · HPC Sysadmin
6 min read · Kubernetes · Intermediate

This is the setup I use to run a full media automation stack on my Kubernetes cluster. It covers Jellyfin for streaming, Radarr and Sonarr for media management, Prowlarr for indexers, qBittorrent for downloads, and Gluetun for VPN tunneling. Config lives on Longhorn; media is stored on a Synology DS223 NAS mounted over NFS 4.1 as a PersistentVolume.

Components

| Application | Role |
| --- | --- |
| Jellyfin | Media streaming |
| Radarr | Movie library management |
| Sonarr | TV show management |
| Prowlarr | Torrent indexer manager |
| qBittorrent | Download client |
| Gluetun | VPN sidecar for qBittorrent |
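
Every manifest in this guide targets the `media` namespace, so create it first if it doesn't already exist. A minimal Namespace manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: media
```

Apply it with `kubectl apply -f`, or simply run `kubectl create namespace media`.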

Synology NAS NFS Setup

If you’re using a Synology NAS, this is the NFS share rule I use — applied before mounting on the Kubernetes side.

NFS rule configuration on Synology NAS
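
The screenshot above shows the rule as configured in DSM's UI. In classic exports(5) terms, an equivalent rule would look something like the following sketch; the client subnet and squash settings are placeholders, so match them to your own network:

```
/volume1/server/k3s/media    10.0.0.0/8(rw,sync,no_root_squash)
```

All worker nodes that can schedule media pods must be allowed by this rule.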

PersistentVolumes and PVCs

Media Storage

Create nfs-media-pv-and-pvc.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-videos
spec:
  capacity:
    storage: 400Gi
  accessModes:
    - ReadWriteMany # the NFS share is mounted by Jellyfin, Sonarr, and Radarr
  nfs:
    path: /volume1/server/k3s/media
    server: storage.merox.cloud
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  storageClassName: ""
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-videos
  namespace: media
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 400Gi
  volumeName: jellyfin-videos
  storageClassName: ""
kubectl apply -f nfs-media-pv-and-pvc.yaml

Download Storage

Create nfs-download-pv-and-pvc.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: qbitt-download
spec:
  capacity:
    storage: 400Gi
  accessModes:
    - ReadWriteMany # the share is mounted by qBittorrent, Sonarr, and Radarr
  nfs:
    path: /volume1/server/k3s/media/download
    server: storage.merox.cloud
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  storageClassName: ""
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qbitt-download
  namespace: media
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 400Gi
  volumeName: qbitt-download
  storageClassName: ""
kubectl apply -f nfs-download-pv-and-pvc.yaml

Longhorn PVC for App Configs

Create app-config-pvc.yaml (repeat for each app):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config # e.g. radarr-config; must match the claimName in the Deployment
  namespace: media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
kubectl apply -f app-config-pvc.yaml
Danger

You need a separate PVC for each application: Jellyfin, Sonarr, Radarr, Prowlarr, and qBittorrent.
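
The Deployments below expect the claim names jellyfin-config, sonarr-config, radarr-config, prowlarr-config, and qbitt-config. A small sketch that stamps out one manifest per app from the template (the output filenames are my own convention):

```shell
# Generate one Longhorn config PVC manifest per application.
# The claim names must match the claimName values used in the Deployments.
for app in jellyfin sonarr radarr prowlarr qbitt; do
  cat > "${app}-config-pvc.yaml" <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${app}-config
  namespace: media
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
EOF
done
```

Then apply them all at once with `kubectl apply -f .` (or one file at a time).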

Deployments

Jellyfin

Create jellyfin-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin
          volumeMounts:
            - name: config
              mountPath: /config
            - name: videos
              mountPath: /data/videos
          ports:
            - containerPort: 8096
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: jellyfin-config
        - name: videos
          persistentVolumeClaim:
            claimName: jellyfin-videos
kubectl apply -f jellyfin-deployment.yaml

Sonarr

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarr
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
        - name: sonarr
          image: lscr.io/linuxserver/sonarr
          env:
            - name: PUID
              value: "1057"
            - name: PGID
              value: "1056"
          volumeMounts:
            - name: config
              mountPath: /config
            - name: videos
              mountPath: /tv
            - name: downloads
              mountPath: /downloads
          ports:
            - containerPort: 8989
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: sonarr-config
        - name: videos
          persistentVolumeClaim:
            claimName: jellyfin-videos
        - name: downloads
          persistentVolumeClaim:
            claimName: qbitt-download

Radarr

apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: radarr
  template:
    metadata:
      labels:
        app: radarr
    spec:
      containers:
        - name: radarr
          image: lscr.io/linuxserver/radarr
          env:
            - name: PUID
              value: "1057"
            - name: PGID
              value: "1056"
          volumeMounts:
            - name: config
              mountPath: /config
            - name: videos
              mountPath: /movies
            - name: downloads
              mountPath: /downloads
          ports:
            - containerPort: 7878
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: radarr-config
        - name: videos
          persistentVolumeClaim:
            claimName: jellyfin-videos
        - name: downloads
          persistentVolumeClaim:
            claimName: qbitt-download

Prowlarr

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prowlarr
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prowlarr
  template:
    metadata:
      labels:
        app: prowlarr
    spec:
      containers:
        - name: prowlarr
          image: lscr.io/linuxserver/prowlarr
          env:
            - name: PUID
              value: "1057"
            - name: PGID
              value: "1056"
          volumeMounts:
            - name: config
              mountPath: /config
          ports:
            - containerPort: 9696
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: prowlarr-config

qBittorrent (standalone)

Warning

qBittorrent v5 renamed the API endpoints /torrents/pause and /torrents/resume to /torrents/stop and /torrents/start. If you use any scripts or integrations that call the qBittorrent API directly, update them before upgrading from v4.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        - name: qbittorrent
          image: lscr.io/linuxserver/qbittorrent
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "512Mi"
          env:
            - name: PUID
              value: "1057"
            - name: PGID
              value: "1056"
          volumeMounts:
            - name: config
              mountPath: /config
            - name: downloads
              mountPath: /downloads
          ports:
            - containerPort: 8080
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: qbitt-config
        - name: downloads
          persistentVolumeClaim:
            claimName: qbitt-download

qBittorrent with Gluetun

If you want to route qBittorrent traffic through a VPN, use this version instead — Gluetun runs as a sidecar and shares the network namespace.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: qbittorrent
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: qbittorrent
  template:
    metadata:
      labels:
        app: qbittorrent
    spec:
      containers:
        - name: qbittorrent
          image: lscr.io/linuxserver/qbittorrent
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "512Mi"
          env:
            - name: PUID
              value: "1057"
            - name: PGID
              value: "1056"
          volumeMounts:
            - name: config
              mountPath: /config
            - name: downloads
              mountPath: /downloads
          ports:
            - containerPort: 8080
        - name: gluetun
          image: ghcr.io/qdm12/gluetun:v3.40.0
          env:
            - name: VPN_SERVICE_PROVIDER
              value: "surfshark"
            - name: VPN_TYPE
              value: "wireguard"
            - name: SERVER_COUNTRIES
              value: "Netherlands"
            - name: WIREGUARD_ADDRESSES
              value: "10.14.0.2/16" # from SurfShark WireGuard config — Address field
            - name: FIREWALL_INPUT_PORTS
              value: "50413,8080" # torrent port + web UI port
            - name: FIREWALL_OUTBOUND_SUBNETS
              value: "10.0.0.0/8"
            - name: DNS_KEEP_NAMESERVER
              value: "on"
            - name: DOT
              value: "off"
            - name: WIREGUARD_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: surfshark-secret
                  key: WIREGUARD_PRIVATE_KEY
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - name: tun
              mountPath: /dev/net/tun
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: qbitt-config
        - name: downloads
          persistentVolumeClaim:
            claimName: qbitt-download
        - name: tun
          hostPath:
            path: /dev/net/tun
Note

I use SurfShark with WireGuard, which is faster than OpenVPN and natively supported by Gluetun. Generate your WireGuard key from the SurfShark dashboard under VPN → Manual Setup → WireGuard. SurfShark does not support port forwarding, so peers cannot initiate inbound connections; downloads still work fine but may be slower when few connectable peers are available.
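
The Gluetun container above reads WIREGUARD_PRIVATE_KEY from a Secret named surfshark-secret that isn't shown elsewhere in this post, so create it before deploying. A manifest along these lines works (the key value is a placeholder; paste your own):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: surfshark-secret
  namespace: media
type: Opaque
stringData:
  WIREGUARD_PRIVATE_KEY: <your-wireguard-private-key>
```

Alternatively, create it imperatively: `kubectl create secret generic surfshark-secret -n media --from-literal=WIREGUARD_PRIVATE_KEY='<your-wireguard-private-key>'`.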

ClusterIP Services

Each app needs a ClusterIP Service so Traefik can route to it internally. Create app-service.yaml per application, setting targetPort to the app's container port: 8096 for Jellyfin, 8989 for Sonarr, 7878 for Radarr, 9696 for Prowlarr, and 8080 for qBittorrent.

apiVersion: v1
kind: Service
metadata:
  name: app # radarr for example
  namespace: media
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 7878
  selector:
    app: app # radarr for example
kubectl apply -f app-service.yaml

Traefik Middleware

Create default-headers-media.yaml:

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: default-headers-media
  namespace: media
spec:
  headers:
    browserXssFilter: true
    contentTypeNosniff: true
    forceSTSHeader: true
    stsIncludeSubdomains: true
    stsPreload: true
    stsSeconds: 15552000
    customFrameOptionsValue: SAMEORIGIN
    customRequestHeaders:
      X-Forwarded-Proto: https
kubectl apply -f default-headers-media.yaml

IngressRoutes

Create app-ingress-route.yaml per application:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: app # radarr for example
  namespace: media
  annotations:
    kubernetes.io/ingress.class: traefik-external
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`movies.merox.cloud`) # change to your domain
      kind: Rule
      services:
        - name: app # radarr for example
          port: 80
      middlewares:
        - name: default-headers-media
  tls:
    secretName: mycert-tls # change to your cert name
kubectl apply -f app-ingress-route.yaml
Danger

Add the hostname declared in your IngressRoute to your DNS server before applying.
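
For example, on a local DNS server this is just an A (or CNAME) record pointing the IngressRoute host at Traefik's LoadBalancer IP; both values below are placeholders:

```
movies.merox.cloud.  300  IN  A  192.168.1.240
```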

Manifest Files

All manifests are available here — copy and deploy what you need:

All manifest files 🔗
