When MinIO recently announced they’re discontinuing Docker images and moving to a source-only distribution model, many homelab users were left scrambling for alternatives. With hundreds of thousands of installations potentially running outdated versions with known CVEs, it was time to find a more reliable solution.
Enter Garage - a self-hosted, S3-compatible, distributed object storage service that’s perfect for Kubernetes backup storage.
Note
If you’re looking for information on how to restore from Longhorn backups after migrating to Garage, check out my previous guide on Restoring from Longhorn Backups. The restoration process remains identical regardless of whether you’re using MinIO or Garage as your S3 backend.
Why Garage?
After MinIO’s controversial decision to drop Docker support (Issue #21647), I needed a drop-in S3-compatible replacement that:
- Has active Docker image support
- Is lightweight enough for homelab use
- Provides reliable S3 API compatibility
- Works seamlessly with Longhorn backups
Garage checked all these boxes.
Prerequisites
- Ubuntu server with Docker and Docker Compose
- Kubernetes cluster with Longhorn installed
- Flux CD for GitOps (optional, but recommended)
Setting Up Garage
1. Directory Structure
```shell
sudo mkdir -p /srv/docker/garage/{meta,data}
cd /srv/docker/garage
```

2. Generate Secrets
```shell
# Generate RPC secret for internal communication
openssl rand -hex 32

# Generate admin token for WebUI
openssl rand -hex 32
```

3. Create Configuration
Create /srv/docker/garage/garage.toml:
```toml
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "lmdb"

replication_factor = 1

rpc_bind_addr = "0.0.0.0:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "YOUR_RPC_SECRET_HERE"

[s3_api]
s3_region = "us-east-1"
api_bind_addr = "0.0.0.0:3900"
root_domain = ".s3.garage"

[admin]
api_bind_addr = "0.0.0.0:3903"
admin_token = "YOUR_ADMIN_TOKEN_HERE"
```

Tip
For single-node deployments, replication_factor = 1 is sufficient. Multi-node clusters should use 3 for redundancy.
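As a sketch, the corresponding garage.toml lines for a hypothetical three-node cluster might look like the following; the IP address is a placeholder, and the key point is that rpc_public_addr must be reachable by the other nodes:

```toml
# Multi-node sketch (hypothetical values): keep three copies of every object
replication_factor = 3

# On each node, advertise an address its peers can reach,
# not 127.0.0.1 as in the single-node setup above
rpc_public_addr = "192.168.1.10:3901"
```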
4. Docker Compose Setup
Create /srv/docker/garage/docker-compose.yml:
```yaml
version: "3"

services:
  garage:
    image: dxflrs/garage:v2.1.0
    container_name: garage
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - ./garage.toml:/etc/garage.toml
      - ./meta:/var/lib/garage/meta
      - ./data:/var/lib/garage/data

  webui:
    image: khairul169/garage-webui:latest
    container_name: garage-webui
    restart: unless-stopped
    volumes:
      - ./garage.toml:/etc/garage.toml:ro
    environment:
      API_BASE_URL: "http://127.0.0.1:3903"
      S3_ENDPOINT_URL: "http://127.0.0.1:3900"
    network_mode: "host"
```

5. Start Services
```shell
docker-compose up -d
```

Configuring Garage Storage
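Before initializing the layout, it's worth confirming that both containers came up cleanly; a quick sanity check (requires a running Docker daemon):

```shell
# Confirm the containers are running
docker ps --filter name=garage

# Tail Garage's startup output for config or permission errors
docker logs garage --tail 20
```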
1. Initialize Node Layout
```shell
# Create alias for easier command execution
alias garage="docker exec -ti garage /garage"

# Get node ID
garage node id

# Assign storage capacity (adjust based on your available space)
garage layout assign <node-id> -z default -c 100G

# Apply layout
garage layout show
garage layout apply --version 1
```

2. Create Bucket and Credentials
```shell
# Create bucket for Longhorn backups
garage bucket create longhorn

# Create access key
garage key create longhorn-key

# Grant permissions
garage bucket allow longhorn --read --write --owner --key longhorn-key

# View credentials (note the Key ID and Secret key)
garage key info longhorn-key --show-secret
```

Integrating with Longhorn
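Before creating the Kubernetes secret, you can optionally smoke-test the new key from any machine with the AWS CLI installed. The credential and endpoint values below are placeholders to substitute from the previous step:

```shell
# Placeholder credentials: use the Key ID and Secret from `garage key info`
export AWS_ACCESS_KEY_ID="<Key-ID-from-garage>"
export AWS_SECRET_ACCESS_KEY="<Secret-Key-from-garage>"

# List the bucket through Garage's S3 endpoint; an empty, error-free
# listing means auth and connectivity are working
aws s3 ls s3://longhorn --endpoint-url http://<GARAGE_SERVER_IP>:3900 --region us-east-1
```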
1. Create Kubernetes Secret
Update your SOPS-encrypted secret or create a new one:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: longhorn-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <Key-ID-from-garage>
  AWS_SECRET_ACCESS_KEY: <Secret-Key-from-garage>
  AWS_ENDPOINTS: http://<GARAGE_SERVER_IP>:3900
  AWS_REGION: us-east-1
```

Apply the secret:
```shell
kubectl apply -f minio-secret.yaml
```

2. Update Longhorn HelmRelease
```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: longhorn
  namespace: longhorn-system
spec:
  interval: 1h
  url: https://charts.longhorn.io
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: longhorn
spec:
  interval: 1h
  chart:
    spec:
      chart: longhorn
      version: 1.10.0
      sourceRef:
        kind: HelmRepository
        name: longhorn
        namespace: longhorn-system
  values:
    defaultSettings:
      backupTarget: "s3://longhorn@us-east-1/"
      backupTargetCredentialSecret: "minio-secret"
      backupstorePollInterval: "300"
      # ... other settings
```

Warning
In Longhorn 1.10.0+, backup settings are under defaultSettings, not a separate defaultBackupStore section.
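To confirm the values actually landed once Flux reconciles, one option is to read back Longhorn's Setting custom resource. The resource and setting names below are my assumption and may differ across Longhorn versions:

```shell
# Read back the applied backup target (names may vary by Longhorn version)
kubectl -n longhorn-system get settings.longhorn.io backup-target -o jsonpath='{.value}'
```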
Testing the Setup
1. Verify Connectivity
Access the Garage WebUI at http://<SERVER_IP>:3909 to verify the node is connected and healthy.
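If you'd rather check from the command line, Garage also exposes a health endpoint on the admin API port. As far as I'm aware this endpoint does not require the admin token, but verify the behavior against your Garage version:

```shell
# Liveness check against the admin API (placeholder IP)
curl -s http://<SERVER_IP>:3903/health
```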
2. Test Backup from Longhorn
- Navigate to Longhorn UI → Volume
- Select a volume → Create Backup
- Monitor the backup progress
- Verify the backup appears in the Garage WebUI under the longhorn bucket
3. CLI Verification
```shell
# List buckets
garage bucket list

# View bucket info
garage bucket info longhorn
```

Troubleshooting
WebUI Shows “Unknown Error”
Ensure the [admin] section is present in garage.toml and the container is restarted:
```shell
docker-compose restart
docker logs garage
```

Connection Refused on Port 3903
Check if the admin API is listening:
```shell
netstat -tlnp | grep 3903
```

Longhorn Can’t Connect to Garage
Verify network connectivity from Kubernetes cluster:
```shell
kubectl run -it --rm debug --image=amazon/aws-cli --restart=Never -- \
  s3 ls s3://longhorn --endpoint-url http://<GARAGE_IP>:3900
```

Benefits of This Migration
- Active maintenance: Garage continues to provide Docker images and updates
- Lightweight: Lower resource footprint compared to MinIO
- S3 compatible: Drop-in replacement for most S3 workloads
- Security: Regular updates without the concern of discontinued support
- Community-driven: Open-source project with transparent development
Conclusion
Migrating from MinIO to Garage was straightforward and took less than an hour. With MinIO’s shift to source-only distribution leaving many installations vulnerable, Garage provides a reliable, actively maintained alternative that integrates seamlessly with Longhorn.
The WebUI makes management simple, and the S3 compatibility ensures that existing backup workflows continue working without modification. If you’re running MinIO in a homelab environment, now is the time to consider alternatives before the next CVE drops.