Self-Hosted Alternatives to Managed Kubernetes
Why Replace Managed Kubernetes?
Cost. Managed Kubernetes is expensive. AWS EKS charges $0.10/hour ($73/month) just for the control plane — before any worker nodes. GKE Autopilot charges per pod resource. AKS is “free” for the control plane but you still pay for VMs. A typical 3-node cluster on any cloud provider runs $150-500+/month.
Complexity tax. Cloud Kubernetes adds complexity: IAM roles, VPC networking, load balancer provisioning, storage classes, node groups, auto-scaling policies. You need cloud-specific knowledge on top of Kubernetes knowledge.
Overkill. Most self-hosters run a handful of services — not thousands of microservices. A managed Kubernetes cluster with auto-scaling and multi-AZ redundancy is massive overengineering for running Nextcloud, Immich, and a few other apps.
Vendor lock-in. Cloud-specific features (EBS storage, ALB ingress, IAM OIDC) tie your configs to a specific provider. Self-hosted Kubernetes is portable.
Best Alternatives
k3s — Best Overall
k3s is the obvious choice for self-hosted Kubernetes. It installs in 30 seconds, runs in 512 MB RAM, and is CNCF-certified — every Helm chart and kubectl command works. It bundles Traefik, CoreDNS, Flannel, and a local storage provisioner. One command to install, zero dependencies.
Replaces: EKS, GKE, AKS for self-hosters and small teams
Why it’s better for self-hosting:
- $5-10/month VPS vs $73+/month managed control plane
- Full Kubernetes API compatibility
- Installs in 30 seconds
- Runs on ARM (Raspberry Pi)
- HA mode with 3+ nodes
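The points above translate into a very short setup. A minimal sketch of installing single-node k3s and verifying the cluster, using the official install script (the kubeconfig path and bundled kubectl are standard k3s behavior):

```shell
# Install k3s as a systemd service (single-node server)
curl -sfL https://get.k3s.io | sh -

# kubectl is bundled; confirm the node is Ready and system pods are up
sudo k3s kubectl get nodes
sudo k3s kubectl get pods -n kube-system

# Optional: copy the kubeconfig so plain kubectl and helm can reach the cluster
mkdir -p ~/.kube
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
```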
Read our full guide: How to Self-Host k3s
MicroK8s — Best Addon Ecosystem
MicroK8s is Canonical’s Kubernetes distribution. It installs via snap and adds functionality through addons — `microk8s enable dashboard`, `microk8s enable gpu`, etc. If you want a batteries-included Kubernetes with easy addon management, MicroK8s delivers.
Best for: Ubuntu users who want easy addon management and snap-based updates.
Trade-off: Requires snap. Not available on distros without snap support.
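For comparison, a typical MicroK8s bootstrap looks like this (the addon names shown are real built-in addons; pick the ones you need):

```shell
# Install MicroK8s via snap (--classic grants it the host access it needs)
sudo snap install microk8s --classic

# Block until all core services report ready
sudo microk8s status --wait-ready

# Enable addons: cluster DNS, the dashboard, and local hostpath storage
sudo microk8s enable dns dashboard hostpath-storage

# MicroK8s bundles kubectl under its own command
sudo microk8s kubectl get nodes
```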
Read our full guide: How to Self-Host MicroK8s
Docker Swarm — Simplest Alternative
If you don’t actually need Kubernetes and just want multi-node container orchestration, Docker Swarm is built into Docker. No extra installation, uses Docker Compose files, and handles service discovery and load balancing.
Best for: Users who know Docker and want the simplest path to clustering.
Trade-off: No Helm charts, no operators, smaller ecosystem.
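A sketch of how little Swarm setup involves, assuming an example manager IP of 192.168.1.10 and an existing Compose file (substitute your own):

```shell
# Turn this node into a swarm manager, advertising its reachable IP
docker swarm init --advertise-addr 192.168.1.10

# On each worker, run the join command that `swarm init` prints:
#   docker swarm join --token <token> 192.168.1.10:2377

# Deploy an existing Compose file as a swarm stack
docker stack deploy -c docker-compose.yml myapp

# List services and their replica counts
docker service ls
```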
Read our full guide: How to Set Up Docker Swarm
Nomad — Best Non-Kubernetes Option
If Kubernetes feels like the wrong tool, HashiCorp Nomad offers workload orchestration with a simpler model. It handles containers, VMs, and binaries with HCL configuration files.
Best for: HashiCorp ecosystem users, mixed workloads.
Trade-off: Smaller ecosystem, BSL license.
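A hedged first-run sketch: Nomad can generate a documented example job spec for you, which is the quickest way to see its HCL model (dev mode runs server and client in one process; not for production):

```shell
# Start a throwaway single-node Nomad agent in dev mode
nomad agent -dev &

# Generate a commented example job specification in HCL
nomad job init -short example.nomad.hcl

# Submit the job and inspect its allocations
nomad job run example.nomad.hcl
nomad job status example
```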
Read our full guide: How to Self-Host Nomad
Migration Guide
From EKS/GKE/AKS to k3s
- Export your manifests — `kubectl get all -A -o yaml > cluster-export.yaml`
- Identify cloud-specific resources — replace cloud load balancers with Traefik ingress, cloud storage classes with local-path or Longhorn, cloud IAM with standard RBAC
- Set up k3s — `curl -sfL https://get.k3s.io | sh -`
- Apply manifests — remove cloud annotations, apply to k3s
- Migrate persistent data — export PV data, recreate PVCs on k3s, restore data
- Update DNS — point your domains to the new server
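The persistent-data step is usually the fiddly one. One hedged approach is to pull data out through a pod that mounts the volume with `kubectl cp`; the pod name and path here (nextcloud-0, /var/www/html) are illustrative, substitute your own:

```shell
# On the old cluster: copy the volume's contents out through a running pod
kubectl --context old-cluster cp nextcloud-0:/var/www/html ./nextcloud-data

# On k3s: apply your PVC manifest, start the pod, then copy the data back in
kubectl --context k3s cp ./nextcloud-data nextcloud-0:/var/www/html
```

For large datasets, rsync to the new node's local-path volume directory is faster than kubectl cp, at the cost of needing shell access to both hosts.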
What Needs to Change
| Cloud Feature | Self-Hosted Equivalent |
|---|---|
| Cloud load balancer | Traefik (bundled with k3s) or MetalLB |
| EBS/Persistent Disk | Local-path provisioner (default) or Longhorn |
| IAM OIDC for pods | ServiceAccount tokens + RBAC |
| Cloud DNS integration | External-DNS or manual DNS |
| Cluster auto-scaling | Manual node addition |
| Node groups | k3s agent nodes with labels |
| Container registry | Harbor, Docker Registry, or public registries |
| Cloud monitoring | Prometheus + Grafana (self-hosted) |
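The IAM-to-RBAC row deserves a concrete sketch. Instead of a cloud IAM role, a pod gets a ServiceAccount bound to a namespaced Role; the names below (app-sa, app-reader) are illustrative:

```shell
# ServiceAccount for the workload
kubectl create serviceaccount app-sa -n default

# Role allowed to read ConfigMaps and Secrets in the namespace
kubectl create role app-reader -n default \
  --verb=get,list --resource=configmaps,secrets

# Bind the Role to the ServiceAccount
kubectl create rolebinding app-reader-binding -n default \
  --role=app-reader --serviceaccount=default:app-sa

# Then set spec.serviceAccountName: app-sa in the pod spec
```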
Cost Comparison
| | AWS EKS (3 nodes) | GKE Autopilot | k3s (Self-Hosted) | k3s (Hetzner VPS) |
|---|---|---|---|---|
| Control plane | $73/month | Included in pod cost | Free | Free |
| Compute (3 nodes) | ~$150-300/month | ~$50-200/month | Your hardware ($0) | $15-45/month |
| Load balancer | $16+/month | $18+/month | Traefik (free) | Traefik (free) |
| Storage | ~$10-50/month | ~$10-50/month | Local disk (free) | Local disk (free) |
| Total | $250-440/month | $80-270/month | $0 (own hardware) | $15-45/month |
| Annual | $3,000-5,300 | $960-3,240 | $0 + electricity | $180-540 |
The self-hosted option is 10-30x cheaper for small clusters.
What You Give Up
- Auto-scaling. Cloud Kubernetes scales nodes automatically. Self-hosted requires manual node management.
- Managed upgrades. Cloud providers handle control plane upgrades. With k3s, you run the upgrade yourself (usually one command, but still your responsibility).
- Multi-AZ redundancy. Cloud providers distribute across availability zones. Self-hosted typically runs in one location.
- SLA. Cloud providers typically offer a 99.95% uptime SLA. Self-hosted uptime depends on your infrastructure.
- Integrations. Cloud-native services (managed databases, message queues, etc.) integrate deeply with managed Kubernetes. Self-hosted means running those services yourself too.
For most self-hosters running personal services, these trade-offs are irrelevant. You don’t need multi-AZ redundancy for Nextcloud.
FAQ
Can k3s run the same Helm charts as EKS/GKE/AKS?
Yes. k3s is CNCF-certified Kubernetes — it passes the same conformance tests as managed distributions. Every Helm chart, kubectl command, and Kubernetes manifest that works on EKS/GKE/AKS works on k3s without modification. The only exceptions are charts that depend on cloud-specific resources (cloud load balancers, cloud storage classes, IAM roles for service accounts). Replace those with self-hosted equivalents: Traefik or MetalLB for load balancing, local-path or Longhorn for storage.
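In practice, swapping those resources is often just a matter of chart values. A hedged example using the community Nextcloud chart (the exact value keys vary per chart, so check its values.yaml before copying):

```shell
# Point a chart at k3s equivalents instead of cloud resources:
# local-path instead of EBS/Persistent Disk, Traefik ingress instead of an ALB
helm install nextcloud nextcloud/nextcloud \
  --set persistence.enabled=true \
  --set persistence.storageClass=local-path \
  --set ingress.enabled=true
```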
Is k3s stable enough for production use?
Yes. k3s is maintained by SUSE/Rancher and used in production across thousands of deployments — from edge computing to enterprise environments. It’s the same Kubernetes API with a lighter binary. High availability requires 3+ server nodes with an embedded etcd or external datastore. For personal self-hosting or small teams, a single-node k3s cluster is reliable and easy to maintain. Back up the data directory (/var/lib/rancher/k3s/server) and you can recover from any failure.
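A minimal backup sketch for the default single-node setup (SQLite datastore), stopping k3s briefly for a consistent copy; HA clusters with embedded etcd use the built-in snapshot command instead:

```shell
# Single-node k3s: archive the server state directory
sudo systemctl stop k3s
sudo tar -czf k3s-backup-$(date +%F).tar.gz /var/lib/rancher/k3s/server
sudo systemctl start k3s

# HA clusters with embedded etcd have a built-in snapshot command:
#   sudo k3s etcd-snapshot save
```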
Do I actually need Kubernetes, or is Docker Compose enough?
For most self-hosters, Docker Compose is sufficient. Kubernetes adds value when you need: automatic container rescheduling on failure, rolling updates with zero downtime, multi-node clustering, or you’re running 20+ services that need orchestrated networking. If you’re running 5-10 services on a single server, Docker Compose is simpler and lighter. Kubernetes adds operational complexity — RBAC, networking policies, storage provisioners, ingress controllers — that’s unnecessary for a homelab running Nextcloud and Jellyfin.
Can k3s run on a Raspberry Pi?
Yes. k3s supports ARM64 natively and runs on a Raspberry Pi 4/5 with 2+ GB RAM. A single Pi can run a lightweight k3s cluster with several pods. For a multi-node cluster, use 3 Pis as server nodes and additional Pis as agents. k3s’s low resource footprint (512 MB RAM minimum for the server) makes it practical on ARM hardware. Be aware that Pi storage (SD cards) is a reliability concern — use USB SSD for the k3s data directory to avoid corruption.
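Joining additional Pis as agents takes one command per node. A sketch, with placeholder values for the server IP and token left for you to fill in:

```shell
# On the first Pi (server): install k3s, then read the join token
curl -sfL https://get.k3s.io | sh -
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional Pi (agent): point at the server and pass the token
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-pi-ip>:6443 K3S_TOKEN=<node-token> sh -
```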
How do I handle persistent storage without cloud block storage?
k3s bundles a local-path provisioner that creates PersistentVolumes on the node’s local disk. For redundant storage across multiple nodes, install Longhorn — a sister project from the same maintainers (SUSE/Rancher) that provides replicated block storage across cluster nodes with snapshots and backups. For NFS-based storage, use the NFS CSI driver to mount NFS shares as PersistentVolumes. For single-node clusters, local-path storage is simple and performant — just ensure regular backups.
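Longhorn installs via its official Helm chart (note it requires open-iscsi on each node; the chart repo and namespace below are the ones Longhorn documents):

```shell
# Add the Longhorn chart repo and install into its own namespace
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace

# Longhorn registers a "longhorn" StorageClass that PVCs can request by name
kubectl get storageclass
```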
How do I expose services to the internet without a cloud load balancer?
k3s bundles Traefik as its ingress controller — configure Ingress resources to route traffic to your services. For bare-metal load balancing (assigning external IPs to LoadBalancer services), install MetalLB. Point your domain’s DNS A record to your server’s IP, and Traefik routes HTTPS traffic to the correct pod based on hostname. For automatic SSL certificates, install cert-manager with Let’s Encrypt — it provisions and renews certificates just like managed Kubernetes.
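Both MetalLB and cert-manager install with their official Helm charts. A sketch (the CRD flag name has changed across cert-manager chart versions; older charts use `installCRDs=true`):

```shell
# MetalLB, for assigning external IPs to LoadBalancer services on bare metal
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb \
  --namespace metallb-system --create-namespace
# (then define an IPAddressPool resource with your LAN IP range)

# cert-manager, for automatic Let's Encrypt certificates
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set crds.enabled=true
```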
Is it worth migrating from managed Kubernetes if I already have workloads running?
Depends on your monthly spend. If you’re paying $200+/month for a small cluster (3 nodes, control plane, load balancer), migrating to k3s on Hetzner VPS ($15-45/month) saves $2,000-5,000/year. The migration effort is typically 1-2 days for a standard setup. Export your manifests, replace cloud-specific annotations, set up k3s, and apply. If you’re under $50/month on managed Kubernetes and your time is expensive, the savings may not justify the effort.