It’s been a while on the blog! I promise there will be more regular updates from now on, but maybe not always about tech…
But why
I know what you’re thinking - Kubernetes? On a home server? Who’d be that crazy? Well, a while ago I’d agree but a few things have changed my mind recently.
I’ve started a new job at a small startup that doesn’t have a DevOps team with Kubernetes (K8s from now on) knowledge on board, and even as a long-term K8s hater due to its complexity, I’ve been forced to admit that I miss its programmatic approach to deployments and pod access. There, I’ve said it! Also, I must admit the excitement of taming a beast of such complexity had been calling out to me for a while now. Besides, K8s is eating the world - knowing more about it can’t hurt.
I’m still not a huge fan of K8s, but Docker has kind of imploded and its Swarm project has been long dead, Nomad isn’t much better (or free) and Mesos hasn’t gathered critical mass. Which sadly makes K8s the last production-level container orchestration technology left standing. Don’t take this as an endorsement of it - in IT, we know sometimes success does not equate to quality (see Windows circa 1995). And like I’ve said, it’s way too complex, but recent advancements in tooling have made operating it a much easier endeavor.
As for why I’ll be using it for my personal server, it mostly comes down to reproducibility. My current setup runs around 35 containers for a bunch of services like a wiki, Airsonic music streaming server, MinIO for S3 API compatible storage and a lot more, plus Samba and NFS servers which are consumed by Kodi on my Shield TV and our four work PCs/laptops at home.
I’ve been content running all of this on OpenMediaVault for close to 5 years now, but its pace of development has slowed and, being Debian-based, it suffers from the “release” problem: every time Debian goes up a major version, things inevitably break for a while. I’ve lived with it since Debian 8 or 9, but the recent release of 11 broke stuff heavily and it’s time for a change. I also suspect the slowing of OpenMediaVault’s development is due to the increased adoption of K8s by typical nerdy NAS owners, given the number of “easy” K8s templates on GitHub and the Discord servers dedicated to them. Trusting templates that throw the kitchen sink at a problem is not really my style, and if I’m to trust something with managing my home server, I insist on understanding it.
Still, it’s not like the server is a time sink right now - updates are automated and I rarely have to tweak it once set up. Currently it has 161 days of uptime! However, reproducing my setup would be mostly a manual affair. Reinstall OpenMediaVault, add the ZFS plugin, import my 4-disk ZFS pool, configure the Samba and NFS shares, reinstall Portainer, re-import all of my docker-compose files… it’s a bit much, and mostly manual. Since K8s manages state for a cluster, it should (in theory) be super simple to just reinstall my server, add ZFS support, import the pool, run a script that recreates all deployments and voila! In theory.
But hold on. If you’re totally new to it - just what is Kubernetes?
Brief(ish) overview of Kubernetes
Kubernetes (Greek for “helmsman”) is a container orchestration product originally created at Google. However, they don’t use it much internally, which gives credence to the theory that it’s an elaborate Trojan horse to make sure no rival company will ever challenge them in the future, because they’ll be spending all of their time managing the thing (it’s notoriously complex). 😄
In a nutshell, you install it on a server or more likely a cluster, and can then deploy different types of workloads to it. It takes care of spawning containers for you, scaling them, namespacing them, managing network access rules, you name it. You mainly interact with it by writing YAML files and then applying them to the cluster, usually with a CLI tool called kubectl that validates and transforms the YAML into a JSON payload which is then sent to the cluster’s REST API endpoint.
There are many concepts in K8s, but I’ll just go over the main ones:
- Pods: the basic work unit, roughly a single container or a set of containers. The containers in a Pod are guaranteed to run on the same node of the cluster. K8s assigns the IPs for Pods, so you don’t have to manage them. Containers inside a Pod can all reach each other, but not containers running in other Pods. You shouldn’t manage Pods directly though, that’s the job of Services.
- Services: entry points to sets of Pods, which make it easier to manage them as a single unit. They don’t manage Pods directly (they use ReplicaSets), but you don’t even need to know what a ReplicaSet is most of the time. Services identify the Pods they control with labels.
- Labels: every object in K8s can have metadata attached to it. Labels are one form of it, annotations are another. Most actions in K8s can be scoped with a selector that targets a specific label being present.
- Volumes: just like in Docker, volumes connect containers to storage. For production use you’d have S3 or something with similar guarantees, but for this home server use case we’ll just use hostPath type mounts that map directly to folders on the server. K8s complicates this a bit for most cases - you need a PersistentVolume (PV) declared, and a PersistentVolumeClaim (PVC) to actually access it. You could just use direct hostPath mounts on a Deployment, but PVs and PVCs give you more control over the volume’s use.
- Deployments: the main work units. They declare what Docker image to use, which service the Deployment is part of (via labels), which volumes to mount and ports to expose, and optional security settings.
- ConfigMaps: where you store configuration data in key-value form. The environment for a Deployment can be taken from a ConfigMap - all of it, or specific keys.
- Ingresses: without these, your Pods will be running but not exposed to the outside world. We’ll be using nginx ingresses in this blog post.
- Jobs and CronJobs: one-off or recurring workloads.
There are more concepts to master, and third-party tools can add to this list and extend a K8s cluster with custom objects called CRDs. The official docs are a good place to learn more. For now, these will get us a long way towards a capable working example.
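To make that vocabulary a bit more concrete before we start, here’s a minimal sketch that ties Deployments, Services and labels together. The app name “hello” and the nginx:alpine image are placeholders for illustration, not something we’ll deploy later.

# minimal sketch - "hello" is a made-up app, purely illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello          # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello            # the Service routes traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 80

Applying a file like this with kubectl apply -f hello.yaml gets you a running Pod plus a stable in-cluster name (hello) for reaching it - the same label/selector pattern we’ll use for every service below.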
Let’s do it!
Step 1 - Linux installation
To get started, I recommend you use VirtualBox (it’s free) and install a basic Debian 11 VM with no desktop, just running OpenSSH. Other distros might work, but I did most of the testing on Debian. I plan to move to Arch in the future to avoid the “release problem”, but one change at a time. 😓 After you master the VM setup, moving to a physical server should pose no issue.
To avoid redoing the install from scratch when you make a mistake (I made a lot of them until I figured it all out), you might want to clone the installation VM into a new one. That way, you can just delete the broken clone, clone the master again and retry. You can also use snapshots on the master VM, but I found cloning to be more intuitive.
Make sure your cloned VM’s network adapter is set to Bridged and has the same MAC address as the master VM, so you get the same IP all the time. It will also make forwarding ports on your home router easier.
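If you prefer to script this instead of clicking through the VirtualBox UI, something like the following should do it - the VM name, host interface and MAC address here are placeholders for your own values:

# sketch: replace "k3s-clone", "eth0" and the MAC with your own values
VBoxManage modifyvm "k3s-clone" --nic1 bridged --bridgeadapter1 eth0
# copy the MAC address from the master VM (12 hex digits, no separators)
VBoxManage modifyvm "k3s-clone" --macaddress1 080027AABBCC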
Make sure to route the following ports on your home router to the VM’s IP address:
- 80/TCP http
- 443/TCP https
You must also forward the following if you’re not in the same LAN as the VM or if you’re using a remote (DigitalOcean, Amazon, etc.) server:
- 22/TCP ssh
- 6443/TCP K8s API
- 10250/TCP kubelet
Before proceeding, make sure you add your SSH key to the server, and that when you SSH into it you get a root shell without a password prompt. If you add:
Host k3s
    User root
    Hostname <your VM or server's IP>
to your .ssh/config file, then ssh k3s should drop you into the aforementioned root prompt.
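If you haven’t copied a key over yet, ssh-copy-id is the quickest way (assuming password logins for root are temporarily allowed on the VM):

# generate a key if you don't have one, then push it to the VM
ssh-keygen -t ed25519
ssh-copy-id root@<your VM or server's IP>
ssh k3s   # should now give you a root shell with no password prompt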
At this point, you should install kubectl too. I recommend the asdf plugin.
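With asdf, installing kubectl looks roughly like this (a sketch, assuming you already have asdf itself set up):

# install kubectl via the asdf-kubectl plugin
asdf plugin add kubectl
asdf install kubectl latest
asdf global kubectl latest
kubectl version --client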
Step 2 - k3s installation
Full-blown Kubernetes is complex and heavy on resources, so we’ll be using a lightweight alternative called K3s, a nimble single-binary solution that is 100% compatible with normal K8s.
To install K3s, and to interact with our server, I’ll be using a Makefile (old-school, that’s how I roll). At the top, a few variables for you to fill in:
# set your host IP and name
HOST_IP=192.168.1.60
HOST=k3s
# do not change the next line
KUBECTL=kubectl --kubeconfig ~/.kube/k3s-vm-config
The IP is simple enough to set, and HOST is the label for the server in the .ssh/config file as above. It’s just easier to use than user@HOST_IP, but feel free to modify the Makefile as you see fit. The KUBECTL variable will make more sense once we install K3s. Add the following target to the Makefile:
k3s_install:
ssh ${HOST} 'export INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik --no-deploy local-storage"; \
curl -sfL https://get.k3s.io | sh -'
scp ${HOST}:/etc/rancher/k3s/k3s.yaml .
sed -r 's/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b'/"${HOST_IP}"/ k3s.yaml > ~/.kube/k3s-vm-config && rm k3s.yaml
OK, a few things to unpack. The first line SSHs into the server and installs K3s, skipping a few components:
- servicelb: we don’t need load balancing on a single server
- traefik: we’ll be using nginx for ingresses, so no need to install this ingress controller
- local-storage: we’ll be using hostPath mounts, so no need for this component either
The second line copies over the k3s.yaml file that the installation creates on the server, which includes a certificate for contacting its API. The third line replaces the 127.0.0.1 IP in this local copy of the file with the server’s IP, and writes it to the .kube folder in your $HOME (make sure it exists). This is where kubectl will pick it up, since we’ve pointed the KUBECTL variable in the Makefile at this file explicitly.
This is the expected output:
ssh k3s 'export INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik --no-deploy local-storage"; \
curl -sfL https://get.k3s.io | sh -'
[INFO] Finding release for channel stable
[INFO] Using v1.21.7+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.21.7+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.21.7+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
scp k3s:/etc/rancher/k3s/k3s.yaml .
k3s.yaml 100% 2957 13.1MB/s 00:00
sed -r 's/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b'/"YOUR HOST IP HERE"/ k3s.yaml > ~/.kube/k3s-vm-config && rm k3s.yaml
I assume your distro has sed installed, as most do. To test that everything is working, a simple kubectl --kubeconfig ~/.kube/k3s-vm-config get nodes should now yield:
NAME STATUS ROLES AGE VERSION
k3s-vm Ready control-plane,master 2m4s v1.21.7+k3s1
Our K8s cluster is now ready to receive workloads!
Step 2.5 - Clients (optional)
If you want to have a nice UI to interact with your K8s setup, there are two options.
- k9s (CLI) I quite like it, very easy to work with and perfect for remote setups
- Lens (GUI) My go-to recently, I quite like the integrated metrics
They should just pick up the presence of our cluster settings in ~/.kube.
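Since our kubeconfig has a non-default name, you may need to point these tools at it explicitly, either via the KUBECONFIG environment variable or a flag:

# either export it for the session...
export KUBECONFIG=~/.kube/k3s-vm-config
k9s
# ...or pass it directly; Lens lets you add the kubeconfig file through its UI
k9s --kubeconfig ~/.kube/k3s-vm-config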
Step 3 - nginx ingress, Let’s Encrypt and storage
The next target on our Makefile will install the nginx ingress controller and Let’s Encrypt certificate manager, so our deployments can have valid TLS certificates (for free!). There’s also a default storage class, so that workloads without one set can use our default.
base:
${KUBECTL} apply -f k8s/ingress-nginx-v1.0.4.yml
${KUBECTL} wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=60s
${KUBECTL} apply -f k8s/cert-manager-v1.0.4.yaml
${KUBECTL} apply -f stacks/default-storage-class.yaml
@echo
@echo "waiting for cert-manager pods to be ready... "
${KUBECTL} wait --namespace=cert-manager --for=condition=ready pod --all --timeout=60s
${KUBECTL} apply -f k8s/lets-encrypt-staging.yml
${KUBECTL} apply -f k8s/lets-encrypt-prod.yml
You can find the files I’ve used in this gist. The nginx ingress YAML is sourced from this Github link but with a single modification on line 323:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
so that we can use DNS properly for our single server use case. More info here.
The cert-manager file is too big to go over here - feel free to consult the docs for it. As for the storage class file, it’s quite simple:
# stacks/default-storage-class.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
More info on StorageClass resources.
To actually issue the Let’s Encrypt certificates, we need a ClusterIssuer object defined. We’ll use two: one for the staging API and one for the production one. Use the staging issuer for experiments, since you won’t get rate-limited, but bear in mind the certificates won’t be valid. Be sure to replace the email address in both issuers with your own.
# k8s/lets-encrypt-staging.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: cert-manager
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: YOUR.EMAIL@DOMAIN.TLD
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
class: nginx
# k8s/lets-encrypt-prod.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: cert-manager
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: YOUR.EMAIL@DOMAIN.TLD
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
If we ran all of the kubectl apply statements one after the other, the process would probably fail, since we need the ingress controller to be ready before moving on to the cert-manager one. To that end, kubectl includes a handy wait sub-command that can take conditions and labels (remember those?) and halts the process for us until the components we need are ready. To elaborate on the example from above:
${KUBECTL} wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=60s
This waits until all pods matching the app.kubernetes.io/component=controller selector are in the ready condition, for up to 60 seconds. If the timeout expires, the Makefile will stop. However, don’t worry if any of our Makefile’s targets errors out, since all of them are idempotent. You can run make base multiple times in this case, and if the cluster already has the definitions in place, they’ll just go unchanged. Try it!
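A quick way to confirm that everything from make base landed is to list what’s running in the two namespaces it touches, plus the issuers we just created:

# both controllers should show Running / Ready
kubectl --kubeconfig ~/.kube/k3s-vm-config get pods -n ingress-nginx
kubectl --kubeconfig ~/.kube/k3s-vm-config get pods -n cert-manager
kubectl --kubeconfig ~/.kube/k3s-vm-config get clusterissuers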
Step 4 - Portainer
I still quite like Portainer to manage my server, and as luck would have it, it supports K8s as well as Docker. Let’s go over the relevant bits of its YAML file, piece by piece.
---
apiVersion: v1
kind: Namespace
metadata:
name: portainer
Simple enough: Portainer defines its own namespace.
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: portainer-pv
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/zpool/volumes/portainer/claim"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k3s-vm
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
annotations:
volume.alpha.kubernetes.io/storage-class: "generic"
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "1Gi"
This volume (and its associated claim) is where Portainer stores its config. Notice that the PersistentVolume declaration uses nodeAffinity to match the hostname of the server (or VM). I haven’t found a better way to do this yet.
---
# Source: portainer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: portainer
namespace: portainer
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
type: NodePort
ports:
- port: 9000
targetPort: 9000
protocol: TCP
name: http
nodePort: 30777
- port: 9443
targetPort: 9443
protocol: TCP
name: https
nodePort: 30779
- port: 30776
targetPort: 30776
protocol: TCP
name: edge
nodePort: 30776
selector:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
Here we see the service definition. Notice how the ports are specified (our ingress will use only one of them). Now for the deployment.
---
# Source: portainer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer
namespace: portainer
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
replicas: 1
strategy:
type: "Recreate"
selector:
matchLabels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
template:
metadata:
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
spec:
nodeSelector:
{}
serviceAccountName: portainer-sa-clusteradmin
volumes:
- name: portainer-pv
persistentVolumeClaim:
claimName: portainer
containers:
- name: portainer
image: "portainer/portainer-ce:latest"
imagePullPolicy: Always
args:
- '--tunnel-port=30776'
volumeMounts:
- name: portainer-pv
mountPath: /data
ports:
- name: http
containerPort: 9000
protocol: TCP
- name: https
containerPort: 9443
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 9443
scheme: HTTPS
readinessProbe:
httpGet:
path: /
port: 9443
scheme: HTTPS
resources:
{}
A lot of this file is taken up by metadata labels, but this is what ties the rest together. We see the volume mounts, the Docker image being used, the ports, plus the readiness and liveness probe definitions. They are used by K8s to determine when the pods are ready, and whether they’re still up and responsive, respectively.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portainer-ingress
namespace: portainer
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
rules:
- host: portainer.domain.tld
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portainer
port:
name: http
tls:
- hosts:
- portainer.domain.tld
secretName: portainer-prod-secret-tls
Finally, the ingress, which maps an actual domain name to this service. Make sure you have a domain pointing to your server’s IP, since the Let’s Encrypt challenge resolver depends on it being accessible to the world. In this case, A records pointing to your IP for domain.tld and *.domain.tld would be needed.
Notice how we obtain a certificate - we just need to annotate the ingress with cert-manager.io/cluster-issuer: letsencrypt-prod (or staging) and add the tls key with the host name(s) and the name of the secret where the TLS key will be stored. If you don’t want or need a certificate, just remove the annotation and the tls key.
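Once the ingress is applied, cert-manager creates a Certificate object behind the scenes (typically named after the tls secret, so portainer-prod-secret-tls here). You can watch its progress with something like:

# READY should flip to True once the HTTP-01 challenge succeeds
kubectl --kubeconfig ~/.kube/k3s-vm-config get certificate -n portainer
kubectl --kubeconfig ~/.kube/k3s-vm-config describe certificate -n portainer portainer-prod-secret-tls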
One thing to note here: I’m using Kustomize to manage the YAML files for deployments. This is because another tool, Kompose, outputs a lot of different YAML files when it converts docker-compose files to K8s ones. Kustomize makes it easier to apply them all at once.
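If you want to see the merged output before applying anything, kubectl can render a kustomization without touching the cluster:

# prints the fully merged YAML for the stack to stdout
kubectl kustomize stacks/portainer
# or do a server-side dry run against the cluster
kubectl --kubeconfig ~/.kube/k3s-vm-config apply -k stacks/portainer --dry-run=server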
So here are the files needed for the Portainer deployment:
# stacks/portainer/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- portainer.yaml
# stacks/portainer/portainer.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: portainer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: portainer-sa-clusteradmin
namespace: portainer
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: portainer-pv
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/zpool/volumes/portainer/claim"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k3s-vm
---
# Source: portainer/templates/pvc.yaml
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
annotations:
volume.alpha.kubernetes.io/storage-class: "generic"
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "1Gi"
---
# Source: portainer/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: portainer
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
namespace: portainer
name: portainer-sa-clusteradmin
---
# Source: portainer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: portainer
namespace: portainer
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
type: NodePort
ports:
- port: 9000
targetPort: 9000
protocol: TCP
name: http
nodePort: 30777
- port: 9443
targetPort: 9443
protocol: TCP
name: https
nodePort: 30779
- port: 30776
targetPort: 30776
protocol: TCP
name: edge
nodePort: 30776
selector:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
---
# Source: portainer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer
namespace: portainer
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
replicas: 1
strategy:
type: "Recreate"
selector:
matchLabels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
template:
metadata:
labels:
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
spec:
nodeSelector:
{}
serviceAccountName: portainer-sa-clusteradmin
volumes:
- name: portainer-pv
persistentVolumeClaim:
claimName: portainer
containers:
- name: portainer
image: "portainer/portainer-ce:latest"
imagePullPolicy: Always
args:
- '--tunnel-port=30776'
volumeMounts:
- name: portainer-pv
mountPath: /data
ports:
- name: http
containerPort: 9000
protocol: TCP
- name: https
containerPort: 9443
protocol: TCP
- name: tcp-edge
containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 9443
scheme: HTTPS
readinessProbe:
httpGet:
path: /
port: 9443
scheme: HTTPS
resources:
{}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portainer-ingress
namespace: portainer
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
rules:
- host: portainer.domain.tld
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portainer
port:
name: http
tls:
- hosts:
- portainer.domain.tld
secretName: portainer-prod-secret-tls
And the Makefile target:
portainer:
${KUBECTL} apply -k stacks/portainer
Expected output:
> make portainer
kubectl --kubeconfig ~/.kube/k3s-vm-config apply -k stacks/portainer
namespace/portainer created
serviceaccount/portainer-sa-clusteradmin created
clusterrolebinding.rbac.authorization.k8s.io/portainer created
service/portainer created
persistentvolume/portainer-pv created
persistentvolumeclaim/portainer created
deployment.apps/portainer created
ingress.networking.k8s.io/portainer-ingress created
If you run it again, since it’s idempotent, you should see:
> make portainer
kubectl --kubeconfig ~/.kube/k3s-vm-config apply -k stacks/portainer
namespace/portainer unchanged
serviceaccount/portainer-sa-clusteradmin unchanged
clusterrolebinding.rbac.authorization.k8s.io/portainer unchanged
service/portainer unchanged
persistentvolume/portainer-pv unchanged
persistentvolumeclaim/portainer unchanged
deployment.apps/portainer configured
ingress.networking.k8s.io/portainer-ingress unchanged
Step 5 - Samba share
Running an in-cluster Samba server is easy. Here are our YAMLs:
# stacks/samba/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: smbcredentials
envs:
- auth.env
resources:
- deployment.yaml
- service.yaml
Here we have a kustomization with multiple files. When we apply -k the folder this file is in, they’ll all be merged into one.
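A nice side effect of the secretGenerator: the Secret it creates gets a content-based hash suffix appended to its name, and Kustomize rewrites the references in the Deployment to match, so changing auth.env produces a new Secret and rolls the pod. You can inspect what was generated with:

# the generated name will look like smbcredentials-<hash>; the suffix changes with the file contents
kubectl --kubeconfig ~/.kube/k3s-vm-config get secrets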
The service is simple enough:
# stacks/samba/service.yaml
apiVersion: v1
kind: Service
metadata:
name: smb-server
spec:
ports:
- port: 445
protocol: TCP
name: smb
selector:
app: smb-server
The deployment too:
# stacks/samba/deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: smb-server
spec:
replicas: 1
selector:
matchLabels:
app: smb-server
strategy:
type: Recreate
template:
metadata:
name: smb-server
labels:
app: smb-server
spec:
volumes:
- name: smb-volume
hostPath:
path: /zpool/shares/smb
type: DirectoryOrCreate
containers:
- name: smb-server
image: dperson/samba
args: [
"-u",
"$(USERNAME1);$(PASSWORD1)",
"-u",
"$(USERNAME2);$(PASSWORD2)",
"-s",
# name;path;browsable;read-only;guest-allowed;users;admins;writelist;comment
"share;/smbshare/;yes;no;no;all;$(USERNAME1);;mainshare",
"-p"
]
env:
- name: PERMISSIONS
value: "0777"
- name: USERNAME1
valueFrom:
secretKeyRef:
name: smbcredentials
key: username1
- name: PASSWORD1
valueFrom:
secretKeyRef:
name: smbcredentials
key: password1
- name: USERNAME2
valueFrom:
secretKeyRef:
name: smbcredentials
key: username2
- name: PASSWORD2
valueFrom:
secretKeyRef:
name: smbcredentials
key: password2
volumeMounts:
- mountPath: /smbshare
name: smb-volume
ports:
- containerPort: 445
hostPort: 445
Notice we don’t use a persistent volume and claim here, just a direct hostPath. We set its type to DirectoryOrCreate so that it will be created if not present.
We’re using the dperson/samba Docker image, which allows for on-the-fly configuration of users and shares. Here I specify a single share, with two users (USERNAME1 being the admin of the share). The users and passwords come from a simple env file:
# stacks/samba/auth.env
username1=alice
password1=foo
username2=bob
password2=bar
Our Makefile target for this is simple:
samba:
${KUBECTL} apply -k stacks/samba
Expected output:
> make samba
kubectl --kubeconfig ~/.kube/k3s-vm-config apply -k stacks/samba
secret/smbcredentials-59k7fh7dhm created
service/smb-server created
deployment.apps/smb-server created
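Since the deployment uses hostPort: 445, the share is reachable directly on the server’s IP. To check it from another machine on the LAN, smbclient (or just mounting it from your file manager) should work - adjust the IP and user to your own:

# list the shares exposed by the pod, then open the main one
smbclient -L //192.168.1.60 -U alice
smbclient //192.168.1.60/share -U alice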
Step 6 - BookStack
As an example of using Kompose to convert a docker-compose.yaml app into K8s files, let’s use BookStack, a great wiki app of which I’m a fan.
This is my original docker-compose file for BookStack:
version: '2'
services:
mysql:
image: mysql:5.7.33
environment:
- MYSQL_ROOT_PASSWORD=secret
- MYSQL_DATABASE=bookstack
- MYSQL_USER=bookstack
- MYSQL_PASSWORD=secret
volumes:
- mysql-data:/var/lib/mysql
ports:
- 3306:3306
bookstack:
image: solidnerd/bookstack:21.05.2
depends_on:
- mysql
environment:
- DB_HOST=mysql:3306
- DB_DATABASE=bookstack
- DB_USERNAME=bookstack
- DB_PASSWORD=secret
volumes:
- uploads:/var/www/bookstack/public/uploads
- storage-uploads:/var/www/bookstack/storage/uploads
ports:
- "8080:8080"
volumes:
mysql-data:
uploads:
storage-uploads:
Using Kompose is easy:
> kompose convert -f bookstack-original-compose.yaml
WARN Unsupported root level volumes key - ignoring
WARN Unsupported depends_on key - ignoring
INFO Kubernetes file "bookstack-service.yaml" created
INFO Kubernetes file "mysql-service.yaml" created
INFO Kubernetes file "bookstack-deployment.yaml" created
INFO Kubernetes file "uploads-persistentvolumeclaim.yaml" created
INFO Kubernetes file "storage-uploads-persistentvolumeclaim.yaml" created
INFO Kubernetes file "mysql-deployment.yaml" created
INFO Kubernetes file "mysql-data-persistentvolumeclaim.yaml" created
Right off the bat, we’re told our volumes and use of depends_on are not supported, which is a bummer. But they’re easy enough to fix. In the interest of brevity, I’ll just post the final result with some notes.
# stacks/bookstack/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bookstack-build.yaml
# stacks/bookstack/bookstack-build.yaml
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: bookstack
name: bookstack
spec:
ports:
- name: bookstack-port
port: 10000
targetPort: 8080
selector:
io.kompose.service: bookstack
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: bookstack-mysql
name: bookstack-mysql
spec:
ports:
- name: bookstack-db-port
port: 10001
targetPort: 3306
selector:
io.kompose.service: bookstack-mysql
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
type: local
name: bookstack-storage-uploads-pv
spec:
storageClassName: local-storage
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/zpool/volumes/bookstack/storage-uploads"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k3s-vm
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
io.kompose.service: bookstack-storage-uploads-pvc
name: bookstack-storage-uploads-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: bookstack-uploads-pv
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/zpool/volumes/bookstack/uploads"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k3s-vm
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
io.kompose.service: bookstack-uploads-pvc
name: bookstack-uploads-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
---
apiVersion: v1
kind: ConfigMap
metadata:
name: bookstack-config
namespace: default
data:
DB_DATABASE: bookstack
DB_HOST: bookstack-mysql:10001
DB_PASSWORD: secret
DB_USERNAME: bookstack
APP_URL: https://bookstack.domain.tld
MAIL_DRIVER: smtp
MAIL_ENCRYPTION: SSL
MAIL_FROM: user@domain.tld
MAIL_HOST: smtp.domain.tld
MAIL_PASSWORD: your_email_password
MAIL_PORT: "465"
MAIL_USERNAME: user@domain.tld
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: bookstack
name: bookstack
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: bookstack
strategy:
type: Recreate
template:
metadata:
labels:
io.kompose.service: bookstack
spec:
securityContext:
runAsUser: 33
runAsGroup: 33
containers:
- name: bookstack
image: reddexx/bookstack:21112
securityContext:
allowPrivilegeEscalation: false
envFrom:
- configMapRef:
name: bookstack-config
ports:
- containerPort: 8080
volumeMounts:
- name: bookstack-uploads-pv
mountPath: /var/www/bookstack/public/uploads
- name: bookstack-storage-uploads-pv
mountPath: /var/www/bookstack/storage/uploads
volumes:
- name: bookstack-uploads-pv
persistentVolumeClaim:
claimName: bookstack-uploads-pvc
- name: bookstack-storage-uploads-pv
persistentVolumeClaim:
claimName: bookstack-storage-uploads-pvc
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: bookstack-mysql-data-pv
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/zpool/volumes/bookstack/mysql-data"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k3s-vm
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
io.kompose.service: bookstack-mysql-data-pvc
name: bookstack-mysql-data-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: bookstack-mysql
name: bookstack-mysql
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: bookstack-mysql
strategy:
type: Recreate
template:
metadata:
labels:
io.kompose.service: bookstack-mysql
spec:
containers:
- env:
- name: MYSQL_DATABASE
value: bookstack
- name: MYSQL_PASSWORD
value: secret
- name: MYSQL_ROOT_PASSWORD
value: secret
- name: MYSQL_USER
value: bookstack
image: mysql:5.7.33
name: mysql
ports:
- containerPort: 3306
volumeMounts:
- mountPath: /var/lib/mysql
name: bookstack-mysql-data-pv
volumes:
- name: bookstack-mysql-data-pv
persistentVolumeClaim:
claimName: bookstack-mysql-data-pvc
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: bookstack-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
rules:
- host: bookstack.domain.tld
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: bookstack
port:
name: bookstack-port
tls:
- hosts:
- bookstack.domain.tld
secretName: bookstack-prod-secret-tls
Kompose converts both containers inside the docker-compose file into services, which is fine, but I’ll leave making this a single service (which is preferable) as an exercise for the reader.
Notice how the config map holds all of the configuration for the app and is then injected into the deployment with:
envFrom:
- configMapRef:
name: bookstack-config
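If you ever need to confirm what actually got injected, you can exec into the running deployment and dump its environment - a handy sanity check when a config value doesn’t seem to take:

# shows the DB_* and MAIL_* values the container actually sees
kubectl --kubeconfig ~/.kube/k3s-vm-config exec deploy/bookstack -- env | grep -E 'DB_|MAIL_'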
You might have noticed the
securityContext:
runAsUser: 33
runAsGroup: 33
segment; this has to do with how the BookStack Docker image is set up. Most images don’t have this issue, but PHP ones are notorious for it. Without the proper permissions on the folder on the server, uploads to the wiki won’t work. But our Makefile accounts for it:
bookstack:
${KUBECTL} apply -k stacks/bookstack
@echo
@echo "waiting for deployments to be ready... "
@${KUBECTL} wait --namespace=default --for=condition=available deployments/bookstack --timeout=60s
@${KUBECTL} wait --namespace=default --for=condition=available deployments/bookstack-mysql --timeout=60s
@echo
ssh ${HOST} chown 33:33 /zpool/volumes/bookstack/storage-uploads/
ssh ${HOST} chown 33:33 /zpool/volumes/bookstack/uploads/
Here we apply the kustomization, but then wait for both deployments to be ready, since that’s when their volume mounts are either bound or created on the server. We then SSH into it to change the volumes’ owner to the correct user and group IDs. Not ideal, but it works. The MySQL image in the deployment doesn’t need this.
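If you’d rather keep this inside the cluster definition instead of an extra ssh step, one common alternative (a sketch, not what the Makefile above does) is an initContainer in the BookStack Deployment’s pod spec that chowns the mounted volumes before the app starts:

# hypothetical addition to the bookstack Deployment's pod spec
initContainers:
  - name: fix-permissions
    image: busybox
    securityContext:
      runAsUser: 0   # override the pod-level runAsUser: 33 so chown is permitted
    command: ["sh", "-c", "chown -R 33:33 /uploads /storage-uploads"]
    volumeMounts:
      - name: bookstack-uploads-pv
        mountPath: /uploads
      - name: bookstack-storage-uploads-pv
        mountPath: /storage-uploads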
Notice also how easy it is to convert the depends_on directive from the docker-compose file, since the pods have access to each other by name, in much the same fashion.
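You can see this name-based discovery in action by resolving the MySQL service from a throwaway pod (the pod name here is arbitrary):

# busybox's nslookup should return the ClusterIP that K8s assigned to bookstack-mysql
kubectl --kubeconfig ~/.kube/k3s-vm-config run -it --rm dns-test --image=busybox --restart=Never -- nslookup bookstack-mysql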
Step 7 - All done!
Full code is available here. Just for completion’s sake, the full Makefile:
# set your host IP and name
HOST_IP=192.168.1.60
HOST=k3s
#### don't change anything below this line!
KUBECTL=kubectl --kubeconfig ~/.kube/k3s-vm-config
.PHONY: k3s_install base bookstack portainer samba
k3s_install:
ssh ${HOST} 'export INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik --no-deploy local-storage"; \
curl -sfL https://get.k3s.io | sh -'
scp ${HOST}:/etc/rancher/k3s/k3s.yaml .
sed -r 's/(\b[0-9]{1,3}\.){3}[0-9]{1,3}\b'/"${HOST_IP}"/ k3s.yaml > ~/.kube/k3s-vm-config && rm k3s.yaml
base:
${KUBECTL} apply -f k8s/ingress-nginx-v1.0.4.yml
${KUBECTL} wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=60s
${KUBECTL} apply -f k8s/cert-manager-v1.0.4.yaml
${KUBECTL} apply -f stacks/default-storage-class.yaml
@echo
@echo "waiting for cert-manager pods to be ready... "
${KUBECTL} wait --namespace=cert-manager --for=condition=ready pod --all --timeout=60s
${KUBECTL} apply -f k8s/lets-encrypt-staging.yml
${KUBECTL} apply -f k8s/lets-encrypt-prod.yml
bookstack:
${KUBECTL} apply -k stacks/bookstack
@echo
@echo "waiting for deployments to be ready... "
@${KUBECTL} wait --namespace=default --for=condition=available deployments/bookstack --timeout=60s
@${KUBECTL} wait --namespace=default --for=condition=available deployments/bookstack-mysql --timeout=60s
@echo
ssh ${HOST} chown 33:33 /zpool/volumes/bookstack/storage-uploads/
ssh ${HOST} chown 33:33 /zpool/volumes/bookstack/uploads/
portainer:
${KUBECTL} apply -k stacks/portainer
samba:
${KUBECTL} apply -k stacks/samba
Conclusion
So, is this for you? It took me a few days to get everything working, and I bumped my head against the monitor a few times, but it gave me a better understanding of how Kubernetes works under the hood, how to debug it, and now with this Makefile it takes me all of 4 minutes to recreate my NAS setup for these 3 apps. I still have a dozen or so to convert from my old docker-compose setup, but it’s getting easier every time.
Kubernetes is interesting, and the abstractions built on top of it keep getting more powerful. There’s stuff like Devtron and Flux that I want to explore in the future as well. Flux in particular is probably my next step, as I’d like to keep everything related to the server in a Git repo I host and have it update when I push new definitions. Or maybe my next step will be trying out NixOS, the reproducible OS, which enables some interesting use cases, like erasing and rebuilding the server on every boot! 😲 Or maybe K3OS, which is optimized for K3s and made by the same people! If only I could get ZFS support for it… 🤔
There’s always something new to learn with a server (or VM) at home! But for now, I’m happy where I got to, and I hope this post has helped you as well. Who knows, maybe those DevOps people are right when they say it is the future - I still think this is the beginning of a simpler solution down the line, for which the basic vocabulary is being established right now with Kubernetes. But at the very least, I hope it doesn’t feel so alien to you anymore!
Feel free to reply with comments or feedback to this tweet
PS: A tip for when you go looking for info online - there are A LOT of articles and tutorials about Kubernetes. A good way to see if the info in them is stale is to check the YAML files. If you see a file that starts with apiVersion: apps/v1alpha1, that v1alpha1 is a dead giveaway that the post you’re reading is quite old, and may not work with current versions of K8s.