From Zero to K3S on Proxmox LXC: Part 2 - Ingress

I want to be able to deploy an HTTP service in my new K3S cluster and access it from anywhere on my home network. There are various ways to do this with Kubernetes but the most flexible is to set up an ingress.

In part 1 of this series, I described how to create a K3S cluster using Proxmox LXC nodes. At the end of that post, I had a cluster up and running, but with one notable limitation: services running inside the cluster weren't accessible from the rest of my network.

In order to make services available outside the cluster, I'll need to set up an Ingress controller and a load balancer - I'm going to use ingress-nginx and MetalLB.

What's an Ingress controller and why do I need one?

An Ingress controller is a specialized load balancer for Kubernetes environments. An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external ones.

Kubernetes Ingress controllers:
- Accept traffic from outside the Kubernetes platform, and load balance it to pods running inside the platform
- Are configured using the Kubernetes API to deploy objects called “Ingress Resources”
- Monitor the pods running in Kubernetes and automatically update the load‑balancing rules when pods are added or removed from a service

(source: nginx.com)

Why NGINX?

K3S comes with Traefik as the default Ingress controller, but I've chosen to use NGINX instead. There's no strong reason for this other than a bit of cargo-culting: most of the articles I followed to get my cluster stood up recommended NGINX Ingress over Traefik as the more standard choice. I don't know if that's strictly true, but since I tried both and could only get NGINX working in my setup, I'm happy to stick with it.

Why MetalLB?

In cloud environments, a Kubernetes ingress can use the provider's out-of-the-box load balancers to assign a new IP address whenever a new ingress resource (service address) is created.

Since my home network doesn't have an on-demand load balancer, I'll need to provide the ingress controller with a software equivalent.

Bare-metal ingress without load balancer (source: kubernetes.github.io)
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

In a Layer 2 configuration of MetalLB together with the NGINX Ingress controller, one node attracts all the traffic for the ingress-nginx Service IP.

(Source: kubernetes.github.io)
Bare-metal ingress with MetalLB (source: kubernetes.github.io)
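
Once everything below is installed, that Layer 2 behaviour can be observed directly from any other machine on the LAN. A rough sketch, assuming 192.168.1.50 is the address MetalLB hands out (depending on the arping variant installed, you may also need -I <interface>):

# The MAC address that replies should belong to whichever cluster
# node MetalLB has elected as leader for this service IP
arping -c 3 192.168.1.50

# Alternatively, ping the address and inspect the local ARP cache
ping -c 1 192.168.1.50 && ip neigh show 192.168.1.50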

Let's install some stuff

💡 Unless otherwise stated, the manifests used in this post can be found in my GitHub repository

MetalLB

I chose to install MetalLB from the official manifest:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

That appeared to be successful:

namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created

I can also verify that MetalLB pods are running in the new metallb-system namespace:

╰─ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-786f9df989-x5rpx   1/1     Running   0          2m
speaker-xfnf7                 1/1     Running   0          2m
speaker-qxmxb                 1/1     Running   0          2m
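
MetalLB's configuration is validated by an admission webhook running in those pods, so applying pool configuration too early can be rejected. To be safe, it's possible to wait for the pods to become ready first (the app=metallb label should match the components from the native manifest):

kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s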

Configuring an IP address pool

Now, for the first bit of actual Kubernetes configuration I've had to do in this series.

I need to tell MetalLB what IP address range it can use, and how to advertise it to other hosts on my network.

The manifest for this configuration is as follows:

# Tell MetalLB what IP address range it can use
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  # Only using a single address for my network
  - 192.168.1.50-192.168.1.50

---

# Tell MetalLB to advertise its IP addresses on the local network.
# This means it will answer ARP requests for the IP addresses in the pool 
# without needing to configure those IP addresses on the nodes themselves.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system

Manifest for configuring MetalLB IP Address Pool (source)

After applying the manifest, I can see the MetalLB resources were created:

╰─ kubectl apply -f 01_metallb-ipaddr-pool-with-l2advertisement.yaml
ipaddresspool.metallb.io/default-pool created
l2advertisement.metallb.io/l2-advertisement created
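
As a sanity check, the new resources can be listed back out:

kubectl -n metallb-system get ipaddresspools.metallb.io,l2advertisements.metallb.io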

Installing ingress-nginx

The good news is that installing ingress-nginx is easy and well documented - the 'Bare Metal' installation is the most applicable for my setup. This time, I'll use a Helm chart instead of a plain manifest file:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --wait

After about 30 seconds, I get some helpful output:

Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Fri Jan  5 14:58:35 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'

And I can see the ingress is using the External IP from MetalLB 🎉:

╰─ kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE    SELECTOR
ingress-nginx-controller   LoadBalancer   10.43.29.231   192.168.1.50   80:32192/TCP,443:30782/TCP   3m2s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
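
Nothing is routed through the ingress yet, so hitting that external IP should just return a 404 from the controller's default backend. That's still a useful smoke test, since it proves traffic reaches ingress-nginx via the MetalLB address:

# Expect an HTTP 404 served by nginx - the controller's default
# backend answering on the load-balanced IP
curl -i http://192.168.1.50/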

Installing cert-manager

Okay, so the ingress controller is working and I can expose services over HTTP. But I'd also like to be able to use HTTPS, which means my Kubernetes cluster will need cert-manager installed.

This can also be installed with Helm:

# Add the Jetstack Helm repository if you haven't already done so
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install the cert-manager Helm chart
# This command also creates some custom resource definitions (CRDs)
# that allow configuring cert-manager via manifests
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.13.3 \
  --set installCRDs=true
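
Since installCRDs=true was set, the new resource types should now be registered with the API server; a quick way to confirm:

kubectl get crds | grep cert-manager.io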

Now, I can verify that cert-manager is running in its new namespace:

kubectl -n cert-manager get pods
NAME                                      READY   STATUS    RESTARTS AGE
cert-manager-cainjector-d7f8b5464-rvtth   1/1     Running   0        15m
cert-manager-57688f5dc6-92jrg             1/1     Running   0        15m
cert-manager-webhook-58fd67545d-ntcjd     1/1     Running   0        15m

I still need to configure a ClusterIssuer (this is why I needed to install the cert-manager CRDs) using a manifest:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}

Manifest for creating cluster issuer (source)

Which results in...

kubectl apply -f 02_certmanager-clusterissuer.yaml
clusterissuer.cert-manager.io/selfsigned created
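
The issuer object reports its own readiness, so it's worth confirming it registered cleanly before using it - the READY column should show True:

kubectl get clusterissuer selfsigned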

OK, cool. But how do we know if it's all working?

I'm going to install a basic application into K3S and expose it via HTTPS on a dedicated hostname to verify that everything I've done so far is working together.

This will involve:

  • Creating a Deployment and Service for a basic web server in a new namespace (testapp)
  • Creating an HTTPS ingress and TLS certificate for testapp.cluster.mydomain.org
  • Adding testapp.cluster.mydomain.org to my DNS server
  • Verifying I can open https://testapp.cluster.mydomain.org from a browser

So first, I need to create the new namespace:

kubectl create namespace testapp

Next, I'll deploy a test application without the ingress:

# Define a deployment in the 'testapp' namespace
# Uses a basic 'nginx' webserver image and listens on port 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: testapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

---

# Define a service for the 'nginx' deployment in the 'testapp' namespace
# The service only needs plain HTTP on port 80 - TLS will be
# terminated at the ingress, not at the pod
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: testapp
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx

Manifest for a basic web server application (source)
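
Before involving the ingress at all, the service can be smoke-tested with a port-forward (the local port 8080 here is an arbitrary choice):

kubectl -n testapp get deploy,svc
kubectl -n testapp port-forward svc/nginx-svc 8080:80

# ...then, from a second terminal, expect the nginx welcome page:
curl -i http://localhost:8080/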

Now, I'll create an Ingress resource and certificate for the new service:

# Request a TLS certificate for the ingress hostname; cert-manager
# will store the signed certificate in the named secret
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: testapp-cluster-mydomain-org-tls
  namespace: testapp
spec:
  commonName: testapp.cluster.mydomain.org
  secretName: testapp-cluster-mydomain-org-tls
  dnsNames:
    - testapp.cluster.mydomain.org
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer

---

# Define an NGINX TLS ingress for the 'nginx-svc' service
# Uses the 'selfsigned' cluster issuer we set up earlier
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testapp-ingress
  namespace: testapp
  annotations:
    cert-manager.io/cluster-issuer: selfsigned
spec:
  # Specifies use of the 'nginx' ingress controller
  ingressClassName: nginx
  rules:

  # Match requests for the service hostname 
  # 'testapp.cluster.mydomain.org' to this ingress service
  - host: testapp.cluster.mydomain.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
  tls:
  - hosts:
    - testapp.cluster.mydomain.org
    secretName: testapp-cluster-mydomain-org-tls

Manifest for creating ingress and self-signed TLS certificate (source)
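
After applying this manifest, cert-manager should issue the certificate and the controller should pick up the ingress. Both can be checked in one go - the certificate's READY column should be True, and the ingress ADDRESS should settle on 192.168.1.50:

kubectl -n testapp get certificate,ingress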

The final piece of the puzzle is to ensure that my ingress hostname testapp.cluster.mydomain.org resolves to the IP address assigned by my load balancer (192.168.1.50).

In my case, this is as simple as adding an entry to the /etc/hosts file for my dnsmasq server and reloading:

127.0.0.1       localhost.localdomain localhost
192.168.1.40    kube-master
192.168.1.41    kube-worker-1

192.168.1.50    testapp.cluster.mydomain.org

/etc/hosts
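
dnsmasq re-reads /etc/hosts when it receives a SIGHUP, so a full restart isn't strictly required (the exact service name may vary by distro):

# On the dnsmasq host - either of these picks up the new entry
sudo killall -HUP dnsmasq
sudo systemctl restart dnsmasq

# From another machine, confirm the name now resolves
dig +short testapp.cluster.mydomain.org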

Does it all work?

If everything has worked so far, I should be able to open https://testapp.cluster.mydomain.org from my laptop browser.
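
Before reaching for the browser, a quick command-line check is possible; curl's -k flag skips certificate verification, which is necessary here because the certificate is self-signed:

curl -kI https://testapp.cluster.mydomain.org

# Inspecting the certificate should show identical subject and issuer -
# the signature of a self-signed cert
openssl s_client -connect testapp.cluster.mydomain.org:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer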

The first time I open it, I'll get a warning because I've configured cert-manager to use a self-signed TLS certificate.

Self-signed certificate warning

But if I accept that warning, I get the sweet, sweet nginx default page served by my test application inside the K3S cluster 🎉:

Success !

Great, so what's next?

Part 1 of this series covered getting a K3S cluster created.

Part 2 covered setting up a load balancer (MetalLB), an Ingress controller (NGINX) and a certificate issuer (cert-manager). Finally, I verified that I can deploy a test application in the cluster and access it via HTTPS.

Next up, in Part 3, I'll install the Kubernetes Dashboard along with an ingress and access tokens so I can administer the cluster.