June 02, 2024

Kubernetes in Digital Ocean

Walking through running a simple web application on Kubernetes in Digital Ocean.

Kubernetes! It's got a lot going on, but nothing you can't handle.

I was playing with a Digital Ocean Kubernetes cluster (the default 3-node cluster is ~$72/mo, which will run a decent number of your typical Laravel apps).

Like other clouds, DO's flavor of K8s is...just K8s (that's a good thing!) and integrates with their other services, such as a managed load balancer. This means we can get traffic into our K8s cluster pretty easily.

So, based on this DO article, here's how I did stuff on K8s on Digital Ocean.

Everything here is updated to the most modern versions (as of this writing), shortened to make a bit more sense (to me), and trimmed to do less work (we skip generating SSL certs).

The Goal

The goal here is to run an application, and get public traffic routed to it.

We need a few things:

  1. A K8s cluster
  2. A way to ingress (get incoming traffic) into the cluster
  3. A way to run web app containers

You'll need kubectl to do anything useful here. It's likely already present if you're using Docker on Mac.
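
If you're not sure it's there, a quick sanity check:

# Print the client version; if this errors,
# install kubectl first
kubectl version --client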

Let's start!

The Cluster

We first create a K8s cluster. This is just clicking around the DO UI. Don't get hung up on the number of Nodes, etc - to play with this, just create the basic 3-node "regular CPU/disk drive" option.

Once you've watched the create progress bar for a few minutes, refresh to find that maybe the progress bar was lying, and then find the newly-appeared buttons to download the configuration (kubeconfig) file.
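
Alternatively, if you have DO's CLI installed and authenticated, doctl can fetch the kubeconfig and merge it into ~/.kube/config for you (the cluster name below is a placeholder - use whatever you named yours):

# Download and merge the cluster's kubeconfig
doctl kubernetes cluster kubeconfig save your-cluster-name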

Then we can confirm that everything is running:

# Normally this is at ~/.kube/config, but
# I'm just using the downloaded file here
export KUBECONFIG=/Users/fideloper/Downloads/fidelopers-cluster-luck-kubeconfig.yaml

# Confirm it works, find all
# Pods running in all namespaces
kubectl get pods -A

# Get all resources in this
# cluster across all namespaces
kubectl get all -A

You'll see a big list of Pods running in your K8s cluster. It's all normal - things that provide networking (Cilium, coredns, kube-proxy), a CSI (container storage interface) to help with attaching storage, things monitoring your cluster, and more things specific to how DO manages your cluster.
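
If you want to poke at those pieces individually, most of them live in the kube-system namespace, and it's worth glancing at the Nodes too:

# Just the system Pods
kubectl get pods -n kube-system

# The 3 worker Nodes (Droplets) DO created for us
kubectl get nodes -o wide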

Ingress

The term "ingress" refers to incoming network traffic. We're going to get public web requests into stuff running in our K8s cluster.

To do that, we'll install Ingress Nginx, which will do 2 things:

  1. Create a DO managed load balancer
  2. Run Nginx within the K8s cluster

These two things work together - the load balancer gets Nginx registered as an endpoint to proxy traffic to, and then Nginx (running within the cluster) can accept web requests and route them to the correct Service (and therefore the correct Pods running your apps).

The Ingress Nginx project provides configuration for the popular flavors of managed (and unmanaged) K8s.

We're using the do flavor here.

# Find the latest release of the controller (not of the Helm chart!) here:
# https://github.com/kubernetes/ingress-nginx/releases
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/do/deploy.yaml

It will take a few moments for the Load Balancer to be created (find it in the Networking tab of your DO console).
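
You can also watch for it from kubectl - the manifest above creates a Service of type LoadBalancer in the ingress-nginx namespace, and its EXTERNAL-IP flips from <pending> to a real IP once the load balancer is ready (the Service name may differ slightly between releases):

# Watch for EXTERNAL-IP to be populated
kubectl get svc -n ingress-nginx ingress-nginx-controller --watch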

We have the cluster and a way to ingress into the cluster. Let's now create the other stuff we need to route traffic to the correct place.

This requires the following:

  1. A Service
  2. A Deployment
  3. An Ingress

A Service exposes a Deployment as something reachable by the network, basically saying (for example) "all requests to this service at this port go to this target".

A Deployment gets some Pods running, with the containers of your choice. This also sets the port the containers are listening on (the aforementioned "target").

An Ingress defines how ingress traffic (external traffic headed into the cluster) is routed to the Service and thus can reach the Pods within that Service.
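
If the YAML coming up looks opaque, kubectl can print documentation for any field of these object types:

# Built-in docs for the fields we're about to use
kubectl explain service.spec
kubectl explain deployment.spec.template
kubectl explain ingress.spec.rules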

The Service + Deployment

We're going to define 2 K8s objects at once here:

  1. a Service
  2. a Deployment

The Service points traffic to the Pods created in the Deployment.

The Deployment creates a ReplicaSet and Pods. A ReplicaSet is a thing that says "keep this many of this Pod running".

The Deployment also helps deploy the Pods (creating and later updating them), using a "rolling" deployment strategy by default.

Here's what it looks like to define them - create file echo.yaml:

apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678

And then apply it:

kubectl apply -f echo.yaml

I've named everything "echo1" (you can duplicate all of this and adjust it to be echo2 to create a second Service, etc).
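
A lazy way to make that second copy is a find/replace over the whole file (this swaps the name everywhere, including the -text argument, which is fine for this demo):

# Duplicate everything as "echo2" and apply it
sed 's/echo1/echo2/g' echo.yaml > echo2.yaml
kubectl apply -f echo2.yaml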

There's a bunch going on, but the Service name being echo1 is the most important in terms of getting external traffic routed to this.

One thing we did is give the Pods the key=value label of app: echo1. That's done within template, which is a template for the Pod spec.

You can create a Pod directly too, but using a Deployment is generally what you want to do, since it provides the ReplicaSet and a deployment strategy. See how we set replicas: 2? There will be 2 Pods of this container running.

(The "selector" stuff is applying the Service and Deployment to those Pods of label app: echo1).

Got it? Me neither. This took me a while. The K8s yaml provides for a lot of use cases, and so its abstractions are a bit odd at first.

We can't route traffic yet

So we can't yet route traffic to this Service from the outside world. However, we can test it out via K8s internal DNS:

# Run a Pod of Ubuntu 24.04 and make some 
# curl requests to the service we just created
# This is just like running a Docker container, e.g.
#   "docker run --rm -it ubuntu:24.04 bash"
kubectl run ubuntu --rm -it --image=ubuntu:24.04 -- bash

> apt-get update && apt-get install -y curl
> curl echo1.default.svc.cluster.local
># echo1

The hostname echo1.default.svc.cluster.local follows the format <service-name>.<namespace>.svc.cluster.local. The HTTP response is just "echo1".
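
A small aside: thanks to the Pod's DNS search domains, the shorter names work from inside that Ubuntu Pod too:

> curl echo1
># echo1

> curl echo1.default
># echo1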

Ingress

We created a service, and we can use internal DNS to send requests to it from within the cluster. However, we want external traffic to reach it through the load balancer!

The flow of traffic will be my laptop -> DO load balancer -> Ingress Nginx -> one of the 2 Pods.

To accomplish that, we need to define an Ingress. The Ingress will tell K8s to use Ingress Nginx to route requests to the correct Service based on hostname (we'll configure it to map hostnames to Services, but you can do other stuff).

Create a new file ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: echo1.example.com
    http:
        paths:
        - pathType: Prefix
          path: "/"
          backend:
            service:
              name: echo1
              port:
                number: 80

And then apply it:

kubectl apply -f ingress.yaml

Since we set the annotation kubernetes.io/ingress.class: "nginx", this refers to a thing called an IngressClass. It's named nginx and was created by Ingress Nginx when we installed it. Its job is to tell Ingress Nginx to reconfigure itself to route requests for the hostname echo1.example.com to the Service named echo1.
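
You can see both pieces from kubectl, and confirm the Ingress picked up the load balancer's address:

# The IngressClass created by Ingress Nginx
kubectl get ingressclass

# Our Ingress, its host rule, and its address
kubectl get ingress echo-ingress
kubectl describe ingress echo-ingress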

My Load Balancer IP address is 143.244.220.120, and at this point I can send requests to it successfully:

curl -H "Host: echo1.example.com" 143.244.220.120
# echo1

# BTW, this won't work since it didn't pass a Host header:
curl 143.244.220.120
# 404

So, that's cool! We can route web traffic to this!
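
If overriding the Host header by hand feels clunky, curl's --resolve flag fakes the DNS lookup instead and gets the same result:

# Pretend echo1.example.com resolves to the load balancer
curl --resolve echo1.example.com:80:143.244.220.120 http://echo1.example.com
# echo1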

At this point I stopped, because adding SSL certificates is a bit boring (which isn't to say it's easy or trivial). It requires a real domain though, and I didn't feel like doing DNS things to make it work. However, the article's steps for using CertManager to manage SSL via LetsEncrypt are good!
