
Solution

CI Week 2

CI/CD - Docker Hub Tags

The GitHub Actions workflow builds two images and pushes them to stensel8/public-cloud-concepts:

| Image | Tag | Pull command |
|---|---|---|
| Bison app | bison | docker pull stensel8/public-cloud-concepts:bison |
| Brightspace app | brightspace | docker pull stensel8/public-cloud-concepts:brightspace |

Docker Hub tags: latest, brightspace, bison
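The publishing step of such a workflow could look roughly like this. This is a hedged sketch: the workflow file name, job layout, build context paths, and secret name are assumptions, not taken from the repository.

```yaml
# .github/workflows/docker.yml (hypothetical file name)
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  images:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        tag: [bison, brightspace]   # one image per app, tagged accordingly
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u stensel8 --password-stdin
      - name: Build and push
        run: |
          docker build -t stensel8/public-cloud-concepts:${{ matrix.tag }} ./${{ matrix.tag }}
          docker push stensel8/public-cloud-concepts:${{ matrix.tag }}
```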


2.2 Kubernetes

Assignment 2.2a - Deployment running

The Week 1 deployment (first-deployment) is running on the kubeadm cluster with both pods active in two regions:

Deployment running - all nodes Ready, pods Running with IPs

NAME               STATUS   ROLES           AGE   VERSION
master-amsterdam   Ready    control-plane   9d    v1.35.1
worker-brussels    Ready    <none>          9d    v1.35.1
worker-london      Ready    <none>          9d    v1.35.1

NAME                                READY   STATUS    IP           NODE
first-deployment-5ffbd9444c-5hkzs   1/1     Running   10.244.2.3   worker-london
first-deployment-5ffbd9444c-s4xdb   1/1     Running   10.244.1.3   worker-brussels
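The Week 1 manifest itself is not reproduced here; a minimal Deployment consistent with the output above (two replicas, pods labeled app: my-container so the Service selector in 2.2c matches them) would look like this. The container name and image tag are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-container
  template:
    metadata:
      labels:
        app: my-container    # the Service selector in 2.2c matches this label
    spec:
      containers:
        - name: app
          image: stensel8/public-cloud-concepts:latest   # assumed image/tag
          ports:
            - containerPort: 80
```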

Assignment 2.2b - Deleting and recreating a pod

A pod was deleted while the Deployment remained active. Kubernetes automatically created a replacement pod with a different IP address, demonstrating that pod IPs are ephemeral.

Pod deleted - new pod created with different IP

# Before deletion:
first-deployment-5ffbd9444c-5hkzs   IP: 10.244.2.3   worker-london

# After deletion - new pod:
first-deployment-5ffbd9444c-pdrw0   IP: 10.244.2.4   worker-london

This is exactly why a Service is needed: pods are disposable and their IPs change.
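The before/after listing above can be reproduced with the commands below (pod name taken from the listing; the replacement pod name and IP will differ per run):

```
kubectl delete pod first-deployment-5ffbd9444c-5hkzs
kubectl get pods -o wide   # the ReplicaSet immediately schedules a replacement with a new IP
```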


Assignment 2.2c - ClusterIP Service

service.yml - the Service uses its selector to target pods carrying the label app: my-container. The first version used type ClusterIP: reachable only from within the cluster, with no external IP:

apiVersion: v1
kind: Service
metadata:
  name: first-service
spec:
  type: ClusterIP        # stable virtual IP, internal only
  selector:
    app: my-container    # connects to pods with this label
  ports:
    - port: 80
      targetPort: 80

ClusterIP service created with stable virtual IP

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
first-service   ClusterIP   10.110.23.98    <none>        80/TCP    0s

The ClusterIP is only reachable from within the cluster. Traffic is load-balanced across all pods matching the label app: my-container.


Assignment 2.2d - ClusterIP reachable from every node

From all three nodes, curl 10.110.23.98 returned the HTML response.

curl via ClusterIP from master

curl via ClusterIP from worker-brussels

curl via ClusterIP from worker-london


Assignment 2.2e - NodePort Service

For external access, the type was changed to NodePort and a fixed port was added:

 spec:
-  type: ClusterIP
+  type: NodePort
   ports:
     - port: 80
       targetPort: 80
+      nodePort: 32490   # fixed port on all nodes (range: 30000-32767)

NodePort service - port 80:32490/TCP

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
first-service   NodePort   10.110.23.98    <none>        80:32490/TCP   8m39s

Looking up internal node IPs:

Internal IP addresses of the nodes

Testing via internal node IP:

curl via internal node IP and NodePort works

External access via NodePort + firewall rule:

GCP blocks incoming traffic by default. A firewall rule was created for TCP port 32490:

Creating a firewall rule in the GCP console
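The rule was created in the console; a hedged CLI equivalent would be something like the following (rule name, network, and source range are assumptions):

```
gcloud compute firewall-rules create allow-nodeport-32490 \
  --network=default \
  --allow=tcp:32490 \
  --source-ranges=0.0.0.0/0
```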

Tested without firewall rule first - blocked:

Browser blocked without firewall rule

After creating the firewall rule, the site works:

Website reachable via external IP and NodePort

kubectl port-forward is a developer tool for local testing, not an external access solution. The tunnel is only reachable on the machine running the command and stops when you press Ctrl+C.

kubectl port-forward active

curl via localhost:8080 works via port-forward
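The tunnel in the screenshots was presumably opened with a command along these lines (local port 8080 taken from the screenshot caption):

```
kubectl port-forward service/first-service 8080:80   # forwards localhost:8080 -> service port 80
curl http://localhost:8080                           # only reachable on the machine running port-forward
```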


Assignment 2.2f - LoadBalancer on the kubeadm cluster

LoadBalancer service created - EXTERNAL-IP pending

LoadBalancer stays in pending status without cloud controller

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
first-service   LoadBalancer   10.110.23.98   <pending>     80:32490/TCP   26m

Why does it stay pending?

A LoadBalancer service asks the cloud controller manager to provision an external load balancer. On a self-managed kubeadm cluster there is no cloud controller manager present - there is no component that can request a GCP load balancer on behalf of Kubernetes. The external IP is never assigned.

| Approach | How | When |
|---|---|---|
| Proper way | GKE: the cloud controller manager automatically provisions a load balancer | Production |
| NodePort + firewall rule | Manually open a GCP firewall rule for the NodePort | Workaround on kubeadm |
| Ingress controller | nginx Ingress Controller routes multiple services via a single external IP | Multiple apps (assignment 2.2h) |

Assignment 2.2g - LoadBalancer on GKE

A GKE cluster week2-cluster was created: e2-medium, 2 nodes, europe-west4-a, Regular release channel.

GKE cluster basic settings

GKE node pool configuration

GKE node machine type e2-medium

GKE week2-cluster provisioning at 33%

gcloud container clusters get-credentials week2-cluster --zone europe-west4-a

GKE cluster connected - two nodes Ready

After applying the Deployment and Service manifests:

LoadBalancer on GKE - external IP assigned after ~44 seconds

NAME            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
first-service   LoadBalancer   34.118.232.196   34.12.127.52    80:31275/TCP   44s

After ~44 seconds GKE had provisioned a Google Cloud Load Balancer and assigned the external IP 34.12.127.52. This is the core difference from the kubeadm cluster.

Website reachable via GKE LoadBalancer external IP


Assignment 2.2h - Ingress: multiple services via one load balancer

Two apps available via one Ingress, each on its own hostname:

| Hostname | Backend service |
|---|---|
| bison.mysaxion.nl | bison-service (port 80) |
| brightspace.mysaxion.nl | brightspace-service (port 80) |

Installing the nginx Ingress Controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml

nginx Ingress Controller - external IP 34.91.190.135 assigned

Deploying the manifests:

Applied files (see GitHub):

| File | Description |
|---|---|
| bison/deployment.yml | 2 replicas, image tag bison |
| bison/service.yml | ClusterIP on port 80 |
| brightspace/deployment.yml | 2 replicas, image tag brightspace |
| brightspace/service.yml | ClusterIP on port 80 |
| ingress.yml | Ingress routing based on the Host HTTP header |
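The full ingress.yml is on GitHub; a sketch consistent with the hostnames, service names, and Ingress name shown in this section could look like this (the exact file contents are an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-saxion
spec:
  ingressClassName: nginx
  rules:
    - host: bison.mysaxion.nl          # Host header routing: bison app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bison-service
                port:
                  number: 80
    - host: brightspace.mysaxion.nl    # Host header routing: brightspace app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: brightspace-service
                port:
                  number: 80
```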

nginx Ingress Controller installed - pods Running

Deployments, services and Ingress applied

NAME             CLASS   HOSTS                                        ADDRESS          PORTS   AGE
ingress-saxion   nginx   bison.mysaxion.nl,brightspace.mysaxion.nl   34.91.190.135    80      25s

Ingress with saxion address and hostnames

Hosts file updated:

Hosts file with bison.mysaxion.nl and brightspace.mysaxion.nl
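Since the mysaxion.nl hostnames are not public DNS records, they are mapped locally to the Ingress controller's external IP. The hosts file entry is presumably a single line like:

```
34.91.190.135  bison.mysaxion.nl  brightspace.mysaxion.nl
```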

bison.mysaxion.nl - Bison application reachable via Ingress

brightspace.mysaxion.nl - Brightspace application reachable via Ingress

Why Ingress?

Without Ingress, each application needs its own LoadBalancer service (its own external IP, its own cost). With Ingress, a single load balancer routes traffic to the correct service based on the Host HTTP header.