Create a Kubernetes cluster with OpenStack Magnum in CityCloud

In this article, you will learn how to create a Kubernetes Cluster with the OpenStack CLI and Magnum in CityCloud.

OpenStack Magnum is an OpenStack project, launched in late 2014, whose goal is to facilitate container orchestration via three different Container Orchestration Engines (COEs): Kubernetes, Docker Swarm, and Apache Mesos.

Of these, Kubernetes has by far the most traction and the greatest practical relevance.


Level

MEDIUM

Prerequisites

  • A CityCloud account | Signup
  • A project in a datacentre where Magnum is currently available: Fra1, Sto2, Dx1, Tky1 or Kna1
  • CLI access to the project | HowTo
  • A Fedora CoreOS 31 (or later) image available in the region
  • kubectl installed | HowTo
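
Before you begin, you can verify that your CLI session is correctly scoped to your project by requesting a token (a quick sanity check, not strictly required):

$ openstack token issue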

Overview

In this guide, we will follow the steps below:

  • Create the Kubernetes Cluster Template
  • Create the Kubernetes Cluster
  • Access your Kubernetes cluster
  • Access the Kubernetes Dashboard
  • Access your cluster nodes
  • Set up CSI Cinder storage
  • Run your first container

Create the Kubernetes Cluster Template

Before launching a Magnum-managed Kubernetes cluster, you must first define a Cluster Template.

As a first step, you need to select one of the Fedora CoreOS images available in the current region:

$ openstack image list
...
| 7f46ec4a-8884-47e1-adc0-07f63b67aa14 | fedora-coreos-31       | active |
...
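
If you already know the image name, you can also inspect it directly (assuming it is named fedora-coreos-31 as above):

$ openstack image show fedora-coreos-31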

Continue defining the general properties of your cluster:

$ IMAGE="fedora-coreos-31"
$ TEMPLATE_NAME="k8s-template"
$ openstack coe cluster template create \
  --coe kubernetes \
  --image "$IMAGE" \
  --flavor 2C-4GB-50GB \
  --master-flavor 2C-4GB-50GB \
  --volume-driver cinder \
  --docker-storage-driver overlay \
  --external-network ext-net \
  --floating-ip-enabled \
  --network-driver flannel \
  --docker-volume-size 50 \
  --dns-nameserver 8.8.8.8 \
  --labels="container_runtime=containerd,cinder_csi_enabled=true,cloud_provider_enabled=true" \
  $TEMPLATE_NAME
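
The labels enable containerd as the container runtime, the Cinder CSI plugin (used in the Storage section below), and the OpenStack cloud provider. To verify that the template was registered as expected, you can inspect it afterwards:

$ openstack coe cluster template show $TEMPLATE_NAME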

Create the Kubernetes Cluster

Create a keypair resource using a public key. 

$ KEY_NAME="my_key"
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub $KEY_NAME

For a production environment, we recommend generating a new key pair rather than reusing an existing one!
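
If you need to generate a fresh key pair first, here is a minimal sketch (the file name ~/.ssh/magnum_key is only illustrative; the commands elsewhere in this guide assume ~/.ssh/id_rsa):

$ ssh-keygen -t ed25519 -f ~/.ssh/magnum_key
$ openstack keypair create --public-key ~/.ssh/magnum_key.pub $KEY_NAME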

It's now time to create your Kubernetes cluster using the template you just created:

$ CLUSTER_NAME="k8s-cluster"
$ openstack coe cluster create \
    --cluster-template $TEMPLATE_NAME \
    --master-count 1 \
    --node-count 1 \
    --timeout 60 \
    --keypair $KEY_NAME \
    $CLUSTER_NAME

Check the progress via:

$ openstack coe cluster list 

or for more details:

$ openstack coe cluster show $CLUSTER_NAME

Allow up to 10 minutes for the Kubernetes cluster to be provisioned and reach the CREATE_COMPLETE status.
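
If you prefer to wait unattended, a small convenience loop (a sketch: it polls every 30 seconds and exits once the status is no longer CREATE_IN_PROGRESS, i.e. on either CREATE_COMPLETE or CREATE_FAILED):

$ while [ "$(openstack coe cluster show $CLUSTER_NAME -f value -c status)" = "CREATE_IN_PROGRESS" ]; do sleep 30; done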

Access your Kubernetes cluster

Retrieve the cluster configuration file via:

$ openstack coe cluster config $CLUSTER_NAME

and point the KUBECONFIG environment variable at the downloaded file:

$ export KUBECONFIG=/home/user/dir/config
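
Since the config command prints the matching export statement, retrieval and export can also be combined into a single step (assuming the config file may be written to the current directory):

$ eval $(openstack coe cluster config $CLUSTER_NAME)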

Now, test communication with the cluster via:

$ kubectl get nodes
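
You can additionally confirm that the control plane endpoints respond:

$ kubectl cluster-info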

Access the Kubernetes Dashboard

Before accessing the Kubernetes Dashboard, you will need to generate a bearer token.

Note. To find out more about how to configure and use Bearer Tokens, please refer to the Kubernetes Authentication section.

Generate your token using the following command:

$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep ^token: | awk '{ print $2 }'
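
Note. On recent Kubernetes releases (v1.24 and later), service account token Secrets are no longer created automatically, so the command above may return nothing. In that case, a short-lived token can be requested directly (assuming an admin-user ServiceAccount exists in kube-system):

$ kubectl -n kube-system create token admin-user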

Start the kubectl proxy using:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

and log in at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ using the token generated in the previous step.

Access your cluster nodes

In Create the Kubernetes Cluster, you created a keypair resource and injected its public key into each node via the cluster create command.

Now, to access any node of your cluster, simply use the corresponding private key:

$ ssh -i ~/.ssh/id_rsa core@<NODE_PUBLIC_IP>

Each node's public IP can be retrieved via:

$ openstack server list
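
To narrow the output down to this cluster's nodes, the list can be filtered by name (assuming the default Magnum naming, which prefixes each server with the cluster name):

$ openstack server list --name "$CLUSTER_NAME" -c Name -c Networks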

Storage

To be able to create volumes and attach them to the cluster nodes, you will need to use CSI Cinder as the storage backend.

Note. The in-tree OpenStack Cinder volume plugin has been DEPRECATED since Kubernetes v1.11; use the CSI driver instead.

First of all, make sure the csi-cinder-controllerplugin-* and csi-cinder-nodeplugin-* pods are in the Running state:

$ kubectl get pods -A
NAMESPACE     NAME                                         READY   STATUS              RESTARTS   AGE
kube-system   coredns-786ffb7797-69l8z                     1/1     Running             0          16h
kube-system   coredns-786ffb7797-9d7j8                     1/1     Running             0          16h
kube-system   csi-cinder-controllerplugin-0                5/5     Running             0          16h
kube-system   csi-cinder-nodeplugin-68nm4                  2/2     Running             0          16h
kube-system   csi-cinder-nodeplugin-pjgfk                  2/2     Running             0          16h
kube-system   dashboard-metrics-scraper-6b4884c9d5-v7xgz   1/1     Running             0          16h
...

Now, create the csi-cinder-sc.yaml file with the following content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org

Apply the configuration via:

$ kubectl apply -f csi-cinder-sc.yaml

Kubernetes will then use CSI Cinder as the default Storage Class when creating new Persistent Volumes for your applications, which you can verify via:

$ kubectl get sc
NAME                            PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-sc-cinderplugin (default)   cinder.csi.openstack.org   Delete          Immediate           false                  4m18s

Run your first container

Below is an example of the configuration needed to run nginx in your Kubernetes cluster.

Create the nginx.yaml file with the following content: 

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-sc-cinderplugin
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    volumeMounts:
      - mountPath: /var/lib/www/html
        name: csi-data-cinderplugin
  volumes:
  - name: csi-data-cinderplugin
    persistentVolumeClaim:
      claimName: csi-pvc-cinderplugin
      readOnly: false

and apply the configuration via kubectl:

$ kubectl apply -f nginx.yaml

A PersistentVolumeClaim (PVC) will then be created, and a 1Gi Persistent Volume (PV) will be attached to the nginx pod:

$ kubectl get pvc,pv
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/csi-pvc-cinderplugin   Bound    pvc-c302aa51-354f-4586-be17-e53366fdc28d   1Gi        RWO            csi-sc-cinderplugin   27m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS          REASON   AGE
persistentvolume/pvc-c302aa51-354f-4586-be17-e53366fdc28d   1Gi        RWO            Delete           Bound    default/csi-pvc-cinderplugin   csi-sc-cinderplugin            27m
$ kubectl describe pod nginx 
Name:         nginx
Namespace:    default
Node:         k8s-cluster-csi-u33d4ix4jrfj-node-0/10.0.0.213
Start Time:   Thu, 24 Jun 2021 08:52:26 +0200
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.100.1.6
IPs:
  IP:  10.100.1.6
Containers:
  nginx:
    Container ID:   containerd://21bf6ae03a2bfe760e6e15355ab287e2c0372588c1b58cfb0fd7534af84b5280
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 24 Jun 2021 08:52:36 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/lib/www/html from csi-data-cinderplugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ckzd6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  csi-data-cinderplugin:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-pvc-cinderplugin
    ReadOnly:   false
  default-token-ckzd6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ckzd6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               28m   default-scheduler        Successfully assigned default/nginx to k8s-cluster-csi-u33d4ix4jrfj-node-0
  Normal  SuccessfulAttachVolume  28m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-c302aa51-354f-4586-be17-e53366fdc28d"
  Normal  Pulled                  28m   kubelet                  Container image "nginx" already present on machine
  Normal  Created                 28m   kubelet                  Created container nginx
  Normal  Started                 28m   kubelet                  Started container nginx
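
As a final, optional check, you can verify that nginx actually serves traffic by forwarding a local port to the pod:

$ kubectl port-forward pod/nginx 8080:80 &
$ curl http://localhost:8080

The default nginx welcome page in the response confirms the pod is up and reachable.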

Plugins

All plugins relevant to OpenStack and Kubernetes integration can be found at the following links: