The Kubernetes Series - Cluster Maintenance

In the previous post we had a look at deployments and keeping your application up to date. In this post we'll have a look at how to keep the actual infrastructure your application is running on, the Master and Worker nodes, up to date, without interrupting service. We'll also have a brief look at disaster recovery, and what to do when things eventually go catastrophically wrong.

When Pods Go Down

When a node goes offline, the kube-controller-manager running on the Master node will, by default, wait 5 minutes for it to come back before considering its Pods dead and evicting them. You can change this default 5-minute value, if you want, by updating the pod-eviction-timeout property on your kube-controller-manager service.
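
As a sketch, on a kubeadm-style cluster the kube-controller-manager runs as a static pod rather than a systemd service, so the flag would go into its manifest; the value below is just the current default, shown for illustration. Note that on newer Kubernetes versions eviction is driven by taints, so this flag may no longer have any effect.

# /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm clusters)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --pod-eviction-timeout=5m0s    # raise or lower the eviction grace period here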

If the node comes back up before the 5 minute eviction timeout, the kubelet service on the node will ensure that the Pods on that node reestablish communication with the rest of the cluster.

If, however, the node only comes up again after the 5-minute timeout, it comes back blank: any of its Pods that were created with a Deployment or ReplicaSet will already have been provisioned to other available nodes in your cluster, and those created individually will simply be gone!

Operating System Maintenance

The above is important to keep in mind if you want to update individual nodes.

You can take some comfort in the fact that ReplicaSet and Deployment Pods will find a home on another node if you need to briefly bring one down for some light patching, but if you've ever done an OS upgrade on a personal computer, you know it's risky business: you're not guaranteed to have the node back up before 5 minutes have passed.

So how do you make sure your Pods don't go down and disappear into the void if you need to upgrade a node that might take a while to spin back up?

Drain, Cordon & Uncordon

You can force ReplicaSet and Deployment Pods on a particular node to take up residence somewhere else with:

kubectl drain {node01}

This will terminate the Pods on the node and launch them again on another one. It will also make sure no new Pods get provisioned to the node, by marking it as cordoned (unschedulable).
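
In practice you'll often need a couple of extra flags, since drain refuses to evict DaemonSet Pods or Pods using local emptyDir storage by default (node01 is just a placeholder name here; on older kubectl versions the second flag is called --delete-local-data):

kubectl drain node01 --ignore-daemonsets --delete-emptydir-data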

To see which nodes your Pods are deployed on, enter:

kubectl get pods -o wide

Incidentally, you can also manually cordon a node without draining it first; this leaves the Pods already running on it in place, but prevents any new Pods from being scheduled there:

kubectl cordon node01

Then, when your node is updated and back online, you can make it available to Pods again with:

kubectl uncordon {node01}
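
You can confirm a node's scheduling state at any time; a cordoned node shows up with a SchedulingDisabled status:

kubectl get nodes    # a cordoned node reports a STATUS of Ready,SchedulingDisabled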

Updating Kubernetes itself

To see the Kubernetes version running on a particular node, run:

kubectl get nodes

All the components of the Control Plane run the same version by default, but you can update them individually if you like. The only requirement is that none of them runs a higher version than the kube-apiserver service, since that is your interface for controlling the others and therefore needs to expose the newest API.

Kubernetes only supports the three most recent minor versions, so if a new minor version is released and your Control Plane components are older than the third-most-recent one, it's a good idea to update (and do so one minor version at a time!).
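
A quick way to check which versions are currently in play on a node (the commands below are standard, though their output format varies between releases):

kubectl version      # client and kube-apiserver versions
kubeadm version      # version of the kubeadm tool itself
kubelet --version    # version of the kubelet on the node you run this on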

Upgrading can be very easy or very hard, depending on how you set up and host your cluster. If you use a managed cluster from one of the major cloud providers, you can likely do it at the push of a button. If you built your cluster from scratch yourself, it will likely be hard. Unless you used kubeadm, which is a little more forgiving and which we'll have a look at now.

Updating with kubeadm

We have to upgrade the Master nodes first, and then the Worker nodes. While a Master node is being updated your Workers keep on chugging along, but you won't be able to control them with kubectl, since your commands are served by the kube-apiserver on the Master, which will be offline during the upgrade.

Updating Worker nodes needs similar consideration to updating Deployments: you can update all Worker nodes at once, which means some Pod downtime, or update them one at a time.

One-at-a-time

We looked at draining nodes above, which is one way to do this without downtime. You can move Pods to other nodes and then safely update the drained node, or create new, up-to-date nodes, move the Pods there, and then remove the out-of-date ones.

Let's start by gathering some information first:

kubeadm upgrade plan

Note two things. First, you also need to update kubeadm itself. Second, you still need to manually upgrade the kubelet service on each upgraded node afterwards.

apt-get install -y kubeadm={next version} && kubeadm upgrade apply {next version}

Now check your nodes again and you'll see that the versions shown are still the old ones. That's because kubectl get nodes reports the kubelet version, and the kubelets running on the nodes still need to be upgraded.

So let's update the kubelet on the Master node first, with:

apt-get install -y kubelet={next version} && systemctl restart kubelet

Now check the Master node's version again:

kubectl get nodes

There, upgraded. Now we need to do the same on the Worker nodes, one-by-one, by draining and upgrading each one.

First, find your Worker node's IP address so we can ssh into it:

kubectl get nodes -o wide
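
Before ssh'ing in, drain the Worker from the Master so its Pods get rescheduled first (node01 stands in for your Worker's name):

kubectl drain node01 --ignore-daemonsets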

Then ssh in and run the same commands again:

apt-get install -y kubeadm={next version} && kubeadm upgrade node && apt-get install -y kubelet={next version} && systemctl restart kubelet

Then, when it's updated and restarted, we exit the node and uncordon it again.
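
From the Master, that's just the uncordon command from earlier (again with node01 as a placeholder):

kubectl uncordon node01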

Backups & Disaster Recovery

You can back up your entire Kubernetes cluster by making backups of the following:

Resource Configurations

If you've created your resources with declarative definition files (YAML), your backups are much easier, because you can simply back up those files. You can store them on a volume or in cloud storage somewhere, or just commit them to a repository (be careful with ConfigMaps and Secrets though).

If you used the imperative, command-line method to create resources, you'll need to do a little more work. You can manually look up the resources you've created imperatively and write them out to declarative files with kubectl.

A Pod, for example, might look like this:

kubectl get pod {pod-name} -o yaml > pod-name.yml

Or you can use the kube-apiserver to save a copy of your entire cluster resource configuration:

kubectl get all --all-namespaces -o yaml > all-kube-resources.yml
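
Note that get all does not literally return every resource type; ConfigMaps, Secrets and several others are left out, so you may want to export those separately as well, for example:

kubectl get configmaps,secrets --all-namespaces -o yaml > all-kube-configs-secrets.yml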

There are also tools available to help you manage this, such as Velero, but that's up to you to research.

ETCD Store Backups/Snapshots

Or you could back up your ETCD store, which contains information about your resources and their state.

You can save the content of your ETCD store either with its snapshot tool or by making a copy of its data directory manually.

But first, let's get some information about our ETCD store that we'll need to make backups:

kubectl logs etcd-master -n kube-system

You'll need to take note of the following:

  • endpoints URL -> almost always 127.0.0.1:2379
  • CA Cert location
  • Server Cert location
  • Cert Key
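
On a kubeadm cluster you can also read these values straight from the etcd static pod manifest instead of the logs (the path below assumes the default kubeadm layout):

grep -E "listen-client-urls|cert-file|key-file|trusted-ca-file" /etc/kubernetes/manifests/etcd.yaml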

Then, with the snapshot tool and the information gathered above, run:

ETCDCTL_API=3 etcdctl snapshot save {snapshot-name-date.db} --endpoints={https://127.0.0.1:2379} --cacert={/etc/etcd/ca.crt} --cert={/etc/etcd/etcd-server.crt} --key={/etc/etcd/etcd-server.key}
ETCDCTL_API=3 etcdctl snapshot status {snapshot-name-date.db}

This saves the snapshot file to your current directory; the second command then reports the status of the snapshot.

By default, your ETCD store keeps its data in the following directory on your Master node:

/var/lib/etcd

You can point your backup tool at this directory. You can also change its location by updating your ETCD service with the --data-dir flag pointing to a directory of your choosing.
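
If you do go the directory route, a simple archive of it might look like the example below; keep in mind that copying a live etcd data directory is only crash-consistent, so the snapshot tool above is generally the safer option.

# example only: archive the etcd data directory to a backup location of your choosing
tar -czf /backup/etcd-data-$(date +%F).tar.gz -C /var/lib etcd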

Restoring ETCD Store Backups/Snapshots

Restoring your ETCD store is just as easy. First, stop the kube-apiserver:

service kube-apiserver stop
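
If your cluster was set up with kubeadm, the kube-apiserver runs as a static pod rather than a systemd service, so there is nothing to stop with service; temporarily moving its manifest out of the manifests folder has the same effect:

# kubeadm clusters: the kubelet stops the static pod once its manifest disappears
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/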

Then either put back the backed-up data directory from above or, in the case of a snapshot, run:

ETCDCTL_API=3 etcdctl snapshot restore {snapshot-name-date.db} \
  --endpoints={https://[127.0.0.1]:2379} \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --name=master \
  --data-dir=/var/lib/etcd-backup-date \
  --initial-cluster=master={https://127.0.0.1:2380} \
  --initial-cluster-token={etcd-cluster-date} \
  --initial-advertise-peer-urls={https://127.0.0.1:2380}

Then reload the system daemon and bring etcd and the kube-apiserver back up. This involves either restarting the services on the Master, if you set up your cluster manually, as follows:

systemctl daemon-reload && service etcd restart && service kube-apiserver start

Or, if you set up your cluster with kubeadm, editing the etcd.yaml file in /etc/kubernetes/manifests. You will need to update the following to match the restore directory and token set in your restore command above (a sketch of the result follows the list).

  • --data-dir folder to point to /var/lib/etcd-backup-date
  • --initial-cluster-token with etcd-cluster-date
  • volumes: - hostPath: path: with /var/lib/etcd-backup-date
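
As a sketch, assuming the restore command above used /var/lib/etcd-backup-date and etcd-cluster-date, the relevant parts of etcd.yaml end up looking roughly like this (surrounding fields omitted):

# /etc/kubernetes/manifests/etcd.yaml (only the fields touched by the restore)
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-backup-date
    - --initial-cluster-token=etcd-cluster-date
    volumeMounts:
    - mountPath: /var/lib/etcd-backup-date
      name: etcd-data
  volumes:
  - hostPath:
      path: /var/lib/etcd-backup-date
      type: DirectoryOrCreate
    name: etcd-data

Once you save the file, the kubelet will recreate the etcd Pod with the new settings.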

Conclusion

That's it for Kubernetes maintenance.

We had a look at how to update Master and Worker nodes without interrupting service, using the drain, cordon and uncordon commands.

We also had a look at updating Kubernetes itself with kubeadm, by updating the Master node first and then ssh'ing into the Worker nodes to update them too.

Finally, we had a look at methods of disaster recovery, backing up our entire cluster either by saving our resource definition files to a remote repository or by taking snapshots of the ETCD store.

The next post will cover methods of authentication. See you then!
