This is the third post in my series on Kubernetes. The previous post covered the Master node in a bit more detail (you can find it here), and in this post we'll look at the Worker nodes and their processes.
As mentioned in the first Kubernetes post in this series, the Worker nodes run two Kubernetes Control Plane processes that keep them coordinated with the Master node and the cluster as a whole: the kubelet and kube-proxy.
Let's dive a bit deeper into them.
The kubelet is the contact point between the Worker node and the Master node. It loads and destroys the pods and containers handed to it by the Kube-Scheduler on the Master node, and sends back the status of the node and its pods at regular intervals.
The flow happens like so:
- The kubelet registers the node on Master
- The Kube-Scheduler may then send it instructions to load a particular pod and its containers on the Worker node.
- Then, on the Worker node itself, the kubelet instructs the container runtime (e.g. Docker) to load that container in the pod.
- After this, the kubelet monitors the pod and the node, and notifies the Master node of any changes it might need to act on, based on the configured Desired State.
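The pod the Kube-Scheduler assigns to a node is just a declarative spec that the kubelet then acts on. As a rough sketch, a minimal pod manifest looks something like this (the names `my-app` and `my-app-container` are placeholders for illustration, not from a real deployment):

```yaml
# Minimal pod spec; names are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app-container
    image: nginx:1.25   # any image the container runtime can pull
```

Once scheduled, the kubelet on the chosen Worker node pulls the image, starts the container, and keeps reporting the pod's status back to the Master.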
Unlike the processes on the Master node, the kubelet is not automatically installed by kubeadm. It is always installed from a binary and runs as a service on Worker nodes.
To view the running process, you can grep for it like so:

```shell
ps aux | grep kubelet
```
You might have different pods running different services that need to communicate with each other on your cluster, like a database pod, a REST API pod, and a front-end webapp pod.
How do these different pods and containers communicate with each other when they are dynamically created and destroyed as things inevitably change? It works through networking services running on the Master node, which in turn connect to the kube-proxy service on each Worker node.
Kube-proxy runs on each Worker node, listening for connections forwarded from the cluster Master. It then handles the networking between all the different pods running on your cluster, forwarding traffic using iptables rules.
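For context, the iptables rules kube-proxy writes are driven by Service objects, which give a set of pods a stable virtual IP. A hedged sketch of such a Service (all names and ports here are illustrative placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend        # placeholder name
spec:
  selector:
    app: backend       # matches pods labeled app=backend
  ports:
  - port: 80           # stable Service port that clients use
    targetPort: 8080   # container port on the matching pods
```

Kube-proxy translates traffic sent to this Service's cluster IP into iptables rules that redirect it to the IPs of the matching pods, wherever they currently run.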
Unlike the kubelet, kube-proxy can either be installed from a binary and run as a service on the Worker node, or be installed with kubeadm, which then runs it as a DaemonSet pod on your cluster. A DaemonSet, roughly speaking, ensures that a copy of a pod runs on each node in your cluster.
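A DaemonSet is declared much like any other workload. Here is a minimal hedged sketch (the names and image are placeholders, not the actual kube-proxy manifest that kubeadm generates):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # placeholder; not the real kube-proxy manifest
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.36
        command: ["sleep", "infinity"]  # stand-in for a real per-node agent
```

Because it is a DaemonSet, Kubernetes automatically schedules one copy of this pod on every node, including nodes added to the cluster later.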
A pod is a Kubernetes object that contains containers. So in a cluster you have your Worker nodes, which contain different pods, each of which is a single instance of an application. Each application may need any number of containers to be operational.
A pod is the smallest object that you can create in a Kubernetes cluster. When you need to scale up your application deployment to handle a large traffic load, you scale up the number of pods running an instance of your application, rather than adding more containers to an existing pod.
Pods typically have a 1-to-1 relationship with a container of a particular kind. In other words, a pod can contain multiple containers, but usually only one container of each kind. Also, all containers running in a pod can reach each other via localhost, because they share the same network namespace. A pod can thus be viewed as a single "computer" or "host", if you will.
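The shared network namespace means a helper container can talk to the main container over localhost. A hedged sketch of a two-container pod (names, images, and the polling command are illustrative, not a recommended setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper       # placeholder name
spec:
  containers:
  - name: web
    image: nginx:1.25         # serves HTTP on port 80
  - name: helper
    image: busybox:1.36
    # Reaches the nginx container via localhost, since both
    # containers share the pod's network namespace.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
```

Both containers are scheduled together on the same node, started together, and share the same IP address, which is what makes the localhost trick work.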
So we just had a more in-depth look at Worker nodes. In the next post we will move a little bit more to the practical side of things and have a look at how we actually create Pods, ReplicaSets and Deployments. Until then, thanks for reading!