The Kubernetes Series - Introduction

(Photo by chuttersnap on Unsplash)

In recent years, more and more people have been deploying their web applications to containers running on bare-metal and cloud-provider hosts, instead of running the applications directly on the host environment itself.

The benefit of running your application in a container is that it isolates the application environment from the host itself: all the dependencies your application needs are packaged into a single, stand-alone and easily transportable unit of software that you can move rapidly between hosts, without having to worry about keeping application code, system tools, runtimes and libraries in sync between different host deployments.

(There are many container services available, the most popular being Docker, which is the one I'll be utilising in this series of posts.)

For a while, the new flexibility and speed that containers allowed for seemed to be enough, and container services proliferated rapidly. But soon dev-ops teams responsible for larger applications started to run into new limitations. The speed and ease of deployment that containers allowed for quickly created a need for a way to manage state between different containers, to scale deployments dynamically, and to remove and replace containers that started acting up.

Most people hacked together their own scripts and services to do this, but eventually Google came to the rescue and released a tool called Kubernetes that does it all (and more) for you. They kindly open-sourced the tool, and a splendid team of contributors have been continuously contributing their time and resources to transform it into the mature dev-ops solution it is today.

Being an open source solution, Kubernetes is mostly platform agnostic and you can spin up the same Kubernetes cluster on Google Cloud, AWS, Azure and DigitalOcean, or move your deployments between a mixture of them if you want.

In brief, Kubernetes features:

  • Automatic Bin-packing - it schedules your containers onto nodes based on their resource requirements, making the best use of the cluster's available resources.
  • Service Discovery & Load Balancing - It manages networking and DNS between containers for you.
  • Storage Orchestration - you can specify a storage solution of your own choice.
  • Self-healing - it restarts failed containers on its own.
  • Secrets & Configuration Management - you can manage and update your secrets without having to rebuild your entire application or expose your secrets in the rest of your stack.
  • Batch Execution
  • Horizontal Scaling
  • Automatic Rollbacks & Rollouts (see the example Deployment manifest after this list)
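Several of these features come together in a single Deployment manifest. The sketch below is a minimal, hypothetical example (the name, labels and image are placeholders, not from any real project): the replica count gives you horizontal scaling and self-healing, and the rolling-update strategy drives automated rollouts and rollbacks.

```yaml
# A minimal, hypothetical Deployment; the name, labels and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                        # horizontal scaling: run three identical Pods
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate              # automated rollouts; a rollback reverses them
    rollingUpdate:
      maxUnavailable: 1              # keep at least two Pods serving during an update
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example/web-app:1.0   # placeholder container image
          ports:
            - containerPort: 8080
```

If one of the three Pods crashes, the Deployment's controller notices that the observed state no longer matches the desired state and starts a replacement, which is the self-healing behaviour listed above.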

Core cluster components

Let's have a brief look at the core components and concepts of a Kubernetes cluster.

The cluster

A Kubernetes cluster consists of two types of nodes - usually a single Master node and multiple Worker nodes.

Master Node

The Master node is usually a single node in your cluster (but there might be more in high-availability configurations) that manages your cluster's state (as tracked by the Pod Lifecycle Event Generator). It contributes three processes to the Kubernetes Cluster Control Plane:

  • The Kube-api-server –> This is the primary Kubernetes management component. It orchestrates all functions and events in the cluster and manages all network communication into, within and out of the cluster.
  • Kube-controller-manager –> Runs the controllers, including the Node Controller and the Replication Controller.
  • Kube-scheduler –> Decides which nodes containers should be deployed to, based on the containers' resource requirements.

Together, these processes ensure that the Master node can manage, schedule, plan and monitor all the nodes running in your cluster.
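To make the scheduler's job concrete, here is a minimal, hypothetical Pod spec (the name and image are placeholders). The resources.requests block is what the kube-scheduler uses to pick a node with enough free CPU and memory for the container.

```yaml
# A minimal, hypothetical Pod spec; the scheduler places it on a node
# that has at least the requested CPU and memory available.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: example/demo:1.0    # placeholder container image
      resources:
        requests:
          cpu: "250m"            # a quarter of a CPU core
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```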

The shared state of the cluster is stored on the Master node in a key-value store named etcd. We will look into etcd in more depth in the next post.

Worker Nodes

A Worker node is a node created and managed by the Master node, and it hosts the application containers. The Kubernetes Cluster Control Plane processes running on an individual Worker node consist of the following:

  • A Kubelet –> It keeps in contact with the Master node, receiving instructions on which containers to load and sending back continuous status updates on the Worker node and the containers running on it.
  • Kube-proxy –> The proxy facilitates networking and maintains the rules for communication between Worker nodes (see the example Service below).
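As an illustration of what kube-proxy does with those rules, here is a minimal, hypothetical Service manifest (names, labels and ports are placeholders). When a Service like this is created, kube-proxy on each Worker node programs the forwarding rules that route traffic sent to the Service onto the matching Pods, wherever in the cluster they happen to run.

```yaml
# A minimal, hypothetical Service; kube-proxy on every node installs the
# forwarding rules that send traffic for this Service to the selected Pods.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app        # matches the Pods created by the Deployment sketched earlier
  ports:
    - port: 80          # the port clients inside the cluster connect to
      targetPort: 8080  # the port the containers listen on
```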

That's it for the introduction. In the next post we'll have a more detailed look at the Master node in particular.
