The Kubernetes Series - Networking

(Photo by Taylor Vick on Unsplash)

The post before this one was about handling storage and volumes in Kubernetes. This one will be a little more intense: we'll be looking at networking.

Kubernetes Networking

Each node in your Kubernetes cluster has its own network interface, with its own IP address and an associated hostname.

As discussed previously, your Master node and Worker nodes run a set of default services (the Control Plane) in order to ensure the operation of your cluster. These services claim certain ports by default.

Ports used on Master

On the Master node, we have the kube-apiserver running on port 6443. The kube-scheduler runs on 10251 and the kube-controller-manager on 10252. The etcd store listens for clients on port 2379 (and on 2380 for peer communication if you have multiple Master nodes).

Ports used on Workers

The kubelet (the point of contact between the Workers and the Master) on each node listens on port 10250. The port range available on the Worker nodes for external connections (NodePort Services) is 30000 to 32767.
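If you want to confirm which of these ports are in use on a node, one quick check (assuming netstat is installed on the node; ss -nltp works as an alternative) is to list the listening TCP sockets and filter for the Kubernetes processes:

sudo netstat -nltp | grep -E 'kube|etcd'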

Networking Between Pods

Pods Networking Model

Kubernetes does not come with a default internal networking service, so you have to set it up yourself. Luckily there are a couple of ready-made solutions to handle networking for us. Some of the solutions available include WeaveWorks, Cilium and NSX.

The solution you choose should comply with the following requirements:

  • Every Pod should have its own IP address.
  • Every Pod should be able to connect with other Pods on the cluster, regardless of the nodes they run on.

There are a few more requirements not listed above, but it's safe to say there are certain standards that need to be adhered to in order for container orchestration to work seamlessly across different container runtimes and orchestration tools.

Luckily, such standards have indeed been defined - the Container Network Interface (CNI).

Container Network Interface (CNI)

The Container Network Interface (CNI) is a networking spec for containers and container orchestrators like Kubernetes. It defines a standard for how container orchestrators handle the following:

  • How to handle container network namespaces
  • How to connect said container network namespaces to other networks
  • Which plugins should be executed on connection to which networks
  • The creation and storage of a Network Configuration JSON file.
  • and more...

In order to be CNI-compliant, the plugin you choose to handle your networking must be executed on each node in your cluster after each container's creation. The Control Plane service responsible for managing this is, of course, the kubelet.

This can be confirmed by inspecting the kubelet service and looking for the properties network-plugin, cni-bin-dir and cni-conf-dir.

Run the following command on a node to inspect the kubelet service:

ps -aux | grep kubelet

You can also look at all the executable CNI plugins at:

ls /opt/cni/bin

And at the CNI configuration directory to view the installed plugin configuration:

ls /etc/cni/net.d

The plugin service you install - WeaveWorks, perhaps - will create a network bridge on each node in your cluster and assign an IP address to each bridge. It then does the heavy lifting of managing the connections between all the nodes in your cluster.

Installing a Kubernetes Virtual Network

Let's install Weave as a set of Pods running on a newly created cluster:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Now let's view the fruits of our work:

kubectl get pods -n kube-system

To view the IP range assigned to the Weave bridge on the Master node:

ip addr show weave

When your networking is set up, by default, any Pod can communicate with any other Pod in your cluster. This is nice when you start up a cluster and need to get things running quickly, but it might not be the most secure solution.

It would be a good idea to limit which Pods can communicate with which other Pods and services - and on which ports - on an ingress and egress level. This is where Network Policies come in.

Network Policies

A Network Policy is a Kubernetes configuration object with a set of networking rules that you can assign to one or more Pods with labels and selectors.

Here's the example Network Policy from the docs:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

The Network Policy above selects Pods with the label role=db. It defines both ingress (incoming) and egress (outgoing) restrictions.

Incoming connections to those Pods are allowed from three sources: IP addresses in the range 172.17.0.0/16 except 172.17.1.0/24 (that is, 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255), any Pod in a namespace labelled project=myproject, or any Pod in the default namespace with the label role=frontend. The only protocol and port allowed is TCP on port 6379.

Outgoing connections are allowed from any Pod in the default namespace with the label role=db, but only to IP addresses in 10.0.0.0/24 and only on TCP port 5978.
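As a simpler illustration - a minimal sketch, not taken from the docs - the policy below selects the same role=db Pods but defines no ingress rules at all. Because Ingress is listed under policyTypes while no ingress section is provided, all incoming connections to those Pods are blocked:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress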

Service Networking

To make the application running on a Pod accessible to others, you need to expose it as a Service. Any Service created is available to all Pods on the cluster by default. Services are thus not limited to a single node or Pod - they are cluster-wide resources.

A Service gets assigned an IP address on instantiation, from a predefined range (defined by the kube-apiserver flag service-cluster-ip-range). The kube-proxy service on each node then sets up the forwarding rules to make it accessible to other nodes on the cluster.
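You can check which range is configured by inspecting the kube-apiserver process and looking for the service-cluster-ip-range flag:

ps -aux | grep kube-apiserver

Or, on a kubeadm-provisioned cluster (an assumption about your setup), by grepping its static Pod manifest:

cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep service-cluster-ip-range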

You can check the networking details of a newly created Service with the following:

iptables -L -t nat | grep new-service

This will display the Service's IP address and port, and its DNAT rule - the forwarding rule pointing to the Pod's own IP address and port.

DNS in your cluster

Setting resource names

Kubernetes has a built-in DNS service to manage name resolution for Pods and Services in your cluster (not for nodes - they only need working networking, managed by your CNI plugin as above).

At the time of writing, the latest version of Kubernetes uses CoreDNS to spin up a DNS Pod and Service.

You can view the DNS service in your cluster by running...

kubectl get service -n kube-system

...and looking for the service called kube-dns.

The Kubernetes DNS configuration file, the Corefile, can be viewed inside the CoreDNS container at

cat /etc/coredns/Corefile

Printing out the Corefile reveals that it references a kubernetes plugin, in which the root domain (cluster.local) is defined.

You can also see that the Corefile itself is stored as a ConfigMap, and that the address of the DNS service gets injected into each Pod and saved at the location specified:

/etc/resolv.conf

Adjusting the ConfigMap thus propagates configuration changes to the DNS service that all Pods use.
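You don't need to exec into the DNS Pod to inspect this; assuming the standard CoreDNS deployment, the same Corefile lives in a ConfigMap named coredns in the kube-system namespace:

kubectl -n kube-system get configmap coredns -o yaml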

Service DNS

Whenever a Service gets created, the Kubernetes DNS stores the Service name as its hostname, with its associated IP address as a record listing. Now, when you refer to a Service by its hostname, the cluster DNS will resolve it to the correct IP address.

If you want to refer to a Service in a different namespace than the one you're calling from, you will have to use dot notation to tell the DNS service which IP address to resolve to. For example, if you want to reach the WebServer Service in the Main namespace, you would refer to it as WebServer.Main.

You can confirm this on the Master node with a curl check:

curl https://WebServer.Main

But DNS resolution goes deeper! All Services are created under a default subdomain called svc, so a valid URL for the WebServer Service is also:

curl https://WebServer.Main.svc

And beneath that, all Pods and Services are part of the cluster.local domain too:

curl https://WebServer.Main.svc.cluster.local

The above is what is called the fully qualified domain name for the WebServer service.
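You can verify this resolution from inside the cluster by spinning up a throwaway Pod and running a lookup against the fully qualified name. The command below is a sketch: busybox:1.28 is chosen because its nslookup behaves well with cluster DNS, and WebServer and Main are the example names from above (real Service and namespace names are lower-case):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup WebServer.Main.svc.cluster.local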

Pod DNS

Unlike Services, Pod hostnames are not derived from the Pod name by default. Instead, the hostname is a string literal derived from the Pod's IP address, with the dots replaced by dashes. The IP address 10.12.0.1 becomes the hostname "10-12-0-1".

The domain type for Pods is simply pod, so to reach the fully qualified domain name for the above Pod (if it is also in the Main namespace) on your cluster you would call:

curl https://10-12-0-1.Main.pod.cluster.local

As mentioned previously, the DNS service reference gets injected into Pods and the kubelet saves it to the Pod's DNS config file, which can be viewed inside the Pod in question with:

cat /etc/resolv.conf

This should show an entry called nameserver, which the Pod recognises as the address of the DNS service.

To show which IP address a particular hostname resolves to, use the Linux command:

host 10-12-0-1

or

nslookup 10-12-0-1

Ingress

An Ingress is a special kind of resource that exposes a single IP address from your cluster to the public, while allowing you to direct traffic to different services in your cluster based on the incoming URL signature and a set of rules. It also comes with the added benefit of constraining the handling of SSL to a single service, thus vastly simplifying maintenance and setup schlep.

In other words, it's a service acting as an SSL-capable, code-configurable load balancer that you can manage like all other Kubernetes resources, with definition files.

The deployed ingress is called an Ingress Controller, and its configuration bindings are called Ingress Resources.

Ingress Controller

The Ingress Controller is like a normal load-balancing service with additional features. It watches for any changes in the cluster and updates itself intelligently based on these changes.

The Ingress Controller is not a default Kubernetes controller, so we need to install a third-party one too. One of the most popular and best-supported options is the NGINX Ingress Controller, which we'll use here.

Let's create an NGINX Ingress Controller Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443

We also need to create a blank ConfigMap for it, in case we want to override and configure the properties for the NGINX Ingress - like SSL, keep-alive and the location of the error log - at a later stage. Check the NGINX Ingress ConfigMap docs for a comprehensive list of configurable options.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration

Now, let's expose the controller to the public network with a NodePort Service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https    
  selector:
    name: nginx-ingress

And since the Ingress Controller needs to be able to make changes based on events happening in the cluster, we need to give it the right roles & permissions to do so, starting with a ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
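The ServiceAccount on its own grants nothing; it still needs to be bound to a role that lets the controller read the resources it watches. The sketch below is illustrative only - the role name and rule list are assumptions, not the official NGINX Ingress RBAC manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "pods", "secrets", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: default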

Ingress Resources

Ingress Resources are the rules and ingress configuration settings the Ingress Controller should maintain and follow. These include, for instance, the routing/forwarding rules.

Routing By Path

Routing rules direct traffic to the appropriate services running in your cluster based on the URL paths you specify. Note that these services are referred to as 'backends' in the Ingress Resource definition files.

Let's create an Ingress Resource routing traffic to two separate paths - a webapp service on / and a blog service on /blog:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-webapp
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: webapp-service
          servicePort: 80
      - path: /blog
        backend:
          serviceName: webapp-blog-service
          servicePort: 80
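Once this Ingress is applied, you can confirm that the rules were picked up and see which backends they map to:

kubectl describe ingress ingress-webapp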

Routing By Domain Name

We can route by domain and subdomain names instead of paths too, as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-webapp
spec:
  rules:
  - host: webapp.com
    http:
      paths:
      - backend:
          serviceName: webapp-service
          servicePort: 80
  - host: blog.webapp.com
    http:
      paths:
      - backend:
          serviceName: webapp-blog-service
          servicePort: 80

Rewrite Targets

Notice that the blog path above routes to the webapp-blog-service. Now consider that the webapp-blog-service might be pointing to an application that doesn't expect incoming traffic to go to /blog. The application might expect traffic to come in on /blogroll.

How do we tell the Ingress service to take traffic coming in on one path from outside and send it to another path internally? With Rewrite Target annotations.

Let's create a separate Ingress path rule definition to handle rewrites for the blog service specifically:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-webapp-blog
  annotations: 
    nginx.ingress.kubernetes.io/rewrite-target: /blogroll/$2
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: webapp-blog-service
          servicePort: 80
        path: /blog(/|$)(.*)

Let's unpack what's happening here:

  • we added an annotation called rewrite-target to the Ingress metadata, which rewrites the path to /blogroll/ and appends the variable $2, used to add any additional path elements to the rewritten path.
  • we added a regex capture group /blog(/|$)(.*) to the blog-service backend path, which captures the path elements that follow an initial /blog
  • the captured string is passed into the $2 variable when the path is rewritten.

That means that the following ingress url request,

http://webapp.com/blog

will be passed to the webapp-blog-service internally as

http://webapp-blog-service/blogroll

The regex will handle additional URL elements too, for instance;

http://webapp.com/blog/hello-world

will be rewritten internally as

http://webapp-blog-service/blogroll/hello-world
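If you want to test this before public DNS for webapp.com is in place, one option - with <node-ip> and <node-port> as placeholders for one of your node's IP addresses and the NodePort assigned to the nginx-ingress Service - is to send the Host header explicitly:

curl -H "Host: webapp.com" http://<node-ip>:<node-port>/blog/hello-world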

Conclusion

In this post we had a look at networking in Kubernetes and at setting up a cluster virtual network with a CNI-compliant third-party service.

We also had a look at restricting incoming and outgoing traffic for Pods with Network Policies.

Next we looked at how Kubernetes handles DNS internally, and how the DNS hostnames of services and Pods are handled slightly differently.

Finally we had a look at cluster Ingresses and how they can help you route traffic to different services in your cluster through a single incoming load balancer.

And that was it for the Kubernetes series. Thanks for reading and I hope some of it was of benefit to you!
