The Kubernetes Series - Services

Services in Kubernetes are the communication channels between the different components and entities inside and outside of your cluster. Think of a Service as a kind of gateway: a mini gateway/server running in your cluster, with its own internal cluster IP address.

There are three main types of Services:

  • NodePort –> Exposes a port on the node and forwards it to a port on a Pod inside the cluster.
  • ClusterIP –> Creates a stable, cluster-internal IP that other Pods use to reach a group of Pods.
  • LoadBalancer –> Provisions an external load balancer (through your cloud provider) for connections into your cluster.

NodePort Service

A NodePort service ties together the following three ports:

  • TargetPort –> The port on the Pod.
  • Port –> The port on the Service itself.
  • NodePort –> The port on the node. NodePorts are limited to the range 30000-32767 by default.

As with other objects, we create a NodePort service with a YAML file.

# nodeport.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service-example
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30000
  selector: # <- select the Pods to route to, by label
    app: front-end-app
    type: front-end

You don't need to specify all three ports: if you leave out targetPort it defaults to the same value as port, and if you leave out nodePort Kubernetes allocates a free port from the 30000-32767 range for you; see the minimal sketch below.
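
As an illustration, here is a minimal sketch of the same Service relying on both defaults (the file and Service names here are placeholders, not from the original example):

# nodeport-minimal.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-minimal-example
spec:
  type: NodePort
  ports:
    - port: 80 # targetPort defaults to 80; nodePort is auto-allocated from 30000-32767
  selector:
    app: front-end-app
    type: front-end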

You then create it as per usual with kubectl, and voilà! Now you can access the Pod on the node's IP address on port 30000.
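
For example, assuming the manifest is saved as nodeport.yml and your node's IP address is 192.168.99.100 (a placeholder; use your own node's IP), the steps look roughly like this:

# create the Service and check that it was assigned nodePort 30000
kubectl create -f nodeport.yml
kubectl get service nodeport-service-example

# hit the Pod through the node's IP and the nodePort
curl http://192.168.99.100:30000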

Another thing to note is that the selector will match all Pods labelled app: front-end-app and type: front-end. If you have four Pods carrying those labels, the Service will connect to all four of them, and also load balance requests between them for you.

But it goes further than that: it will select Pods across different nodes as well, as long as the labels match. You can then access the same Service on any of the connected nodes' IP addresses, on port 30000. The sketch below shows what such a matching Pod might look like.
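
To make the label matching concrete, here is a sketch of a Pod definition the selector above would pick up (the Pod name and image are placeholders, not from the original example):

# front-end-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: front-end-pod-1
  labels:
    app: front-end-app # matches the Service selector
    type: front-end    # matches the Service selector
spec:
  containers:
    - name: front-end
      image: nginx # placeholder image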

ClusterIP

ClusterIPs enable you to create a microservices architecture on your cluster by giving multiple Pods of a kind a single point of contact on a Service. For instance, if you have a traditional front-end, API back-end and caching-server deployment, you can connect all the front-end Pods under one front-end Service with a single IP, all the back-end Pods under a single back-end Service with its own IP address, and so on.

These Services each get their own IP address and name on the cluster, and other Pods can then reach them through either one.

Let's create api.yml:

# api.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP
  ports:
    - targetPort: 80
      port: 80
  selector: # <- select the Pods to route to, by label
    app: main-api
    type: back-end-api

Create it with kubectl and then view the Service: you should see that it has been created and now has a ClusterIP and a name, both of which other entities in your cluster can use to connect to it.
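
Concretely, the steps might look like this. The curl call is a sketch that assumes you run it from inside another Pod, since the ClusterIP is only reachable from within the cluster, and that cluster DNS is available (it usually is):

# create the Service and inspect its ClusterIP
kubectl create -f api.yml
kubectl get service api-service

# from inside another Pod in the same namespace, using the in-cluster DNS name
curl http://api-service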

LoadBalancer

If your cluster runs in a cloud provider that supports load-balancing, you can create a LoadBalancer service. This allows external access to your cluster through an IP address and port.

The LoadBalancer service essentially creates an internal ClusterIP service and connects the Pods in the cluster to an external load-balancing service provided by your cloud provider.

Let's YAML!

# loadbalancer.yml
apiVersion: v1
kind: Service
metadata:
  name: loadbalancing-service
spec:
  type: LoadBalancer
  ports:
    - targetPort: 80
      port: 80
  selector: # <- select the Pods to expose, by label
    app: front-end-app
    type: front-end

You can also do it from the CLI with:

kubectl expose rc loadbalancerExample --port=80 --target-port=80 --name=loadbalancing-service --type=LoadBalancer

OK, cool. But you might have noticed that there's something quite crucial we don't know yet.

What is the IP address then?

Well, let's interrogate our newly created load balancer:

kubectl describe service loadbalancing-service

Now look for the LoadBalancer Ingress line - that is your load balancer's IP address. LoadBalancers can get a bit more technical, so we'll have a deeper look at them in a later post.
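
Alternatively, the same address shows up in the EXTERNAL-IP column of a plain get (it can take a minute or two for your cloud provider to provision it, during which it shows as <pending>):

kubectl get service loadbalancing-service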

Conclusion

We're done with Services! We've learned quite a bit. Next we'll get more hardcore and look at Scheduling.

Thanks for reading!
