In our last blog post, we introduced the concepts of Deployments and Replication Controllers: key instruments in Kubernetes that scale our pods to meet the demand of our applications in a declarative manner. However, they didn’t address the issue of how to make these collective groups of scaled pods available to users or other pods in our cluster.
Enter the concept of Services, which in Kubernetes provide an “abstraction which defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service”. Through a Kubernetes Service, any logical set of Pods can be made available to users or other services as a consumable entity, often through load balancing across the available pods. Services also allow us a great deal of flexibility and fault tolerance, giving us a reliable way to build resilient systems around ephemeral objects like Pods.
In part 3 of our “Kubernetes for Developers Tutorial”, we’ll provide an in-depth look at how Services operate from a top-level view, and then explore how Kubernetes grants us immense power and functionality by letting us configure them. As always, we’ll include a video going over how you can make use of Services when managing a Kubernetes cluster in Nirmata.
Click below to go to other entries in the Kubernetes for Developers Series:
Part 3: Services
Getting Familiar with Services
Understanding that Services act as persistent entities that provide reliable access to a logical set of pods is really all one needs to know to recognize their importance. However, the value that Services bring to Kubernetes extends beyond this. In our exploration of Services, we’ll start by examining their configuration, which gives us an immediate sense of how to use them.
Typically, in a Service we specify a selector to determine which group of pods the Service will manage, a port to listen on for incoming requests, and a targetPort to forward requests to, on which the backend Pods are listening. This is illustrated in the configuration file for a Service shown below.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Kubernetes makes it easy to discover our services once they’re up and running. A reliable method that Kubernetes uses is injecting environment variables to be consumed by containers running in pods. The following example from the Kubernetes documentation shows what environment variables would be injected for the service “redis-master”.
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
Through these environment variables, a Pod has enough information to communicate with Services in its cluster. Kubernetes also enables DNS resolution for Services, assuming one has elected to integrate DNS into their cluster. This is a popular feature and will be addressed further later in the article.
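To make this concrete, here is a minimal sketch of how application code inside a pod might assemble a Service endpoint from these injected variables. The helper name `service_endpoint` is our own; only the `<NAME>_SERVICE_HOST` and `<NAME>_SERVICE_PORT` variable pattern comes from Kubernetes.

```python
import os

def service_endpoint(name):
    """Build a (host, port) pair for a Service from the environment
    variables Kubernetes injects, e.g. REDIS_MASTER_SERVICE_HOST."""
    prefix = name.upper().replace("-", "_")
    host = os.environ[f"{prefix}_SERVICE_HOST"]
    port = int(os.environ[f"{prefix}_SERVICE_PORT"])
    return host, port

# Simulate the variables Kubernetes would inject for "redis-master"
os.environ["REDIS_MASTER_SERVICE_HOST"] = "10.0.0.11"
os.environ["REDIS_MASTER_SERVICE_PORT"] = "6379"

print(service_endpoint("redis-master"))  # → ('10.0.0.11', 6379)
```

Note that these variables only exist for Services created before the pod started, which is one reason DNS-based discovery is usually preferred.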
Types of Services
Services are remarkable in that they can route requests from clients both internal and external to the cluster in a reliable and efficient manner. At the heart of this capability is kube-proxy, which runs on each node in a Kubernetes cluster.
When a service is created, Kubernetes assigns it a virtual IP address, which makes it available internally to the rest of the cluster. This is accomplished via the aforementioned kube-proxy, which as mentioned in the article “Load Balancing in Kubernetes” on the Rancher website, “allows fairly sophisticated rule-based IP management.” The default method of load distribution is random selection, which chooses any backend pod that a service is managing.
This sheds light on how services can be consumed internally in the cluster; however, the main purpose of services is often to make our scaled pods consumable to the rest of the world. External clients can either connect directly to a Service IP and its port, or to a hostname which resolves to them. The way in which a Service is exposed, though, differs greatly depending on what type of Service we are dealing with.
To make a Service that is available only within the scope of the cluster’s network, one can use the ClusterIP type. Here Kubernetes assigns the Service a virtual IP address from the cluster’s internal service address range. Any pod in the cluster can then reach the backing group of pods through this single, stable IP address. ClusterIP is the default Service type and great for creating Services that need to be consumed by other pods running in our cluster.
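A minimal ClusterIP Service might look like the following; the name `backend-api` and the `app: backend` label are hypothetical, and `type: ClusterIP` may be omitted since it is the default.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-api        # hypothetical Service name
spec:
  type: ClusterIP          # the default type; may be omitted
  selector:
    app: backend           # matches pods labeled app=backend
  ports:
    - protocol: TCP
      port: 80             # port the Service listens on
      targetPort: 8080     # port the backend pods listen on
```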
Another, more flexible type of Service is known as NodePort. With a NodePort, the Service becomes accessible via any node in the cluster on a specific port, which can either be dynamically assigned or specified by the user. This is accomplished via NAT, and is intended to make the Service accessible from outside the cluster. All incoming requests on this port, on any node, are forwarded to the NodePort Service, which performs further routing to the backend pods.
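As a sketch, a NodePort Service could be declared as follows; the names are hypothetical, and the `nodePort` field may be omitted to let Kubernetes assign one dynamically.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend       # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80             # port inside the cluster
      targetPort: 8080     # port on the backend pods
      nodePort: 30080      # optional; default allowed range is 30000-32767
```

With this in place, a request to any node’s IP on port 30080 is forwarded to one of the pods labeled `app: web`.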
The final and very commonly used type of Service is the LoadBalancer. Note that the implementation of this type of Service depends heavily on the cloud provider our Kubernetes cluster is running on. A LoadBalancer Service acts very much as a typical load balancer would, and is provisioned by Kubernetes through the cloud provider. The popularity of this type of Service is due to the fact that Kubernetes users often want to expose a group of pods as a replicated microservice to external clients in a simple manner.
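Declaratively, the change from the previous examples is small; assuming a cloud provider that supports it, a manifest like this (names hypothetical) would provision an external load balancer whose address appears in the Service’s status once ready:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: public-web         # hypothetical Service name
spec:
  type: LoadBalancer       # cloud provider provisions the balancer
  selector:
    app: web
  ports:
    - protocol: TCP
      port: 80             # port exposed by the load balancer
      targetPort: 8080     # port on the backend pods
```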
As one might guess, users will differ greatly in the capabilities they want their Services to utilize. As a result, Services in Kubernetes are quite configurable, and interact with many other aspects of the Kubernetes cluster, including the cloud provider and DNS configurations.
For example, if DNS is used with a Kubernetes cluster, each Service becomes reachable in the cluster through a DNS query for <service-name>.<service-namespace>. This ability to easily configure pods to rely upon other services within the cluster makes microservice-based and other distributed applications extremely easy to run on Kubernetes.
Services can also be made more flexible, to the point of not falling into any of the three categories mentioned in the previous section. Such Services are called Headless Services and are not assigned an IP within the cluster. As a result, they are also not load balanced by kube-proxy, allowing the user to integrate their own methods of discovering the backend pods.
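A Headless Service is created by explicitly setting `clusterIP: None`; a minimal sketch (names hypothetical) looks like this. DNS queries for such a Service return the individual pod addresses rather than a single virtual IP.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: stateful-db        # hypothetical Service name
spec:
  clusterIP: None          # makes the Service headless
  selector:
    app: db
  ports:
    - protocol: TCP
      port: 5432           # port on the backend pods
```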
Kubernetes Services can also make use of Session Affinity. By setting the field service.spec.sessionAffinity to “ClientIP”, Kubernetes will direct traffic from any given client to the same backend pod. This can be customized further by setting timeouts for these sticky sessions. By enabling sticky sessions, Kubernetes makes it easier to run stateful applications behind Services.
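Sketched as a manifest (names hypothetical), session affinity with a custom timeout looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sticky-web         # hypothetical Service name
spec:
  selector:
    app: web
  sessionAffinity: ClientIP      # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # sticky-session timeout (default is 3 hours)
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```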
While this section touched upon some additional features of Services, the amount of customization and flexibility can vary greatly depending on the configuration of your cluster. Our goal in this article was to demonstrate the power that Services offer within Kubernetes and how you can use them to achieve different end goals through your Kubernetes cluster.