Kubernetes For Developers Part 2 – Replica Sets and Deployments

In the previous blog post in our Kubernetes For Developers Series, we discussed what Kubernetes Pods are and the unique functionality they offer.

By introducing Kubernetes Pods as the first topic of our series, we acknowledged the Pod’s role as the basic building block of Kubernetes. Understanding Pods gives developers an introduction to the “Kubernetes way” of orchestrating containerized workloads, and it brings the larger architecture of Kubernetes, including the need for the various abstractions surrounding Pods, into sharper focus.

However, Pods alone don’t give us nearly enough functionality to build robust applications and services that can communicate with one another. One of the most fundamental ways in which Pods fail in this regard is that they are unreliable by nature.

A singleton Kubernetes Pod is highly susceptible to being preempted and/or rescheduled on a different node. Therefore, we need some additional resource to keep track of the changing IP address of our pod. When attempting to scale instances of a pod in order to increase the availability and fault tolerance of our application, this requirement becomes all the more important. In the next two blog posts we will discuss two resources that enable us to solve this problem: Workload API Objects and Services.

The first is a type of Kubernetes API Object intended to encapsulate the functionality of orchestrating multi-pod workloads. In this blog post, we’ll become familiar with one of these Workload API Objects, the Deployment, as well as another Kubernetes resource that Deployments manage: ReplicaSets.

By the end of this blog post, you should have a good understanding of what Replica Sets are and how Deployments extend their functionality. This will give us the necessary background knowledge to do a deep dive into Services in the next entry in our series.

As with all entries in our “Kubernetes for Developers Series”, we will end this post with a video that shows how the concepts discussed in the blog post are used in the Nirmata platform. This gives developers not only a conceptual understanding of Kubernetes concepts but also the ability to use Nirmata to put these concepts into action!

Table Of Contents

Click on one of the links below to be taken to any of our other entries in this series:

Part 1: Pods

Part 2: Replica Sets and Deployments

What are ReplicaSets?

In our previous blog post, we mentioned that one of the key advantages of Pods is that they allow developers to group sets of containers as an application unit and easily begin orchestrating them as workloads. Once a pod’s template has been created, instances of this pod can be scaled horizontally to make a developer’s multi-container applications more highly available. To manage the scaling of pods, Kubernetes uses an API object called a ReplicaSet.

According to the Kubernetes documentation, ReplicaSets ensure “that a specified number of pod replicas are running at any given time.” As a side note, the documentation repeatedly points out that ReplicaSets are the “next-generation Replication Controller”. While Replication Controllers are worth understanding and help contextualize ReplicaSets, we will restrict the scope of this section to the ReplicaSet.

As previously mentioned, a ReplicaSet is considered to be a type of Kubernetes API Object. Therefore, just like any other member of this family, ReplicaSets require values for the fields of apiVersion, kind, and metadata to uniquely identify a type of API object across Kubernetes versions.

A ReplicaSet also has an additional field that must be set: spec.template, which is a template for a Kubernetes Pod without the apiVersion and kind fields, since those are already present in the ReplicaSet. The ReplicaSet then uses this template to create the pods it manages. Below is an example of a spec.template for a ReplicaSet.

metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']

What hasn’t been mentioned yet is how to update the number of instances that a ReplicaSet manages. To do this, the .spec.replicas field can be updated to change the number of Pods that should run concurrently. Note that this field does not guarantee that exactly .spec.replicas pods will be running at any given moment. The Kubernetes documentation notes that the number of pods “running at any time may be higher or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully shut down, and a replacement starts early.”

A ReplicaSet also has a .spec.selector field, which it uses to determine which pods to manage. Every pod whose labels match the .spec.selector of a given ReplicaSet falls under that ReplicaSet’s jurisdiction. This means that a ReplicaSet can manage Pods that it did not explicitly create.
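Putting these fields together, a complete ReplicaSet manifest might look like the following sketch, which wraps the pod template shown earlier with a replica count and a matching selector (the name myapp-rs is just an illustrative choice):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs           # illustrative name
spec:
  replicas: 3              # desired number of concurrently running pods
  selector:
    matchLabels:
      app: myapp           # must match the labels in the pod template below
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: busybox
        command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
```

Note that the selector’s matchLabels must match the labels declared in the pod template, or Kubernetes will reject the manifest.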

As it turns out, in practice ReplicaSets are rarely used independently to manage pods. Instead, the most popular choice is to use a layer of encapsulation that makes use of ReplicaSets: the Workload API objects. The type of Workload API object we will focus on is the Deployment.

So, why do we need Deployments?

The need for Deployments comes more from the transition from Replication Controllers to ReplicaSets than from any lack of capability in ReplicaSets. For this to make sense, one should understand that Replication Controllers were the original resource used for scaling Kubernetes Pods.

One of the key features of Replication Controllers was the “rolling-update” functionality. This feature allows the pods managed by a Replication Controller to be updated with minimal or no outage of the service those pods provide. To accomplish this, instances of the old pods are replaced one by one.

However, Replication Controllers were criticized for being imperative and lacking flexibility. As a solution, ReplicaSets and Deployments were introduced as their replacement.

While ReplicaSets can still manage pods and scale instances of a given pod, they cannot perform rolling updates, among other features. Instead, this functionality is provided by the Deployment, which is the resource that a Kubernetes user today would most likely interact with.
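The rolling-update behavior is configured on the Deployment itself through its .spec.strategy field. As a sketch, the fragment below (with illustrative values) tells a Deployment to replace pods gradually, allowing at most one pod below the desired count and at most one extra pod at any point during an update:

```yaml
spec:
  strategy:
    type: RollingUpdate      # the default strategy for Deployments
    rollingUpdate:
      maxUnavailable: 1      # at most one pod below the desired count during an update
      maxSurge: 1            # at most one pod above the desired count during an update
```

Setting type to Recreate instead would kill all existing pods before creating new ones, which is the behavior rolling updates are designed to avoid.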

How Deployments Work

Deployments encapsulate ReplicaSets and pods in the Kubernetes resource hierarchy and provide a declarative method of updating the state of both. One way to access this declarative interface is through kubectl. While we won’t provide a tutorial for managing Deployments via kubectl, we will use example kubectl commands to illustrate how Deployments let us update the state of their ReplicaSets and pods.

Just like ReplicaSets, Deployments are Kubernetes API Objects and require the apiVersion, kind, and metadata fields. The Kubernetes Documentation provides an example nginx-deployment.yaml, which is a great way to demonstrate the basic functionality of Deployments. This file can be seen below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
     app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


In the above example, we have a Deployment named “nginx-deployment”. Its selector, which matches the label “app: nginx”, determines which pods in the cluster this Deployment manages. In this case, it references only the nginx pod defined in the Deployment’s pod template; this pod template plays the same role as the pod template in a ReplicaSet from the previous section. Notice also that the Deployment specifies that 3 replicas of the nginx pod should be available. To accomplish this, it creates a ReplicaSet with all the information necessary to scale our nginx pod to 3 instances, and then provides a declarative interface for us to manage both that ReplicaSet and its pods.

A visualization of the concepts we have already described about Deployments can be found below.

The declarative interface shown in the image above is the method by which a user changes the desired “state” of a Deployment. Whether the change arrives through kubectl or another client, the Deployment checks whether the current state matches the user’s desired state; if the two do not match, it works to make them consistent. An example of this declarative interface can be seen when we use kubectl to update the number of replicas that the Deployment manages.

kubectl scale deployment nginx-deployment --replicas=10 
deployment "nginx-deployment" scaled

Such a command results in our Deployment using its existing ReplicaSet to scale the number of pods to 10 instances. While we won’t list all the capabilities of Deployments here, it is worth noting that they offer a great deal of flexibility for scaling pods declaratively.
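To build intuition for the control loop behind this behavior, here is a toy Python sketch. This is not real Kubernetes code, just an illustration of the declarative model: the user only edits the desired replica count, and a separate reconcile step creates or deletes “pods” until the observed state matches.

```python
# Toy illustration of declarative reconciliation -- NOT real Kubernetes code.
# The user declares a desired state; reconcile() converges the current state
# toward it, much like a Deployment drives its ReplicaSet.

class ToyDeployment:
    def __init__(self, name, replicas):
        self.name = name
        self.desired_replicas = replicas
        self.pods = []        # names of currently "running" pods
        self._counter = 0     # used to generate unique pod names

    def scale(self, replicas):
        """Declarative update: only the desired state changes here."""
        self.desired_replicas = replicas

    def reconcile(self):
        """Create or delete pods until current state matches desired state."""
        while len(self.pods) < self.desired_replicas:
            self._counter += 1
            self.pods.append(f"{self.name}-pod-{self._counter}")
        while len(self.pods) > self.desired_replicas:
            self.pods.pop()

d = ToyDeployment("nginx-deployment", replicas=3)
d.reconcile()
print(len(d.pods))   # 3

d.scale(10)          # analogous to: kubectl scale ... --replicas=10
d.reconcile()
print(len(d.pods))   # 10
```

The key point mirrored here is that scale() never touches pods directly; all changes to the running state flow through the reconciliation step, which is what makes the interface declarative rather than imperative.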

To learn more about Deployments and the functionality they can offer you, check out the Kubernetes Documentation for Deployments.

Video

Coming soon.
