As more companies elect to use Kubernetes for container orchestration and workload management, DevOps engineers, engineering managers, and software developers alike are looking to add Kubernetes to their toolbelts.
Unfortunately, Kubernetes is not known for its simplicity. Like any distributed system, the platform is designed to be fault-tolerant and scalable, something it does extremely well. Its robust architecture, however, comes at the cost of inherent complexity.
Many, such as the renowned Kelsey Hightower, have produced content to educate the developer community about how to install Kubernetes and orchestrate workloads on it. While extremely detailed and effective for certain professionals, such content often embraces, rather than simplifies, the complexity of Kubernetes. This can make it difficult for newcomers to begin using the technology quickly.
At Nirmata, we believe in applying a time-tested software principle, abstraction, to enable any organization to harness the power of Kubernetes. Our platform helps developers and DevOps engineers understand how to orchestrate workloads on Kubernetes and, eventually, maintain their Kubernetes clusters. To help developers on their Kubernetes journeys, we’ve produced a series of blog posts and videos showing how to orchestrate containerized workloads using Kubernetes with Nirmata.
In this series of blog posts, you’ll read about key Kubernetes concepts that provide the foundation needed to understand how Kubernetes orchestrates your workloads. We’ll go deep enough in these posts that you gain a fundamental understanding of how Kubernetes works, and we’ll supply YouTube videos showing how to implement these concepts in Nirmata.
If you are already experienced with Kubernetes, you can skip the blog posts and go directly to our series of YouTube videos, which show you how to easily apply these concepts using Nirmata.
What You’ll Learn in This Tutorial
We’ve designed this tutorial to help developers understand how the platform works using a bottom-up approach. That means we’ll start off with the basic unit of Kubernetes orchestration, the pod, and work our way up to higher-level concepts such as Deployments, Services, and other essential aspects of Kubernetes.
Keep in mind that we may not have finished publishing our entire “Kubernetes For Developers” series when you are reading this article; we will update the links in the list below as these posts become available.
Part 1: “Working With Pods”
Pods: The Basic Unit of Kubernetes
While Kubernetes is considered a container orchestration platform, its true building block is the pod. According to the Kubernetes documentation, “Pods are the smallest deployable units of computing that can be created and managed in Kubernetes”.
What exactly is a pod? Essentially, a pod is a layer of encapsulation for a set of containers that are typically co-located from an application perspective. For example, consider a Spring Boot application running inside a container that is coupled with an nginx container acting as a network proxy. When scaling instances of our Spring Boot container, we would also like to scale instances of our nginx container. Pods provide this functionality.
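To make this concrete, here is a minimal sketch of such a pod. The image names and ports are hypothetical placeholders for illustration, not part of any real deployment:

```yaml
# A hypothetical pod pairing a Spring Boot app with an nginx proxy sidecar.
# Image names and ports are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: spring-boot-app
spec:
  containers:
    - name: app
      image: example/spring-boot-app:1.0  # placeholder application image
      ports:
        - containerPort: 8080             # the Spring Boot HTTP port
    - name: proxy
      image: nginx:1.25                   # proxies traffic to the app container
      ports:
        - containerPort: 80
```

Scaling this pod scales both containers together, which is exactly the coupling described above.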
Kubernetes pods also use a useful and interesting model of container networking. One key property is that all containers in a pod operate on the same “network stack”. Conveniently, this enables containers in a pod to communicate over localhost.
Traditionally, namespaces and cgroups are used to isolate Docker containers running on the same machine from one another. In Kubernetes, containers running in a pod share these “facets of isolation”, as stated in the Kubernetes pod documentation. As previously mentioned, this manifests in the ability of containers in a pod to communicate over localhost. Additionally, containers co-located in the same pod can use other forms of inter-process communication (System V semaphores, POSIX shared memory, etc.).
To create a shared network stack between the two containers, Kubernetes utilizes Docker’s ability to have a container join an existing container’s network interface. As Mark Betz mentions in his fantastic blog post on Kubernetes networking, sharing a network interface “has a few implications: first, both containers are addressable from the outside on 172.17.0.2, and on the inside each can hit ports opened by the other on localhost.” The Kubernetes documentation further elaborates on this idea in saying that containers on different machines “may end up with the exact same network ranges and IP addresses.”
Such a model, as visualized in the image below, means that containers running in a pod have extremely accessible methods of intercommunication over localhost. This strengthens the idea of a pod as a unit of encapsulation for containers intended to be co-located with one another.
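As a quick illustration of this shared network namespace, here is a sketch (the container images and the polling loop are our own illustrative choices) in which a busybox sidecar reaches an nginx container in the same pod over localhost:

```yaml
# Illustrative only: a busybox sidecar reaching the nginx container in the
# same pod over localhost, which works because both containers share the
# pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
    - name: web
      image: nginx:1.25   # listens on port 80 inside the shared network stack
    - name: probe
      image: busybox:1.36
      # wget reaches nginx via localhost -- no pod or cluster IP needed
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]
```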
This conceptualization of a pod doesn’t explain, however, how pods direct incoming traffic to a specific container. Pods also rely on forwarding rules, which enable our containers, via the cni0 virtual network interface, to send and receive network communications. These forwarding rules allow traffic to be directed through, or received from, the pod’s network interface: the gateway into the Kubernetes cluster environment that exists outside of the scope of the pod.
Note that in Kubernetes, each pod is given its own IP address. Direct pod-to-pod communication is therefore possible, yet it is not recommended and is rarely used in practice. Instead, Kubernetes offers other resources, such as the Service, to reliably facilitate communication between pods of different types. Continue reading to the section below for further elaboration on this idea.
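To see that each pod really does receive its own IP, here is a small sketch (the pod and container names are hypothetical) that uses the Kubernetes downward API to inject the pod’s IP into a container:

```yaml
# A sketch using the Kubernetes downward API to show a pod its own IP,
# illustrating that every pod receives a unique address in the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: pod-ip-demo
spec:
  containers:
    - name: show-ip
      image: busybox:1.36
      command: ["sh", "-c", "echo \"my pod IP is $POD_IP\"; sleep 3600"]
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP   # injected by Kubernetes at runtime
```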
Working with Pods
One of the key takeaways when working with Kubernetes is that pods rarely occur as singletons in a typical cluster. Instead, pods are intended to be scalable and, to an extent, ephemeral: a pod may crash or be rescheduled onto another node at any time. Therefore, other constructs are necessary to provide reliable access to the service that a set of pods offers.
In our subsequent blog posts, we’ll begin talking about workload API objects and Services. These are the Kubernetes resources that developers will actually be working with the majority of the time. At Nirmata, it was important that our platform felt like an extension of the existing Kubernetes approach to container orchestration. Therefore, you’ll find that we’ve designed Nirmata to be easy to use yet still authentic to the “Kubernetes way of doing things”.
Video
Now that you have an understanding of pods, check out the first video of our Kubernetes For Developers YouTube series below.