Jim Bugwadia: “Hi, everybody; this is Jim Bugwadia, cofounder and CEO of Nirmata, and welcome to our second episode of our webinar series on Enterprise-Wide Kubernetes. So today we’re going to cover security as our topic and I’ll kind of dive into a few more details and some of the key questions that we’re going to answer as part of this session.
So first off, as we’re seeing in this space, every enterprise today, and perhaps every business, is becoming a software business and a digital business to some extent. Of course, businesses need to deliver their products and services faster, better, and more efficiently, and this has driven the huge need for cloud native technologies like containers, Kubernetes, and of course cloud computing itself as a platform for greater efficiency, agility at scale, and portability for applications.
So it’s interesting that all of this has led to the point where Kubernetes has become the de facto standard for container orchestration and container management. And as enterprises look to build applications which are portable, which can run in either private or public clouds, in many ways Kubernetes has become the next level of operating system, and its constructs are being used to define cloud native applications themselves.
So when you’re looking at a Kubernetes stack there are several layers to think about, and how that stack gets deployed in an enterprise, right? Of course, like any application, Kubernetes itself needs compute, networking, and storage, and those are delivered either through an infrastructure-as-a-service provider, a private cloud, or even just bare metal or virtualization.
Also, for Kubernetes you are going to need to configure and manage how Kubernetes reacts to code changes, to version changes of your application, and to upgrades of the Kubernetes components themselves; those need to be versioned and managed. And beyond just layer two and layer three networking, there are concerns about how you manage traffic flows coming into your cluster, which is what the Ingress component can help with.
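To make that concrete, here is a minimal Ingress sketch; the hostname, Service name, and port are hypothetical, and an Ingress controller (for example ingress-nginx) must be running in the cluster for the resource to take effect.

```yaml
# Minimal Ingress sketch (hypothetical names): routes external HTTP traffic
# for app.example.com to a Service named "webapp" inside the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp   # hypothetical Service name
                port:
                  number: 80
```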
There are also things like logging and monitoring that have to be managed, both for the components in the Kubernetes stack and for the workloads, the applications you’re running on the stack. And of course you want security, which is going to be our key topic today, end to end for your stack.
And finally, the point of all of this is to deliver applications faster and manage workloads more efficiently, so there needs to be a layer of application management. When you’re looking at designing and delivering a Kubernetes stack, these are the key components that have to be managed and delivered together as a complete solution.
So the interesting thing is you can compose these from best-in-class tools, and of course most enterprises already have existing networking, storage, and other tools, or you can look at a curated version, a single source, for a lot of these components, right? The choice we see enterprises make is, of course, to be more flexible and composable, and to pick the best-in-class tool for each one of these boxes.
So like I mentioned, our focus today is going to be on security and what we will do is we’re going to look at some of the key considerations, the key questions an enterprise architect has to go through when deciding how to secure their Kubernetes stack, as well as workloads which are running in containers and are managed through Kubernetes.
So with that I want to introduce our guest today, Patrick Maddox, who is the Senior Director of Solutions Architecture at Twistlock. He’s an industry veteran, having held roles at Puppet prior to Twistlock, and also at Verizon. So Patrick, welcome to this session and thank you for joining us.”
Patrick Maddox: You’re very welcome; thanks for the intro, Jim. So as Jim mentioned, we’re going to talk about securing Kubernetes itself, the infrastructure supporting Kubernetes, as well as a number of other things. I think Jim has some questions to lead off with. I don’t know if you want to ask me these, Jim, or if you want me to just go through them?
Jim: Yeah; absolutely. Let’s just dive in; at a high level it’s very simple. How do I secure my cluster components? How do I secure my workloads? And how do I get to a decision on what tools to use for the stack?
Patrick: Okay; well, I will go ahead and cover those as we progress through it. I’ll kind of take over forwarding the slides and we’ll just go through it.
Jim: Sounds good.
Patrick: So Jim mentioned I work for Twistlock; Twistlock is a company that’s been around for about three years. If you’ve been using containers, it’s highly likely you’re already using software that we wrote. One of our landmark contributions was the authorization framework you find in Docker, and some secrets management software as well, but today we’re really going to cover basic security for a deployment.
To set some groundwork, or a foundation, I should describe what Twistlock is and our approach to security. We focus on security for cloud native workloads, and that really applies across two axes. The left-to-right axis is the lifecycle of artifacts that you’ll find in cloud native environments: integration with your build system or maybe even your developers’ desktops, scanning objects as they sit in registries, and then the full operating production stack.
That applies regardless of service provider and across the different layers of abstraction that ultimately get you to a running workload in a Kubernetes environment, and all the frameworks involved, even down to the level of individual processes, network connectivity, and file system persistence inside the instantiated workloads running on top of a Kubernetes infrastructure.
I mentioned that container lifecycle; Twistlock is really focused across five main swim lanes of security, and we refer to them as swim lanes because different capabilities come to bear at different points across the container lifecycle. So in the build system you start incorporating vulnerability management and compliance. As you ship those objects up to registries you’re again maintaining your vulnerability management and compliance posture.
But then when it comes time to running the environment – and this is somewhat where we’re going to focus today – you’re talking about things like access control, still maintaining vulnerability management and compliance, runtime defense, cloud native firewalling. And really across all these, when we talk about an operating stack you’re going all the way from that workload that’s satisfying a business need, all the way down to the infrastructure underneath it, and that’s really what we’re going to dive into next.
So let’s start talking about securing the stack, and it really is a stack. You’re going to start from the fundamental first asset and march your way all the way up, because containerized workloads represent successive layers of abstraction above the underlying hardware.
Fundamentally, we’re going to start by securing the host, making sure that we’re operating from a secure posture there. Then we’re going to talk about securing Kubernetes itself and implementing role-based access control for Kubernetes. How do you evaluate the health or security posture of your infrastructure against known standards such as the Linux, Docker, and Kubernetes CIS benchmarks?
And then what do you do next? So if you’ve secured the operating infrastructure what are the next things you need to do to secure the workloads that are ultimately running on it? You don’t run Kubernetes just to run Kubernetes; you run Kubernetes to support a business need or a workload.
So let’s go through some first principles of securing that host. Ultimately, you want to reduce your attack surface, so you’re going to start with the most minimal install of the host operating system you can. You’re likely going to implement some configuration management, or some process around it, to make sure that your packages are always up to date and that unneeded services are removed; you’re continually reducing that attack surface.
And it’s not just reducing the attack surface; you’re also reducing the complexity present in the environment, because you’re going to be adding complexity later, so the complexities shift in a Kubernetes environment. You want to make sure you’re securing your logins and that you actually implement intrusion detection systems; there’s no easy button for all of this. You still need to adopt secure practices by default for your host.
So once you’ve secured those hosts you march your way up to the next tier, and we’re going to cover a variety of things here, but from a first-principles perspective you start thinking about what you need to do to secure Kubernetes itself. This covers a variety of territory, like making sure you’re implementing role-based access control, which we’ll talk about a little bit next; implementing layer three protection for Kubernetes; and making sure you have secure configuration for your master nodes as well as your worker nodes.
Secure etcd; make sure that the files associated with etcd are locked down with minimum permissiveness. And ultimately, when you are dealing with secret data, whether it’s authorization tokens or passwords, even up into the workloads, you need a secrets injection policy that does not put secrets at risk early. You’re not shipping secrets data around; you’re injecting that secret data at the last possible moment it’s required, by taking advantage of key value stores, either embedded in Kubernetes or through another tool.
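One concrete measure along these lines, shown here as a minimal sketch rather than a complete hardening guide, is enabling Kubernetes encryption at rest so Secret objects are not stored in etcd as plain base64. The key below is a placeholder you would generate locally, and the file is passed to the API server via its --encryption-provider-config flag.

```yaml
# Minimal sketch: encrypt Secret objects at rest in etcd.
# The key is a placeholder; generate a random 32-byte value with, e.g.,
# head -c 32 /dev/urandom | base64, and protect this file itself with
# minimum permissiveness since it contains key material.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder
      - identity: {}   # fallback so previously unencrypted data stays readable
```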
Let’s touch a little bit on Kubernetes RBAC. You should adopt a whitelist security model that is additive, and by default that’s really what happens here. You need to focus on the roles and how you’re binding those roles within Kubernetes, and similarly, with your clusters and the cluster roles.
You need to very deliberately limit access to the nodes. Ultimately, you don’t want people interacting directly with the nodes; access to the containers in the cluster, and to the assets you deployed on top of Kubernetes, should all go through kubectl. If people interact directly with the nodes, and certainly if they can reach the container runtime itself, they can still access all the containers that Kubernetes instantiated on the node.
That’s not the most secure way to do it. If you limit access to your nodes you’ve created a funnel for people accessing your systems, or accessing the workloads you’ve instantiated, and doing it all through kubectl allows you to apply the same role-based access control, whereas with direct node access there are other mechanisms to reach those workloads.
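As a minimal sketch of that whitelist model (all names here are hypothetical), a namespaced Role grants only the verbs listed, and a RoleBinding attaches it to a group; anything not explicitly granted stays denied:

```yaml
# Minimal RBAC sketch (hypothetical names): read-only access to pods
# in one namespace for one group; everything else remains denied.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
  - kind: Group
    name: team-a-devs                      # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```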
Finally, and we’ll show a little bit of this; you really want to apply the CIS benchmarks. There are benchmarks established for running that infrastructure securely; making sure the configuration for the infrastructure is done in the same way. These benchmarks apply to the host through the Linux CIS benchmarks. There are Docker CIS benchmarks if you’re using Docker as your runtime engine.
There are discrete benchmarks that were built for Kubernetes that apply to both the master nodes and the worker nodes, and I’ll show you where Twistlock has all of these built in so you can quickly evaluate the health of your infrastructure from a configuration perspective as it relates to the CIS standards.
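Twistlock ships these checks built in, as Patrick shows below. Purely as an open-source point of comparison, a common way to run the Kubernetes CIS benchmark yourself is Aqua Security’s kube-bench, sketched here as a one-shot Job; the exact image tag, mounts, and flags vary by kube-bench version and Kubernetes distribution, so treat this as an illustrative outline.

```yaml
# Hedged sketch: run the Kubernetes CIS benchmark once with kube-bench.
# Real deployments usually mount additional host paths (e.g. /var/lib/kubelet)
# so the tool can inspect node configuration files.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                  # lets kube-bench inspect host processes
      restartPolicy: Never
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```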
When it comes to securing the network and the workloads there are a number of different things you need to do. You need to make sure you’re automatically segmenting your infrastructure, and this really comes down to what you actually put on top of your Kubernetes infrastructure. Tools like Twistlock allow you to automatically segment your infrastructure and automatically model the known-good behavior of an application, so you can act on deviations in behavior.
And ultimately what you’re trying to do, especially with workloads that are container and microservice centric, is eliminate manual policy creation, and thus its maintenance, as much as you can. You ultimately need to select tools, and take an approach to security, that treat automation as a first-class citizen.
I think everybody is familiar with the pattern where we create a firewall rule and immediately that firewall rule begins to rot. That’s very true if you create manual policies that reference legacy artifacts; IP-to-IP access rules really don’t apply terribly well in the context of a Kubernetes workload, because Kubernetes doesn’t necessarily care about the IP. It cares about the individual service and the capabilities of that service, so using tools that are native to these environments is a real advantage to the operators.
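A minimal NetworkPolicy sketch makes the contrast concrete: the rule below (labels and namespace are hypothetical) selects workloads by label rather than by IP, so it keeps applying wherever Kubernetes schedules the pods. Note it only takes effect if the cluster’s CNI plugin enforces NetworkPolicy.

```yaml
# Minimal sketch (hypothetical labels): only pods labeled app=frontend
# may reach pods labeled app=api on TCP 8080; no IP addresses involved.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```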
I think now we’re at a point where I’m actually going to share my screen and show you what Twistlock does and some of the capabilities that are built in for Twistlock, in order to help you secure the workloads and what our approach to security is. Now, Twistlock is a very comprehensive tool so I will just be showing you a small piece of our capability as it applies to Kubernetes, but we’re going to go through some of the compliance benchmarks.
We’re going to talk about how we model applications. We’re going to show you some of the Kubernetes-specific features and capabilities that are built in, so bear with me while I share my screen. And Jim, would you confirm that you can see my screen so I’m not dodging back to the presenter window too much?
Jim: “Yes, we can, Patrick; thank you. And just a quick note to our audience and everybody on, as you think of questions please go ahead and enter them in the chat panel and we will make some time to answer them as we go along.”
Patrick: “Fantastic; thanks, Jim. What we’re looking at right here is the Twistlock dashboard. There are a couple of different things that stand out; this is really just a summary view, but the thing I’m going to start with is this radar view. It’s an automatically generated view of the Kubernetes services running in my environment, and I’m going to touch on it really quickly.
You can see we have automatically learned and automatically microsegmented the infrastructure; that’s one of the key components of running a secure infrastructure. We’ve also got a namespace-aware topology of all the different assets in my environment, the vulnerability posture of those assets, including the Kubernetes components, the runtime events, all of that.
If we jump over to the first principles of making sure you’re running your infrastructure securely, we’re going to start by looking at the benchmarks that Twistlock automatically includes. Twistlock includes over 300 different compliance checks out of the box. Another way to look at compliance checks is really to ask: am I running my infrastructure in a way that’s configured securely by default? And I can choose how I configure that.
Twistlock includes the Docker CIS benchmarks, the Kubernetes CIS benchmarks, benchmarks we’ve developed ourselves, and finally the Linux CIS benchmarks, so you can start applying security first principles to all the different assets that make up your runtime stack.
Twistlock also includes prebuilt templates so you can have these benchmarks configured to apply to a given standard, whether it’s NIST Special Publication 800-190 or, say, GDPR guidance you’re trying to adhere to. And you can apply these benchmarks as broadly or as narrowly as you would like, but fundamentally, think of applying benchmarks to a system as a way to evaluate its configuration posture.
When it comes down to securing the workloads within your container infrastructure, Twistlock includes a number of different capabilities for automatically assembling a model of container and service behavior. So if we look at, say, the Kubernetes infrastructure, you can see that if I go to my API gateway, Twistlock has automatically modeled out the individual processes that run inside the container. You start securing these workloads automatically, so that with a tool like Twistlock you’re working from a minimum permissive footprint, both for the Kubernetes system itself and for the workloads layered on top.
If we dive into this a little more, you can see how you can use these automatically generated rules, again with automation as a first-class citizen, to look at things like detecting Kubernetes attacks. These kinds of settings ask: can I make sure the workloads I’m running aren’t accessing, say, the kubelets running on the hosts; have I segmented my infrastructure properly?
And then you get into also looking at whether your hosts are secure, so are my services that are running on my host configured in a way that has the least permissive model attached to them? Am I doing that all automatically? Am I applying these models automatically as I increase the capacity across my infrastructure, etc?
And then just to dive real quickly back to this automatically generated topology; using a tool like Twistlock allows you to take advantage of namespaces. It allows you to do things like segment your infrastructure to a minimum viable connectivity model automatically, and then enforce it. It allows you to gather all the data about your infrastructure, where assets are running, etc., and present it to you in a very straightforward and easy way, so you, as an operator, can determine the overall risk profile associated with running a workload on top of Kubernetes in your infrastructure.
That’s all I was planning on showing for a demo; there’s a lot more underneath all of this, comprehensive vulnerability management, you know, and much deeper runtime protection than I showed, but at a high level these are the things you sort of need to consider when it comes down to running workloads inside of Kubernetes and securing Kubernetes itself.”
Jim: “Very cool; so a few quick questions, Patrick, and certainly we’ll see if there are more questions from the audience. But from the demo you went through, on that network segmentation component you mentioned namespaces; is it also looking at network policies, or how does it learn about the traffic?”
Patrick: “Yeah; that’s a great question. Twistlock is absolutely namespace aware. My environment had an Ingress controller, so I’m watching traffic traverse different namespaces. How we’re able to do that is by observing traffic in a pitcher/catcher relationship: we observe a service initiating a connection to an adjacent service, we map out that connectivity, we build access rules around it automatically, associate them with the running services, and then represent the minimum connectivity model without an operator really having to do anything.
We do that by observing the initiation of traffic and having what we call the Defender on every single host in the infrastructure, so we can observe the traffic in that pitcher/catcher relationship.”
Jim: “Okay; so is this something that can be tuned over time? And what if there is a periodic connection, let’s say one that happens once a day, things like that? How would I configure for those?”
Patrick: “Yeah; so there are a bunch of different ways this happens. By default our models are generated in the first 24 hours that we observe a service. Those models are associated with the image SHA, so if there’s a new version of the service we start building a new model for it automatically, but you can also manually trigger learning modes, manually stop the learning modes, and trigger fresh learning periods.
And since we’re a fully API-driven product you can integrate this with your deployment or testing pipelines, but the default operating position is: observe traffic for 24 hours, assemble a first pass at a model, and then give the operators options to add to the model, or use the automatically generated model as a baseline if you want to create your own manually. You can use our models as templates for manual policy creation if you had [unintelligible 00:22:24] that was somehow not being captured, or you don’t want it to learn automatically. There are a lot of different options in the configuration of the tool.”
Jim: “So I can actually train that model maybe in a staging or test environment and then put it into practice in production or do I have to do it in the environment where it’s running?”
Patrick: “No; that’s actually an excellent point to highlight. It somewhat depends on what your deployment model is. If you have a lower environment I highly recommend training the models in that lower environment, and then you can graduate them to production.
You can also export the models and manually create policies if you don’t want the learning to happen in your production environment, provided your lower environment is a reasonable facsimile of production. Not all organizations are at the level of maturity where they have a preprod that actually mimics their production environment, but if you do, you absolutely can do that.”
Jim: “Okay; well, that’s very cool. One other question I had: you mentioned secrets, right, and there is a lot of buzz and discussion in the community about dynamic secrets. There are tools like Vault from HashiCorp which support that. Maybe you can quickly explain what exactly dynamic secrets are and why they matter, and then whether Twistlock can help with that?”
Patrick: “Yeah; so the concept of secrets is, because of the nature of containers and images themselves you don’t want to embed anything in them. You don’t want to embed sensitive or secret data, whether that’s API keys or certificates or key value pairs like database password equals this is my database password. So the concept here is you’re storing that data in a vault or some sort of location, and HashiCorp Vault is a perfect example of this.
So you store those key value pairs in a segregated store and then you inject them into the container at runtime. What Twistlock allows you to do is map, with a lot of resolution: this particular workload, on this particular host, matching this particular image with this particular label, means I need to go grab this secret from the store and put it in a place where the application can reach it. So the secret is only present in the workload while it’s running, and when the workload is torn down the secret was never left sitting in some static file that a malicious attacker could go and grab.
The idea here is that you separate the artifacts from the data; it’s another layer of keeping your data outside of your application and injecting it at the last possible moment. This ultimately came about, I think, because in a lot of legacy models you embed everything in the image, and you really don’t want to do that with secrets data; that’s not best practice.
Twistlock supports a variety of different secret stores, whether it’s HashiCorp Vault, any of the secret stores present in AWS, GCP, or Azure, or, say, CyberArk, so you can inject those secrets at the last possible moment, thus lowering the risk profile of the images as you’re developing them.”
Jim: “Got it; so it’s basically from – I’m a software developer so thinking in programming terms it’s more like late binding or dynamic binding of that secret to where it actually needs to be, and deferring that to the last moment possible?”
Patrick: “Yeah; and we support two main mechanisms of secrets injection: you can inject them as files or you can inject them as environment variables, so it depends on what your needs are, yeah.”
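For reference, here is what those two injection paths look like in plain Kubernetes terms, as a minimal sketch with hypothetical names (Twistlock’s own mechanism injects from external stores like Vault, as Patrick describes, rather than from this built-in Secret object):

```yaml
# Minimal sketch (hypothetical names): one Secret surfaced both as an
# environment variable and as a read-only file inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0        # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials               # hypothetical Secret
              key: password
      volumeMounts:
        - name: db-creds
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: db-creds
      secret:
        secretName: db-credentials
```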
Jim: “Cool; all right. Also, there’s a question from the audience – the question is, is the Twistlock agent deployed as a container in every node?”
Patrick: “Yeah; so in the context of an agent, we refer to this as the Twistlock Defender, and Twistlock’s deployment model is really straightforward. You’re deploying a single container per node, and all it is, is just another container running on the nodes in your infrastructure. You’re not running additional processes. You’re not manipulating your image builds. You’re not running additional processes in your containers or changing anything within the container’s file system. You’re simply running another container alongside the other containers that are part of your infrastructure. In the context of Kubernetes we’re actually deploying it as a daemon set.”
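The general shape of that pattern, sketched with hypothetical names and image (this is not the actual Twistlock manifest), is a DaemonSet, which guarantees exactly one copy of the agent container on every node:

```yaml
# Hedged sketch of a per-node agent (illustrative names, not Twistlock's
# real manifest): a DaemonSet schedules one pod on each node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: defender
  namespace: security
spec:
  selector:
    matchLabels:
      app: defender
  template:
    metadata:
      labels:
        app: defender
    spec:
      containers:
        - name: defender
          image: registry.example.com/defender:latest   # hypothetical image
```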
Jim: “Okay; awesome. All right, I’m going to move to the next section of the presentation. Patrick covered a lot of the capabilities, a lot of what you need to look at in terms of security for a Kubernetes cluster around the workloads. I’m going to show how Nirmata can now deploy Twistlock itself on clusters and that’s what we’re going to look at in this next section. And then again, feel free as we’re going along to add more questions, either on Twistlock or some of the Nirmata content we’ll cover and we’ll have some more time for Q&A after this next section.
So first off let me quickly introduce what Nirmata is. Nirmata is an application management platform built for cloud native, built for Kubernetes, and there are three phases of the application lifecycle that we focus on. Nirmata does not dictate or impose anything on the development side, per se, but once you have your container images and your manifests, Nirmata focuses on the deploy, operate, and optimize parts of the lifecycle.
We really think of this as a continuous feedback loop where we’re not only pushing and managing things across clusters, but also collecting data in real time from the entire stack: from the hosts, from the Kubernetes components, and from the workloads. We then optimize the workloads as they run, which could mean smaller adjustments over time based on capacity, or larger adjustments based on policies you configure within Nirmata itself.
So this is a quick view of the architecture itself and I’ll start from the bottom of the stack and move towards the top. So first off, with Nirmata you have the choice of either installing and managing Kubernetes through Nirmata or you can use any managed service. So for example, if you want to use GKE, AKS, EKS, PKS, all of those are supported, but if you want Nirmata to install an upstream version of Kubernetes we also go through compatibility tests and we certify against all major and minor versions of Kubernetes, as well as newer patch versions as they’re released.
So the interesting thing here is we don’t impose any particular distribution or version; even if we haven’t yet tested and validated a new version, you’re free to go ahead and deploy it with Nirmata, perhaps to experiment with some of the newer features.
To do that, you install the Nirmata agents on your virtual or physical machines, your container hosts, and those are responsible for bringing up the Kubernetes control plane and the worker node components. And again, if you’re using a managed service you would skip the agent step and just deploy our controllers within the Kubernetes clusters themselves.
Both controllers and agents connect securely upstream to our management plane, which is a set of microservices providing application management capabilities. I’ll showcase this in the demo, and we’ll look at how Twistlock itself, both the daemon set component and the controller, the console, gets deployed through Nirmata.
One thing to also mention is that in terms of form factors, Nirmata is available both as a SaaS, which we host, operate, and manage at nirmata.io, and which is free to sign up for and start using, and as a set of downloadable microservices. It basically runs as a Kubernetes application itself, so you can run it in your private networks, your datacenter, or your cloud, with all the components running securely on your side.
All right; so just a quick summary of features and then we’ll dive into the demo to show how Twistlock gets deployed and managed. One of the key differentiators in the way we’ve built and designed Nirmata is what we think of as out-of-band management: we’re not imposing that every operation has to go through Nirmata itself; you’re free to go and make changes directly through kubectl or any other tool you wish.
And we have bidirectional change management. Like I mentioned, Nirmata is very composable in this sense; we integrate with other CNCF-ecosystem partners like Twistlock and several others, of course around storage and networking, as well as on the CI/CD side of things, as we saw in the stack.
Like I mentioned earlier, we don’t impose any particular version or distribution; we certify against versions as they are released. We’re agnostic, designed to be multicluster and multicloud, so think of Nirmata as a single management plane that can span any cloud, even bare-metal servers running Kubernetes.
And as you’ve heard from the beginning, we believe that to do lifecycle management well, the monitoring, the deployment, and the other capabilities need to be well integrated, and this is where we provide built-in monitoring, metrics, and remediation as part of the solution itself, so there are no additional tools that you have to bolt on and integrate yourself. And as we engage with enterprises we start as early as some of the architectural decisions, and then, as a SaaS provider, our mindset is that we’re with you every step of that journey.
So let me start – I’ll just show a quick demo of how Twistlock is configured and deployed in my cluster, and then what we’ll do is we’ll come back for some more Q&A, so I’m going to start my screen share and we’ll switch to my Nirmata view.
So here I’m going to use our SaaS offering for the demonstration, and as you see I’m logged in to my console. This is nirmata.io, and it’s showing everything that’s going on in my account. It’s a shared account, and there are some clusters which were decommissioned and haven’t been cleaned up, which is why there are some alarms, but I can also see the other activity that has been going on in this account.
So let’s start from how a user would approach this if you sign up for Nirmata. The first thing you would do is set up one or more cloud providers, and here, because we test and integrate with every cloud provider, I have several, including folks like Diamanti and Nutanix on the converged infrastructure side, and cloud providers like Azure, GKE, and even GCP directly.
So if I want to add a new cloud provider, it’s very simple to go through that. Let’s say there’s a new AWS account I want to onboard; because we’re using Nirmata as a cloud service, we would integrate directly with IAM. If you were using the on-premises version you would just create the right credentials for Nirmata, but for each cloud provider this registration portion is customized to suit the security best practices of that provider.
So for Azure it would be different; it would integrate with their security and IAM model, versus AWS, where it’s just the account ID and an external ID, so that Nirmata can access resources securely.
Once that’s done, the next step is to configure, within your cloud provider, a set of host groups, and here you see we have a lot of older host groups which currently have no hosts, but also some which are running workloads. And if I look at the hosts which are available, some are connected directly and others come through the cloud provider integration.
So this host group is what we’ll use for the demo, and I already have two hosts onboarded and connected. I can quickly show what that configuration looks like: here we have chosen to work directly with an AMI, but if I were to add a new AWS host group, for example, I could choose between integrating directly with an AMI and a launch config, or even with a spot fleet, right?
So if you want some spot resources as part of your Kubernetes cluster, that’s very easy to do. You can size your host group and also provide any cloud-init or user data that you want passed in when each host is initialized. So it’s pretty simple and straightforward to set up, and it allows us to organize our container hosts based on similar configurations.
So the next level up, now that we have our cloud providers and hosts, is the clusters, right? These are Kubernetes clusters that are already deployed, and with Nirmata I can either manage existing clusters, create new ones, for example on GKE or another cloud provider, or install my own upstream Kubernetes on virtual or physical hosts, so there’s lots of flexibility and options here.
But for this demo we already have a cluster up and running. I can browse data here at the cluster level, so if I look at the namespaces, etc., I’ll see that Twistlock is already there, because I installed the console as well as the daemon set, but there are also other namespaces being created, and I’ll explain what the correlation there is and how those are handled.
So really, from the infrastructure perspective, that’s all you need to do to get up and running. You onboard your cloud providers and your virtual or physical hosts (you can even directly connect a host if you don’t have a cloud provider), you optionally organize your resources into host groups, and then you deploy or discover clusters with Nirmata.
Again, the goal of all of this is to manage applications, right? With Kubernetes you would declare your applications as YAML manifests, or in Nirmata, if you don’t want to go the YAML route, you can just come in and model your application. So this is, for example, the guestbook application we use as a demo; it has different components, including a redis-master and a redis-slave, and a frontend component, which is the guestbook app itself. I can see that I have it running in one environment, but I can also go here and add more components; I can tweak and tune the application as I wish, and Nirmata will validate and produce the right YAML behind the scenes.
So here, for example, if I want to export the YAMLs this is everything that is inside my application itself. It’s a lot of details, including defaults for network policies, defaults for other components like services, etc, which get created and then can be customized.
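As a rough illustration of the kind of manifest that export produces, here is a trimmed sketch of the guestbook frontend as a Deployment plus a Service (the image is the standard Kubernetes guestbook sample; the real export also includes defaults for network policies and the other components):

```yaml
# Trimmed sketch of a guestbook frontend manifest: a Deployment for the
# pods plus a Service that load-balances across them by label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
        - name: frontend
          image: gcr.io/google-samples/gb-frontend:v4   # standard sample image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: guestbook
    tier: frontend
  ports:
    - port: 80
```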
What we want to do here is look at the Twistlock installation itself. As you can see, I have two applications defined in Nirmata for Twistlock: the defender, which is the daemon set that Patrick mentioned, and the console. So first I’ll take a quick look at the console, which is a single deployment here. I can drill down and see everything configured for the console, including the pod template, down to the container level. And if we go back up we also see the config and storage pieces; there’s a config map installed here which configures Twistlock based on your specific settings, etc., for that cluster.
We can similarly take a quick look at the defender component, which is a daemon set, so here I see my single daemon set and everything required to deploy it. In this case I already have Twistlock running in a single environment, so if I go in and take a look at that, I can see it’s running as a daemon set; I click on the defender and see it’s already running on the two hosts we showed on the cluster.
And if we go back to the application view we had, or if you just go to environments, and I’ll go to my Twistlock environment, I can see the console is also running, so I could connect to it and look at the same view that Patrick was showing in his installation, right?
Now typically, in an enterprise, you don’t need to run the console on every cluster; you would run it centrally, in a single management-plane cluster, and then have your daemon sets connect up to it and communicate with it.
But that’s generally how we would model the application and deploy it through Nirmata. One feature I’ll quickly show: you don’t want to be doing this manually for every cluster, right, so what we’d like is to automate that, and for that we have things like cluster policies, where you can define all the configuration you want for your cluster, including any add-ons. And what we’re seeing increasingly is that there are several services you may want to run as part of your cluster; like I said, not so much the controller or console services, but the agents, the daemon sets. It’s essential that you have the right set of services always running on every cluster.
So in Nirmata we also think of those as applications, but we have an option to mark them as cluster add-ons, and then they can be pulled into your cluster policy, and whenever you deploy a new cluster which matches the policy type it will automatically get those add-ons. It’s an extremely powerful feature which gives you consistency and compliance across any cloud and any Kubernetes cluster, regardless of whether it’s a managed cluster or a custom cluster that was requested by some team and installed for them.
All right; so one last thing I want to show very quickly, and again the point of doing everything we did here was to deploy and manage applications, right? I’ll show you what that looks like; let’s say I want to deploy a new version of Ghost. One thing I should point out before I do that is that in Nirmata there’s a concept of environments, which are a logical layer above clusters.
Think of an environment as a set of policies for your applications and workloads, so environments let you group together the common policies that you want, whether it’s RBAC, update policies, or your CI/CD pipeline; all of that you can configure very easily as part of your environment.
Now, once you have that environment you can also define additional higher-level policies and apply them as patch policies, by selecting particular environments and workloads, and this lets you mutate any YAML going into that environment. It’s an extremely powerful mechanism, and one of the things we’ll show in an upcoming blog post and video is how we’re using it to inject even things like sidecars for fetching Vault secrets, etc., in an automated fashion.
Likewise, if you’re running Istio you would also want sidecars for that, so a lot of this can be fully automated without having to go and change the application YAMLs; it’s that separation of concerns between the application portion and the operations portion.
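To give a feel for what such a mutation produces, here is an illustrative fragment of a pod template after a Vault agent sidecar has been added; all names are hypothetical, and the actual YAML a Nirmata patch policy injects depends entirely on how you configure the policy.

```yaml
# Illustrative only: the pod template after a patch policy has injected a
# sidecar (hypothetical names; not Nirmata's actual output format).
spec:
  containers:
    - name: app                        # the original application container
      image: registry.example.com/app:1.0
    - name: vault-agent                # injected by the patch policy
      image: hashicorp/vault:latest
      args: ["agent", "-config=/etc/vault/agent.hcl"]
      volumeMounts:
        - name: vault-config
          mountPath: /etc/vault
  volumes:
    - name: vault-config
      configMap:
        name: vault-agent-config       # hypothetical ConfigMap with agent.hcl
```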
So here what I’m going to do, in the Twistlock environment itself (it doesn’t have to be this environment), is deploy another version of this Ghost application, because when I configured my environment policy I chose to isolate every application in its own namespace.
What this is going to do, if we go back and look at the namespaces, is automatically generate a namespace name for me and put the application there. Oh, it failed, most likely because there’s a node port or something like that configured. Actually, this is interesting. Yeah; it says that a service with that particular name already exists in that environment, which is because here I don’t have the isolation policy set at the namespace level, so let’s actually try that. I’m going to delete this very quickly, and just to show where that’s configured, I’m going to deploy this in a different environment.
So we’ll delete this application, and if I go back to the Twistlock environment, the reason it failed is that the isolation policy I’m using there is a shared namespace. But if I go to a separate environment, for example this test environment running on AKS, I can quickly deploy another instance, or several instances, of the same application.
Let’s do g1 and g2; we’ll run one instance as g1 and another as g2, and both of these will come up without any problems in this environment, because we’re using namespaces to isolate each instance of the application into its own namespace.
So once the applications are running, because Twistlock has been configured on that cluster, what will happen is – oh, I guess I keep picking the wrong environment here. This one doesn’t seem to have namespace isolation either, so I think we’ve probably hit the same issue, but I think you get the idea, so I won’t belabor the point.
But really, it’s configurable at the environment level how you want to deploy and operate your applications and how you want to isolate them in terms of security. And the nice thing is that once an application is deployed, with the Twistlock console and integration in place, Twistlock automatically starts scanning, monitoring, and managing that workload based on the policies that were set up.
All right, so let me stop there with the demo. I think that gives a good flavor of what we do with Nirmata and how things work with the Twistlock integration; we’ll check and see if there are any questions and then quickly summarize. I see there are a few questions from the audience. One question is whether you can discover GKE clusters. Yes, absolutely; we fully integrate with existing GKE clusters. It’s extremely simple to deploy the controller with a single kubectl command or similar, or you can also use Nirmata to create and manage GKE clusters.
So we integrate at the API level where you can use Nirmata to just onboard or spin up new GKE clusters, which would be fully managed of course by the Google Cloud platform, but they’re integrated into Nirmata so you get the same unified application and workload management, the same policies, the same set of cluster services in a single pane of glass.
Okay; so if there are any other questions feel free to add them in, otherwise just to quickly summarize what we have looked at and what we covered. So today, of course, Kubernetes has been widely adopted by enterprises and as enterprises are adopting Kubernetes, like Patrick covered very well, there is a need to make sure there is a comprehensive security strategy in place, right?
This really must cover the entire stack, from your hosts to the Kubernetes components themselves, like all the cluster services, etcd, etc., and it also has to cover the applications running inside the Kubernetes clusters, as well as sensitive data like the secrets you’re bringing into your applications.
Not only that, but you have to think about security from both the static and the runtime perspectives, so it has to span your CI/CD pipeline. You want to make sure images are getting scanned as they go into your registries, that you have image provenance so the right images are pulled onto the right machines, as well as runtime checks like the CIS benchmarks that Patrick showed.
So certainly Twistlock, from what we looked at briefly, is a very comprehensive solution which covers all of what I just described, including network segmentation and some interesting things there.
Then we also looked at how Nirmata makes it extremely easy for cluster operators and platform teams to deploy and operate services like Twistlock as part of their clusters, using cluster policies. Like Patrick highlighted, automation is key to success in all of this, so automating that portion is also a fantastic step. And then you get single-click deployment of a cluster on any cloud, with the same compliance and the same settings, including cluster policies, and you can layer your additional policies and workload management on top.
With Nirmata your development and DevOps teams are still free to use any tool or interface they want; they’re not bound to one particular platform’s view of how to do things in Kubernetes. Patrick, any other final thoughts or words you want to share?”
Patrick: “No; I just want to thank you and the audience for the opportunity to talk about security with Kubernetes, and for the excellent overview of the Nirmata platform, and I encourage anybody to reach out to either of us with any questions we can answer.”
Jim: “Awesome, and thank you for making the time and providing the fantastic demo. Just a quick note: our next webinar in this Enterprise-Wide Kubernetes series will be on storage. There were some interesting questions we’ve already gathered on storage, such as how you would do snapshots and how you can back up data, so stay tuned for that. We will be announcing it very soon, within the next week or so, but look for that episode in early November. So with that, thank you everybody, and feel free to reach out to me or Patrick with any other questions or thoughts that come to mind.”