Recently, I participated in an online panel on the subject of DevOps and Docker at Scale. This session was part of Continuous Discussions (#c9d9), a series of community panels by Electric Cloud about Agile, Continuous Delivery and DevOps.
You can watch the full recording here. Below are a few questions and answers from the panel:
Does Docker help enable DevOps?
I see it as a symbiotic relationship, and think that containers in general can definitely help with getting to DevOps. A common mistake is to think of containers as just another form of virtualization. The key value [of containers] comes from separating the concerns of application packaging from those of operations, and using containers as the standardized unit of management across the entire CI/CD chain.
With containers you can now have common tools that automate application delivery, but also automate the operations and management of applications. As a developer, to me DevOps is all about what you can automate – and containers provide us with that standard building block to enable DevOps.
What are the challenges at Enterprise Scale?
First it’s important to clarify what enterprise scale means. When people talk about Netflix, Google, and other web giants, the scale they are at and the types of applications they operate – that’s not a typical enterprise. With most enterprises, you have applications composed of a handful of services, and you’re running on maybe a few dozen servers. But you may have several application teams like this in the same enterprise. So what works well for Google or Netflix may not be exactly what an enterprise needs. That’s a distinction we try to make upfront as we work with customers: to solve problems that are really important at their scale and for their organization.
The other thing we advocate as a transition point is taking incremental steps and running containers on VMs as a way to start – it’s a good stepping stone for automation, for leveraging the agility of containers, and for using VM-based tooling and segregation for your different workloads. That combination creates a good entry point into the world of containerization and DevOps, and also helps solve some of the immediate scaling problems along the way.
Best Practices for Docker Orchestration in Production
Scaling & HA
Automation is obviously key, and containers help with that, because application components are now self-contained and containers provide standard ways to operate and manage different types of applications.
At Nirmata, we believe that out-of-band management is required for business-critical systems, and [as a best practice] you can’t really co-locate your management systems with your applications. Therefore, we built a highly scalable and secure cloud service that provides out-of-band management of containerized applications. This approach frees enterprises from having to deal with scheduling, clustering, and managing all that complexity, without locking themselves into a single cloud provider. And it ultimately allows much greater scale.
Security
If you go back a year and a half, to when Docker orchestration emerged, developers could suddenly build application images and change things quickly and easily. That immediately raised a lot of concerns in terms of: “now anything can run anywhere.” Security – as with any emerging technology – immediately became a primary concern.
But fast forward to where we are right now: a recent Gartner paper said that, if done right, containers can actually be more secure.
Besides advanced features like kernel tracing and restricting privileges, you simply get a lot more visibility, insight, and control with containers. Having said that, there is a fairly large set of things you need to think about, and with our customers we provide a simple framework consisting of three aspects.
Image analysis, image management, and artifact management are absolutely key. But there are two other important aspects: the execution environment – the container hosts, which also need to be inspected and audited for security – and the application itself. Whether it’s a microservices application or a monolith, you still need application-level security. Your security framework really needs to cover all three of these vectors, and you need to think about static analysis as well as runtime analysis across all three.
One other item to highlight on the application side – if your application does authentication and encryption, that doesn’t really need to change with containers. Your application still does what it does, but there are new considerations, like managing application secrets and sensitive configuration data. If you have security credentials used for access across application components, you can no longer bake that sensitive data into your image or pass it in as environment variables, because with containers it’s now fairly easy for others to read that data.
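As a minimal sketch of this idea, an application can read credentials from a file on a mounted secrets volume (Docker Swarm, for instance, mounts secrets under `/run/secrets`) rather than from an environment variable or a value baked into the image. The function and secret names here are illustrative, not a prescribed API:

```python
import os


def read_secret(name, secrets_dir="/run/secrets"):
    """Read a secret from a file on a mounted secrets volume.

    Unlike environment variables or values baked into the image,
    a mounted secrets file is only visible inside the running
    container and never appears in the image layers.
    """
    path = os.path.join(secrets_dir, name)
    with open(path) as f:
        return f.read().strip()


# Illustrative usage: the "db_password" secret would be provided
# by the orchestrator at runtime, not stored in the image.
# db_password = read_secret("db_password")
```

The same pattern works with Kubernetes secret volumes; only the mount path changes.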
Managing State
One thing I want to address is a myth we often hear repeated: that containers are only good for stateless applications. This is not the case. In fact, a number of our customers are running stateful, traditional applications in containers, and there are good reasons why they’re doing so – agility and automated delivery management.
Regarding state and containers, it is a consideration, but there are solutions for every kind of state you would encounter in an application. Starting with persistent state: every application needs to write some data, whether to a database service, a data layer, or maybe even just a persistent file. If you are already doing that, typically your database servers would be in a separate tier. Initially, the database service may not need to be containerized, or if you’re running on AWS, you might be using RDS or something similar.
That’s a reasonable way to start: containerize your middle-tier services, or the components of your application that change more often than others.
If you do end up containerizing your database, there are several ways to handle it. For example, at Nirmata we run everything from Elasticsearch to MongoDB in containers, at scale, mostly by using host volumes. Alternatively, you can use volume plugins from EMC, ClusterHQ, NetApp, and others. These tools let you attach volumes to containers, and if a container moves around, its volume moves with it.
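As a hedged illustration of the host-volume approach (service name and host path are made up for the example), a Docker Compose fragment that bind-mounts a host directory into a MongoDB container might look like this:

```yaml
version: "2"
services:
  mongo:
    image: mongo
    volumes:
      # Bind-mount a host directory over MongoDB's data directory so the
      # data survives container restarts and re-creation on the same host.
      - /data/mongo:/data/db
```

With a volume plugin, the left-hand side would instead name a managed volume that the plugin can reattach on another host if the container is rescheduled.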
The other types of state to be concerned about are configuration data and session state, but these also have solutions that can be applied to containerized applications.
Networking
Another big topic with scaling containers is networking, and it’s interesting to watch the space mature. Initially, there was a big rush to invent new solutions. As containers became popular, there was a tendency to treat them as a new layer of virtualization and say: “Well, everything we did at the infrastructure layer now has to change.” But today, we’re seeing more of the attitude that infrastructure solutions that worked for VMs can also be made to work for containers. Recent networking solutions, for example the CNI model, are getting simpler, moving away from layers of overlays and new container-only abstractions. It’s good to see things moving in that direction, and we believe the native networking from cloud providers can also be made to work for containers.
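To give a sense of that simplicity, a minimal CNI network configuration for the standard bridge plugin is just a small JSON file (the network name and subnet below are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The runtime hands this file to the plugin, which wires the container into an ordinary Linux bridge – no overlay layers or container-specific abstractions required.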
View our webinar – DevOps and Docker at Scale with Electric Cloud – from Nirmata here. Feel free to contact Nirmata with any questions or issues you’d like to discuss. Thank you!