Using Virtual Machines
In the past, many companies have tried to deliver traditional applications by deploying them in VMs as a hosted offering. To achieve multi-tenancy, they end up deploying separate instances of the application for each customer. They typically find that this approach works for a few customers but does not scale easily. A significant amount of automation and tooling is required to deploy and manage customer environments. Also, depending on the size of the application, multiple VMs may be required per customer, potentially resulting in poor resource utilization and ultimately increasing operating costs. Most importantly, when using virtual machines, there is no clear separation of concerns between development and operations teams, which can lead to operational challenges.
A better approach
An alternate approach to delivering traditional applications as a service is to use Linux containers. Docker has simplified the deployment of applications in containers. Each runtime component of a typical three-tier application can be deployed in a separate container and linked with the other tiers. Moreover, the entire application can be deployed on a single VM, or spread across multiple VMs, without any changes to the application. To achieve multi-tenancy, separate instances of the application can be deployed for each customer. In fact, the application can even be deployed on appropriate infrastructure based on customer SLAs. High availability can also be achieved by deploying standby containers. Once the application deployment process is automated, onboarding a new customer can be as easy as flipping a switch.
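As a concrete illustration, a three-tier deployment like the one described above can be expressed declaratively with Docker Compose. This is a minimal sketch; the image names (acme/web, acme/api), ports, and environment variables are hypothetical placeholders, not part of any specific product.

```yaml
# docker-compose.yml -- sketch of a three-tier application, one tier per container.
# All image names and settings below are illustrative assumptions.
version: "2"
services:
  web:
    image: acme/web:1.0          # presentation tier
    ports:
      - "80:8080"                # expose the front end to customers
    depends_on:
      - api
  api:
    image: acme/api:1.0          # application tier
    environment:
      DB_HOST: db                # reachable by service name on the Compose network
    depends_on:
      - db
  db:
    image: postgres:9.6          # data tier
    volumes:
      - dbdata:/var/lib/postgresql/data   # persist data across container restarts
volumes:
  dbdata:
```

Because the tiers are addressed by service name rather than by host, the same file can bring up one customer instance on a single VM or be pointed at a larger environment without application changes; per-customer instances can be isolated with separate Compose project names.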
In a large deployment, upgrading customers to newer versions of the software can be tricky, so automating the upgrade process is essentially mandatory. An important benefit of using containers is the ability to easily roll back the entire application if an upgrade goes wrong. With Docker, container images can be tagged, so rolling back an application requires just redeploying it with the previous tag. Any database schema rollback will need to be handled separately, though.
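The tag-based rollback described above can be sketched with a few Docker CLI commands. The image name (acme/api) and tags are hypothetical; the point is that the previous version remains available under its own tag, so rollback is a redeploy rather than a rebuild.

```shell
# Upgrade: pull and run the new version (image name and tags are illustrative).
docker pull acme/api:1.3
docker stop api && docker rm api
docker run -d --name api acme/api:1.3

# Rollback: if 1.3 misbehaves, redeploy the previous tag.
# The 1.2 image is still present locally (or in the registry), so no rebuild is needed.
docker stop api && docker rm api
docker run -d --name api acme/api:1.2
```

Note that this only reverts the application code; as stated above, any database schema changes introduced by the upgrade must be rolled back separately, typically via the application's own migration tooling.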
Containers also help cleanly address ownership issues and the operational challenges related to separation of concerns. With this approach, the operations team is responsible for everything at the virtual machine layer and below, and the development team is responsible for everything at the container layer and above. Containers become the unit of deployment, orchestration, and management. Another side benefit of using containers is application portability: the application becomes independent of the underlying VM or cloud provider.
Overall, application delivery through containerization with Docker is a highly scalable and elegant approach that cleanly decouples the application from the underlying infrastructure, bringing organizations closer to continuous delivery.
Once a traditional application is running in the cloud and being delivered as a service, it can be evolved to a multi-tenant architecture and can start leveraging shared services for common functions. This provides a path by which businesses can leverage their existing investments while evolving toward a cloud-native architecture.
Customers on a journey to transform their applications to be cloud native should seriously consider application containerization as a key component, and a good first step, of their strategy. Most of the automation and tooling they previously built can now be leveraged as they move toward a more distributed, cloud-native architecture. Operating the application as a service also provides experience that can prove valuable when they deploy and operate a more complex application at a much larger scale.