Cloud native applications are built to run optimally on cloud infrastructure. Cloud native application architectures are very different from traditional tiered applications, which are designed for a data center. In this post I will discuss a maturity model from the Open Data Center Alliance (ODCA) for assessing the cloud nativeness of an application.
I recently read a very good paper from the Open Data Center Alliance (ODCA) called “Best Practices: Architecting Cloud-Aware Applications Rev. 1.0”. The paper provides a number of principles, patterns, and practices for developing and operating cloud applications, and also includes this cloud application maturity model:
From: http://www.opendatacenteralliance.org/docs/architecting_cloud_aware_applications.pdf
The ODCA paper provides additional details on each level, which I encourage you to read first. Below are my notes on each level:
The main goal at this level is to be able to easily and quickly install the application on different types of virtual machines or cloud instances. (As a sidebar, I am not sure if ‘virtualized’ is the best name for this level; the application could be deployed in application containers running on physical servers and still meet all the requirements of this level.)
A best practice to consider is to create immutable application images. For example, Netflix bakes AMIs at build time. If you are using an application container technology like Docker, immutable container images can be built using tools like Jenkins.
If your application is multi-tenant, you may be running a separate environment for each tenant at this level. This makes sense if you are migrating from traditional application delivery to Software-as-a-Service, but it should be properly positioned as an intermediate goal.
At this level, your applications are decoupled from the underlying infrastructure primitives, and all major application components (or tiers) should be decoupled from each other.
A good first step is to decouple your application from the storage and data management tiers. This may also include configuration data, logs, etc. If the application is multi-tenant, your data tier should be shared across tenants.
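To illustrate what a shared, tenant-aware data tier looks like in practice, here is a minimal sketch (my own hypothetical example, not from the ODCA paper) using an in-memory SQLite database. Every row carries a tenant identifier and every query is scoped to one tenant, instead of running a separate database per tenant:

```python
import sqlite3

# Hypothetical sketch: one shared data tier serving multiple tenants.
# Each row carries a tenant_id, and every query is scoped to one tenant.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("acme", "widget"),
    ("acme", "gadget"),
    ("globex", "sprocket"),
])

def orders_for(tenant_id: str) -> list[str]:
    """Return only the rows belonging to a single tenant."""
    rows = conn.execute(
        "SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,)
    )
    return [item for (item,) in rows]

print(orders_for("acme"))    # ['widget', 'gadget']
print(orders_for("globex"))  # ['sprocket']
```

The key design choice is that tenant isolation is enforced in the query layer, so one shared environment can serve all tenants.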
The next step, which is often harder, is to decouple the application from network constructs. The goal is to use a naming service rather than rely on IP addresses and ports. This may simply be DNS across tiers, injecting IP addresses and ports at deployment time (which works for static application components), or a full service naming, registration, and discovery scheme.
For example, Nirmata.io provides built-in service naming, registration, discovery, load-balancing, and routing, which allow full decoupling of the application from the underlying networks. Other tools, like ZooKeeper, etcd, and Consul.io, can also be used to build service registration and discovery.
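The simplest of these approaches, injecting endpoints at deployment time, can be sketched as follows (a hypothetical example; the service and variable names are my own, not from any of the tools above). The application resolves its dependencies by name from the environment, so no IP address is ever hard-coded:

```python
import os

# Hypothetical sketch: the deployment tool (or a discovery service) injects
# ORDERS_SERVICE_ADDR; application code never sees a fixed IP address.
def service_address(name: str, default: str = "localhost:8080") -> str:
    """Look up a service endpoint injected into the environment by name."""
    return os.environ.get(f"{name.upper()}_SERVICE_ADDR", default)

os.environ["ORDERS_SERVICE_ADDR"] = "10.1.2.3:9000"  # set by the deployer
print(service_address("orders"))   # 10.1.2.3:9000
print(service_address("billing"))  # localhost:8080 (fallback)
```

A dynamic registry such as etcd or Consul replaces the static environment lookup with a live query, but the principle, resolving by name rather than address, is the same.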
To achieve this level, the application must be fully decoupled from the infrastructure. Application containers, like Docker, provide a way to decouple application components from the infrastructure, but are not enough. You will also need to abstract application blueprints, deployment policies, scaling policies, affinity and placement rules, etc.
At this level, each application service must be elastic (i.e. can scale up and down independently of other services) and resilient (i.e. has multiple instances and can survive instance failures). The application should also be designed so that failures in one service do not cascade to other services.
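One common way to keep failures from cascading is the circuit breaker pattern: after repeated failures, callers fail fast instead of piling up requests against a sick service. Below is a minimal sketch of the idea (my own illustration, not a production implementation):

```python
# Hypothetical sketch of a minimal circuit breaker: after a few consecutive
# failures the breaker "opens" and callers fail fast, so a failing service
# does not drag its callers down with it.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the failure count
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise IOError("service unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass

print(breaker.open)  # True: further calls now fail fast
```

Real implementations also re-close the circuit after a timeout to probe for recovery; this sketch only shows the failure-isolation half.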
The microservices architectural style is a good example of an application architecture at this level. In a microservices architecture, the application is composed of multiple services, and each service is designed to be elastic, resilient, composable, minimal, and complete (see Microservices: 5 architectural constraints).
At this level, the application is able to detect or anticipate changes and react to them in a fully automated manner. For example, Netflix uses a predictive auto-scaling algorithm.
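To make the idea of predictive scaling concrete, here is a deliberately simple sketch (my own illustration; Netflix's actual algorithm is far more sophisticated). Instead of reacting only after instances are saturated, it extrapolates the recent load trend and sizes the fleet for the projected load:

```python
import math

# Hypothetical sketch: project load a few steps ahead using the most
# recent linear trend, then size the fleet for the *projected* load.
def desired_instances(load_history, per_instance_capacity=100, lookahead=2):
    """Return the instance count needed for the extrapolated load."""
    trend = load_history[-1] - load_history[-2]
    projected = load_history[-1] + trend * lookahead
    return max(1, math.ceil(projected / per_instance_capacity))

# Requests/sec climbing steadily: provision ahead of the curve.
print(desired_instances([300, 350, 400]))  # 5 (capacity for ~500 req/s)
print(desired_instances([10, 10, 10]))     # 1 (flat load, minimum fleet)
```

The point is the control loop: observe, predict, and act before demand arrives, rather than after.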
As a best practice, you will also want to separate application management and control functions from the application itself, or use external application control services like Nirmata or AWS Auto Scaling.
The ODCA paper also talks about dynamically migrating across providers. I agree that this is a good goal. However, the reality today is that each cloud provider’s stack is fairly different and requires building significant skills and operational expertise.
A reason why application containers, like Docker, have quickly become so popular is that they promise to ease the cloud portability challenge. Containers are an important building block, but still a small piece of the overall puzzle; much else is needed for true application portability across providers. At Nirmata, these are some of the challenges we are currently working to solve by providing cloud-agnostic application operations and management services.
Andrew Spyker (ex-IBM, now with the Netflix platform team) once mentioned using a set of questions to assess application architectures. I thought that was a great idea and have tried to map each maturity level to a set of questions:
Scoring:
The Cloud Application Maturity Model from the Open Data Center Alliance provides a way to assess the cloud nativeness of an application, understand best practices, and plan improvements. Although I would have used slightly different level names and terms, the differences are minor.
Keep in mind that this model only assesses the maturity of an application. To be successful, you will also need to build a DevOps culture. Perhaps we need a DevOps maturity model as well?
How do you see your applications mapping to this model? Would love to hear your thoughts and feedback!
Jim Bugwadia
Founder and CEO at Nirmata
@JimBugwadia