Cloud hosting, music and video streaming, and messaging services put everything right at your fingertips. You can watch the shows you want, when you want, listen to your favorite music, or message your friends whenever. The cloud makes things so much easier for users. But developing for a cloud-centric world can be a nightmare.
The biggest challenge is making sure that data and apps are available round the clock, on-demand. But server time costs money by the hour. Do you just leave your servers on all the time? What happens when there’s a bug in one part of your platform? And what happens when you have to push through a new update?
Traditional design architectures force you to push updates as complete builds. Installing an update can take hours, and that means lots of downtime for users. Containerized microservices are a radical, cloud-friendly way of solving this problem. A container is essentially a standalone process packaged together with all of its dependencies. Everything that a containerized process needs to run is in that container, making it highly portable.
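As a concrete sketch of that packaging step, here's what it might look like with Docker, the most common container runtime. The base image, file names, and service name below are all illustrative assumptions, not a prescribed setup:

```shell
# Describe everything the process needs in a Dockerfile.
cat > Dockerfile <<'EOF'
# Runtime dependency: a slim Python base image
FROM python:3.12-slim
# The service's own code
COPY app.py /app/app.py
# The single process this container runs
CMD ["python", "/app/app.py"]
EOF

# Build and run the image (guarded, in case Docker isn't installed here).
if command -v docker >/dev/null 2>&1; then
  docker build -t messaging-service .
  docker run -d --name messaging messaging-service
fi
```

The resulting image carries the runtime, the code, and the start command together, which is what makes it portable across servers.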
How does containerization work?
Here’s an easy analogy. Think of your cloud application as a ship and containerized microservices as multiple airtight bulkhead compartments. If there’s only one compartment, your ship will sink if it gets flooded. But if you have multiple compartments, your ship’s going to be fine, even if one compartment gets flooded. Apps built using a microservice architecture can have dozens or even hundreds of containerized microservices working together. Each microservice can provide a unique function (for instance, just the messaging part of a social media platform). You can also run redundant copies of a microservice on multiple servers for load balancing.
Instead of keeping one messaging server on all the time (chewing through your wallet), you can have the messaging microservice running on multiple servers. An orchestrator can then scale your server utilization up or down based on demand for that particular service. You’d use less capacity at night, matching the reduced demand and saving money, but you’d still be able to scale up utilization during peak hours.
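That demand-based scaling is usually a one-liner once an orchestrator is in place. Here's a hedged sketch using Kubernetes' kubectl: the "messaging" deployment name, the toy demand curve, and the replica counts are all assumptions for illustration:

```shell
# Toy demand curve: peak hours get more replicas of the service.
replicas_for_hour() {
  hour=$1
  if [ "$hour" -ge 8 ] && [ "$hour" -le 22 ]; then
    echo 10   # peak hours: ten copies of the messaging microservice
  else
    echo 2    # night-time: two copies, freeing servers and saving money
  fi
}

# Guarded: only runs if kubectl is installed and pointed at a cluster,
# and assumes a Deployment named "messaging" already exists.
if command -v kubectl >/dev/null 2>&1; then
  kubectl scale deployment messaging --replicas="$(replicas_for_hour "$(date +%H)")"
fi
```

In practice you'd let the orchestrator's own autoscaling react to load rather than the clock, but the principle is the same.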
How containerization helps
Containerization can also help reduce downtime significantly. If your app’s made up of multiple containers and there’s a bug in one particular service, everything else will keep working while you’re fixing it. Moreover, when you’re pushing incremental updates, you wouldn’t have to bring down entire servers. You’d just have to update individual microservices. Users might not even notice that there was downtime.
The ship analogy makes this sound simple. In practice, though, managing and monitoring (orchestrating) apps built using microservices can be incredibly difficult. You’d need a system that dynamically monitors and adjusts microservices across hundreds or thousands of servers. Thankfully, Google built a solid tool for exactly this: Kubernetes. The word “Kubernetes” is Greek for helmsman or pilot, and that’s essentially what it is. It’s a platform that helps you monitor and govern microservice-based apps.
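To make that concrete, here's a minimal sketch of handing a microservice to Kubernetes to govern. The app name, container image, and replica count are placeholder assumptions:

```shell
# A minimal Kubernetes Deployment manifest: "run three copies of the
# messaging container, and keep them running."
cat > messaging-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: messaging
spec:
  replicas: 3
  selector:
    matchLabels:
      app: messaging
  template:
    metadata:
      labels:
        app: messaging
    spec:
      containers:
      - name: messaging
        image: example/messaging:1.0
EOF

# Guarded: these commands need a real cluster to talk to.
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f messaging-deployment.yaml
  kubectl get pods -l app=messaging   # watch the replicas Kubernetes maintains
fi
```

Once applied, Kubernetes continuously reconciles reality against the manifest: if a container dies, it starts a replacement without you intervening.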
Kubernetes isn’t without its disadvantages, though. It has a very steep learning curve, although it’d be much harder to build your own orchestrator. Transitioning to a microservice model also requires developers to change the way they look at code: they need a good idea of how it’ll scale and how it’ll be deployed, instead of just leaving that to operations. If the Google-born solution isn’t the right one for you, we’ve prepared a curated list of Kubernetes alternatives below. Read on to find out more.
1. Docker Swarm
Who it’s for: Users who want an easy-to-configure alternative to Kubernetes
Docker (the overall project, not Swarm) pioneered the idea of containerized microservices in 2013, and Docker Swarm is Docker’s own orchestration platform. It has several advantages over Kubernetes. For starters, while Swarm is less versatile than Kubernetes, it is far more straightforward to install and configure. Docker Swarm also uses a CLI with Git-like semantics.
This familiarity means that developers can easily integrate Swarm into their existing workflows. Secondly, Docker allows for easier manual scaling of services than using kubectl in Kubernetes. Docker also wins in terms of support: Docker itself offers official enterprise support to customers of Docker Enterprise Edition (which includes Swarm), while Google doesn’t offer official support for Kubernetes. However, Kubernetes is an open-source platform, so other vendors offer support for their own releases.
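A rough sketch of that workflow, assuming a Docker engine in Swarm mode; the stack name, service name, and image are hypothetical:

```shell
# A minimal Swarm stack file (Compose v3 syntax).
cat > stack.yml <<'EOF'
version: "3.8"
services:
  messaging:
    image: example/messaging:1.0
    deploy:
      replicas: 3
EOF

# Guarded: these need a running Docker engine.
if command -v docker >/dev/null 2>&1; then
  docker swarm init                       # one command to create a cluster
  docker stack deploy -c stack.yml demo   # deploy the stack
  docker service scale demo_messaging=5   # manual scaling in one line
fi
```

For anyone already using `docker run` and `docker build` day to day, the jump to Swarm is small, which is a big part of its appeal.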
Docker Swarm has disadvantages too. Logging and monitoring are a key weak point: while Kubernetes ships with built-in monitoring tools, Docker Swarm requires third-party tools like Sumo Logic or Retrace. The Docker Swarm community is also a lot smaller than the Kubernetes community.
- Easier to set up and configure than Kubernetes
- Official support for the Enterprise Edition
- Limited monitoring and logging functionality
- Smaller community than Kubernetes
2. DC/OS
Who it’s for: Users who want to run containerized and non-containerized workloads on a distributed platform
DC/OS is short for Data Center Operating System, and it operates at a higher level of abstraction than Kubernetes. Kubernetes orchestrates containerized microservices, so you’re still dealing with multiple distinct servers and multiple resource pools. DC/OS, however, abstracts resources away from the machines themselves. It can present the whole datacenter as a single, giant pool of resources–petabytes of storage, terabytes of RAM, and thousands of CPU cores.
Developers can code as if they’re working with one giant system, and DC/OS intelligently distributes the load across all your servers. That also means DC/OS can schedule non-containerized workloads. DC/OS does have notable drawbacks, though.
While DC/OS is open source, the enterprise edition locks many key features behind a subscription paywall. You’ll have to pay up for certain functions that Kubernetes supports out of the box.
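Workloads are typically submitted to DC/OS through Marathon, its built-in scheduler. A minimal sketch, in which the app id, image, and resource figures are placeholder assumptions:

```shell
# A minimal Marathon app definition: three instances of a container,
# each granted half a CPU and 256 MB of RAM from the shared pool.
cat > messaging.json <<'EOF'
{
  "id": "/messaging",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "example/messaging:1.0" }
  }
}
EOF

# Guarded: requires the dcos CLI attached to a DC/OS cluster.
if command -v dcos >/dev/null 2>&1; then
  dcos marathon app add messaging.json
fi
```

Note that the app asks for abstract resources (CPU and memory), not for particular machines; DC/OS decides where those instances actually land.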
- Lets you run both containerized and non-containerized workloads
- Presents itself as one unified resource pool, reducing complexity for developers
- Premium features are locked behind a paywall
3. Nomad
Who it’s for: Users who want a limited but focused orchestration service
One of the major drawbacks of the big orchestration players like Kubernetes and Docker Swarm is their complexity. They’re built with the requirements of giants like Spotify–which serves hundreds of millions of users a day–in mind. If your app needs to scale across thousands of servers and provide dozens of services to millions of people, you need that level of complexity. But if you’re a small or mid-sized player, your orchestration requirements will be simpler too.
Nomad does very little by itself. It lets you manage container clusters and schedule workloads onto them, and in case of errors or failures it’ll keep your container clusters running–but that’s about it. Any other functionality you need, in terms of logging, monitoring, or networking, has to be handled by other tools. If you have further requirements, though, it’s easy to integrate Nomad with those tools. HashiCorp, Nomad’s developer, ensures close integration of Nomad with its other products like Consul and Vault.
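That minimalism shows in a Nomad job specification, which covers little more than what to run, where, and how many copies. The job name, group layout, and image below are illustrative assumptions:

```shell
# A minimal Nomad job file (HCL syntax).
cat > messaging.nomad <<'EOF'
job "messaging" {
  datacenters = ["dc1"]

  group "api" {
    count = 3

    task "server" {
      driver = "docker"

      config {
        image = "example/messaging:1.0"
      }
    }
  }
}
EOF

# Guarded: requires a running Nomad agent.
if command -v nomad >/dev/null 2>&1; then
  nomad job run messaging.nomad
  nomad job status messaging
fi
```

There's no networking policy, ingress, or monitoring stanza here because Nomad simply doesn't handle those; that's left to companions like Consul.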
Nomad has some notable disadvantages too. For starters, its limited scope is a double-edged sword: if you need advanced network policy functions and built-in monitoring, Kubernetes is the better solution. Moreover, Nomad is a much smaller player than Kubernetes. Both are open-source projects that depend considerably on community input, and Nomad has scarcely 10 percent as many GitHub commits as Kubernetes. This means an overall slower pace of development and bug-fixing.
- Easy to use compared to the other options listed
- Limited scope and scale make it hard to implement in truly large projects
- Relatively small community
Each of these orchestration platforms has its advantages and drawbacks. Kubernetes itself is the go-to solution for enterprises that want to orchestrate apps catering to millions of users. Vague documentation, a steep learning curve, and relatively poor first-party support mean it’s not for everyone, though.
Docker Swarm is a lot easier to configure and use, but it doesn’t have robust monitoring or logging tools built-in. DC/OS lets you do more than just orchestrate containerized microservices, but premium functionality is paywalled. And while Nomad’s key highlight is its simplicity–making it ideal for smaller-scale projects–that very simplicity makes it less than ideal for large, enterprise-class efforts.