The Monk who Sold his Server

The Bin Packer: Container Orchestration

Reading Time: 7 minutes

Containers are amazing, but they also present new challenges in the Ops world.

Let’s continue with the example from the previous article: two applications and two database instances. For simplicity and generalisation, let’s presume that every application in the world requires exactly 1 CPU and 1 GB of RAM, and that every server in the world has only 4 CPUs and 4 GB of RAM. Each server can therefore host up to 4 applications.

At your present company, the current application operations/distribution view is:

Even a quick glance shows that you’ve been paying for two unneeded servers; too many resources sit idle. You log in to Server B and Server D and move their containers to Server A and Server C:

You even made sure that no two applications of the same kind share a server, then took down Server B and Server D. Congrats, you’ve saved the company a lot of money and you are now up for a promotion!

A week later, you come to work to find this view:

You are dumbstruck. How can this be, if you removed these two servers just last week? What happened? It turns out that:

You email them both, ranting: “Oh come on guys! Couldn’t you figure out on your own to place them both on the same server!?”. Dan replies “Who is Dave?” and Dave replies “Who is Dan?”. You do your job, move App F to Server B and kill Server D.

The Human Bin Packer

Remember that promotion from before? You’ve been promoted to a bin packer. Your job now is to do this every morning: to make sure that no company resources are wasted due to incorrect container/resource allocations. Although necessary and beneficial, it’s quite annoying, repetitive and time consuming.

If we take this simple example towards a real-world scenario, we see that having a person moving containers around all day long is infeasible:

Eventually, when your company is big enough, you’d need to be up 24/7, or hire three experts working rotating shifts, to manage a dynamic environment that constantly requires dynamic resource allocation (Auto Scaling) of thousands of containers and of the underlying servers. If you want to visualise it properly, it looks something like this:

Sort this mess → into this mess

This is a well-known problem called the bin packing problem. If only this entire burden and waste of time could be coded and automated somehow, you could go back to focusing on the money makers.
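The consolidation you were doing by hand maps directly onto this problem. Here is a minimal Python sketch of the classic first-fit-decreasing heuristic, using the article’s toy sizes (4 CPUs/4 GB per server, 1 CPU/1 GB per app); the names and data shapes are illustrative, not any orchestrator’s actual algorithm:

```python
# First-fit-decreasing: a simple heuristic for the bin packing problem.
# Servers are bins of fixed capacity; apps are the items to place.
SERVER_CPUS, SERVER_RAM_GB = 4, 4

def pack(apps):
    """apps: list of (name, cpus, ram_gb) tuples.
    Returns a list of servers, each a list of the app names placed on it."""
    servers = []  # each entry: [free_cpus, free_ram_gb, [app names]]
    # Placing the biggest apps first tends to waste less space overall.
    for name, cpus, ram in sorted(apps, key=lambda a: (a[1], a[2]), reverse=True):
        for srv in servers:
            if srv[0] >= cpus and srv[1] >= ram:  # first server that fits
                srv[0] -= cpus
                srv[1] -= ram
                srv[2].append(name)
                break
        else:  # nothing fits: "launch" a new server
            servers.append([SERVER_CPUS - cpus, SERVER_RAM_GB - ram, [name]])
    return [srv[2] for srv in servers]

# Eight 1-CPU/1-GB apps fit on exactly two 4-CPU/4-GB servers:
apps = [(f"App {c}", 1, 1) for c in "ABCDEFGH"]
print(pack(apps))  # two servers, four apps each
```

With real-world, unevenly sized apps the sort step matters; with the toy uniform sizes here it degenerates to plain first-fit.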

The Humanoid Bin Packer

To your rescue comes the Container Orchestrator, and you may have heard of some.

There are many, such as Swarm, AWS ECS, Rancher or the best known of all – Kubernetes (K8S). Each differs in its level of abstraction, decoupling and obfuscation of the server pool from the application delivery. They differ a lot more in terms of maintenance, usability, functionality and integrability, but that’s beyond the scope of this series of articles.

The responsibilities/capabilities of the most basic/low-level orchestrator (Swarm) are:

A huge workload has been lifted from both Ops and the application developers. An engineer can deploy their containers without any assistance from Ops. That is a huge change in the delivery process, to a far simpler, faster and safer one.

An application developer, a scheduler or a CI/CD platform can directly ask the orchestrator: “please take this container with Application A in it, which needs 2 CPUs and 4GB of RAM – and you find the correct server for me”. The orchestrator looks for a server that meets the resource demands, passes the container to it and commands the server to run it. Later, the developer remembers that he actually needs three instances and asks the same of the orchestrator. The orchestrator does its job again, looking for servers that meet the resource demands, and launches more containers – if possible.
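That request/placement loop can be sketched in a few lines of Python. This is a toy first-fit scheduler with hypothetical class and method names, not any real orchestrator’s API:

```python
class Server:
    def __init__(self, name, cpus, ram_gb):
        self.name, self.free_cpus, self.free_ram = name, cpus, ram_gb
        self.containers = []

class Orchestrator:
    def __init__(self, pool):
        self.pool = pool  # the pool of registered servers

    def run(self, app, cpus, ram_gb, replicas=1):
        """Place `replicas` containers of `app`, each needing cpus/ram_gb.
        Returns the chosen server names; raises if the pool is exhausted."""
        placed = []
        for _ in range(replicas):
            # first server that meets the resource demands
            srv = next((s for s in self.pool
                        if s.free_cpus >= cpus and s.free_ram >= ram_gb), None)
            if srv is None:
                raise RuntimeError("no server meets the resource demands "
                                   "- time to warn the Ops guy")
            srv.free_cpus -= cpus
            srv.free_ram -= ram_gb
            srv.containers.append(app)
            placed.append(srv.name)
        return placed

# Application A needs 2 CPUs and 4GB of RAM; the developer asks for
# three instances, and each lands on its own 4-CPU/4-GB server:
orch = Orchestrator([Server(n, 4, 4) for n in "ABC"])
print(orch.run("App A", cpus=2, ram_gb=4, replicas=3))  # ['A', 'B', 'C']
```

A fourth instance would raise the error, which is exactly the point where a real orchestrator falls back to alerting a human, as described next.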

Alas, due to separation of concerns and infrastructure agnosticism, the orchestrator cannot launch more servers on its own. If the orchestrator determines that no more resources are available, it warns the Ops guy, who will need to launch a new server with enough resources and add it to the pool. Technically speaking, the server is the one asking the orchestrator to be added to the pool.

In case too many resources are available, it is the orchestrator’s responsibility to alert the Ops guy to shut a server down. The orchestrator needs to be notified that the server is shutting down; only then will it redistribute all of its running containers to the remaining available servers. This is how having an orchestrator resolves the bin packing problem.
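The scale-down check boils down to: a server is safe to drain only if every container on it fits in the spare capacity of the remaining servers. A rough Python illustration, where the data shapes and names are assumptions rather than a real orchestrator API:

```python
# A server is "drainable" if every container on it can be re-placed
# on the spare capacity of the remaining servers (greedy first-fit).
def drainable(pool, victim):
    """pool: {name: {"free": (cpus, ram_gb), "containers": [(cpus, ram_gb), ...]}}"""
    spare = [list(v["free"]) for name, v in pool.items() if name != victim]
    for cpus, ram in pool[victim]["containers"]:
        for s in spare:
            if s[0] >= cpus and s[1] >= ram:
                s[0] -= cpus; s[1] -= ram
                break
        else:
            return False  # at least one container has nowhere to go
    return True

# The article's scenario: App F sits alone on Server D while B has room.
pool = {
    "A": {"free": (0, 0), "containers": [(1, 1)] * 4},
    "B": {"free": (1, 1), "containers": [(1, 1)] * 3},
    "D": {"free": (3, 3), "containers": [(1, 1)] * 1},
}
print(drainable(pool, "D"))  # True: App F fits on Server B's spare capacity

# After consolidation every server is full, so no further scale-down:
pool2 = {
    "A": {"free": (0, 0), "containers": [(1, 1)] * 4},
    "B": {"free": (0, 0), "containers": [(1, 1)] * 4},
}
print(drainable(pool2, "B"))  # False
```

When the check passes, the orchestrator redistributes the victim’s containers and alerts Ops that the server can be shut down.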

Two steps forward, one step back

Unfortunately, the Ops engineer is still in charge of launching and maintaining the underlying pool of servers:

Container orchestration is a huge step forward in the evolution of cloud computing. Your bin packing problem has been resolved, but you have yet to reach an optimal cost solution. Server maintenance is still required, although far less of it, as all servers are almost identical in nature. Dedicated personnel are still a must, although perhaps no longer full time.

Underlying server maintenance can be further reduced to almost zero with another leap in evolution with Serverless Compute.
