The lower your application’s invocation rate, the more cost effective Functions become. That is true both at the “product level” and at the “application level”.
In the Cloud Computing Evolution series, I gave an example of a small company with 20 users that uses the entire product a few times a day. From a cost perspective, as Lambda’s pricing model is pay-per-use, it will be far cheaper than running highly available Containers (3 instances). How much cheaper varies, so the simplest example would be one of minimal usage.
On Fargate, running a fully managed Container with minimal resources of 0.25vCPU and 0.5GB would cost:
vCPU: 0.0128$ per vCPU-hour * 24 hours * 30 days * 0.25 vCPU = 2.30$
RAM: 0.0015$ per GB-hour * 24 hours * 30 days * 0.5GB = 0.54$
That’s (2.30$ + 0.54$) times 3 instances ≈ 8.53$ a month, which is about 102.4$ annually, not including load balancer costs.
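The calculation above can be sketched as a small helper. The per-hour rates are the ones assumed in this article and may not match current AWS pricing, so verify them before reuse:

```python
# Sketch of the Fargate cost math above, using this article's assumed rates.
VCPU_PER_HOUR = 0.0128   # $ per vCPU-hour (assumed rate, check current pricing)
GB_PER_HOUR = 0.0015     # $ per GB-hour of RAM (assumed rate)
HOURS_PER_MONTH = 24 * 30

def fargate_monthly_cost(vcpu: float, ram_gb: float, instances: int = 1) -> float:
    """Monthly cost of always-on Fargate tasks with the given resources."""
    per_instance = (VCPU_PER_HOUR * vcpu + GB_PER_HOUR * ram_gb) * HOURS_PER_MONTH
    return per_instance * instances

monthly = fargate_monthly_cost(vcpu=0.25, ram_gb=0.5, instances=3)
print(round(monthly, 2))       # 8.53 a month
print(round(monthly * 12, 2))  # 102.38 annually, load balancer excluded
```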
As pricing for Lambda and Fargate is calculated completely differently, it is hard to compare the two. Even if we presume the same CPU and RAM resources are required, two more parameters are needed. Let’s further presume that you’d have 10K requests per month and that each request takes 2 seconds to process. That would cost you about 17 cents a month, or 2.04$ a year.
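The Lambda side of the comparison can be sketched the same way, using the long-standing public rates of 0.20$ per 1M requests and 0.0000166667$ per GB-second, and ignoring the free tier. Verify against current AWS pricing before relying on it:

```python
# Sketch of the Lambda cost math: request charge + duration charge.
# Rates are AWS's long-standing public prices; free tier is ignored.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # $ per request
PRICE_PER_GB_SECOND = 0.0000166667     # $ per GB-second

def lambda_monthly_cost(requests: int, duration_s: float, ram_gb: float) -> float:
    """Monthly cost of a Lambda function for the given invocation profile."""
    request_charge = requests * PRICE_PER_REQUEST
    duration_charge = requests * duration_s * ram_gb * PRICE_PER_GB_SECOND
    return request_charge + duration_charge

monthly = lambda_monthly_cost(requests=10_000, duration_s=2.0, ram_gb=0.5)
print(round(monthly, 2))  # 0.17 -- about 17 cents a month
```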
Before making a decision, consider that while a x50 factor sounds huge, in absolute terms it is only a 100$ difference, less than the cost of a single workday. Do not forget, though, that this is per environment. If you have at the very least three environments (dev, test and prod), that’s 300$ saved annually, not including the indirect costs of maintenance and loss of availability.
The x50 price factor between Lambda and Fargate holds even as more applications/services are added. For example, if you’re working in a service-oriented environment with 10 services running concurrently, you’d be paying 1,020$ for containers while Lambda would cost 20.4$. That is about 1,000$ saved per environment, and 3,000$ with a total of three environments. It can quickly add up.
A real scenario we had at Silo is that of the Metadata service [which will be thoroughly discussed in a future article]. It was an application that, once every few months, parses 10K unique messages/requests. We had 7 environments planned (2 dev, 1 QA, 1 CI, 1 test, 2 production), and as there was no need for high availability, a single Container instance was sufficient. The annual Lambda cost would have been around $0.03, while the annual Fargate cost would have been $238.56 ($2.84 monthly * 12 months * 7 environments). We had tens of applications like this, and the savings added up to a substantial amount.
There are some solutions to this scenario using containers:
- Scheduling a periodic Container launch
- Invoking a Lambda function that launches a Container
- Having the CI/CD system run a Container
What would have been out of the question is running a local execution from a developer’s workstation that affects a production environment; that can be disastrous. As these solutions are outside the scope of this article I will not elaborate on them, but do notice that none of them is better or easier than simply using a Function.
Going bankrupt with throughput
Lambda pricing is sensitive to the number of requests, and thus to throughput (requests per second, concurrency). The other end of the spectrum would be an application with a sustained incoming throughput of 10K requests per second, 24/7. That is 864M requests a day, or roughly 26 billion a month; at Lambda’s 0.20$ per million requests, the request charges alone come to over 5,000$ a month before any compute duration is billed, the equivalent of running well over a thousand of those minimal container instances.
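A rough sketch of how sustained throughput drives the Lambda bill, using the same assumed rates as before. The 100ms handler duration and 0.5GB memory here are illustrative assumptions, not measured figures:

```python
# Sketch: monthly Lambda cost at a sustained request rate, and how many
# minimal Fargate tasks (~$2.84/month each, from the earlier calculation)
# that bill is equivalent to. Rates assumed; free tier ignored.
SECONDS_PER_MONTH = 30 * 24 * 3600

def lambda_monthly_cost(rps: float, duration_s: float, ram_gb: float) -> float:
    """Monthly Lambda cost for a sustained rate of `rps` requests per second."""
    requests = rps * SECONDS_PER_MONTH
    request_charge = requests * 0.20 / 1_000_000
    duration_charge = requests * duration_s * ram_gb * 0.0000166667
    return request_charge + duration_charge

FARGATE_TASK_MONTHLY = 2.84  # minimal 0.25 vCPU / 0.5GB task, per month

# 10K requests/second sustained, assuming a short 100ms handler at 0.5GB:
cost = lambda_monthly_cost(rps=10_000, duration_s=0.1, ram_gb=0.5)
print(round(cost))                         # monthly Lambda bill in dollars
print(round(cost / FARGATE_TASK_MONTHLY))  # equivalent count of minimal tasks
```

Even with a fast handler, the request charges alone dominate well before any duration assumptions matter, which is the core of the argument above.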
At Silo, we expected such throughputs, as the business plan called for 100K Silo devices running by year 2 (realistically possible) and 1M by year 4 (in case of success). The messaging infrastructure [which will be reviewed in detail in a future series] consisted of several components/applications that were expected to handle high throughput. One of them, for example, was a small bridge between our server-to-server message broker and our Event Analysis service. It could easily reach thousands of messages per second. Lambda was definitely not the way to go.
As we dealt with unknown product usage and message throughput, we had no real way to measure it in advance. Instead, we had a rule of thumb to somewhat predict it: a factor of something else in the system. In the case above, these components were expected to handle a factor of the entire count of messages passing through our messaging infrastructure.
Another useful rule of thumb is the ratio: how many times your application is invoked per customer interaction. For example, a simple request/message from the mobile application is a 1:2 ratio, the request and the reply. But each device operation of Silo emitted ~20 messages (due to reasons beyond the scope of this article), so that is a 1:20 ratio. The higher the ratio or the factor is, the more the solution should tend towards Containers rather than Lambda.
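The ratio rule of thumb can be turned into a back-of-the-envelope estimate. Everything here is a hypothetical illustration: the interaction volume, handler duration, and memory size are made-up inputs, and the rates are the same assumed Lambda prices as before:

```python
# Hypothetical sketch: estimate invocations from customer interactions and a
# fan-out ratio, then price them at the assumed Lambda rates. All inputs are
# illustrative, not Silo's real data.
def monthly_invocations(interactions_per_month: int, ratio: int) -> int:
    """Each customer interaction fans out into `ratio` application invocations."""
    return interactions_per_month * ratio

def lambda_monthly_cost(invocations: int, duration_s: float, ram_gb: float) -> float:
    request_charge = invocations * 0.20 / 1_000_000
    duration_charge = invocations * duration_s * ram_gb * 0.0000166667
    return request_charge + duration_charge

# Same 1M interactions a month: a simple request/reply app (1:2)
# versus a chatty device (1:20), with a 100ms handler at 0.5GB.
for ratio in (2, 20):
    invocations = monthly_invocations(1_000_000, ratio)
    print(ratio, round(lambda_monthly_cost(invocations, 0.1, 0.5), 2))
```

The cost scales linearly with the ratio, which is why a 1:20 system pushes you toward Containers long before a 1:2 system does.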
As you’d be paying your cloud provider per request and according to throughput, this is how you’d literally pay for a mistake in choosing between the two, perhaps even gravely. At Silo, where we knew in advance neither the throughput nor its expected growth, we decided to design and code our applications to run on both. That flexibility, or what I used to call compute agnosticity, would allow us to easily switch between the two when needed, according to which environment the application is running in. You’ll see how that is possible in the upcoming articles about stateless applications and persistent connections, and in a future article about the Event Handler framework we developed internally.