“Some abstractions do not actually simplify our lives as much as they were meant to,” as Joel Spolsky puts it in his article The Law of Leaky Abstractions. We often see cloud practitioners quote this line about serverless technologies. While serverless services have brought a great deal of convenience, they have also introduced a few challenges, such as the loss of visibility into the abstracted layers that the service provider manages for you. One such service is AWS Lambda, where AWS allocates CPU power to each function on your behalf, in proportion to the memory you allocate.
So, in this post, we walk you through how CPU allocation affects Lambda execution time.
When you run a piece of code on a traditional server, you can continuously monitor CPU and memory utilization.
In the AWS cloud, if you use EC2 instances, you still have visibility into CPU, memory, IOPS, network, and so on.
With serverless services such as AWS Lambda, however, you just run your code without provisioning or managing servers. So with Lambda functions we naturally tend not to worry about system-level metrics such as CPU time, disk utilization, I/O, and network, and focus more on application metrics instead, right?
Recently, as an experiment, we ran a Lambda function with 250 MB of memory allocated. After execution, the function had actually consumed just 170 MB.
This gap between allocated and consumed memory often misleads AWS users. When their code's execution time starts climbing, they tend to focus entirely on optimizing the code and ignore other factors, such as the CPU share that comes with each memory setting.
When we increased our Lambda function's memory allocation to 512 MB, the execution time dropped to 52 ms, roughly a 50% improvement in execution speed!
The result:
Even though your Lambda function's code consumes memory well within its allocated limit, a higher memory allocation can still improve performance, in our case by roughly 50%!
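For context, the function we timed was simple, CPU-bound code along the lines of the sketch below. This is a minimal illustration rather than the exact code we ran; the payload shape and iteration count are hypothetical:

```python
import json
import time


def handler(event, context):
    # CPU-bound busy work; the loop size comes from a hypothetical
    # event payload such as {"iterations": 1000000}.
    n = int(event.get("iterations", 1_000_000))

    start = time.perf_counter()
    total = sum(i * i for i in range(n))
    elapsed_ms = (time.perf_counter() - start) * 1000

    # Lambda also writes a REPORT line to CloudWatch Logs for every
    # invocation, with the billed duration and the max memory used.
    print(f"busy loop took {elapsed_ms:.1f} ms")

    return {
        "statusCode": 200,
        "body": json.dumps({"result": total, "elapsed_ms": elapsed_ms}),
    }
```

The REPORT line in CloudWatch Logs is the easiest place to read both the billed duration and the maximum memory used for each invocation.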
According to AWS, you choose the amount of memory your function needs, and Lambda allocates proportional CPU power and other resources based on that setting. You can update the configuration and request additional memory in 64 MB increments, from 128 MB up to 3,008 MB. AWS allocates CPU power proportional to the memory, using the same ratio as a general-purpose Amazon EC2 instance type.
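Raising the memory setting is a small configuration change. Here is a minimal sketch using boto3, with a hypothetical function named my-function:

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the memory allocation (and with it, the proportional CPU share)
# of a hypothetical function named "my-function" to 512 MB. Memory can
# be requested in 64 MB increments between 128 MB and 3,008 MB.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    MemorySize=512,
)
```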
As Jeremy Daly mentions in one of his blog posts, “CPU resources, I/O, and memory are all affected by the memory allocation setting. If your function is allocated more than 1.8GB of memory, then it will utilize a multi core CPU. Thus, if you have CPU intensive workloads, increasing your memory to more than 1.8GBs should give you significant gains. The same is true for I/O bound workloads like parallel calculations.”
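To actually benefit from a second core, the function code has to do its work in parallel. The following is a minimal, hypothetical sketch that splits a CPU-bound loop across two processes with Python's multiprocessing module; it uses Process and Pipe rather than multiprocessing.Pool, which has historically not worked inside the Lambda execution environment:

```python
from multiprocessing import Pipe, Process


def _sum_of_squares(start, end, conn):
    # Worker process: handles one CPU-bound chunk of the computation.
    conn.send(sum(i * i for i in range(start, end)))
    conn.close()


def handler(event, context):
    # Hypothetical payload shape: {"iterations": 2000000}
    n = int(event.get("iterations", 2_000_000))
    mid = n // 2

    parent_conns, processes = [], []
    for chunk in ((0, mid), (mid, n)):
        parent_conn, child_conn = Pipe()
        proc = Process(target=_sum_of_squares, args=(*chunk, child_conn))
        proc.start()
        parent_conns.append(parent_conn)
        processes.append(proc)

    # Collect the partial results, then reap the worker processes.
    total = sum(conn.recv() for conn in parent_conns)
    for proc in processes:
        proc.join()

    return {"result": total}
```

Keep in mind that below roughly 1.8 GB the function effectively gets only a slice of a single core, so this kind of fan-out only pays off at higher memory settings.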
There have been several debates around CPU allocation in AWS Lambda since 2015. Mustafa Akin’s post sheds plenty of light on how CPU allocation works for Lambda functions. He recommends profiling your application and identifying its bottlenecks before adjusting the function configuration to find the ideal memory setting.
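To find the sweet spot for your own workload, one rough approach is to benchmark the same payload at several memory settings. Here is a minimal sketch with boto3; the function name and payload are hypothetical, and the client-side timing includes request overhead and cold starts, so the REPORT duration in CloudWatch Logs is the more precise number:

```python
import time

import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "my-function"  # hypothetical function name

# Time the same payload at several memory settings.
for memory_mb in (128, 256, 512, 1024):
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME,
        MemorySize=memory_mb,
    )
    # Wait until the configuration change has actually been applied.
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)

    start = time.perf_counter()
    lambda_client.invoke(
        FunctionName=FUNCTION_NAME,
        Payload=b'{"iterations": 1000000}',
    )
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{memory_mb} MB -> {elapsed_ms:.0f} ms")
```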
If you want to know about other AWS Lambda gotchas, here’s a curated list of good reads:
5 Things you should know before using Lambda
My Accidental 3–5x Speed Increase of AWS Lambda Functions