{"id":1226,"date":"2020-03-20T12:21:18","date_gmt":"2020-03-20T11:21:18","guid":{"rendered":"https:\/\/blog.besharp.it\/?p=1226"},"modified":"2021-03-24T12:37:28","modified_gmt":"2021-03-24T11:37:28","slug":"a-comprehensive-analysis-of-aws-lambda-function-optimize-spikes-and-prevent-cold-starts","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/a-comprehensive-analysis-of-aws-lambda-function-optimize-spikes-and-prevent-cold-starts\/","title":{"rendered":"A comprehensive analysis of AWS Lambda function: optimize spikes and prevent cold starts"},"content":{"rendered":"

When it comes to Serverless, there are many aspects to keep in mind in order to avoid latency and build better, more reliable, and more robust applications. In this article, we will discuss the aspects to consider when developing with AWS Lambda, how to avoid common problems, and how to exploit some recently introduced features to create more performant, efficient, and less costly Serverless applications.

## Cold Starts

For years, cold starts have been one of the hottest and most frequently debated topics in the Serverless community.

Suppose you've just deployed a brand new Lambda Function. Regardless of how the function is invoked, a new Micro VM needs to be instantiated, since there are no existing instances available to respond to the event. The time needed to set up the Lambda Function runtime, together with your code, all of its dependencies, and its connections, is commonly called a Cold Start.
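To make the effect tangible, here is a minimal sketch of a handler that reports whether the invocation that reached it hit a cold start. It relies on the fact that module-level code runs only once per Micro VM; the handler name and response shape are illustrative, not part of any official API.

```python
import time

# Everything at module level runs only once per Micro VM, i.e. during the
# cold start. Warm invocations reuse the already-initialized module.
COLD_START = True
INIT_TIME = time.time()

def lambda_handler(event, context):
    global COLD_START
    was_cold = COLD_START
    COLD_START = False  # every later invocation on this instance is warm
    return {
        "cold_start": was_cold,
        "instance_age_seconds": round(time.time() - INIT_TIME, 2),
    }
```

Invoking the function twice in a row will typically return `cold_start: true` the first time and `false` the second, since the same instance is reused while it stays warm.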

Depending on the runtime you choose, this setup process can take 50–200 ms or more before any execution actually starts. Java and .NET Lambdas often experience cold starts that last for several seconds!

Depending on your use case, cold starts may be a stumbling block that prevents you from adopting the Serverless paradigm. Cold Starts should be avoided in scenarios where low latency is a driving factor, e.g. customer-facing applications. Luckily, for many developers this issue is avoidable, because their workload is predictable and stable, or is mainly based on internal computation, e.g. data processing.

The AWS documentation provides an example that helps to understand cold start issues related to scaling. Imagine companies such as JustEat or Deliveroo, which experience very spiky traffic around lunch and dinner times.

\"spiky_traffic\"<\/p>\n

These spikes cause the application to run into limits, such as how quickly AWS Lambda can scale out after the initial burst capacity. After the initial burst, Lambda can scale linearly by an additional 500 instances per minute to serve concurrent requests, and each new instance must go through a cold start before it can handle incoming requests. The concurrency limit and the high latencies caused by cold starts can prevent your function from scaling fast enough to keep up with incoming traffic, causing new requests to be throttled.
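On the caller's side, throttled synchronous invocations fail with a `TooManyRequestsException` (HTTP 429). As a minimal sketch, assuming a deployed function named `my-function`, a client can absorb short throttling windows with exponential backoff:

```python
import json
import time

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client("lambda")

def invoke_with_backoff(function_name: str, payload: dict, max_retries: int = 5):
    """Synchronously invoke a function, backing off when Lambda throttles us."""
    for attempt in range(max_retries):
        try:
            response = lambda_client.invoke(
                FunctionName=function_name,
                InvocationType="RequestResponse",
                Payload=json.dumps(payload),
            )
            return json.load(response["Payload"])
        except ClientError as err:
            # Throttled requests fail with TooManyRequestsException (HTTP 429)
            if err.response["Error"]["Code"] != "TooManyRequestsException":
                raise
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
    raise RuntimeError(f"'{function_name}' still throttled after {max_retries} attempts")

# Hypothetical usage:
# result = invoke_with_backoff("my-function", {"order_id": 42})
```

This pattern only matters for synchronous calls, where the caller must handle the error itself; for asynchronous invocations, Lambda requeues throttled events and retries them on its own.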

\"function
courtesy of AWS<\/figcaption><\/figure>\n
Legend<\/h6>\n

\"\"<\/p>\n

Function instances<\/span><\/p>\n

\"\"<\/p>\n

Open requests<\/span><\/p>\n

\"\" Throttling possible<\/span><\/p>\n

## Reserved Concurrency

Concurrency has a regional limit that is shared among all the functions in a Region, so this must also be taken into account when some Lambdas are subject to very frequent calls. Reserving capacity for a critical function carves a dedicated slice out of that shared pool, as shown in the sketch below; the regional burst limits are summarized in the table that follows.
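As a minimal sketch (the function name is hypothetical), this is how you could reserve a fixed share of the regional pool for one function using boto3's `put_function_concurrency` API:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 100 concurrent executions for a critical function. Those 100 slots
# are guaranteed to this function and subtracted from the shared regional
# pool; they also act as a hard cap on how far the function can scale.
lambda_client.put_function_concurrency(
    FunctionName="my-critical-function",  # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```

Note the trade-off: reserved concurrency acts both as a guarantee and as a cap, and every reserved slot is no longer available to the other functions in the Region.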

### Burst Concurrency Limits