Best practices for AWS serverless architecture: In this article, you will find some of the best practices to keep in mind while building serverless applications on AWS. This knowledge will equip you with a deeper understanding of building serverless applications and introduce you to ideas and tips that go a long way in ensuring you build a well-optimized serverless application.
You will find different best practices for working with AWS Lambda, API Gateway, DynamoDB, and Step Functions. We will also look at security best practices and different serverless architecture patterns.
AWS Lambda Best Practices
AWS API Gateway Best Practices
Amazon DynamoDB Best Practices
Best Practices for AWS Step Functions
Serverless Security Best Practices
AWS Lambda Best Practices

✺ Use the programming best practices
Always follow the programming best practices of the language runtime that you're using. If you are using the Node.js runtime, adhere to the Node and JavaScript recommendations and best practices.
✺ Keep the declarations and instantiations outside the Lambda handler
This allows the Lambda handler to reuse these objects when the container gets reused. If you are using database connections, this matters even more.
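As a minimal Node.js sketch (the table name and environment variable are hypothetical), the DynamoDB client below is created once at container initialization and reused across warm invocations:

```javascript
const AWS = require('aws-sdk');
// Created at init time, outside the handler, so warm invocations reuse it
const dynamodb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // The client above is NOT re-created on warm invocations
  const result = await dynamodb
    .get({ TableName: process.env.TABLE_NAME, Key: { id: event.id } })
    .promise();
  return result.Item;
};
```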
✺ Keep the Lambda handler lean
Move the core logic of your Lambda function outside the handler. This is especially important when the function contains a large amount of code.
Move the core logic into separate functions and call them from the Lambda handler as needed. This keeps the code easier to maintain in the long run.
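A sketch of this structure, where `processOrder` and its module path are hypothetical placeholders for your core logic:

```javascript
// Core business logic lives in its own module, independently testable
const { processOrder } = require('./lib/orders');

exports.handler = async (event) => {
  // The handler only parses input and formats output
  const order = JSON.parse(event.body);
  const result = await processOrder(order);
  return { statusCode: 200, body: JSON.stringify(result) };
};
```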
✺ Avoid hardcoding within our Lambda functions
As in any codebase, we must avoid hardcoding values within our Lambda functions.
Make use of environment variables instead of hardcoding constant values.
Use the AWS Lambda Environment Variables to pass operational parameters to your function.
For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
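A hedged sketch in Node.js, assuming a `BUCKET_NAME` environment variable is configured on the function:

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  await s3
    .putObject({
      // BUCKET_NAME comes from the function's environment variables,
      // so it can differ per stage (dev/test/prod) without a code change
      Bucket: process.env.BUCKET_NAME,
      Key: `uploads/${event.id}.json`,
      Body: JSON.stringify(event),
    })
    .promise();
  return { status: 'stored' };
};
```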
✺ Use different Lambda functions for different tasks
It's also a good idea to write a separate Lambda function for each task. In other words, one function should perform only one task and nothing more.
This resembles the microservices approach and has its own advantages. Notably, it allows for better performance optimization, since you can tune each Lambda function separately and independently.
✺ Check for dependencies
Before deploying a function to production, it's best to check its dependencies and make sure that only the needed ones are included in the package.json file or in the deployable package.
This reduces the size of the deployment package. At the same time, certain libraries, such as the AWS SDK in the Node.js runtime, are already available in the Lambda runtime, so we can exclude them from our package.
However, also keep in mind that Lambda keeps updating the libraries regularly. And sometimes, if your code depends on some specific version of the library, an update might break your function.
If you suspect such a likelihood, it’s better instead to package that version of the library directly in the deployment package, and not use the one available on the Lambda runtime.
✺ Always keep a watch on the Lambda logs
Monitor the execution duration and memory consumption. This will allow you to adjust these values over a period of time and ensure that you get the optimal level of price to performance for your functions.
✺ Grant only the necessary IAM permissions
Many times, we set up IAM policies with full access to the target AWS services. Instead, we must grant only the permissions the function actually needs.
✺ Use the -c flag with your serverless commands
This ensures that the commands only generate a CloudFormation template and do not actually execute it. You can then execute this CloudFormation template from the CloudFormation console or as part of your CI/CD process.
✺ Unlink temporary files
If you are creating any files in /tmp, the ephemeral storage of the Lambda container, make sure you unlink those files before exiting the handler, unless you want them to be available for reuse in case the container gets reused.
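A minimal sketch of this cleanup in Node.js:

```javascript
const fs = require('fs');
const path = require('path');

exports.handler = async (event) => {
  // /tmp is the Lambda container's ephemeral storage
  const tmpFile = path.join('/tmp', 'work-file.json');
  fs.writeFileSync(tmpFile, JSON.stringify(event));

  // ... process the file ...

  // Unlink before exiting, unless you deliberately want the file
  // to survive for reuse on a warm (reused) container
  fs.unlinkSync(tmpFile);
  return { status: 'done' };
};
```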
✺ Remove unused Lambda functions
There are account-level limits on Lambda, such as the total storage consumed by your deployment packages. So, it's also a good idea to remove functions that we no longer need or use.
✺ Always make use of the error handling mechanisms
Put your code in try-catch blocks, throw errors wherever needed, and handle exceptions. Also make use of dead letter queues wherever appropriate.
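A sketch of this pattern in Node.js, where `doWork` stands in for your core logic:

```javascript
// Hypothetical core logic; throws on bad input
async function doWork(event) {
  if (!event.id) throw new Error('Missing id');
  return { id: event.id };
}

exports.handler = async (event) => {
  try {
    return await doWork(event);
  } catch (err) {
    console.error('Failed to process event', err);
    // Re-throwing marks the invocation as failed; for asynchronous
    // invocations Lambda can then retry and finally route the event
    // to a configured dead letter queue
    throw err;
  }
};
```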
✺ Use VPC only if necessary
If your function depends on resources within a VPC, e.g. RDS instances, then you can put that function within a VPC. Otherwise, there is no need to use a VPC.
Invoke permissions are decoupled from execution permissions, so there is little to no benefit in putting a function inside a VPC unless it depends on VPC-based resources.
Using a VPC is also likely to add additional latency to your functions.
✺ Be careful of using reserved concurrency
If you are planning to use reserved concurrency, make sure that other functions in your account have enough concurrency to work with.
This is because every account gets a default limit of 1,000 concurrent Lambda executions per region, shared across all functions.
So, if you reserve concurrency for a function, the concurrency limit for the remaining functions will reduce by that amount.
✺ Keep your containers warm
So that they can be reused, which reduces the latency introduced by cold starts. You can easily schedule dummy invocations with CloudWatch Events to keep the functions warm.
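One rough sketch of this pattern, assuming the scheduled rule sends a payload with a hypothetical `warmup` marker field:

```javascript
exports.handler = async (event) => {
  // A scheduled CloudWatch Events rule invokes the function periodically
  // with { "warmup": true }; short-circuit so the dummy call stays cheap
  if (event.warmup) {
    return 'warmed'; // keeps the container alive, skips real work
  }
  // ... normal processing ...
  return { status: 'processed' };
};
```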
✺ Use the frameworks, like AWS SAM or the Serverless Framework
It’s always recommended to use a streamlined approach to your serverless development and deployment. Develop and test functions and APIs locally using the SAM CLI’s local commands or the Serverless Framework plugins, and validate your templates.
✺ Use DevOps tools
Make use of CI/CD tools or DevOps tools that are available and follow a structured process.
This will help you get your application up and running quickly and will also help reduce breakdowns, maintenance effort, and downtime.
AWS API Gateway Best Practices

✺ Keep API definition simple
Move all the logic to the backend Lambda functions.
So, unless absolutely necessary, we can simply use the Lambda proxy integration, where API Gateway merely acts as a proxy between the caller and the Lambda function.
All the data manipulation then happens in one place: inside the Lambda handler.
This approach keeps all the logic in one place, making it easier to maintain in the long run.
That said, it’s a good idea to do content validations or schema validation of the incoming requests within API Gateway.
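With the Lambda proxy integration, the handler receives the raw request and is expected to return a response object shaped like the one in this sketch:

```javascript
exports.handler = async (event) => {
  // event.body carries the raw request payload as a string
  const body = JSON.parse(event.body || '{}');

  // Proxy integration expects statusCode / headers / body in the response
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ received: body }),
  };
};
```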
✺ Return appropriate responses back to the caller
If you’re using the Lambda (non-proxy) integration instead, make sure you map the different HTTP response codes and return a useful response back to the caller, instead of returning a generic server-side error.
✺ Enable logging options
Make sure you enable logging options in API Gateway so that it’s easy to track failures down to their causes. Always return useful responses from API Gateway, and enable Amazon CloudWatch Logs for your APIs.
✺ Use custom domains
When using API Gateway in production, it’s recommended to use custom domains instead of API Gateway URLs.
This allows you to retain the same URL even in case you want to completely replace the API with a new one.
For example, if you deploy your API with a new name using SAM or the Serverless Framework, it’s going to remove the existing API and create a new one with the new name. And this new API will have a different API Gateway endpoint. Thus, using a custom domain name will not break your API in events like this one.
✺ Use frameworks like AWS SAM
And of course, using frameworks like AWS SAM or the Serverless Framework is always the best approach when working on Serverless projects.
✺ Deploy APIs closer to your customers
Consider deploying your APIs closer to your customers.
Choosing the AWS region that’s closer to your customers will certainly improve the latency of your APIs, thereby boosting the speed.
Alternatively, you may want to consider using Edge-optimized endpoints.
These are accessed through a CloudFront distribution.
You could also add caching to your APIs to get additional performance gains.
Amazon DynamoDB Best Practices

✺ Ensure uniform data access
First and the most important is the Table Design.
DynamoDB tables provide the best performance when designed for Uniform Data Access. DynamoDB divides the provisioned throughput equally between all the table partitions.
And hence, in order to achieve maximum utilization of capacity units, we must design our table keys in such a way that the read and write loads are uniform across partitions, or partition keys.
When DynamoDB tables experience non-uniform access patterns, the result is what are called hot partitions, meaning some partitions are accessed heavily while others remain idle.
When this happens, the idle provisioned capacity is wasted while we still have to keep paying for it.
So, we don’t quite get the optimal performance for the price we pay, if we don’t pick the table partition keys thoughtfully.
Of course, you can use caching services like DAX to cache the hot data. However, we should still focus on having the tables designed for uniform access because DAX doesn’t come cheap.
✺ Avoid temporary substantial scaling up
When changing the provisioned throughput for any DynamoDB table, i.e. scaling up or scaling down, we must avoid substantially scaling up the provisioned capacity temporarily and then scaling it back down substantially.
We may decide to do this to speed up data uploads or data migration activities, for example. But this often has negative implications.
It’s important to note that substantial increases in provisioned capacity almost always result in DynamoDB allocating additional partitions.
If we subsequently scale the capacity down, DynamoDB will NOT de-allocate the previously allocated partitions.
And since throughput always gets divided equally among all partitions, after scaling down each partition receives a very small share of capacity units. This results in substantial degradation in performance.
✺ Keep item attribute names short
It is a good idea to keep the attribute names shorter.
This helps reduce the item size and thereby the costs as well. At the same time, make sure that these names are intuitive for other developers to understand.
✺ Compress large non-key attributes
If we must store large values in our items, we should consider compressing the non-key attributes.
We can use techniques like GZIP, for example. Alternatively, you can store large items in S3, and then only the pointers to those items can be stored in DynamoDB.
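A sketch of this in Node.js using the built-in zlib module (the table name, environment variable, and item shape are hypothetical):

```javascript
const zlib = require('zlib');
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

// Compress a large non-key attribute before writing it; the Buffer is
// stored as a DynamoDB binary attribute
async function saveItem(id, largeText) {
  const compressed = zlib.gzipSync(largeText);
  await dynamodb
    .put({
      TableName: process.env.TABLE_NAME,
      Item: { id, payload: compressed },
    })
    .promise();
}

// On read, decompress with: zlib.gunzipSync(item.payload).toString()
```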
✺ Performing efficient read operations
We know that scan operations scan the entire table and hence these are less efficient than the Query operations.
So, we must avoid scan operations as far as possible.
It’s also important to note that filters are always applied after the query or scan operation completes, so filtering does not reduce the read capacity consumed.
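For illustration, a sketch of the preferred approach: a Query against a hypothetical Orders table keyed on customerId, which reads only the items under one partition key value rather than the whole table:

```javascript
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

// Unlike a Scan, this Query touches only the requested partition
async function getOrders(customerId) {
  return dynamodb
    .query({
      TableName: 'Orders', // hypothetical table, partition key: customerId
      KeyConditionExpression: 'customerId = :c',
      ExpressionAttributeValues: { ':c': customerId },
    })
    .promise();
}
```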
✺ Use eventual consistency as far as possible
Unless our application demands strongly consistent reads, we must always opt for eventual consistency. Eventually consistent reads cost half as much as strongly consistent ones.
Also remember that any read operations on global secondary indexes are always eventually consistent.
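A small sketch showing where this choice is made, again with hypothetical table and key names:

```javascript
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

async function getOrder(customerId, orderId) {
  return dynamodb
    .get({
      TableName: 'Orders', // hypothetical table
      Key: { customerId, orderId },
      // false is the default: eventually consistent, half the read cost;
      // set to true only when the application truly needs strong consistency
      ConsistentRead: false,
    })
    .promise();
}
```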
✺ Use Local Secondary Indexes sparingly
Let’s talk about the best practices when using Local Secondary Indexes.
Remember that local secondary indexes share the same physical partition space that is used by the table. So, adding more indexes reduces the space available for table data.
This doesn’t mean not to use them, but use them as per your application’s needs.
✺ Project fewer attributes on to secondary indexes
When choosing the projections, we can project up to a maximum of 20 attributes per index. So, choose them carefully and project as few attributes as possible, typically only the ones that your application is likely to use. If you just need keys, use the KEYS_ONLY projection, which produces the smallest index.
Specify ALL if you want your queries to return the entire table item but sorted by a different sort key, for example.
At the same time, it’s important to project all the attributes that your application needs. Otherwise, if your read operation requests additional attributes which were not projected in the index, DynamoDB will have to fetch those attributes from the main table.
And, this will result in an additional read operation. So, spend a good amount of time before deciding on which attributes to project.
The idea is to consider all current and future read operations that our application might need to perform.
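As a sketch, a hypothetical global secondary index definition (as passed in CreateTable parameters) that projects only what the application reads:

```javascript
// Index, key, and attribute names here are hypothetical
const statusIndex = {
  IndexName: 'StatusIndex',
  KeySchema: [{ AttributeName: 'status', KeyType: 'HASH' }],
  Projection: {
    ProjectionType: 'INCLUDE',        // KEYS_ONLY | INCLUDE | ALL
    NonKeyAttributes: ['orderTotal'], // project only what queries actually need
  },
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 },
};
```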
✺ Design Global Secondary Indexes for uniform data access
Let’s look at the best practices when using Global Secondary Indexes.
Just like table indexes, we must choose the keys for global secondary indexes in a manner that results in uniform workloads.
If the table item as a whole is large, reading an entire item to retrieve just a few attributes may result in consumption of a lot of read capacity.
So, we can create a global secondary index by projecting just the needed attributes, or even just the key attributes. This creates a much smaller index and can result in a considerable amount of savings.
One very important use of a global secondary index is creating an eventually consistent read replica.
The idea here is to create a global secondary index with the same keys as that of the table. And then use this index to divert some of the load from the main table to the index.
And being a global secondary index, we can control the throughput of this index separately. This could also be useful if you have two applications that read from this table and you want to control the table throughput individually for each application.
In this case, one app could use the table and the other could use the global secondary index to perform reads.
Best Practices for AWS Step Functions

✺ Always use timeouts in the Task states
Specifying explicit timeouts helps in preventing state machine executions from getting stuck.
For example, if something goes wrong in the backend Lambda function and we have not specified a timeout within that task state, the state machine will wait indefinitely for a response that the Lambda function will never send.
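A sketch of a Task state with an explicit timeout, written here as the JavaScript object you would serialize into the Amazon States Language definition (the state name and ARN are placeholders):

```javascript
const definition = {
  StartAt: 'ProcessOrder',
  States: {
    ProcessOrder: {
      Type: 'Task',
      Resource: 'arn:aws:lambda:us-east-1:123456789012:function:processOrder',
      TimeoutSeconds: 60, // fail the state instead of waiting indefinitely
      End: true,
    },
  },
};
```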
✺ Handle errors with Retriers and Catchers
Lambda functions can run into errors; for example, there could be network issues, runtime errors, or exceptions.
So, it’s recommended to always handle these errors within our state machine. We can use Retriers or Catchers to handle these situations.
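A sketch of a Task state using Retriers and a Catcher, with a hypothetical fallback state name:

```javascript
const taskState = {
  Type: 'Task',
  Resource: 'arn:aws:lambda:us-east-1:123456789012:function:processOrder',
  // Retriers: retry transient failures with exponential backoff
  Retry: [
    {
      ErrorEquals: ['Lambda.ServiceException', 'States.Timeout'],
      IntervalSeconds: 2,
      MaxAttempts: 3,
      BackoffRate: 2.0,
    },
  ],
  // Catcher: route anything still unhandled to a fallback state
  Catch: [
    {
      ErrorEquals: ['States.ALL'],
      Next: 'HandleFailure', // hypothetical fallback state
    },
  ],
  End: true,
};
```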
✺ Use S3 to store large payloads and pass only the payload ARN between states
There could be situations where we have to pass large payloads or large amounts of data between states, which can cause executions to fail.
So, if we expect the data or payload size to go over 32K, AWS recommends using S3 to store this data.
And then, instead of passing the entire payload between the states in your state machine, you simply pass the ARN of this payload stored in S3.
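A hedged sketch of this pattern, assuming a hypothetical PAYLOAD_BUCKET environment variable and payload fields:

```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Persist the large payload to S3 instead of returning it directly
  const key = `payloads/${event.executionId}.json`;
  await s3
    .putObject({
      Bucket: process.env.PAYLOAD_BUCKET,
      Key: key,
      Body: JSON.stringify(event.largeResult),
    })
    .promise();

  // Pass only the pointer between states, keeping the state I/O small
  return { payloadBucket: process.env.PAYLOAD_BUCKET, payloadKey: key };
};
```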
Serverless Security Best Practices

When talking about security, the most important service that comes to mind is IAM. We know that AWS Lambda uses a decoupled permission model with two types of permissions: invoke permissions and execution permissions.
Thus, Lambda functions essentially decouple the entity invoking the function from the entity that actually executes the Lambda function code.
The invoke permission only requires the caller to have access to invoke the Lambda function; no further access is needed.
The execution policy on the other hand is used by AWS Lambda to execute the function code.
There are some best practices to consider when setting these permissions or creating the execution roles for Lambda functions.
✺ Each Lambda function should have its own role
First, it’s a good idea for each Lambda function to have its own role, rather than using the same role across multiple Lambda functions.
The intention here is that the needs of our Lambda functions can change over time, and we might then have to alter existing permissions.
If each function has its own role, it keeps these permissions independent of each other. Of course, if you have similar microservices that you expect to not require additional permissions in future, you can reuse the IAM roles, and there is no harm in doing that. But otherwise, it’s a good idea to have each function with its own role.
✺ Avoid wildcard permissions and full access for Lambda functions
Always provide only the necessary permissions, keeping the policies as restrictive as possible. Even when it is not possible to avoid using a wildcard on a resource, you should still choose only the required actions in the IAM policy, keeping the policy as restrictive as possible.
Sometimes AWS might add a new action on a resource, and if your policy is using a wildcard on the actions, it’ll automatically receive this additional access, even though it may not require it.
Hence, it’s a good idea and a recommended idea to explicitly specify individual actions in the policy and not use wildcards.
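For example, a least-privilege policy document for a function's execution role might look like the following sketch (the table ARN is a placeholder):

```javascript
// Specific actions on a specific resource; no wildcards
const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['dynamodb:GetItem', 'dynamodb:PutItem'], // only what the function uses
      Resource: 'arn:aws:dynamodb:us-east-1:123456789012:table/Orders',
    },
  ],
};
```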
✺ Use environment variables to store sensitive data
Never hardcode sensitive data such as credentials in your function code. And make use of the KMS encryption service to encrypt this data at rest and in transit as well.
Also remember that environment variables are tied to the function version. So, it’s a good idea to encrypt them before you generate the version.
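A sketch of decrypting a KMS-encrypted environment variable once per container (the DB_PASSWORD variable name is hypothetical):

```javascript
const AWS = require('aws-sdk');
const kms = new AWS.KMS();

// Cache the plaintext for the container's lifetime; never log it
let dbPassword;

async function getDbPassword() {
  if (!dbPassword) {
    // DB_PASSWORD holds the base64-encoded KMS ciphertext
    const { Plaintext } = await kms
      .decrypt({ CiphertextBlob: Buffer.from(process.env.DB_PASSWORD, 'base64') })
      .promise();
    dbPassword = Plaintext.toString('ascii');
  }
  return dbPassword;
}
```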
✺ Never log the decrypted values of these variables to the console or any persistent storage
✺ Use least-privilege network settings for VPC Lambda functions
If you are using a Lambda function within a VPC, it is recommended to use least-privilege security groups, Lambda-function-specific subnets, and a network configuration that allows only the Lambda functions to access VPC resources.
✺ CI/CD Pipeline
When you’re using CI/CD pipelines for automated deployments, you must ensure that appropriate access control is in place.
For example, if pushing the code to the master branch triggers your deployment pipeline, then you must ensure that only the authorized team members have the ability to update the master branch.