AWS SAM quick guide: Lambda functions for serverless Python

Authored by Distinguished Engineers Dan Furman and Brian McNamara

Best practices for serverless Python function development

Python developers can leverage AWS Lambda functions (function-as-a-service) to build and deploy serverless applications quickly. But developing and operating functions can be daunting if you aren't yet familiar with the best practices. As people who have been there and done that, we can tell you that it doesn't have to be! In this blog post, we'll cover our top tips for Python developers of all levels when developing and operating AWS Lambda functions, tips that will make your life easier while keeping your code running smoothly at scale.

What is AWS Lambda?

AWS Lambda is a serverless computing platform that allows you to run code without provisioning or managing servers. Lambda can be used to build applications that process and respond to events; the individual units of code that handle those events are referred to as "Lambda functions."

Lambda functions can be built in any programming language that is supported by an AWS Lambda runtime environment. In addition, Lambda provides a wide range of sample function configurations for common event-triggered use cases, known as "blueprints," which can be used as a starting point for creating your own Lambda functions.

Lambda is a convenient way to run code in the cloud without having to worry about provisioning or managing servers. You can simply write your code, deploy it to AWS Lambda, and the service will take care of running and scaling it as events are received. As Lambda is event-driven, you can easily set up your code to automatically respond to events such as file uploads, database changes, API calls, or webhook callbacks.
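
For readers new to Lambda, here is a minimal sketch of what a Python function looks like. The event shape and response are illustrative and not tied to any particular event source:

```python
import json


def lambda_handler(event, context):
    """Entry point that Lambda invokes once per event.

    The shape of `event` depends on the event source: an S3 notification,
    an API Gateway request, a webhook payload, and so on.
    """
    print(f"Received event: {json.dumps(event)}")

    # For an API Gateway proxy integration, the response is a dict with a
    # status code and a JSON body; other event sources expect other shapes.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda"}),
    }
```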

Using AWS SAM to create serverless applications with an API

AWS Serverless Application Model (SAM) is a tool that allows Python developers to author, build, and deploy AWS Lambda functions and create serverless applications on AWS with an API. Developers define their serverless application’s resources, functions, and events using YAML or JSON, and SAM automatically provisions and manages the necessary AWS resources for them. AWS SAM also provides a CLI that can emulate the Lambda runtime locally for testing events, and that can package, deploy, and manage serverless applications. Because SAM is built on top of AWS CloudFormation, you can deploy your application using the same tools, controls, and processes that you use to deploy other CloudFormation stacks.

When developing serverless applications with AWS SAM, there are several best practices that Python developers should follow:

  1. Define all resources in a separate file from the function code. This will make it easier to manage and deploy your application.
  2. Make use of existing libraries whenever possible. This can save you time and effort in development and make your code more reliable.
  3. Use environment variables to store sensitive data, such as credentials or API keys, as well as environment-specific settings. Since Lambda functions often require access to sensitive data, there are out-of-the-box solutions like AWS Secrets Manager to securely retrieve secrets. This will help keep your application secure and avoid hard-coding sensitive information into your code.
  4. Use logs, metrics, and traces to observe your Lambda functions. This will give you visibility into how your functions are performing and allow you to troubleshoot any issues that may arise.
  5. Automate tests that run locally before you deploy to Lambda, using a framework like Pytest (a minimal test sketch follows this list). Running tests against your code's behavior and implementation can catch issues before the code is ever committed or leaves your desktop, which helps minimize problems when deploying to Lambda. After deployment, you can use Pytest to run integration tests and to automate verification of the IAM execution role and any specialized network configuration before promoting the code to customer-facing environments.
  6. Equip yourself with proven resources to organize and maintain your code. AWS Lambda Powertools for Python is an open-source developer toolkit for serverless development that works well with patterns commonly used by API developers, such as the strategy pattern and MVC controllers. These patterns also simplify testing through the use of Pytest fixtures, making it easy to test code segments separately. This is important for all Python development, serverless or not, and whether it involves APIs, files, or other event sources.
  7. Handle errors gracefully by catching and handling exceptions in your code. This will help ensure your application can recover from errors without impacting users or ignoring their needs. For example, if an error occurs while communicating with a downstream API, never let it pass silently or allow Lambda to “eat” the error and not respond. Instead, improve your application’s resilience by returning an appropriate error message to the calling client, as illustrated in the handler sketch after this list.
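
To make points 3 and 7 concrete, here is a hedged sketch of a handler that reads a secret referenced by an environment variable and returns a clear error response instead of letting exceptions escape. The environment variable name, secret, and downstream call are hypothetical:

```python
import json
import logging
import os

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Created once per execution environment so it is reused across invocations.
secrets_client = boto3.client("secretsmanager")


def get_api_key() -> str:
    """Fetch a secret at runtime instead of hard-coding it in the package."""
    # API_KEY_SECRET_ID is a hypothetical environment variable defined in the
    # SAM template; it holds the secret's name or ARN, not the secret itself.
    secret_id = os.environ["API_KEY_SECRET_ID"]
    response = secrets_client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]


def lambda_handler(event, context):
    try:
        api_key = get_api_key()
        # ... call a downstream API with api_key and build the result ...
        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}
    except Exception:
        # Never let the error pass silently: log it with a stack trace and
        # return a clear error response to the calling client.
        logger.exception("Unhandled error while processing the request")
        return {
            "statusCode": 500,
            "body": json.dumps({"message": "Internal error, please try again"}),
        }
```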

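And a minimal Pytest sketch for that handler, as mentioned in point 5. The module name app, the event shape, and the stubbed secret value are illustrative:

```python
# test_handler.py: a minimal Pytest sketch for the handler above.
import json
import os

# Give boto3 a region so importing the handler module works outside AWS.
os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")

import pytest

import app  # hypothetical module containing lambda_handler and get_api_key


@pytest.fixture
def api_event():
    """A trimmed-down API Gateway proxy event for local testing."""
    return {"httpMethod": "GET", "path": "/hello", "body": None}


def test_handler_returns_success(api_event, monkeypatch):
    # Stub the secret lookup so the test never talks to AWS.
    monkeypatch.setattr(app, "get_api_key", lambda: "test-key")

    response = app.lambda_handler(api_event, context=None)

    assert response["statusCode"] == 200
    assert json.loads(response["body"])["status"] == "ok"
```
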
Maximizing observability in your AWS Lambda functions

Monitoring and observability are both useful practices for guaranteeing the health of Lambda-based applications. Monitoring is all about tracking the performance of your Lambda functions and ensuring that they're running as expected. This means collecting data on things like invocation count, invocation time, error rate, and so on.

Observability, on the other hand, is about understanding what's happening inside your Lambda functions. This means instrumenting your code to send signals like logs, traces, and metrics to get a better picture of what your code is doing and how it's performing.

Both monitoring and observability are important for keeping your Lambda functions running smoothly. Monitoring focuses on performance: measuring your application's metrics against its requirements. Observability can be especially helpful when something goes wrong and you need to debug your code.

When developing AWS Lambda functions with Python, there are a few best practices you can follow to maximize observability.

  1. Log at the INFO level by default, but allow the log level to be changed without a new deployment. This will ensure that the information needed to troubleshoot an issue is captured in the logs while still giving you the familiar log level filter.
  2. Ensure your logs are structured. Structured logs are easier to search with tools like Amazon CloudWatch Logs Insights, without the need to develop custom parsers.
  3. Use a tool like CloudWatch Logs Insights to query and visualize your logs (you’ll hear this one a lot). This will help you quickly identify problems and resolve them faster. 
  4. Monitor your Lambda functions for errors and performance issues. By doing so, you can prevent problems from occurring in the first place or quickly mitigate them when they do occur. This may include automated rollback to a previous version of the Lambda function.
  5. Use AWS Lambda Powertools for Python to adopt these observability practices without needing to build and maintain your own tooling (see the sketch following this list).
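
Here is a minimal sketch of what these practices can look like with AWS Lambda Powertools for Python. The service, namespace, and metric names are illustrative, and the log level can typically be adjusted through an environment variable (for example, LOG_LEVEL) rather than a redeployment:

```python
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit

# Service and namespace names here are illustrative.
logger = Logger(service="orders")
tracer = Tracer(service="orders")
metrics = Metrics(namespace="OrdersApp", service="orders")


@logger.inject_lambda_context(log_event=True)
@tracer.capture_lambda_handler
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    # Emits a structured JSON log line that CloudWatch Logs Insights can
    # query without a custom parser.
    logger.info("Processing request", extra={"order_id": event.get("order_id")})

    metrics.add_metric(name="RequestsProcessed", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}
```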

Considerations when developing a serverless API

When developing a serverless API, there are a few key considerations to keep in mind to ensure smooth operation and scalability.

  1. Since serverless APIs are stateless, all data must be stored externally, in a database such as Amazon DynamoDB or an object store like Amazon S3.
  2. Consider how your API will be invoked: will it be via an API Gateway? An AppSync GraphQL call? Directly through Lambda? Your response will need to match the expected payload based on the service.
  3. Consider how you will validate your API inputs. Your API endpoint may allow you to define a JSON Schema model to validate request and body parameters. You can also use Python modules like Pydantic to perform flexible validations. AWS Lambda Powertools has modules to help with input parsing and validation, which expose both the fastjsonschema and pydantic packages for your use (see the sketch after this list).
  4. Think about how you will handle authentication and authorization for your API. This may be integrated into the service calling your function such as API Gateway or AppSync, or you may need to implement this directly. If adding additional depth in your function, consider using middleware to consolidate this code.
  5. Utilize Powertools and create Lambda layers to package libraries and other dependencies. The benefits of using layers for dependency management include reducing the size of uploaded deployment archives and removing redundancies like duplicate dependencies, which can help you iterate faster.
  6. Consider using CloudWatch Logs Insights to help monitor and troubleshoot your function in production. This can be invaluable when things go wrong, and it can help you avoid or fix issues before they cause significant problems.
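
As a hedged sketch of points 2 and 3 together, the example below uses the AWS Lambda Powertools event handler with a Pydantic model. The route, fields, and module layout are illustrative:

```python
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.event_handler.exceptions import BadRequestError
from pydantic import BaseModel, ValidationError

app = APIGatewayRestResolver()


class CreateOrder(BaseModel):
    """Expected request body; missing or mistyped fields fail validation."""

    item_id: str
    quantity: int


@app.post("/orders")
def create_order():
    try:
        order = CreateOrder(**(app.current_event.json_body or {}))
    except (ValidationError, TypeError, ValueError) as exc:
        # Surface validation failures as a 400 instead of a generic 500.
        raise BadRequestError(f"Invalid order payload: {exc}")

    # ... persist the order to DynamoDB or S3 here ...
    return {"received": order.dict()}


def lambda_handler(event, context):
    # The resolver maps the API Gateway proxy event to the route above and
    # shapes the response payload that API Gateway expects.
    return app.resolve(event, context)
```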

Key metrics to use for serverless APIs

There are many factors to consider when developing serverless APIs with AWS Lambda, but some key metrics to keep in mind are:

Throughput

APIs typically measure their scale in throughput, or the number of transactions that can be handled per second (TPS). Lambda measures scale in concurrency: the number of function instances processing events at the same time. Concurrency is the product of the request rate and the average duration of a function. When scaling throughput, it is important to drive the duration of a function as low as possible; for a given concurrency limit, a shorter duration increases the number of requests per second your API can handle and drives your costs down.
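
As a rough, back-of-the-envelope illustration (the numbers are invented):

```python
# Rough sizing: concurrency ~= request rate (TPS) x average duration (seconds).
requests_per_second = 200        # target throughput for the API
average_duration_seconds = 0.25  # average function duration

concurrency = requests_per_second * average_duration_seconds
print(concurrency)  # 50.0 concurrent executions

# Cutting the average duration in half (to 0.125s) would halve the concurrency
# needed for the same throughput, leaving more headroom under your limit.
```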

Error rates

What percentage of requests result in errors?

Cost

How much does it cost to run your API?

Monitoring these metrics will help ensure that your API is performant and cost-effective.

Using AWS SAM to create an application to process uploads to an Amazon S3 bucket

AWS SAM is an excellent tool for creating AWS Lambda functions that process uploads to an Amazon S3 bucket. Here are some best practices to keep in mind when developing and operating your Lambda function (a minimal handler sketch follows the list):

  1. Test your function thoroughly before deploying it to production. This will ensure that it works as expected and minimizes the risk of unexpected issues.
  2. When using third-party libraries in your Lambda function, be sure to vendor them so they are bundled with your code when deployed. This will avoid issues caused by different versions of the libraries being used in production and development environments.
  3. Keep an eye on your Lambda function's deployment package size. If it gets too large, it can increase cold start times and exceed the maximum allowed size for Lambda deployment packages. Trimming dependencies is always a best practice to follow, and container packaging, creating layers, and file compression can help reduce package size, too.
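
For reference, here is a minimal sketch of a handler that a SAM template might wire to S3 object-created notifications; the processing step is left as a placeholder:

```python
import urllib.parse

import boto3

# Created once per execution environment so it is reused across invocations.
s3 = boto3.client("s3")


def lambda_handler(event, context):
    """Process each object referenced in an S3 notification event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (spaces become '+', for example).
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        obj = s3.get_object(Bucket=bucket, Key=key)
        print(f"Processing s3://{bucket}/{key} ({obj['ContentLength']} bytes)")
        # ... parse, transform, or move the uploaded object here ...
```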

Key metrics to consider for file processing applications

There are many factors to consider when optimizing the performance of a file processing application. Here are some core examples:

Execution time

This is the most important metric to track, as it directly impacts the user experience. You want to keep your execution time as low as possible so users can get results quickly.

Memory usage

Another important metric to track is memory usage, particularly for serverless applications like AWS Lambda, since memory usage impacts both execution time and cost. It’s essential that your application is using memory efficiently, so consider using tools like AWS Lambda Power Tuning, file systems like Amazon EFS, or storage resources like ephemeral storage for assistance.

Cost

While not directly related to performance, cost is an essential consideration when operating a file processing application on AWS Lambda. By monitoring your other key metrics, you can help keep your costs under control.

Conclusion

Developing and operating AWS Lambda functions can be a great way to save time and money on your development projects. By applying the best practices that we’ve discussed here, you should be able to get your project off the ground quickly and efficiently without compromising quality or functionality. Taking advantage of these tips will also help ensure that any issues encountered along the way are handled in a timely manner so that you don’t have to worry about costly delays or unexpected problems down the line.

