
terraform-aws-lambda

This Terraform module creates and uploads an AWS Lambda function and hides the ugly parts from you.

Features

  • Only appears in the Terraform plan when there are legitimate changes
  • Creates a standard IAM role
    • Policy for CloudWatch Logs
    • Can add additional policies if required
  • Zips up a source file or directory
  • Installs dependencies from requirements.txt for Python functions
    • Only does this when necessary, not every time
  • Uses Docker with the lambci Docker image so that native extensions (numpy, scipy, pandas, etc.) are built against the Lambda runtime environment
  • Slims the Lambda package (an illustrative sketch follows this list)
    • Strips shared object files (roughly a 20% saving)
    • Removes .py files in favour of compiled .pyc files (faster startup and roughly a 20% saving)
    • Removes tests and package metadata (minor savings, but why not)
    • Removes packages already available in the AWS Lambda environment, such as boto3 (roughly 50 MB uncompressed saving)
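
For illustration only, the slimming step is roughly equivalent to the following shell commands run against a hypothetical build directory containing the installed dependencies; the actual logic lives in this module's build script and may differ in detail.

find build -name '*.so*' -type f -exec strip {} + 2>/dev/null                          # strip shared objects
python -m compileall -b -q build                                                       # write .pyc files next to the sources
find build -name '*.py' -type f -delete                                                # keep only the .pyc files
find build -type d \( -name tests -o -name '*.dist-info' \) -prune -exec rm -rf {} +   # drop tests and package metadata
rm -rf build/boto3 build/botocore                                                      # already provided by the Lambda runtime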

Troubleshooting

You may get into a situation where terraform apply fails with an error like this:

Error: Error in function call

  on .terraform/modules/ou_athena_update_lambda/archive.tf line 49, in resource "aws_s3_bucket_object" "lambda_package":
  49:   etag       = filemd5(data.external.built.result.filename)
    |----------------
    | data.external.built.result.filename is ".terraform/modules/ou_athena_update_lambda/builds/f8e8c637a03939482e75ebe52204c75cc8dc510a512cf04fc0b71e7af44008cf.zip"

Call to function "filemd5" failed: no file exists at
.terraform/modules/ou_athena_update_lambda/builds/f8e8c637a03939482e75ebe52204c75cc8dc510a512cf04fc0b71e7af44008cf.zip

This happens if the remote state holds a reference to the Lambda package (including its filename), but that built zip file doesn't exist locally. When the module then tries to take the MD5 of that file to check whether it needs updating, it fails. The easiest fix I found was to destroy the resource (the Lambda module) in the state file and re-run the apply.
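
For example, using the module address from the error output above (adjust the address to match your own configuration):

terraform destroy -target=module.ou_athena_update_lambda
terraform apply

Alternatively, terraform state rm module.ou_athena_update_lambda only removes the module's entries from the state file, leaving any deployed resources in place.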

Requirements

  • Python 3.6 or higher
  • Docker
  • Linux/Unix/Windows

Terraform version compatibility

| Module version | Terraform version |
|----------------|-------------------|
| 1.x.x          | 0.12.x            |
| 0.x.x          | 0.11.x            |
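
To pin the module to a release that matches your Terraform version, reference a tag in the module source. A minimal sketch (the tag shown is a placeholder, not a specific release):

module "lambda" {
  source = "github.com/claranet/terraform-aws-lambda?ref=v1.0.0"

  // ... function_name, handler, runtime, source_path, etc. as in the Usage examples below.
}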

Usage

Upload your Lambda code directly using the API

This method is the simplest and is recommended for Lambda packages under 50 MB (zipped).

module "lambda" {
  source = "github.com/claranet/terraform-aws-lambda"

  function_name = "deployment-deploy-status"
  description   = "Deployment deploy status task"
  handler       = "main.lambda_handler"
  runtime       = "python3.6"
  timeout       = 300

  // Specify a file or directory for the source code.
  source_path = "${path.module}/lambda.py"

  // Add additional trusted entities for assuming roles (trust relationships).
  trusted_entities = ["events.amazonaws.com", "s3.amazonaws.com"]

  // Attach a policy.
  policy = {
    json = data.aws_iam_policy_document.lambda.json
  }

  // Add a dead letter queue.
  dead_letter_config = {
    target_arn = aws_sqs_queue.dlq.arn
  }

  // Add environment variables.
  environment = {
    variables = {
      SLACK_URL = var.slack_url
    }
  }

  // Deploy into a VPC.
  vpc_config = {
    subnet_ids         = [aws_subnet.test.id]
    security_group_ids = [aws_security_group.test.id]
  }
}

Upload your Lambda code to S3 first, then create the Lambda from there

For Lambda packages over 50 MB zipped but still under 250 MB uncompressed, you can upload the package to S3 first and then create the Lambda function from it.

Details on this technique and the relevant limits: https://hackernoon.com/exploring-the-aws-lambda-deployment-limits-9a8384b0bec3

To do so, specify the s3_bucket_lambda_package argument.

The module will create that bucket and upload the Lambda package to it before deployment.

module "lambda" {
  source = "github.com/claranet/terraform-aws-lambda"

  s3_bucket_lambda_package  = "123456789123-lambda-package-deploy-status"
  function_name             = "deployment-deploy-status"
  description               = "Deployment deploy status task"
  handler                   = "main.lambda_handler"
  runtime                   = "python3.6"
  timeout                   = 300

  // Specify a file or directory for the source code.
  source_path = "${path.module}/lambda.py"

  // Attach a policy.
  policy = {
    json = data.aws_iam_policy_document.lambda.json
  }

  // Add a dead letter queue.
  dead_letter_config = {
    target_arn = aws_sqs_queue.dlq.arn
  }

  // Add environment variables.
  environment = {
    variables = {
      SLACK_URL = var.slack_url
    }
  }

  // Deploy into a VPC.
  vpc_config = {
    subnet_ids         = [aws_subnet.test.id]
    security_group_ids = [aws_security_group.test.id]
  }
}

Inputs

Inputs for this module are the same as the aws_lambda_function resource with the following additional arguments:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| source_path | The absolute path to a local file or directory containing your Lambda source code | string | | yes |
| s3_bucket_lambda_package | The name of the S3 bucket to create, used to upload the zipped Lambda package and deploy from there (allows larger Lambdas) | string | null | no |
| build_command | The command to run to create the Lambda package zip file | string | "python build.py '$filename' '$runtime' '$source'" | no |
| build_paths | The files or directories used by the build command, to trigger new Lambda package builds whenever build scripts change | list(string) | ["build.py"] | no |
| cloudwatch_logs | Set this to false to disable logging your Lambda output to CloudWatch Logs | bool | true | no |
| cloudwatch_logs_retention_days | Number of days to retain CloudWatch Logs | number | 3653 | no |
| lambda_at_edge | Set this to true if using Lambda@Edge, to enable publishing, limit the timeout, and allow edgelambda.amazonaws.com to invoke the function | bool | false | no |
| policy | An additional policy to attach to the Lambda function role | object({json=string}) | | no |
| trusted_entities | Additional trusted entities for assuming the Lambda function's role. lambda.amazonaws.com (and edgelambda.amazonaws.com if lambda_at_edge is true) is always included | list(string) | | no |
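
As a sketch of the module-specific arguments above (values are illustrative, not recommendations):

module "lambda" {
  source = "github.com/claranet/terraform-aws-lambda"

  function_name = "deployment-deploy-status"
  handler       = "main.lambda_handler"
  runtime       = "python3.6"
  source_path   = "${path.module}/lambda.py"

  // Retain the function's CloudWatch Logs for 30 days instead of the 3653-day default.
  cloudwatch_logs_retention_days = 30

  // Or disable CloudWatch Logs output entirely.
  // cloudwatch_logs = false

  // Allow CloudWatch Events to assume the function's IAM role.
  trusted_entities = ["events.amazonaws.com"]
}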

The following arguments from the aws_lambda_function resource are not supported:

  • filename (use source_path instead)
  • role (one is automatically created)
  • s3_bucket
  • s3_key
  • s3_object_version
  • source_code_hash (changes are handled automatically)

Outputs

| Name | Description |
|------|-------------|
| function_arn | The ARN of the Lambda function |
| function_invoke_arn | The Invoke ARN of the Lambda function |
| function_name | The name of the Lambda function |
| function_qualified_arn | The qualified ARN of the Lambda function |
| role_arn | The ARN of the IAM role created for the Lambda function |
| role_name | The name of the IAM role created for the Lambda function |
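
For example, assuming the module is instantiated as module "lambda" (as in the Usage examples above), the outputs can be referenced from other resources. This sketch lets a hypothetical CloudWatch Events rule invoke the function:

resource "aws_lambda_permission" "events" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = module.lambda.function_name
  principal     = "events.amazonaws.com"

  // aws_cloudwatch_event_rule.example is illustrative and not defined in this README.
  source_arn = aws_cloudwatch_event_rule.example.arn
}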

Gotchas

Docker on Windows + WSLv1

Running Docker on Windows with WSLv1 can be tricky. The main problem is file access: with WSLv1, the Docker daemon runs on the Windows host, not within WSL. It should work if this module's code is under /c/Users/ and that drive is shared with the Docker daemon.

WSLv2, which is due to be released, can run Docker natively and so shouldn't suffer from many of the problems seen under v1.

Failure leaves partial configuration

Sometimes the Lambda deployment (e.g. the Docker or Python build steps) will fail, leaving partially-configured resources: for example, the CloudWatch log group for the Lambda gets created in AWS but never recorded in the state file. If this happens, the terraform import commands can get confusing. If the top-level Terraform configuration contains a module which deploys the Lambda, and that module in turn uses this module, the import has to use the full nested resource address, like so:

terraform import module.github_read_all_team_update_lambda.module.github_read_all_team_update.aws_cloudwatch_log_group.lambda '/aws/lambda/azuread_roles_sync'

terraform import module.ecr_scan_notify.module.ecr_scan_notify_lambda.aws_cloudwatch_log_group.lambda '/aws/lambda/ecr_scan_notify'
