Creating a One-Step Deploy Compiled Layer

Layers are AWS Lambda’s way of allowing developers to add custom libraries, runtimes, and other dependencies into their Lambda functions. For Later, the most common use case is adding additional libraries or assets not available out of the box with Lambda.

It’s always been possible to bundle libraries with an individual Lambda function, but the size of the final package is restricted by Lambda’s deployment package size limits. Furthermore, before Layers there was no easy way to share code between Lambda functions.

Layers solve both of these issues by allowing developers to add libraries in a separate location, where they can be called by multiple functions. The Serverless Framework also now allows developers to create Layers managed by Serverless. This makes it super simple to have Layers with a one-step deploy.

The rest of this blog post assumes:

  1. You have Serverless and Docker installed and set up correctly
  2. You are somewhat familiar with AWS Lambda

Native dependencies

Native dependencies are dependencies that are compiled from C or C++. These run directly on the operating system without any intermediary.

Statically Linked Dependencies

Statically linked dependencies are native dependencies that include everything they need to run in an executable. These come as pre-compiled binary executable files and, for our purposes, are very convenient. You can just download the binary to your computer and it will run!

Similarly, it’s fairly straightforward to create Lambda Layers for these dependencies. ffmpeg is a good example of a commonly used library that is distributed as a statically linked binary. To create a Lambda Layer for ffmpeg using the Serverless Framework, one simply has to:

  1. Download & unzip ffmpeg
  2. Create a new Serverless project
  3. Add the following to the generated serverless.yml file:
             layers:
               ffmpeg:
                 path: << Insert relative path to ffmpeg >>
  4. run serverless deploy
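Putting those steps together, a minimal serverless.yml for an ffmpeg layer could look like the sketch below. The service name and folder name are placeholders of ours, not from the original post:

```yaml
service: ffmpeg-layer

provider:
  name: aws
  runtime: nodejs8.10

layers:
  ffmpeg:
    # relative path to the folder containing the unzipped ffmpeg binary
    path: ffmpeg
```

Running serverless deploy then publishes the layer to AWS, and other functions or services can attach it by its ARN.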

And that’s it! We didn’t have to compile ffmpeg to make sure it works with Lambda, since it’s already compiled to run everywhere.

Dynamically Linked Dependencies

Other libraries must be compiled for the environment they will run on; gifsicle is a good example. In addition to being compiled, gifsicle has further configuration, in the form of a ./configure script, that must be run to ensure it works properly.

Since AWS defines Lambda’s operating system and configuration, adding compiled dependencies to Lambda can be more of a challenge, because the compilation needs to happen in an environment that matches Lambda’s.

In the past, we at Later chose to avoid the problem entirely by not using Lambda for tasks that required dynamically linked libraries. We used Docker containers instead, where we could control the operating system and other native dependencies. The trade-off is that with all that freedom comes more responsibility: we had to manually handle hosting, memory management, logging, etc.

With Lambda, however, these responsibilities are passed off to AWS, which makes our lives a whole lot easier. That’s why we’d prefer to use Lambda if possible. The blocker has often been the size of the final package, as well as the number of steps required to generate a suitable executable. In the past, the workaround for using compiled dependencies in Lambda has been to:

  1. Spin up an EC2 instance with Amazon Linux installed
  2. Compile the required executable binaries there
  3. Copy the binary out of EC2, and onto your local machine
  4. Add the binary to your Serverless package

While this process works for the most part, it is a manual, multi-step process that is not always quickly reproducible. Furthermore, it is hard to automate in a CI/CD system. We can definitely do better and make this a one-step deploy!

Setting up the Gifsicle Layer for a one-step deploy

LambCI has built a Docker image that matches the Lambda environment as closely as possible. We can take advantage of this Docker image to compile our binary.

Building the binary

This Dockerfile will build the gifsicle executable by:

  1. Grabbing the LambCI Docker image
  2. Downloading the Gifsicle tar.gz
  3. Configuring and making Gifsicle
  4. Moving the gifsicle executable into /opt/gifsicle
     FROM lambci/lambda:build-nodejs8.10

     # Gifsicle version to build. The default 1.92 is an example of ours
     # (the original version was not specified); override it with
     # `docker build --build-arg GIFSICLE_VERSION=...`
     ARG GIFSICLE_VERSION=1.92

     # Compilation work: download and unpack the source tarball
     RUN curl -L -o "gifsicle-${GIFSICLE_VERSION}.tar.gz" \
           "https://www.lcdf.org/gifsicle/gifsicle-${GIFSICLE_VERSION}.tar.gz" \
       && tar xf "gifsicle-${GIFSICLE_VERSION}.tar.gz"

     WORKDIR "gifsicle-${GIFSICLE_VERSION}"

     # Configure and compile, then move the executable into /opt/gifsicle
     RUN ./configure \
       && make \
       && mkdir /opt/gifsicle \
       && mv ./src/gifsicle /opt/gifsicle

Automating the process with a build script

We can write a build script that will do the following steps:

  1. Spin up our Docker container
  2. Compile the Gifsicle executable
  3. Copy the binary from Docker to our local machine
  4. Stop and remove the Docker container
     #!/bin/bash -x
     set -e

     # Build the image defined by the Dockerfile above
     docker build -t gifsicle-lambda-layer .

     # Create a container from the image; `false` exits immediately, since
     # we only need the container's filesystem, not a running process
     CONTAINER=$(docker run -d gifsicle-lambda-layer false)

     # Copy the compiled /opt folder out of the container into ./layer
     docker cp "$CONTAINER":/opt layer

     # Clean up
     docker stop "$CONTAINER"
     docker rm "$CONTAINER"

At this stage, we have the binary on our computer, in the layer folder.
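With the binary in the layer folder, the layer itself can be declared in serverless.yml just like the ffmpeg example earlier. This is a sketch; the function name and handler are placeholders of ours:

```yaml
layers:
  gifsicle:
    # folder produced by the build script; its contents are mounted under /opt
    path: layer

functions:
  resize:
    handler: handler.handler
    layers:
      # a layer declared in the same service is referenced by its
      # TitleCased name plus the suffix "LambdaLayer"
      - { Ref: GifsicleLambdaLayer }
```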

Adding the build script into Serverless config

Now that we have a reproducible build script, we can have it run each time we execute serverless deploy. A handy Node plugin called serverless-plugin-scripts lets us trigger commands based on Serverless lifecycle hooks (e.g. before deploy, after package, etc.). All we have to do is add it to the serverless.yml file like so:

          plugins:
            - serverless-plugin-scripts

          custom:
            scripts:
              hooks:
                # assuming the build script above is saved as build.sh
                "before:package:createDeploymentArtifacts": ./build.sh

Now, when we run serverless deploy, the binary will be packaged up and sent to Lambda as a Layer!

Resources & Other Reading