A lighter way to deploy to AWS ECS

Building smooth and automated deployments has always been one of my favourite areas in software engineering. Over the years, I’ve worked with the NullSoft installer and Windows MSI, and even built a system based on Capistrano for Java deploys (boy, was that ever a terrible idea).

When Docker (oh joy!) came into my life in 2015 I was working at Meltwater to build backend search systems. As we moved to Docker, we decided to stand up Mesos/Marathon clusters in our data centers and created our own tooling for deployments and the corresponding vital secrets support – Lighter and Secretary (built mostly by my esteemed friend Mikael Johansson).

The problem Lighter set out to solve was merging information from different sources into one runtime setup: it had to take into account both the container itself and its needs, as well as settings specific to the target environment (e.g. staging, production), while also sprinkling in support for canary builds and blue/green deploys.

We figured at the time that the application itself would know the most about its requirements, and that these requirements would change as the code evolved. So the starting point for deployment should live with the code and be published as an artefact somewhere (a Maven repo in our case), tied to the same semver version number as the container itself.

Then we had a git repository with settings particular to each environment, e.g. cluster size and service pointers, and Lighter would take all these JSON and YAML files and merge them into one before pushing the configuration to Mesos/Marathon. Easy to understand and very usable, it turned out. We also had it set up to run as a Lambda function and deploy automatically based on git commits. Sweet!

Fast forward to earlier this week (May 2019), when I decided to give the Meteor-based app I currently work on some love: a proper build pipeline and a deployment process to a serverless environment (AWS ECS/Fargate). Since I’m partial to Terraform, I set up the infrastructure itself with that toolset (ECR for storing my docker images and a VPC with ECS for running them), but Terraform didn’t feel like a good fit for continuous deployments – I wanted something more lightweight that could run with limited access rights.

So I looked around for options but couldn’t find anything I really liked, so, like so many other engineers, I decided to build my own tooling – and did so in about a day. The rest of this blog post describes what I built and can hopefully serve as inspiration for others.

The basics

The deployment mechanism in ECS/Fargate is built around task definitions, which contain the runtime settings – such as which docker image to run and CPU/RAM limits – for containers running as an ECS service in an ECS cluster. A new deployment consists of registering a new task definition and then pointing the ECS service at that new revision.

I started out generating a skeleton json setup file by running

aws ecs register-task-definition --generate-cli-skeleton

Then I added that file to my application git repo as "ecs_task_template.json", yanked all the things that were dynamic in nature, and ended up with my core deployment settings as seen below (some names changed and secret names omitted to protect the innocent):

{
  "family": "my-app",
  "cpu": "256",
  "memory": "512",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "cpu": 256,
      "memory": 512,
      "name": "myapp",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ],
      "environment": [
        {
          "name": "ROOT_URL",
          "value": "Provided by task builder tool"
        },
        {
          "name": "PORT",
          "value": "3000"
        }
      ],
      "secrets": [
        { "name": "MONGO_URL" },
        { "name": "MAIL_URL" }
      ],
      "logConfiguration": {
        "logDriver": "awslogs"
      }
    }
  ]
}
You may notice that vital settings like the docker image are missing, and that the secrets entries lack the valueFrom property. These will be inserted later, since their values are particular to the environment the service runs in.

So now I had a versioned file that I wanted to retrieve for a particular docker image tag/version – but where could I put it in order to retrieve the one that matches a specific version? My first thought was to rely on git and store enough of the git hash in the docker tag itself (e.g. master_23DEADBEEF), but I just couldn’t find a simple way to retrieve a single file from a git repository. What if I could attach the file to the docker image itself, as metadata that can be downloaded without pulling the entire image?

Docker LABEL

Turns out there is. I added a LABEL entry to my Dockerfile and adapted my docker build script to inject the contents of my json file.

# Parts of my Dockerfile (reconstructed; the build-arg and label names
# match what the build script and the jq label lookup below expect)
ARG ECS_TASK_TEMPLATE
LABEL ECS_TASK_TEMPLATE=${ECS_TASK_TEMPLATE}
# A line from my build script
docker build -t ${TAG} --build-arg ECS_TASK_TEMPLATE="`cat .docker/ecs_task_template.json`" .

Static environment-specific settings

My staging environment adds the following settings, to be merged with the ones above, so I put them in a file named staging.json:

{
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "ROOT_URL",
          "value": "https://staging.myapp.com"
        }
      ],
      "logConfiguration": {
        "options": {
          "awslogs-group": "staging-logs",
          "awslogs-region": "eu-central-1",
          "awslogs-stream-prefix": "app-staging"
        }
      }
    }
  ]
}
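To make the merge semantics concrete, here is a minimal vanilla-JS sketch of a lodash-style deep merge: objects are merged key-wise and arrays index-wise, so the staging ROOT_URL overrides the template’s placeholder while the staging log options are combined with the logDriver from the template. The trimmed-down objects below are illustrative, not the full files.

```javascript
// Minimal sketch of a lodash-style deep merge: objects are merged key-wise,
// arrays index-wise. Lodash's _.merge handles more edge cases than this.
function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    const t = target[key];
    const s = source[key];
    if (t && s && typeof t === "object" && typeof s === "object") {
      deepMerge(t, s); // recurse into both nested objects and arrays
    } else {
      target[key] = s; // scalar values from the later source win
    }
  }
  return target;
}

// Trimmed-down stand-ins for the template and staging.json above
const template = {
  containerDefinitions: [
    {
      environment: [{ name: "ROOT_URL", value: "Provided by task builder tool" }],
      logConfiguration: { logDriver: "awslogs" }
    }
  ]
};
const staging = {
  containerDefinitions: [
    {
      environment: [{ name: "ROOT_URL", value: "https://staging.myapp.com" }],
      logConfiguration: { options: { "awslogs-group": "staging-logs" } }
    }
  ]
};

const merged = deepMerge(template, staging);
console.log(merged.containerDefinitions[0].environment[0].value);
// → https://staging.myapp.com
console.log(JSON.stringify(merged.containerDefinitions[0].logConfiguration));
// → {"logDriver":"awslogs","options":{"awslogs-group":"staging-logs"}}
```

Note how the index-wise array merge is exactly what makes a single-container task definition mergeable: containerDefinitions[0] in each source refers to the same container.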

LABEL retrieval and json merge

Now I could build a small script to download the template and merge it with my environment-specific static and dynamic content. The script below is the outcome. The docker registry authentication parts are ECR-specific but should be easily adapted to other registries like Docker Hub.

#!/usr/bin/env bash
# Builds a task definition file from various sources and registers it

set -eo pipefail

DOCKER_REGISTRY_REGION=eu-north-1 # Stockholm FTW!
DOCKER_REGISTRY_ACCOUNT=1111111111 # your account# here
DOCKER_REGISTRY="${DOCKER_REGISTRY_ACCOUNT}.dkr.ecr.${DOCKER_REGISTRY_REGION}.amazonaws.com"
APP_NAME=myapp
ECS_CLUSTER_NAME=staging # your cluster name here
ECS_SERVICE_NAME=myapp # your service name here
ENVIRONMENT=staging # from command line parameter in the future
TAG=$1

function get_ecs_template() {
    >&2 echo "Fetching metadata for $TAG"
    TOKEN=$(aws ecr get-login --no-include-email --region $DOCKER_REGISTRY_REGION | cut -d " " -f 6)
    SCHEMA_HEADER="Accept: application/vnd.docker.distribution.manifest.v2+json"

    # https://hackernoon.com/inspecting-docker-images-without-pulling-them-4de53d34a604
    # not sure why the assignment fails with exit code 6 but works anyway
    MANIFEST=$(curl -s "https://${DOCKER_REGISTRY}/v2/${APP_NAME}/manifests/$TAG" -H "$SCHEMA_HEADER" -u AWS:$TOKEN) || true

    ECSTASK_TEMPLATE=$(echo "$MANIFEST" | jq -r '.history[0].v1Compatibility' | jq -r '.config.Labels.ECS_TASK_TEMPLATE' | jq .)
    echo "$ECSTASK_TEMPLATE"
}

script_full_path=$(dirname "$0")
if [[ -z "$TAG" ]]; then
    echo "Usage: $0 <docker image tag for $APP_NAME in $DOCKER_REGISTRY>"
    exit 3
fi

EXEC_ROLE=$(aws iam get-role --role-name ecsTaskExecutionRole | jq -r .Role.Arn)

# now merge the ecs task definition template from the docker image (1)
# with environment-specific static values (2)
# and dynamic values (3)
TEMPLATES=$(
    get_ecs_template $TAG
    cat "${script_full_path}/../environments/${ENVIRONMENT}.json"
    jq -n --arg execRole $EXEC_ROLE --arg img "${DOCKER_REGISTRY}/${APP_NAME}:$TAG" '{"executionRoleArn":$execRole,"containerDefinitions":[{"image":$img}]}'
)

# If you have many secrets you need to loop here
SECRETS=$(aws secretsmanager list-secrets | jq '.SecretList|[.[]|{Name,ARN}]')

echo "$TEMPLATES" | "$script_full_path/processandmergeobjects" "${SECRETS}" | jq . >task.json

>&2 echo "Registering new task definition"

REGISTER_RESULT=$(aws ecs register-task-definition --cli-input-json file://./task.json)

TASK_ARN=$(echo "$REGISTER_RESULT" | jq -r .taskDefinition.taskDefinitionArn)

UPDATE_RESULT=$(aws ecs update-service --cluster $ECS_CLUSTER_NAME --service $ECS_SERVICE_NAME --task-definition $TASK_ARN | jq .)

>&2 echo "Completed deploying $TAG, new task definition is $TASK_ARN"

Some parts of the script are noteworthy:

  • Getting the LABEL metadata through the Docker Registry API is really hairy as you can only get it from the v1Compatibility property of the first of the history entries. Go figure.
  • The processandmergeobjects script is found below. The only JSON merge feature that did what I wanted was in the JavaScript “lodash” package. The script also fills in the ARN values of the secrets: it is given all the secrets in the account on the command line (so it needs to be run in a context that has the secretsmanager:ListSecrets permission). Oh, and the oboe js package was a brilliant find – kudos to the authors.

#!/usr/bin/env node
// Reads JSON objects from stdin and merges the objects

var oboe = require("oboe");
var _ = require("lodash");

const objects = [];

const secretJson = process.argv.length > 2 && process.argv[2];

const secrets = secretJson && JSON.parse(secretJson);

oboe(process.stdin)
  .node("!.containerDefinitions.[0].secrets.*", o => {
    // attach the secret's ARN as valueFrom when its name matches a known secret
    const foundSecret = secrets && secrets.find(e => e.Name === o.name);
    if (foundSecret) {
      return { ...o, valueFrom: foundSecret.ARN };
    }
    return o;
  })
  .on("done", o => objects.push(o));

process.on("beforeExit", () => {
  // deep-merge all parsed objects (lodash merges arrays index-wise)
  const result = objects.reduce((agg, o) => _.merge(agg, o), {});
  process.stdout.write(JSON.stringify(result));
});