How to deploy a microservice application to AWS

In this post, I will use an example to show how you can:

  • Write a small microservice with a REST API in Java, using the Dropwizard framework
  • Containerize the application using Google JIB
  • Deploy the application to AWS using Ansible
    • With a robust, clustered and self-healing setup

All the source code for this blog post is published in the following repositories:

https://github.com/crispab/aws-asg-microservice-example-using-dropwizard-and-ansible

The main repository with the microservice application and Ansible playbook.

https://github.com/crispab/ansible_role_aws_ecr_docker_push

The Ansible role to create an Amazon Elastic Container Registry (ECR) repository and push a given Docker image to that repository.

https://github.com/crispab/ansible_role_aws_cloudformation_asg

The Ansible role to deploy a dockerized application to an AWS AutoScalingGroup.

Main benefits of this setup

  • Reusable deployment code
  • Immutable infrastructure
  • Robust, self-healing infrastructure
  • Continuous Deployment friendly

The application

The purpose of the example application code is mainly to have an application that we can deploy. I am not going to focus on the application code itself, but feel free to take a look.

Try it out!

$ git clone https://github.com/crispab/aws-asg-microservice-example-using-dropwizard-and-ansible.git
$ cd aws-asg-microservice-example-using-dropwizard-and-ansible
$ ./gradlew run

The application should start and you should see something like:

INFO  [2018-12-30 10:08:16,052] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:

    GET     / (se.crisp.goodtimes.GoodTimesResource)

INFO  [2018-12-30 10:08:16,054] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@286855ea{/,null,AVAILABLE}
INFO  [2018-12-30 10:08:16,061] io.dropwizard.setup.AdminEnvironment: tasks = 

    POST    /tasks/log-level (io.dropwizard.servlets.tasks.LogConfigurationTask)
    POST    /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)

INFO  [2018-12-30 10:08:16,067] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@18d11527{/,null,AVAILABLE}
INFO  [2018-12-30 10:08:16,077] org.eclipse.jetty.server.AbstractConnector: Started application@4052c8c2{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
INFO  [2018-12-30 10:08:16,084] org.eclipse.jetty.server.AbstractConnector: Started admin@181b8c4b{HTTP/1.1,[http/1.1]}{0.0.0.0:8081}
INFO  [2018-12-30 10:08:16,084] org.eclipse.jetty.server.Server: Started @2318ms
<=========----> 75% EXECUTING [3s]
> :run

In another terminal window, test the REST endpoint with curl:

curl localhost:8080
{"id":"cb9d386c-c239-4aa1-8a7a-94b284cce84e","message":"Good times!","hostname":"max-laptop/127.0.1.1"}

The application is up and running!

You can also curl the healthcheck on the admin port:

curl localhost:8081/healthcheck
{"deadlocks":{"healthy":true}}

Containerize it!

By packaging the application as a Docker image, we lessen the need to provision the servers with the application's dependencies (installing Java, etc.), since everything needed to run the application is inside the Docker image. This is referred to as “dockerization”, or “containerization”. All you need on the servers is Docker, and installing and running the application comes down to “docker run [image]”.

I’ve recently started using Google JIB to containerize Java applications. I think the main benefit is that it creates a layered Docker image, with a separate layer for your classes and another layer for your dependencies, so that small code changes result in small image changes. This gives us a faster build and push. Another benefit is that you don’t need the Docker daemon. There are plugins for both Gradle and Maven.

Containerize the app by:

./gradlew jibDockerBuild

This will build a Docker image named crisp/goodtimes:latest

You can list the image by:

$ docker images | grep goodtimes
crisp/goodtimes	latest	73bf1c3dbae9        49 years ago        135MB

And you can run the image by:

$ docker run crisp/goodtimes:latest

And the app should start!

The configuration for Google JIB can be found in build.gradle (the build configuration file for the Gradle project).

Publish the image

Just like you would publish a JAR, you need to publish your Docker image. In this case, I’ve chosen to use Amazon Elastic Container Registry (ECR), since this post is about using AWS. I wrote a simple, reusable Ansible role which will create an ECR repository for your application and push your image to it. See: https://github.com/crispab/ansible_role_aws_ecr_docker_push
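
The role essentially does three things: it makes sure an ECR repository exists, logs Docker in to that registry, and then tags and pushes the image. A minimal sketch of what such tasks could look like (illustrative only, not the actual role code; module parameters vary between Ansible versions):

- name: Create ECR repository
  ecs_ecr:
    name: crisp/goodtimes
    region: "{{ region }}"
  register: ecr

- name: Login to ECR
  # get-login prints a docker login command; $(...) runs it
  shell: "$(aws ecr get-login --no-include-email --region {{ region }})"

- name: Tag and push image
  docker_image:
    name: crisp/goodtimes
    tag: latest
    repository: "{{ ecr.repository.repositoryUri }}"  # e.g. 123456789012.dkr.ecr.eu-central-1.amazonaws.com/crisp/goodtimes
    push: yes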

Why do we need to publish the image? Why can’t we just upload it to the servers? Because we are not going to use long-lived servers (google “pets vs cattle”): we are going to replace our servers on every deploy, and on top of that we are going to use an AutoScalingGroup, which can replace our servers at any time. So deployment will not only happen when we trigger it; it will also happen on servers whose IP addresses we do not know in advance.

Deploy to AWS

We will deploy to AWS using another reusable Ansible role which I wrote just for this blog post. It’s based on a pattern I’ve used for mission-critical production applications, but it’s written fresh for this post. See: https://github.com/crispab/ansible_role_aws_cloudformation_asg

The Ansible role betrcode.aws_cloudformation_asg will deploy any Dockerized application to an AWS AutoScalingGroup, using CloudFormation. When the role is re-run, it will deploy the application again (perhaps a new version of it) to completely new infrastructure, and then delete the old infrastructure. The infrastructure includes an Application Load Balancer with a Listener and a Target Group, an AutoScalingGroup with a LaunchConfiguration (which launches the instances), and Security Groups.
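
To give a feel for what gets created, here is a heavily trimmed sketch of such a CloudFormation template (illustrative only: the AMI, account id and other values are placeholders, the real role parameterizes all of this, and among other things an instance security group is omitted):

AWSTemplateFormatVersion: "2010-09-09"
Resources:

  LoadBalancerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP in to the load balancer
      VpcId: vpc-abc12345
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets: [subnet-123aaabc, subnet-456dddef]
      SecurityGroups: [!Ref LoadBalancerSecurityGroup]

  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: vpc-abc12345
      Port: 8080                      # the application port on the instances
      Protocol: HTTP
      HealthCheckPort: "8081"         # Dropwizard admin port
      HealthCheckPath: /healthcheck

  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup

  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0  # an AMI with Docker available
      InstanceType: t3.micro
      IamInstanceProfile: YourEc2Role # must allow pulling from ECR
      UserData:                       # pull and run the published image on boot
        Fn::Base64: |
          #!/bin/bash
          $(aws ecr get-login --no-include-email --region eu-central-1)
          docker run -d -p 8080:8080 -p 8081:8081 123456789012.dkr.ecr.eu-central-1.amazonaws.com/crisp/goodtimes:latest

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref LaunchConfiguration
      MinSize: "2"                    # two instances give the clustered, self-healing setup
      MaxSize: "2"
      VPCZoneIdentifier: [subnet-123aaabc, subnet-456dddef]
      TargetGroupARNs: [!Ref TargetGroup]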

The main benefit of this is that every install is done on new machines, with a potentially new AMI (with the latest security patches). You will never have to do a special operation to replace your infrastructure, because you do it on every deploy. The only thing we keep for a long time, and which mutates in this role, is the Route53 entry that points to the LoadBalancer we want to use. It is important that your clients refresh the DNS often enough. If you wanted to, you could modify this role to keep the LoadBalancer as a pet and not replace it every time.
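
The DNS switch at the end of the role boils down to a single Route53 upsert. A sketch of what that task could look like with Ansible’s route53 module (illustrative; the record name, the short TTL and the new_lb_dns_name variable are my assumptions, not taken from the actual role):

- name: Update DNS to point to new LB
  route53:
    state: present
    zone: "{{ route53_zone }}"
    record: "goodtimes.{{ route53_zone }}"
    type: CNAME
    ttl: 60                         # short TTL so clients pick up the new LB quickly
    value: "{{ new_lb_dns_name }}"  # DNS name of the load balancer in the newly created stack
    overwrite: yes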

The Ansible roles used here are both published as Open Source (MIT license) to Ansible Galaxy, so feel free to use/copy/modify them for your own needs!

Bringing it together in an Ansible Playbook

In the playbook (deploy.yml), I use the two roles described above to publish the image and deploy it to an AWS ASG.

---

- hosts: localhost
  connection: local
  gather_facts: yes  # needed for ansible_date_time
  roles:
    - role: betrcode.aws_ecr_docker_push
      source_image_tag: latest
    - role: betrcode.aws_cloudformation_asg
      docker_image: "{{ full_destination_image }}" # Coming from previous role

To be able to run this playbook, you need an AWS account and permissions to use all the services needed.

WARNING! This will create (and attempt to delete) AWS resources on your AWS account. This will cost you money to try! You also risk that the role deletes the wrong things! (It shouldn’t, but it could.) Use at your own risk!

Change directory into the “deployment” folder and create a file called extra-vars.yml, populated like this:

---

vpc: vpc-abc12345     # the AWS VPC identifier (your vpc)
region: eu-central-1  # the AWS region to deploy to (example: eu-west-1)
subnets: [subnet-123aaabc, subnet-456dddef] # the subnets to deploy to (needs to exist already)
aws_key: yourkey-amazon   # the instance key which can be used to log in to the created instances (needs to exist already)
route53_zone: yourdomain.com  # the Route 53 zone where you want to create your DNS entry. (needs to exist already)
instance_profile: YourEc2Role # the name of the instance profile (or IAM Role) that the created instances will get. (needs to exist already)

Download the required roles:

ansible-galaxy install -r requirements.yml
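
The requirements.yml file simply lists the two roles so that ansible-galaxy knows what to fetch; roughly like this (the repository contains the real file):

---
- src: betrcode.aws_ecr_docker_push
- src: betrcode.aws_cloudformation_asg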

And then run:

$ ansible-playbook deploy.yml --extra-vars @extra-vars.yml
PLAY [localhost] *****************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [localhost]

TASK [betrcode.aws_ecr_docker_push : Create ECR repository] **********************************************************************************
ok: [localhost]

TASK [betrcode.aws_ecr_docker_push : Debug ECR repository] ***********************************************************************************
skipping: [localhost]

TASK [betrcode.aws_ecr_docker_push : Set full_destination_image] *****************************************************************************
ok: [localhost]

TASK [betrcode.aws_ecr_docker_push : Login to ECR] *******************************************************************************************
changed: [localhost]

TASK [betrcode.aws_ecr_docker_push : Tag and push image] *************************************************************************************
changed: [localhost]

TASK [betrcode.aws_cloudformation_asg : Create cloudformation stack: stack-goodtimes-1x0-1546166862] *****************************************
changed: [localhost]

TASK [betrcode.aws_cloudformation_asg : Get facts about new LB] ******************************************************************************
ok: [localhost]

TASK [betrcode.aws_cloudformation_asg : set_fact] ********************************************************************************************
ok: [localhost]

TASK [betrcode.aws_cloudformation_asg : Update DNS to point to new LB] ***********************************************************************
changed: [localhost]

TASK [betrcode.aws_cloudformation_asg : debug] ***********************************************************************************************
ok: [localhost] => {
    "msg": "You can now try: http://goodtimes.yourdomain.com"
}

TASK [betrcode.aws_cloudformation_asg : Find all stacks] *************************************************************************************
ok: [localhost]

TASK [betrcode.aws_cloudformation_asg : Delete old stacks] ***********************************************************************************
skipping: [localhost] => (item=stack-goodtimes-1x0-1546166862) 

PLAY RECAP ***********************************************************************************************************************************
localhost                  : ok=11   changed=4    unreachable=0    failed=0   

This will take a few minutes to run, and if successful you should be able to visit http://goodtimes.yourdomain.com and get a response. 🙂

To delete everything the role created (except the Route53 entry), run the same playbook but with the “cleanup” tag:

$ ansible-playbook deploy.yml --extra-vars @extra-vars.yml --tags cleanup
PLAY [localhost] *****************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [localhost]

TASK [betrcode.aws_cloudformation_asg : Find all stacks] *************************************************************************************
ok: [localhost]

TASK [betrcode.aws_cloudformation_asg : Delete old stacks] ***********************************************************************************
changed: [localhost] => (item=stack-goodtimes-1x0-1546166862)

PLAY RECAP ***********************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0   

You mentioned Continuous Delivery?

As you have seen, we have all the necessary parts of a continuous delivery pipeline, except of course the extensive tests needed. Would you like to set up containerized system tests in your pipeline? See my three-part article series about system testing a Java application: https://blog.crisp.se/2018/05/26/maxwenzin/dockerized-testing-vs-end-to-end-testing

The main parts of a Continuous Delivery pipeline would be:

  1. Test & Build: ./gradlew build
  2. Build Docker Image: ./gradlew jibDockerBuild
  3. Publish & Deploy: ansible-playbook deploy.yml --extra-vars @extra-vars.yml

This could easily be put into a Jenkinsfile or similar.
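
A Jenkinsfile is written in Groovy, but just to illustrate the shape of the pipeline, here is the same three-stage flow sketched as a hypothetical GitLab CI configuration (not part of the repository):

stages: [build, containerize, deploy]

build:
  stage: build
  script: ./gradlew build             # test & build

containerize:
  stage: containerize
  script: ./gradlew jibDockerBuild    # build the Docker image

deploy:
  stage: deploy
  script: ansible-playbook deploy.yml --extra-vars @extra-vars.yml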
