Deploying to AWS Part V: the final punch list (load balancer, CDN, SSL)

Looking for a fresh, 2018 approach to deploying a Rails app to AWS? We've partnered with DailyDrip on a series of videos to guide you through the process. We're covering how to Dockerize a Rails app, deploy to AWS Fargate, set up logging and monitoring, configure load balancing, SSL, a CDN, and more.

In our previous videos, we dockerized our Rails app, set it up on ECS, deployed to AWS Fargate, configured logging, and monitored our app's performance. However, we still have a few remaining items.

What's left on our AWS production deployment punch list? Configuring a load balancer, SSL, and a CDN. We're using containers, which gives us a great way to scale, but we've left out some pretty important pieces of the puzzle.

In today's video, we're going to address these issues. Let's jump in.

Set up our load balancer

For a load balancer, we'll be using the AWS Application Load Balancer (ALB). It's not hard to set up, but there is a small problem with using it with our current ECS service: a load balancer can only be attached to a service when the service is first created.

Not to worry, though: almost none of the work we've done so far is lost. Creating a new service is simple at this point, so let's do that.

The first step? Create a load balancer. Head over to the AWS console and open the EC2 dashboard. Once there, click on the "Load Balancers" link in the navigation.

From here, click the "Create Load Balancer" button. On the next page, it's going to ask us to choose the type of load balancer. Choose the "Application Load Balancer".

Now, we should be on the first step of configuring our new load balancer. On this page, we can name our load balancer, select the scheme and IP address type, set up the listeners, and choose the availability zones that our load balancer can route traffic to. I'm going to name our load balancer produciton, make sure that the Scheme is set to internet-facing, and choose us-east-1a and us-east-1b for our availability zones. We can leave the rest of the settings at their default values.

You should have something like this:

create load balancer step one
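If you prefer the command line, a roughly equivalent call would look like this (a sketch only; the subnet IDs are placeholders for the ones in your VPC):

aws elbv2 create-load-balancer \
  --name produciton \
  --type application \
  --scheme internet-facing \
  --ip-address-type ipv4 \
  --subnets subnet-aaaa1111 subnet-bbbb2222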

Click "Next: Configure Security Settings" at the bottom and let's move on to the next step.

You should see a warning message that tells us we should configure a secure listener (HTTPS). We're going to come back to this step.

create load balancer step two

Click the "Next: Configure Security Groups" button to continue.

Now, we should be on step 3, where we configure the security groups. We can either create a new security group or select an existing one. Let's choose "Select an existing security group" and pick our "default" security group.

create load balancer step three

Click "Next: Configure Routing" at the bottom and let's move on to step 4.

On this step, we will be configuring the target group, which is how the load balancer knows where to route traffic. When a container is spun up, it registers itself with a target group, and the load balancer sends traffic to the containers in that target group. When a container is spun down, it deregisters itself from the target group.

For our target group, I'm going to make sure "New target group" is selected. I'm going to set the name to produciton, change the target type to IP, and set the health check path to /healthcheck (we'll come back to this later).

You should have something similar to this:

create load balancer step four
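The CLI equivalent of this step would look roughly like the following (a sketch; the VPC ID is a placeholder):

aws elbv2 create-target-group \
  --name produciton \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-dddd4444 \
  --target-type ip \
  --health-check-path /healthcheck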

Click "Next: Register Targets" and we'll move to step 5.

At this step, we could choose to add running targets to the target group, but we're going to wait since we haven't actually spun up the containers that we need. They will register themselves as they spin up.

create load balancer step five

So, let's move to the final step by clicking "Next: Review".

Now just verify that all of our changes look correct.

create load balancer step six

Once we've reviewed our configuration, we can click "Create".

Updating our app for health checks

So, I skimmed over one bit of information above and said we'd come back to it. Well, here we are. We're going to need to update our app to communicate its status so that we know if a container needs to be pulled from the load balancer. This isn't a new concept, and we're not going to go into much detail about how thorough health checks can be.

For now, we are simply going to add a gem that will inject some middleware to create a /healthcheck endpoint that responds with a 200. It's a very simple library, and it will get us by for now.

We'll need to add this line to our Gemfile, run bundle, and commit our code.

gem 'aws-healthcheck'
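If you want a quick sanity check before building a new image, you can curl the new endpoint locally and look for a 200 (a minimal example, assuming the app is running on port 3000):

curl -i http://localhost:3000/healthcheck
# the first line of the response should be HTTP/1.1 200 OK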

Once we've confirmed the new endpoint is working, we'll need to build, tag, and push a new image:

docker build -t production .
docker tag production:latest 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton
$(aws ecr get-login --no-include-email)
docker push 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton

After we have pushed up our new image, we need to make a small modification to our task definition, then we'll be ready to move on to creating our service.

Update task definition

Since we were not running our service behind a load balancer previously, we didn't need to map any ports. Now that we have a load balancer, we need to add a port mapping. Head over to our task definitions page in the ECS dashboard.

Once there, we can choose our task definition. Now, let's choose our latest revision and click "Create New Revision".

From here, scroll down to our container section and click on our "produciton" container. Once the slide-out panel is open, we can scroll down to the "Port mappings" section and click "Add port mapping". We need to set the port to 80 and make sure the protocol is set to TCP.

The setting should look like this:

port mapping - task definition

After we've set the port mapping, click "Update" then scroll down to the bottom and click "Create".
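To double-check the new revision from the terminal, we can pull the port mappings out of the task definition (a sketch, assuming the task definition family is named produciton):

aws ecs describe-task-definition \
  --task-definition produciton \
  --query 'taskDefinition.containerDefinitions[0].portMappings'
# should show containerPort 80 with protocol tcp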

Set up our service

At this point, we should have a load balancer up and running, so now we can move on to creating the service that will use it. Since we've already gone over creating a service and getting it running, we're going to skim through most of it.

To get our service set up, we'll need to head back over to the ECS dashboard and choose our cluster. Once we are on our cluster details page, we can click "Create" to create a new service (we'll leave the old single-container service there).

We're going to stick with most of the initial settings we used last time we created our service; however, we are going to change a few things. On step 1, we are going to set the "Number of tasks" field to 2. We should have something that looks similar to this:

create service step one

Click "Next step" and let's move to step 2.

On step 2, we need to verify that we choose the same VPC and one of the subnets we selected when we set up our load balancer.

Finally, we need to move down to the "Load Balancing" section and make a few changes.

First, choose a load balancer type. We created an application load balancer, so we need to choose that option here. Next, we need to choose our "produciton" load balancer from the dropdown.

So far, our page should look like this:

create service step two top

Now, we need to scroll a bit further to the "Container to load balance" section. Then, we need to click the "Add to load balancer" button. From here, we can go to the "Target group name" dropdown and choose our "produciton" target group. Everything else should auto-fill.

This is what the bottom section of our page should look like:

create service step two bottom
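For reference, the CLI version of what we're configuring in this step would look roughly like the following (a sketch; the cluster name, service name, subnet, security group, and target group ARN are placeholders to substitute with your own):

aws ecs create-service \
  --cluster <cluster-name> \
  --service-name produciton-web \
  --task-definition produciton \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-cccc3333],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=produciton,containerPort=80"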

Let's move on to the next step.

Step 3 is where we would configure auto scaling. But, we're not going to do that right now. We can always come back and configure our auto scaling policy. So, let's click "Next step" and move on to our last step, which is reviewing our configuration.

Your settings should look similar to what we have here:

load balancer details

Everything looks good here, so let's create our service.

The last step after creating our service is to make sure our new service has access to our database. We can do this exactly how we did before, by going to our database instance's security group and adding an inbound rule for our new service's security group.

security group inbound rule
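From the CLI, that rule would look something like this (a sketch; the group IDs are placeholders, and port 5432 assumes a PostgreSQL database, so adjust the port for your engine):

aws ec2 authorize-security-group-ingress \
  --group-id <database-security-group-id> \
  --protocol tcp \
  --port 5432 \
  --source-group <service-security-group-id>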

Now, if we go to our service's detail page and look at our tasks, we should see 2 running tasks. If we want to hit the service, we can go back to our load balancers dashboard and select our load balancer. There, we should see the "DNS name" for the load balancer. We can use that to access our new load-balanced service.

load balancer details

Let's verify our service is running before moving along.

web app

Great! It looks like our new service is back up and running. So, let's move along to setting up our domain.

Configure our domain

In our case, we did not purchase our domain name through AWS. This means that we are going to need to set up our registrar to use AWS's name servers, which will allow us to manage our DNS settings within Route 53.

To do this, we first need to navigate over to the Route 53 dashboard. Once there, we need to click "Hosted Zones" in the left navigation menu and click the "Create Hosted Zone" button. This should open a panel to the right, where we can add our domain name, an optional comment, and set the type. We need to make sure the type is set to "Public Hosted Zone".

create hosted zone

Once we've entered our info, we'll click "Create".

Now, we should be on the details page for our hosted zone. Let's create a new record set by clicking the "Create Record Set" button. Leave the name blank, make sure the type is set to "A - IPv4 address", and choose "Yes" for Alias. This should let us pick our produciton load balancer from the Alias Target list as the place to route traffic. We'll leave the other options set to their defaults.

route 53 create a record

Now, click "Create" and we should see a 3rd record set show up for our hosted zone.

Next, let's create a 4th record that is a CNAME from www.produciton.net with a value of produciton.net, so that the site works both with and without the www prefix.
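If you'd rather script it, the www CNAME can be created with a change batch like this (a sketch; the hosted zone ID is a placeholder):

aws route53 change-resource-record-sets \
  --hosted-zone-id <hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.produciton.net",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "produciton.net"}]
      }
    }]
  }'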

NOTE: The reason you'd want to use Route 53 to manage your DNS is that the IPs behind a load balancer are dynamic and can change. So, if we were to set an A record in Namecheap or another registrar, it might change, and then our domain name wouldn't be pointing to the correct place.

Now, we need to note the NS record that was created, take the 4 name servers, and configure our DNS in Namecheap.

namecheap dns config

NOTE: If you are using Namecheap, don't forget to click the green checkbox to save your work.

Once you change the DNS settings, it might take a while for the changes to propagate. But, once they do, we should see something similar to this:

$ dig www.produciton.net

; <<>> DiG 9.8.3-P1 <<>> www.produciton.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63457
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.produciton.net.    IN  A

;; ANSWER SECTION:
www.produciton.net. 300 IN  CNAME produciton.net.
produciton.net.   60  IN  A 52.20.51.157
produciton.net.   60  IN  A 54.165.204.118

;; Query time: 68 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Fri Dec 22 05:53:06 2017
;; MSG SIZE  rcvd: 82

Setting up SSL

Now, we'll move on to generating an SSL certificate using AWS Certificate Manager and setting it up for our load balancer. The first step in the process is heading over to the AWS console and going to the Certificate Manager dashboard. Once there, we'll need to click on "Request a Certificate", which should take us to a multistep process for setting up our certificate.

Now, we should be on step 1, where we enter the domain name we want to request our certificate for. We want to set up a wildcard certificate, so we're going to enter *.produciton.net and click "Next".

certificate setup step one

On step 2, we'll need to specify how we want to validate that we control the domain. For this, we'll choose DNS validation and click "Review".

certificate setup step 2
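Steps 1 and 2 map to a single CLI call, if you'd rather do this from the terminal (a sketch):

aws acm request-certificate \
  --domain-name '*.produciton.net' \
  --validation-method DNS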

On step 3, we'll review our settings and click "Confirm and Request".

certificate setup step 3

Then, on step 4, we should see a pending validation with some instructions. Since we are using Route 53, we can simply click "Create record in Route 53" and it will take care of creating the CNAME record for validation.

certificate setup step 4

Then, once the domain is validated, we should see this:

certificate setup validated

At this point, we should be able to hit our app at http://produciton.net, but if we try to go to https://produciton.net, we'll still get an error. This is because we haven't configured our load balancer to handle HTTPS traffic. We'll need to do two things to fix this: set up an HTTPS listener on the load balancer and add an HTTPS inbound rule to our security group.

Let's start with adding our listener. To do this, we'll need to head back over to the load balancers dashboard and select our load balancer. In the bottom pane, we can choose the "Listeners" tab and click "Add listener". For our protocol, we'll need to choose "HTTPS" and our default target group will need to be the target group we created earlier. In our case, that will be "produciton".

Since we chose HTTPS for our protocol, we're presented with a few more options. From here, we'll make sure "Choose a certificate from ACM (recommended)" is selected and pick the "*.produciton.net" certificate we just created. We'll leave the default security policy selected. Then, we'll click "Create".

load balancer listener setup
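The CLI equivalent of adding this listener would look roughly like the following (a sketch; the ARNs are placeholders, and the security policy shown is one of the standard predefined policies):

aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --ssl-policy ELBSecurityPolicy-2016-08 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>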

Now that we've added our listener, let's jump back over to the "Description" tab and click on the security group. Once we get to the security group view, let's click on the "Inbound" tab in the bottom pane and add an HTTPS rule (if there isn't one there) and click "Save".

edit inbound rules https
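From the CLI, that inbound rule would be something like this (a sketch; the group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id <load-balancer-security-group-id> \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0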

Now, we can check to see if https://produciton.net works correctly.

it's alive

Looks like we're up and running a little more securely than we were! So, let's move on to the last piece of business.

Setting up our CDN

Configuring a CDN for our Rails application is very easy. First, head over to the CloudFront dashboard in the AWS console and click on "Create Distribution". Then, we'll want to click "Get Started" under the Web delivery method.

There are a lot of configuration options that we can change on this page, but most of the defaults will work for us. So, let's start by setting the "Origin Domain Name" to our domain name. For us, we are going to set it to produciton.net. Next, we want to set an "Origin ID". I've set ours to produciton. Now, let's move down to the "Distribution Settings" section.

We are going to end up setting up a CNAME to point cdn.produciton.net to our CloudFront distribution, so we need to configure the distribution to use our SSL certificate instead of the default CloudFront certificate. For that, we need to set the "Alternate Domain Names (CNAMEs)" field to cdn.produciton.net. Next, for "SSL Certificate", we need to select "Custom SSL Certificate" and choose the "*.produciton.net" certificate we created earlier.

cloudfront distribution ssl section

Now, we can scroll to the bottom and click "Create Distribution".

Once we have created our distribution, we should see it listed in the "In Progress" state. It will take around 8 to 10 minutes for the distribution to be created. While we wait, we can set up our CNAME for cdn.produciton.net. First, let's copy the Domain Name for our distribution. Then, let's go over to the Route 53 dashboard and create our CNAME for cdn.produciton.net.

cdn CNAME route 53 setup

Once we've created our CNAME, we can make the last change. For this, we need to switch over to our terminal, open config/environments/production.rb, and uncomment and update the asset_host setting so it points to our new CloudFront distribution.

config.action_controller.asset_host = 'cdn.produciton.net'

Once we've made the change and committed our code, we can push our new image up and restart our service by clicking "Update" on our ECS service and checking "Force new deployment" on the first step. For the other steps, just click the next button and save the changes.
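The same restart can be done from the terminal with a forced deployment (a sketch; the cluster and service names are placeholders):

aws ecs update-service \
  --cluster <cluster-name> \
  --service <service-name> \
  --force-new-deployment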

At this point, we can switch back to our AWS console and check to see if our distribution status has changed to "deployed". Once that has happened, we can verify our change by going to our application and checking to see if the assets are being delivered from our new CDN.

verifying asset delivery with cdn

It looks like we are now delivering assets with our new CloudFront distribution.
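We can also check from the terminal. A rough example (the asset filename is a placeholder, so grab a real fingerprinted filename from your page source):

curl -I https://cdn.produciton.net/assets/application-<fingerprint>.css
# look for an "X-Cache: Hit from cloudfront" (or "Miss from cloudfront") response header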

NOTE: If you are running an app in production already, you'd probably not want to make this change to the application until after verifying the distribution is working correctly. In that case, after the distribution is in the "Deployed" status, we could simply try to fetch an asset from the CDN manually. Then, once you've verified that assets are being delivered correctly, we could deploy the change to our Rails application.

Summary

In this video, we set up a service with an Application Load Balancer and multiple containers running behind it. We configured Route 53 to manage the DNS entries for a domain name we purchased outside of AWS. Then, we used AWS Certificate Manager (ACM) to create a free SSL certificate and configured our load balancer to use it. After all of that, we topped things off by setting up a CDN to deliver our assets.

Resources

Our full series on deploying to AWS