Handling outages with AWS Route 53

DNS, or the Domain Name System, is one of the basic pillars of the Internet, and we strongly believe it is a must-know for any decent software engineer. It was developed during the 80s to translate domain names into IP addresses. While DNS was initially built to solve a very simple problem, it has evolved over time and can now satisfy far more complicated needs.

AWS Route 53 is the cloud replacement for a traditional DNS service. It comes with features that make it the best solution for DNS management when you operate an AWS infrastructure, especially thanks to its tight integration with other AWS services such as ELB, S3 or Beanstalk.

How to do the setup

In this post we’ll describe how Route 53 can be used to manage service failures. Let’s say we have a service responsible for user authentication running inside a Beanstalk environment. This application can be accessed through an endpoint provided by AWS Beanstalk. We also have the domain example.com hosted inside Route 53 – in Route 53’s terminology, we have a hosted zone named example.com. This means we can create a record set named auth-clients-prod, with type A. Check Yes on Alias and then scroll through the target drop-down to find the Beanstalk environment. For the routing policy, select Failover, then Primary. For the rest of the options, see the picture below:
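If you prefer the CLI to the console, the same primary record can be created with aws route53 change-resource-record-sets and a change batch like the sketch below. The hosted zone ID, the SetIdentifier and the load balancer DNS name are placeholders for values from your own account; setting EvaluateTargetHealth to true is what lets Route 53 notice that the primary target is down:

```json
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "auth-clients-prod.example.com",
      "Type": "A",
      "SetIdentifier": "auth-clients-prod-primary",
      "Failover": "PRIMARY",
      "AliasTarget": {
        "HostedZoneId": "ZELB-HOSTED-ZONE-ID",
        "DNSName": "awseb-e-xxxx.eu-west-1.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}
```

Save it as change-batch.json and run aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://change-batch.json. Note that the HostedZoneId inside AliasTarget is the hosted zone of the load balancer, not the hosted zone of example.com.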

In this example, we used the basic REST app we built a long time ago, with a single difference: in the controller, we added a handler for the empty path:

@GetMapping("/")
public String helloMain(@RequestParam(value = "name", required = false, defaultValue = "World") String name) {
    return "Hello, " + name + ". This is the main page!";
}

To confirm the record defined above works, just run in your terminal:

curl http://auth-clients-prod.example.com

and you should see output similar to:

Hello, World. This is the main page!

Then we have to define a secondary record. For that, we’ll build a very basic HTML page, hosted as a static site inside S3: we create a bucket named auth-clients-prod.example.com and upload a very simple HTML page to it. Note that for an S3 alias record, the bucket name must match the record name exactly. All the steps to host a static site in S3 are described here.
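On the CLI, the bucket can be switched to website hosting with aws s3api put-bucket-website --bucket auth-clients-prod.example.com --website-configuration file://website.json, where website.json is a small configuration fragment like the one below (error.html is optional and just an example name):

```json
{
  "IndexDocument": { "Suffix": "index.html" },
  "ErrorDocument": { "Key": "error.html" }
}
```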

The last step is to go back to Route 53 and create the secondary record. Create a new record set named auth-clients-prod, with type A. Select Yes on Alias and, in the alias target box, enter s3-website-eu-west-1.amazonaws.com. For the routing policy, select Failover, then Secondary. Something like the screenshot below:
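Via the CLI, this secondary record corresponds to a change batch like the sketch below for aws route53 change-resource-record-sets. At the time of writing, Z1BKCTXD74EZPE is the hosted zone ID of the eu-west-1 S3 website endpoint, but double-check the value for your region in the AWS documentation:

```json
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "auth-clients-prod.example.com",
      "Type": "A",
      "SetIdentifier": "auth-clients-prod-secondary",
      "Failover": "SECONDARY",
      "AliasTarget": {
        "HostedZoneId": "Z1BKCTXD74EZPE",
        "DNSName": "s3-website-eu-west-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```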

How to test it works

Finally, let’s test this setup and confirm that Route 53 shifts traffic from the load balancer to the S3 static site if the primary service becomes unavailable:

curl http://auth-clients-prod.example.com
Hello, World. This is the main page!

Now, go to the EC2 console and terminate all the instances backing that environment. Wait a while – DNS answers are cached for the record’s TTL, so the switch is not instant – then run the command again. You should see the content of the HTML page you uploaded to the S3 bucket.

This is just a basic setup to demo how powerful Route 53 can be. Imagine that instead of the static website, the secondary could be an API Gateway endpoint that calls a Lambda function to buffer the incoming calls somewhere while your service is unavailable.

Thanks for reading this post! We are happy to announce that this was our 50th post! 🙂

Happy cloud computing to all of you!