In Bootstrapping AWS we looked at what’s required to kick off a brand new installation with your latest build.  But it’s two weeks later now, and you’re about to release version 2 of the application.  Using the CloudFormation script we created first time around, it’s actually quite easy.

In the first build script, the CloudFormation metadata referenced the website source via the build number parameter: {“Ref” : “BuildNumber”}.

"Parameters" : {
  "BuildNumber" : {
  "Type" : "Number"

So the process is as follows.

  1. Re-execute your CloudFormation script as a stack update
  2. Bump the BuildNumber parameter from 1 to 2
  3. Terminate the running instances using PowerShell. OK, the command below is a little brutal, but it does achieve the desired result:
    Get-ASAutoScalingInstance | Where-Object {$_.AutoScalingGroupName -eq 'Blog-WebServerScalingGroup-AHAW1FUMED64'} | ForEach-Object {Stop-EC2Instance -instance $_.InstanceId -Terminate}
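Steps 1 and 2 can also be done from PowerShell. A sketch, assuming the stack is named “Blog” (the stack name and the use of -UsePreviousTemplate here are illustrative):

```powershell
# Re-run the stack as an update, reusing the existing template
# but passing the bumped build number
Update-CFNStack -StackName "Blog" `
    -UsePreviousTemplate $true `
    -Parameter @{ ParameterKey = "BuildNumber"; ParameterValue = "2" }
```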

What’s just happened is that we’ve updated the CloudFormation metadata to reference the new source zip file, and then terminated all the instances in our scaling group.  Auto Scaling kicks in and ensures the minimum number of instances are running, starting up two new instances with the new BuildNumber and the release 2 source.  You could, of course, terminate these one at a time to maintain uptime, but this is quicker to demonstrate.

The net result is that we end up with two brand new instances kicked off using release 2.  But if you’re running a lot of servers, this can be a little time consuming, and there’s potentially a lot of extra hourly dollars you’re being charged for.  (Bring on micro charging!)  There’s a better way already baked into the EC2 toolset on the base AMIs: cfn-hup.  cfn-hup is a service that polls the CloudFormation metadata for changes, and executes actions when a change is detected.  Let’s take our existing update process and roll out a better method using cfn-hup via CloudFormation.

So to start, we need to configure cfn-hup on our instances.  In line with the objectives of bootstrapping, we’ll do it via a launch configuration and CloudFormation, so that we can roll it out easily.

Returning to our original CloudFormation script and the instance metadata, we first need to ensure that cfn-hup has the configuration we require when we start it.  cfn-hup looks in the c:\cfn folder for a file called cfn-hup.conf.  That file tells the service where to find the metadata, and how often to look at it.  We need to create that file.  So, returning to our CloudFormation script, we add under AWS::CloudFormation::Init -> “config” a new “files” element like this:

"files" : {
  "c:\\cfn\\cfn-hup.conf" : {
    "content" : { "Fn::Join" : ["", [
      "stack=", { "Ref" : "AWS::StackId" }, "\n",
      "region=", { "Ref" : "AWS::Region" }, "\n",
  "c:\\cfn\\hooks.d\\cfn-auto-reloader.conf" : {
    "content": { "Fn::Join" : ["", [
      "action=cfn-init.exe -v -s ", { "Ref" : "AWS::StackId" },
                         " -r WebServerLaunchConfiguration",
                         " --region ", { "Ref" : "AWS::Region" }, "\n"

The first part creates the cfn-hup configuration.  Using CloudFormation pseudo parameters, we’re able to pass in the region and the stack ID to tell cfn-hup where to look.  And we use the interval to tell it how frequently (in minutes) to check for updates.
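Once cfn-init writes it out, the resulting c:\cfn\cfn-hup.conf on the instance looks something like this (the stack ID and region values are illustrative):

```ini
[main]
stack=arn:aws:cloudformation:ap-southeast-2:123456789012:stack/Blog/example-guid
region=ap-southeast-2
interval=1
```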

The next file we add is a hook into any changes identified by cfn-hup.  We’re going to trigger an action after any detected update in elements below the “path” – i.e. Resources.WebServerLaunchConfiguration.Metadata.  Note that BuildNumber sits below Metadata in the original CloudFormation script.  If you look closely, you’ll see the action we fire is one of the same actions we kicked off when we first bootstrapped the instance – i.e. cfn-init.  This means we get to leverage all the cfn-init goodness we used earlier, plus any additional magic in the future.
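Rendered onto the instance, the hook file ends up looking something like this (stack ID and region are illustrative):

```ini
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.WebServerLaunchConfiguration.Metadata
action=cfn-init.exe -v -s arn:aws:cloudformation:ap-southeast-2:123456789012:stack/Blog/example-guid -r WebServerLaunchConfiguration --region ap-southeast-2
```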

A key thing to note here is that the script in UserData will not be re-executed.  This is important because we don’t want to configure the website all over again; UserData only executes once per instance by default.
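For reference, the one-shot bootstrap in the launch configuration’s UserData would have looked something along these lines – a sketch, not the exact script from the original post:

```json
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
  "<script>\n",
  "cfn-init.exe -v -s ", { "Ref" : "AWS::StackId" },
  " -r WebServerLaunchConfiguration",
  " --region ", { "Ref" : "AWS::Region" }, "\n",
  "</script>"
]]}}
```

The hook simply re-runs that same cfn-init call whenever the metadata changes, without re-running UserData itself.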

If you were to run this update into CloudFormation now, it’s still not going to do anything, though.  We need to ensure that the cfn-hup service is running.  We do that by adding a “services” section under “config”, at the same level as the “files” element we previously added:

"services" : {
  "windows" : {
    "cfn-hup" : {
      "enabled" : "true",
      "ensureRunning" : "true",
      "files" : ["c:\\cfn\\cfn-hup.conf", "c:\\cfn\\hooks.d\\cfn-auto-reloader.conf"]

The above snippet declares that the cfn-hup service is to be enabled and running.  We also declare that the service depends on, and will monitor, the “files” listed; any change to those files will initiate a cfn-hup restart.  Now we are ready to roll it out.

We perform the rollout by executing the same steps as earlier: perform a stack update and terminate the instances.  The build number can be left the same.  Once you’ve done this, Auto Scaling will bring up new instances with the new launch configuration; they will monitor the metadata and update themselves without further manual intervention.

So next fortnight, when you roll out version 3, you only need to perform an update by re-executing the CloudFormation script with the new build number.  The running instances (however many there may be) will perform an in-place upgrade on themselves, without needing to terminate or create new instances.
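In PowerShell terms, the whole fortnightly release becomes a single call (the stack name is illustrative):

```powershell
# No terminations required - cfn-hup on each instance picks up the change
Update-CFNStack -StackName "Blog" `
    -UsePreviousTemplate $true `
    -Parameter @{ ParameterKey = "BuildNumber"; ParameterValue = "3" }
```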


Join the conversation! 6 Comments

  1. Thanks for this Peter, powerful stuff. I will get back soon with a response to my Stack Overflow question.

  2. Thanks! This was more helpful than the AWS docs and I was able to get up and running with this pretty quickly.

  3. Where can I download your CloudFormation scripts from?

  4. When doing the in-place upgrade, the instance is still in the ELB pool, so it will still get requests from users. Would that be a problem? I guess the question is how to schedule the upgrade so the service is cut over to the new version seamlessly.
