Supercharge your CloudFormation templates with Jinja2 Templating Engine

If you are working in an AWS public cloud environment, chances are that you have authored a number of CloudFormation templates over the years to define your infrastructure as code. As powerful as this tool is, it has a glaring shortcoming: the templates are fairly static, having no inline template expansion feature (think GCP Cloud Deployment Manager). Due to this limitation, many teams end up copy-pasting similar templates to cater for minor differences like environment (dev, test, prod, etc.) and resource names (S3 bucket names, etc.).
Enter Jinja2, a modern and powerful templating language for Python. In this blog post I will demonstrate a way to use Jinja2 to enable dynamic expressions and perform variable substitution in your CloudFormation templates.
First, let's get the prerequisites out of the way. To use Jinja2, we need to install Python, pip and, of course, Jinja2.
Install Python
[sourcecode language="bash" wraplines="false" collapse="false"]
sudo yum install python
Install pip
[sourcecode language="bash" wraplines="false" collapse="false"]
curl "" -o ""
sudo python
Install Jinja2
[sourcecode language="bash" wraplines="false" collapse="false"]
pip install Jinja2
To invoke Jinja2, we will use a simple Python wrapper script.
[sourcecode language="bash" wraplines="false" collapse="false"]
Copy the following contents to the file
[sourcecode language="python" wraplines="false" collapse="false"]
import os
import sys
import jinja2

# Read the template from stdin, expose the environment variables as 'env',
# and write the rendered result to stdout.
print(jinja2.Template(sys.stdin.read()).render(env=os.environ))
Save and exit the editor
Now let’s create a simple CloudFormation template and transform it through Jinja2:
[sourcecode language="bash" wraplines="false" collapse="false"]
vi template1.yaml
Copy the following contents to the file template1.yaml
[sourcecode language="yaml" wraplines="false" collapse="false"]
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for {{ env['ENVIRONMENT_NAME'] }}
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: InstallFiles-{{ env['AWS_ACCOUNT_NUMBER'] }}
As you can see, it's the most basic CloudFormation template, with one exception: we are using Jinja2 variables to substitute in environment variables. Now let's run this template through Jinja2.
Let's first export the environment variables:
[sourcecode language="bash" wraplines="false" collapse="false"]
export ENVIRONMENT_NAME=Development
export AWS_ACCOUNT_NUMBER=1234567890
Run the following command:
[sourcecode language="bash" wraplines="false" collapse="false"]
cat template1.yaml | python
The result of this command will be as follows:
[sourcecode language="yaml" wraplines="false" collapse="false"]
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for Development
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: InstallFiles-1234567890
As you can see, Jinja2 has expanded the variable names in the template. This provides us with a powerful mechanism to insert environment variables into our CloudFormation templates.
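The same substitution can be exercised directly from Python; this is a minimal sketch using an inline template string rather than a file:

```python
import os
import jinja2

# Simulate the exported environment variable for this demo.
os.environ["ENVIRONMENT_NAME"] = "Development"

# Render the same expression used in the template above.
template = jinja2.Template("Simple S3 bucket for {{ env['ENVIRONMENT_NAME'] }}")
print(template.render(env=os.environ))  # Simple S3 bucket for Development
```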
Let's take another example: what if we wanted to create multiple S3 buckets in an automated manner? Generally in such a case we would have to copy-paste the S3 resource block. With Jinja2, this becomes a matter of adding a simple "for" loop:
[sourcecode language="bash" wraplines="false" collapse="false"]
vi template2.yaml
Copy the following contents to the file template2.yaml
[sourcecode language="yaml" wraplines="false" collapse="false"]
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for {{ env['ENVIRONMENT_NAME'] }}
Resources:
{% for i in range(1,3) %}
  S3Bucket{{ i }}:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: InstallFiles-{{ env['AWS_ACCOUNT_NUMBER'] }}-{{ i }}
{% endfor %}
Run the following command:
[sourcecode language="bash" wraplines="false" collapse="false"]
cat template2.yaml | python
The result of this command will be as follows:
[sourcecode language="yaml" wraplines="false" collapse="false"]
AWSTemplateFormatVersion: '2010-09-09'
Description: Simple S3 bucket for Development
Resources:
  S3Bucket1:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: InstallFiles-1234567890-1
  S3Bucket2:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: InstallFiles-1234567890-2
As you can see, the resulting template has two S3 resource blocks. The output of the command can be redirected to another template file to be used later in stack creation.
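The loop expansion can likewise be tried in isolation; this sketch renders just the generated bucket names (the `account` variable here stands in for the environment lookup):

```python
import jinja2

# range(1,3) yields 1 and 2, producing two bucket name lines.
template = jinja2.Template(
    "{% for i in range(1,3) %}"
    "InstallFiles-{{ account }}-{{ i }}\n"
    "{% endfor %}"
)
print(template.render(account="1234567890"))
```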
I am sure you will appreciate the possibilities Jinja2 brings to enhance your CloudFormation templates. Do note that I have barely scratched the surface of this topic; I highly recommend having a look at the Jinja2 Template Designer Documentation to explore more possibilities. If you are using Ansible, note that Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. In this case you can get rid of the Python wrapper script mentioned in this article and use Ansible directly for template expansion.
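As a sketch of the Ansible route (the file names and variable below are illustrative, not from this article), the built-in template module expands Jinja2 expressions, with template variables supplied directly rather than through an `env` dictionary:

```yaml
- name: Expand the CloudFormation template with Jinja2
  template:
    src: template2.yaml.j2
    dest: /tmp/template2-expanded.yaml
  vars:
    aws_account_number: "1234567890"
```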

Enable Cost Allocation Tags to differentiate project based billing

When running in an AWS public cloud environment, there is often a need to dissect the billing across different projects for accounting and accrual purposes. AWS provides a mechanism to aggregate related platform costs using a feature known as Cost Allocation Tags. With this feature you can designate Tags on your AWS resources to track costs at a detailed level.
From the AWS Documentation:

Activating tags for cost allocation tells AWS that the associated cost data for these tags should be made available throughout the billing pipeline. Once activated, cost allocation tags can be used as a dimension of grouping and filtering in Cost Explorer, as well as for refining AWS budget criteria.

For example, to view cost allocation based on various project resources in your AWS account, you can tag these resources (EC2 instances, S3 buckets, etc.) with a tag named "Project". Next, the Project tag can be activated as a Cost Allocation Tag. From then on, AWS will include this tag in the associated cost data to allow filtering based on the tag in Cost Explorer reports.
Let’s walk through the steps of setting this up:

  1. Log in to your AWS Management Console
  2. Tag all the resources with a Tag Key of Project and a Value as per your various projects. Note that this may not be possible for every resource type.
  3. Navigate to My Billing Dashboard > Cost Allocation Tags
  4. Under User-Defined Cost Allocation Tags section, select the tag “Project” and click the “Activate” button.

Once a tag is activated it will take around 24 hours for billing data to appear under this tag.
Next, to view the costs under a project, do the following:

  1. Log in to your AWS Management Console
  2. Navigate to My Billing Dashboard > Cost Explorer
  3. Click “Launch Cost Explorer”
  4. On the right side of the page, under the Filters section, click the Tag filter, select the Project tag, then select a Tag Value to filter costs by project

As you can see from the screenshot below, we can now see exactly how much each project is costing per day (or month, if selected).
Some important points to consider:

  • Cost allocation tagging is "managed" via the master billing account at the root of the AWS organization. If your account is part of an organization, you will have to contact this account's administrator to enable the cost allocation tags.
  • The error message in the previous screenshot will always appear in accounts that have not been allocated the billing management permission.
  • Some charges, notably bandwidth charges, cannot be tagged and thus cannot be accounted for under cost allocation tagging. A common pattern in such cases is to calculate each project's percentage of the tagged cost and apportion the unaccounted charges based on this percentage.
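That proportional split can be sketched with a few lines of Python (the project names and figures are invented for illustration):

```python
# Tagged monthly cost per project, plus charges that could not be tagged.
tagged = {"ProjectA": 600.0, "ProjectB": 300.0, "ProjectC": 100.0}
untagged = 50.0  # e.g. bandwidth charges

# Apportion the untagged amount in proportion to each project's tagged cost.
total = sum(tagged.values())
allocated = {name: cost + untagged * (cost / total) for name, cost in tagged.items()}
print(allocated)  # {'ProjectA': 630.0, 'ProjectB': 315.0, 'ProjectC': 105.0}
```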


Watching the watcher – Monitoring the EC2Config Service

The EC2Config service is a nifty Windows service provided by Amazon that performs many important chores on instances based on AWS Windows Server 2003-2012 R2 AMIs. These tasks include (but are not limited to):

  • Performing initial start-up tasks when the instance is first started (e.g. executing the user data, setting a random Administrator account password, etc.)
  • Displaying instance information on the desktop wallpaper
  • Running Sysprep and shutting down the instance

More details about this service can be found on Amazon's webpage.
Another important aspect of the EC2Config service is that it can be configured to send performance metrics to CloudWatch; examples of these metrics are Available Memory, Free Disk Space and Page File Usage, to name a few. The problem we faced is that sometimes this service would either stop or fail to start due to a misconfigured configuration file. Having this service running all the time was critical for monitoring and compliance reasons.

To make sure that this service was running and publishing metrics to CloudWatch, we came up with a simple solution. We used a Python script, written as a Lambda function, to query Windows performance metrics for the last 10 minutes (the function is scheduled to run every 30 minutes, configurable through a Lambda Trigger) and, if a metric is missing, send an alert.
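The guard against false alarms for freshly launched instances can be sketched as a small helper (the function name and figures here are mine, not taken from the Lambda itself):

```python
from datetime import datetime, timedelta, timezone

def should_check(launch_time, period_minutes=10, now=None):
    """Return True when the instance has been up for at least the query
    window, so a missing datapoint is meaningful rather than a false alarm."""
    now = now or datetime.now(timezone.utc)
    uptime_minutes = (now - launch_time).total_seconds() / 60
    return uptime_minutes >= period_minutes

# An instance launched 3 minutes ago is skipped; one up for an hour is checked.
now = datetime.now(timezone.utc)
print(should_check(now - timedelta(minutes=3), now=now))  # False
print(should_check(now - timedelta(hours=1), now=now))    # True
```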

The following code was written for this purpose. The salient features of the code are:

  1. The function lambda_handler is invoked by Lambda
  2. Variables are initialised; currently these are hard-coded in the function, but they can also be parametrized using the Environment Variables feature of a Lambda function
  3. EC2 and CloudWatch objects are initialised
  4. Running instances are retrieved based on a "running" state filter
  5. If an instance has been running for less than the requested period, it is ignored (this avoids false alarms for instances started in the last few minutes)
  6. The CloudWatch metric 'Available Memory' for the instance is retrieved for the last 10 minutes. This can be substituted with any other metric name. Please also take note of the Dimension of the metric
  7. The datapoint result is inspected; if no datapoint is found, the instance is added to a list (later used for the alert)
  8. If the list has any entries, an alert is sent via an SNS topic

[sourcecode language="python" wraplines="false" collapse="false"]
# AWS Lambda Python script to query Cloudwatch metrics for all running
# EC2 instances and, if a metric is unavailable, send a message through
# an SNS topic to check the EC2Config service
# Required IAM permissions:
# ec2:DescribeInstances
# sns:Publish
# cloudwatch:GetMetricStatistics
# Setup:
# Check these in the code (Search *1 and *2):
# *1: Confirm details of the parameters
# *2: Confirm details of the dimensions
from __future__ import print_function
import boto3
from datetime import datetime, timedelta

def check_tag_present(instance, tag_name, tag_value):
    for tag in instance.tags:
        if tag['Key'] == tag_name and tag['Value'] == tag_value:
            return True
    return False

def send_alert(list_instances, topic_arn):
    if topic_arn == "":
        print("*** SNS topic ARN not set, skipping alert ***")
        return
    instances = ""
    for s in list_instances:
        instances += s + "\n"
    subject = "Warning: Missing CloudWatch metric data"
    message = ("Warning: Missing CloudWatch metric data for the following "
               "instance id(s): \n\n" + instances +
               "\n\nCheck the EC2Config service is running and the config file in "
               "C:\\Program Files\\Amazon\\Ec2ConfigService\\Settings is correct.")
    client = boto3.client('sns')
    client.publish(TargetArn=topic_arn, Message=message, Subject=subject)
    print("*** Sending alert ***")

def lambda_handler(event, context):
    # *1-Provide the following information
    _instancetagname = 'Environment'  # Main filter Tag key
    _instancetagvalue = 'Prod'        # Main filter Tag value
    _period = 10                      # Period in minutes
    _namespace = 'WindowsPlatform'    # Namespace of metric
    _metricname = 'Available Memory'  # Metric name
    _unit = 'Megabytes'               # Unit
    _topicarn = ''                    # SNS Topic ARN to write message to
    _region = "ap-southeast-2"        # Region

    ec2 = boto3.resource('ec2', _region)
    cw = boto3.client('cloudwatch', _region)
    filters = [{'Name': 'instance-state-name', 'Values': ['running']}]
    instances = ec2.instances.filter(Filters=filters)

    print('Reading Cloud watch metric for last %s min\n' % (_period))
    start_time = datetime.utcnow() - timedelta(minutes=_period)
    end_time = datetime.utcnow()

    list_instances = []
    print("List of running instances:")
    for instance in instances:
        if not check_tag_present(instance, _instancetagname, _instancetagvalue):
            continue  # Tag/Value missing, ignoring instance
        instance_name = [tag['Value'] for tag in instance.tags if tag['Key'] == 'Name'][0]
        print("Checking", instance_name, "(", instance.id, ")")
        # Ignore instances that have been up for less than the query period
        # to avoid false alarms for recently started instances
        new_dt = datetime.utcnow() - instance.launch_time.replace(tzinfo=None)
        minutessince = int(new_dt.total_seconds() / 60)
        if minutessince < _period:
            print("Not looking for data on this instance as uptime is less than requested period.\n")
            continue
        # *2-Confirm details of the dimensions
        metrics = cw.get_metric_statistics(
            Namespace=_namespace,
            MetricName=_metricname,
            Dimensions=[{'Name': 'InstanceId', 'Value': instance.id}],
            StartTime=start_time,
            EndTime=end_time,
            Period=300,
            Statistics=['Maximum'],
            Unit=_unit)
        datapoints = metrics['Datapoints']
        if len(datapoints) == 0:  # No data point found
            print("Cloudwatch has no metrics for", _metricname, "for instance id:", instance.id)
            list_instances.append(instance_name + " (" + instance.id + ")")
        else:
            for datapoint in datapoints:
                print("Datapoint Data:", datapoint['Maximum'], "\nTimeStamp:", datapoint['Timestamp'], "\n")
        print("=================================================\n")
    if len(list_instances) > 0:
        send_alert(list_instances, _topicarn)
Please note: the function needs some permissions to execute, so the following policy should be attached to the Lambda function's role:
[sourcecode language="javascript" wraplines="false" collapse="false"]
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1493179460000",
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": ["*"]
    },
    {
      "Sid": "Stmt1493179541000",
      "Effect": "Allow",
      "Action": ["sns:Publish"],
      "Resource": ["*"]
    },
    {
      "Sid": "Stmt1493179652000",
      "Effect": "Allow",
      "Action": ["cloudwatch:GetMetricStatistics"],
      "Resource": ["*"]
    }
  ]
}
