Enable Cost Allocation Tags to differentiate project-based billing

When running in an AWS public cloud environment, there is often a need to break down billing across different projects for accounting and accrual purposes. AWS provides a mechanism to aggregate related platform costs using a feature known as Cost Allocation Tags. With this feature you can designate tags on your AWS resources to track costs at a detailed level.

From the AWS Documentation:

Activating tags for cost allocation tells AWS that the associated cost data for these tags should be made available throughout the billing pipeline. Once activated, cost allocation tags can be used as a dimension of grouping and filtering in Cost Explorer, as well as for refining AWS budget criteria.


For example, to view cost allocation based on various project resources in your AWS account, you can tag these resources (EC2 instances, S3 buckets, etc.) with a tag named “Project”. Next, the Project tag can be activated as a Cost Allocation Tag. From then on, AWS will include this tag in the associated cost data, allowing you to filter on the tag in Cost Explorer reports.


Let’s walk through the steps of setting this up:

  1. Log in to your AWS Management Console
  2. Tag your resources with the tag key Project and a value corresponding to each of your projects. Understand that this may not be possible for every resource type.
  3. Navigate to My Billing Dashboard > Cost Allocation Tags
  4. Under the User-Defined Cost Allocation Tags section, select the “Project” tag and click the “Activate” button.
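For resources that support tagging, steps 2 and 4 can also be scripted with boto3. The sketch below is illustrative only: the instance ID and project name are made up, and the activation step assumes the Cost Explorer `UpdateCostAllocationTagsStatus` API, which is available in recent boto3 versions (older setups must activate the tag in the console as described above). It builds the request payloads and shows, commented out, the calls you would make with credentials configured:

```python
def tagging_requests(instance_ids, project):
    """Build request payloads for tagging EC2 instances with a Project
    tag and activating Project as a cost allocation tag."""
    create_tags = {  # payload for ec2.create_tags(**create_tags)
        'Resources': instance_ids,
        'Tags': [{'Key': 'Project', 'Value': project}],
    }
    activate_tag = {  # payload for ce.update_cost_allocation_tags_status(**activate_tag)
        'CostAllocationTagsStatus': [{'TagKey': 'Project', 'Status': 'Active'}],
    }
    return create_tags, activate_tag

# Hypothetical instance ID and project name:
create_tags, activate_tag = tagging_requests(['i-0123456789abcdef0'], 'Alpha')

# With credentials configured (activation needs billing permissions in the
# master/payer account), the calls would be:
#   import boto3
#   boto3.client('ec2').create_tags(**create_tags)
#   boto3.client('ce').update_cost_allocation_tags_status(**activate_tag)
```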



Once a tag is activated, it takes around 24 hours for billing data to appear under it.


Next, to view the costs under a project, do the following:

  1. Log in to your AWS Management Console
  2. Navigate to My Billing Dashboard > Cost Explorer
  3. Click “Launch Cost Explorer”
  4. On the right side of the page, under the Filters section, click the Tag filter and select the Project tag, then a tag value, to filter costs by project
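The same filter can be applied programmatically via the Cost Explorer API. A minimal sketch (the project name “Alpha” is a placeholder) that builds a `GetCostAndUsage` request for daily unblended cost, filtered by the Project cost allocation tag:

```python
from datetime import date, timedelta

def project_cost_query(project, days=30):
    """Build a Cost Explorer GetCostAndUsage request for daily
    unblended cost, filtered by the Project cost allocation tag."""
    end = date.today()
    start = end - timedelta(days=days)
    return {
        'TimePeriod': {'Start': start.isoformat(), 'End': end.isoformat()},
        'Granularity': 'DAILY',
        'Metrics': ['UnblendedCost'],
        'Filter': {'Tags': {'Key': 'Project', 'Values': [project]}},
    }

params = project_cost_query('Alpha')
# With credentials configured:
#   import boto3
#   result = boto3.client('ce').get_cost_and_usage(**params)
```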


As the screenshot below shows, we can now see exactly how much each project is costing per day (or per month, if selected).


Some important points to consider:

  • Cost allocation tagging is managed via the master (payer) billing account at the root of the AWS organization. If your account is part of an organization, you will have to contact that account’s administrator to enable the cost allocation tags.
  • The error message in the previous screenshot will always appear in member accounts that have not been granted the billing-management permission.
  • Some charges, notably bandwidth, cannot be tagged and thus cannot be accounted for under cost allocation tagging. A common pattern in such cases is to calculate each project’s percentage of the tagged costs and apportion the unaccounted charges according to that percentage.
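That apportionment pattern is simple to express in code. A minimal sketch (project names and dollar figures are made up for illustration):

```python
def apportion_untagged(tagged_costs, untagged_total):
    """Spread an untagged charge across projects in proportion to each
    project's share of the tagged spend."""
    total = sum(tagged_costs.values())
    return {project: cost + untagged_total * (cost / total)
            for project, cost in tagged_costs.items()}

# $120 of tagged cost across two projects, plus $30 of untagged bandwidth:
allocated = apportion_untagged({'Alpha': 90.0, 'Beta': 30.0}, 30.0)
# Alpha carries 75% of the untagged charge, Beta the remaining 25%.
```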



Watching the watcher – Monitoring the EC2Config Service

The EC2Config service is a nifty Windows service provided by Amazon that performs many important chores on instances based on AWS Windows Server 2003-2012 R2 AMIs. These tasks include (but are not limited to):

  • Performing initial start-up tasks when the instance is first started (e.g. executing the user data, setting a random Administrator account password, etc.)
  • Displaying instance information on the desktop background wallpaper
  • Running Sysprep and shutting down the instance

More details about this service can be found on Amazon’s webpage.

Another important aspect of the EC2Config service is that it can be configured to send performance metrics to CloudWatch. Examples of these metrics are Available Memory, Free Disk Space and Page File Usage, to name a few. The problem we faced is that sometimes this service would either stop or fail to start due to a misconfigured configuration file. Having this service running all the time was critical for monitoring and compliance reasons.

To make sure that this service was running and publishing metrics to CloudWatch, we came up with a simple solution: a Python script written as a Lambda function that queries Windows performance metrics for the last 10 minutes (the function is scheduled to run every 30 minutes, configurable through a Lambda trigger) and, if the metric is missing, sends an alert.

Following is the code written for this purpose. The salient features of the code are:

  1. The function lambda_handler is invoked by Lambda
  2. Variables are initialised; currently these are hard-coded in the function, but they could also be parametrised using the Environment Variables feature of Lambda
  3. EC2 and CloudWatch client objects are initialised
  4. Running instances are retrieved using the “running” state filter
  5. If an instance has been running for less than the requested period, it is ignored (this avoids false alarms for instances started in the last few minutes)
  6. The CloudWatch metric ‘Available Memory’ for the instance is retrieved for the last 10 minutes. This can be substituted with any other metric name; please also take note of the Dimensions of the metric
  7. The datapoint result is inspected; if no datapoint is found, the instance is added to a list (later used for the alert)
  8. If the list has any entries, an alert is sent via the SNS topic

# AWS Lambda Python script to query CloudWatch metrics for all running
# EC2 instances and, if data is unavailable, send a message through an
# SNS topic to check the EC2Config service
# Required IAM permissions:
#   ec2:DescribeInstances
#   sns:Publish
#   cloudwatch:GetMetricStatistics
# Setup:
# Check these in the code (Search *1 and *2):
#   *1: Confirm details of the parameters
#   *2: Confirm details of the dimensions

from __future__ import print_function
import boto3
from datetime import datetime, timedelta

def check_tag_present(instance, tag_name, tag_value):
    for tag in instance.tags or []:
        if tag['Key'] == tag_name and tag['Value'] == tag_value:
            return True

    return False

def send_alert(list_instances, topic_arn):
    if topic_arn == "":
        return  # No SNS topic configured, nothing to do

    instances = ""

    for s in list_instances:
        instances += s
        instances += "\n"

    subject = "Warning: Missing CloudWatch metric data"
    message = ("Warning: Missing CloudWatch metric data for the following "
               "instance id(s): \n\n" + instances +
               "\n\nCheck the EC2Config service is running and the config file in "
               "C:\\Program Files\\Amazon\\Ec2ConfigService\\Settings is correct.")
    client = boto3.client('sns')
    client.publish(TargetArn=topic_arn, Message=message, Subject=subject)
    print("*** Sending alert ***")

def lambda_handler(event, context):
    # *1-Provide the following information
    _instancetagname = 'Environment' # Main filter Tag key
    _instancetagvalue = 'Prod'       # Main filter Tag value
    _period = int(10)                # Period in minutes
    _namespace = 'WindowsPlatform'   # Namespace of metric
    _metricname = 'Available Memory' # Metric name
    _unit = 'Megabytes'              # Unit
    _topicarn = ''                   # SNS Topic ARN to write message to
    _region = "ap-southeast-2"       # Region

    ec2 = boto3.resource('ec2', _region)
    cw = boto3.client('cloudwatch', _region)

    filters = [{'Name': 'instance-state-name', 'Values': ['running']}]

    instances = ec2.instances.filter(Filters=filters)

    print('Reading CloudWatch metric for last %s min\n' % (_period))

    start_time = datetime.utcnow() - timedelta(minutes=_period)
    end_time = datetime.utcnow()

    list_instances = []  # Instances found with no metric data

    print("List of running instances:")

    for instance in instances:
        if check_tag_present(instance, _instancetagname, _instancetagvalue) == False:
            continue  # Tag/Value missing, ignoring instance

        print("Checking ", instance.id)

        # Skip instances running for less than the requested period to
        # avoid false alarms for recently started instances
        new_dt = datetime.utcnow() - instance.launch_time.replace(tzinfo=None)
        minutessince = int(new_dt.total_seconds() / 60)
        if minutessince < _period:
            print("Not looking for data on this instance as uptime is less than requested period.\n")
            continue

        instance_name = next((tag['Value'] for tag in instance.tags
                              if tag['Key'] == 'Name'), instance.id)

        # *2-Confirm the dimensions match how EC2Config publishes the metric
        metrics = cw.get_metric_statistics(
            Namespace=_namespace,
            MetricName=_metricname,
            Dimensions=[{'Name': 'InstanceId', 'Value': instance.id}],
            StartTime=start_time,
            EndTime=end_time,
            Period=_period * 60,
            Statistics=['Maximum'],
            Unit=_unit)
        datapoints = metrics['Datapoints']

        for datapoint in datapoints:
            print("Datapoint Data:", datapoint['Maximum'],
                  "\nTimeStamp: ", datapoint['Timestamp'], "\n")

        if not datapoints:  # No data point found
            print("CloudWatch has no metrics for", _metricname,
                  "for instance id: ", instance.id)
            list_instances.append(instance_name + " (" + instance.id + ")")
        print("=================================================\n")

    if len(list_instances) > 0:
        send_alert(list_instances, _topicarn)

Please note: the function needs some permissions to execute, so the following policy should be attached to the Lambda function’s role:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1493179460000",
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances"],
        "Resource": ["*"]
    }, {
        "Sid": "Stmt1493179541000",
        "Effect": "Allow",
        "Action": ["sns:Publish"],
        "Resource": ["*"]
    }, {
        "Sid": "Stmt1493179652000",
        "Effect": "Allow",
        "Action": ["cloudwatch:GetMetricStatistics"],
        "Resource": ["*"]
    }]
}