Update FSTAB on multiple EC2 instances using Run Commands

Scenario:

  • Customer is running multiple Linux EC2 instances in AWS.
  • Customer reports that instances are losing mount points after a reboot.

Solution:

The resolution requires updating the /etc/fstab file on all the instances.

fstab is a system configuration file on Linux and other Unix-like operating systems that contains information about major filesystems on the system. It takes its name from "file systems table", and it is located in the /etc directory (ref: http://www.linfo.org/etc_fstab.html).

In order to update files on multiple servers, we will utilize the following:

  • The echo command with the append redirection operator (>>) to update the text file through the shell
  • SSM Run Command to execute the command on multiple machines

Note: All the concerned EC2 instances should have the SSM agent installed and configured.

Step 1: Log in to the AWS Console and click EC2


Step 2: Click on Run Command in the Systems Manager Services section


Step 3: Click on Run Command in the main panel


Step 4: Select Run-Shell Script


Step 5: Select Targets 

Note: Targets can be selected manually, or we can use tags to perform the same activity on multiple instances with a matching tag.


Step 6: Enter the following information:

  • Execute on: Specifies the number of targets on which the command can execute concurrently. Running commands concurrently reduces overall execution time.
  • Stop after: 1 error
  • Timeout (seconds): leave the default of 600 seconds

Step 7: Select the commands section and paste the following commands:

echo '10.x.x.x:/ /share2 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab
echo '10.x.x.x:/ /share1 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab
echo '192.x.x.x:/ /backup1 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab
echo '172.x.x.x:/ /backup2 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab            

 

Step 8: Click on Run

Step 9: Click on the command ID to get an update on execution success or failure.
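
If you'd rather script the whole process, the same Run Command can be issued with the AWS SDK. Below is a minimal boto3 sketch of the equivalent invocation; the targeting tag and the fstab entries are placeholders for your own values, and a trailing mount -a is added so the new entries are mounted without a reboot:

import boto3

ssm = boto3.client('ssm')

# Replace with your real NFS endpoints and mount points.
fstab_entries = [
    "10.x.x.x:/ /share1 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0",
    "10.x.x.x:/ /share2 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0",
]
commands = ["echo '{0}' >> /etc/fstab".format(entry) for entry in fstab_entries]
commands.append('mount -a')  # mount the new entries immediately, without a reboot

response = ssm.send_command(
    Targets=[{'Key': 'tag:Environment', 'Values': ['Production']}],  # hypothetical tag
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': commands},
    MaxConcurrency='10',
    MaxErrors='1',
    TimeoutSeconds=600,
)
print(response['Command']['CommandId'])  # use this ID to check the execution status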


Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 3)

Hopefully you’ve had the chance to follow along in parts 1 and 2 where we set up our Lex chatbot to take and validate input. In this blog, we’ll interface with our Active Directory environment to perform the password reset function. To do this, we need to create a Lambda function that will be used as the logic to fulfil the user’s intent. The Lambda function will be packaged with the python LDAP library to modify the AD password attribute for the user. Below are the components that need to be configured.

Active Directory Service Account

To begin, we need to start by creating a service account in Active Directory that has permissions to modify the password attribute for all users. Our Lambda function will then use this service account to perform password resets. To do this, create a service account in your Active Directory domain and perform the following to delegate the required permissions:

  1. Open Active Directory Users and Computers.
  2. Right click the OU or container that contains organisational users and click Delegate Control.
  3. Click Next on the Welcome Wizard.
  4. Click Add and enter the service account that will be granted the reset password permission.
  5. Click OK once you’ve made your selection, followed by Next.
  6. Ensure that Delegate the following common tasks is enabled, and select Reset user passwords and force password change at next logon.
  7. Click Next, and Finish.

KMS Key for the AD Service Account

Our Lambda function will need to use the credentials for the service account to perform password resets. We want to avoid storing credentials within our Lambda function, so we’ll store the password as an encrypted Lambda variable and allow our Lambda function to decrypt it using Amazon’s Key Management Service (KMS). To create the KMS encryption key, perform the following steps:

  1. In the Amazon Console, navigate to IAM
  2. Select Encryption Keys, then Create key
  3. Provide an Alias (e.g. resetpw) then select Next
  4. Select Next Step for all subsequent steps then Finish on step 5 to create the key.

IAM Role for the Lambda Function

Because our Lambda function will need access to several AWS services such as SNS to notify the user of their new password and KMS to decrypt the service account password, we need to provide our function with an IAM role that has the relevant permissions. To do this, perform the following steps:

  1. In the Amazon Console, navigate to IAM
  2. Select Policies then Create policy
  3. Switch to the JSON editor, then copy and paste the following policy, replacing the KMS resource with the KMS ARN created above
    {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": "sns:Publish",
           "Resource": "*"
         },
         {
           "Effect": "Allow",
           "Action": "kms:Decrypt",
           "Resource": "<KMS ARN>"
         }
       ]
    }
  4. Provide a name for the policy (e.g. resetpw), then select Create policy
  5. After the policy has been created, select Roles, then Create Role
  6. For the AWS Service, select Lambda, then click Next:Permissions
  7. Search for and select the policy you created in step 4, as well as the AWSLambdaVPCAccessExecutionRole and AWSLambdaBasicExecutionRole policies, then click Next: Review
  8. Provide a name for the role (e.g. resetpw) then click Create role

Network Configuration for the Lambda Function

To access our Active Directory environment, the Lambda function will need to run within a VPC or a peered VPC that hosts an Active Directory domain controller. Additionally, we need the function to access the internet to be able to reach the KMS and SNS services. Lambda functions provisioned in a VPC are assigned an Elastic Network Interface (ENI) with a private IP address from the subnet they're deployed in. Because the ENI doesn't have a public IP address, it cannot simply leverage an internet gateway attached to the VPC for internet connectivity; as a result, traffic must be routed through a NAT Gateway similar to the diagram below.

Password Reset Lambda Function

Now that we’ve performed all the preliminary steps, we can start building the Lambda function. Because the Lambda execution environment uses Amazon Linux, I prefer to build my Lambda deployment package on a local Amazon Linux docker container to ensure library compatibility. Alternatively, you could deploy a small Amazon Linux EC2 instance for this purpose, which can assist with troubleshooting if it’s deployed in the same VPC as your AD environment.

Okay, so let’s get started on building the lambda function. Log in to your Amazon Linux instance/container, and perform the following:

  • Create a project directory and install the python-ldap dependencies: gcc, python-devel and openldap-devel
    mkdir ~/resetpw
    sudo yum install python-devel openldap-devel gcc
  • Next, we're going to download the python-ldap library to the directory we created
    pip install python-ldap -t ~/resetpw
  • In the resetpw directory, create a file called reset_function.py and copy and paste the following script

  • Now, we need to create the Lambda deployment package. As the package size is correlated with the speed of Lambda function cold starts, we need to filter out anything that's not necessary to reduce the package size. The following zips the script and LDAP library:
    cd ~/resetpw
    zip -r ~/resetpw.zip . -x "setuptools*/*" "pkg_resources/*" "easy_install*"
  • We need to deploy this package as a Lambda function. I've got the AWS CLI installed in my Amazon Linux container, so I'm using the following command to create the Lambda function. Alternatively, you can download the zip file and manually create the Lambda function in the AWS console using the same parameters specified in the CLI below. Note that --role expects the ARN of the IAM role created earlier.
    aws lambda create-function --function-name reset_function --region us-east-1 --zip-file fileb://root/resetpw.zip --role arn:aws:iam::<account-id>:role/resetpw --handler reset_function.lambda_handler --runtime python2.7 --timeout 30 --vpc-config SubnetIds=subnet-a12b3cde,SecurityGroupIds=sg-0ab12c30 --memory-size 128
  • For the script to run in your environment, a number of Lambda variables need to be set which will be used at runtime. In the AWS Console, navigate to Lambda then click on your newly created Lambda function. In the environment variables section, create the following variables:
    • Url – This is the LDAPS URL for your domain controller. Note that it must be LDAP over SSL.
    • Domain_base_dn – The base distinguished name used to search for the user
    • User – The service account that has permissions to change the user password
    • Pw – The password of the service account
  • Finally, we need to encrypt the Pw variable in the Lambda console. Navigate to the Encryption configuration and select Enable helpers for encryption in transit. Select your KMS key for both encryption in transit and at rest, then select the Encrypt button next to the Pw variable. This will encrypt and mask the value.

  • Hit Save in the top right-hand corner to save the environment variables.
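
On the function side, an encrypted environment variable arrives base64-encoded and must be decrypted with KMS at runtime. A minimal sketch of the usual pattern (the helper name is my own):

import base64
import os

import boto3

def decrypt_env_var(name):
    # Encrypted Lambda environment variables are stored base64-encoded;
    # decode them, then ask KMS to decrypt the ciphertext.
    ciphertext = base64.b64decode(os.environ[name])
    plaintext = boto3.client('kms').decrypt(CiphertextBlob=ciphertext)['Plaintext']
    return plaintext.decode('utf-8')

service_account_pw = decrypt_env_var('Pw')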

That’s it! The Lambda function is now configured. A summary of the Lambda function’s logic is as follows:

  1. Collect the Lambda environment variables and decrypt the service account password
  2. Perform a secure AD bind but don’t verify the certificate (I’m using a Self-Signed Cert in my lab)
  3. Collect the user’s birthday, start month, telephone number, and DN from AD
  4. Check the user’s verification input
  5. If the input is correct, reset the password and send it to the user’s telephone number, otherwise exit the function.
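
For readers who want to see what steps 2 and 5 look like in code, here is a stripped-down sketch using python-ldap. It is illustrative only: the LDAPS URL, DNs and password values are placeholders, and error handling is omitted.

import ldap

def reset_ad_password(url, svc_user, svc_pw, user_dn, new_password):
    # Step 2: bind over LDAPS without verifying the certificate
    # (self-signed certificate in the lab). TLS options must be set
    # before the connection is initialised.
    ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
    conn = ldap.initialize(url)  # e.g. 'ldaps://dc01.example.local:636'
    conn.simple_bind_s(svc_user, svc_pw)

    # Step 5: Active Directory expects the new password as a quoted,
    # UTF-16-LE encoded string written to the unicodePwd attribute.
    encoded_pw = '"{0}"'.format(new_password).encode('utf-16-le')
    conn.modify_s(user_dn, [(ldap.MOD_REPLACE, 'unicodePwd', [encoded_pw])])
    conn.unbind_s()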

Update and test Lex bot fulfillment

The final step is to add the newly created Lambda function to the Lex bot so it’s invoked after input is validated. To do this, perform the following:

  1. In the Amazon Console, navigate to Amazon Lex and select your bot
  2. Select Fulfillment for your password reset intent, then select AWS Lambda function
  3. Choose the recently created Lambda function from the drop down box, then select Save Intent

That should be it! You’ll need to build your bot again before testing…

My phone buzzes with a new SMS…

Success! A few final things worth noting:

  • All Lambda execution logs will be written to CloudWatch logs, so this is the best place to begin troubleshooting if required
  • Modification to the AD password attribute is not possible without a secure bind and will result in an error.
  • The interrogated fields (month started and date of birth) are custom AD attributes in my lab.
  • Password complexity in my lab is relaxed. If you’re using the default password complexity in AD, the randomly generated password in the lambda function may not meet complexity requirements every time.

Hope to see you in part 4, where we bolt on Amazon Connect to field phone calls!

Enable Cost Allocation Tags to differentiate project based billing

When running in an AWS public cloud environment, there is often a need to dissect billing across different projects for accounting and accrual purposes. AWS provides a mechanism to aggregate related platform costs using a feature known as Cost Allocation Tags. With this feature you can designate tags on your AWS resources to track costs at a detailed level.

From the AWS Documentation:

Activating tags for cost allocation tells AWS that the associated cost data for these tags should be made available throughout the billing pipeline. Once activated, cost allocation tags can be used as a dimension of grouping and filtering in Cost Explorer, as well as for refining AWS budget criteria.

 

For example, to view cost allocation based on various project resources in your AWS account, you can tag these resources (EC2 instances, S3 buckets, etc.) with a tag named "Project". Next, the Project tag can be activated as a Cost Allocation Tag. From then on, AWS will include this tag in associated cost data to allow for filtering based on the tag in Cost Explorer reports.

 

Let’s walk through the steps of setting this up:

  1. Log in to your AWS Management Console
  2. Tag all the resources with a tag key of Project and a value matching each of your various projects. Understand that this may not be possible for every resource type.
  3. Navigate to My Billing Dashboard > Cost Allocation Tags
  4. Under User-Defined Cost Allocation Tags section, select the tag “Project” and click the “Activate” button.

 


Once a tag is activated it will take around 24 hours for billing data to appear under this tag.

 

Next, to view the costs under a project, do the following:

  1. Log in to your AWS Management Console
  2. Navigate to My Billing Dashboard > Cost Explorer
  3. Click “Launch Cost Explorer”
  4. On the right side of the page under Filters section, click the Tag filter and select the Project tag, then the Tag Value to filter cost by the Project


Now we can see exactly how much each project is costing per day (or month, if selected).
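
The same filtered view is also available programmatically through the Cost Explorer API, which is handy for scheduled reporting. A rough boto3 sketch, where the dates and the project value are placeholders:

import boto3

ce = boto3.client('ce', region_name='us-east-1')  # Cost Explorer API

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2018-01-01', 'End': '2018-02-01'},
    Granularity='DAILY',
    Metrics=['UnblendedCost'],
    Filter={'Tags': {'Key': 'Project', 'Values': ['ProjectX']}},  # hypothetical project value
)

for day in response['ResultsByTime']:
    print(day['TimePeriod']['Start'], day['Total']['UnblendedCost']['Amount'])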


Some important points to consider:

  • Cost allocation tagging is "managed" via the master billing account at the root of the AWS organization. If your account is part of an organization, you will have to contact this account's administrator to enable the cost allocation tags.
  • In accounts that have not been allocated this management permission, the Cost Allocation Tags page will display an error message instead.
  • Some charges, notably bandwidth, cannot be tagged and thus cannot be accounted for under cost allocation tagging. A common pattern in such cases is to calculate each project's percentage of the tagged costs and apportion the unaccounted charges based on those percentages.

 

 

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 2)

Welcome back! Hopefully you had the chance to follow along in part 1 where we started creating our Lex chatbot. In part 2, we attempt to make the conversation more human-like and begin integrating data validation on our slots to ensure we’re getting the correct input.

Creating the Lambda initialisation and validation function

As data validation requires compute, we’ll need to start by creating an AWS Lambda function. Head over to the AWS console, then navigate to the AWS Lambda page. Once you’re there, select Create Function and choose to Author from Scratch then specify the following:

Name: ResetPWCheck

Runtime: Python 2.7 (it’s really a matter of preference)

Role: I use an existing Out of the Box role, “Lambda_basic_execution”, as I only need access to CloudWatch logs for debugging.

Once you’ve populated all the fields, go ahead and select Create Function. The script we’ll be using is provided (further down) in this blog, however before we go through the script in detail, there are two items worth mentioning.

Input Events and Response Formats

It’s well worth familiarising yourself with the page on Lambda Function Input Event and Response Formats in the Lex Developer guide. Every time input is provided to Lex, it invokes the Lambda initialisation and validation function. For example, when I tell my chatbot “I need to reset my password”, the Lambda function is invoked and the input event described in that guide is passed to it.

Amazon Lex expects a response from the Lambda function in JSON format that provides it with the next dialog action.

Persisting Variables with Session Attributes

There are many ways to determine within your Lambda function where you’re up to in your chat dialog, however my preferred method is to pass state information within the SessionAttributes object of the input event and response as a key/value pair. The SessionAttributes can persist between invocations of the Lambda function (every time input is provided to the chatbot), however you must remember to collect and pass the attributes between input and responses to ensure it persists.
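
To make that concrete, below is the shape of a typical helper that builds an ElicitSlot response; note the sessionAttributes being passed straight back so state survives to the next invocation. This is the general Lex response format rather than the exact code from my function:

def elicit_slot(session_attributes, intent_name, slots, slot_to_elicit, message_text):
    # Ask Lex to prompt the user for a specific slot, carrying our
    # session state along so it persists to the next invocation.
    return {
        'sessionAttributes': session_attributes,
        'dialogAction': {
            'type': 'ElicitSlot',
            'intentName': intent_name,
            'slots': slots,
            'slotToElicit': slot_to_elicit,
            'message': {'contentType': 'PlainText', 'content': message_text},
        },
    }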

Input Validation Code

With that out of the way, let’s begin looking at the code. The below script is what I’ve used which you can simply copy and paste, assuming you’re using the same slot and intent names in your Lex bot that were used in Part 1.

Let’s break it down.

When the lambda function is first invoked, we check to see if any state is set in the sessionAttributes. If not, we can assume this is the first time the lambda function is invoked and as a result, provide a welcoming response while requesting the User’s ID. To ensure the user isn’t welcomed again, we set a session state so the Lambda function knows to move to User ID validation when next invoked. This is done by setting the “Completed” : “None” key/value pair in the response SessionAttributes.

Next, we check the User ID. You’ll notice the chkUserId function checks for two things: that the slot is populated, and if it is, the length of the field. Because the slot type is AMAZON.Number, any non-numeric characters that are entered will be rejected by the slot. If this occurs, the slot will be left empty, hence this is something we’re looking out for. We also want to ensure the User ID is 6 digits, otherwise it is considered invalid. If the input is correct, we set the session state key/value pair to indicate User ID validation is complete and allow the dialog to continue; otherwise we request the user to re-enter their User ID.

Next, we check the Date of Birth. Because the slot type is strict regarding input, we don’t do much validation here. An utterance for this slot type generally maps to a complete date: YYYY-MM-DD. For validation purpose, we’re just looking for an empty slot. Like the User ID check, we set the session state and allow the dialog to continue if all looks good.

Finally, we check the last slot which is the Month Started. Assuming the input for the month started is correct, we then confirm the intent by reading all the slot values back to the user and asking if it’s correct. You’ll notice here that there’s a bit of logic to determine if the user is using voice or text to interact with Lex. If voice is used, we use Speech Synthesis Markup Language (SSML) to ensure the UserID value is read as digits, rather than as the full number.
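
As a rough illustration of that voice check (simplified from the full script; the event field follows the Lex input event format):

def confirmation_message(event, user_id):
    if event['outputDialogMode'] == 'Voice':
        # Read the six-digit User ID back digit by digit rather than
        # as one large number.
        content = ('<speak>Your user ID is <say-as interpret-as="digits">'
                   '{0}</say-as>. Is this correct?</speak>').format(user_id)
        return {'contentType': 'SSML', 'content': content}
    return {'contentType': 'PlainText',
            'content': 'Your user ID is {0}. Is this correct?'.format(user_id)}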

If the user is happy with the slot values, the validation completes and Lex then moves to the next Lambda function to fulfil the intent (next blog). If the user isn’t happy with the slot values, the lambda function exits telling the user to call back and try again.

Okay, now that our Lambda function is finished, we need to enable it as a code hook for initialisation and validation. Head over to your Lex bot, select the “ResetPW” intent, then tick the box under Lambda initialisation and validation and select your Lambda function. A prompt will be given to provide permissions to allow your Lex bot to invoke the lambda function. Select OK.

Let’s hit Build on the chatbot, and test it out.

So, we’ve managed to make the conversation a bit more human like and we can now detect invalid input. If you use the microphone to chat with your bot, you’ll notice the User ID value is read as digits. That’s it for this blog. Next blog, we’ll integrate Active Directory and actually get a password reset and sent via SNS to a mobile phone.

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 1)

“What! Is this guy for real? Does he really think he can replace the front line of IT with pre-canned conversations?” I must admit, it’s a bold statement. The IT Service Desk has been around for years and has been the foot in the door for most of us in the IT industry. It’s the face of IT operations and plays an important role in ensuring an organisation’s staff can perform to the best of their abilities. But what if we could take some of the repetitive tasks the service desk performs and automate them? Not only would we be saving on head count costs, we would be able to ensure correct policies and procedures are followed to uphold security and compliance. The aim of this blog is to provide a working example of the automation of one service desk scenario to show how easy and accessible the technology is, and how it can be applied to various use cases.
To make it easier to follow along, I’ve broken this blog up into a number of parts. Part 1 will focus on the high-level architecture for the scenario and begin creating the Lex chatbot.

Scenario

Arguably, the most common service desk request is the password reset. While this is a pretty simple issue for the service desk to resolve, many service desk staff seem to skip over, or not realise the importance of user verification. Both the simple resolution and the strict verification requirement make this a prime scenario to automate.

Architecture

So what does the architecture look like? The diagram below dictates the expected process flow. Let’s step through each item in the diagram.

 

Amazon Connect

The process begins when the user calls the service desk and requests to have their password reset. In our architecture, the service desk uses Amazon Connect which is a cloud based customer contact centre service, allowing you to create contact flows, manage agents, and track performance metrics. We’re also able to plug in an Amazon Lex chatbot to handle user requests and offload the call to a human if the chatbot is unable to understand the user’s intent.

Amazon Lex

After the user has stated their request to change their password, we need to begin the user verification process. Their intent is recognised by our Amazon Lex chatbot, which initiates the dialog for the user verification process to ensure they are who they really say they are.

AWS Lambda

After the user has provided verification information, AWS Lambda, which is a compute on demand service, is used to validate the user’s input and verify it against internal records. To do this, Lambda interrogates Active Directory to validate the user.

Amazon SNS

Once the user has been validated, their password is reset to a random string in Active Directory and the password is messaged to the user’s phone using Amazon’s Simple Notification Service. This completes the interaction.

Building our Chatbot

Before we get into the details, it’s worth mentioning that the aim of this blog is to convey the technology capability. There’s many ways of enhancing the solution or improving validation of user input that I’ve skipped over, so while this isn’t a finished production ready product, it’s certainly a good foundation to begin building an automated call centre.

To begin, let’s start with building our Chatbot in Amazon Lex. In the Amazon Console, navigate to Amazon Lex. You’ll notice it’s only available in Ireland and US East. As Amazon Connect and my Active Directory environment are also in US East, that’s the region I’ve chosen.

Go ahead and select Create Bot, then choose to create your own Custom Bot. I’ve named mine “UserAdministration”. Choose an Output voice and set the session timeout to 5 minutes. An IAM Role will automatically be created on your behalf to allow your bot to use Amazon Polly for speech. For COPPA, select No, then select Create.

Once the bot has been created, we need to identify the user action expected to be performed, which is known as an intent. A bot can have multiple intents, but for our scenario, we’re only creating one, which is the password reset intent. Go ahead and select Create Intent, then in the Add Intent window, select Create new intent. My intent name is “ResetPW”. Select Add, which should add the intent to your bot. We now need to specify some expected sample utterances, which are phrases the user can use to trigger the Reset Password intent. There’s quite a few that could be listed here, but I’m going to limit mine to the following:

  • I forgot my password
  • I need to reset my password
  • Can you please reset my password

The next section is the configuration for the Lambda validation function. Let’s skip past this for the time being and move onto the slots. Slots are used to collect information from the user. In our case, we need to collect verification information to ensure the user is who they say they are. The verification information collected is going to vary between environments. I’m looking to collect the following to verify against Active Directory:

  • User ID – In my case, this is a 6-digit employee number that is also the sAMAccountName in Active Directory
  • User’s birthday – This is a custom attribute in my Active Directory
  • Month started – This is a custom attribute in my Active Directory

In addition to this, it’s also worth collecting and verifying the user’s mobile number. This can be done by passing the caller ID information from Amazon Connect, however we’ll skip this, as the bulk of our testing will be text chat and we need to ensure we have a consistent experience.

To define a slot, we need to specify three items:

  • Name of the slot – Think of this as the variable name.
  • Slot type – The data type expected. This is used to train the machine learning model to recognise the value for the slot.
  • Prompt – How the user is prompted to provide the value sought.

Many slot types are provided by Amazon, two of which have been used in this scenario. For “MonthStarted”, I’ve decided to create my own custom slot type, as the built-in “AMAZON.Month” slot type wasn’t strictly enforcing recognisable months. To create your own slot type, press the plus symbol on the left-hand side of the page next to Slot Types, then provide a name and description for your slot type. Select Restrict to Slot values and Synonyms, then enter each month and its abbreviation. Once completed, click Add slot to intent.
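
If you’d rather script this step, the same custom slot type can be created through the Lex Model Building Service API. A partial boto3 sketch (only two months shown; the description text is my own):

import boto3

lex = boto3.client('lex-models', region_name='us-east-1')

lex.put_slot_type(
    name='MonthStarted',
    description='Months of the year, with common abbreviations as synonyms',
    enumerationValues=[
        {'value': 'January', 'synonyms': ['Jan']},
        {'value': 'February', 'synonyms': ['Feb']},
        # ...the remaining months follow the same pattern
    ],
    valueSelectionStrategy='TOP_RESOLUTION',  # restrict to the listed values and synonyms
)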

Once the custom slot type has been configured, it’s time to set up the slots for the intent. The screenshot below shows the slots that have been configured and the expected order to prompt the user.

Last step (for this blog post), is to have the bot verify the information collected is correct. Tick the Confirmation Prompt box and in the Confirm text box provided, enter the following:

Just to confirm, your user ID is {UserID}, your Date of Birth is {DOB} and the month you started is {MonthStarted}. Is this correct?

For the Cancel text box, enter the following:

Sorry about that. Please call back and try again.

Be sure to leave the fulfillment to Return parameters to client and hit Save Intent.

Great! We’ve built the bare basics of our bot. It doesn’t really do much yet, but let’s take it for a spin anyway and get a feel for what to expect. In the top right-hand corner, there’s a build button. Go ahead and click the button. This takes some time, as building a bot triggers machine learning and creates the models for your bot. Once completed, the bot should be available to text or voice chat on the right side of the page. As you move through the prompts, you can see at the bottom the slots getting populated with the expected format. i.e. 14th April 1983 is converted to 1983-04-14.

So at the moment, our bot doesn’t do much but collect the information we need. Admittedly, the conversation is a bit robotic as well. In the next few blogs, we’ll give the bot a bit more of a personality, we’ll do some input validation, and we’ll begin to integrate with Active Directory. Once we’ve got our bot working as expected, we’ll bolt on Amazon Connect to allow users to dial in and converse with our new bot.

Disk Space Reporting through Lambda Functions - Linux Servers

1. Solution Objective:

The solution provides a detailed report on hard disk space for all the Linux EC2 instances in the AWS environment.

2. Requirements:

Mentioned below are the requirements the solution should fulfil.

  • Gather information related to all mount points on all the Linux EC2 instances in the environment.
  • Generate a cumulative report based on all instances in the environment.

3. Assumptions:

The following assumptions are considered

  • All the EC2 instances have SSM agent installed.
  • The personnel responsible for the configuration have some understanding of IAM Roles, S3 buckets and lambda functions

4. Solution Description:

The following services and tools will be utilized to generate the report:

  • Linux shell scripts
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • Maintenance Windows

4.1 Linux Shell Script

A Linux shell script will be utilized to generate information about the instance and the space utilization of its mount points.

The script below needs to be executed on all Linux EC2 instances to generate the mount point information.

curl http://169.254.169.254/latest/meta-data/instance-id # Prints the instance ID
printf "\n" # Adds a newline after the instance ID
df # Provides details of the mount points

4.2 AWS S3

The result of the shell script will be posted to an S3 bucket for further use.

The EC2 instances will need write access to the nominated S3 bucket.

S3 Bucket Name: eomreport (sample name)

4.3 AWS Lambda Functions

Lambda Functions will be used to perform the following activities.

  • Acquire the result of the Shell script from the S3 bucket
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda Functions would need read access to the S3 bucket and access to AWS SES to send emails to recipients.

Mentioned below is the Lambda function that performs the above tasks.

import boto3

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('eomreport')
    resulthtml = ['<html><body><h1>Report : Hard disk Space</h1>'] # Adds heading to the email body
    resulthtml.append('<table border="1">') # Creates a table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Available Space</b></td><td><b>Used Space</b></td><td><b>Use %</b></td><td><b>Mounted on</b></td></tr>')
    for file_key in mybucket.objects.all():
        if "stdout" not in file_key.key: # Run Command stores stdout under a key containing "stdout"
            continue
        body = file_key.get()['Body'].read().decode('utf-8')
        complete = body.splitlines() # splits data into lines
        instance_id = complete[0] # the first line of the script output is the instance ID
        details = complete[2:] # skips the df header line
        for line in details:
            output_word = line.split()
            # df columns: [2] Used, [3] Available, [4] Use%, [5] Mounted on
            resulthtml.append('<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>'.format(instance_id, output_word[3], output_word[2], output_word[4], output_word[5]))
    resulthtml.append('</table></body></html>')
    final = ''.join(resulthtml)
    print(final)
    sender = "email@email.com"
    recipient = "email@email.com"
    awsregion = "us-east-1"
    subject = "Hard disk space report"
    charset = "UTF-8"
    textbody = "Hard disk space report (see HTML body)."
    client = boto3.client('ses', region_name=awsregion)
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': [
                    recipient,
                ],
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': charset,
                        'Data': final,
                    },
                    'Text': {
                        'Charset': charset,
                        'Data': textbody,
                    },
                },
                'Subject': {
                    'Charset': charset,
                    'Data': subject,
                },
            },
            Source=sender,
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print("Error: ", e)
    else:
        print("Email sent!")

 

4.4 AWS IAM Roles

Roles will be used to grant:

  • AWS S3 write access to all the EC2 instances, as they will submit the output of the shell script to the S3 bucket
  • AWS SES access to Lambda Functions to send emails to relevant recipients.

4.5 AWS SES

Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email.( Ref. From https://aws.amazon.com/ses/).

We will be utilizing AWS SES to send emails generated by AWS Lambda.

The configuration of the Lambda functions can be modified to send emails to a distribution group to provide disk space reporting, or to send emails to a ticketing system in order to provide alerting and ticket creation in case disk utilization crosses a configured threshold.

5. Solution Configuration

5.1 Configure IAM Roles

The following Roles should be configured

  • IAM role for the Lambda function
  • IAM role for EC2 instances for S3 bucket access

5.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above, the following policy should be created and attached to the IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501474857000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3BucketName/*"
            ]
        },
        {
            "Sid": "Stmt1501474895000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

5.1.2 Role for EC2 Instance

All EC2 instances should have access to store the Shell output in the S3 bucket.

To accomplish the above, the following policy should be assigned to the EC2 roles:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501475224000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::eomreport"
            ]
        }
    ]
}

5.2 Configure Maintenance Window

The following tasks need to be performed for the maintenance window

  • Register a Run Command with Run-Shell Script using the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement

Maintenance Window Ref : 

http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html
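
These registration steps can also be scripted. A rough boto3 sketch is below; the window ID, target ID and service role ARN are placeholders for values from your own environment:

import boto3

ssm = boto3.client('ssm')

ssm.register_task_with_maintenance_window(
    WindowId='mw-0123456789abcdef0',  # placeholder maintenance window ID
    Targets=[{'Key': 'WindowTargetIds', 'Values': ['<window-target-id>']}],
    TaskArn='AWS-RunShellScript',
    TaskType='RUN_COMMAND',
    ServiceRoleArn='arn:aws:iam::<account-id>:role/MaintenanceWindowRole',
    MaxConcurrency='10',
    MaxErrors='1',
    Priority=1,
    TaskInvocationParameters={
        'RunCommand': {
            'Parameters': {'commands': [
                'curl http://169.254.169.254/latest/meta-data/instance-id',
                'printf "\\n"',
                'df',
            ]},
            # Run Command writes stdout to this bucket, which is what the
            # Lambda function searches for ("stdout" in the object key).
            'OutputS3BucketName': 'eomreport',
        }
    },
)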

5.3 Configure Lambda Function

The following tasks need to be performed for the Lambda function:

  • Create a blank Lambda function with the S3 put event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select runtime Python 3.6
  • Copy and paste the Lambda function mentioned in section 4.3

5.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.

6. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

InstanceID Available Space Used Space Use % Mounted on
i-sampleID1 123984208 1832604 0.02 /
i-sampleID1 7720980 0 0 /dev
i-sampleID1 7746288 0 0 /dev/shm
i-sampleID1 7721456 24832 0.01 /run
i-sampleID1 7746288 0 0 /sys/fs/cgroup
i-sampleID2 122220572 3596240 0.03 /
i-sampleID2 7720628 0 0 /dev
i-sampleID2 7746280 8 0.01 /dev/shm
i-sampleID2 7532872 213416 0.03 /run
i-sampleID2 7746288 0 0 /sys/fs/cgroup
i-sampleID2 81554964 16283404 0.17 /sit
i-sampleID2 83340832 14497536 0.15 /uat
i-sampleID2 1549260 0 0 /run/user/1000
i-sampleID3 123983664 1833148 0.02 /
i-sampleID3 7720980 0 0 /dev
i-sampleID3 7746288 0 0 /dev/shm
i-sampleID3 7721448 24840 0.01 /run
i-sampleID3 7746288 0 0 /sys/fs/cgroup

 

AWS Re:Invent 2017 – what’s out so far

What a week it’s been for AWS customers. In just the last 5 days we’ve already seen a huge number of product releases, including:

Amazon Sumerian: With Sumerian, you can construct an interactive 3D scene without any programming experience, test it in the browser, and publish it as a website that is immediately available to users. Product details can be found at https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-amazon-sumerian-preview/

Amazon MQ: Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Amazon MQ works with your existing applications and services without the need to manage, operate, or maintain your own messaging system. See Jeff’s blog post here: https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/

Amazon EC2 Bare Metal Instances: Amazon EC2 Bare Metal instances provide your applications with direct access to the processor and memory of the underlying server. These instances are ideal for workloads that require access to hardware feature sets (such as Intel VT-x), or for applications that need to run in non-virtualized environments for licensing or support requirements. For info on getting into the preview, visit https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-amazon-ec2-bare-metal-instances-preview/

PrivateLink for Customers/Partners: We announced that customers can now use AWS PrivateLink to access third party SaaS applications from their Virtual Private Cloud (VPC) without exposing their VPC to the public Internet. Customers can also use AWS PrivateLink to connect services across different accounts and VPCs within their own organizations, significantly simplifying their internal network architecture. See details at https://aws.amazon.com/about-aws/whats-new/2017/11/aws-privatelink-now-available-for-customer-and-partner-services/ and Kloud will be blogging about this much more in the coming weeks.

Amazon GuardDuty: Amazon GuardDuty is a threat detection service that gives you a more accurate and easy way to continuously monitor and protect your AWS accounts and workloads. With a few clicks in the AWS Management Console, GuardDuty begins analyzing AWS data across all your AWS accounts, integrating threat intelligence feeds, anomaly detection, and machine learning for more actionable threat detection in an easy to use, pay as you go cloud security service. Again, Jeff’s done a great article: https://aws.amazon.com/blogs/aws/amazon-guardduty-continuous-security-monitoring-threat-detection/

Now that Andy Jassy’s keynote has just finished, we now have a bunch more:

AWS Fargate: containers as a service where a customer no longer needs to manage the underlying EC2 instances. See blog post https://aws.amazon.com/blogs/aws/aws-fargate/

Elastic Kubernetes Service: the same level of integration we’ve come to expect from ECS, but running Kubernetes. For more details, see https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/

Aurora Serverless: Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis. More details can be found at https://aws.amazon.com/blogs/aws/in-the-works-amazon-aurora-serverless/

Amazon Rekognition Video: Amazon Rekognition Video is a new video analysis service feature that brings scalable computer vision analysis to your S3-stored video, as well as live video streams. See Jeff’s blog here: https://aws.amazon.com/blogs/aws/launch-welcoming-amazon-rekognition-video-service/

Amazon Neptune: a fast and reliable graph database service that makes it easy to gain insights from relationships among your highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Jeff’s blog can be found at https://aws.amazon.com/blogs/aws/amazon-neptune-a-fully-managed-graph-database-service/

AWS DeepLens: a new video camera that runs deep learning models directly on the device, out in the field. You can use it to build cool apps while getting hands-on experience with AI, IoT, and serverless computing. AWS DeepLens combines leading-edge hardware and sophisticated on-board software, and lets you make use of AWS Greengrass, AWS Lambda, and other AWS AI and infrastructure services in your app. See here for the latest blog post https://aws.amazon.com/blogs/aws/deeplens/

This is by no means a complete list of everything released, but just a glimpse of what’s come out so far. Stay tuned to our blog for detailed deep dives into some of these services.

Getting Started with AWS Direct Connect

In this blog post we review the process for implementing AWS Direct Connect for an AWS account. AWS Direct Connect is a dedicated network connection between your network and one of the AWS Direct Connect locations (such as Sydney), ranging in speed from 50 Mbps all the way up to 10 Gbps for bandwidth-intensive workloads. This dedicated connection can be partitioned into multiple network interfaces, allowing a single Direct Connect service to access both public resources (such as S3) and private resources such as virtual machines within your VPC, while maintaining network separation.


 

Benefits of Direct Connect

The main benefit of an AWS Direct Connect is that it provides a private connection to your AWS environment, while reducing the cost associated with bandwidth-heavy workloads. This is because of the reduced cost per gigabyte across a Direct Connect when compared with standard Internet bandwidth ($0.042 per GB outbound for Direct Connect vs $0.14 per GB outbound for Internet*). It also provides more predictable performance than direct internet connectivity due to it being a dedicated connection.

Hosted Connection vs Virtual Interface

When provisioning an AWS Direct Connect, the first question you need to answer is whether you want a Virtual Interface or a Hosted Connection. A Hosted Connection is a connection provided by a Direct Connect APN Partner, which allows for the provisioning of a sub-1 Gbps connection without the need to physically run new/additional cables. This solution is the only way of getting a service with less than 1 Gbps of allocated bandwidth. A Virtual Interface service is available in 1 Gbps or 10 Gbps versions, and can either be self-provisioned or procured via an AWS APN Partner (such as Telstra). For the purposes of this blog post, we will focus on procuring a service from an APN Partner as this is the most common method.

Setup Process for Direct Connect

 

Create a Virtual Private Gateway

In order to use an AWS Direct Connect, we need somewhere within our VPC to connect it to, which is where a Virtual Private Gateway (VPG) comes in. The VPG will provide an ingress/egress point in our VPC. To create a VPG:

  • From the AWS Management Console, select “VPC” and select “Virtual Private Gateway” from the left-hand side.


  • Click “Create Virtual Private Gateway”
  • On the next screen provide a logical name for the new VPG. Leave the ASN set to “Amazon default ASN” and click “Create Virtual Private Gateway”

Now that the VPG is created, it needs to be attached to a VPC, which is as simple as selecting “Attach to VPC” from the actions menu. Your VPG is now attached to the VPC and ready to accept a connection.

Request Service from APN Direct Connect Partner

As we will be procuring our Direct Connect service from an APN partner, the first step is to find and procure a service. For our example, we will use the Telstra Cloud Gateway product which will allow us to connect our AWS environment, back to our Telstra IP Network as shown in the diagram below.


For further details on procuring a Telstra Cloud Gateway, visit https://www.telstra.com.au/business-enterprise/solutions/network-services/networks/cloud-gateway#getting-started

Accept your Hosted Connection

Accepting our new connection is as simple as browsing to the “Direct Connect” section of the AWS Management Console and selecting “Connections” from the left-hand pane. Once there, select “I understand that Direct Connect port charges apply once I click Accept This Connection” and then choose “Accept Connection”.

Once you have successfully accepted the connection, the console will show it as accepted.

Create Virtual Interface

Now that we have a valid connection, we can provision our Virtual Interfaces, which we will use to route traffic back to our network. We can use our Virtual Interface in one of two ways:

  • to create a private virtual interface to connect to our VPC.
  • to create a public virtual interface to connect to AWS services that aren’t in a VPC (such as S3 or DynamoDB).

Given that most organisations purchase Direct Connects to access resources they have within their VPC, for this example we will provision a Private Virtual Interface. To do this, we need to make sure that in addition to the details of our Virtual Private Gateway (that we provisioned above), we also have the following information, which was provided to us by our APN Partner:

  • a unique VLAN tag that’s not in use on the AWS Direct Connection for another virtual interface. This number should be between 1 and 4094.
  • a BGP ASN number which is used to identify your network routes. This number should be between 64512 and 65534.

To provision our Private Virtual Interface, from “Direct Connect -> Connections” we select our connection and choose “Actions, Create Virtual Interface”.

At the next screen, we need to select “Private – A private virtual Interface should be used to access an Amazon VPC using private IP Addresses” and then under “Define Your New Private Virtual Interface”:

  • for “Virtual Interface Name”, we enter a logical name for the virtual interface.
  • for “Virtual Interface Owner”, we select “My AWS Account”.
  • for “Connection To”, we select “Virtual Private Gateway”, then from the dropdown select the VPG we created earlier.
  • for “VLAN”, we enter the ID number provided to us by our APN Partner, which should be between 1 and 4094.
  • we select “IPv4” as our address family.
  • check “Auto-generate peer IPs”.
  • for “BGP ASN”, enter the ASN provided to us by our APN Partner, which should be between 64512 and 65534.
  • check “Auto-generate BGP key” unless keys have been provided to us.

Once we have completed the form, click “Continue”

Verify Connectivity

Now that we have established our Virtual Interface, it’s time for us to verify that it’s working. To do this, simply log in to an EC2 instance within the VPC and perform a traceroute (or tracert if the instance is running Windows) back to an on-premise device, ensuring that an associated Security Group allows outbound ICMP.

Notes & Tips

From our experience, when designing or implementing an AWS Direct Connect solution, there are a couple of things you want to remember:

  • You can only have a maximum of 50 virtual interfaces per Direct Connect connection. If you require more, you will need to provision more than one service.
  • There is a limit of 100 routes per BGP session on a private Virtual Interface. If you require more, either leverage summary routes or static routing.
  • Don’t forget to monitor the status of your Direct Connects. This can be done via AWS CloudWatch and Lambda functions. See https://github.com/awslabs/aws-dx-monitor for an example, or the minimal sketch after this list.
  • Direct Connects are physical services and as such require maintenance from time to time. Keep an eye on your Personal Health Dashboard for any scheduled maintenance activities.
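
As a starting point for that monitoring idea, here is a minimal polling sketch against the AWS/DX CloudWatch namespace; the connection ID is a placeholder:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/DX',
    MetricName='ConnectionState',  # 1 = up, 0 = down
    Dimensions=[{'Name': 'ConnectionId', 'Value': 'dxcon-xxxxxxxx'}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(minutes=15),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Minimum'],
)

if any(point['Minimum'] < 1 for point in stats['Datapoints']):
    print('Direct Connect was down at some point in the last 15 minutes')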

Customer Example

An example of a good use case for AWS Direct Connect would be Bradnam’s Windows and Doors. Bradnam’s had a requirement to connect their corporate network into their newly created VPC within the Amazon AWS environment. Due to the need for reliability, performance predictability and secure communication, AWS Direct Connect was the logical solution. The Direct Connect was provided by the Telstra Cloud Gateway solution, which allowed it to be connected directly into their existing corporate network without any additional hardware or on-premise equipment. Bradnam’s AWS environment now simply appears on their Next IP network as a logical site, with the ability to scale the bandwidth between 50 Mbps and 500 Mbps as their business needs change.

* prices quoted are in USD for the AWS Sydney region and taken from the AWS website on November 20th, 2017

Recursive Lambda

In this blog post we explore a recursive pattern for AWS Lambda. This pattern allows us to tackle potentially long running tasks such as ones requiring processing multiple items in the input or tracking the progress of a long running task.
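
The core of the pattern fits in a few lines: the handler performs a bounded slice of the work, then asynchronously re-invokes itself with whatever remains. A minimal sketch, assuming a hypothetical process_item() helper:

import json

import boto3

lambda_client = boto3.client('lambda')
BATCH_SIZE = 10

def process_item(item):
    pass  # hypothetical per-item work

def lambda_handler(event, context):
    items = event.get('items', [])
    for item in items[:BATCH_SIZE]:
        process_item(item)

    remaining = items[BATCH_SIZE:]
    if remaining:
        # Recurse: fire-and-forget invocation of this same function
        # with the unprocessed remainder as the new event.
        lambda_client.invoke(
            FunctionName=context.function_name,
            InvocationType='Event',
            Payload=json.dumps({'items': remaining}),
        )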


Certificate Management using PowerShell and Lambda Functions

Certificate Management

1. Why Certificate Management is required.

Certificates installed on client machines are one of the critical resources in the client’s infrastructure. Monitoring certificates is critical to any company willing to successfully provide a Certificate Management service. The process of manually reporting certificate details is tedious and time consuming, so it is better to automate it.

The following document will explain the steps to configure AWS services to provide certificate management for customers with AWS hosted infrastructure.

2. Solution Requirements

Mentioned below are the requirements that should be fulfilled to provide proper certificate management services.

  • The Solution should scan all instances for installed Certificates
  • The solution should be able to generate Reports which include the following information
    • Instance ID of the machine the certificate is installed on
    • Subject of the Certificate
    • Days till expiration
  • The solution should be able to generate an alert if the “days to expire” value falls below 30 days.
  • The alert should be able to create a ticket which can later be worked on to rectify the situation

Note: The requirements for certificate management are prone to change based on the requirements for the clients.

3. Assumptions:

The following assumptions are considered while configuring certificate management:

  • All the EC2 instances are running Windows with PowerShell available.
  • All the EC2 instances have SSM agent installed.
  • The personnel responsible for the configuration of certificate management have some understanding of IAM Roles, S3 buckets and lambda functions

4. Solution Description:

The following services and tools will be utilized to provide certificate management:

  • Windows PowerShell
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • Maintenance Windows

4.1 Windows PowerShell

PowerShell will be utilized to generate information about the certificates and the instance ID for each instance.

The script below needs to be executed on all instances to generate the certificate information.

Set-Location Cert:\LocalMachine\my #Sets the location
$CertificateDetails = get-childitem # Get all the machine certificates
$instanceId = Invoke-WebRequest -usebasicparsing -Uri http://169.254.169.254/latest/meta-data/instance-id # Gets the instance ID
foreach ($i in $CertificateDetails) # writes the output with certificate Thumbprint , Subject, Instance ID and Expiration Date
{ 
Write-host "Thumbprint=" $i.Thumbprint " Expiration Date="$i.NotAfter " InstanceID ="$instanceID.Content" Subject="$i.Subject 
}

The following output is generated as a result of the PowerShell script.

 

Thumbprint= ****************** Expiration Date= 1/1/2040 10:59:59 AM InstanceID = i-******* Subject= CN=SubjectName Blah Blah Blah
Thumbprint= ****CYXZ********** Expiration Date= 1/1/2040 10:59:59 AM InstanceID = i-******* Subject= CN=SubjectName Blah Blah Blah
Thumbprint= ****************** Expiration Date= 1/1/2040 10:59:59 AM InstanceID = i-******* Subject= CN=SubjectName Blah Blah Blah
Thumbprint= ****************** Expiration Date= 1/1/2040 10:59:59 AM InstanceID = i-******* Subject= CN=SubjectName Blah Blah Blah
Thumbprint= ****************** Expiration Date= 1/1/2040 10:59:59 AM InstanceID = i-******* Subject= CN=SubjectName Blah Blah Blah

Where * represents different values for different certificates.

4.2 AWS S3

The result of the PowerShell script will be posted to an S3 bucket for further use.

The EC2 instances will need write access to the nominated S3 bucket for certificate maintenance.

S3 Bucket Name: CertificateManagement (sample name)

4.3 AWS Lambda Functions

Lambda Functions will be used to perform the following activities.

  • Acquire the result of the PowerShell script from the S3 bucket
  • Process the data to determine the Days till expiration of all the certificates
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda Functions would need read access to the S3 bucket and access to AWS SES to send emails to recipients.

Mentioned below is the Lambda function that performs the above tasks.

import boto3
from datetime import datetime

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('CertificateManagement')
    result = ["this is the result"]
    resulthtml = ['<html><body><h1>Client Certificate Report</h1>'] # Adds heading to the email body
    resulthtml.append('<table border="1">') # Creates a table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Thumbprint</b></td><td><b>Subject</b></td><td><b>Days to Expiration</b></td></tr>')
    for file_key in mybucket.objects.all():
        body = file_key.get()['Body'].read().decode('utf-8')
        output_lines = body.splitlines() # splits data into lines
        for line in output_lines:
            try:
                output_word = line.split() # splits every line into words
                cnstr = " ".join(output_word[10:]) # joins the subject words back into one string
                dt = datetime.strptime(output_word[4], "%m/%d/%Y") # converts the date string into a date
                difference = dt - datetime.now() # difference between the expiration date and the current date
                days = str(difference).split()[0] # the first word of e.g. "8035 days, 1:02:03" is the day count
                result.append("Certificate with '{}' will expire in '{}'".format(cnstr, difference)) # shown when testing in the Lambda console
                resulthtml.append('<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>'.format(output_word[9], output_word[1], cnstr, days)) # for the HTML email to be sent
            except Exception as e:
                print("skipping a line that could not be parsed: ", e)
    resulthtml.append('</table></body></html>')
    for i in result: # prints the result in the testing window in the Lambda console
        print(i)
    myhtmllist = ''.join(resulthtml)
    textbody = "this is a text body default"
    sender = "syed.naqvi@kloud.com.au"
    recipient = "syed.naqvi@kloud.com.au"
    awsregion = "us-east-1"
    subject = "Certificate Update list"
    charset = "UTF-8"
    client = boto3.client('ses', region_name=awsregion)
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': [
                    recipient,
                ],
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': charset,
                        'Data': myhtmllist,
                    },
                    'Text': {
                        'Charset': charset,
                        'Data': textbody,
                    },
                },
                'Subject': {
                    'Charset': charset,
                    'Data': subject,
                },
            },
            Source=sender,
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print("Error: ", e)
    else:
        print("Email sent!")

4.4 AWS IAM Roles

Roles will be used to grant

  • AWS S3 write access to all the EC2 instances, as they will submit the output of the PowerShell script to the S3 bucket
  • AWS SES access to Lambda Functions to send emails to relevant recipients.

4.5 AWS SES

Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email.( Ref. From https://aws.amazon.com/ses/).

We would be utilizing AWS SES to generate emails using AWS lambda.

The configuration of the Lambda functions can be modified to send emails to a distribution group to provide certificate reporting, or to send emails to a ticketing system in order to provide alerting and ticket creation in case a certificate expiration date crosses a configured threshold.

5. Solution Work Flow

Mentioned below is the workflow for the tasks.


6. Solution Configuration

6.1 Configure IAM Roles

The following Roles should be configured

  • IAM role for Lambda Function.
  • IAM role for EC2 instances for S3 bucket access

6.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above, the following policy should be created and attached to the IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501474857000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3BucketName/*"
            ]
        },
        {
            "Sid": "Stmt1501474895000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

6.1.2 Role for EC2 Instance

All EC2 instances should have access to store the PowerShell output in the S3 bucket.

To accomplish the above, the following policy should be assigned to the EC2 roles:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501475224000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::CertificateMaintenance"
            ]
        }
    ]
}

6.2 Configure Maintenance Window.

The following tasks need to be performed for the maintenance window

  • Register a Run Command with Run-PowerShell Script using the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement

Maintenance Window Ref : 

http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html

6.3 Configure Lambda Function:

The following tasks need to be performed for the Lambda Function

  • Create a blank Lambda function with the S3 put event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select run time Python 3.6
  • Copy and paste the lambda function mentioned in section 4.3

6.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.

 7. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

report

8. Next Steps:

We can modify the script to only report on certificates expiring in 45 days or less.

We can also use SNS instead of SES to send out SMS to relevant teams as well.
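
For illustration, the SNS variant is only a few lines; a minimal sketch with a placeholder phone number and message:

import boto3

sns = boto3.client('sns', region_name='us-east-1')

sns.publish(
    PhoneNumber='+61400000000',  # placeholder recipient
    Message='Certificate CN=SubjectName on i-xxxxxxxx expires in 28 days',
)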

That will be covered in another blog.