Update FSTAB on multiple EC2 instances using Run Commands

Scenario:

  • Customer is running multiple Linux EC2 instances in AWS.
  • Customer reports that instances are losing mount points after a reboot.

Solution:

The resolution requires updating the fstab file on all the instances.

fstab is a system configuration file on Linux and other Unix-like operating systems that contains information about major filesystems on the system. It takes its name from "file systems table", and it is located in the /etc directory (ref: http://www.linfo.org/etc_fstab.html).

In order to update files on multiple servers, we will utilize the following:

  • The echo command with the append operator (>>) to update the text file through the shell
  • SSM Run Command to execute the command on multiple machines

Note: All the concerned EC2 instances should have the SSM agent configured.

Step 1: Log in to the AWS Console and click EC2


Step 2: Click on Run Command in the Systems Manager Services section


Step 3: Click on Run Command in the main panel


Step 4: Select Run-Shell Script


Step 5: Select Targets 

Note: Targets can be selected manually, or we can use tags to perform the same activity on multiple instances with a matching tag.


Step 6:

Enter the following information:

  • Execute on: specifies the number of targets on which the command can be executed concurrently. Concurrently running commands save execution time.
  • Stop After: 1 errors
  • Timeout (seconds): leave the default 600 seconds

Step 7: Select the commands section and paste the following commands

echo '10.x.x.x:/ /share2 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab
echo '10.x.x.x:/ /share1 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab
echo '192.x.x.x:/ /backup1 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab
echo '172.x.x.x:/ /backup2 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0' >> /etc/fstab            

 

Step 8: Click on Run

Step 9: Click on the command ID to get updates regarding execution success or failure

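The same Run Command can also be issued programmatically. A minimal boto3 sketch assuming the AWS-RunShellScript document and the console settings used above; the instance ID, entry list, and the build_send_command_kwargs helper are placeholders of mine:

```python
# Hypothetical NFS exports -- replace with your own.
FSTAB_ENTRIES = [
    "10.x.x.x:/ /share2 nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0",
]

def build_send_command_kwargs(instance_ids, entries):
    """Build the arguments for ssm.send_command, mirroring the console
    settings above (AWS-RunShellScript, stop after 1 error, 600 s timeout)."""
    return {
        "InstanceIds": instance_ids,
        "DocumentName": "AWS-RunShellScript",
        "Parameters": {"commands": ["echo '{}' >> /etc/fstab".format(e) for e in entries]},
        "MaxErrors": "1",
        "TimeoutSeconds": 600,
    }

if __name__ == "__main__":
    import boto3
    ssm = boto3.client("ssm")
    response = ssm.send_command(**build_send_command_kwargs(["i-0123456789abcdef0"], FSTAB_ENTRIES))
    # The command ID is what Step 9 tracks in the console
    print(response["Command"]["CommandId"])
```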

Use Azure Health to track active incidents in your Subscriptions

siliconvalve

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The graphs below are a little old, but you can see a clear spike on the left of each graph, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue…

View original post 268 more words

Disk Space Reporting through Lambda Functions – Linux servers

1. Solution Objective:

The solution provides a detailed report on hard disk space for all the Linux EC2 instances in the AWS environment.

2. Requirements:

 

Mentioned below are the requirements the solution should be able to fulfil.

  • Gather information related to all mount points in all the Linux EC2 instances in the environment.
  • Generate a cumulative report based on all instances in the environment.

3. Assumptions:

The following assumptions are considered

  • All the EC2 instances have SSM agent installed.
  • The personnel responsible for the configuration have some understanding of IAM Roles, S3 buckets and lambda functions

4. Solution Description:

The following services and tools will be utilized to generate the report:

  • Linux shell scripts
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • SSM Maintenance Windows

4.1      Linux Shell Script.

Linux Shell Script will be utilized to generate information about the instance and the mount points space utilization.

The script below needs to be executed on all Linux EC2 instances to generate the mount point information.

curl http://169.254.169.254/latest/meta-data/instance-id # Prints the instance ID
printf "\n" # Adds a line break
df # Provides details of the mount points
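For reference, the stdout this script produces is the instance ID on the first line, the df header on the second, and one line per mount point after that. An illustrative parser (the parse_df_report helper is my own, not part of the solution) showing the fields the Lambda function later consumes:

```python
def parse_df_report(text):
    """Parse the Run Command stdout produced by the script above.

    Line 1 is the instance ID, line 2 the df header, and each remaining
    line describes one mount point. Returns a list of dicts."""
    lines = text.splitlines()
    instance_id = lines[0].strip()
    rows = []
    for line in lines[2:]:          # skip the df header line
        fields = line.split()
        if len(fields) < 6:
            continue                # ignore malformed/blank lines
        rows.append({
            "instance_id": instance_id,
            "filesystem": fields[0],
            "used": int(fields[2]),
            "available": int(fields[3]),
            "use_pct": fields[4],
            "mounted_on": fields[5],
        })
    return rows
```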

4.2      AWS S3

The result of the shell script will be posted to an S3 bucket for further use.

The EC2 instances will need write access to the nominated S3 bucket to store the script output.

S3 Bucket Name: eomreport ( sample name )

4.3      AWS Lambda Functions

Lambda Functions will be used to perform the following activities.

  • Acquire the result of the Shell script from the S3 bucket
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda Functions would need read access to the S3 bucket and access to AWS SES to send emails to recipients.

Mentioned below is the Lambda function that performs the above tasks.

import boto3

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('eomreport')
    resulthtml = ["<h1>Report : Hard Disk Space</h1>"] # Adds a heading to the email body
    resulthtml.append('<html><body><table border="1">') # Creates a table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Available Space</b></td><td><b>Used Space</b></td><td><b>Use %</b></td><td><b>Mounted on</b></td></tr>')
    for file_key in mybucket.objects.all():
        if "stdout" in file_key.key: # Only process the Run Command stdout objects
            body = file_key.get()['Body'].read().decode('utf-8')
            complete = body.splitlines() # Splits the data into lines
            instance_id = complete[0] # The first line is the instance ID
            details = complete[2:] # The lines after the df header, one per mount point
            for line in details:
                output_word = line.split()
                resulthtml.append('<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>'.format(
                    instance_id, output_word[3], output_word[2], output_word[4], output_word[5])) # One table row per mount point
    resulthtml.append('</table></body></html>')
    final = "".join(resulthtml)
    print(final)
    sender = "email@email.com"
    recipient = "email@email.com"
    awsregion = "us-east-1"
    subject = "Disk Space Report"
    charset = "UTF-8"
    client = boto3.client('ses', region_name=awsregion)
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': [
                    recipient,
                ],
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': charset,
                        'Data': final,
                    },
                    'Text': {
                        'Charset': charset,
                        'Data': 'Disk space report (HTML version attached)',
                    },
                },
                'Subject': {
                    'Charset': charset,
                    'Data': subject,
                },
            },
            Source=sender,
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print("Error: ", e)
    else:
        print("Email sent!")

 

4.4 AWS IAM Roles

Roles will be used to grant:

  • AWS S3 write access to all the EC2 instances, as they will submit the output of the shell script to the S3 bucket
  • AWS SES access to the Lambda function to send emails to relevant recipients.

4.5 AWS SES

Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email.( Ref. From https://aws.amazon.com/ses/).

We will be utilizing AWS SES to generate emails using AWS Lambda.

The configuration of the Lambda function can be modified to send emails to a distribution group to provide disk space reporting, or it can be used to send emails to a ticketing system in order to provide alerting and ticket creation in case disk utilization crosses a configured threshold.

5. Solution Configuration

5.1 Configure IAM Roles

The following roles should be configured:

  • IAM role for the Lambda function
  • IAM role for EC2 instances for S3 bucket access

5.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above the following policy should be created and attached to the IAM Role

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501474857000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3BucketName/*"
            ]
        },
        {
            "Sid": "Stmt1501474895000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

5.1.2 Role for EC2 Instances

All EC2 instances should have access to store the shell output in the S3 bucket.

To accomplish the above, the following policy should be assigned to the EC2 roles.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501475224000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::eomreport/*"
            ]
        }
    ]
}

5.2 Configure Maintenance Window

The following tasks need to be performed for the maintenance window

  • Register a Run Command with Run-Shell Script using the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement

Maintenance Window Ref : 

http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html

5.3 Configure Lambda Function:

The following tasks need to be performed for the Lambda function:

  • Create a blank Lambda function with the S3 PUT event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select runtime Python 3.6
  • Copy and paste the Lambda function mentioned in section 4.3

5.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.

6. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

InstanceID Available Space Used Space Use % Mounted on
i-sampleID1 123984208 1832604 0.02 /
i-sampleID1 7720980 0 0 /dev
i-sampleID1 7746288 0 0 /dev/shm
i-sampleID1 7721456 24832 0.01 /run
i-sampleID1 7746288 0 0 /sys/fs/cgroup
i-sampleID2 122220572 3596240 0.03 /
i-sampleID2 7720628 0 0 /dev
i-sampleID2 7746280 8 0.01 /dev/shm
i-sampleID2 7532872 213416 0.03 /run
i-sampleID2 7746288 0 0 /sys/fs/cgroup
i-sampleID2 81554964 16283404 0.17 /sit
i-sampleID2 83340832 14497536 0.15 /uat
i-sampleID2 1549260 0 0 /run/user/1000
i-sampleID3 123983664 1833148 0.02 /
i-sampleID3 7720980 0 0 /dev
i-sampleID3 7746288 0 0 /dev/shm
i-sampleID3 7721448 24840 0.01 /run
i-sampleID3 7746288 0 0 /sys/fs/cgroup

 

VPC (Virtual Private Cloud) Configuration

Introduction

This blog is Part 01 of a 02-part series on custom VPC configurations.

Part 01 discusses the following scenario:

  • Creating a VPC with 02 subnets (Public and Private)
  • Creating a bastion host server in the public subnet
  • Allowing the bastion host to connect to the servers in the private subnet using RDP

Part 02 will discuss the following:

  • Configuring NAT instances
  • Configuring VPC peering
  • Configuring VPC Flow Logs

What is a VPC?

A VPC can be described as a logical datacenter where AWS resources can be deployed.

The logical datacenter can be connected to your physical datacenter through VPN or Direct Connect. Details: https://blog.kloud.com.au/2014/04/10/quickly-connect-to-your-aws-vpc-via-vpn/

This section deals with the following tasks.

  • Creating the VPC
  • Creating Subnets
  • Configuring Subnets for internet access

1 Creating the VPC

The following steps should be followed for configuring the VPC. We can use the wizard to create a VPC, but this document will focus on the detailed method where every configuration parameter is defined by the user.

Step 01.01 : Logon to the AWS console

Step 01.02 : Click on VPC

Step 01.03 : Select Your VPCs


Step 01.04 : Select Create VPC


Step 01.05: Enter the following details in the Create VPC option

  • Enter the details of the Name Tag
  • Enter the CIDR Block. Keep in mind that the block size cannot be greater than /16.

Step 01.06: Click on Yes, Create


We have now created a VPC. The following resources are also created automatically

  • Routing table for the VPC
  • Default VPC Security Group
  • Network ACL for the VPC

Default Routing Table (Route Table Id = rtb-ab1cc9d3)

Check the routing table below for the VPC. If you check the routes of the route table, you see the following:

  • Destination: 10.0.0.0/16
  • Target: Local
  • Status: Active
  • Propagated: No


This route ensures that all the subnets in the VPC are able to connect with each other. All the subnets created in the VPC are assigned to the default route table, therefore it's best practice not to change the default route table. For any route modification, a new route table can be created and assigned to subnets specifically.

Default Network Access Control List (NACL Id = acl-ded45ca7)

Mentioned below is the snapshot of the default NACL created when the VPC was created.


Default security group for the VPC (Group Id = sg-5c088122)

Mentioned below is the snapshot of the default Security Group created when the VPC was created.


Now we need to create subnets, keeping in mind that the considered scenario needs 02 subnets (01 Public and 01 Private).

2 Creating Subnets

Step 02.01 : Go to the VPC Dashboard and select Subnets


Step 02.02 : Click on Create Subnet


Step 02.03: Enter the following details in the Create Subnet window

  • Name Tag ("IPv4 CIDR Block – Availability Zone"): 10.0.1.0/24 – us-east-1a
  • VPC: Select the newly created VPC = vpc-cd54beb4 | MyVPC
  • Availability Zone: us-east-1a
  • IPv4 CIDR Block: 10.0.1.0/24

Step 02.04: Click on Yes,Create


Now we have created subnet 10.0.1.0/24

We will use the same steps to create another subnet, 10.0.2.0/24, in availability zone us-east-1b.

  • Name Tag ("IPv4 CIDR Block – Availability Zone"): 10.0.2.0/24 – us-east-1b
  • VPC: Select the newly created VPC = vpc-cd54beb4 | MyVPC
  • Availability Zone: us-east-1b
  • IPv4 CIDR Block: 10.0.2.0/24


3 Configuring subnets

Now that we have 02 subnets, we need to configure 10.0.1.0/24 as the public subnet and 10.0.2.0/24 as the private subnet. The following tasks need to be performed for the activity:

  • Internet Gateway creation and configuration
  • Route table Creation and configuration
  • Auto Assign Public IP Configuration.

3.1 Internet Gateway Creation and Configuration (IGW Config)

Internet gateways, as the name suggests, provide access to the internet. They are attached to a VPC, and the routing table is configured to direct all internet-bound traffic to the internet gateway.

Mentioned below are the steps for creating and configuring the internet gateway.

Step 03.01.01 : Select Internet Gateways  from the VPC dashboard and click on Create Internet Gateway


Step 03.01.02 : Enter the name tag and click on Yes,Create


The internet gateway is created but not attached to any VPC (Internet Gateway Id = igw-90c467f6).

Step 03.01.03: Select the Internet Gateway and click on Attach to VPC


Step 03.01.04 : Select your VPC and click on Yes,Attach


We have now attached the Internet Gateway to the VPC. Now we need to configure the route tables for internet access.

3.2 Route Table Creation and Configuration (RTBL Config)

A default route table (with Id rtb-ab1cc9d3) was created when the VPC was created. It's best practice to create a separate route table for internet access.

Step 03.02.01 : Click on the Route Table section in the VPC Dashboard and click Create Route table


Step 03.02.02: Enter the following details in the Create Route Table window and click on Yes, Create

  • Name tag: Relevant Name = InternetAccessRouteTbl
  • VPC: Your VPC = vpc-cd54beb4 | MyVPC


Step 03.02.03: Select the newly created route table (Route Table Id = rtb-3b78ad43 | InternetAccessRouteTbl), click Routes, and then Edit


Step 03.02.04: Click on Add Another Route


Step 03.02.05: Enter the following values in the route and click on Save

  • Destination: 0.0.0.0/0
  • Target: Your Internet Gateway Id = igw-90c467f6 (in my case)


The route table needs subnet associations. The subnets which we want to make public should be associated with the route table. In our case, we will associate subnet 10.0.1.0/24 with the route table.

Step 03.02.06: Click on Subnet Associations


You should be able to see the message “You do not have any subnet associations”

Step 03.02.07: Click on Edit

Step 03.02.08: Select the subnet you want to configure as a public subnet, in our case 10.0.1.0/24, and click on Save


3.3 Auto-Assign Public IP Configuration

By default, neither of the created subnets (10.0.1.0/24 and 10.0.2.0/24) will assign public IP addresses to the instances deployed in them.

We need to configure the public subnet ( 10.0.1.0/24 ) to provide Public IPs automatically.

Step 03.03.01: Go to the Subnets section in the VPC dashboard.

Step 03.03.02: Select the Public Subnet

Step 03.03.03: Click on Subnet Actions

Step 03.03.04: Select Modify auto-assign IP Settings


Step 03.03.05: Check Enable Auto-assign Public IPv4 Addresses in the Modify Auto-Assign IP Settings window and click on Save


After this configuration, any EC2 instance deployed in the 10.0.1.0/24 subnet will be assigned a public IP.
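The console steps in sections 1 to 3 can also be scripted. A hedged boto3 sketch assuming the same region, availability zones, and CIDRs used above; the derive_subnets helper is illustrative and not part of the walkthrough:

```python
import ipaddress

def derive_subnets(vpc_cidr, new_prefix, count):
    """Derive the first `count` subnets of the given prefix length from
    the VPC CIDR (e.g. 10.0.0.0/16 -> 10.0.0.0/24, 10.0.1.0/24, ...)."""
    net = ipaddress.ip_network(vpc_cidr)
    return [str(s) for s in list(net.subnets(new_prefix=new_prefix))[:count]]

if __name__ == "__main__":
    import boto3
    ec2 = boto3.client("ec2")
    # Create the VPC and the two subnets from the walkthrough
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    pub = ec2.create_subnet(VpcId=vpc, CidrBlock="10.0.1.0/24",
                            AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]
    priv = ec2.create_subnet(VpcId=vpc, CidrBlock="10.0.2.0/24",
                             AvailabilityZone="us-east-1b")["Subnet"]["SubnetId"]
    # Internet gateway plus a dedicated route table for internet access
    igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId=vpc)
    rtb = ec2.create_route_table(VpcId=vpc)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw)
    ec2.associate_route_table(RouteTableId=rtb, SubnetId=pub)
    # Auto-assign public IPv4 addresses in the public subnet (Step 03.03)
    ec2.modify_subnet_attribute(SubnetId=pub, MapPublicIpOnLaunch={"Value": True})
```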

4 Security

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When we decide whether to allow traffic to reach an instance, we evaluate all the rules from all the security groups that are associated with the instance.

We will create 02 security groups:

  • Public-Private (will contain access rules from the Public Subnet to the Private Subnet)
  • Public-Internet (will contain the ports allowed from the internet to the Public Subnet)

Step 4.1  : Click on Security Groups in the Network and Security section

Step 4.2 : Click on Create Security Group


Step 4.3: Enter the following details in the Create Security Group window and click on Create

  • Security Group Name: Public-Private
  • Description: Rules between the Private subnet and the Public subnet
  • VPC: Select the VPC we created in the exercise
  • Click on Add Rules to add the following rules
    • Type = RDP, Protocol = TCP, Port Range = 3389, Source = Custom: 10.0.1.0/24
    • Type = All ICMP – IPv4, Protocol = ICMP, Port Range = 0 – 65535, Source = Custom: 10.0.1.0/24


Step 4.4: Enter the following details in the Create Security Group window and click on Create

  • Security Group Name: Public-Internet
  • Description: Rules between the Public subnet and the internet
  • VPC: Select the VPC we created in the exercise
  • Click on Add Rules to add the following rules
    • Type = RDP, Protocol = TCP, Port Range = 3389, Source = Anywhere
    • Type = All ICMP – IPv4, Protocol = ICMP, Port Range = 0 – 65535, Source = Anywhere

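The two security groups can likewise be created programmatically. A sketch assuming the VPC ID from earlier; the rdp_and_icmp_rules helper is my own shorthand for the rule sets above:

```python
def rdp_and_icmp_rules(source_cidr=None):
    """Build the ingress rule list used above: RDP (TCP 3389) plus all ICMP.
    With source_cidr=None the rules are open to anywhere (0.0.0.0/0);
    -1 for the ICMP port fields means all ICMP types/codes."""
    cidr = source_cidr or "0.0.0.0/0"
    return [
        {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
         "IpRanges": [{"CidrIp": cidr}]},
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": cidr}]},
    ]

if __name__ == "__main__":
    import boto3
    ec2 = boto3.client("ec2")
    # Hypothetical VPC ID -- replace with the VPC created above.
    sg = ec2.create_security_group(GroupName="Public-Private",
                                   Description="Rules between Private and Public subnets",
                                   VpcId="vpc-cd54beb4")["GroupId"]
    ec2.authorize_security_group_ingress(GroupId=sg,
                                         IpPermissions=rdp_and_icmp_rules("10.0.1.0/24"))
```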

5 EC2 Installation

Now we will deploy 02 EC2 instances: one instance named PrivateInstance in the 10.0.2.0/24 subnet, and one instance named PublicInstance in the 10.0.1.0/24 subnet.

Public Instance Configuration:

  • Instance Name: PublicInstance
  • Network: MyVPC
  • Subnet: 10.0.1.0/24
  • Auto-Assign Public IP: Use subnet setting (enabled)
  • Security Group: Public-Internet security group
  • IAM Role: As per requirement

Private Instance Configuration:

  • Instance Name: PrivateInstance
  • Network: MyVPC
  • Subnet: 10.0.2.0/24
  • Auto-Assign Public IP: Use subnet setting (disabled)
  • Security Group: Public-Private security group
  • IAM Role: As per requirement

Once the deployment of the EC2 instances is complete, you can connect to the PublicInstance through RDP and from there connect further to the PrivateInstance.
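For completeness, the two instances can be launched with boto3 as sketched below; the AMI, subnet, and security group IDs are placeholders, and the instance_launch_kwargs helper is mine:

```python
def instance_launch_kwargs(subnet_id, sg_id, ami_id, public_ip=None):
    """Build run_instances arguments for one of the instances above.
    public_ip=None keeps the subnet's auto-assign setting
    ("Use subnet setting"); pass True/False to override it."""
    nic = {"DeviceIndex": 0, "SubnetId": subnet_id, "Groups": [sg_id]}
    if public_ip is not None:
        nic["AssociatePublicIpAddress"] = public_ip
    return {
        "ImageId": ami_id,
        "InstanceType": "t2.micro",
        "MinCount": 1,
        "MaxCount": 1,
        "NetworkInterfaces": [nic],
    }

if __name__ == "__main__":
    import boto3
    ec2 = boto3.client("ec2")
    # Hypothetical IDs -- replace with your AMI, public subnet, and SG.
    ec2.run_instances(**instance_launch_kwargs("subnet-0123456789abcdef0",
                                               "sg-0123456789abcdef0",
                                               "ami-0123456789abcdef0"))
```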

Patching EC2 through SSM

 

Why Patch Manager?

AWS SSM Patch Manager is an automated tool that helps you simplify your operating system patching process, including selecting the patches you want to deploy, the timing for patch roll-outs, controlling instance reboots, and many other tasks. You can define auto-approval rules for patches with an added ability to black-list or white-list specific patches, control how the patches are deployed on the target instances (e.g. stop services before applying the patch), and schedule the automatic roll out through maintenance windows.

These capabilities help you automate your patch maintenance process to save you time and reduce the risk of non-compliance. All capabilities of EC2 Systems Manager, including Patch Manager, are available free of charge so you only pay for the resources you manage.

The article can be used to configure patching for instances hosted in AWS Platform.

You will need the necessary prerequisite knowledge of the EC2 and IAM sections of AWS. If so, then please read on.

The configuration has three major sections

  • EC2 instance configuration for patching
  • Default Patching Baseline Configuration
  • Maintenance Window configuration.

1  Instance Configuration

We will start with the First section which is configuring the Instances to be patched. This requires the following tasks.

  1. Create Amazon EC2 Role for patching with two policies attached
    • AmazonEC2RoleForSSM
    • AmazonSSMFullAccess
  2. Assign Roles to the EC2 Instances
  3. Configure Tags to ensure patching in groups.

Important: The machines to be patched should be able to contact Windows Update Services. The article below contains the URLs which should be accessible for proper patch management.

https://technet.microsoft.com/en-us/library/cc708605(v=ws.10).aspx

Mentioned below are the detailed steps for the creation of an IAM role for Instances to be Patched using Patch Manager.

Step 1: Select IAM —–> Roles and Click on Create New Role


Step 2: Select Role Type —-> Amazon EC2


Step 3: Under Attach Policy Select the following and Click Next

  • AmazonEC2RoleForSSM
  • AmazonSSMFullAccess


Step 4: Enter the Role Name and Select Create Role (At the bottom of the page)


Now you have gone through the first step in your patch management journey.

Instances should be configured to use the role created above (or any role which has the AmazonEC2RoleforSSM and AmazonSSMFullAccess policies attached) to ensure proper patch management.


We need to group our AWS-hosted servers, because no one in the right frame of mind wants to patch all the servers in one go.

To accomplish that we need to use Patch Groups (explained later).

For example:

We can configure Patch Manager to patch EC2 instances with Patch Group tag = PatchGroup01 on Wednesday and EC2 instances with Patch Group tag = PatchGroup02 on Friday.

To utilize patch groups, all EC2 instances should be tagged accordingly to support patch management based on Patch Groups.
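Tagging can be scripted as well. A small boto3 sketch; the instance ID is a placeholder, and note that Patch Manager matches on the literal tag key "Patch Group" (with a space):

```python
def build_patch_group_tags(group_name):
    """Build the tag set Patch Manager matches on.
    The key must be exactly "Patch Group"."""
    return [{"Key": "Patch Group", "Value": group_name}]

if __name__ == "__main__":
    import boto3
    ec2 = boto3.client("ec2")
    # Hypothetical instance ID -- replace with your own.
    ec2.create_tags(Resources=["i-0123456789abcdef0"],
                    Tags=build_patch_group_tags("PatchGroup01"))
```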

Congratulations, you have completed the first section of the configuration. Keep following; just two to go.

Default Patch Baseline configuration.

Patches are categorized using the following attributes :

  • Product Type: like Windows version, etc.
  • Classification: CriticalUpdates, SecurityUpdates, ServicePacks, UpdateRollups
  • Severity: Critical, Important, Low, etc.

Patches are prioritized based on the above factors.

A patch baseline can be used to configure the following using rules:

  • Products to be included in patching
  • Classification of patches
  • Severity of patches
  • Auto Approval Delay: time to wait (days) before a patch is automatically approved

Patch Baseline is configured as follows.

Step 01: Select EC2 —> Select Patch Baselines (under the Systems Manager Services Section)

Step 02: Click on Create Patch Baseline


Step 03: Fill in the details of the baseline and click on Create


Go to Patch Baselines and set the newly created baseline as your default.


At this point, the instances to be patched are configured and we have also configured the patch policies. In the next section we provide AWS with the when (date and time) and the what (task) of the patching cycle.

Maintenance Windows Configuration

As the name specifies, Maintenance Windows give us the option to Run Tasks on EC2 Instances on a specified schedule.

What we wish to accomplish with Maintenance Windows is to run a command (AWS-ApplyPatchBaseline), but on a given schedule and on a subset of our servers. This is where all the above configurations gel together to make patching work.

Configuring Maintenance Windows consists of the following tasks.

  • IAM role for Maintenance Windows
  • Creating the Maintenance Window itself
  • Registering Targets (Selecting servers for the activity)
  • Registering Tasks (Selecting tasks to be executed)

Mentioned below are the detailed steps for configuring all the above.

Step 01: Create a Role with the following policy attached

  • AmazonSSMMaintenanceWindowRole


Step 02: Enter the Role Name and Role Description


Step 03: Click on Role and copy the Role ARN

Step 04: Click on Edit Trust Relationships


Step 05: Add the following value under the Principal section of the JSON document as shown below

"Service": "ssm.amazonaws.com"

Step 06: Click on Update Trust Relationships (on the bottom of the page)


At this point the IAM role for the maintenance window has been configured. The next section details the configuration of the maintenance window.

Step 01: Click on EC2 and select Maintenance Windows (under the Systems Manager Shared Resources section)


Step 02: Enter the details of the maintenance window and click on Create Maintenance Window


At this point the Maintenance Window has been created. The next task is to Register Targets and Register Tasks for this maintenance window.

Step 01: Select the Maintenance Window created and click on Actions

Step 02: Select Register Targets


Step 03: Enter Owner Information and select the Tag Name and Tag Value

Step 04: Select Register Targets


At this point the targets for the maintenance window have been configured. This leaves us with the last activity in the configuration which is to register the tasks to be executed in the maintenance window.

Step 01: Select the Maintenance Window and Click on Actions

Step 02: Select Register Task


Step 03: Select AWS-ApplyPatchBaseline from the Document section


Step 04: Click on Registered targets and select the instances based on the Patch Group Tag

Step 05: Select the operation Scan or Install based on the desired function (keep in mind that an Install will result in a server restart).

Step 06: Select the MaintenanceWindowsRole

Step 07: Click on Register Tasks

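For reference, the same registration can be done with boto3's register_task_with_maintenance_window. The sketch below mirrors the console steps above (AWS-ApplyPatchBaseline as the document, Scan or Install as the operation, the MaintenanceWindowsRole as the service role); the IDs and ARN are placeholders:

```python
def build_patch_task(window_id, window_target_id, role_arn, operation="Scan"):
    """Build the arguments for ssm.register_task_with_maintenance_window,
    mirroring the console steps above."""
    return {
        "WindowId": window_id,
        "Targets": [{"Key": "WindowTargetIds", "Values": [window_target_id]}],
        "TaskArn": "AWS-ApplyPatchBaseline",
        "TaskType": "RUN_COMMAND",
        "ServiceRoleArn": role_arn,
        "MaxConcurrency": "2",
        "MaxErrors": "1",
        "TaskInvocationParameters": {
            "RunCommand": {"Parameters": {"Operation": [operation]}}
        },
    }

if __name__ == "__main__":
    import boto3
    ssm = boto3.client("ssm")
    # Hypothetical window/target IDs and role ARN -- replace with your own.
    ssm.register_task_with_maintenance_window(
        **build_patch_task("mw-0123456789abcdef0",
                           "0123abcd-0123-0123-0123-0123456789ab",
                           "arn:aws:iam::111122223333:role/MaintenanceWindowsRole"))
```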

After completing the configuration, the registered task will run on the registered targets based on the schedule specified in the maintenance window.

The status of the Maintenance Window can be seen in the History section (as Shown below)


Hope this guide gets you through the initial patching configuration for your EC2 instances in AWS.

The configuration can also be done using the CLI; let's leave that for another blog for now.

Thanks for Reading.

Service Strategy – How do you become Instrumental?

There is a well-known concept developed by Ronald Coase around organisational boundaries being determined by transaction costs.

This concept stated that organisations are faced with three decisions.

To make, buy or rent.

In some scenarios, it makes sense for organisations to own and operate assets, or conduct activities in-house, however, at other times you could seek alternatives from the open market.

When seeking alternatives from the open markets the key factor can be the transaction cost.

The transaction cost is the overall cost of the economic exchange between the supplier and the customer, with the objective of ensuring that commitments are fulfilled.

In the context of Service Strategy, why is it important to understand this concept?

In the current shift towards cloud computing, the service transaction has now drastically been minimised in cost. It’s critical to understand this.

I often think of this like catching a fish, let me explain.

It takes time

There are costs that will be incurred when attempting to catch that fish.

In Service Strategy the costs can be calculated as the time taken to find and select qualified suppliers for goods or services that are required.

The right bait

Proven fishermen will always ask the question "what bait works here?". So when putting together your service strategy, make sure you know who you are considering transacting with. Knowing the track record of your provider is crucial; ask around and you will save yourself a lot of time. I'm quite dumbfounded when I hear of customers 'still' transacting with suppliers that no longer provide any real value, with more time spent in the excuse basket than providing the outcome that the customer is after, particularly when it comes to delivering services.

Certain bait attracts certain fish

If you are considering the leap to cloud computing, find suppliers who are proven in this area. It is not like the traditional on-premises managed service computing model. It has changed; go with providers who are working in this space and have been for a considerable period of time. For example, they get what right-sizing is and how crucial it could be to your cost model.

How many hooks and sinkers are you prepared to lose?

Governance is the costs of making sure the other party sticks to the terms of the contract and taking appropriate action if this turns out not to be the case.

Governance is fundamentally an intertwining of both leadership and management. Leadership in the sense of understanding the organisation’s vision, setting the strategy and bringing about alignment. Management in the sense of how we actually implement the strategy. Setting the budget, working out the resources required and so forth.

It’s crucial that your Cloud Enterprise Governance Framework has these qualities. For example, a policy is formally documented management expectations and intentions, which can be used to direct decisions and ensure a consistent approach when implementing leadership’s strategy. In a cloud climate where change is constant, you need to be in a position to respond with greater agility. Your governance framework must be able to move between the two.

Once again do your homework. Find out what their Cloud Computing Service Model entails. Roadmaps, framework, fundamentally what approach they take. It’s crucial that they have this in place in order to succeed.

So what does it take to get hooked?

The actual answer to how you become instrumental is to clearly understand your service transaction. The net effect will be brokering real value.

This, in turn, means that as prevailing conditions change, the boundaries of the firm are likely to contract or expand with decisions such as make, buy, or rent.

Is Service Strategy your Everest?

Essentially, strategy is deciding what to do versus what not to do.

It brings about alignment to your organisation’s vision.

In this blog, I will endeavour to cover why having a service management strategy is critical for your organisation and point you towards well-known principles that will help you scale it.

You could think of this like climbing a mountain.

It starts with your perspective.

Perspective – outlines your vision and direction.

What does Everest look like? What do you see?

In the context of service strategy, it’s how you see yourself in the market and how you differentiate yourself from the providers you engage. At Kloud we are relatively young compared with the service providers that have dominated the market in the past, yet we have a sound vision, and that vision is the inevitable shift towards cloud computing. Having this perspective allows us to clearly understand what we are about as we continue to evolve our service strategy.

What route will you take to ascend Everest?

In the context of service strategy, it’s your position.

Position – describes the decision to adopt a well-defined stance. At Kloud we believe that from a service management perspective there is a new model (consumption-based service management) when it comes to delivering value. Have a read of a recent blog I published around this shift.

https://blog.kloud.com.au/2016/04/06/consumption-based-service-management/

Your route will ultimately define the decisions you continue to take amongst the many potential paths you will be presented with.

You need a plan to make the ascent.

You can see the summit and you know the route you need to take, but now it’s time to plan your journey.

The Plan – describes the means of transitioning from ‘as is’ to ‘to be’. A plan might detail, ‘How do we offer high-value or low-cost services?’ or, ‘How do we achieve and offer our specialised services?’ You need a plan that details how you will get there; it allows everyone involved to see what’s required.

The way you climb.

In the context of service strategy, it’s your pattern.

Pattern – describes a series of consistent decisions and actions over time. Over a period of time, your climbing style will start to show. What do I mean? Well, are you risk averse? Do you climb with or without support? In Service Strategy, this could be providing services with high availability or high value; this will soon develop into your pattern, what you consistently gravitate towards.

In summary, requirements and conditions are ever changing. A service provider may begin with any one form and evolve to another.

As a service provider, you might begin with a perspective. The service provider might then decide to take on a position, articulated through company policies, capabilities and resources.

This position may then be achieved through the execution of a plan.

Once this has been achieved, the service provider may maintain its position through a series of well-understood decisions and actions over time: a pattern.

I encourage you to use all four Ps: Perspective, Position, Plan and Pattern. Move between all four as required, seeing the big picture while working through the details.

 

Exploring Cloud Adoption

At Kloud we get incredible opportunities to partner with global organisations.

Listening to and observing one of our established clients has inspired me to write about the change programme around Office 365 and how we can expand the approach.

The change management programme, in terms of adoption, is based on a user’s journey through the Office 365 toolset: a step-by-step approach incorporating Exchange Online and SharePoint Online, building a workspace for each department which effectively becomes the reference point from which you work. In short, it targets adoption champions throughout particular business units and keeps them abreast of the Office 365 updates.

We have seen results through this approach, but it’s here that I want to drop a different thought into the equation.

It’s a natural tendency to develop a well-trodden path, but what happens when you have a disparate group of individuals who don’t take this route?

Taking people on a well thought out journey could almost take the excitement out of the trip. Why not foster an environment that will allow individuals to explore the Office 365 toolset?

What do I mean? Office 365 is continuously evolving its services. Just look at the roadmap and you will see what I mean.

Think about the last trip you took. Did you perhaps decide to stop the car and explore the countryside? You could well have stepped off the beaten track and explored the unknown, finding that place that no one told you about.

Rigorous change control programmes that try to control how people within the organisation adopt the services could well stifle the opportunity that the Cloud is creating.

Here are some key thoughts when considering an explorative approach to adoption.

Pop-up Adoption

The pop-up store is a remarkable concept in today’s society. It is a fantastic way to introduce a new thought, concept or idea along the well-trodden route, with the ability to create a short-term experience that could prompt exploring. There is a fantastic opportunity to create these pop-up adoption initiatives throughout the organisation to inspire alternate ways of being productive.

Your posture on Change Management

In cloud computing, we find that cloud suppliers release a wave of new features designed to change and enhance the productivity of the consumer. The consumer loves all the additional functionality that comes with the solution, yet most of the time does not understand the full extent of the changes. Herein lies the challenge: how does the conventional IT department communicate the change, train up end users and ultimately manage change? We get ready to run all our time-intensive checks so that people know exactly what is about to change, and when and how it will change. Challenging, to say the least. Now ramp this up – to be specific, multiply it by a factor of a hundred.

It’s time-consuming and nearly impossible to manage in the current cloud climate.

Here is a thought that will take you to the extreme opposite.

Enable everything. Yes, that’s right, let the full power of the Cloud permeate through your organisation. The more time you spend trying to control what gets released as opposed to what doesn’t, the more time you waste that could be better used to create value. Enable it and let your early adopters benefit and explore – they could even coach the late bloomers – then let the remaining people go on a journey.

Expose

Use what you have to expose people to what they could adopt.

Yammer is a great tool in the Office 365 toolset that will expose people to different opportunities in being productive. Exposing different toolsets to people causes them to explore what they don’t know.

The mobile phone will help

People have started to explore technology. The mobile age has stimulated this approach. The slogan “There is an app for that” could well be the tag line that has encouraged end users to explore the tools that are available to them.

The Adoption Map

Whenever one is navigating areas unknown we tend to develop visuals that will help us make sense of the space. The Adoption Map could be a great tool to visually show your people what is actually out there. Instead of being a mere passenger on the journey, they could help plot out their own coordinates. When people start learning for themselves it’s potentially the most powerful recipe for success.

This approach could alter the role of change managers to become change enablers. Instead of controlling change you are learning and innovating from it.

Adoption simply means to choose. We have a great opportunity here: by presenting people with as many services as possible from which to choose, you can potentially foster an environment that creates exploration – the ability to learn for themselves and ultimately to explore within the confines of a boundary that is ever expanding.

Cloud Enterprise Governance

Organisational strategies are currently being forged around SaaS, IaaS and PaaS in cloud computing.

What will the new Cloud Enterprise Governance framework look like?

Lack of Cloud Enterprise Governance can result in organisations not achieving strategically set directives, as well as a loss of consumer confidence.

This is challenging considering how fast cloud computing is currently accelerating; by the same token, it also provides a fantastic opportunity to get it right.

Cloud computing is putting pressure on traditional governance’s ability to adapt and change.

It is critical that governance is addressed from a holistic point of view.

What does Governance mean?

Governance means ensuring that policies and strategy are actually implemented and correctly followed: your ability to clearly define ownership, auditing, measuring, reporting and resolving any issues that are identified as a result.

Here are some key thoughts to consider when putting your Cloud Enterprise Governance framework together under a cloud model.

Leading and Managing intertwined

This is where Governance gets interesting: sticking to the definition, Governance is fundamentally an intertwining of both leadership and management.

Leadership in the sense of understanding the organisation’s vision, setting the strategy and bringing about alignment.

Management in the sense of how we actually implement the strategy. Setting the budget, working out the resources required and so forth.

It’s crucial that your Cloud Enterprise Governance Framework has these qualities. For example, a policy is formally documented management expectations and intentions, which can be used to direct decisions and ensure a consistent approach when implementing leadership’s strategy. In a cloud climate where change is constant, you need to be in a position to respond with greater agility. It’s crucial that your governance framework has the ability to move between the two.

Your Evolvement (not involvement – that’s a given)

What are the new requirements that your Cloud model has outlined in its roadmap?

How do you best position your organisation to maximise the offerings?

What has been launched? Hopefully, you are not caught out here, depending on your position on change management.

What’s currently being developed? What’s currently being rolled out? These are important questions that need to be fleshed out to ensure your evolution in cloud computing.

Cloud Enterprise Guiding Coalition

Experience in establishing Cloud Enterprise Governance has shown that a carefully thought-out group of individuals (leaders and managers) is critical in ensuring Cloud Service success.

A guiding coalition team does not have to be composed solely of senior managers. A single champion cannot achieve success alone.

The end goal of establishing this team is not about who has the power, but more about experience, respect, versatility and trust. This team should be backed by an influential business or IT sponsor.

As programme buy-in grows, and throughout the programme itself as more and more successes are achieved and benefits realised, this team should be expanded to involve a wider range of people and functions.

The types of questions you need to be asking are, ‘Do we have the right people on board?’ and, if not, ‘Who should we have on board?’

Something to Remember

Do:

  • Lead and manage, intertwine the two at all levels.
  • Ensure you have a position of constant evolution.

Don’t:

  • Forsake building a Cloud Enterprise Governance framework.
  • Neglect clear markers on how you are assessing your operational performance; ensure that these indicators are accurately represented in the Cloud Enterprise Governance Coalition.

The Service Coin

We were recently invited by a customer to share how we deliver managed services at Kloud, particularly around Microsoft Azure and Office 365.

During the conversation, I landed on an analogy that best articulated how we envision delivering service through the Cloud computing model.

I thought I would share that analogy.

Simply put, a service is a means of delivering value to customers, minus all the overhead of actually operating the service.

With that in mind, let’s think about the service coin in terms of value in the economy called the Cloud.

A service coin has three sides to it.

The first side of the coin can be known as the service management layer. It’s how the service will actually be delivered from a management perspective.

The framework that gets implemented, how we respond to demand, how we ensure availability: Incident Management, Change Management, Problem Management and the various key processes that ensure services are being delivered.

The second side of the coin can be known as the operational layer. It’s how the service gets engineered.

We spend a lot of time at Kloud investing in this layer, instilling operational run books among other initiatives that provide comprehensive steps on how to successfully run the service. They serve as an open book on how we engineer the service within the complexities of our customers’ environments.

At this layer, we will find checklists, configuration documentation, patching cycles and so forth. It reads like a novel – well, more of a technical manual, to be precise. Our talented engineers love it.

Last but not least, there is a third side of the coin – it’s actually the side on which it all rests – and this can be known as the adoption layer. It’s where we ultimately take people on a journey in how to use the technology. I think it’s here that we find a shift has taken place. People have started to explore technology. The mobile age has stimulated this shift. The slogan “There is an app for that” could well be the tag line that has encouraged end users to explore the tools that are available to them.

Yes, some require hand-holding, but on the whole, if we foster an adoption strategy that encourages people to explore what they have at their disposal, we just might take productivity to a whole new level. Yes, we might have some other challenges, but these challenges are the ones worth taking.

The adoption layer is potentially the most important layer in terms of service success; it’s what introduces the value and actually introduces the coin into the organisation’s economy, resulting in net productivity and its ultimate effect, which is value.

The coin’s ultimate purpose is as a medium of exchange. In the context of the service coin, its purpose is to ensure healthy exchange between the three sides.

One has to be prepared to spin the coin to get that perfect pattern.

Finally, the coin has an imprint; this is the part that signifies who the Coin belongs to. In our case, the Service Coin ultimately belongs to our customer. It’s their service, they entrust us to run the service, it’s of tremendous value to them. We get to hold it for the time that they allow us to.

Judging by the response from our client I think it resonated with what had always just been in his pocket. The Service Coin.