A tale of two products (don’t expect Dickens)

At Re:Invent, and just after, AWS released several new products, including AWS FSx Windows and AWS Backup. Both of these products were of interest to me for various reasons, so I thought I’d give them a try. None of this testing was under production conditions, but the following are my experiences. Note: both products are currently available in only a small number of regions.

AWS FSx Windows


Pros:

  • Easy setup (by itself)
  • Fully compatible Windows file server
  • DFS support
  • Has backups
  • Works as expected


Cons:

  • Requires AWS Microsoft AD in each VPC
  • Can’t change file share size
  • Some features can only be changed from CLI
  • Throughput can only be changed through restore
  • Minimum share size is 300GB

First out of the box, and released at Re:Invent, is AWS FSx Windows. AWS Elastic File System has been around for a while now and works nicely for providing a managed NFS share. Great for Linux, not so good for Windows. Your Windows sharing options are now enhanced with AWS FSx Windows, an AWS managed Windows file server running the Windows Server Datacenter edition. When you go to do the setup, there are only a few options, so it should be pretty easy, right? Well, yes and no. Actually configuring FSx Windows is easy, but before you do that, you need to have an AWS Microsoft AD directory service (not just an EC2 instance running AD) in the VPC in which you are launching FSx Windows. If you’re running Windows-based workloads, you’ll likely have a Windows admin, so this shouldn’t be too hard to configure and tie into your regular AD. OK, so I’ve got my AWS Microsoft AD, what’s next? Well, jump into the FSx Windows console, enter the size, throughput (default 8MB/s), backup retention and window, and maintenance window. That’s it. You now have a Windows file share you can use for all your Windows file sharing goodness.

FSx Windows FS Creation

But, what’s the catch? It can’t be that easy? Well, mostly it is, but there are some gotchas. With EFS, you don’t specify a size. You just use space and Amazon bills you for what you use. If you throw a ton of data on the share, all good. It just grows. With FSx Windows, you have to specify a size, minimum 300GB, at creation. Right now, there is no option to grow the share. If you run out of space, you’ll need to look at DFS (fully supported): create a new FSx Windows share and use your own DFS server to manage merging the two shares into a single namespace. While on the topic of DFS, FSx Windows is single-AZ only. If you want redundancy, you’ll need to create a second share and keep the data in sync. AWS has a document on using DFS for a multi-AZ scenario: https://docs.aws.amazon.com/fsx/latest/WindowsGuide/multi-az-deployments.html

What other issues are there? Well, a semi-major one and a minor one. The major one is that there is currently no easy way to change the throughput, or at least none that I’ve found. When doing a restore, you can choose the throughput, so you can restore with a new throughput, blow away the old share and then map to the new one. Not terrible, but a bit painful. The minor one: changes to the backup time and retention, or to the scheduled maintenance window, can only be done via the CLI. It would be nice to have those options in the console, but it’s not really a huge deal.
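For reference, the backup and maintenance settings live under the file system’s WindowsConfiguration and can be changed through the UpdateFileSystem API as well as the CLI. A minimal boto3 sketch; the file system ID and window values here are placeholders, not values from my testing:

```python
def build_fsx_update(file_system_id, retention_days, backup_start, maint_start):
    """Parameters for the FSx UpdateFileSystem call that changes the
    automatic backup retention and the backup/maintenance windows."""
    return {
        "FileSystemId": file_system_id,
        "WindowsConfiguration": {
            "AutomaticBackupRetentionDays": retention_days,
            "DailyAutomaticBackupStartTime": backup_start,  # UTC, HH:MM
            "WeeklyMaintenanceStartTime": maint_start,      # d:HH:MM (1 = Monday)
        },
    }

def apply_fsx_update(params):
    """Sends the update. Needs AWS credentials, so it is not called here."""
    import boto3  # deferred so the sketch runs without an AWS environment
    return boto3.client("fsx").update_file_system(**params)

params = build_fsx_update("fs-0123456789abcdef0", 14, "02:00", "1:03:00")
```

The equivalent CLI call is `aws fsx update-file-system` with the same structure passed as `--windows-configuration`.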

And there you have it. An AWS managed Windows File Server. Having to rely on an AWS Microsoft AD in each VPC is a bit of a pain, but not terrible. Overall, I think this is a great option. Once it rolls out to your region, it’s definitely worth investigating if you currently have EC2 instances running as Windows file servers.

AWS Backup


Pros:

  • Easy EFS backup
  • RDS & Dynamo DB backup


Cons:

  • No EC2 backup (EBS isn’t the same)
  • Dashboard is fairly basic

Last year AWS launched their first foray into managed backups with Data Lifecycle Manager (DLM). This was very basic, helping with the scheduling and lifecycle management of EBS snapshots. The recent announcement of AWS Backup sounded like a big step forward in AWS’s backup offering. In some ways it is, but in others it is still very lacking. I’ll start with the good, because while these features are great, they are relatively quick to cover. For RDS and DynamoDB, AWS Backup expands on what is already offered, allowing retention past the traditional 35-day mark. The big surprise, and a much needed feature, was support for EFS backups. Previously, you had to roll your own, or use an AWS Answer provided by AWS to do an EFS-to-EFS backup. It worked, but was messy. This option makes it really easy to back up an EFS volume. Just configure it into a Backup Plan and let AWS do the work! There’s not a lot to say about this, but it’s big, and may be the main reason people include AWS Backup in their data protection strategy.
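As a sketch of what “configure it into a Backup Plan” looks like programmatically, here is a hedged boto3 example; the plan name, vault name, schedule and retention are illustrative assumptions, not values from the post:

```python
def build_efs_backup_plan(vault_name, schedule, retention_days):
    """Body for AWS Backup's CreateBackupPlan API: one daily rule that
    stores recovery points in a vault and expires them after a while."""
    return {
        "BackupPlanName": "efs-daily",
        "Rules": [{
            "RuleName": "daily-efs",
            "TargetBackupVaultName": vault_name,
            "ScheduleExpression": schedule,  # cron expression, UTC
            "Lifecycle": {"DeleteAfterDays": retention_days},
        }],
    }

def create_plan(plan):
    """Creates the plan. Needs AWS credentials, so it is not called here.
    A follow-up create_backup_selection() call assigns the EFS file
    system ARN to the plan so AWS Backup knows what to protect."""
    import boto3  # deferred so the sketch runs without an AWS environment
    return boto3.client("backup").create_backup_plan(BackupPlan=plan)

plan = build_efs_backup_plan("Default", "cron(0 17 * * ? *)", 35)
```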

AWS Backup - Dashboard01

Now for the bad: there is no EC2 backup or restore. But wait! What about the EBS backup and restore option? That’s OK, but really, how many people spin up EBS volumes and only want to protect the volume? You don’t. You spin up an EC2 instance (with attached volumes) and want to protect the instance. Sadly, this is where AWS Backup falls down and products like Cloud Protection Manager from N2W Software shine. When you have an issue that requires restoring an EC2 instance from backup, it’s generally not a calm situation with plenty of time to stop and think. There’s a disaster of some form and you need that server back now! You don’t want to have to restore each EBS volume, launch an instance from the root volume, make sure you know which volumes were attached in which order and with which device names, and have all the details of security groups, IPs, etc. You just want to hit the one button that says “restore my instance” and be done. That’s what tools like CPM give you, and it’s what is sorely lacking in AWS Backup.


So what’s my wrap-up of this “tale of two products”? For FSx Windows: if you have a need for a Windows file server, wait until it comes to your region and go for it. AWS Backup has a place, especially if you have important data in EFS, but it’s no replacement for a proper backup solution. Most likely it will be implemented in a hybrid arrangement alongside another backup product.

Note: AWS Backup also manages backups of AWS Storage Gateway. I didn’t have one to test, so I won’t comment.

Backups? Doesn’t Amazon handle that?

For many, the cloud is a magical place where servers just appear and your cloud provider looks after everything, or, if they at least have a concept of the servers, they just assume that the provider will also back them up. Lots of people never bothered to think about protection in a VMware environment, so why start now?

Unfortunately, while your cloud provider probably supplies the tools, you still need to do the configuration and management. In this blog, I won’t talk in specifics about tools, but I’ll start with discussing the concepts around backups. I’ll be discussing these concepts in relation to AWS, but the IaaS elements are roughly the same in Azure too.

Oh crap! I didn’t mean to blow away that server!
This is the time when you really need to find out if your backups are configured correctly, and by that I don’t mean “are they working?”, but are they backing up the right servers, at the right times, with the right frequency? There are two big buzz terms when designing backup policies: Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

RTO is the “how quickly can I get my data back?” question. This is the question that governs how you back up your data. AWS natively has snapshots for volumes, plus backup options for RDS, DynamoDB, Aurora, and Redshift. They also have Data Lifecycle Manager (DLM) for volume snapshots. There are all sorts of options in this space: using the native tools; rolling your own with Lambda, Step Functions, SSM, etc.; or backup products from vendors, using their own methods or better frontends around the AWS APIs. While tools like DLM and Lambda functions are cheap and may back up well, restoring with them may not be as easy, or as fast.

RPO is the “how much data can I afford to lose?” question. For things like app servers, or even file servers, your standard nightly backup is often sufficient. For infrastructure like database servers, some logging servers, or time-sensitive data, more frequent backups are often required. Often it’s a mix of the two, e.g. a full nightly backup of the database and hourly archive log backups, or even writing the archive logs to a secondary location, such as S3, upon completion.
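The arithmetic behind RPO is simple: your worst-case data loss is the gap back to the most recent protection point, so adding a more frequent backup type shrinks it. A quick sketch:

```python
def worst_case_data_loss_minutes(*backup_intervals_hours):
    """Worst-case data loss (RPO): everything since the most recent
    protection point, i.e. the shortest interval between backups of
    any type that protects the data."""
    return min(backup_intervals_hours) * 60

# Nightly fulls alone: up to a day of data lost.
nightly_only = worst_case_data_loss_minutes(24)
# Nightly fulls plus hourly archive-log backups: at most an hour.
with_logs = worst_case_data_loss_minutes(24, 1)
```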

RTO and RPO are vital to your DR, or operational recovery. They answer the questions of how fast you will get your infrastructure back and to what point in time your data will be restored. The next set of questions concern historical data recovery. This is a mixture of questions around regulations, business processes, and cost. If your industry has certain legal requirements, those need to be allowed for. The longer you keep data, the more it’s going to cost, and how you keep your data will also have an impact on cost. Is it snapshots, is it EBS, is it in some virtualised dedupe device, or S3/Glacier?

In summary, backing up data is still the customer’s responsibility, even for S3. If you are going to back up your data, it is worth taking the time to plan it properly and regularly review that plan. Think of it as business insurance, along the lines of: if I lost this data, could the business survive?

Disk Space Reporting through Lambda Functions - Windows Servers

1. Solution Objective:

The solution provides a detailed report on hard disk space for all the Windows EC2 instances in the AWS environment.


2. Requirements:

Mentioned below are the requirements that the solution should fulfil.

  • Gather information related to all mount points on all the Windows EC2 instances in the environment.
  • Generate a cumulative report based on all instances in the environment.

3. Assumptions:

The following assumptions have been made:

  • All the EC2 instances have the SSM agent installed.
  • The personnel responsible for the configuration have some understanding of IAM roles, S3 buckets and Lambda functions.

4. Solution Description:

The following components will be utilized to generate the report:

  • PowerShell scripts
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • AWS Systems Manager Maintenance Windows

4.1      PowerShell Script

A PowerShell script will be utilized to generate information about the instance and the mount points’ space utilization.
The script below needs to be executed on all Windows EC2 instances to generate the mount point information (the first line retrieves the instance ID from the EC2 instance metadata endpoint):

$instanceId = (Invoke-WebRequest -Uri "http://169.254.169.254/latest/meta-data/instance-id" -UseBasicParsing).Content
$instanceId
Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID,Size,FreeSpace,VolumeName | Format-Table -AutoSize

4.2      AWS S3

The output of the PowerShell script will be posted to an S3 bucket for further use.
The EC2 instances will need write access to the nominated S3 bucket.
S3 Bucket Name: diskspacewindows (sample name)

4.3      AWS Lambda Functions

Lambda Functions will be used to perform the following activities.

  • Acquire the result of the PowerShell script from the S3 bucket
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda Functions would need read access to the S3 bucket and access to AWS SES to send emails to recipients.
The Lambda function below performs the tasks mentioned above. (This is a cleaned-up version; the parsing assumes the first line of each Run Command output file is the instance ID, followed by one line per drive.)

import boto3

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('diskspacewindows')  # bucket holding the Run Command output
    resulthtml = ['<html><body><h1>Report : Hard disk Space Client Name</h1>']  # heading for the email body
    resulthtml.append('<table border="1">')  # creates a table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Drive Letter</b></td>'
                      '<td><b>FreeSpace</b></td><td><b>Total Space</b></td></tr>')
    for s3object in mybucket.objects.all():
        if 'stdout' in s3object.key:  # only process the Run Command stdout files
            body = s3object.get()['Body'].read().decode('utf-8')
            details = body.splitlines()  # first line: instance ID, then one line per drive
            instance_id = details[0] if details else ''
            resulthtml.append('<tr><td>{}</td><td></td><td></td><td></td></tr>'.format(instance_id))
            for line in details[1:]:
                output_word = line.split()
                # drive rows look like "C: <Size> <FreeSpace> <VolumeName>"
                if len(output_word) >= 3 and output_word[0].endswith(':'):
                    resulthtml.append('<tr><td></td><td>{}</td><td>{}</td><td>{}</td></tr>'
                                      .format(output_word[0], output_word[2], output_word[1]))
    resulthtml.append('</table></body></html>')
    final = ''.join(resulthtml)
    sender = "syed.naqvi@kloud.com.au"
    recipient = "syed.naqvi@kloud.com.au"
    awsregion = "us-east-1"
    subject = "Client Hard Disk Space - Windows"
    charset = "UTF-8"
    client = boto3.client('ses', region_name=awsregion)
    try:
        client.send_email(
            Source=sender,
            Destination={'ToAddresses': [recipient]},
            Message={
                'Subject': {'Charset': charset, 'Data': subject},
                'Body': {'Html': {'Charset': charset, 'Data': final}},
            },
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print("Error: ", e)
    else:
        print("Email sent!")

4.4      AWS IAM Roles

IAM Roles will be used to grant:

  • AWS S3 write access to all the EC2 instances, as they will submit the output of the script to the S3 bucket
  • AWS SES access to the Lambda function, to send emails to relevant recipients.


Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email. (Ref: https://aws.amazon.com/ses/)
We will be utilizing AWS SES to send the report emails from AWS Lambda.
The configuration of the Lambda function can be modified to send emails to a distribution group for reporting, or to a ticketing system in order to provide alerting and ticket creation when free disk space crosses a configured threshold.

5. Solution Configuration

5.1 Configure IAM Roles

The following roles should be configured:

  • IAM role for the Lambda function
  • IAM role for EC2 instances, for S3 bucket access

5.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above, the following policy should be created and attached to the IAM role. (The S3 actions grant read access to the report bucket and the SES actions grant send rights; substitute your own bucket name in the ARNs.)

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1501474857000",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::diskspacewindows",
                    "arn:aws:s3:::diskspacewindows/*"
                ]
            },
            {
                "Sid": "Stmt1501474895000",
                "Effect": "Allow",
                "Action": [
                    "ses:SendEmail",
                    "ses:SendRawEmail"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }

5.1.2  Role for EC2 Instance

All EC2 instances should have access to store the script output in the S3 bucket.
To accomplish this, the following policy should be assigned to the EC2 role (again, substitute your own bucket name):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1501475224000",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::diskspacewindows/*"
                ]
            }
        ]
    }

5.2 Configure Maintenance Window

The following tasks need to be performed for the maintenance window

  • Register a Run Command task using the AWS-RunPowerShellScript document with the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement
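For illustration, the same registration can be scripted. A hedged boto3 sketch of the SSM RegisterTaskWithMaintenanceWindow parameters; the window ID and target ID are placeholders, and the bucket name is the sample one used earlier:

```python
def build_maintenance_window_task(window_id, target_id, bucket):
    """Parameters for SSM RegisterTaskWithMaintenanceWindow: run the
    disk report script on the registered targets via Run Command and
    send the stdout to the nominated S3 bucket."""
    report_script = [
        'Get-WmiObject Win32_LogicalDisk | '
        'Select-Object DeviceID,Size,FreeSpace,VolumeName | Format-Table -AutoSize'
    ]
    return {
        "WindowId": window_id,
        "TaskType": "RUN_COMMAND",
        "TaskArn": "AWS-RunPowerShellScript",  # SSM document for Windows scripts
        "Targets": [{"Key": "WindowTargetIds", "Values": [target_id]}],
        "TaskInvocationParameters": {
            "RunCommand": {
                "Parameters": {"commands": report_script},
                "OutputS3BucketName": bucket,  # where the Lambda looks for "stdout"
            }
        },
        "MaxConcurrency": "5",
        "MaxErrors": "1",
    }

# The dict would be passed to boto3.client("ssm")
#     .register_task_with_maintenance_window(**task); not called here,
# as it needs AWS credentials and a registered window/target.
task = build_maintenance_window_task("mw-0123456789abcdef0",
                                     "<window-target-id>", "diskspacewindows")
```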

Maintenance Window Ref : 

5.3 Configure Lambda Function

The following tasks need to be performed for the Lambda Function

  • Create a blank Lambda function with the S3 PUT event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select runtime Python 3.6
  • Copy and paste the Lambda function from section 4.3

5.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.

6. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

IT Service Management (ITSM) & Operations – Overview of the Availability Management Process


In many cases the ITSM Availability Management process is overlooked in favour of frontline processes such as incident, problem and change management. I have provided a summary of the availability management process and its significance below. I hope the information is useful for your organisation in defining and implementing the process.

Objectives of the Process

  • Availability management has to ensure that the delivered availability levels for all services comply with or exceed the agreed requirements in a cost-effective way, enabling the business to satisfy its objectives.
  • Provide a range of IT availability reporting to ensure that agreed levels of availability, reliability and maintainability are measured and monitored on an ongoing basis.
  • Create and maintain a forward-looking Availability Plan aimed at improving the overall availability of IT services and infrastructure components, to ensure existing and future business availability requirements can be satisfied.


Scope of the Process

  • Designing, implementing, measuring, managing and improving IT services and the components that are used to provide them.
  • Services and processes:
  • Business processes
  • Future business plans and requirements
  • Service objectives, current Service Operation and delivery
  • IT infrastructure, data, applications and the environment
  • Priorities of the business in relation to the services

Industry Good Practice for this Process

Availability management is part of service design, and it is one of the critical processes because the reliability of a service or component indicates how long it can perform its agreed function without interruption.

Activities – Reactive (Executed in the operational phase of the lifecycle)

  • Monitoring, measuring, analysing and reporting availability of services and components
  • Unavailability analysis
  • Expanded lifecycle of the incidents
  • Service Failure Analysis (SFA)

Activities – Proactive (Executed in the design phase of the lifecycle)

  • Identifying Vital Business Functions (VBFs)
  • Designing for availability
  • Component Failure Impact Analysis (CFIA)
  • Single Point of Failure (SPOF) analysis
  • Fault Tree Analysis (FTA)
  • Risk Analysis and Management
  • Availability Test Schemes
  • Planned and preventive maintenance
  • Production of the Projected Service Availability (PSA) document
  • Continuous reviewing and improvement



Inputs

  • Business information, organisation strategy, financial information and plans
  • Current and future requirements of IT services
  • Risk analysis
  • Business impact analysis
  • Service portfolio and service catalogue from service level management process
  • Change calendars and release management information


Outputs

  • Availability Management Information System (AMIS)
  • Availability plan
  • Availability and restore criteria
  • Reports on the availability, reliability and maintainability of services
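The reports listed above usually reduce to a handful of standard formulas. A small sketch of two of them; the figures below are illustrative, not from any real service:

```python
def availability_pct(agreed_service_hours, downtime_hours):
    """Availability (%) = (agreed service time - downtime) / agreed service time."""
    return 100.0 * (agreed_service_hours - downtime_hours) / agreed_service_hours

def mtbf_hours(uptime_hours, number_of_failures):
    """Reliability is often reported as Mean Time Between Failures."""
    return uptime_hours / number_of_failures

# A 24x7 service over a 730-hour month, down twice for one hour each:
month_availability = availability_pct(730, 2)
month_mtbf = mtbf_hours(728, 2)
```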


In summary, ITSM Availability Management measures three important aspects: how long a service can perform without interruption (reliability), how quickly a service can be restored when it has failed (maintainability), and how effectively a third-party supplier delivers their services (serviceability). These three aspects are key performance measures in ITSM availability management, and availability management has to be discussed in the design phase of IT service management. I hope you found the above information useful. Thanks!

The Present [and Future] Landscape of Data Migrations

A rite of passage for the majority of us in the tech consultancy world is being part of a medium to large scale data migration at some stage in our careers. No, I don’t mean dragging files from a PC to a USB drive, though this may very well have factored into the equation for some of us. What I’m referencing is a planned piece of work where the objective is to move an entire data set from a legacy storage system to a target storage system. Presumably, a portion of this data is actively used, so the migration usually occurs during a planned downtime period and requires a communication strategy, staging, resourcing, etc.
Yes, a lot of us can say ‘been there, done that’. And for some of us, it can seem simple when broken down as above. But what does it mean for the end user? The recurring cycle of change is never an easy one, and the impact of a data migration is often a big change. For the team delivering it, it can be just as stress-inducing: sleepless shift cycles, out-of-hours and late-night calls, and project scope creep (note: avoid being vague in work requests, especially when it comes to data migration work) are just a few of the issues that will shave years off anyone who’s unprepared for what a data migration encompasses.
Back to the end users: it’s a big change. New applications, new front-end interfaces, new operating procedures, a potential shake-up of business processes, and so on. To taper off the pain of the transition period, most teams opt, in agreement with the client, to ‘rip the Band-Aid right off’ and move an entire dataset from one system to another in one operation. Sometimes, depending on context and platforms, this is a completely seamless exercise: the end user logs in on a Monday and is mostly unaware of a switch. But whether taking this or a phased approach to the migration, there are signs in today’s technology services landscape that these operations are becoming somewhat outdated.
Data Volumes Are Climbing…
… to put it mildly. We’re in a world of Big Data, and this applies not only to global enterprises and large companies, but to mid-sized ones and even some individuals. Weekend downtimes aren’t going to be enough (or aren’t, as this BA discovered on a recent assignment), and when your data volumes are out of all proportion to the number of end users you’re transitioning (the bigger goal being, in my mind, the transformation of the user experience), you’re left with finite amounts of time to actually perform tests, gain user acceptance, and plan and strategise for mitigation and potential rollback.
Cloud platforms are not yet well optimised for effective (pain-free) migrations
Imagine you have a billing system that contains up to 100 million fixed assets (active and backlog). The requirement is to migrate all of these to a new system that is more intuitive for the accountants of your business. The app has a built-in API that supports 500 asset migrations a second. Not bad, but the migration will therefore take over two days of continuous processing to complete. Not optimal for a project, no matter how much planning goes into the delivery phase. On top of this, consider the slowing down of performance due to user access going through an API or load gateway. Not fun.
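Running the numbers on a rate-limited migration like this is straightforward, and worth doing early, because intuition about throughput is often off:

```python
def migration_duration_days(record_count, records_per_second):
    """End-to-end duration of a rate-limited, record-by-record migration,
    assuming the API sustains its maximum rate around the clock."""
    return record_count / records_per_second / 86400  # 86400 seconds per day

# 100 million assets through an API capped at 500 records/second:
duration = migration_duration_days(100_000_000, 500)  # roughly 2.3 days, non-stop
```

And that is the best case; any contention with live user traffic on the same API stretches it further.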
What’s the Alternative?
In a world where we’re looking to make technology and solution delivery faster and more efficient, the future of data migration may, in fact, be headed in the opposite direction.
Rather than phasing your migrations over outage windows of days or weeks, or from weekend-to-weekend, why not stretch this out to months even?
Now, before anyone cries ‘exorbitant billables’, I’m not suggesting that the migration project itself be drawn out for an overly long period of time (months, a year).
No, the idea is not to keep a project team around for the unforeseen, yet to-be-expected, challenges mentioned above. Rather, as tech and business consultants, a possible alternative is to redirect our efforts towards quality of service: focusing on the change management aspects of end-user adoption of a new platform and its associated processes, and on the capability of a given company’s managed IT services to not only support the change but in fact incorporate the migration as a standard service offering.
The Bright(er) Future for Data Migrations
How can managed services support a data migration without specialisation in, say, PowerShell scripting, or prior experience performing a migration via a tool or otherwise? Nowadays, vendors are developing migration tools that are highly user-friendly and purposed for ongoing enterprise use. They are doing this to shift the view that a relationship with a solution provider for projects such as this should simply be a one-off, and that the capability of the migration software matters more than the capability of the resource performing the migration (still important, but ‘technical skills’ in this space are becoming more of a level playing field).
From a business consultancy angle, an opportunity to provide an improved quality of service presents itself when we look at ways to utilise our engagement and discovery skills to bridge the gaps that can often exist between managed services and an understanding of the business’s everyday processes. A lot of this will hinge on the very data being migrated. Given time, and with full support from managed services, this can prompt positive action from a business. Data migrations as a BAU activity can become iterative and on-request: active and relevant data first, followed potentially by a ‘house-cleaning’ activity where the business effectively declutters data which it no longer needs or which is no longer relevant.
It’s early days and we’re likely still toeing the line between old data migration methodology and exploring what could be. But ultimately, enabling a client or company to be more technologically capable, starting with data migrations, is definitely worth a cent or two.

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?
Well why would you want to do that?

  • On-demand compute:
    • You may not have access to a physical server, or your computer isn’t powerful enough to handle huge data processing, or you simply do not want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute:
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it’s super cheap.
  • It’s super cool doing it with PowerShell.

There are other reasons to use Azure Automation, but we won’t get into those details here. What we want to do is leverage PowerShell to do such things. So here it goes!
Querying against a SQL database, whether it’s in Azure or not, isn’t that complex. In fact, this part of the post is just to get us started. For this part, we’re going to do something simple, because if you want to get things done, you need the fastest way of doing it. Here’s a quick summary of the approaches I considered:

    1. Using ‘invoke-sqlcmd2‘. This part of the blog; it’s super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object for complex SQL querying scenarios. [[ This is where the magic kicks in.. Part 2 of this series ]]

How do we get this done quickly?
For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL database called ‘myDB‘, on an Azure SQL server ‘mytestAzureSQL.database.windows.net‘.
  • It’s a simple database containing a single table ‘test_table’. This table has three columns (Id, Name, Age) and contains only two records.
  • We’ve set up ‘Allow Azure Services‘ access on this database in the firewall rules. Here’s how to do that, just in case:
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON‘.
  • We have an Azure Automation account set up. We’ll be using that to test our code.

Now let’s get this up and running.
Start by creating two Automation variables, one containing the SQL server name and the other containing the database name.
Then create an Automation credential object to store your SQL login username and password. You need this, as you definitely should not be storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.

Now you need to import the ‘invoke-sqlcmd2‘ module into the Automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now, from the main menu of the Automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘. Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook; to do that, click on the pencil icon in the top menu to get into the editing pane.
Inside the pane, paste the following code (I’ll go through the details, don’t worry).

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"
#Import the SQL Server Name from the Automation variable.
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"
#Import the SQL DB name from the Automation variable.
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet ‘Get-AutomationPSCredential‘ is to retrieve the automation credential object we just created.
    • The next two cmdlets ‘Get-AutomationVariable‘ are to retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let’s query our database. To do that, paste the code below after the section above.

#Query to execute
 $Query = "select * from Test_Table"
 "----- Test Result BEGIN "
 # Invoke to Azure SQL DB
 invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt
 "`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is very simple, but you can put whatever you want in there.
    • We’ve executed the cmdlet ‘invoke-sqlcmd2‘. If you noticed, we didn’t have to import the module we just installed; Azure Automation takes care of that upon every execution.
    • In the cmdlet’s parameter set, we specified the SQL server (retrieved from the Automation variable), the database name (an Automation variable too), the credential object we imported from Azure Automation, and finally the query variable we created. An optional switch parameter ‘-Encrypt’ can be used to encrypt the connection to the SQL server.

Let’s run the code and look at the output!
From the editing pane, click on ‘Test Pane‘ in the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.
The code initially goes through the following stages of execution:

  • Queuing
  • Starting
  • Running
  • Completed

Now what’s the final result? Look at the black box and you should see something like this.

----- Test Result BEGIN
Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25
 ----- Test Result END

Pretty sweet, right? The output we’re getting here is a collection of objects of type ‘DataRow‘. If you capture the result of this query in a variable, you can start to do some cool stuff with it, like
$Result.Count or
$Result.Age or even
$Result | Where-Object -FilterScript {$PSItem.Age -gt 10}
Now imagine the much more complex things you could do with this.
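As a quick sketch (assuming the same Automation variables and credential created earlier, and the illustrative Test_Table data), capturing the result in a variable and working with it might look like this:

 #Capture the query result (a collection of DataRow objects).
 $Result = Invoke-Sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "select * from Test_Table" -Encrypt
 #Count the rows returned.
 $Result.Count
 #Pull a single column out of every row.
 $Result.Age
 #Filter the rows like any other PowerShell objects.
 $Result | Where-Object -FilterScript {$PSItem.Age -gt 10}

Because the rows come back as objects, anything in the PowerShell pipeline (sorting, grouping, exporting to CSV) works on them directly.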
Quick Hint:

If you include the ‘-Debug‘ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

Major Incident Management – Inputs and Outputs


A major incident is an incident that results in significant disruption to the business and demands a response beyond the routine incident management process.
The Major Incident Management Process applies globally to all Customers and covers Incidents resulting in a service outage. The process is triggered by Incidents raised directly by Users or referred from the Event Management Process, which the Service Desk classifies as Major Incidents in the Incident Management Process.

Entry Criteria

The Major Incident Management Process commences when an Incident has been identified and recorded as a Major Incident in the Incident Management Tool by the Service Desk.


The inputs to the Major Incident Management Process include:

  • Incidents identified as Major Incidents in the Incident Management Process, whether raised directly by Users or referred from the Event Management Process
  • Knowledge Management Database (known errors, existing resolutions, accepted workarounds)
  • Details of impacted Configuration Items (CIs) from the Configuration Management Database (CMDB)
  • Major Incident Assignment Scripts & Organisational Major Incident Communication Process

Exit Criteria

The Major Incident Management Process is complete when:

  • The Major Incident is resolved (or self-resolved) and the Incident record is closed with a workaround or permanent resolution. In the case of a workaround, a Problem record will be raised for Root Cause Analysis and a permanent resolution.
  • The Incident is downgraded and routed to the Incident Management Process for resolution.


The expected outputs from the Major Incident Management Process are:

  • Successfully implemented Change through the Change Management Process.
  • Problem record raised in Problem Management tool for Root Cause Analysis (RCA).
  • Priority downgraded Incident routed to Incident Management Process for resolution.
  • Restored Service.
  • Updated Knowledge Base/Known Error Database.
  • Closed Incident record with accurate details of the Incident attributes and the steps taken for resolution.
  • Notification through various channels (email, call etc.) on the initiation, resolution process and closure of Major Incident to various stakeholders.
  • Updated Daily Operation Report.
  • Accurately recorded service and/or component outage details (e.g. start, end, duration, outage classification, etc.).

Key Activities

  • Major Incident identified from Incident Management Process
  • Assign Incident to Incident Resolver Team
  • Notify Resolver & Organisational Incident Management Teams
  • Service Restoration
  • Notifications and Communications
  • Conduct Investigation & Diagnosis
  • Resolution
  • SME inputs
  • Develop Resolution/Workaround
  • Resolution Type (Permanent)
  • Root Cause Analysis
  • Change Record
  • Confirmation of Status of Incident
  • Daily Operations Update
  • Closure of Incident


It is very important to establish a Major Incident Management Process in conjunction with the Incident Management, Problem Management, Change Management and Communication Management processes. It is a key ITSM process for restoring service in the event of a major incident.

Cloud Operations Model and Project Stream – Considerations


The cloud operations stream is responsible for the design and operation of the cloud model for both project and BAU activities. This stream is primarily responsible for people, process, tools and information. The model can change with the organisation’s requirements and type of business.

Aspects of the Cloud Operations Model

Below is an example of the key aspects we need to consider when defining a Cloud Operations Model.
[Image: Key aspects of the Cloud Operations Model]

Cloud Operations Stream – High Level Approach

Below is an example model for how to track a cloud program operationally.
[Image: Example model for tracking a cloud program operationally]

Cloud Operations Stream – Governance

Below is an example model for cloud operations stream governance, which can be used to guide the operations stream.
[Image: Cloud operations stream governance model]


Hope you found some of the aspects and considerations mentioned above useful. Thanks!

ITSM – Service Catalogue – Summary


  • The Service Catalogue represents a trusted record of the services provided by Information Technology (IT), its default capabilities, measures and primary means of access and provision.
  • It is the means by which we articulate WHAT we manage and measure. It is the hidden power behind how we set, and exceed, the customer’s expectations.
  • It can provide an essential medium for communication and coordination among IT and its customers, and should distinguish between Business Customers (the ones paying for the service) and End Users (the recipient of the service).
  • If the CMDB is the system of record for what IT did, then the Service Catalogue becomes the system of record for what IT does.
  • ITIL recommends the development of a Service Catalogue as the first step in the Service Level Management (SLM) process.

Why is a Service Catalogue Required?

Show the value of Information Technology (IT)

  • For IT to be fully successful, IT needs to be strategically aligned to the business and positioned as a key enabler in achieving successful outcomes for the organisation.
  • It is not enough for IT alone to consider itself successful at what it does. IT needs to provide real value that directly achieves the business outcomes the organisation wants, and should be able to deal with the ever-changing needs and demands of organisations and their customers.
  • IT should also be capable of demonstrating how it provides business value to the organisation to ensure that IT is positioned within the organisation as a core strategic asset.

To support the above, IT should develop a Service Catalogue that defines the scope, characteristics and costs of available services and products, and allows for better management of the IT environment as a whole.
The basic requirement for all this is a clear definition of the services the IT organisation provides, the components and resources that make up each service, and the associated costs for these services.
[Image: Services, their components and resources, and associated costs]

Why is a Service Catalogue Important?


Service Catalogue Types and Recommended Construction Approach

The two types of Service Catalogue are records-based and actionable.

Characteristics of Different Views


Attributes of a Service Catalogue

An effective Service Catalogue should be:

  • Constitutive – what IT does and does not do, and on what terms.
  • Actionable – provides the means by which IT and its customers coordinate and conduct business.
  • Governing – conditions and controls defined in the Service Catalogue are integrated into the service delivery processes.

An ideal Service Catalogue should have the following six attributes:

  1. User-Relevant Services – A Catalogue that users understand
  2. Accessible Service Description – Speaking the customer’s language
  3. Objective Performance Accountability Metrics – Setting clear performance goals
  4. Actionable Pricing Information – Helping customers understand their costs
  5. Consumption Management Tips – Not all costs are created equal
  6. Adoption Facilitation Mechanisms – Reading like a best seller

Service Catalogue – Format

  • A Service Catalogue can be as simple as a list in a Word or Excel document, or as comprehensive as a formal Service Catalogue built with purpose-designed tools.
  • The catalogue should contain items that are visible to the customer, plus additional information used by the service delivery team to ensure smooth delivery. Items visible to the customer include:
    • A description of the service
    • Disclosure of any prerequisites or required services
    • Approval levels
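As a minimal sketch (the service name and values below are purely illustrative), a single entry in a simple Word or Excel based catalogue might look like this:

Service: Corporate Email
Description: Provision and support of staff mailboxes, including mobile access
Prerequisites: Active Directory account
Approval Level: Line manager
Target Availability: 99.9% during business hours
Cost: Per-mailbox monthly charge

Even a basic entry like this gives customers a clear view of what the service is, what it needs, and what it costs, while the delivery team can hold richer detail behind it.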


Service Catalogue – Links to the Different Elements within IT Service Lifecycle


Tips & Challenges with Implementing a Service Catalogue


Recommended Implementation Approach

Phase One – Build foundation

  • Start simple
  • List all Service Management services with their key attributes, from the customer’s perspective
  • Make the Service Catalogue available via selected media

Phase Two – Deploy

  • Detailed and comprehensive
  • Expand the attributes of the most popular Service Management services, from the customer’s perspective
  • Service Catalogue document control
  • Marketing of the Service Catalogue

Phase Three – BAU

  • Review, improve and expand Service Catalogue

Hope you found these useful.

ITSM – Continual Service Improvement (CSI) – All you need to know

Why Continual Service Improvement (CSI) is Required?

  • The goal of Continual Service Improvement (CSI) is to align and realign IT Services to changing business needs by identifying and implementing improvements to the IT services that support Business Processes.
  • The perspective of CSI on improvement is the business perspective of service quality, even though CSI aims to improve process effectiveness, efficiency and cost effectiveness of the IT processes through the whole life-cycle.
  • To manage improvement, CSI should clearly define what should be controlled and measured.

It is also important to understand the difference between Continuous and Continual.

What are the Main Objectives of Continual Service Improvement (CSI)?

Continual Service Improvement (CSI) – Approach

Continual Service Improvement (CSI) – 7 Step Process

The CSI measurement and improvement process has 7 steps. These steps help to define the corrective action plan.
[Image: CSI 7-step process]
Continual Service Improvement (CSI) – Challenges, CSFs & Risks

Like all programs, CSI has its challenges, critical success factors and risks; some of these are listed below. It is absolutely important to have senior management’s buy-in to implement the CSI program.
[Image: CSI challenges, critical success factors and risks]
Please remember: transforming IT is a process/journey, not an event.
Hope these are useful.
