Step-by-step: Using Azure DevOps Services to deploy ARM templates with CI/CD – Part 1

In this blog, we will see how to get started with Azure DevOps for someone from an infrastructure background.

We will familiarize ourselves with deploying Azure resources with ARM templates by using Azure DevOps with Continuous Integration (CI) and Continuous Deployment (CD).

I have split this post into two parts for easier understanding:

Part 1: Creating your first project in Azure DevOps

Part 2: Enabling Continuous Integration (CI) / Continuous Deployment (CD) for the project.

This article will focus on Part 1. The things needed to make this successful include:

        1. Visual Studio (the free Community edition is fine) – you can get it from https://visualstudio.microsoft.com
        2. Azure subscription access. If you don't have a subscription, you can create a free Azure account.
        3. An account in Visual Studio. If you don't have one, create a new account by signing in at https://visualstudio.microsoft.com and enabling the Azure DevOps service.
        4. Click on Azure DevOps and select Sign in.
        5. Once you sign in with your Microsoft account, click Continue.
  1. Creating the first project in Azure DevOps: log into Azure DevOps (https://dev.azure.com) for the first time with your MSDN/Microsoft account.
    • Now, click on New project, provide a name (e.g. Firstproject) and add a description for the project.
    • Select the visibility option Private (with this setting, only you can access the content; you can grant access to people who should be able to view this project).
    • Under Firstproject, click on Repos.
    • Since the repository is empty, we need to create a new file. We can use Visual Studio to create it: click on Clone under the Visual Studio options:

      • Visual Studio will open.
      • Provide the Microsoft account credentials that you used for Azure DevOps and your Azure account.
      • The project needs to be cloned to your local disk. Click on Clone.
      • A pop-up will ask for your Azure DevOps credentials.
      • This may result in an authentication failure or a fatal error. To resolve this, follow the steps below:
      • In Visual Studio, open Team Explorer, select Manage Connections and click Connect to a Project.
      • Select your Azure DevOps user ID and provide your credentials. Your project (Firstproject) will then be listed for connection.
      • Now you will get the clone options:
      • In the Team Explorer view, click on Create a new project or solution in this repository.
      • Select Installed -> Cloud and choose Azure Resource Group.

      • Select Blank template for deployment.

    • Switch to the Solution Explorer view in Visual Studio.
    • Select AzureResourceGroup and click on azuredeploy.json.
    • Click on Resources in the JSON Outline, select Virtual Network for deployment and provide a name for the vnet, e.g. firstnetwork01 (a quick local validation sketch follows below).
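If you want to validate the template locally before committing, here is a minimal sketch using the AzureRM PowerShell cmdlets (the resource group name and file paths are examples only and assume an existing resource group in your subscription):

# Optional local check before pushing the change to the repo.
Login-AzureRmAccount
Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'Firstproject-rg' `
    -TemplateFile '.\AzureResourceGroup\azuredeploy.json' `
    -TemplateParameterFile '.\AzureResourceGroup\azuredeploy.parameters.json'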

  • At the bottom of Visual Studio, you will find an icon showing the number of changes that have been made. Click on it to commit the changes.
    • Provide a comment for the commit and select Commit All.

    • The change has been committed locally; now we need to push it to the Azure DevOps project repository. Click on Sync.

  • Click on Push to send the changes to the cloud (Azure DevOps).
  • Now, go back to the Azure DevOps portal, select your project (Firstproject) and select Repos.
  • You will find that the AzureResourceGroup you created in Visual Studio is now available.
  • Click on the azuredeploy.json file to verify its contents.
    2. Enabling deployment of the ARM template in Azure DevOps:
  • Log on to the Azure DevOps portal, open Firstproject (your project name), then click on Builds.
  • On the new page, click on New Pipeline and select “Use the visual designer to create a pipeline without YAML”.
  • Ensure your project and repository are selected and click on Continue.
  • Select “Start with an Empty Job”.
  • Click on the + item on the Agent Job.
  • In the new pane, select Deploy, click on Azure Resource Group Deployment and click Add.
  • In the left pane, select the Azure Deployment: Create Or Update Resource Group action.
  • Select your Azure subscription and click on Authorize.
  • Select your resource group in your Azure subscription and the location.
  • The template location will be Linked artifact.
  • Select your template file (azuredeploy.json) from the selection menu.
  • Select your template parameter file (azuredeploy.parameters.json) from the selection menu.
  • Set the Deployment mode to Complete (note that Complete mode removes any resources in the resource group that are not defined in the template).
  • Click Save & queue and provide a comment for the change.
  • Once it has saved, the build will run and deploy to your Azure tenant.
  • You can view the deployment logs from the Azure DevOps portal. In addition, you will receive an email (to the address used for your Azure DevOps account) with the deployment status.
  • Verify that your network (the Azure resource we added to the ARM template) has been created in your Azure tenant; see the sketch below.
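One quick way to check from PowerShell, assuming the example names used above (replace the resource group name with the one you selected in the pipeline):

# Returns the virtual network if the pipeline deployment succeeded.
Get-AzureRmVirtualNetwork -ResourceGroupName 'Firstproject-rg' -Name 'firstnetwork01'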
This concludes Part 1: creating and deploying ARM templates with Azure DevOps.

In Part 2, I will take you through enabling Continuous Integration (CI) / Continuous Deployment (CD).

Replacing your Secure FTP Server with Amazon Simple Storage Service

First published at https://nivleshc.wordpress.com

Introduction

What if I told you that you could get rid of most of your servers, however still consume the services that you rely on them for? No longer will you have to worry about ensuring the servers are up all the time, that they are regularly patched and updated. Would you be interested?

To quote Werner Vogel “No server is easier to manage than no server”.

In this blog, I will show you how you can potentially replace your secure ftp servers by using Amazon Simple Storage Service (S3). Amazon S3 provides additional benefits, for instance, lifecycle policies which can be used to automatically move older files to a cheaper storage, which could potentially save you lots of money.

Architecture

The solution is quite simple and is illustrated in the following diagram.

Replacing Secure FTP with Amazon S3 - Architecture

We will create an Amazon S3 bucket, which will be used to store files. This bucket will be private. We will then create some policies that will allow our users to access the Amazon S3 bucket, to upload/download files from it. We will be using the free version of CloudBerry Explorer for Amazon S3, to transfer the files to/from the Amazon S3 bucket. CloudBerry Explorer is an awesome tool, its interface is quite intuitive and for those that have used a gui version of a secure ftp client, it looks very similar.

With me so far? Perfect. Let the good times begin 😉

Let's first configure the AWS side of things and then we will move on to the client configuration.

AWS Configuration

In this section we will configure the AWS side of things.

  1. Login to your AWS Account
  2. Create a private Amazon S3 bucket (for the purpose of this blog, I have created an S3 bucket in the region US East (North Virginia) called secureftpfolder)
  3. Use the JSON below to create an AWS Identity and Access Management (IAM) policy called secureftp-policy. This policy will allow access to the newly created S3 bucket (change the Amazon S3 bucket arn in the JSON to your own Amazon S3 bucket’s arn)
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SecureFTPPolicyBucketAccess",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": [
                    "arn:aws:s3:::secureftpfolder"
                ]
            },
            {
                "Sid": "SecureFTPPolicyObjectAccess",
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::secureftpfolder/*"
                ]
            }
        ]
    }

    4. Create an AWS IAM group called secureftp-users and attach the policy created above (secureftp-policy) to it.

  5. Create AWS IAM Users with Programmatic access and add them to the AWS IAM group secureftp-users. Note down the access key and secret access key for the user accounts as these will have to be provided to the users.

That's all that needs to be configured on the AWS side. Simple, isn't it? Now let's move on to the client configuration.

Client Configuration

In this section, we will configure CloudBerry Explorer on a computer, using one of the usernames created above.

  1. On your computer, download CloudBerry Explorer for Amazon S3 from https://www.cloudberrylab.com/explorer/amazon-s3.aspx. Note down the access key that is provided during the download as this will be required when you install it.
  2. Open the downloaded file to install it, and choose the free version when you are provided a choice between the free version and the trial for the pro version.
  3. After installation has completed, open CloudBerry Explorer.
  4. Click on File from the top menu and then choose New Amazon S3 Account.
  5. Provide a meaningful name for the Display Name (you can set this to the username that will be used)
  6. Enter the Access key and Secret key for the user that was created for you in AWS.
  7. Ensure Use SSL is ticked and then click on Advanced and change the Primary region to the region where you created the Amazon S3 bucket.
  8. Click OK to close the Advanced screen and return to the previous screen.
  9. Click on Test Connection to verify that the entered settings are correct and that you can access the AWS Account using the access key and secret access key.
  10. Once the settings have been verified, return to the main screen for CloudBerry Explorer. The main screen is divided into two panes, left and right. For our purposes, we will use the left-hand side pane to pick files in our local computer and the right-hand side pane to correspond to the Amazon S3 bucket.
  11. In the right-hand side pane, click on Source and from the drop down, select the name you gave the account that was created in step 4 above.
  12. Next, in the right-hand side pane, click on the green icon that corresponds to External bucket. In the window that comes up, for Bucket or path to folder/subfolder enter the name of the Amazon S3 bucket you had created in AWS (I had created secureftpfolder) and then click OK.
  13. You will now be returned to the main screen, and the Amazon S3 bucket will now be visible in the right-hand side pane. Double click on the Amazon S3 bucket name to open it. Voila! You have successfully created a connection to the Amazon S3 bucket.
  14. To copy files/folders from your local computer to the Amazon S3 bucket, select the file/folder in the left-hand pane and then drag and drop it to the right-hand pane.
  15. To copy files/folders from the Amazon S3 bucket to your local computer, drag and drop the files/folder from the right-hand pane to the appropriate folder in the left-hand pane.

 

So, tell me honestly, was that easy or what?

Just to ensure I have covered all bases (for now), here are a few questions I would like to answer.

A. Is the transfer of files between the local computer and Amazon S3 bucket secure?

Yes, it is secure. This is due to the Use SSL setting that we saw when configuring the account within CloudBerry Explorer.

B. Can I protect subfolders within the Amazon S3 bucket, so that different users have different access to the subfolders?

Yes, you can. You will have to modify the AWS IAM policy to do this.
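As a rough sketch of that idea, the policy below (created here with the AWS Tools for PowerShell; the teamA/ prefix and policy name are hypothetical) only lets a user or group list and access objects under one subfolder of the bucket:

# Hypothetical example: limit access to the 'teamA/' prefix of the secureftpfolder bucket.
$policyDocument = @'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOnlyTeamAPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::secureftpfolder",
            "Condition": { "StringLike": { "s3:prefix": [ "teamA/*" ] } }
        },
        {
            "Sid": "ObjectAccessTeamAPrefixOnly",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::secureftpfolder/teamA/*"
        }
    ]
}
'@
New-IAMPolicy -PolicyName 'secureftp-teamA-policy' -PolicyDocument $policyDocument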

C. Instead of a GUI client, can I access the Amazon S3 bucket via a script?

Yes, you can. You can download AWS tools to access the Amazon S3 bucket using the command line interface or PowerShell. AWS tools are available from https://aws.amazon.com/tools/
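For example, here is a minimal sketch using the AWS Tools for PowerShell (the profile name and file paths are examples; the bucket and region match the ones used earlier in this post):

# Store the user's credentials once, then upload and download objects from the bucket.
Set-AWSCredential -AccessKey 'YourAccessKey' -SecretKey 'YourSecretKey' -StoreAs 'secureftp-user'
Write-S3Object -BucketName 'secureftpfolder' -Key 'uploads/report.csv' -File 'C:\temp\report.csv' -ProfileName 'secureftp-user' -Region 'us-east-1'
Read-S3Object -BucketName 'secureftpfolder' -Key 'uploads/report.csv' -File 'C:\temp\report-downloaded.csv' -ProfileName 'secureftp-user' -Region 'us-east-1'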

I hope the above comes in handy to anyone thinking of moving their secure ftp (or normal ftp) servers to a serverless architecture.

Automatic Key Rotation for Azure Services

Securely managing keys for services that we use is an important, and sometimes difficult, part of building and running a cloud-based application. In general I prefer not to handle keys at all, and instead rely on approaches like managed service identities with role-based access control, which allow for applications to authenticate and authorise themselves without any keys being explicitly exchanged. However, there are a number of situations where we do need to use and manage keys, such as when we use services that don’t support role-based access control. One best practice that we should adopt when handling keys is to rotate (change) them regularly.

Key rotation is important to cover situations where your keys may have been compromised. Common attack vectors include keys having been committed to a public GitHub repository, a log file having a key accidentally written to it, or a disgruntled ex-employee retaining a key that had previously been issued. Changing the keys means that the scope of the damage is limited, and if keys aren’t changed regularly then these types of vulnerability can be severe.

In many applications, keys are used in complex ways and require manual intervention to rotate. But in other applications, it’s possible to completely automate the rotation of keys. In this post I’ll explain one such approach, which rotates keys every time the application and its infrastructure components are redeployed. Assuming the application is deployed regularly, for example using a continuous deployment process, we will end up rotating keys very frequently.

Approach

The key rotation process I describe here relies on the fact that the services we’ll be dealing with – Azure Storage, Cosmos DB, and Service Bus – have both a primary and a secondary key. Both keys are valid for any requests, and they can be changed independently of each other. During each release we will pick one of these keys to use, and we’ll make sure that we only use that one. We’ll deploy our application components, which will include referencing that key and making sure our application uses it. Then we’ll rotate the other key.

The flow of the script is as follows:

  1. Decide whether to use the primary key or the secondary key for this deployment. There are several approaches to do this, which I describe below.
  2. Deploy the ARM template. In our example, the ARM template is the main thing that reads the keys. The template copies the keys into an Azure Function application’s configuration settings, as well as into a Key Vault. You could, of course, output the keys and have your deployment script put them elsewhere if you want to.
  3. Run the other deployment logic. For our simple application we don’t need to do anything more than run the ARM template deployment, but for many deployments  you might copy your application files to a server, swap the deployment slots, or perform a variety of other actions that you need to run as part of your release.
  4. Test the application is working. The Azure Function in our example will perform some checks to ensure the keys are working correctly. You might also run other ‘smoke tests’ after completing your deployment logic.
  5. Record the key we used. We need to keep track of the keys we’ve used in this deployment so that the next deployment can use the other one.
  6. Rotate the other key. Now we can rotate the key that we are not using. The way that we rotate keys is a little different for each service.
  7. Test the application again. Finally, we run one more check to ensure that our application works. This is mostly a last check to ensure that we haven’t accidentally referenced any other keys, which would break our application now that they’ve been rotated.

We don’t rotate any keys until after we’ve already switched the application to using the other set of keys, so we should never end up in a situation where we’ve referenced the wrong keys from the Azure Functions application. However, if we wanted to have a true zero-downtime deployment then we could use something like deployment slots to allow for warming up our application before we switch it into production.

A Word of Warning

If you’re going to apply the approach described in this post, or the code below, to your own applications, it’s important to be aware of a key limitation. The particular approach described here only works if your deployments are completely self-contained, with the keys only used inside the deployment process itself. If you provide keys for your components to any other systems or third parties, rotating keys in this manner will likely cause their systems to break.

Importantly, any shared access signatures and tokens you issue will likely be broken by this process too. For example, if you provide third parties with a SAS token to access a storage account or blob, then rotating the account keys will cause the SAS token to be invalidated. There are some ways to avoid this, including generating SAS tokens from your deployment process and sending them out from there, or by using stored access policies; these approaches are beyond the scope of this post.

The next sections provide some detail on the important steps in the list above.

Step 1: Choosing a Key

The first step we need to perform is to decide whether we should use the primary or secondary keys for this deployment. Ideally each deployment would switch between them – so deployment 1 would use the primary keys, deployment 2 the secondary, deployment 3 the primary, deployment 4 the secondary, etc. This requires that we store some state about the deployments somewhere. Don’t forget, though, that the very first time we deploy the application we won’t have this state set. We need to allow for this scenario too.

The option that I’ve chosen to use in the sample is to use a resource group tag. Azure lets us use tags to attach custom metadata to most resource types, as well as to resource groups. I’ve used a custom tag named CurrentKeys to indicate whether the resources in that group currently use the primary or secondary keys.
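A minimal sketch of how that could look in PowerShell with the AzureRM cmdlets (the variable names are mine, not necessarily those used in the sample script):

# Step 1: read the CurrentKeys tag; the very first deployment won't have it, so default to Primary.
$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName
$currentKeys = 'Primary'
if ($resourceGroup.Tags -and $resourceGroup.Tags.ContainsKey('CurrentKeys')) {
    $currentKeys = $resourceGroup.Tags['CurrentKeys']
}
Write-Output "This deployment will use the $currentKeys keys."

# Step 5 (later in the release): record which keys this deployment used so the next run can alternate.
$tags = @{}
if ($resourceGroup.Tags) { $tags = $resourceGroup.Tags }
$tags['CurrentKeys'] = $currentKeys
Set-AzureRmResourceGroup -Name $resourceGroupName -Tag $tags | Out-Null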

There are other places you could store this state too – some sort of external configuration system, or within your release management tool. You could even have your deployment scripts look at the keys currently used by the application code, compare them to the keys on the actual target resources, and then infer which key set is being used that way.

A simpler alternative to maintaining state is to randomly choose to use the primary or secondary keys on every deployment. This may sometimes mean that you end up reusing the same keys repeatedly for several deployments in a row, but in many cases this might not be a problem, and may be worth the simplicity of not maintaining state.

Step 2: Deploy the ARM Template

Our ARM template includes the resource definitions for all of the components we want to create – a storage account, a Cosmos DB account, a Service Bus namespace, and an Azure Function app to use for testing. You can see the full ARM template here.

Note that we are deploying the Azure Function application code using the ARM template deployment method.

Additionally, we copy the keys for our services into the Azure Function app’s settings, and into a Key Vault, so that we can access them from our application.

Step 4: Testing the Keys

Once we’ve finished deploying the ARM template and completing any other deployment steps, we should test to make sure that the keys we’re trying to use are valid. Many deployments include some sort of smoke test – a quick test of core functionality of the application. In this case, I wrote an Azure Function that will check that it can connect to the Azure resources in question.

Testing Azure Storage Keys

To test connectivity to Azure Storage, we run a query against the storage API to check if a blob container exists. We don’t actually care if the container exists or not; we just check to see if we can successfully make the request:
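The original snippet isn't reproduced here; as a sketch of the same idea in PowerShell (the post's sample performs this check inside an Azure Function, and the variable names below are assumptions):

# Any authenticated call will do; an invalid key causes the request to throw.
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
try {
    # We don't care whether any container matches the prefix, only that the request is authorised.
    Get-AzureStorageContainer -Context $context -Prefix 'healthcheck' -ErrorAction Stop | Out-Null
    Write-Output 'Storage key check passed.'
}
catch {
    throw "Storage key check failed: $_"
}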

Testing Cosmos DB Keys

To test connectivity to Cosmos DB, we use the Cosmos DB SDK to try to retrieve some metadata about the database account. Once again we’re not interested in the results, just in the success of the API call; the relevant code is in the full Azure Function linked below.

Testing Service Bus Keys

And finally, to test connectivity to Service Bus, we try to get a list of queues within the Service Bus namespace. As long as we get something back, we consider the test to have passed; again, the relevant code is in the full Azure Function linked below.

You can view the full Azure Function here.

Step 6: Rotating the Keys

One of the last steps we perform is to actually rotate the keys for the services. The way in which we request key rotations is different depending on the services we’re talking to.

Rotating Azure Storage Keys

Azure Storage provides an API that can be used to regenerate an account key. From PowerShell we can use the New-AzureRmStorageAccountKey cmdlet to access this API:
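For example (a sketch; $currentKeys is whichever key set this deployment used, so we regenerate the other one):

# Regenerate the key we are NOT currently using.
$keyToRotate = 'key1'
if ($currentKeys -eq 'Primary') { $keyToRotate = 'key2' }
New-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName $keyToRotate | Out-Null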

Rotating Cosmos DB Keys

For Cosmos DB, there is a similar API to regenerate an account key. There are no first-party PowerShell cmdlets for Cosmos DB, so we can instead use a generic Azure Resource Manager cmdlet to invoke the API:
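A sketch of that call (the variable names are mine; the API version is one that was valid for the DocumentDB resource provider at the time of writing):

# Regenerate the unused Cosmos DB key via the generic resource action cmdlet.
$keyKind = 'primary'
if ($currentKeys -eq 'Primary') { $keyKind = 'secondary' }
Invoke-AzureRmResourceAction `
    -ResourceGroupName $resourceGroupName `
    -ResourceType 'Microsoft.DocumentDb/databaseAccounts' `
    -ResourceName $cosmosDbAccountName `
    -Action 'regenerateKey' `
    -Parameters @{ keyKind = $keyKind } `
    -ApiVersion '2015-04-08' `
    -Force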

Rotating Service Bus Keys

Service Bus provides an API to regenerate the keys for a specified authorization rule. For this example we’re using the default RootManageSharedAccessKey authorization rule, which is created automatically when the Service Bus namespace is provisioned. The PowerShell cmdlet New-AzureRmServiceBusKey can be used to access this API:
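A sketch of that call (again, the variable names are mine):

# Regenerate the unused key on the namespace-level authorization rule.
$keyToRegenerate = 'PrimaryKey'
if ($currentKeys -eq 'Primary') { $keyToRegenerate = 'SecondaryKey' }
New-AzureRmServiceBusKey `
    -ResourceGroupName $resourceGroupName `
    -Namespace $serviceBusNamespaceName `
    -Name 'RootManageSharedAccessKey' `
    -RegenerateKey $keyToRegenerate | Out-Null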

You can see the full script here.

Conclusion

Key management and rotation is often a painful process, but if your application deployments are completely self-contained then the process described here is one way to ensure that you continuously keep your keys changing and up-to-date.

You can download the full set of scripts and code for this example from GitHub.

Remove/Modify Specific AWS Tags from the Environment- PowerShell

Why use TAGs

To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them.

(Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html)

Problem :

Sometimes tags are applied in environments prior to developing a tagging strategy. The problem increases exponentially with the size of the environment and the number of users creating resources.

We are looking for a solution to remove specific unwanted tags from EC2 instances, or to modify tag values which are incorrect.

For this purpose, the script below was developed to solve the problem for AWS.

Solution :

The script below performs the following tasks:

  • Get the list of all the EC2 instances in the tenant
  • Loop through all the EC2 instances
  • Get values of all the tags in the environment
  • Check each Tag Key and Tag Value.
  • Modify or remove the tag value (based on the requirement)

Code:

#Set up the AWS profile using the Access Key and Secret Key

Set-AWSCredential -AccessKey AccessKey -SecretKey SecretKey -StoreAs ProfileName

#Getting the list of all the instances in the Tenant

$instances = (Get-EC2Instance -ProfileName ProfileName -Region RegionName).Instances

$tagkeytoremove = 'TAG1' # Declaring the TAG Key to remove / modify

$tagvaluetoremove = 'ChangePlease' # Declaring the Tag Value to remove / modify

$NewTagValue = "NewTagValue" # Declaring the new tag value.

Foreach ( $instance in $instances ) # Looping through all the instances
{
    $OldTagList = $instance.tags
    foreach ($tag in $OldTagList) # Looping through all the Tags on this instance
    {
        if($tag.key -ceq $tagkeytoremove -and $tag.Value -ceq $tagvaluetoremove ) # Comparing the TAG Key and Value (case sensitive)
        {
            # Note: act on the current $instance, not the whole $instances collection
            Remove-EC2Tag -Resource $instance.instanceid -Tag $tag -ProfileName ProfileName -Region RegionName -Force # Removing the old Tag Key/Value pair
            New-EC2Tag -Resource $instance.instanceid -Tag @{ Key=$tag.key;Value=$NewTagValue} -ProfileName ProfileName -Region RegionName -Force # Adding the new Tag Key/Value pair
        }
    }
} # Loop Ends

 

Creating custom Deep Learning models with AWS SageMaker

This blog will cover how to use SageMaker, and I’ve included the code from my GitHub, https://github.com/Steve–Hunter/DeepLens-Safety-Helmet.

1 What is AWS SageMaker?

AWS (Amazon Web Services) SageMaker is “a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.” (https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html). In other words, SageMaker gives you a one-stop-shop to get your Deep Learning models going, in a relatively friction-less way.
Amazon have tried hard to deliver a service that appeals to the life-cycle for developing models, which are the results of training. It enables Deep Learning to complete the virtuous circle of:

Data can cover text, numeric, images, video – the idea is that the model gets ‘smarter’ as it learns more of the exceptions and relationships in being given more data.
SageMaker provides Jupyter Notebooks as a way to develop models; if you are unfamiliar, think of Microsoft OneNote with code snippets, you can run (and re-run) a snippet at a time, and intersperse with images, commentary, test runs. The most popular coding language is Python (which is in the name of Jupyter).

2 AI / ML / DL ?

I see the phrases AI (Artificial Intelligence), Machine Learning (ML) and Deep Learning used interchangeably; this diagram shows the relationship:



(from https://www.geospatialworld.net/blogs/difference-between-ai%EF%BB%BF-machine-learning-and-deep-learning/)

So I see AI encompassing most things not yet possible (e.g. Hollywood ‘killer robots’); Deep Learning has attracted attention, as it permits “software to train itself”; this is contrary to all previous software, which required a programmer to specifically tell the machine what to do. What makes this hard is that it is very difficult to foresee everything that could come up, and almost impossible to code for exceptions from ‘the real world’. An example of this is machine vision, where conventional ‘rule-based’ programming logic can’t be applied, or if you try, only works in very limited circumstances.
This post will cover the data and training of a custom model to identify people wearing safety helmets (like those worn on a construction site), and a future post will show how to load this model into an AWS DeepLens (please see Sam Zakkour’s post on this site). A use case for this would be getting something like a DeepLens to identify workers at a construction site that aren’t wearing helmets.

3 Steps in the project

This model will use a ‘classification’ approach, and only has to decide between people wearing helmets and those who aren’t.
The project has 4 steps:

  • Get some images of people wearing and not wearing helmets
  • Store images in a format suitable for Deep Learning
  • Fine tune an existing model
  • Test it out!

3.1 Get some images of people wearing and not wearing helmets

The hunger for data to feed Deep Learning models has led to a number of online resources that can supply data. A popular one is Imagenet (http://www.image-net.org/), with over 14 million images in over 21,000 categories. If you search for ‘hard hat’ (a.k.a ‘safety helmet’) in Imagenet:

Your query returns:

The ‘Synset’ is a kind of category in Imagenet, and covers the inevitable synonyms such as ‘hard hat’, ‘tin hat’ and ‘safety hat’.
When you expand this Synset, you get all the images; we need the parameter in the URL that uniquely identifies these images (the ‘WordNet ID’) to download them:

Repeat this for images of ‘people’.
Once you have the ‘WordNet ID’ you can use this to download the images. I’ve put the code from my Jupyter Notebook here if you want to try it yourself https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/1.%20Download%20ImageNet%20images%20by%20Wordnet%20ID.ipynb
I added a few extras in my code to:

  1. Count the images and report progress
  2. Continue on a bad image (one poisoned my .rec image file!)
  3. Parameterise the root folder and class for the images

This saves the images to the SageMaker server in AWS, where they are picked up by the next stage …

3.2 Store images in a format suitable for Deep Learning

It would be nice if we could just feed in the images as JPEGs, but most image processing frameworks require the images to be pre-processed, mainly for performance reasons (disk IO). AWS uses MXNet a lot, and so that’s the framework I used, with its ImageRecord (recordIO) format. You can read more about it here https://gluon-cv.mxnet.io/build/examples_datasets/recordio.html, and the Jupyter Notebook is here https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/2.%20Store%20images%20into%20binary%20recordIO%20format%20for%20MXNEt.ipynb .
The utility to create the ImageRecord format also splits the images into

  • a set of training and testing images
  • images that show wearing and not wearing helmets (the two categories we are interested in)

It’s best practice to train on one set of images but test on another, in a ratio of around 70:30. This avoids the deep learning curse of ‘over-fitting’, where the model hasn’t really learned ‘in general’ what people wearing safety helmets look like, only the ones it has seen already. This is the really cool part of deep learning: it really does learn, and can tell from an unseen image if there is a person(s) wearing a safety helmet!
The two ImageRecord files for training and testing are stored in SageMaker, for the next step …

3.3 Fine tune an existing model

One of my favourite sayings is by Isaac Newton: “If I have seen further it is by standing on the shoulders of Giants.” This applies to Deep Learning; in this case the ‘Giants’ are Google, Microsoft etc, and the ‘standing on’ is the open source movement. You could train your model on all 14 million images in Imagenet, taking weeks and an immense amount of compute power (which only the likes of Google and Microsoft can afford, though they generously open source the trained models), but a neat trick in deep learning is to take an existing model that has been trained, and ‘re-purpose’ it for what you want. There may not be a pre-trained model for the images you want to identify, but you can find something close enough, and train it on just the images you want.
There are so many pre-trained models that the MXNet framework refers to them as a ‘model zoo’. The one I used is called ‘Squeezenet’ – there are competitions to find the model that can perform best, and Squeezenet gives good results and is small enough to load onto a small device like a DeepLens.
So the trick is to start with something that looks like what we are trying to classify; Squeezenet has two existing categories for helmets, ‘Crash helmet’ and ‘Football helmet’.
When you use the model ‘as is’, it does not perform well and gets things wrong – telling it to look for ‘Crash Helmets’ in these images, it thinks it can ‘see them’. There are two sets of numbers below, which each represent the probability of the corresponding image having helmets in it. Both numbers are percentages: the first is the prediction that there is a helmet, the second that there is not.

Taking ‘Crash helmet’ as the starting point, I re-trained (also called ‘fine tuning’ or ‘transfer learning’) the last part of the model (the purple one on the far right) to learn what safety helmets look like.

The training took about an hour on an Amazon ml.t2.medium instance (free tier) and I picked the ‘best’ accuracy; you can see the code and runs here: https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/3.%20Fine%20tune%20existing%20model.ipynb

3.4 Test it out!

After training, things improve a lot – in the first image below, the model is now 96% certain it can see safety helmets, and in the second it is 98% certain there are none.
What still ‘blows my mind’ is that there are multiple people in the image – the training set contained individuals, groups, different lighting and helmet colours – imagine trying to ‘code’ for this in a conventional way! But the model has learned the ‘helmet-ness’ of the images!




You can give the model an image it has never seen (e.g. me wearing a red safety helmet, thanks fire warden!):

4 Next

My GitHub goes on to cover how to deploy to a DeepLens (still working on that), and I’ll blog about how that works later, and what it could do if it ‘sees’ someone not wearing a safety helmet.
This example is a simple classifier (‘is’ or ‘is not’ … like the ‘Hotdog not hotdog’ episode of ‘Silicon Valley’), but it could cover many different categories, or be trained to recognise people’s faces from a list.
The same method can be applied to numeric data (e.g. find patterns to determine if someone is likely to default on a loan), and with almost limitless cloud-based storage and processing, new applications are emerging.
I feel that the technology is already amazing enough; we can now dream up equally amazing use cases and applications for this fast-moving and evolving field of deep learning!

Azure Application Gateway WAF tuning

The Azure Application Gateway has a Web Application Firewall (WAF) capability that can be enabled on the gateway. The WAF will use the OWASP ModSecurity Core Rule Set 3.0 by default and there is an option to use CRS 2.2.9.
CRS 3.0 offers reduced occurrences of false positives over 2.2.9 by default. However, there may still be times when you need to tune your WAF rule sets to avoid false positives in your site.

Blocked access to the site

The Azure WAF filters all incoming requests to the servers in the backend of the Application Gateway. It uses the ModSecurity Core Rule Sets described above to protect your sites against various threats such as code injection, hacking attempts, web attacks, bots and misconfigurations.
When the rule threshold is reached on the WAF, access to the page is denied and a 403 error is returned. In the screenshot below, we can see that the WAF has blocked access to the site, and when viewing the page in Chrome tools under Network -> Headers we can see that the Status Code is 403 ModSecurity Action.

Enable WAF Diagnostics

To be able to view more information on the rules that are being triggered on the WAF, you will need to turn on diagnostic logs; you do this by adding a diagnostic setting. There are different options for configuring the diagnostic settings, but in this example we will direct them to an Azure storage account.
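If you prefer to script it, here is a minimal sketch with the AzureRM cmdlets (the gateway, resource group and storage account names below are placeholders):

# Send the Application Gateway firewall log to a storage account.
$appGw = Get-AzureRmApplicationGateway -Name 'myAppGateway' -ResourceGroupName 'myResourceGroup'
$storage = Get-AzureRmStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mydiagnosticsstorage'
Set-AzureRmDiagnosticSetting -ResourceId $appGw.Id -StorageAccountId $storage.Id -Enabled $true -Categories 'ApplicationGatewayFirewallLog'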

Viewing WAF Diagnostic Logs

Now that diagnostic logging for the WAF is enabled and directed to a storage account, we can browse to the storage account and view the log files. An easy way to do this is to download the Azure Storage Explorer. You can then use it to browse the storage account, and you will see 3 containers that are used for the Application Gateway logging.

  • insights-logs-applicationgatewayaccesslog
  • insights-logs-applicationgatewayfirewalllog
  • insights-logs-applicationgatewayperformancelog

The container that we are interested in for the WAF logs is the insights-logs-applicationgatewayfirewalllog container.
Navigate through the container until you find the PT1H.json file. This is the hourly log of firewall actions on the WAF. Double click on the file and it will open in the application configured to view JSON files.
Each entry in the WAF log includes information about the request and why it was triggered, such as the ruleId and message details. In the sample log below there are 2 highlighted entries.
The message details for the first highlighted log indicate the following “Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score.“.
So we can see that when the anomaly threshold of 5 was reached the WAF triggered the 403 ModSecurity action that we initially saw from the browser when trying to access the site. It is also important to notice that this particular rule cannot be disabled, and it indicates that it is an accumulation of rules being triggered.
The second rule indicates that a file with extension .axd is being blocked by a policy.

Tuning WAF policy rules

Each of the WAF log entries that are captured should be carefully reviewed to determine if they are valid threats. If, after reviewing the logs, you are able to determine that the entry is a false positive or the log captures something that is not considered a risk, you have the option to tune the rules that will be enforced.
From the Web Application Firewall section within the Application Gateway you have the following options:

  • Enable or Disable the WAF
  • Configure Detection or Prevention modes for the WAF
  • Select rule set to use
  • Customize rule configuration

In the example above, if we were to decide that the .axd file extension is valid and allowed for the site, we could search for the ruleID 9420440 and un-select it.
Once the number of rules being triggered drops below the anomaly threshold, the 403 ModSecurity Action will no longer prevent access to the site.
For new implementations or during testing you could apply the Detection mode only and view and fine tune the WAF prior to enabling for production use.
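The same change can be scripted; here is a sketch with the AzureRM cmdlets (the names are placeholders, and the rule group and rule number are assumptions that must match what your own firewall log reports):

# Disable a single CRS rule while keeping the WAF in Prevention mode.
$appGw = Get-AzureRmApplicationGateway -Name 'myAppGateway' -ResourceGroupName 'myResourceGroup'
$disabledRuleGroup = New-AzureRmApplicationGatewayFirewallDisabledRuleGroupConfig -RuleGroupName 'REQUEST-920-PROTOCOL-ENFORCEMENT' -Rules 920440
Set-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appGw -Enabled $true -FirewallMode 'Prevention' -RuleSetType 'OWASP' -RuleSetVersion '3.0' -DisabledRuleGroups $disabledRuleGroup
Set-AzureRmApplicationGateway -ApplicationGateway $appGw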

Creating an Enterprise-Wide Cloud Strategy – Considerations & Benefits

What is a strategy?

A strategy is “a plan of action designed to achieve a long-term or overall aim”. It involves setting goals, determining actions to achieve the goals, and mobilising limited resources to execute the actions.
A good Cloud Strategy is…

  • Specific
  • Timely
  • Prioritised
  • Actionable
  • Tailored

Note – a strategy is different to an organisation’s requirements, which can change over a period of time.
Best practice is to define your strategy so that it maximises the benefits you achieve.
The following items can be considered when creating your cloud strategy (source: NetApp Research recommendation):

  • Categorisation of your workloads: Strategic vs Operational (goals may change over time)
  • Determine which cloud type fits the workload: which type of cloud (public, private or hybrid) will be most efficient and cost-effective for delivering that workload
  • Prioritise workloads for initial projects: Non-critical applications/smaller workloads

Benefits of Having a Cloud Strategy

An enterprise-wide cloud strategy provides a structured way to incorporate cloud services into the IT mix. It helps make sure that all stakeholders have a say in how, when, and where cloud adoption occurs. It can also offer advantages that you may not have considered. Below are some of the typical benefits (source: NetApp research).

  • Maximise the business benefits of the cloud: cutting costs, improving efficiency, increasing agility, and so on – and those benefits increase when there is a clear cloud strategy in place.
  • Uncover business benefits you might otherwise miss. One frequently overlooked advantage of the cloud is the ability to accelerate innovation. By moving certain functions to the cloud, IT can speed up the process of building, testing, and refining new applications—so teams can explore more ideas, see what works and what doesn’t, and get innovative products and services to market faster.
  • Prepare the infrastructure you’ll need. A well-thought-out cloud strategy provides an opportunity to consider some of the infrastructure needs of the cloud model up front rather than as an afterthought.
  • Retain control in an era of on-demand cloud services. An enterprise-wide cloud strategy can help your business maintain control of how cloud services are purchased and used so that enterprise governance policies and standards can be managed and enforced.

How to avoid a cloud strategy that fails

  • It’s a way to save money: The danger here is that cloud computing is not always cheaper. Services that can be effectively outsourced to “the cloud” to save money are often highly standardised commodity services that have varied demand for infrastructure over time. Cloud computing can save money, but only for the right services.
  • It’s a way to renovate enterprise IT: Not everything requires speed of deployment, or rapid scaling up or down. Some services require significant and unique enterprise differentiation and customisation.
  • It’s a way to innovate and experiment: Cloud computing makes it extremely easy to get started and to pilot new services. The challenge for enterprises is to enable innovation and experimentation, but to have a feasible path from pilots to production, and operational industrialisation.

Source: Forbes.com
In summary, cloud computing does not have a single value proposition for all enterprises and all services.
Recommendation: A cloud computing strategy should include these three approaches,

  1. Define a high-level business case
  2. Define core requirements
  3. Define core technology

and organisations must probe where they can benefit from cloud in various ways.
This will help drive enterprise IT to a new core competency, away from solely being a provider of services, and toward being both a provider and a broker of services delivered in a variety of ways for a variety of business values (Hybrid IT)

Sample cloud decision framework


Summary

Make your cloud strategy a business priority. The benefits are real, but don’t make your move to the cloud until you’ve made a serious commitment to creating an enterprise-wide cloud strategy. The time you take to formalise your strategy will pay off in higher cost savings, better efficiency, more agility, and higher levels of innovation.
Research shows that companies with an enterprise-wide cloud strategy are far more successful at using the cloud to reduce costs, improve efficiency, and increase business agility than companies without such a strategy.
I hope you find the above information useful.


Disk Space Reporting through Lambda Functions – Windows Servers

1. Solution Objective:

The solution provides a detailed report on hard disk space for all the Windows EC2 instances in the AWS environment.

2. Requirements:

Mentioned below are the requirements that the solution should be able to fulfil.

  • Gather information related to all mount points in all the Windows EC2 instances in the environment.
  • Generate a cumulative report based on all instances in the environment.

3. Assumptions:

The following assumptions are considered

  • All the EC2 instances have SSM agent installed.
  • The personnel responsible for the configuration have some understanding of IAM Roles, S3 buckets and lambda functions

4. Solution Description:

The following services provided by Amazon will be utilized to generate the report:

  • PowerShell Scripts
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • Maintenance Windows
  • AWS SES

4.1 PowerShell Script

A PowerShell script will be utilized to generate information about the instance and its mount point space utilization.
The script mentioned below needs to be executed on all Windows EC2 instances to generate the mount point information.

$instanceId = Invoke-WebRequest -Uri http://169.254.169.254/latest/meta-data/instance-id -UseBasicParsing
$instanceId.content
# Win32_LogicalDisk does not expose Used or PlaceHolder properties; output DeviceID, FreeSpace and Size
# in that order so that the columns line up with the table built by the Lambda function below.
Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID,FreeSpace,Size | Format-Table -AutoSize

4.2 AWS S3

The result of the PowerShell script will be posted to an S3 bucket for further use.
The EC2 instances will need write access to the nominated S3 bucket for this reporting.
S3 bucket name: eomreport (sample name – the Lambda function below reads from a bucket named diskspacewindows, so use one consistent bucket name throughout).

4.3 AWS Lambda Functions

Lambda Functions will be used to perform the following activities.

  • Acquire the result of the Shell script from the S3 bucket
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda function would need read access to the S3 bucket and access to AWS SES to send emails to recipients.
Mentioned below is the Lambda function that performs the tasks listed above.

import boto3
import codecs
import pprint
from datetime import datetime, date, time
def lambda_handler(event,Context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('diskspacewindows')
    resulthtml = ["<h1>Report : Hard disk Space Client Name </h1>"] # Adds heading to the email body
    resulthtml.append('<html><body><table border="1">') # Creates a table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Drive Letter</b></td><td><b> FreeSpace</b></td><td><b>Total Space </b></td></tr>')
    for file_key in mybucket.objects.all():
        complete_string = str(file_key)
        search = "stdout"
        check = complete_string.find(search)
        if check > 0 :
            body = file_key.get()['Body'].read().decode('utf-8')
            complete=body.splitlines() #splits data into lines.
            id="".join(complete[0])
            details=complete[4:]
            resulthtml.append(("<td>'{}'</td><td></td><td></td><td></td></tr>").format(id)) # for the HTML email to be sent.
            for line in details:
                    output_word=line.split()
                    dstr="".join(line)
                    #print(output_word)
                    #print(len(output_word))
                    if len(output_word) > 0:
                      resulthtml.append(("<td></td><td>'{}'</td><td>'{}'</td><td>'{}'</td></tr>").format(output_word[0],output_word[1],output_word[2])) # for the HTML email to be sent.
    resulthtml.append('</table></body></html>')
    final=str("".join(resulthtml))
    final=final.replace("'","")
    print(final)
    sender = "syed.naqvi@kloud.com.au"
    recipient = "syed.naqvi@kloud.com.au"
    awsregion = "us-east-1"
    subject = "Client Hard Disk Space - Windows "
    charset = "UTF-8"
    mylist="mylist update"
    client = boto3.client('ses',region_name=awsregion)
    try:
        response = client.send_email(
           Destination={
               'ToAddresses': [
                   recipient,
                ],
            },
         Message={
                  'Body': {
                      'Html': {
                        'Charset': charset,
                        'Data': final,
                             },
                    'Text': {
                     'Charset': charset,
                     'Data': mylist,
                    },
                },
                'Subject': {
                    'Charset': charset,
                    'Data': subject,
                },
            },
            Source=sender,
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print( "Error: ", e)
    else:
       print("Email sent!")

 
4.4 AWS IAM Roles

IAM roles will be used to grant:

  • AWS S3 write access to all the EC2 instances, as they will submit the script output to the S3 bucket
  • AWS SES access to Lambda Functions to send emails to relevant recipients.

4.5 AWS SES

Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email.( Ref. From https://aws.amazon.com/ses/).
We would be utilizing AWS SES to generate emails using AWS lambda.
The configuration of the Lambda function can be modified to send emails to a distribution group to provide disk space reporting, or to send emails to a ticketing system in order to provide alerting and ticket creation in case free disk space crosses a configured threshold.

5. Solution Configuration

5.1 Configure IAM Roles

The following Roles should be configured

  • IAM role for the Lambda function.
  • IAM role for EC2 instances for S3 bucket access.

5.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above, the following policy should be created and attached to the IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501474857000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3BucketName/*"
            ]
        },
        {
            "Sid": "Stmt1501474895000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

5.1.2 Role for EC2 Instance

All EC2 instances should have access to store the script output in the S3 bucket.
To accomplish this, the following policy should be assigned to the EC2 role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501475224000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::eomreport"
            ]
        }
    ]
}

5.2 Configure Maintenance Window

The following tasks need to be performed for the maintenance window

  • Register a Run Command task using the AWS-RunPowerShellScript document with the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement

Maintenance Window Ref : 
http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html

5.3 Configure Lambda Function:

The following tasks need to be performed for the Lambda Function

  • Create a blank Lambda function with the S3 PUT event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select run time Python 3.6
  • Copy and paste the Lambda function code from section 4.3

5.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.

6. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

Office 365 URLs and IP address updates for firewall and proxy configuration, using Flow and Azure Automation

tl;dr

To use Microsoft Office 365, an organisation must allow traffic to [and sometimes from] the respective cloud services via the internet on specific ports and protocols to various URLs and/or IP addresses, or, if you meet the requirements, via Azure ExpressRoute. Oh duh?!
To further expand on that, connections to trusted networks (a category we assume Office 365 falls into) that are also high in volume (since most communication and collaborative infrastructure resides there) should be via a low latency egress that is as close to the end user as possible.
As more and more customers use the service, and as more and more services and functionality are added, so too will the URLs and IP addresses need to change over time. Firewalls and proxies need to be kept up to date with the destination details of Office 365 services. This is an evergreen solution, let’s not forget. So, it’s important to put processes in place to correctly optimise connectivity to Office 365. It’s also very important to note that if these change management processes are ignored, services will end up blocked or delivering inconsistent experiences for end users.

Change is afoot

Come October 2nd 2018, Microsoft will change the way customers can keep up to date with changes to these URLs and IP addresses. A new web service is coming online that publishes Office 365 endpoints, making it easier for you to evaluate, configure, and stay up to date with changes.

Furthermore, the holistic overview of these URLs and IP addresses is being broken down into three new key categories: OPTIMISE, ALLOW and DEFAULT.

You can get more details on these 3x categories from the following blog post on TechNet: https://blogs.technet.microsoft.com/onthewire/2018/04/06/new-office-365-url-categories-to-help-you-optimize-the-traffic-which-really-matters/
 
It’s not all doom and gloom, though, even if your RSS feed no longer works. The new web service (still in public preview at the time of writing this blog) is rather zippy and allows for some great automation. So, that’s the target state: automation.
Microsoft wants to make it nice and easy for firewall, proxy or other edge security appliance vendors and service providers to programmatically interact with the web service and offer dynamic updates for Office 365 URL and IP address information. In practice, change management and governance processes will still be followed; in most circumstances, organisations follow whatever ITIL or ITIL-like methodologies are in place for those sorts of things.
The dream Microsoft has, though, is actually quite compelling.
Before we get to this streamlined utopia where my customers’ edge devices update automatically, I’ve needed to come up with a process for the interim tactical state. This process runs as follows:

  • Check daily for changes in Office 365 URLs and IP addresses
  • Download changes in a user readable format (So, ideally, no XML or JSON. Perhaps CSV for easy data manipulation or even ingestion into production systems)
  • Email intended parties that there has been a change in the global version number of the current Office 365 URLs and IP addresses
  • Allow intended parties to download the output

NOTE – for my use case here, the output contains only IP addresses. That’s because the infrastructure managed by the teams I’ll be sending this information to only works with that data type. If you tweak the web service request (details further down; see the sketch below), you can grab both URLs and IP addresses, or one or the other.
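
As an example, here's a minimal PowerShell sketch of that tweak. It queries the same public preview web service used by the runbooks further down (the ClientRequestId GUID below is the sample value reused throughout this post; generate your own with New-Guid) and pulls out IP addresses, URLs, or both.

# Query the Office 365 endpoints web service (worldwide instance)
$clientRequestId = "b10c5ed1-bad1-445f-b386-b919946339a7" # generate your own with New-Guid
$endpoints = Invoke-RestMethod -Uri "https://endpoints.office.com/endpoints/Worldwide?ClientRequestId=$clientRequestId"

# IP addresses only (what I use in this post)
$endpoints | Where-Object { $_.ips } | ForEach-Object { $_.ips } | Sort-Object -Unique

# URLs only
$endpoints | Where-Object { $_.urls } | ForEach-Object { $_.urls } | Sort-Object -Unique

# Or both, grouped by category (Optimize, Allow, Default)
$endpoints | Select-Object category, urls, ips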

 

Leveraging Microsoft Flow and Azure Automation

My first instinct here was to use Azure Automation and run one very long PowerShell script with ifs and thens and so on. However, when working through the script, 1) my PowerShell skills aren’t at a high enough level to bang this out quickly, and 2) Flow is an amazing tool for running through some of the tricky bits in a more effortless way.
So, leveraging the goodness of Flow, here’s a high-level rundown of what the solution looks like:

 
The workflow runs as follows:

  1. Microsoft Flow
  2. On a daily schedule, the Flow is triggered at 6am
  3. Runbook #1
    1. Runbook is initiated
    2. Runbook imports the CSV from Azure Blob
    3. PowerShell runs a command to query the web service and saves the output to the CSV
    4. The CSV is copied back to Azure Blob
  4. Runbook #2
    1. Runbook is initiated
    2. Runbook imports the CSV from Azure Blob
    3. The last value in the version column is compared to the previous one
    4. If a newer version is found, an Output of “NEW-VERSION-FOUND” is saved to Azure Automation
  5. The Output is taken from the previous Azure Automation Runbook run
  6. A Flow Condition is triggered: YES if the Output is found, NO if nothing is found

Output = YES

  • 7y1 = Runbook #3 is run
    • Runbook queries web service for all 3 conditions: optimise, allow and default
    • Each query for that day’s IP address information is saved into 3 separate CSV files
  • 7y2 = CSV files are copied to Azure Blob
  • 7y3 = Microsoft Flow queries Azure Blob for the three files
  • 7y4 = An email template is used to email respective interested parties about the change to the IP address information
    • The 3x files are added as attachments

Output = Nothing or NO

  • 7n1 = Send an email to the service account mailbox to say there were no changes to the IP address information for that day

 

The process

Assuming, dear reader, that you have some background with Azure and Flow, here’s a detailed outline of the process I went through (and one that you can replicate) to automate checking for, and providing relevant parties with, updates to the Office 365 URL and IP address data.
Let’s begin!

Step 1 – Azure AD
  • I created a service account in Azure AD that has been given an Office 365 license for Exchange Online and Flow (see the sketch after this list)
  • The user details don’t really matter here as you can follow your own naming convention
  • My example username is as follows: svc-as-aa-01@[mytenant].onmicrosoft.com
    • Naming convention being: “Service account – Australia South East – Azure Automation – Sequence number”
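As a rough sketch only, the same account can be created with the MSOnline PowerShell module. The UPN, display name and license SKU below are placeholders; substitute whichever SKU covers Exchange Online and Flow in your tenant.

Import-Module MSOnline
Connect-MsolService # sign in with a tenant admin account

# Create the service account and assign a license (the SKU name is a placeholder)
New-MsolUser -UserPrincipalName "svc-as-aa-01@[mytenant].onmicrosoft.com" -DisplayName "Service account - Australia South East - Azure Automation - 01" -UsageLocation "AU" -LicenseAssignment "[mytenant]:[sku here]"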
Step 2 – Azure setup – Resource Group
  • I logged onto my Azure environment and created a new Resource Group
  • My solution has a couple of components (Azure Automation account and a Storage account), so I put them all in the same RG. Nice and easy
  • My Resource Group details
    • Name = [ASPRODSVCAA01RG]
    • Region = Australia South East as that’s the local Azure Automation region
    • That’s a basic naming convention of: “Australia South East – Production environment – Purpose, being for the SVC account and Azure Automation – Sequence number – Resource Group”
  • Once the group was created, I added my service account as a Contributor to the group (a PowerShell sketch of this step follows this list)
    • This allows the account downstream permissions to the Azure Automation and Storage accounts I’ll add to the resource group
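Here's a minimal sketch of that step using the AzureRM module (the same module family the runbooks below use). The resource group name, region and service account UPN are placeholders following my naming convention.

Import-Module AzureRM.Resources
Login-AzureRmAccount # sign in with an account that can create resource groups and role assignments

# Create the resource group in Australia Southeast
New-AzureRmResourceGroup -Name "[ASPRODSVCAA01RG]" -Location "australiasoutheast"

# Grant the service account Contributor rights over the resource group
New-AzureRmRoleAssignment -SignInName "svc-as-aa-01@[mytenant].onmicrosoft.com" -RoleDefinitionName "Contributor" -ResourceGroupName "[ASPRODSVCAA01RG]"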
Step 3 – Azure Setup – Storage account
  • I created a storage account and stored that in my resource group
  • The storage account details are as follows
    • Name = [asprodsvcaa01] = Again, follow your own naming convention
    • Deployment model = Resource manager
    • General Purpose v2 (StorageV2) storage
    • Locally redundant storage (LRS) only
    • Standard performance
    • Hot access tier
  • Within the storage account, I’ve used Blob storage
    • There are two containers that I used:
      • Container #1 = “daily”
      • Container #2 = “ipaddresses”
    • This is where the output CSV files will be stored
  • Again, we don’t need to assign any permissions as we assigned Contributor permissions to the resource group (see the sketch below)
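Again as a sketch with placeholder names, the storage account and the two Blob containers can be created with the AzureRM and Azure.Storage modules like this:

Import-Module AzureRM.Storage

# Create the storage account: General Purpose v2, locally redundant, standard performance, hot access tier
New-AzureRmStorageAccount -ResourceGroupName "[ASPRODSVCAA01RG]" -Name "[asprodsvcaa01]" -Location "australiasoutheast" -SkuName Standard_LRS -Kind StorageV2 -AccessTier Hot

# Create the two Blob containers used by the runbooks
$acctKey = (Get-AzureRmStorageAccountKey -Name "[asprodsvcaa01]" -ResourceGroupName "[ASPRODSVCAA01RG]").Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[asprodsvcaa01]" -StorageAccountKey $acctKey
New-AzureStorageContainer -Name "daily" -Context $storageContext -Permission Off
New-AzureStorageContainer -Name "ipaddresses" -Context $storageContext -Permission Off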
Step 4 – Azure Setup – Azure Automation
  • I created a new Azure Automation account with the following parameters
    • Name = [SVCASPROD01AA] = Again, follow your own naming convention
    • Default parameters, matching my resource group settings
    • Yes, I also had a Run As account created (default option)
  • I created three Runbooks, as per below

 

  • Step1-GetGlobalVersion = Again, follow your own naming convention
  • This is a PowerShell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion
#region CONNECT
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion
#region IMPORT CSV FILE FROM BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Get-AzureStorageBlob -Context $storageContext -Container "[name here]" | Get-AzureStorageBlobContent -Destination . -Context $storageContext -Force
#endregion
#region GET CURRENT VERSION
# Query the web service for the latest global version number and append it, with today's date, to the CSV
$DATE = $(((Get-Date).ToUniversalTime()).ToString("yyyy-MM-dd"))
Invoke-RestMethod -Uri "https://endpoints.office.com/version/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7" | Select-Object @{Label="VERSION";Expression={($_.Latest)}},@{Label="DATE";Expression={($DATE)}} | Export-Csv [.\daily-export.csv] -NoTypeInformation -Append
# SAVE TO BLOB (re-using the storage context from above)
Set-AzureStorageBlobContent -File [.\daily-export.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
#endregion
#region OUTPUT
Write-Output "SCRIPT-COMPLETE"
#endregion

 

  • Step2-CheckGlobalVersion = Again, follow your own naming convention
  • This is a PowerShell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion
#region CONNECT
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion
#region IMPORT CSV FILE FROM BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Get-AzureStorageBlob -Context $storageContext -Container "[name here]" | Get-AzureStorageBlobContent -Destination . -Context $storageContext -Force
#endregion
#region CHECK IF THERE IS A DIFFERENCE IN THE VERSION
$ExportedCsv = Import-Csv [.\daily-export.csv]
$Last = $ExportedCsv | Select-Object -Last 1 -ExpandProperty Version # Last value in Version column
$SecondLast = $ExportedCsv | Select-Object -Last 1 -Skip 1 -ExpandProperty Version # Second last value in Version column
If ($Last -gt $SecondLast) {
Write-Output 'NEW-VERSION-FOUND' # This is the string the Flow condition looks for
}
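
To illustrate what that comparison does, assume the accumulated daily-export.csv contains rows like the hypothetical ones below. The version published by the web service is a date-based string (for example 2018062900), so a simple string comparison is enough to spot a newer value:

# Hypothetical contents of daily-export.csv after a few runs of runbook #1:
#   "VERSION","DATE"
#   "2018062900","2018-08-01"
#   "2018062900","2018-08-02"
#   "2018070200","2018-08-03"
'2018070200' -gt '2018062900' # True, so runbook #2 writes NEW-VERSION-FOUND
'2018062900' -gt '2018062900' # False, so there is no output and Flow goes down the "No" branch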

 

  • Step3-GetURLsAndIPAddresses = Again, follow your own naming convention
  • This is a PowerShell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion
#region CONNECT
# Connect in the same way as the previous two runbooks
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion
#region EXECUTE PROCESS TO DOWNLOAD NEW VERSION
# Query the web service and save the IP addresses for each category into its own CSV file
$endpoints = Invoke-RestMethod -Uri "https://endpoints.office.com/endpoints/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7"
$endpoints | Foreach {if ($_.category -in ('Optimize')) {$_.IPs}} | Sort-Object -Unique | Out-File [.\OptimizeFile.csv]
$endpoints | Foreach {if ($_.category -in ('Allow')) {$_.IPs}} | Sort-Object -Unique | Out-File [.\AllowFile.csv]
$endpoints | Foreach {if ($_.category -in ('Default')) {$_.IPs}} | Sort-Object -Unique | Out-File [.\DefaultFile.csv]
# Copy the three CSV files up to Azure Blob
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Set-AzureStorageBlobContent -File [.\OptimizeFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
Set-AzureStorageBlobContent -File [.\AllowFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
Set-AzureStorageBlobContent -File [.\DefaultFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
#endregion
#region OUTPUT
Write-Output "SCRIPT COMPLETE"
#endregion
  • Note that we don’t need to import the complete set of AzureRM PowerShell modules
  • If you do something “lazy” like that, you’ll find there are a whole lot of dependencies in Azure Automation
    • You’ll need to manually add all the sub-modules, which is very time consuming
Step 5 – Microsoft Flow
  • With my service account having a Flow license, I created my Flow there
  • This means that I can pass this onto Managed Services to run with and maintain going forward
  • I started with a blank Flow
  • I added a schedule
    • The schedule runs at 6am every day

  • Step 1 is to add in an Azure Automation Create Job task
    • This is to execute the Runbook “Step1-GetGlobalVersion”
    • Flow will try and connect to Azure with our Service account
    • Because we added all the relevant permissions earlier in Azure, the Resource Group and downstream resources will come up automatically
    • Enter in the relevant details

  • Step 2 is to add in another Azure Automation Create Job task
    • This is to execute the Runbook “Step2-CheckGlobalVersion”
    • Again, Flow will connect and allow you to select resources that the service account has Contributor permissions to

  • Step 3 is to add in an Azure Automation Get Job Output
    • This is to grab the Output data from the previous Azure Automation runbook
    • The details are pretty simple
    • I selected the “JobID” from the Step 2 Azure Automation runbook job

  • Step 4 is where things get interesting
  • This is a Flow Condition
  • This is where we specify the logic: if the value “NEW-VERSION-FOUND” appears in the content of the Output from the Step 2 job, do something; otherwise, do nothing (a conceptual sketch follows below)
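
Expressed as plain PowerShell, purely to illustrate the logic the condition implements (the condition itself is configured in the Flow designer, not with code):

# Conceptual only: what the Flow condition does with the runbook output
$jobOutput = "NEW-VERSION-FOUND" # content returned by the Step 3 "Get Job Output" action
if ($jobOutput -match "NEW-VERSION-FOUND") {
    # "If yes" branch: run Runbook #3, grab the CSVs from Blob and email them out
} else {
    # "If no" branch: send the "no change today" notification email
}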

  • Step 5 is where I added in all the “IF YES” flow to Do something because we have an output of “NEW-VERSION-FOUND”
  • The first sub-process is another Azure Automation Create Job task
  • This is to execute the Runbook “Step3-GetURLsAndIPAddresses”
  • Again, Flow will connect and allow you to select resources that the service account has Contributor permissions to

  • Step 6 is to create 3 x Get Blob Content actions
  • This will allow us to connect to Azure Blob storage and grab the 3x CSV files that the previous step’s runbook created in Blob
  • We’re doing this so we can embed them in an email we’re going to send to relevant parties in need of this information

  • Step 7 is to create an email template
  • As we added an Exchange Online license to our service account earlier, we’ll have the ability to send email from the service account’s mailbox
  • The details are pretty straight forward here:
    • Enter in the recipient address
    • The sender
    • The subject
    • The email body
      • This can be a little tricky, but, I’ve found that if you enable HTML (last option in the Send An Email action), you can use <br> or line break to space out your email nicely
    • Lastly, we’ll attach the 3x Blobs that we picked up in the previous step
    • We just need to manually set the file name for each email attachment
    • Then select the Content via the Dynamic Content option
      • Note: if you see this error “We can’t find any outputs to match this input format.Select to see all outputs from previous actions.” – simply hit the “See more” button
      • The See more button will show you the content option in the previous step (step 6 above)

  • Step 8 is to go over to the If No condition
  • This is probably optional because, as the old saying goes, “no news is good news”
  • However, for the purposes of easily tracking how often changes happen, I thought I’d email the service account and store a daily email if no action was taken
    • I’ll probably CC myself here as well to keep an eye on Flow and make sure it’s running
    • I can use inbox rules to move the emails out of my inbox and into a folder to streamline it further and keep my inbox clean
  • The details are pretty much the same as the previous Step 7
    • However, there’s no attachments required here
    • This is a simple email notification where I entered the following in the body: “### NO CHANGE IN O365 URLs and IP ADDRESSES TODAY ###”


 

Final words

Having done many Office 365 email migrations, I’ve come to use PowerShell and CSVs quite a lot to make my life easier when there are thousands of records to work with. This process draws on that experience and that speed of putting a solution together using CSV files. I’m sure there are better ways to streamline that component, for example by using Azure Table Storage.
I’m also sure there are better ways of storing credential information, which, for the time being, isn’t a problem while I work out this new process. The overall governance will get ironed out and I’ll likely leverage the Azure Automation credential store, or even Azure Key Vault.
If you, dear reader, have found a more streamlined and novel way to achieve this that requires even less effort to setup, please share!
Best,
Lucian
#WorkSmarterNotHarder
 
