Authoring Identities in SailPoint IdentityNow via the API and PowerShell

Introduction

A key aspect of any Identity Management project is having an Authoritative Source for Identity. Typically this is a Human Resources system. But what about identity types that aren’t in the authoritative source: external vendors, contingent contractors, and identities used by End User Computing systems such as Privileged Accounts, Service Accounts and Training Accounts?

Some Identity Management solutions allow you to author identity through their portals and provide a nice GUI to create a user/training/service account. SailPoint IdentityNow doesn’t have that functionality. It does, however, have an API, and in this post I’ll show you how you can use it to author identity into IdentityNow.

Overview

So, now you’re thinking great, I can author Identity into IdentityNow via the API. But, am I supposed to get managers to interface with an API to kick off a workflow to create identities? Um, no. I don’t think we want to be giving them API access into our Identity Management solution.

The concept is this. An Identity Request WebApp would collect the necessary information for the identities to be authored and facilitate the creation of them in IdentityNow via the API. SailPoint kindly provide a sample project that does just that. It is available on Github here. Through looking at this project and the IdentityNow API I worked out how to author identity via the API using PowerShell. There were a few gotchas I had to work through so I’m providing a working example for you to base a solution around.

Getting Started

There are a couple of things to note.

  • Obviously you’ll need API access
  • You’ll want to create a Source that is of the Flat File type (Generic or Delimited File)
    • We can’t create accounts against Directly Connected Sources
  • There are a few attributes that are mandatory for the creation
    • At a minimum I supply id, name, givenName, familyName, displayName and e-mail
    • At an absolute bare minimum you need the following. Otherwise you will end up with an account in IdentityNow that will show as “Identity Exception”
      • id, name, givenName, familyName, e-mail*

* see note below on e-mail/email attribute format based on Source type

Creating a Flat File Source to be used for Identity Authoring

In the IdentityNow Portal under Admin => Connections => Sources select New.

Create New Source.PNG

I’m using Generic as the Source Type. Give it a name and description. Select Continue

New Generic Source.PNG

Assign an Owner for the Source and check the Accounts checkbox. Select Save.

New Source Properties.PNG

From the end of the URL of the now saved new Source, get and record the SourceID. This will be required so that when we create users via the API they are created against this Source.

SourceID.PNG

If we look at the Accounts on this Source we see there are obviously none.

Accounts.PNG

We’d better create some. But first you need to complete the configuration for this Source. Go and create an Identity Profile for this Source, and configure your Identity Mappings as per your requirements. This is the same as you would for any other IdentityNow Source.

Authoring Identities in IdentityNow with PowerShell

The following script is the bare minimum to use PowerShell to create an account in IdentityNow. Change;

  • line 2 for your Client ID
  • line 4 for your Client Secret
  • line 8 for your Tenant Org Name
  • line 12 for your Source ID
  • the body of the request for the account to be created (lines 16-21)

NOTE: By default on the Generic Source the email attribute is ’email’. By default on the Delimited Source the email attribute is ‘e-mail’. If your identities after executing the script and a correlation are showing as ‘Identity Exception’ then it’s probably because of this field being incorrect for the Source type. If in doubt check the Account Schema on the Source.
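Here’s a minimal sketch of such a script, assuming the v2 accounts endpoint and Basic Authentication with your API Client ID and Secret (the org name, Source ID and account values below are placeholders to change):

# Minimal sketch - create an account on a Flat File Source via the IdentityNow API
# Assumes the v2 accounts endpoint; Client ID/Secret, org name and Source ID are placeholders
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'
$Bytes = [System.Text.Encoding]::UTF8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

$org = 'yourOrgName'
$sourceID = '12345'
$URI = "https://$($org).api.identitynow.com/v2/accounts?sourceId=$($sourceID)"

# Account to author - at a bare minimum id, name, givenName, familyName and email
# NOTE: use 'e-mail' instead of 'email' on a Delimited Source (check the Account Schema)
$account = @{
    id          = 'RSanchez'
    name        = 'RSanchez'
    givenName   = 'Rick'
    familyName  = 'Sanchez'
    displayName = 'Rick Sanchez'
    email       = 'rick.sanchez@customer.com.au'
}

# Sent as form data here; if your tenant expects JSON, use ($account | ConvertTo-Json)
# and add -ContentType 'application/json'
Invoke-RestMethod -Method Post -Uri $URI -Headers @{Authorization = "Basic $($encodedAuth)"} -Body $account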

Execute the script and refresh the Accounts page. You’ll see we now have an account for Rick.

Rick Sanchez.PNG

Expanding Rick’s account we can see the full details.

Rick Full Details.PNG

Testing it out for a Bulk Creation

A few weeks ago I wrote this post about generating user data from public datasets. I’m going to take that and generate 50 accounts. I’ve added additional attributes to the Account Schema (for suburb, state, postcode, street). Here is a script combining the two.
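In rough terms the bulk load is just a loop around the same account-creation call. Here’s a hedged sketch, assuming the generated users have been exported to a CSV with matching column names, and re-using $URI and $encodedAuth from the earlier sketch:

# Rough sketch - bulk author accounts from a CSV of generated users
# Assumes CSV columns FirstName, LastName, Email, Street, Suburb, State, Postcode (placeholders)
$users = Import-Csv -Path .\GeneratedUsers.csv

foreach ($user in $users) {
    $account = @{
        id          = "$($user.FirstName).$($user.LastName)"
        name        = "$($user.FirstName).$($user.LastName)"
        givenName   = $user.FirstName
        familyName  = $user.LastName
        displayName = "$($user.FirstName) $($user.LastName)"
        email       = $user.Email
        street      = $user.Street
        suburb      = $user.Suburb
        state       = $user.State
        postcode    = $user.Postcode
    }
    # Same call as the single-account example above
    Invoke-RestMethod -Method Post -Uri $URI -Headers @{Authorization = "Basic $($encodedAuth)"} -Body $account
}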

Running the script creates our 50 users, in addition to the couple I already had present.

Bulk Accounts Created.PNG

Summary

Using the IdentityNow API we can quickly author identity into SailPoint IdentityNow. That’s the easy bit sorted. Now to come up with a pretty UI and a UX that passes the end-user usability tests. I’ll leave that with you.

 

Remove/Modify Specific AWS Tags from the Environment- PowerShell

Why use TAGs

To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them.

(Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html)

Problem :

Sometimes tags are applied in environments prior to developing a tagging strategy. The problem increases exponentially with the size of the environment and the number of users creating resources.

We were looking for a solution to remove specific unwanted tags from EC2 instances, or to modify tag values that are incorrect.

For this purpose, the script below was developed to solve the problem for AWS.

Solution :

The script below performs the following tasks:

  • Get the list of all the EC2 instances in the tenant
  • Loop through all the EC2 instances
  • Get values of all the tags in the environment
  • Check each Tag Key and Tag Value.
  • Modify or remove the tag value (based on requirement)

Code:

#Set up the AWS credential profile using the Access Key and Secret Key
Set-AWSCredential -AccessKey AccessKey -SecretKey SecretKey -StoreAs ProfileName

#Get the list of all the EC2 instances in the tenant
$instances = (Get-EC2Instance -ProfileName ProfileName -Region RegionName).Instances

$tagkeytoremove = 'TAG1'           # The tag key to remove / modify
$tagvaluetoremove = 'ChangePlease' # The tag value to remove / modify
$NewTagValue = 'NewTagValue'       # The new tag value

Foreach ($instance in $instances) # Loop through all the instances
{
    $OldTagList = $instance.Tags
    foreach ($tag in $OldTagList) # Loop through all the tags on this instance
    {
        if ($tag.Key -ceq $tagkeytoremove -and $tag.Value -ceq $tagvaluetoremove) # Case-sensitive compare of the tag key and value
        {
            # Remove the old tag key/value pair from this instance (note: $instance, not the whole $instances collection)
            Remove-EC2Tag -Resource $instance.InstanceId -Tag $tag -ProfileName ProfileName -Region RegionName -Force
            # Add the new tag key/value pair
            New-EC2Tag -Resource $instance.InstanceId -Tag @{ Key = $tag.Key; Value = $NewTagValue } -ProfileName ProfileName -Region RegionName -Force
        }
    }
} # Loop ends

 

Kicking Things Off – Writing the Right SOW

It’s one thing to convert a conversation around a broad scope of work into a well-defined and articulated 3 to 4-page proposal (sometimes 20+, depending on whose template you’re using); it’s another thing for a client or customer to read through this document, often multiple times due to a review and response cycle, before finally agreeing to it.

Most don’t enjoy this process. Client stakeholders usually look for a few key things when it comes to the SOW: price, time (hours) and key dates. Other parts are usually skimmed over or can be missed altogether, at least in my experience.

While the above might be nothing new, perhaps it’s time to ask ourselves whether there’s a better way of doing this – can the client business owner, or a nominated stakeholder on behalf of the business owner, be more collaboratively involved in the SOW writing process so that a unified goal between supplier and customer can be achieved?

Absolutely.

Recent engagements with various stakeholders have made me realise, as a business analyst, how crucial this aspect of the project is, and how it can at times be a sore point to reference back to when the project is in flight and expectations don’t align with what is in writing. Therefore, entering an engagement with the mindset of getting the semantics right from the get-go might save the quality of delivery from being hindered down the line.

Usually engaging in potential work with a client involves a conversation – not a sales pitch, just simply talking it through. What follows from here for effective SOW writing is what underpins any good collaborative effort – a channel of clear and responsive communication.

It’s best to idealise this process in stages:

  • After the initial discussion, send an email to the prospective client containing a skeleton SOW that simply outlines the conversation that was had. This reaffirms the context and conveys listening and understanding from you as the potential solution/service provider. If the engagement is with a new client, convey some understanding of the context around the company
  • Avoid fluffing it out with proposal-sounding adjectives and dialogue; keep it no-nonsense and to-the-point
  • Work with the client to clearly define what is expected to be delivered and how long it could potentially take, based on flexibility or constraints on resources for both sides
  • Define what it will cost based on all the known variables – avoid basing this on ambiguity or pre-gap analysis of the outlined work at this stage.
  • Add value to the SOW by considering if there’s a better way to do the proposed work. This is something I’ve found that Kloud always maintains when approaching a piece of work

By defining the ‘pre-sales’ in this way, and by communicating effectively with your client during a proposal, the ‘joyless’ experience of writing a SOW (as it can be commonly perceived) may be alleviated and play a smaller role in convincing your client to work with you.

It’s refreshing to view this process not as a proposal, but rather as a conversation. After the discovery call, the client should have confidence in us as consultants. The only thing left to deal with now is the project itself.

 

 

 

Use AppKey to change WebApp’s default DNS settings since ASE App Services don’t inherit vnet’s DNS settings

Recently I helped a customer with an App Service implementation. The web app was deployed in an isolated App Service Environment (ASE) connected to enterprise VNets spanning on-prem servers and Azure subscriptions. When the application tried to connect to the on-prem SQL DBs, it threw an exception – the SQL DB name couldn’t be resolved. I checked the ASE VNet’s DNS settings and they all looked good to me, pointing to the correct internal DNS servers. So what caused the issue?

Firstly, I worked with the app team to test using the DB server IP instead of the DB server name and re-run the app. It successfully connected and retrieved SQL data, which confirmed that the issue was related to name resolution (most likely DNS) rather than network access.

Next, I tried to verify the traffic flow. Through the Kudu console I ran “nslookup” in the cmd console and could see the result below: the web app was using the MS DNS server 168.63.129.16 instead of the internal DNS servers. Believe it or not, the web app didn’t inherit the DNS settings from the ASE-connected VNet, which I’m afraid is the root cause.

nslookup result

After researching on the MSDN blog, I found a way to manually override the default DNS settings for the web app: add the App Keys “WEBSITE_DNS_SERVER = <primary DNS server IP>” and “WEBSITE_DNS_ALT_SERVER = <secondary DNS server IP>” under the App Settings. This makes sense, since a web app is a PaaS application and we don’t have control of the underlying infrastructure.
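If you prefer to script the change, here’s a minimal sketch using the Az PowerShell module (the resource group, app name and DNS IPs are placeholders; note that Set-AzWebApp replaces the whole app settings collection, so the existing settings are carried across first):

# Sketch: add WEBSITE_DNS_SERVER / WEBSITE_DNS_ALT_SERVER app settings to a web app
# Placeholders: resource group, app name and DNS server IPs
$resourceGroup = 'my-ase-rg'
$appName       = 'my-ase-webapp'

$webApp = Get-AzWebApp -ResourceGroupName $resourceGroup -Name $appName

# Set-AzWebApp -AppSettings replaces the full collection, so carry the existing settings across
$settings = @{}
foreach ($setting in $webApp.SiteConfig.AppSettings) {
    $settings[$setting.Name] = $setting.Value
}
$settings['WEBSITE_DNS_SERVER']     = '10.0.0.4'   # primary internal DNS server
$settings['WEBSITE_DNS_ALT_SERVER'] = '10.0.0.5'   # secondary internal DNS server

Set-AzWebApp -ResourceGroupName $resourceGroup -Name $appName -AppSettings $settings

# Restart so the new DNS settings take effect
Restart-AzWebApp -ResourceGroupName $resourceGroup -Name $appName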

app setting screenshot

After adding the App Keys “WEBSITE_DNS_SERVER” and “WEBSITE_DNS_ALT_SERVER” with the primary and secondary server IPs and restarting the web app, the web app started to use the correct internal DNS servers to resolve the SQL host names. 😊

 

Common Sonus SBC 1000/2000 Troubleshooting Tips

If you regularly work with Sonus 1000/2000 session border controllers, you may often be sat there scratching your head as to why a simple inbound call from ISDN to Skype for Business won’t ring your test handset.

Before you go and make yourself another cup of coffee and spin up LX, here’s a list of common issues I frequently encounter.

ISDN Channels

Alright, so you’ve set up your signalling groups, your transformation tables are a thing of beauty and your ISDN cables are connected and green. You go to test an inbound call and all you get is a busy signal. What gives?

Be sure to check that you’ve configured the correct number of ISDN channels on the SBC. Your carrier will pick a channel at random, and if you’ve configured 10 channels on the gateway but the carrier is trying to send a call down channel 15, the call will fail.

The flip side of this is ensuring you don’t over-provision the number of channels. An outbound call will fail if the SBC attempts to send a call down channel 21 if you only have 20 channels available.

Skype for Business Servers

Are you having intermittent outbound call issues, or has your Skype for Business environment recently grown? Don’t forget to add the additional mediation servers to your SIP Server Tables and Signalling groups on the SBC! If a call happens to originate from a server not added to either of these lists, it’ll fail.

Calls not releasing upon hangup

You may notice (particularly with CAS analogue lines) that when a caller hangs up the phone, the line remains in release mode. Your users may not immediately notice the issue unless they go to make another call right after the previous one – only to receive a busy tone.

This issue is normally caused by an incorrectly configured tone table. There are various places around the internet to find suitable tone tables for your carrier and country that a quick Google search will locate. Be sure to update your tone tables and assign the correct tables to the correct signalling groups.

Finding an unknown called number

Lift alarms, security gates, door controls – most sites have them, and you can be pretty guaranteed that they won’t come with details around what numbers they call when a user presses the button.

So far, I’ve encountered systems that dial 10 digits, 9 digits, 5 digits and even 2 digits.

The easiest way to locate these numbers is to build a catch-all rule:

Called Number (.*)   translates to Called Number 0400123456 (your mobile number).

Have someone press the button to trigger an outbound call and then use the monitoring tab to capture the phone number dialled. Once you have it, you can then build your transformation rule to capture and transform it to any number you like! (Pizza Hut, anyone?)

Sending a call to two analogue extensions at the same time

A simple request that other phone systems can manage easily. You want to route an inbound call to ring on two analogue handsets at the same time, or maybe you have a loud ringer and an analogue handset that must ring at the same time.

The easiest way of achieving this is to use an RJ12 (or 11) splitter connected directly to the CAS port on the SBC. You can then connect up two devices to the one port, and both will ring at the same time.

You should note that the FXS cards on a 1K/2K will deliver up to 45 volts to up to 3 devices at once (REN 3).

Making sure you’re not on DND

I’ll admit it. This one has caught me out more than I’d like to admit. If your handset or Skype for Business account is set to Do Not Disturb (DND), no one will be able to call you!

 

Do you have a tip that you’d like to share? Leave a comment below!

Reporting on SailPoint IdentityNow Identities using the ‘Search’ (Beta) API and PowerShell

Introduction

SailPoint recently made their new Search functionality available in BETA. There’s some great documentation around using the Search functions through the IdentityNow Portal on Compass^.

^ Compass Access Required

Each of those articles is great, but they are centred around performing the search via the Portal. For some of my needs I need to do it via the API, and that’s what I’ll cover in this post.

*NOTE: Search is currently in BETA. There is a chance some functionality may change. SailPoint advise to not use this functionality in Production whilst it is in Beta.  

Enabling API Access

Under Admin => Global => Security Settings => API Management select New and give the API Account a Description.

New API Client.PNG

Client ID and Client Secret

ClientID & Secret.PNG

In the script to access the API we will take the Client ID and Client Secret and encode them for Basic Authentication to the IdentityNow Search API. To do that in PowerShell use the following example replacing ClientID and ClientSecret with yours.

$clientID = 'abcd1234567'
$clientSecret = 'abcd12345sdkslslfjahd'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

Searching

With API access now enabled we can start building some queries. There are two methods I’ve found. Using query strings on the URL and using JSON payloads as an HTTP Post. I’ll give examples of both.

PowerShell Setup

Here is the base of all my scripts for using PowerShell to access the IdentityNow Search.

Change;

  • line 3 for your Client ID
  • line 5 for your Client Secret
  • line 10 for your IdentityNow Tenant Organisation name (by default the host portion of the URL e.g https://orgname.identitynow.com )
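As a starting point, here’s a minimal sketch of that base (Client ID, Client Secret and org name are placeholders; the line numbers in the list above refer to the original script and won’t line up exactly with this sketch):

# Base setup for the IdentityNow Search (Beta) API examples that follow
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'

# Encode the Client ID and Secret for Basic Authentication
$Bytes = [System.Text.Encoding]::UTF8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

# IdentityNow tenant org name (the host portion of your IdentityNow URL)
$org = 'yourOrgName'

# Maximum number of results to return per search call - adjust as required
$searchLimit = 250

# v2 Search API base URI for identities
$URI = "https://$($org).api.identitynow.com/v2/search/identities?"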

Searching via URL Query String

First we will start with searching by having the query string in the URL.

Single attribute search via URL

$query = 'firstname EQ Darren'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Single Attribute URL Search.PNG

Multiple attribute search via URL

Multiple criteria queries need to be constructed carefully. The query below just looks wrong, yet if you place the quotes where you think they should go, you don’t get the expected results. The following works.

$query = 'attributes.firstname"="Darren" AND attributes.lastname"="Robinson"'

and it works whether you Encode the URL or not

# [System.Web.HttpUtility] requires the System.Web assembly in Windows PowerShell
Add-Type -AssemblyName System.Web
$queryEncoded = [System.Web.HttpUtility]::UrlEncode($query)
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($queryEncoded)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Multiple Attribute Query Search.PNG

Here is another search, based on identities having a connection to a source containing the word ‘Directory’ AND having fewer than 5 entitlements

$URI = "https://$($org).api.identitynow.com/v2/search/identities?"
$query = '@access(source.name:*Directory*) AND entitlementCount:<5'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Multiple Attribute Query Search2.PNG

Searching via HTTP Post and JSON Body

Now we will perform similar searches, but with the search strings in the body of the HTTP Request.

Single attribute search via POST and JSON Based Body Query

$body = @{"match"=@{"attributes.firstname"="Darren"}}
$body = $body | convertto-json 
$Accounts = Invoke-RestMethod -Method POST -Uri "$($URI)limit=$($searchLimit)" -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body
Single Attribute JSON Search.PNG

Multiple attribute search via POST and JSON Based Body Query

If you want to have multiple criteria and submit it via a POST request, this is how I got it working. For each part I construct it and convert it to JSON and build up the body with each search element.

$body1 = @{"match"=@{"attributes.firstname"="Darren"}}
$body2 = @{"match"=@{"attributes.lastname"="Robinson"}}
$body = $body1 | ConvertTo-Json
$body += $body2 | ConvertTo-Json
$Accounts = Invoke-RestMethod -Method POST -Uri "$($URI)limit=$($searchLimit)" -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body
Multiple Attribute JSON Search.PNG

Getting Full Identity Objects based off Search

Lastly, now that we’ve been able to build queries via two different methods and have the results we’re looking for, let’s output some relevant information about them. We will iterate through each of the returned results and output some specifics about their sources and entitlements. Same as above, update for your ClientID, ClientSecret, Orgname and search criteria.
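Here’s a rough sketch of that output loop, re-using $URI, $searchLimit and $encodedAuth from the base script. The property names on the returned identity objects are assumptions, so inspect one result and adjust to suit:

# Rough sketch - iterate the search results and output source / entitlement details
$query = 'attributes.firstname"="Darren" AND attributes.lastname"="Robinson"'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

foreach ($identity in $Accounts) {
    # displayName, accounts and entitlementCount are assumed property names -
    # dump a single result first (e.g. $Accounts[0] | ConvertTo-Json -Depth 5) to confirm
    Write-Output "Identity: $($identity.displayName)"
    foreach ($account in $identity.accounts) {
        Write-Output "  Source: $($account.source.name)"
    }
    Write-Output "  Entitlement count: $($identity.entitlementCount)"
}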

Extended Information.PNG

Summary

Once you’ve enabled API access and understood the query format it is super easy to get access to the identity data in your IdentityNow tenant.

My recommendation is to use the IdentityNow Search function in the Portal to refine your searches for what you are looking to return programmatically, and then use the API to get the data for whatever purpose you need it.

Creating custom Deep Learning models with AWS SageMaker


This blog will cover how to use SageMaker, and I’ve included the code from my GitHub, https://github.com/Steve–Hunter/DeepLens-Safety-Helmet.

1 What is AWS SageMaker?

AWS (Amazon Web Services) SageMaker is “a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.” (https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html). In other words, SageMaker gives you a one-stop-shop to get your Deep Learning models going, in a relatively friction-less way.

Amazon have tried hard to deliver a service that supports the life-cycle of developing models (which are the results of training), enabling Deep Learning to complete the virtuous circle of gathering data, training a model, deploying it, and improving it as more data arrives.

Data can cover text, numeric, images, video – the idea is that the model gets ‘smarter’ as it learns more of the exceptions and relationships in being given more data.

SageMaker provides Jupyter Notebooks as a way to develop models; if you are unfamiliar, think of Microsoft OneNote with code snippets, you can run (and re-run) a snippet at a time, and intersperse with images, commentary, test runs. The most popular coding language is Python (which is in the name of Jupyter).

2 AI / ML / DL ?

I see the phrases AI (Artificial Intelligence), Machine Learning (ML) and Deep Learning used interchangeably; this diagram shows the relationship:



(from https://www.geospatialworld.net/blogs/difference-between-ai%EF%BB%BF-machine-learning-and-deep-learning/)

So I see AI as encompassing most things not yet possible (e.g. Hollywood ‘killer robots’). Deep Learning has attracted attention as it permits “software to train itself”; this is contrary to all previous software, which required a programmer to specifically tell the machine what to do. What makes that hard is that it is very difficult to foresee everything that could come up, and almost impossible to code for exceptions from ‘the real world’. An example of this is machine vision, where conventional ‘rule-based’ programming logic can’t be applied, or if you try, only works in very limited circumstances.

This post will cover the data and training of a custom model to identify people wearing safety helmets (like those worn on a construction site), and a future post will show how to load this model into an AWS DeepLens (please see Sam Zakkour’s post on this site). A use case for this would be getting something like a DeepLens to identify workers at a construction site that aren’t wearing helmets.

3 Steps in the project

This model will use a ‘classification’ approach, and only have to decide between people wearing helmets, and those that aren’t.

The project has 4 steps:

  • Get some images of people wearing and not wearing helmets
  • Store images in a format suitable for Deep Learning
  • Fine tune an existing model
  • Test it out!

3.1 Get some images of people wearing and not wearing helmets

The hunger for data to feed Deep Learning models has led to a number of online resources that can supply data. A popular one is Imagenet (http://www.image-net.org/), with over 14 million images in over 21,000 categories. If you search for ‘hard hat’ (a.k.a ‘safety helmet’) in Imagenet:

Your query returns:

The ‘Synset’ is a kind of category in Imagenet, and covers the inevitable synonyms such as ‘hard hat’, ‘tin hat’ and ‘safety hat’.

When you expand this Synset, you get all the images; we need the parameter in the URL that uniquely identifies these images (the ‘WordNet ID’) to download them:

Repeat this for images of ‘people’.

Once you have the ‘WordNet ID’ you can use this to download the images. I’ve put the code from my Jupyter Notebook here if you want to try it yourself https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/1.%20Download%20ImageNet%20images%20by%20Wordnet%20ID.ipynb

I added a few extras in my code to:

  1. Count the images and report on progress
  2. Continue on a bad image (one poisoned my .rec image file!)
  3. Parameterise the root folder and class for images

This saves the images to the SageMaker server in AWS, where they are picked up by the next stage …

3.2 Store images in a format suitable for Deep Learning

It would be nice if we could just feed in the images as JPEGs, but most image-processing frameworks require the images to be pre-processed, mainly for performance reasons (disk IO). AWS uses MXNet a lot, and so that’s the framework I used; its storage format is called ImageRecord, or RecordIO. You can read more about it here https://gluon-cv.mxnet.io/build/examples_datasets/recordio.html, and the Jupyter Notebook is here https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/2.%20Store%20images%20into%20binary%20recordIO%20format%20for%20MXNEt.ipynb .

The utility to create the ImageRecord format also splits the images into

  • a set of training and testing images
  • images that show wearing and not wearing helmets (the two categories we are interested in)

It’s best practice to train on one set of images and test on another, in a ratio of around 70:30. This avoids the deep learning curse of ‘over-fitting’, where the model hasn’t really learned ‘in general’ what people wearing safety helmets look like, only the ones it has seen already. This is the really cool part of deep learning: it really does learn, and can tell from an unseen image whether there is a person (or people) wearing a safety helmet!

The two ImageRecord files for training and testing are stored in SageMaker, for the next step …

3.3 Fine tune an existing model

One of my favourite sayings is Isaac Newton’s “If I have seen further it is by standing on the shoulders of Giants”, and this applies to Deep Learning; in this case the ‘Giants’ are Google, Microsoft etc., and the ‘standing on’ is the open source movement. You could train your model on all 14 million images in Imagenet, taking weeks and an immense amount of compute power (which only the likes of Google/Microsoft can afford, though they generously open source the trained models), but a neat trick in deep learning is to take an existing model that has been trained and ‘re-purpose’ it for what you want. There may not be a pre-trained model for the images you want to identify, but you can find something close enough and train it on just the images you want.

There are so many pre-trained models that the MXNet framework refers to them as a ‘model zoo’. The one I used is called ‘Squeezenet’ – there are competitions to find the model that can perform best, and Squeezenet gives good results and is small enough to load onto a small device like a DeepLens.

So the trick is to start with something that looks like what we are trying to classify; Squeezenet has two existing categories for helmets, ‘Crash helmet’ and ‘Football helmet’.

When you use the model ‘as is’, it does not perform well and gets things wrong – telling it to look for ‘Crash Helmets’ in these images, it thinks it can ‘see’ them. The two sets of numbers below each represent the probability of the corresponding image containing a helmet: both numbers are percentages, the first being the prediction that there is a helmet, the second that there is not.


Taking ‘Crash helmet’ as the starting point, I re-trained (this is also called ‘fine tuning’ or ‘transfer learning’) the last part of the model (the purple one on the far right) to learn what safety helmets look like.

The training took about an hour, on an Amazon ml.t2.medium instance (free tier) and I picked the ‘best’ accuracy, you can see the code and runs here: https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/3.%20Fine%20tune%20existing%20model.ipynb

3.4 Test it out!

After training, things improve a lot – in the first image below, the model is now 96% certain it can see safety helmets, and in the second it is 98% certain there are none.

What still ‘blows my mind’ is that there are multiple people in the image – the training set contained individuals, groups, different lighting and helmet colours – imagine trying to ‘code’ for this in a conventional way! But the model has learned the ‘helmet-ness’ of the images!

You can give the model an image it has never seen (e.g. me wearing a red safety helmet, thanks fire warden!):

4 Next

My GitHub goes on to cover how to deploy to a DeepLens (still working on that), and I’ll blog later about how that works, and what it could do if it ‘sees’ someone not wearing a safety helmet.

This example is a simple classifier (‘is’ or ‘is not’ … like the ‘Hotdog, not hotdog’ episode of ‘Silicon Valley’), but it could cover many different categories, or be trained to recognise people’s faces from a list.

The same method can be applied to numeric data (e.g. find patterns to determine if someone is likely to default on a loan), and with almost limitless cloud-based storage and processing, new applications are emerging.

I feel that the technology is already amazing enough, we can now dream up equally amazing use cases and applications for this fast moving and evolving field of deep learning!

Azure Application Gateway WAF tuning

The Azure Application Gateway has a Web Application Firewall (WAF) capability that can be enabled on the gateway. The WAF will use the OWASP ModSecurity Core Rule Set 3.0 by default and there is an option to use CRS 2.2.9.

CRS 3.0 offers reduced occurrences of false positives over 2.2.9 by default. However, there may still be times when you need to tune your WAF rule sets to avoid false positives in your site.

Blocked access to the site

The Azure WAF filters all incoming requests to the servers in the backend of the Application Gateway. It uses the ModSecurity Core Rule Sets described above to protect your sites against various items such as code injections, hack attempts, web attacks, bots and mis-configurations.

When the rule threshold is reached on the WAF, access is denied to the page and a 403 error is returned. In the screenshot below we can see that the WAF has blocked access to the site, and when viewing the page in Chrome tools under Network -> Headers we can see that the Status Code is 403 ModSecurity Action.

403

Enable WAF Diagnostics

To be able to view more information on the rules that are being triggered on the WAF you will need to turn on Diagnostic Logs; you do this by adding a diagnostic setting. There are different options for configuring the diagnostic settings, but in this example we will direct them to an Azure Storage Account.

diagnosticsettings
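If you prefer to script this rather than click through the portal, a minimal sketch using the Az PowerShell modules looks like the following (the resource group, gateway and storage account names are placeholders):

# Sketch: send Application Gateway WAF firewall logs to a storage account
# Placeholders: resource group, Application Gateway name and storage account name
$resourceGroup  = 'my-appgw-rg'
$appGw          = Get-AzApplicationGateway -ResourceGroupName $resourceGroup -Name 'my-appgw'
$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name 'mydiaglogs'

# Enable the firewall log category and direct it to the storage account
Set-AzDiagnosticSetting -ResourceId $appGw.Id `
    -StorageAccountId $storageAccount.Id `
    -Enabled $true `
    -Category 'ApplicationGatewayFirewallLog' `
    -Name 'appgw-waf-diagnostics'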

Viewing WAF Diagnostic Logs

Now that diagnostic logging for the WAF is enabled and directed to a storage account, we can browse to the storage account and view the log files. An easy way to do this is to download the Azure Storage Explorer. You can then use it to browse the storage account, where you will see 3 containers that are used for Application Gateway logging.

  • insights-logs-applicationgatewayaccesslog
  • insights-logs-applicationgatewayfirewalllog
  • insights-logs-applicationgatewayperformancelog

The container that we are interested in for the WAF logs is the insights-logs-applicationgatewayfirewalllog container.

Navigate through the container until you find the PT1H.json file. This is the hourly log of firewall actions on the WAF. Double click on the file and it will open in the application set to view json files.

storageexplorer

Each entry in the WAF log will include information about the request and why it was triggered, such as the ruleId and message details. In the sample log below there are 2 highlighted entries.

The message details for the first highlighted log entry indicate the following: “Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score”.

So we can see that when the anomaly threshold of 5 was reached the WAF triggered the 403 ModSecurity action that we initially saw from the browser when trying to access the site. It is also important to notice that this particular rule cannot be disabled, and it indicates that it is an accumulation of rules being triggered.

The second rule indicates that a file with extension .axd is being blocked by a policy.

waflog
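If you’d rather not eyeball the raw file, a quick PowerShell sketch can pull the interesting fields out of a downloaded PT1H.json. The record layout can vary, so treat the property names below as assumptions and adjust them to what your file actually contains:

# Sketch: summarise WAF firewall log entries from a downloaded PT1H.json
# Storage diagnostic logs are typically one JSON record per line; adjust if your
# file wraps the records in an array instead
$entries = Get-Content -Path .\PT1H.json |
    Where-Object { $_ } |
    ForEach-Object { $_ | ConvertFrom-Json }

# ruleId, action and message are assumed property names under 'properties'
$entries |
    Select-Object -Property @{ Name = 'RuleId';  Expression = { $_.properties.ruleId } },
                            @{ Name = 'Action';  Expression = { $_.properties.action } },
                            @{ Name = 'Message'; Expression = { $_.properties.message } } |
    Format-Table -AutoSize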

Tuning WAF policy rules

Each of the WAF log entries that are captured should be carefully reviewed to determine if they are valid threats. If after reviewing the logs you are able to determine that the entry is a false positive or the log captures something that is not considered a risk you have the option to tune the rules that will be enforced.

From the Web Application Firewall section within the Application Gateway you have the following options:

  • Enable or Disable the WAF
  • Configure Detection or Prevention modes for the WAF
  • Select rule set to use
  • Customize rule configuration

In the example above, if we were to decide that the .axd file extension is valid and allowed for the site, we could search for ruleId 920440 (the CRS 3.0 ‘URL file extension is restricted by policy’ rule) and un-select it.

Once the number of rules being triggered drops below the inbound anomaly threshold, the 403 ModSecurity Action will no longer prevent access to the site.
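Rule customisation can also be scripted. The following is a hedged sketch using the Az.Network cmdlets that disables CRS 3.0 rule 920440 (‘URL file extension is restricted by policy’, in rule group REQUEST-920-PROTOCOL-ENFORCEMENT) as the example; substitute the ruleId and rule group reported in your own firewall logs:

# Sketch: disable a specific OWASP CRS 3.0 rule on the Application Gateway WAF
# Example rule: 920440 in rule group REQUEST-920-PROTOCOL-ENFORCEMENT - replace with
# the rule and group from your own logs; gateway and resource group names are placeholders
$appGw = Get-AzApplicationGateway -ResourceGroupName 'my-appgw-rg' -Name 'my-appgw'

$disabledGroup = New-AzApplicationGatewayFirewallDisabledRuleGroupConfig `
    -RuleGroupName 'REQUEST-920-PROTOCOL-ENFORCEMENT' `
    -Rules 920440

Set-AzApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appGw `
    -Enabled $true `
    -FirewallMode 'Prevention' `
    -RuleSetType 'OWASP' `
    -RuleSetVersion '3.0' `
    -DisabledRuleGroup $disabledGroup

# Push the updated configuration back to the gateway
Set-AzApplicationGateway -ApplicationGateway $appGw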

For new implementations or during testing you could apply the Detection mode only and view and fine tune the WAF prior to enabling for production use.

waftuning

Creating an Enterprise-Wide Cloud Strategy – Considerations & Benefits

What is a strategy?

Click-cloud-icon

“a plan of action designed to achieve a long-term or overall aim”. A Strategy involves setting goals, determining actions to achieve the goals, and mobilising limited resources to execute the actions.

A good Cloud Strategy is…

  • Specific
  • Timely
  • Prioritised
  • Actionable
  • Tailored

Note – A strategy is different to an organisation’s requirements, which can change over a period of time.

Best practice is to define your strategy so as to maximise the benefits you achieve.

The following items can be considered when creating your cloud strategy (Source: NetApp Research recommendation)

  • Categorisation of your workloads: Strategic vs Operational (goals may change over time)
  • Determine which cloud type fits the workload: Type of cloud will be most efficient and cost-effective for delivering that workload (public, private & hybrid)
  • Prioritise workloads for initial projects: Non-critical applications/smaller workloads

Benefits of a Having Cloud Strategy

An enterprise-wide cloud strategy provides a structured way to incorporate cloud services into the IT mix. It helps make sure that all stakeholders have a say in how, when, and where cloud adoption occurs. It can also offer advantages that you may not have considered. Below are some of the typical benefits (source: NetApp research).

  • Maximise the business benefits of the cloud: cutting costs, improving efficiency, increasing agility, and so on. Those benefits increase when there is a clear cloud strategy in place.
  • Uncover business benefits you might otherwise miss. One frequently overlooked advantage of the cloud is the ability to accelerate innovation. By moving certain functions to the cloud, IT can speed up the process of building, testing, and refining new applications—so teams can explore more ideas, see what works and what doesn’t, and get innovative products and services to market faster.
  • Prepare the infrastructure you’ll need. A well-thought-out cloud strategy provides an opportunity to consider some of the infrastructure needs of the cloud model up front rather than as an afterthought.
  • Retain control in an era of on-demand cloud services. An enterprise-wide cloud strategy can help your business maintain control of how cloud services are purchased and used so that enterprise governance policies and standards can be managed and enforced.

How to avoid a cloud strategy that fails

  • It’s a way to save money: The danger here is that cloud computing is not always cheaper. Services that can be effectively outsourced to “the cloud” to save money are often highly standardised commodity services that have varied demand for infrastructure over time. Cloud computing can save money, but only for the right services.
  • It’s a way to renovate enterprise IT: Not everything requires speed of deployment, or rapid scaling up or down. Some services require significant and unique enterprise differentiation and customisation.
  • It’s a way to innovate and experiment: Cloud computing makes it extremely easy to get started and to pilot new services. The challenge for enterprises is to enable innovation and experimentation, but to have a feasible path from pilots to production, and operational industrialisation.

Source: Forbes.com

In summary, cloud computing does not have a single value proposition for all enterprises and all services.

Recommendation: A cloud computing strategy should include these three approaches,

  1. Define a high-level business case
  2. Define core requirements
  3. Define core technology

and organisations must probe where they can benefit from cloud in various ways.

This will help drive enterprise IT to a new core competency, away from solely being a provider of services and toward being both a provider and a broker of services delivered in a variety of ways for a variety of business values (Hybrid IT).

Sample cloud decision framework

simple decision framework- cloud.jpg

Summary

Make your cloud strategy a business priority. The benefits are real, but don’t make your move to the cloud until you’ve made a serious commitment to creating an enterprise-wide cloud strategy. The time you take to formalise your strategy will pay off in higher cost savings, better efficiency, more agility, and higher levels of innovation.

Research shows that companies with an enterprise-wide cloud strategy are far more successful at using the cloud to reduce costs, improve efficiency, and increase business agility than companies without such a strategy.

I hope you find the above information useful.

Microsoft Azure Consumption Insights with Power BI

In a recent engagement I was tasked with assessing the Azure consumption for a customer who had been exceeding their forecasted budget for the last several months. In a short timeframe I had to make sense of the recon files provided via the Azure Enterprise Portal and present them in a decision-making format that is business-understandable and easy to consume. To understand and optimise the cost, it is important to identify where the cost originated (i.e. which resources and resource groups) and to build the spend pattern by analysing the trends.

We all struggle at some point to find intelligent visuals for the consumption data produced by Azure. Earlier, our options were to play with Excel and be a chart artist, rely on the Microsoft-provided graphs in the Azure Portal, or ingest the data into a database and run analysis tools. To make life easier, we ingested the present and historic data into Microsoft Power BI. This blog discusses the visual capability provided by Power BI that can be used for periodic review of Azure billing data, alongside the Azure-provided Advisor tool.

What is Microsoft Power BI?

Power BI is a free application, available as a desktop install or as a web version, that lets you connect to, transform, and visualise your data. With Power BI Desktop, you can connect to multiple different sources of data and combine them (often called modelling) into a data model that lets you build visuals, and collections of visuals you can share as reports. One of the data sources is Microsoft Azure Consumption Insights, currently in beta.

Step 1: Download Microsoft Power BI Desktop (Figure 1) or use the web version (Figure 2); in this blog we use the web version of Power BI.

1

2

Step 2: Ensure that you have a minimum of reader permission on the Azure Enterprise Portal. An existing Enrolment Administrator who has full control can grant you appropriate permissions.

3

Step 3: Log in to the EA portal and go to the ‘Reporting’ section in the left pane. This will get you to the below screen; go to ‘API Access Key’ and copy the access key. If you do not see an access key, speak to the enrolment administrator to generate one. Once you have the key, make a note of it and keep it in a secure location; you will need it in the following steps.

4

Step 4: While you are logged into the EA portal, make a note of the ‘Enrolment Number’; it is available under ‘Manage’ => Enrolment Details. You will need this in the following steps.

Step 5: Launch the Power BI Desktop app you downloaded or open https://app.powerbi.com/groups/me/getdata/welcome. Click on ‘Get Data’ to connect to the EA portal APIs; in the desktop app it is located in the ‘Home’ menu ribbon, and in the web version it is located at the bottom of the left pane. Once you have launched Get Data, you will see the below window. Go to ‘Online Services’ in the left list, select ‘Microsoft Azure Consumption Insights (Beta)’ in the right list and hit ‘Connect’.

5

Step 6: Once you connect, the new window will ask you to provide the ‘Enrolment Number’ you captured in Step 4. Input that number and hit ‘OK’.

6

Step 7: Now input the API access key you captured in Step 3 and hit ‘Connect’. In the next screen you’ll be presented with multiple options to select from; choose ‘Usage’. Once you have made your choice, it could take a few to several minutes to ingest the data from the portal.

7

Step 8: When the data source is connected you will see consumption data with dashboards for your enterprise. Default data is provided in the following tabs:

  • Usage Trend by Account and Subscriptions graph provides a summary view of enterprise consumption by subscription and the various accounts in the EA portal
  • Usage Trend by Services
  • Top 5 Usage Drivers graph provides a visual for each of the resources by ‘Meter Name’. Meter Name is the identifier used by Microsoft Azure to monitor consumption by resource type for billing purposes. The highlighted period indicated unusual consumption.
  • Usage Summary. When compared with the previous graph (Top 5 Usage Drivers) and the period highlighted in the graph below, analysis confirmed unusual consumption. We drilled into the usage detail for this period, which revealed high usage of premium managed disks, and discussed the change during this period with the customer.
  • Market Place Charges shows any purchases from the Azure Marketplace. In this case, the customer did not have any third-party usage.
  • Usage dashboards based on Tags, Resource Groups and Location continued to point to unusual consumption of premium disks. Usage by tags data is only available if you have tagged your resources in Azure.

Conclusion

After careful observation and detailed analysis of the data ingested from the EA portal, it was concluded that the premium disk usage (discussed earlier in this blog) over a period of several months (highlighted in the earlier figures) had done the major damage to their financials, and that this was due to an application design that consumed premium storage. Fixing the application configuration optimised the disk usage and dropped the cost significantly in the following months, i.e. 2018-06 to 2018-08.

Power BI also provides the capability to download the analysis in PowerPoint format, which helps you present and use the visuals in your reports, similar to the graphs discussed in this blog.
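If you need the raw numbers outside Power BI, the same enrolment number and API access key captured in Steps 3 and 4 can be queried directly from PowerShell. The sketch below assumes the Enterprise consumption REST endpoint and the v2 usage detail field names; verify both against the EA reporting API documentation for your enrolment:

# Sketch: pull raw EA usage detail with the enrolment number and API access key
# The endpoint and field names (data, cost, resourceGroup) are assumptions to verify
$enrolmentNumber = '12345678'          # from Manage => Enrolment Details
$apiAccessKey    = 'yourAPIAccessKey'  # from Reporting => API Access Key

$uri   = "https://consumption.azure.com/v2/enrollments/$enrolmentNumber/usagedetails"
$usage = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $apiAccessKey" }

# Quick look at spend by resource group, largest first
$usage.data |
    Group-Object -Property resourceGroup |
    ForEach-Object {
        [pscustomobject]@{
            ResourceGroup = $_.Name
            Cost          = ($_.Group | Measure-Object -Property cost -Sum).Sum
        }
    } |
    Sort-Object -Property Cost -Descending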

Cloud Services have revolutionised the way Information Technology (IT) departments support their organisations to be more productive without investing huge capital upfront on building servers, procuring software, etc. The key to effective use of cloud services is planning, and one of the key areas to forecast effectively is consumption, to prevent unexpected charges. This section highlights some best practices to make your spend more predictable:

  • To get a better idea of your spend, the Azure pricing calculator can provide an estimate of costs before you provision an Azure resource.
  • The Account Admin for an Azure subscription can use the Azure Billing Alert Service to create customised billing alerts that help you monitor and manage billing activity for your Azure accounts.
  • Apply tags to your Azure resources to logically organise them by categories. Each tag consists of a name and a value. After you apply tags, you can retrieve all the resources in your subscription with that tag name and value.
  • Regular checks of the cost breakdown and burn rate allow you to see the current spend and burn rate in the Azure portal.
  • Consider enabling cost-cutting features like auto-shutdown for VMs where possible. This can be enabled either in the virtual machine configuration or via runbook(s) through an Automation account.
  • Periodic review of Azure Advisor recommendations helps you reduce costs by identifying resources with low usage and possible cost optimisations.
  • Pre-pay for one year or three years of virtual machine or SQL Database compute capacity via Azure reservations for virtual machines or SQL databases that run for long periods of time. Pre-paying gets you a discount on the resources you use and can reduce your virtual machine or SQL Database compute costs by up to 72 percent.