Measure O365 ATP Safe Attachments Latency using PowerShell

Microsoft Office 365 Advanced Threat Protection (ATP) is a cloud-based security service that is part of the O365 E5 offering; it can also be added separately to other O365 subscriptions. A lot can be learned about ATP from here, but in this post we're going to extract data corresponding to one of ATP's primary features: ATP Safe Attachments.

In short, ATP Safe Attachments scans documents for malicious content and can block these attachments depending on the policy configuration. More details about ATP Safe Attachments can be learned from here. Safe Attachments can react to emails with attachments in multiple ways: it can deliver the email without the attachment, with just a thumbnail that tells the user ATP is currently scanning the attachment, or it can delay the message until scanning is complete. Microsoft has been improving ATP Safe Attachments with a lot of new features, like reduced message delays, support for more file formats, dynamic delivery, previewing, etc. What we're interested in for this post is the delay that ATP Safe Attachments introduces when the policy is set to delay emails, as some of the other features may confuse users and would require end-user education if enabled.

But why is there a delay in the first place?

ATP Safe Attachments works by creating a small sandbox and then detonating/opening the attachments in that sandbox. It checks what kinds of changes the attachment makes to the sandbox and also uses machine learning to decide whether the attachment is safe or not. It is a nice feature, and it effectively simulates how users open attachments when they receive them.

So how fast is this thing?

Recently I've been working on a project with a customer who wanted to test ATP Safe Attachments with the policy configured to delay messages rather than deliver and scan at the same time (dynamic delivery). The question they had in mind was: how can we ensure that the ATP Safe Attachments delay is acceptable? (I tried hard to find an official document from Microsoft with the numbers, but couldn't.) The delay is not that big a deal; it is about a minute or two (it used to be a lot longer when ATP was first introduced). That is really fast considering the amount of work that needs to happen in the backend for this to work. Still, the customer wanted it measured, and I had to think of a way to do that. I tried to find a direct way, but for all the advanced reporting that ATP brings to the table, there isn't a report that tells you how long messages actually take to arrive in users' mailboxes when an ATP Safe Attachments policy is in place and set to delay messages.

Let us define 'delay' here. The measurement starts when messages arrive at Exchange Online; you may have other solutions in front of O365, and those don't count. So we are going to measure from the time the message hits Exchange Online Protection (EOP) and ATP until O365 delivers the message to the recipient. Note that the size of the attachment(s) plays a role in the process.

The solution!

This assumes you have the Microsoft Exchange Online PowerShell modules installed. If you have MFA enabled, you need to authenticate in a different way (that was my case, and I will show you how I did it).

First, jump on to the Office 365 Admin Centre (assuming you're an admin), then navigate to the Exchange admin centre. From the left side, search for 'Hybrid', and then click the second 'Configure' button (this one installs the Exchange Online PowerShell modules for users with MFA). Once the file is downloaded, click on it and let it do its magic. When the module window opens, connect with:


Connect-EXOPSSession -UserPrincipalName yourAccount@testcorp.com 

Now go through the authentication process and let PowerShell load the modules into the session.


Now this means all of the modules have been loaded into the PowerShell session. Before we do any PowerShell scripting, let us do the following:

  • Send two emails to an address that has an ATP Safe Attachments policy enabled.
  • The first email is simple and has no attachment.
  • The second contains at least one attachment (e.g. a DOCX or PDF file).
  • The subject shouldn't matter, but for the sake of this test, name the subjects so you can tell which email contains the attachment.
  • Wait 5 minutes. Why? Because ATP Safe Attachments needs to do some magic before releasing the email (assuming the policy is turned ON and set to delay messages only).
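If you prefer to generate the two test messages from PowerShell as well, something like the following works; the SMTP settings, sender address and attachment path below are assumptions for illustration:

```powershell
# Hypothetical SMTP settings - adjust for your environment
$mailParams = @{
    SmtpServer = 'smtp.office365.com'
    Port       = 587
    UseSsl     = $true
    Credential = (Get-Credential -Message 'Sending mailbox')
    From       = 'sender@testcorp.com'
    To         = 'FirstName.LastName@testcorp.com'
}

# Email 1: control message with no attachment
Send-MailMessage @mailParams -Subject 'ATP Test - No Attachment' -Body 'Control message.'

# Email 2: the message ATP Safe Attachments will detonate
Send-MailMessage @mailParams -Subject 'ATP Test - With Attachment' -Body 'Attachment test.' `
    -Attachments 'C:\Temp\TestDocument.docx'
```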

Let us do some PowerShell scripting 🙂

Declare a variable that contains the recipient email address that has ATP Safe Attachments enabled, then use the Get-MessageTrace cmdlet to fetch emails delivered to that address in the last hour.

$RecipientAddress = 'FirstName.LastName@testcorp.com';

$Messages = Get-MessageTrace -RecipientAddress $RecipientAddress -StartDate (Get-Date).AddHours(-1) -EndDate (Get-Date)

Expand the $Messages variable to verify that the two test messages were returned.


Here is what we are going to do next. I wish I could explain each line of code, but that would make this blog post too lengthy, so I'll summarise the steps:

  • Loop over each message.
  • Use the 'Get-MessageTraceDetail' cmdlet to extract the details for each message and filter out the events related to ATP.
  • Create a custom object containing two main properties:
    • The recipient address (of type String).
    • The message trace ID (of type GUID).
  • Remove the temp variables used in the loop.
$Custom_Object = @()
foreach($Message in $Messages)
{
    $Message_RecipientAddress = $Message.RecipientAddress
    $Message_Detail = $Message | Get-MessageTraceDetail | Where-Object -FilterScript {$PSItem.'Event' -eq "Advanced Threat Protection"}
    if($Message_Detail)
    {
        $Message_Detail = $Message_Detail | Select-Object -Property MessageTraceId -Unique
        $Custom_Object += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$Message_RecipientAddress;'MessageTraceId'=$Message_Detail.'MessageTraceId'})
    } # End if ($Message_Detail)
    Remove-Variable -Name Message_Detail,Message_RecipientAddress -ErrorAction SilentlyContinue
} # End foreach message

The expected outcome is a single row. Why? Because we sent only two messages: one with an attachment (captured by the script above) and one without (which has no ATP events).


Just in case you want to run multiple tests, remember to empty your custom object (for example, $Custom_Object = @()) before retesting, so you do not get duplicate results.

The next step is to extract the details of each captured message and measure the time from the first event until the message was delivered to the recipient. Here's what we are going to do:

  • Loop over each item in the custom object.
  • Run 'Get-MessageTraceDetail' again to extract all of the message events, and sort them by date.
  • Measure the time difference between the last event and the first event.
  • Store the result in a new custom object for further processing.
  • Remove the temp variables used in the loop.
$Final_Data = @()
foreach($MessageTrace in $Custom_Object)
{
    $Message = $MessageTrace | Get-MessageTraceDetail | Sort-Object Date
    $Message_TimeDiff = ($Message | Select-Object -Last 1).Date - ($Message | Select-Object -First 1).Date
    $Final_Data += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$MessageTrace.'RecipientAddress';'MessageTraceId'=$MessageTrace.'MessageTraceId';'TotalMinutes'="{0:N3}" -f [decimal]$Message_TimeDiff.'TotalMinutes';'TotalSeconds'="{0:N2}" -f [decimal]$Message_TimeDiff.'TotalSeconds'})
    Remove-Variable -Name Message,Message_TimeDiff -ErrorAction SilentlyContinue
} # End foreach message trace in the custom object

The expected output contains the recipient address, the message trace ID, and the delay in total minutes and total seconds.


Here are some tips you can use to extract more valuable data from this:

  • You can do this for a Distribution Group to cover more users, but you will need an outer foreach loop over the group's members.
  • If your final data object has more than one result, you can use the Measure-Object cmdlet to find the average time for you.
  • If you have a user complaining that they are experiencing delays in receiving messages, you can use this script to measure the delay.
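As a sketch of the second tip, the average can be pulled out of $Final_Data with Measure-Object; the sample values below are made up for illustration:

```powershell
# Sample objects shaped like $Final_Data; in practice these come from the loop above
$Final_Data = @(
    [pscustomobject]@{ RecipientAddress = 'a@testcorp.com'; TotalSeconds = '95.20' },
    [pscustomobject]@{ RecipientAddress = 'b@testcorp.com'; TotalSeconds = '110.80' }
)

# TotalSeconds was stored as a formatted string, so cast to double before averaging
$Average = ($Final_Data | ForEach-Object { [double]$PSItem.TotalSeconds } |
            Measure-Object -Average).Average

"Average ATP Safe Attachments delay: $Average seconds"
```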

Just keep in mind that if you’re doing this on a large number of users, it might take some time to process all that data, so patience is needed 🙂

Happy scripting 🙂

 

 

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well why would you want to do that?

  • On-demand compute:
    • You may not have access to a physical server, or your computer isn't powerful enough to handle heavy data processing, or you simply do not want to wait in the office for a task to complete before leaving on a Friday evening.
  • You pay by the minute:
    • With Azure Automation, your first 500 minutes are free; after that you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it's super cheap.
  • It's super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into those details. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying a SQL database, whether it's in Azure or not, isn't that complex; in fact, this part of the post is just to get us started. For this part we're going to do something simple, because if you want to get things done, you need the fastest way of doing it. Here's a quick summary of the ways I thought of doing it:

    1. Using 'Invoke-Sqlcmd2' (this part of the blog). It's super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object for complex SQL querying scenarios. [[ This is where the magic kicks in.. Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL database called 'myDB', inside an Azure SQL server 'mytestAzureSQL.database.windows.net'.
  • It's a simple database containing a single table 'test_table'. The table has three columns (Id, Name, Age) and contains only two records.
  • We've set up 'Allow Azure Services' access on this database in the firewall rules. Here's how to do that, just in case:
    • Search for your database resource.
    • Click on 'Set firewall rules' from the top menu.
    • Ensure the option 'Allow Azure Services' is set to 'ON'.
  • We have an Azure Automation account set up. We'll be using that to test our code.

Now let's get this up and running.

Start by creating two Automation variables: one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL login username and password. You need this because you definitely should not be storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.
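If you prefer to create these assets from PowerShell rather than the portal, the AzureRM Automation cmdlets can do it; the resource group and Automation account names below are assumptions:

```powershell
# Hypothetical resource group and Automation account names
$rg = 'myResourceGroup'
$aa = 'myAutomationAccount'

# Automation variables for the SQL server and database names
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name 'AzureSQL_ServerName' -Value 'mytestAzureSQL.database.windows.net' -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name 'AzureSQL_DBname' -Value 'myDB' -Encrypted $false

# Automation credential for the SQL login
$cred = Get-Credential -Message 'SQL login'
New-AzureRmAutomationCredential -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name 'mySqllogin' -Value $cred
```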

Now you need to import the 'Invoke-Sqlcmd2' module into the Automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now from the main menu of the Automation account, click on the 'Runbooks' tab and then 'Add a Runbook'. Give it a name and use 'PowerShell' as the type. Next you need to edit the runbook; to do that, click on the pencil icon in the top menu to open the editing pane.

Inside the pane, paste the following code (I'll go through the details, don't worry).

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

#Import the SQL Server Name from the Automation variable
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

#Import the SQL DB name from the Automation variable
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet, 'Get-AutomationPSCredential', retrieves the Automation credential object we just created.
    • The next two 'Get-AutomationVariable' cmdlets retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let's query our database. To do that, paste the code below after the section above.

#Query to execute
$Query = "select * from Test_Table"

"----- Test Result BEGIN "

#Invoke against the Azure SQL DB
Invoke-Sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

"`n ----- Test Result END "

So what did we do up there?

    • We created a simple variable that contains our query. The query here is very simple, but you can put whatever you want in there.
    • We executed the cmdlet 'Invoke-Sqlcmd2'. If you noticed, we didn't have to import the module we just installed; Azure Automation takes care of that on every execution.
    • In the parameter set we specified the SQL server (retrieved from the Automation variable) and the database name (also an Automation variable), used the credential object we imported from Azure Automation, and finally the query variable we created. An optional switch parameter '-Encrypt' can be used to encrypt the connection to the SQL server.

Let's run the code and look at the output!

From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.

Initially, the code goes through the following stages of execution:

  • Queuing
  • Starting
  • Running
  • Completed

Now what's the final result? Look at the output pane and you should see something like this.

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet, right? The output we're getting here is an object of type 'DataRow'. If you wrap this query in a variable, you can start to do some cool stuff with it, like

$Result.count or

$Result.Age or even

$Result | where-object -Filterscript {$PSItem.Age -gt 10}

Now imagine if you could do so much more complex things with this.
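Putting that together, here is a sketch that reuses the variables defined earlier in the runbook:

```powershell
# Capture the query result in a variable instead of writing it straight to output
$Result = Invoke-Sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" `
          -Credential $SQLServerCred -Query "select * from Test_Table" -Encrypt

$Result.Count                                               # number of rows returned
($Result | Measure-Object -Property Age -Average).Average   # average Age across rows
$Result | Where-Object -FilterScript { $PSItem.Age -gt 10 } # rows where Age > 10
```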

Quick Hint:

If you include the '-Debug' option on the invoke cmdlet, you will see the username and password in plain text. Just don't run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

 

Using your Voice to Search Microsoft Identity Manager – Part 2

Introduction

Last month I wrote this post detailing how to use your voice to search/query Microsoft Identity Manager. That post demonstrated a working solution (GitHub repository coming next month), but it was still incomplete for production use within an enterprise. I hinted then that there were additional enhancements I was looking to make; one is an auditing/reporting aspect, and that is what I cover in this post.

Overview

The one element of the solution that has visibility of each search scenario is the IoT device (as a potential future enhancement, this could also be a bot). For each request I wanted to log/audit:

  • Device the query was initiated from (it is possible to have many IoT devices, physical or bot, leveraging this function)
  • The query
  • The response
  • Date and time of the event
  • User the query targeted

To achieve this, my solution is to:

  • Hold the query, target user and date/time on the IoT device during the query event
  • At the completion of the query, send the response along with the earlier information to the IoT Hub using the IoT Hub REST API
  • Consume the event from the IoT Hub with an Azure Event Hub
  • Process the message containing the information with Stream Analytics and put it into Azure Table Storage and Power BI

Azure Table Storage provides the logging/auditing trail of what requests have been made and the responses. Power BI provides the reporting aspect. Together these two services give visibility into what requests have been made, against whom, when, etc. The graphic below shows this in the bottom portion of the image.

Auditing Reporting Searching MIM with Speech.png

Sending IoT Device Events to IoT Hub

I covered this piece in PowerShell in a previous post here; I converted it from PowerShell to Python to run on my device. For initial end-to-end testing while developing the solution, building the body of the message and sending it in PowerShell looks like this:

[string]$datetime = get-date
$datetime = $datetime.Replace("/","-")
$body = @{
 deviceId = $deviceID
 messageId = $datetime
 messageString = "$($deviceID)-to-Cloud-$($datetime)"
 MIMQuery = "Does the user Jerry Seinfeld have an Active Directory Account"
 MIMResponse = "Yes. Their LoginID is jerry.seinfeld"
 User = "Jerry Seinfeld"
}

$body = $body | ConvertTo-Json
Invoke-RestMethod -Uri $iotHubRestURI -Headers $Headers -Method Post -Body $body
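The $Headers and $iotHubRestURI used above come from earlier setup. A minimal sketch of constructing them looks like the following; the hub name, device ID and key are placeholders, and the token format follows the IoT Hub shared access signature scheme:

```powershell
# Placeholder values - substitute your own hub, device and key
$iotHub    = 'myIoTHub'
$deviceID  = 'myDevice'
$deviceKey = '<base64 device primary key>'

# The token is scoped to this device's endpoint
$resourceURI = "$iotHub.azure-devices.net/devices/$deviceID"
$expiry = [int](New-TimeSpan -Start (Get-Date '1970-01-01') -End (Get-Date).ToUniversalTime()).TotalSeconds + 3600

# HMAC-SHA256 over "<url-encoded resource URI>`n<expiry>", keyed with the device key
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($deviceKey)
$stringToSign = [uri]::EscapeDataString($resourceURI) + "`n" + $expiry
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

$sasToken = 'SharedAccessSignature sr=' + [uri]::EscapeDataString($resourceURI) +
            '&sig=' + [uri]::EscapeDataString($signature) + '&se=' + $expiry

$Headers = @{ 'Authorization' = $sasToken; 'Content-Type' = 'application/json' }
$iotHubRestURI = "https://$iotHub.azure-devices.net/devices/$deviceID/messages/events?api-version=2018-06-30"
```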

Event Hub and IoT Hub Configuration

First I created an Event Hub. Then on my IoT Hub I added an Event Subscription and pointed it to my Event Hub.

IoTHub Event Hub.PNG

Streaming Analytics

I then created a Stream Analytics Job. I configured two Inputs. One each from my IoT Hub and from my Event Hub.

Stream Analytics Inputs.PNG

I then created two Outputs. One for Table Storage for which I used an existing Storage Group for my solution, and the other for Power BI using an existing Workspace but creating a new Dataset. For the Table storage I specified deviceId for Partition key and messageId for Row key.

Stream Analytics Outputs.PNG

Finally as I’m keeping all the data simple in what I’m sending, my query is basically copying from the Inputs to the Outputs. One is to get the events to Table Storage and the other to get it to Power BI. Therefore the query looks like this.

Stream Analytics Query.PNG
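In text form, a pass-through query of that shape would look something like this in the Stream Analytics query language; the input and output aliases below are assumptions matching the names given above:

```sql
-- Copy events from the IoT Hub input to Table Storage
SELECT * INTO [table-storage-output] FROM [iothub-input]

-- Copy events from the Event Hub input to Power BI
SELECT * INTO [powerbi-output] FROM [eventhub-input]
```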

Events in Table Storage

After sending through some events I could see rows being added to Table Storage. When I added an additional column to the data the schema-less Table Storage obliged and dynamically added another column to the table.

Table Storage.PNG

A full record looks like this.

Full Record.PNG

Events in Power BI

Just like in Table Storage, in Power BI I could see the dataset and the table with the event data. I could create a report with some nice visuals just as you would with any other dataset. When I added an additional field to the event being sent from the IoT Device it magically showed up in the Power BI Dataset Table.

PowerBI.PNG

Summary

Using the Azure IoT Hub REST API I can easily send information from my IoT Device and then have it processed through Stream Analytics into Table Storage and Power BI. Instant auditing and reporting functionality.

Let me know what you think on twitter @darrenjrobinson

Securing your Web front-end with Azure Application Gateway Part 1

I have just completed a project with a customer who was using Azure Application Gateway to secure their web front-end, and thought it would be good to post some findings.

This is part one of a two-part post looking at how to secure a web front-end using Azure Application Gateway with the WAF component enabled. In this post I will explain the process of configuring the Application Gateway once deployed. You can deploy the Application Gateway from an ARM template, Azure PowerShell or the portal. To be able to enable the WAF component, you must use a Medium or Large instance size for the Application Gateway.

Using Application Gateway removes the need for your web front-end to have a public endpoint assigned to it; for instance, if it is a Virtual Machine, it no longer needs a Public IP address. You can deploy Application Gateway in front of Virtual Machines (IaaS) or Web Apps (PaaS).

An overview of how this will look is shown below. The Application Gateway requires its own subnet, to which no other resources can be deployed. The web server (Virtual Machine) can be assigned to a separate subnet; if using a web app, no subnet is required.

AppGW

 

The benefits we will receive from using Application Gateway are:

  • Remove the need for a public endpoint from our web server.
  • End-to-end SSL encryption.
  • Automatic HTTP to HTTPS redirection.
  • Multi-site hosting, though in this example we will configure a single site.
  • In-built WAF solution utilising OWASP core rule sets 3.0 or 2.2.9.

To follow along you will require Azure PowerShell module version 3.6 or later. You can install or upgrade by following this link.
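A quick way to check what you currently have installed:

```powershell
# List the installed AzureRM module version(s); 3.6 or later is required here
Get-Module -ListAvailable -Name AzureRM | Select-Object Name, Version
```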

Before starting you need to make sure that an Application Gateway with an instance size of Medium or Large has been deployed with the WAF component enabled and that the web server or web app has been deployed and configured.

Now open PowerShell ISE and log in to your Azure account using the command below.


Login-AzureRmAccount

Now we need to set the variables we will work with. These are: your Application Gateway name; the resource group where your Application Gateway is deployed; your backend pool name and IP; your HTTP and HTTPS listener names; your host name (website name); the HTTP and HTTPS rule names; your front-end (private) and back-end (public) SSL certificate names; and your private certificate password.

NOTE: The Private certificate needs to be in PFX format and your Public certificate in CER format.

Change these to suit your environment and copy both your pfx and cer certificate files to C:\Temp\Certs on your computer.
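If you still need to produce those two files and the certificate is already installed in the local machine store, the PKI cmdlets can export them; the thumbprint and password below are placeholders:

```powershell
# Placeholder thumbprint - substitute your certificate's thumbprint
$cert = Get-Item 'Cert:\LocalMachine\My\<thumbprint>'

# Private certificate (PFX) for the front-end listener
$pfxPassword = ConvertTo-SecureString 'YourPfxPassword' -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath 'C:\Temp\Certs\PrivateCert.pfx' -Password $pfxPassword

# Public certificate (CER) for the back-end authentication certificate
Export-Certificate -Cert $cert -FilePath 'C:\Temp\Certs\PublicCert.cer'
```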

# Application Gateway name.
[string]$ProdAppGw = "PRD-APPGW-WAF"
# The resource group where the Application Gateway is deployed.
[string]$resourceGroup = "PRD-RG"
# The name of your Backend Pool.
[string]$BEPoolName = "BackEndPool"
# The IP address of your web server or URL of web app.
[string]$BEPoolIP = "10.0.1.10"
# The name of the HTTP Listener.
[string]$HttpListener = "HTTPListener"
# The name of the HTTPS Listener.
[string]$HttpsListener = "HTTPSListener"
# Your website hostname/URL.
[string]$HostName = "website.com.au"
# The HTTP Rule name.
[string]$HTTPRuleName = "HTTPRule"
# The HTTPS Rule name.
[string]$HTTPSRuleName = "HTTPSRule"
# SSL certificate name for your front-end (Private cert pfx).
[string]$FrontEndSSLName = "Private_SSL"
# SSL certificate name for your back-end (Public cert cer).
[string]$BackEndSSLName = "Public_SSL"
# Password for front-end SSL (Private cert pfx).
[string]$sslPassword = "<Enter your Private Certificate pfx password here.>"

Our first step is to configure the Front and Back end HTTPS settings on the Application Gateway.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Front-end (Private) SSL certificate. If you have any issues with this step you can upload the certificate from within the Azure Portal by creating a new Listener.

Add-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGw `
-Name $FrontEndSSLName -CertificateFile "C:\Temp\Certs\PrivateCert.pfx" `
-Password $sslPassword

Save the certificate as a variable.

$AGFECert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGW `
            -Name $FrontEndSSLName

Configure the front-end port for SSL.

Add-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443" `
-Port 443

Add the back-end (Public) SSL certificate.

Add-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName `
-CertificateFile "C:\Temp\Certs\PublicCert.cer"

Save the back-end (Public) SSL as a variable.

$AGBECert = Get-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
            -Name $BackEndSSLName

Configure back-end HTTPS settings.

Add-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings" `
-Port 443 `
-Protocol Https `
-CookieBasedAffinity Enabled `
-AuthenticationCertificates $AGBECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next stage is to configure the back-end pool to connect to your Virtual Machine or Web App. This example uses the IP address of the NIC attached to the web server VM. If using a web app as your front-end, you can configure it to accept traffic only from the Application Gateway by setting an IP restriction on the web app to the Application Gateway's IP address.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Backend Pool Virtual Machine or Web App. This can be a URL or an IP address.

Add-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGw `
-Name $BEPoolName `
-BackendIPAddresses $BEPoolIP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next steps are to configure the HTTP and HTTPS Listeners.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the front-end port as a variable – port 80.

$AGFEPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
            -Name "appGatewayFrontendPort"

Save the front-end IP configuration as a variable.

$AGFEIPConfig = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $AppGw `
                -Name "appGatewayFrontendIP"

Add the HTTP Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener `
-Protocol Http `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFEPort `
-HostName $HostName

Save the HTTP Listener for your website as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HTTPListener

Save the front-end SSL port as a variable – port 443.

$AGFESSLPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
               -Name "appGatewayFrontendPort443"

Add the HTTPS Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPSListener `
-Protocol Https `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFESSLPort `
-HostName $HostName `
-RequireServerNameIndication true `
-SslCertificate $AGFECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The final part of the configuration is to configure the HTTP and HTTPS rules and the HTTP to HTTPS redirection.

First configure the HTTPS rule.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the Backend Pool as a variable.

$BEP = Get-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGW `
       -Name $BEPoolName

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Save the back-end HTTPS settings as a variable.

$AGHTTPS = Get-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
           -Name "appGatewayBackendHttpsSettings"

Add the HTTPS rule.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPSRuleName `
-RuleType Basic `
-BackendHttpSettings $AGHTTPS `
-HttpListener $AGSSLListener `
-BackendAddressPool $BEP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Now configure the HTTP to HTTPS redirection and the HTTP rule with the redirection applied.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Add the HTTP to HTTPS redirection.

Add-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-RedirectType Permanent `
-TargetListener $AGSSLListener `
-IncludePath $true `
-IncludeQueryString $true `
-ApplicationGateway $AppGw

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the redirect as a variable.

$Redirect = Get-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
            -ApplicationGateway $AppGw

Save the HTTP Listener as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HttpListener

Add the HTTP rule with redirection to HTTPS.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPRuleName `
-RuleType Basic `
-HttpListener $AGListener `
-RedirectConfiguration $Redirect

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

 
In this post we covered how to configure Azure Application Gateway to secure a web front-end, whether running on Virtual Machines or Web Apps. We configured the gateway for end-to-end SSL encryption and automatic HTTP to HTTPS redirection, removing this overhead from the web server.

In part two we will look at some additional post configuration tasks and how to make sure your web application works with the WAF component.

Sending Events from IoT Devices to Azure IoT Hub using HTTPS and REST

Overview

Different IoT devices have different capabilities; whether it is a microcontroller or a single-board computer, your options will vary. In this post I detailed using MQTT to send messages from an IoT device to an Azure IoT Hub, as well as using the AzureIoT PowerShell module.

For a current project I needed to send events from an IoT device that runs Linux and has Python support. The Azure IoT Hub includes an HTTPS REST endpoint, and for this particular application, using the HTTPS REST endpoint is going to be much easier than compiling the Azure SDK for the particular flavour of Linux running on my device.

Python isn't my language of choice, so I first got it working in PowerShell and then converted it to Python. I detail both scripts here as a guide for anyone else trying to do something similar, but also for myself, as I know I'm going to need these snippets in the future.

Prerequisites

You'll need to have configured an Azure IoT Hub with your device registered to it. Follow this post to get started.

PowerShell Device to Cloud Events using HTTPS and REST Script

Here is the PowerShell version of the script. Update line 3 for your DeviceID, line 5 for your IoT Hub name and line 11 for your SAS token.
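A minimal sketch of the approach looks like the following; the device ID, hub name and SAS token below are placeholders, and the SAS token is assumed to be generated beforehand:

```powershell
# Placeholders - update for your DeviceID, IoT Hub name and SAS token
$deviceID = 'myDevice'
$iotHub   = 'myIoTHub'
$sasToken = 'SharedAccessSignature sr=...&sig=...&se=...'

$uri = "https://$iotHub.azure-devices.net/devices/$deviceID/messages/events?api-version=2018-06-30"
$headers = @{ 'Authorization' = $sasToken; 'Content-Type' = 'application/json' }

# Simple device-to-cloud event body
$body = @{ deviceId = $deviceID; messageString = "$deviceID-to-Cloud" } | ConvertTo-Json

Invoke-RestMethod -Uri $uri -Headers $headers -Method Post -Body $body
```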

Using Device Explorer to Monitor the Device on the associated IoT Hub I can see that the message is received.

Device Explorer

Python Device to Cloud Events using HTTPS and REST Script

Here is my Python version of the same script. Again update Line 5 for your IoT DeviceID, Line 7 for your IoT Hub and Line 12 for the SAS Token.

And in Device Explorer we can see the message is received.

Device Explorer Python

Summary

When you have a device that has the ability to run Python you can use the IoT Hub HTTPS REST API to send messages from the Client to Cloud negating the need to build and compile the Azure IoT SDK to generate client libraries.

Auto-redirect ADFS 4.0 home realm discovery based on client IP

As mentioned in my previous post, I promised to explain how to auto-redirect the home realm discovery page to an ADFS namespace (claims provider trust) based on the client’s IP address, so here it is.

Let’s say you have many ADFS servers (claims provider trusts) linked to a central ADFS 4.0 server, and you want to auto-redirect the user to a linked ADFS server’s login page based on the user’s IP address, instead of having the user choose the respective ADFS server from the list on the home realm discovery page, as explained in the request flow diagram below.

You can do so by doing some customization as mentioned below:

  1. Create a database of IP ranges mapped to ADFS namespaces
  2. Develop a Web API which returns the relevant ADFS namespace based on request IP
  3. Add custom code in onload.js file on the central ADFS 4.0 server to call the Web API and do the redirection

It is assumed that all the boxes, including the central ADFS, linked ADFS, web and SQL servers, are set up, and that the nitty-gritty details such as firewall rules, DNS lookups and SSL certificates are sorted out. If not, you can get help from an infrastructure specialist.

Let’s perform the required actions on the SQL, web and ADFS servers.

SQL Server

Perform the following actions on the SQL Server:

  1. Create a new database
  2. Create a new table called Registration as shown below

  3. Insert some records into the table for the linked ADFS server IP range, for example

Start IP: 172.31.117.1, End IP: 172.31.117.254, Redirect Name: http://adfs.adminlab.com/adfs/services/trust

Web Server

Perform the following actions for the Web API development and deployment:

  1. Create a new ASP.NET MVC Web API project using Visual Studio
  2. Create a new class called Redirect.cs as shown below (I would have used the same name as database table name ‘Registration’ but it’s OK for now)

  3. Insert a new Web API controller class called ResolverController.cs as shown below. What we are doing here is getting the request’s IP address, retrieving the IP ranges from the database, and comparing the two by converting both to their long (integer) form. If the request IP falls within a range, the matching redirect object is returned.
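The controller itself is C#, but the core range-matching logic it implements can be sketched in Python, using the sample Registration record from above:

```python
import ipaddress

# Illustrative rows from the Registration table
# (start IP, end IP, redirect name), per the sample record above.
REGISTRATIONS = [
    ("172.31.117.1", "172.31.117.254",
     "http://adfs.adminlab.com/adfs/services/trust"),
]

def ip_to_long(ip):
    """Convert a dotted-quad IPv4 address to its integer (long) form."""
    return int(ipaddress.IPv4Address(ip))

def resolve_redirect(request_ip):
    """Return the redirect name whose range contains request_ip, else None."""
    value = ip_to_long(request_ip)
    for start, end, redirect in REGISTRATIONS:
        if ip_to_long(start) <= value <= ip_to_long(end):
            return redirect
    return None
```

Converting both sides to integers makes the range comparison a simple pair of inequalities rather than octet-by-octet string logic.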

  4. Add a connection string named DbConnectionString to web.config, pointing to the database we created above.
  5. Deploy the Web API project to IIS on the web server.
  6. Configure an HTTPS binding for the Web API project using the SSL certificate.
  7. Note down the URL of the Web API, something like ‘https://{Web-Server-Web-API-URL}/api/resolver/get’; this will be used in onload.js.

Central ADFS 4.0 Server

Perform the following actions on the central ADFS 4.0 server:

  1. Run the following PowerShell command to export current theme to a location

Export-AdfsWebTheme -Name default -DirectoryPath D:\Themes\Custom

  2. Run the following PowerShell command to create a new custom theme based on the current theme

New-AdfsWebTheme -Name custom -SourceName default 

  3. Update the onload.js file exported in step 1 (at D:\Themes\Custom\theme\script) by adding the following code at the end of the file. What we are doing here is calling the Web API, which returns the matched Redirect object with RedirectName set to the ADFS namespace, and setting HRD.selection to that redirect name.

  4. Run the following PowerShell command to load the updated onload.js file back into the theme

Set-AdfsWebTheme -TargetName custom -AdditionalFileResource @{Uri='/adfs/portal/script/onload.js';path="D:\Themes\Custom\theme\script\onload.js"}

  5. Run the following PowerShell command to make the custom theme your default theme

Set-AdfsWebConfig -ActiveThemeName custom -HRDCookieEnabled $false

Now when you test from your linked ADFS server, or from a client machine linked to a linked ADFS server (which in turn is linked to the central ADFS server), the auto-redirect kicks in from onload.js and calls the Web API, which gets the client IP, matches it to the relevant ADFS server, and redirects the user to that ADFS login page, instead of the user selecting the relevant ADFS namespace from the list on the home realm discovery page.

If no relevant match is found, the default home realm discovery page with the list of available ADFS namespaces is shown.

Utilising Azure Speech to Text Cognitive Services with PowerShell

Introduction

Yesterday I posted about using Azure Cognitive Services to convert text to speech. I also alluded to leveraging Cognitive Services for the conversion from speech to text. I detail that in this post.

Just as with Text to Speech, we will need an API key to use Cognitive Services. You can get one from Azure Cognitive Services here.

Source Audio File

I created an audio file in Audacity for testing purposes. In my real application it is directly spoken text, but that’s a topic for another time. I set the project rate to 16,000 Hz for the conversion source file, then exported it as a .wav file.

Capture Audio

The Script

The script below needs to be updated with your input file (line 2) and your API key (line 7). Run it line by line in VSCode or PowerShell ISE.
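For reference, the same flow can be sketched in Python. The endpoints shown were the documented ones at the time of writing and should be verified against the current Speech service documentation:

```python
import json
import urllib.request

# Endpoints as documented at the time of writing; confirm them against
# the current Cognitive Services Speech documentation for your region.
TOKEN_URL = "https://api.cognitive.microsoft.com/sts/v1.0/issueToken"
STT_URL = ("https://speech.platform.bing.com/speech/recognition/"
           "interactive/cognitiveservices/v1?language=en-US&format=simple")

def speech_to_text(api_key, wav_path):
    """Exchange the API key for a bearer token, then POST the 16 kHz .wav."""
    token_req = urllib.request.Request(
        TOKEN_URL, data=b"", headers={"Ocp-Apim-Subscription-Key": api_key})
    token = urllib.request.urlopen(token_req).read().decode("utf-8")

    with open(wav_path, "rb") as f:
        audio = f.read()
    stt_req = urllib.request.Request(STT_URL, data=audio, headers={
        "Authorization": "Bearer " + token,
        "Content-Type": "audio/wav; codec=audio/pcm; samplerate=16000",
    })
    # The response is JSON; DisplayText holds the recognised text.
    return json.loads(urllib.request.urlopen(stt_req).read().decode("utf-8"))

# result = speech_to_text("yourApiKey", "capture.wav")  # needs a real key/file
# print(result["DisplayText"])
```

The two-step pattern (issue a short-lived bearer token, then post the raw audio) is the same one the PowerShell script follows.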

Summary

That’s it. Pretty simple once you have a reference script to work with. Enjoy.

Converted

 

Utilising Azure Text to Speech Cognitive Services with PowerShell

Introduction

Recently I’ve been building an IoT Project that leverages Azure Cognitive Services. A couple of the services I needed to use were for converting Text to Speech and Speech to Text. The guides were pretty good from Microsoft, but not obvious for use with native PowerShell. I’ve got it all working, so am documenting it for myself for the future but also to help anyone else trying to work it out.

Accessing the Cognitive Services Text to Speech API

Azure Cognitive Services Text to Speech is a great service that provides the ability to, as the name suggests, convert text to speech.

First you’ll need to get an API key. Head to the Cognitive Services Getting Started page and select Try Text to Speech, then Get API Key. It will give you a trial key with 5,000 transactions, limited to 20 per minute. If you want to use it beyond the trial, provision the Speech service using the Azure Portal.

The Script

I’m using a female voice in English for my output format. All the available output languages and genders are available here.

There are also eight audio output formats. The two I’ve used most are raw 16 kHz PCM for .wav output and 16 kHz MP3 for MP3 output, as highlighted below. The script further below is configured for MP3.

  • ssml-16khz-16bit-mono-tts
  • raw-16khz-16bit-mono-pcm
  • audio-16khz-16kbps-mono-siren
  • riff-16khz-16kbps-mono-siren
  • riff-16khz-16bit-mono-pcm
  • audio-16khz-128kbitrate-mono-mp3
  • audio-16khz-64kbitrate-mono-mp3
  • audio-16khz-32kbitrate-mono-mp3

The script below is pretty self-explanatory. Update line 5 with your API key, and lines 11 and 13 if you want the output audio file to go to a different directory or filename. The text to be converted is on line 59.
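As a rough sketch of how the request is constructed (the voice name and output format here are taken from the lists referenced above, and should be verified against the current supported-voices documentation):

```python
# Voice name per the (then-current) supported voices list; an assumption
# to verify against the documentation linked above.
VOICE_NAME = "Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)"

def build_ssml(text):
    """Wrap the text to convert in SSML for an en-US female voice."""
    return ("<speak version='1.0' xml:lang='en-US'>"
            "<voice xml:lang='en-US' xml:gender='Female' name='{0}'>{1}"
            "</voice></speak>").format(VOICE_NAME, text)

def build_headers(bearer_token):
    """Headers for the synthesize request; the token comes from issueToken."""
    return {
        "Authorization": "Bearer " + bearer_token,
        "Content-Type": "application/ssml+xml",
        # One of the eight output formats listed above; MP3, as in the script.
        "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
    }

# POST build_ssml(text) with build_headers(token) to the synthesize
# endpoint, then write the binary response out as output.mp3.
```

Changing the voice is just a matter of swapping the SSML voice name and gender; changing the container is the X-Microsoft-OutputFormat header.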

Step through it using VSCode or PowerShell ISE.

Summary

Using Azure Cognitive Services you can quickly convert text to audio. Enjoy.

Implementing Azure API Management with the Lithnet Microsoft Identity Manager Rest API

Introduction

Earlier this week I wrote this post that detailed implementing the Lithnet REST API for FIM/MIM Service. I also detailed using PowerShell to interact with the API Endpoint.

Now let’s imagine you are looking to have a number of Azure serverless features leverage your REST API enabled Microsoft Identity Manager environment, or even to offer it “as-a-Service”. You’ll want some visibility into how it is performing, and you’ll probably want to implement features such as caching and rate limiting, let alone putting more security controls around it. Enter Azure API Management, which provides all those functions and more.

In this post I detail getting started with Azure API Management by using it to front-end the Lithnet FIM/MIM Rest API.

Overview

In this post I will detail;

  • Enabling Azure API Management
  • Configuring the Lithnet FIM/MIM Rest API integration with Azure API Management
  • Accessing MIM via Azure API Management and the Lithnet FIM/MIM Rest API using PowerShell
  • Reporting

Prerequisites

For this particular scenario I’m interfacing Azure API Management with a REST API that uses Digest Authentication. Even though it is a Windows WCF web service, you could do something similar with any comparable API endpoint. If the backend API endpoint uses SSL it will need a valid certificate. Azure API Management allows you to add your own certificates, but I had issues with self-signed certificates; it works fine with Let’s Encrypt issued certificates. Obviously you’ll need an Azure subscription as well as an app/service with an API.

Enabling Azure API Management

From the Azure Portal select Create a resource and search for API management and select it.

Add API Mgmt.PNG

Select Create

Create API Mgmt.PNG

Give your API Management Service a name, select a subscription, resource group etc and select Create.

API Mgmt Config 1.PNG

Once you select Create it will take about 30 minutes to be deployed.

Configuring the Lithnet FIM/MIM Rest API integration with Azure API Management

Once your new API Management service has been deployed, from the Azure Portal select the API Management services blade and select the API Management service that you just created. Select APIs.

API Config 1.PNG

Select Add API and then select Add a new API

API Mgmt Config 2.PNG

Give the API a name, description, enter the URI for your API EndPoint, and select HTTPS. I’m going to call this MIMSearcher so have entered that under API URL Suffix. For initial testing under Products select starter. Finally select Create.

API Mgmt Config 4.PNG

We now have our base API setup. From the Backend tile select the Edit icon.

API Mgmt Config 5.PNG

As the backend is authenticated using Basic Authentication, select Basic under Gateway credentials and enter the details of an account with access that will be used by the API gateway. Select Save.

API Mgmt Config 6.PNG

Now from our API Configuration select Add operation.

API Mgmt Config 7.PNG

First we will create a test operation for the Help page on the Lithnet FIM/MIM Rest API. Provide a Display name, and for the URL add /v2/help. Give it a description and select Create.

Note: I could have had v2 as part of the base URI for the API in the previous steps. I didn’t, as I will be using APIs from both v1 and v2 and didn’t want to create multiple operations.

API Mgmt Config 8.PNG

Select the new Operation (Help)

API Mgmt Config 9.PNG

Select the Test menu. Select Send.

API Mgmt Config 10.PNG

If everything is set up correctly you will get a 200 Success OK response as below.

API Mgmt Config 11.PNG

Accessing MIM via Azure API Management and the Lithnet FIM/MIM Rest API using PowerShell

Head over to your API portal. The URL is https://&lt;yourAPIMName&gt;.portal.azure-api.net/ where &lt;yourAPIMName&gt; is the name you gave your API Management service, shown in the third screenshot at the top of this post. If you are doing this from the browser you used to create the API Management service, you should be signed in already. From the Administrator menu on the right, select Profile.

Test API Mgmt 1.PNG

Click on Show under one of the keys and record its value.

Test API Mgmt 2.PNG

Using PowerShell ISE or VSCode update the following Code Snippet and test.

$APIURL = 'https://<yourAPIMName>.azure-api.net/<yourAPIsuffix>/v2/help'
$secret = 'yourSecret'
$Headers = @{'Ocp-Apim-Subscription-Key' = $secret} 
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$response = Invoke-RestMethod -Uri $APIURL -Headers $Headers -ContentType "application/json" -UseBasicParsing -Method Get
$response

The snippet will create a Web Request to the new API and display the results.

Test API Mgmt 3.PNG

Querying the Lithnet Rest API via Azure API Management

Now that we have a working solution end-to-end, let’s do something useful with it. Looking at the Lithnet Rest API, the Resources URI is the key one exposing Resources from the MIM Service.

Resources.PNG

Let’s create a new Operation for Resources similar to what we did for the Help. After selecting Create configure the Backend for Basic Authentication like we did for Help.

Get Resources.PNG

Testing out the newly exposed endpoint is very similar to before, just with a new APIURL with the addition of /?Person to return all Person resources from the MIM Portal. It lets us know it returned 7256 Person objects, and the results are still paged (100 by default).

Get Persons.PNG

Let’s now Search for just a single user. Search for a Person object whose Display Name is ‘darrenjrobinson’.

$query = "Person[DisplayName='darrenjrobinson']"
$queryEncoded = [System.Web.HttpUtility]::UrlEncode($query)

$APIURL = "https://<yourAPIMName>.azure-api.net/<yourAPIsuffix>/v2/resources/?filter=/$($queryEncoded)" 
$secret = 'yourSecret'
$Headers = @{'Ocp-Apim-Subscription-Key' = $secret} 
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$user = Invoke-RestMethod -Uri $APIURL -Headers $Headers -ContentType "application/json" -UseBasicParsing -Method Get
$user

Executing, we get a single user returned.

Search for User.PNG

Reporting

Using the Publisher Portal we can get some Stats on what is happening with our API Management implementation.

Go to https://&lt;yourAPIMName&gt;.portal.azure-api.net/admin and select Analytics.

We then have visibility into what has been using the API Management service. At a Glance gives an overview, and you can drill down into;

  • Top Users
  • Top Products
  • Top subscriptions
  • Top APIs
  • Top Operations

At a glance looks like this;

At a Glance Stats.PNG

And Top Operations looks like this;

Top Operations.PNG

Summary

That is a quick start guide to implementing Azure API Management in front of a Rest API and using PowerShell to integrate with it. Next steps would be to enable caching, and getting into more of the advanced features. Enjoy.

 

Utilizing Sharegate migration Templates for Network share migrations

Sharegate supports PowerShell-based scripting, which can be used to automate and schedule migrations. The purpose of this post is to demonstrate the use of pre-created migration templates to initiate migration tasks in Sharegate using PowerShell scripts. In one of my previous projects, we were migrating network shares to SharePoint Online using Sharegate as the migration tool of choice.

Based on our discussions with business divisions and IT department, the following requirements were identified for most of the divisions:

  1. Office documents, PDFs and image files were to be migrated
  2. Only documents modified after a given date (e.g. January 1, 2016) were to be included
  3. Permissions were to be preserved
  4. As an exception, some divisions requested that all available data be migrated

To address the above business requirements, we created custom migration templates and used them to trigger the migration tasks. This allowed us to use pre-created configurations for each migration, eliminating manual overhead and the risk of errors. Sharegate migration templates (.sgt files) are XML-based configuration documents that encapsulate configuration options such as:

  • Copy options
  • Content type mappings
  • Allowed extensions
  • Cut-off dates

Migration Templates

I’ll present two (2) migration templates, that were prepared to cover off the business requirements:

Template 1 – All data. This migration template configures Sharegate to migrate all data to the destination library. The configuration values to consider are:

  • CopyPermissions – specifies that all permissions on the files/folders be copied over [configured as “true”]
  • KeepAuthorsAndTimestamps – Specifies that the file / folder metadata such as Authors, associated time stamps (created date, modified date) be copied over [configured as “true”]
  • ContentFileExtensionFilterType – specifies the file extension filter applied when copying data [configured as “AllExtensions”]

 

Template 2 – Filtered data. This migration template configures Sharegate to migrate all documents that were modified or created after 1 January 2016 and match one of the configured extensions to the destination library. The configuration values to consider are:

  • CopyPermissions – specifies that all permissions on the files/folders be copied over [configured as “true”]
  • ContentFrom – specifies the cut-off date; anything modified after this date is copied [configured as “01/01/2016 10:00:00”]
  • KeepAuthorsAndTimestamps – Specifies that the file / folder metadata such as Authors, associated time stamps (created date, modified date) be copied over [configured as “true”]
  • ContentFileExtension – specifies the allowed file types to be copied over [configured as “pdf; xlsx; xls; doc; pptx; docx; ppt; xlsm; rtf; vsd; pub; docm; xlsb; mpp; one; pps; dotx; dot; pot; xlk; vdx; pptm; xlt; xlam; potx; odt; xla; wk4; dic; dotm; xltm; xltx; onetoc2; vss; thmx; vsdx; xps; msg; eml; jpg; cr2; png; tif; psd; bmp; eps; ai; jpeg; gif; indd; dwg; svg; djvu; ico; wmf; dcx; emf; xpm; pdn; cam; ”]
  • ContentFileExtensionFilterType – specifies the file extension filter applied when copying data [configured as “LimitTo”]

 

Migration templates in Sharegate user interface

The migration templates presented as XML files above may be imported into the Sharegate user interface to inspect and test. Migration options are available while configuring a migration, as demonstrated in the screenshot below:

 

The screen shots below represent the migration options as visible in Sharegate user interface, when the templates are imported into the tool.

Copy Options Screen – Template 1

Content type mapping screen – Template 1

Filter options screen – Template 1

 

Copy Options Screen – Template 2

Content type mapping screen – Template 2

Filter options screen – Template 2

 

PowerShell scripts to initiate migrations

Migrations can be initiated from the PowerShell console by running the following script:
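As a hedged sketch only (the site URL, paths and list name below are placeholders, and the cmdlet parameters should be confirmed against the Sharegate PowerShell documentation), such a script might look like:

```powershell
Import-Module Sharegate

# Placeholders: update the template path, source share and destination for your environment
$templatePath = "C:\MigrationTemplates\Template2-FilteredData.sgt"
$sourceFolder = "\\fileserver\share\Finance"

$dstSite = Connect-Site -Url "https://tenant.sharepoint.com/sites/finance"
$dstList = Get-List -Site $dstSite -Name "Documents"

# Run the migration using the pre-created migration template
Import-Document -SourceFolderPath $sourceFolder -DestinationList $dstList -TemplatePath $templatePath
```

Pointing -TemplatePath at either of the templates above is what carries the copy options, cut-off date and extension filters into the run.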

 

The scripted migration templates demonstrated above have merit in eliminating the need to manually configure options for each migration using the Sharegate user interface.

To take this to a whole new level, the overall migration project can be further split into multiple migration tasks and scheduled as sequential migrations across multiple machines, utilizing common migration templates and scripts stored in a network location. But that is something I’ll cover as part of a separate blog post.

Happy Migrating…