Measure O365 ATP Safe Attachments Latency using PowerShell

Microsoft Office 365 Advanced Threat Protection (ATP) is a cloud-based security service that is part of the O365 E5 offering, and it can also be added separately to other O365 subscriptions. A lot can be learned about ATP from here, but in this post we're going to extract data relating to one of ATP's primary features: ATP Safe Attachments.

In short, ATP Safe Attachments scans documents for malicious content and can block those attachments depending on the policy configuration. More details about ATP Safe Attachments can be found here. Safe Attachments can react to emails with attachments in multiple ways: it can deliver the email without the attachment, leaving just a thumbnail that tells the user ATP is currently scanning it, or it can delay the message until scanning is complete. Microsoft has been improving ATP Safe Attachments with new features such as reduced message delays, support for more file formats, dynamic delivery and previewing. What we're interested in for this post is the delay that ATP Safe Attachments introduces when the policy is set to delay emails, as some of the other features may confuse users and would require end-user education if enabled.

But why is there a delay in the first place?

ATP Safe Attachments works by creating a small sandbox, then detonating/opening the attachments in that sandbox. It checks what types of changes the attachment makes to the sandbox and also uses machine learning to decide whether the attachment is safe or not. It is a nice feature, and it effectively simulates what happens when users open attachments they receive.

So how fast is this thing?

Recently I've been working on a project with a customer who wanted to test ATP Safe Attachments with the policy configured to delay messages rather than deliver and scan at the same time (dynamic delivery). The question they had in mind was: how can we ensure that the ATP Safe Attachments delay is acceptable? (I tried hard to find an official document from Microsoft with the numbers but couldn't.) The delay is not that big of a deal; it is about a minute or two (it used to be a lot longer when ATP was first introduced). That is really fast considering the amount of work that needs to be done in the backend for this to work. Still, the customer wanted this measured, and I had to think of a way of doing that. I tried to find a direct way to do this, but with all of the advanced reporting that ATP brings to the table, there isn't one that tells you how long messages actually take to arrive in the user's mailbox when an ATP Safe Attachments policy is in place and set to delay messages.

Now let us define 'delay' here. The measurement starts when a message arrives at Exchange Online; any other solutions you have in front of O365 don't count. So we are going to measure from the time the message hits Exchange Online Protection (EOP) and ATP until O365 delivers the message to the recipient. Keep in mind that the size of the attachment(s) also plays a role in the process.

The solution!

This assumes you have the Microsoft Exchange Online modules installed. If you have MFA enabled, you need to authenticate in a different way (that was my case, and I will show you how I did it).

First, jump on to the Office 365 Admin Centre (assuming you're an admin), then navigate to the Exchange Online admin centre. From the left side, search for 'Hybrid', and then click on the second 'Configure' button (this one installs the Exchange Online PowerShell modules for users with MFA). Once the file is downloaded, click on it and let it do its magic. Once you're in there, you should see this:

[Screenshot: Exchange Online PowerShell window]

Connect-EXOPSSession -UserPrincipalName yourAccount@testcorp.com 

Now go through the authentication process and let PowerShell load the modules into this session. You should see the following:

[Screenshot: PowerShell session with the Exchange Online modules loaded]

Now this means all of the modules have been loaded into the PowerShell session. Before we do any PowerShell scripting, let us do the following:

  • Send two emails to an address that has ATP Safe Attachment policy enabled.
  • The first email is simple and has no attachments in it.
  • The second contains at least one attachment (i.e. a DOCX or PDF file).
  • The subject shouldn't matter, but for the sake of this test, name the subjects so that you can tell which email contains the attachment.
  • Wait 5 minutes. Why? Because ATP Safe Attachments needs to do some magic before releasing the email (assuming you have it turned ON and set to delay messages only).

Let us do some PowerShell scripting 🙂

Declare a variable that contains the recipient email address that has ATP Safe Attachments enabled. Then use the cmdlet (Get-MessageTrace) to fetch emails delivered to that address in the last hour.

$RecipientAddress = 'FirstName.LastName@testcorp.com';

$Messages = Get-MessageTrace -RecipientAddress $RecipientAddress -StartDate (Get-Date).AddHours(-1) -EndDate (get-date) 

Expand the $Messages variable, and you should see:

[Screenshot: expanded $Messages output]

Here is what we are going to do next. I wish I could explain each line of code, but that would make this blog post too lengthy, so I'll summarise the steps:

  • Do a For each message loop.
  • Use the ‘Get-MessageTraceDetail’ cmdlet to extract details for each message and filter out events related to ATP.
  • Create a custom object containing two main properties;
    • The recipient address (of type String)
    • The message Trace ID ( of type GUID).
  • Remove all temp variables used in this loop.
$Custom_Object = @() # Initialise the collection that will hold the results

foreach($Message in $Messages)
{
    $Message_RecipientAddress = $Message.RecipientAddress
    $Message_Detail = $Message | Get-MessageTraceDetail | Where-Object -FilterScript {$PSItem.'Event' -eq "Advanced Threat Protection"}
    if($Message_Detail)
    {
        $Message_Detail = $Message_Detail | Select-Object -Property MessageTraceId -Unique
        $Custom_Object += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$Message_RecipientAddress;'MessageTraceId'=$Message_Detail.'MessageTraceId'})
    } #End If Message_Detail Variable
    Remove-Variable -Name Message_Detail,Message_RecipientAddress -ErrorAction SilentlyContinue
} #End For Each Message

The expected outcome is a single row. Why? Because we sent only two messages: one with an attachment (captured by the above script), and one without an attachment (which doesn't contain ATP events).

[Screenshot: the resulting custom object with one row]

Just in case you want to do multiple tests, remember to empty your Custom Object before retesting, so you do not get duplicate results.

The next step is to extract the details for each of these messages in order to measure the time from the first event until the message was delivered to the recipient. Here's what we are going to do:

  • Loop through each item in that custom object.
  • Run 'Get-MessageTraceDetail' again to extract all of the message events, and sort the data by date.
  • Measure the time difference between the last event and the first event.
  • Store the result into a new custom object for further processing.
  • Remove the temp variables used in the loop.
$Final_Data = @() # Initialise the collection that will hold the final results

foreach($MessageTrace in $Custom_Object)
{
    $Message = $MessageTrace | Get-MessageTraceDetail | Sort-Object Date
    $Message_TimeDiff = ($Message | Select-Object -Last 1).Date - ($Message | Select-Object -First 1).Date
    $Final_Data += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$MessageTrace.'RecipientAddress';'MessageTraceId'=$MessageTrace.'MessageTraceId';'TotalMinutes'="{0:N3}" -f [decimal]$Message_TimeDiff.'TotalMinutes';'TotalSeconds'="{0:N2}" -f [decimal]$Message_TimeDiff.'TotalSeconds'})
    Remove-Variable -Name Message,Message_TimeDiff -ErrorAction SilentlyContinue
} # End For each Message Trace in the custom object

 The expected output should look like:

[Screenshot: the final data showing TotalMinutes and TotalSeconds]

So here are some tips you can use to extract more valuable data from this:

  • You can try this against a Distribution Group to cover more users, but you will need a larger foreach loop for that.
  • If your final data object has more than one result, you can use the 'Measure-Object' cmdlet to find the average time for you (see the sketch after this list).
  • If you have a user complaining that they are experiencing delays in receiving messages, you can use this script to extract the delay.
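As a rough sketch, assuming $Final_Data from the loop above holds more than one result, averaging the measured delay could look like this:

# Minimal sketch - average the measured delay across all results in $Final_Data
# TotalSeconds was stored as a formatted string above, so cast it back to a number first
$AverageSeconds = ($Final_Data | ForEach-Object { [double]$PSItem.TotalSeconds } | Measure-Object -Average).Average
"Average ATP Safe Attachments delay: $AverageSeconds seconds"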

Just keep in mind that if you’re doing this on a large number of users, it might take some time to process all that data, so patience is needed 🙂

Happy scripting 🙂

 

 

Why is the Azure Load Balancer NOT working?

Context

For most workloads that I've deployed in Azure that have required load balancing, the Azure Load Balancer (ALB) used in those architectures was left with the out-of-the-box, default configuration. The load balancer service is great like that: for the majority of scenarios it just works out of the box. I'm sure this isn't an Azure-only experience either; the other public cloud providers have great out-of-the-box load balancing services that work with just about any service without in-depth configuration.

You can see that I’ve been repetitive on the point around out of the box experience. This is where I think I’ve become complacent in thinking that this out of the box experience should work in the majority of circumstances.

My problem, as outlined in this blog post, is that I’ve experienced both the Azure Service Manager (ASM) load balancer and the Azure Resource Manager (ARM) load balancer not working as intended…

UPDATE 2018-07-13 – The circumstances in both of the examples given in this blog post assume that the workloads in the backend pool are configured correctly AND that the Azure Load Balancer is also configured correctly as per Microsoft recommendations. The specific solution that I’ve outlined came about after making sure that all the settings were checked, checked again and also had Microsoft Premier Support validate the config.

Azure Load Balancer

From what I've been told by Microsoft Premier Support, the Azure Load Balancer has had a 5-tuple distribution algorithm, based on source IP, source port, destination IP, destination port and protocol type, since its inception. However, as I'll explain in the next paragraph, that default doesn't suit every scenario. While this 5-tuple mode should in theory work well with just about any workload, because at the end of the day the distribution is still round robin between endpoints, the stickiness of sessions to those endpoints comes into play and can cause some issues.

In a blog post from way back when, Microsoft outlines that to accommodate RDS Gateway, the distribution mode options for the ALB have been updated. There are a total of 3 distribution modes: 5-tuple (mentioned before), 3-tuple (source IP, destination IP and protocol) and 2-tuple (source IP and destination IP).

Now that we have established the configuration options and roughly when they came about, let's get stuck into the impact in two scenarios, months apart and in both ASM (Classic) and ARM deployment modes…

 

The problem

Earlier this year at a customer, we ran into a problem where a number of Azure workloads were hard reset. This was caused by either an outage in the region or some scheduled or unscheduled maintenance that had to occur. Nothing too serious sounding, until we found that the Network Device Enrolment Server (NDES) was not able to accept traffic from the Web Application Proxy (WAP) server that was inline and "north" of the server. The WAP itself was in a Cloud Service (so an ASM/Classic environment here) where there were multiple WAP servers that leveraged Load Balanced Sets (the Azure Internal Load Balancer) as part of the Cloud Service.

The odd thing was that after the outage/scheduled/unscheduled maintenance, inbound NDES traffic via the WAP suddenly became erratic. Certificates that were requested via NDES (from Intune in this circumstance) were, for the most part, not being completed. So ensued a long and enjoyable Microsoft Premier case that involved the Azure Product Group (sarcasm intended).

The solution

I’ll keep this short and sweet as I would rather save you, dear reader, the time of not having to relive that incident. The outcome was as follows:

It was determined that the ASM Cloud Service Load Balanced Set (or Azure Load Balancer / Azure Internal Load Balancer) configuration was set to the out-of-the-box default of 5-tuple distribution. While this implementation of the WAP + NDES solution had been in production for at least 2-3 years, working without fault or issue, it was not the correct configuration. It was determined that the correct configuration for this setup was to leverage either 2-tuple (source and destination IPs) distribution, or 3-tuple (source IP, destination IP and protocol). We went with the more specific 3-tuple and that resolved the connectivity issues.

The PowerShell to execute this solution (setting the ASM Cloud Service LBSet to 3-tuple, source IP+destinationIP+protocol) is as follows:

Set-AzureLoadBalancedEndpoint -ServiceName "[CloudServiceX]" -LBSetName "[LBSetX]" -Protocol tcp -LoadBalancerDistribution "sourceIPProtocol"

The specific parameter (-LoadBalancerDistribution), which sets the load balancer distribution algorithm, has the following valid values:

  • sourceIP. A two tuple affinity: Source IP, Destination IP
  • sourceIPProtocol. A three tuple affinity: Source IP, Destination IP, Protocol
  • none. A five tuple affinity: Source IP, Source Port, Destination IP, Destination Port, Protocol

Round 2: The problem happened again

Recently, in another work stream, we ran into the same issue. However, the circumstances were slightly different. The problem parameters this time were:

  • An Azure Resource Manager (ARM) environment, not ASM or Classic
    • So, we were using the individual resource of the Azure Internal Load Balancer
    • Much more configuration available here
    • Again though, 5-tuple is the default "Source IP affinity mode" (note: it's no longer simply called the distribution mode in ARM)
  • The work load WAS NOT NDES
  • The work load WAS deployed on Windows Server 2016 again – same as before
    • I’m not sure if that is a coincidence or not

Round 2: Solution

Having gone through the load balancing distribution mode issue only a few months earlier, I had it fresh in my mind and suggested investigating it. After the parameter was changed, in this second instance we were able to resolve the issue and get the intended workload working as intended via the Azure Load Balancer.

With Azure Resource Manager, there's a couple of ways you can go about the configuration change. The most common way would be to change the JSON template, which is quick and easy. Below is an example of the load balancing rules section, which has the specific "loadDistribution" parameter that would need to be changed. The ARM load balancer has basically the same configuration options as its ASM counterpart for this setting: sourceIP and sourceIPProtocol. The only difference is that there is a "Default" option, which is the ARM equivalent of the ASM "None" value (default = 5-tuple).

"loadBalancingRules": [
{
"name": "[concat(parameters('loadBalancers_EXAMPLE_name')]",
"etag": "W/\"[XXXXXXXXXXXXXXXXXXXXXXXXXXX]\"",
"properties": {
"provisioningState": "Succeeded",
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_EXAMPLE_id')]"
},
"frontendPort": XX,
"backendPort": XX,
"enableFloatingIP": false,
"idleTimeoutInMinutes": X,
"protocol": "TCP",
"loadDistribution": "SourceIP",
"backendAddressPool": {
"id": "[parameters('loadBalancers_EXAMPLE_id_1')]"
},
"probe": {
"id": "[parameters('loadBalancers_EXAMPLE_id_2')]"
}
}
}
],

The alternative option is to just use PowerShell. To do that, you can execute something like the following (note that -Name identifies the load balancing rule to change, the result needs to be piped to Set-AzureRmLoadBalancer for the change to be committed, and you may also need to re-specify the rule's existing settings such as protocol and ports):

Get-AzureRmLoadBalancer -Name [LBName] -ResourceGroupName [RGName] | Set-AzureRmLoadBalancerRuleConfig -Name "[RuleName]" -LoadDistribution "[Parameter]" | Set-AzureRmLoadBalancer

 

Conclusion and final thoughts

For the most part I would usually go with the default for any configuration. Through this exercise I have come to question the load balancing requirements, and to be specific about this distribution mode, to avoid any possible fault. Certainly, this is a practice that should extend to every aspect of Azure. The only challenge is balancing questioning every configuration against simply going with the defaults. Happy balancing! (Pun intended.)


This was originally posted on Lucian.Blog by Lucian.

Follow Lucian on Twitter @LucianFrango.

Bots: An Understanding of Time

Some modern applications must understand time, because the messages they receive contain time-sensitive information. Consider a modern Service Desk solution that may have to retrieve tickets based on a date range (the span between dates) or a duration of time.

In this blog post, I’ll explain how bots can interpret date ranges and durations, so they can respond to natural language queries provided by users, either via keyboard or microphone.

First, let’s consider  the building blocks of a bot, as depicted in the following view:

The client runs an application that sends messages to a messaging endpoint in the cloud. The connection between the client and the endpoint is called a channel. The message is basically something typed or spoken by the user.

Now, the bot must handle the message and provide a response. The challenge here is interpreting what the user said or typed. This is where cognitive services come in.

A cognitive service is trained to take a message from the user and resolve it into an intent. The intent determines which function the bot will execute, and the resulting response to the user.

To build time/date intelligence into a bot, the cognitive service must be configured to recognise date/time sensitive information in messages, and the bot itself must be able to convert this information into data it can use to query data sources.

Step 1: The Cognitive Service

In this example, I'll be using the LUIS cognitive service. Because my bot resides in an Australia-based Azure tenant, I'll be using the https://au.luis.ai endpoint. I've created an app called Service Desk App.

Next, I need to build some Intents and Entities and train LUIS.

  • An Entity is a thing or phrase (or set of things or phrases) that may occur in an utterance. I want LUIS (and subsequently the bot) to identify such entities in messages provided to it.

The good news is that LUIS has a prebuilt entity called datetimeV2 so let’s add that to our Service Desk App. You may also want to add additional entities, for example: a list of applications managed by your service desk (and their synonyms), or perhaps resolver groups.

Next, we’ll need an Intent so that LUIS can have the bot execute the correct function (i.e. provide a response appropriate to the message). Let’s create an Intent called List.Tickets.

  • An Intent, or intention represents something the user wants to do (in this case, retrieve tickets from the service desk). A bot may be designed to handle more than one Intent. Each Intent is mapped to a function/method the bot executes.

I’ll need to provide some example utterances that LUIS can associate with the List.Tickets intent. These utterances must contain key words or phrases that LUIS can recognise as entities. I’ll use two examples:

  • “Show me tickets lodged for Skype in the last 10 weeks”
  • “List tickets raised for SharePoint  after July this year”

Now, assuming I've created a list-based entity called Application (so LUIS knows that Skype and SharePoint are Applications), LUIS will recognise these terms as entities in the utterances I've provided:

Now I can train LUIS and test some additional utterances. As a general rule, the more utterances you provide, the smarter LUIS gets when resolving a message provided by a user to an intent. Here’s an example:

Here, I’ve provided a message that is a variation of utterances provided to LUIS, but it is enough for LUIS to resolve it to the List.Tickets intent. 0.84 is a measure of certainty – not a percentage, and it’s weighted against all other intents. You can see from the example that LUIS has correctly identified the Application (“skype”), and the measure of time  (“last week”).

Finally, I publish the Service Desk App. It’s now ready to receive messages relayed from the bot.

Step 2: The Bot

Now, it’s possible to create a bot from the Azure Portal, which will automate many of the steps for you. During this process, you can use the Language Understanding template to create a bot with a built in LUISRecognizer, so the code will be generated for you.

  • A Recognizer is a component (class) of the bot that is responsible for determining intent. The LUISRecognizer does this by relaying the message to the LUIS cognitive service.

Let’s take a look at the bot’s handler for the List.Tickets intent. I’m using Node.js here.

The function that handles the List.Tickets intent uses the EntityRecognizer class and findEntity method to extract entities identified by LUIS and returned in the payload (results).

It passes these values to a function called getData. In this example, I'm going to have my bot call a (fictional) remote service at http://xxxxx.azurewebsites.net/Tickets. This service supports the Open Data (OData) Protocol, allowing me to query data using the query string. Here's the code:

(note I am using the sync-request package to call the REST service synchronously).

Step 3: Chrono

So let’s assume we’ve sent the following message to the bot:

  • “List tickets raised for SharePoint  after July this year”

It’s possible to query an OData data source for date based information using syntax as follows:

  • $filter=CreatedDate gt datetime'2018-03-08T12:00:00' and CreatedDate lt datetime'2018-07-08T12:00:00'

So we need to be able to convert ‘after July this year’ to something we can use in an OData query string.

Enter chrono-node and dateformat – neat packages that can extract date information from natural language statements and convert the resulting date into ISO UTC format respectively. Let’s put them both to use in this example:

It’s important to note that chrono-node will ignore some information provided by LUIS (in this case the word ‘after’, but also ‘last’ and ‘before’), so we need a function to process additional information to create the appropriate filter for the OData query:


Handling time-sensitive information is crucial when building modern applications designed to handle natural language queries. After all, wouldn't it be great to ask for information using your voice, Cortana, and your mobile device when on the move? For now, these modern apps will be dependent on data in older systems with APIs that require dates or date ranges in a particular format.

The beauty of languages like Node.js and the npm package manager is that building these applications becomes an exercise in assembling building blocks as opposed to writing functionality from scratch.

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well why would you want to do that?

  • On demand compute:
    • You may not have access to a physical server, or your computer isn't powerful enough to handle huge data processing, or you definitely do not want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it's super cheap.
  • It's super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details around that. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying against a SQL database, whether it's in Azure or not, isn't that complex. In fact, this part of the post is just to get us started. For this part we're going to do something simple, because if you want to get things done, you need the fastest way of doing it. And that is what we are going to do. Here's a quick summary of the ways I thought of doing it:

    1. Using 'Invoke-Sqlcmd2'. This part of the blog: it's super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL Connection object to push complex SQL Querying scenarios. [[ This is where the magic kicks in.. Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL Database called ‘myDB‘, inside an Azure SQL Server ‘mytestAzureSQL.database.windows.net‘.
  • It's a simple database containing a single table 'test_table'. This table has three columns (Id, Name, Age) and contains only two records.
  • We've set up 'Allow Azure Services' access on this database in the firewall rules (a PowerShell alternative is sketched just after this list). Here's how to do that in the portal, just in case:
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON
  • We do have an Azure automation account setup. We’ll be using that to test our code.
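If you prefer to script that firewall setting instead of using the portal, a rough sketch using the AzureRM.Sql module (the resource group name below is an assumption; the server name matches the example above) looks like this:

# Minimal sketch - the -AllowAllAzureIPs switch creates the special rule that lets Azure services reach the server
New-AzureRmSqlServerFirewallRule -ResourceGroupName "myRG" -ServerName "mytestAzureSQL" -AllowAllAzureIPs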

Now let's get this up and running

Start by creating two variables, one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL Login username and password. You need this as you definitely should not be thinking of storing your password in plain text in script editor.

I still see people storing passwords in plain text inside scripts.
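If you'd rather create these Automation assets with PowerShell than through the portal, a minimal sketch using the AzureRM.Automation cmdlets (the account and resource group names are assumptions; the asset names match the ones used later in the runbook) could look like this:

# Minimal sketch - assumes an Automation account 'myAutomationAccount' in resource group 'myRG'
$params = @{ ResourceGroupName = "myRG"; AutomationAccountName = "myAutomationAccount" }

# Automation variables for the SQL server and database names
New-AzureRmAutomationVariable @params -Name "AzureSQL_ServerName" -Value "mytestAzureSQL.database.windows.net" -Encrypted $false
New-AzureRmAutomationVariable @params -Name "AzureSQL_DBname" -Value "myDB" -Encrypted $false

# Automation credential holding the SQL login
$sqlCred = Get-Credential # prompts for the SQL username and password
New-AzureRmAutomationCredential @params -Name "mySqllogin" -Value $sqlCred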

Now you need to import the ‘invoke-sqlcmd2‘ module in the automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now from the main menu of the automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘, Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook. To do that, click on the Pencil icon from the top menu to get into the editing pane.

Inside the pane, paste the following code. (I’ll go through the details don’t worry).

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

#Import the SQL Server name from the Automation variable.
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

#Import the SQL DB name from the Automation variable.
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet ‘Get-AutomationPSCredential‘ is to retrieve the automation credential object we just created.
    • The next two cmdlets ‘Get-AutomationVariable‘ are to retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let's query our database. To do that, paste the code below after the section above.

#Query to execute
$Query = "select * from Test_Table"

"----- Test Result BEGIN "

# Invoke against the Azure SQL DB
invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

"`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is too simple but you can put in there whatever you want.
    • We've executed the cmdlet 'Invoke-Sqlcmd2'. If you noticed, we didn't have to import the module we've just installed; Azure Automation takes care of that upon every execution.
    • In the cmdlet parameter set, we specified the SQL server (retrieved from the Automation variable) and the database name (also an Automation variable). We then used the credential object we imported from Azure Automation, and finally the query variable we created. An optional switch parameter '-Encrypt' can be used to encrypt the connection to the SQL server.

Let's run the code and look at the output!

From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.

Initially the code goes through the following stages for execution

  • Queuing
  • Starting
  • Running
  • Completed

Now what's the final result? Look at the output pane (the black box) and you should see something like this.

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet, right? The output we're getting here is an object of type 'DataRow'. If you wrap this query into a variable, you can start to do some cool stuff with it, like:

$Result.count or

$Result.Age or even

$Result | where-object -Filterscript {$PSItem.Age -gt 10}
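For instance, a minimal sketch of wrapping the result into a variable and averaging a column (reusing the variables from the runbook above) might look like this:

# Minimal sketch - capture the query result for further processing
$Result = invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

# Count the rows and work out the average age across them
$Result.Count
($Result | Measure-Object -Property Age -Average).Average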

Now imagine if you could do so much more complex things with this.

Quick Hint:

If you include the '-Debug' option in your invoke cmdlet, you will see the username and password in plain text. Just don't run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

 

PowerShell gotcha when connecting ASM Classic VNETs to ARM ExpressRoute

Recently I was working on an Azure ExpressRoute configuration change that required an uplift from a 1Gb circuit to a 10Gb circuit. Now, that's nothing interesting, but of note was using some PowerShell to execute a cmdlet.

A bit of a back story to set the scene here; and I promise it will be brief.

You can no longer provision Azure ExpressRoute circuits in the Classic or ASM deployment model. All ExpressRoute circuits that are provisioned now are Azure Resource Manager (ARM) deployments. So there is a very grey area between which cmdlets to run in which PowerShell module.

The problem

The environment I was working with had a mixture of ASM and ARM deployments. The ExpressRoute circuit was re-created in ARM. When attempting to follow the Microsoft documentation to connect a VNET to that new circuit (available on this docs.microsoft.com page), I ran the following command–

Get-AzureDedicatedCircuitLink 
New-AzureDedicatedCircuitLink -ServiceKey "[XXXX-XXXX-XXXX-XXXX-XXXX]" -VNetName "[MyVNet]"

–but PowerShell threw out the following error (even after doing the end to end process in a new PS ISE session):

Get-AzureDedicatedCircuit : Object reference not set to an instance of an object. 
At line:1 char:1 + 
Get-AzureDedicatedCircuit 
+ ~~~~~~~~~~~~~~~~~~~~~~~~~     
+ CategoryInfo          : NotSpecified: (:) [Get-AzureDedicatedCircuit], NullReferenceException     
+ FullyQualifiedErrorId : System.NullReferenceException,Microsoft.WindowsAzure.Commands.ExpressRoute.GetAzureDedicatedCircuitCommand

Solution

This is actually quite a tricky one, as there's little to no documentation that outlines this specifically. It's not too difficult though, as there are only two things you need to do:

  1. Ensure you have version 5.1.1 of the Azure PowerShell modules
    • Ensure you also have the Azure ExpressRoute PowerShell module
  2. Execute all the Import-Module commands in the right order, as per below

Note: If you have a newer version of the Azure PowerShell modules, like I did (version 5.2.0), import the older module as per below

Import-Module "C:\Program Files\WindowsPowerShell\Modules\Azure\5.1.1\Azure.psd1"
Import-Module "C:\Program Files\WindowsPowerShell\Modules\Azure\5.1.1\ExpressRoute\ExpressRoute.psd1

Then you can execute the New-AzureDedicatedCircuitLink command and connect your ASM VNET to ARM ExpressRoute.

Cheers!


Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.

Using your Voice to Search Microsoft Identity Manager – Part 2

Introduction

Last month I wrote this post that detailed using your voice to search/query Microsoft Identity Manager. That post demonstrated a working solution (GitHub repository coming next month) but was still incomplete if it was to be used in production within an Enterprise. I hinted then that there were additional enhancements I was looking to make. One is an Auditing/Reporting aspect and that is what I cover in this post.

Overview

The one element of the solution that has visibility of each search scenario is the IoT Device. As a potential future enhancement this could also be a Bot. For each request I wanted to log/audit;

  • Device the query was initiated from (it is possible to have many IoT devices; physical or bot leveraging this function)
  • The query
  • The response
  • Date and Time of the event
  • User the query targeted

To achieve this my solution is to;

  • On my IoT Device the query, target user and date/time is held during the query event
  • At the completion of the query the response along with the earlier information is sent to the IoT Hub using the IoT Hub REST API
  • The event is consumed from the IoT Hub by an Azure Event Hub
  • The message containing the information is processed by Stream Analytics and put into Azure Table Storage and Power BI.

Azure Table Storage provides the logging/auditing trail of what requests have been made and the responses.  Power BI provides the reporting aspect. These two services provide visibility into what requests have been made, against who, when etc. The graphic below shows this in the bottom portion of the image.

[Diagram: auditing and reporting architecture for searching MIM with speech]

Sending IoT Device Events to IoT Hub

I covered this piece in a previous post here in PowerShell. I converted it from PowerShell to Python to run on my device. For initial end-to-end testing when developing the solution, though, the PowerShell version of building the message body and sending it looks like this:

[string]$datetime = get-date
$datetime = $datetime.Replace("/","-")

# Build the event payload: device, message ID and the MIM query/response being audited
$body = @{
 deviceId = $deviceID
 messageId = $datetime
 messageString = "$($deviceID)-to-Cloud-$($datetime)"
 MIMQuery = "Does the user Jerry Seinfeld have an Active Directory Account"
 MIMResponse = "Yes. Their LoginID is jerry.seinfeld"
 User = "Jerry Seinfeld"
}

# $Headers and $iotHubRestURI (the device-to-cloud REST endpoint and SAS authorization header) are set up as per the previous post
$body = $body | ConvertTo-Json
Invoke-RestMethod -Uri $iotHubRestURI -Headers $Headers -Method Post -Body $body

Event Hub and IoT Hub Configuration

First I created an Event Hub. Then on my IoT Hub I added an Event Subscription and pointed it to my Event Hub.

IoTHub Event Hub.PNG

Streaming Analytics

I then created a Stream Analytics Job. I configured two Inputs. One each from my IoT Hub and from my Event Hub.

Stream Analytics Inputs.PNG

I then created two Outputs. One for Table Storage, for which I used an existing Storage Account from my solution, and the other for Power BI, using an existing Workspace but creating a new Dataset. For the Table Storage output I specified deviceId as the Partition key and messageId as the Row key.

Stream Analytics Outputs.PNG

Finally as I’m keeping all the data simple in what I’m sending, my query is basically copying from the Inputs to the Outputs. One is to get the events to Table Storage and the other to get it to Power BI. Therefore the query looks like this.

Stream Analytics Query.PNG

Events in Table Storage

After sending through some events I could see rows being added to Table Storage. When I added an additional column to the data the schema-less Table Storage obliged and dynamically added another column to the table.

Table Storage.PNG

A full record looks like this.

Full Record.PNG

Events in Power BI

Just like in Table Storage, in Power BI I could see the dataset and the table with the event data. I could create a report with some nice visuals just as you would with any other dataset. When I added an additional field to the event being sent from the IoT Device it magically showed up in the Power BI Dataset Table.

PowerBI.PNG

Summary

Using the Azure IoT Hub REST API I can easily send information from my IoT Device and then have it processed through Stream Analytics into Table Storage and Power BI. Instant auditing and reporting functionality.

Let me know what you think on twitter @darrenjrobinson

Securing your Web front-end with Azure Application Gateway Part 2

In part one of this post we looked at configuring an Azure Application Gateway to secure your web application front-end; it is available here.

In part two we will be looking at some additional post configuration tasks and how to start investigating whether the WAF is blocking any of our application traffic and how to check for this.

First up we will look at configuring some NSG (Network Security Group) inbound and outbound rules for the subnet that the Application Gateway is deployed within.

The inbound rules that you will require are below.

  • Allow HTTPS Port 443 from any source.
  • Allow HTTP Port 80 from any source (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 from your Web Server subnet.
  • Allow Application Gateway Health API ports 65503-65534. These are required for correct operation of your Application Gateway.
  • Deny-all other traffic, set this rule as a high value ie. 4000 – 4096.

The outbound rules that you will require are below.

  • Allow HTTPS Port 443 from your Application Gateway to any destination.
  • Allow HTTP Port 80 from your Application Gateway to any destination (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 to your Web Server subnet.
  • Allow Internet traffic using the Internet traffic tag.
  • Deny-all other traffic, set this rule as a high value ie. 4000 – 4096.

Now we need to configure the Application Gateway to write Diagnostic Logs to a Storage Account. To do this open the Application Gateway from within the Azure Portal, find the Monitoring section and click on Diagnostic Logs.

Click on Add diagnostic settings and browse to the Storage Account you wish to write logs to, select all log types and save the changes.

[Screenshot: diagnostic settings storage configuration]
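If you prefer PowerShell over the portal for this step, a minimal sketch (the gateway and resource group names follow part one of this series; the storage account name is an assumption) could look like this:

# Minimal sketch - send all Application Gateway diagnostic logs (including the ApplicationGatewayFirewallLog) to a storage account
$AppGwId = (Get-AzureRmApplicationGateway -Name "PRD-APPGW-WAF" -ResourceGroupName "PRD-RG").Id
$StorageId = (Get-AzureRmStorageAccount -ResourceGroupName "PRD-RG" -Name "prddiaglogs").Id
Set-AzureRmDiagnosticSetting -ResourceId $AppGwId -StorageAccountId $StorageId -Enabled $true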

Now that the Application Gateway is configured to store diagnostic logs (we need the ApplicationGatewayFirewallLog), you can start testing your web front-end. To do this, you should first set the WAF to "Detection" mode, which will log any traffic that would have been blocked. This setting is only recommended for testing purposes and should not be a permanent state.

To change this setting open your Application Gateway from within the Azure Portal and click Web Application Firewall under settings.

Change the Firewall mode to “Detection” for testing purposes. Save the changes.
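The same change can also be scripted; a rough sketch with the AzureRM cmdlets (gateway name and OWASP rule set values assumed from part one of this series) is:

# Minimal sketch - switch the WAF to Detection mode for testing, then back to Prevention afterwards
$AppGw = Get-AzureRmApplicationGateway -Name "PRD-APPGW-WAF" -ResourceGroupName "PRD-RG"
Set-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $AppGw -Enabled $true -FirewallMode "Detection" -RuleSetType "OWASP" -RuleSetVersion "3.0"
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw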

Now you can start your web front-end testing. Any traffic that would be blocked will now be allowed, however it will still create a log entry showing you the details for the traffic that would be blocked.

Once testing is completed, open your Storage Account from within the Azure Portal and browse to the insights-logs-applicationgatewayfirewalllog container. Continue opening the folder structure and find the date and time of the testing. The log file is named PT1H.json; download it to your local computer.
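If you would rather pull the log down with PowerShell, a minimal sketch (the storage account name and key are assumptions for your environment; this simply grabs the most recently modified blob in the container) is:

# Minimal sketch - download the latest WAF log blob to a local file
$ctx = New-AzureStorageContext -StorageAccountName "prddiaglogs" -StorageAccountKey "<storage key>"
Get-AzureStorageBlob -Container "insights-logs-applicationgatewayfirewalllog" -Context $ctx |
 Sort-Object LastModified -Descending | Select-Object -First 1 |
 Get-AzureStorageBlobContent -Destination "C:\Temp\PT1H.json" -Context $ctx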

Open the PT1H.json file. Any entries for traffic that would be blocked will look similar to the below.

{
"resourceId": "/SUBSCRIPTIONS/....",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T03:30:59Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_1",
  "clientIp": "202.141.210.52",
  "clientPort": "0",
  "requestUri": "/uri",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "942430",
  "message": "Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)",
  "action": "Detected",
  "site": "Global",
  "details": {
    "message": "Warning. Pattern match \",
    "file": "rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf",
    "line": "1002"
  },
  "hostname": "website.com.au"
  }
}

This will give you useful information to either fix your application or disable a rule that is blocking traffic; the "ruleId" section of the log will show you the rule that requires action. Rules should only be disabled temporarily while you remediate your application. They can be disabled/enabled from within the Web Application Firewall tab within Application Gateway (or via PowerShell, as sketched below); just make sure the "Advanced Rule Configuration" box is ticked so that you can see them.
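As a rough sketch, disabling an individual rule with PowerShell (using rule 942430 from the log above as the example; the gateway name is again assumed from part one) might look like this:

# Minimal sketch - disable rule 942430 within its OWASP rule group while the application is remediated
$AppGw = Get-AzureRmApplicationGateway -Name "PRD-APPGW-WAF" -ResourceGroupName "PRD-RG"
$disabledGroup = New-AzureRmApplicationGatewayFirewallDisabledRuleGroupConfig -RuleGroupName "REQUEST-942-APPLICATION-ATTACK-SQLI" -Rules 942430
Set-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $AppGw -Enabled $true -FirewallMode "Prevention" -RuleSetType "OWASP" -RuleSetVersion "3.0" -DisabledRuleGroups $disabledGroup
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw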

This process of testing and fixing code/disabling rules should continue until you can complete a test without any errors showing in the logs. Once no errors occur you can change the Web Application Firewall mode back to “Prevention” mode which will make the WAF actively block traffic that does not pass the rule sets.

Something important to note is the log entry type below with a ruleId of "0". This error would need to be resolved by remediating the code, as the default rules cannot be changed within the WAF. Microsoft are working on changing this, however at the time of writing it cannot be done, as the default data length cannot be changed. Sometimes this will occur with a part of the application that cannot be resolved; if this is the case you would need to look at another WAF product, such as a Barracuda.

{
"resourceId": "/SUBSCRIPTIONS/...",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T01:21:44Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_0",
  "clientIp": "1.136.111.168",
  "clientPort": "0",
  "requestUri": "/..../api/document/adddocument",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "0",
  "message": "",
  "action": "Blocked",
  "site": "Global",
  "details": {
    "message": "Request body no files data length is larger than the configured limit (131072).. Deny with code (413)",
    "data": "",
    "file": "",
    "line": ""
  },
  "hostname": "website.com.au"
  }
}

 

In this post we looked at some post-configuration tasks for Application Gateway, such as configuring NSG rules to further protect the network, configuring diagnostic logging, and how to check the Web Application Firewall logs for application traffic that would be blocked by the WAF. The Application Gateway can be a good alternative to dedicated appliances as it is easier to configure and manage. However, in some cases where more control of WAF rule sets is required, a dedicated WAF appliance may be needed.

Hopefully this two part series helps you with your decision making when it comes to securing your web front-end applications.

Securing your Web front-end with Azure Application Gateway Part 1

I have just completed a project with a customer who were using Azure Application Gateway to secure their web front-end and thought it would be good to post some findings.

This is part one in a two part post looking at how to secure a web front-end using Azure Application Gateway with the WAF component enabled. In this post I will explain the process for configuring the Application Gateway once deployed. You can deploy the Application Gateway from an ARM Template, Azure PowerShell or the portal. To be able to enable the WAF component you must use a Medium or Large instance size for the Application Gateway.

Using Application Gateway allows you to remove the need for your web front-end to have a public endpoint assigned to it, for instance if it is a Virtual Machine then you no longer need a Public IP address assigned to it. You can deploy Application Gateway in front of Virtual Machines (IaaS) or Web Apps (PaaS).

An overview of how this will look is shown below. The Application Gateway requires its own subnet which no other resources can be deployed to. The web server (Virtual Machine) can be assigned to a separate subnet, if using a web app no subnet is required.

[Diagram: Application Gateway in front of the web server subnet]

 

The benefits we will receive from using Application Gateway are:

  • Remove the need for a public endpoint from our web server.
  • End-to-end SSL encryption.
  • Automatic HTTPS to HTTPS redirection.
  • Multi-site hosting, though in this example we will configure a single site.
  • In-built WAF solution utilising OWASP core rule sets 3.0 or 2.2.9.

To follow along you will require Azure PowerShell module version 3.6 or later. You can install or upgrade by following this link.

Before starting you need to make sure that an Application Gateway with an instance size of Medium or Large has been deployed with the WAF component enabled and that the web server or web app has been deployed and configured.

Now open PowerShell ISE and login to your Azure account using the below command.


Login-AzureRmAccount

Now we need to set the variables to work with. These variables are your Application Gateway name, the resource group where your Application Gateway is deployed, your Backend Pool name and IP, your HTTP and HTTPS Listener names, your host name (website name), the HTTP and HTTPS rule names, your front-end (Private) and back-end (Public) SSL names, along with your Private certificate password.

NOTE: The Private certificate needs to be in PFX format and your Public certificate in CER format.

Change these to suit your environment and copy both your pfx and cer certificate files to C:\Temp\Certs on your computer.

# Application Gateway name.
[string]$ProdAppGw = "PRD-APPGW-WAF"
# The resource group where the Application Gateway is deployed.
[string]$resourceGroup = "PRD-RG"
# The name of your Backend Pool.
[string]$BEPoolName = "BackEndPool"
# The IP address of your web server or URL of web app.
[string]$BEPoolIP = "10.0.1.10"
# The name of the HTTP Listener.
[string]$HttpListener = "HTTPListener"
# The name of the HTTPS Listener.
[string]$HttpsListener = "HTTPSListener"
# Your website hostname/URL.
[string]$HostName = "website.com.au"
# The HTTP Rule name.
[string]$HTTPRuleName = "HTTPRule"
# The HTTPS Rule name.
[string]$HTTPSRuleName = "HTTPSRule"
# SSL certificate name for your front-end (Private cert pfx).
[string]$FrontEndSSLName = "Private_SSL"
# SSL certificate name for your back-end (Public cert cer).
[string]$BackEndSSLName = "Public_SSL"
# Password for front-end SSL (Private cert pfx).
[string]$sslPassword = "<Enter your Private Certificate pfx password here.>"

Our first step is to configure the Front and Back end HTTPS settings on the Application Gateway.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Front-end (Private) SSL certificate. If you have any issues with this step you can upload the certificate from within the Azure Portal by creating a new Listener.

Add-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGw `
-Name $FrontEndSSLName -CertificateFile "C:\Temp\Certs\PrivateCert.pfx" `
-Password $sslPassword

Save the certificate as a variable.

$AGFECert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGW `
            -Name $FrontEndSSLName

Configure the front-end port for SSL.

Add-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443" `
-Port 443

Add the back-end (Public) SSL certificate.

Add-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName `
-CertificateFile "C:\Temp\Certs\PublicCert.cer"

Save the back-end (Public) SSL as a variable.

$AGBECert = Get-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
            -Name $BackEndSSLName

Configure back-end HTTPS settings.

Add-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings" `
-Port 443 `
-Protocol Https `
-CookieBasedAffinity Enabled `
-AuthenticationCertificates $AGBECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next stage is to configure the back-end pool to connect to your Virtual Machine or Web App. This example is using the IP address of the NIC attached to the web server VM. If using a web app as your front-end you can configure it to accept traffic only from the Application Gateway by setting an IP restriction on the web app to the Application Gateway IP address.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Backend Pool Virtual Machine or Web App. This can be a URL or an IP address.

Add-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGw `
-Name $BEPoolName `
-BackendIPAddresses $BEPoolIP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next steps are to configure the HTTP and HTTPS Listeners.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the front-end port as a variable – port 80.

$AGFEPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
            -Name "appGatewayFrontendPort"

Save the front-end IP configuration as a variable.

$AGFEIPConfig = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $AppGw `
                -Name "appGatewayFrontendIP"

Add the HTTP Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener `
-Protocol Http `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFEPort `
-HostName $HostName

Save the HTTP Listener for your website as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HTTPListener

Save the front-end SSL port as a variable – port 443.

$AGFESSLPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
               -Name "appGatewayFrontendPort443"

Add the HTTPS Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPSListener `
-Protocol Https `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFESSLPort `
-HostName $HostName `
-RequireServerNameIndication true `
-SslCertificate $AGFECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The final part of the configuration is to configure the HTTP and HTTPS rules and the HTTP to HTTPS redirection.

First configure the HTTPS rule.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the Backend Pool as a variable.

$BEP = Get-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGW `
       -Name $BEPoolName

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Save the back-end HTTPS settings as a variable.

$AGHTTPS = Get-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
           -Name "appGatewayBackendHttpsSettings"

Add the HTTPS rule.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPSRuleName `
-RuleType Basic `
-BackendHttpSettings $AGHTTPS `
-HttpListener $AGSSLListener `
-BackendAddressPool $BEP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Now configure the HTTP to HTTPS redirection and the HTTP rule with the redirection applied.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Add the HTTP to HTTPS redirection.

Add-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-RedirectType Permanent `
-TargetListener $AGSSLListener `
-IncludePath $true `
-IncludeQueryString $true `
-ApplicationGateway $AppGw

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the redirect as a variable.

$Redirect = Get-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
            -ApplicationGateway $AppGw

Save the HTTP Listener as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HttpListener

Add the HTTP rule with redirection to HTTPS.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPRuleName `
-RuleType Basic `
-HttpListener $AGListener `
-RedirectConfiguration $Redirect

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

 
In this post we covered how to configure Azure Application Gateway to secure a web front-end, whether running on Virtual Machines or Web Apps. We have configured the Gateway for end-to-end SSL encryption and automatic HTTP to HTTPS redirection, removing this overhead from the web server.

In part two we will look at some additional post configuration tasks and how to make sure your web application works with the WAF component.

Getting Started with Adaptive Cards and the Bot Framework

This article will provide an introduction to working with AdaptiveCards and the Bot Framework. AdaptiveCards provide bot developers with an option to create their own card templates to suit a variety of different scenarios. I'll also show you a couple of tricks with node.js that will help you design smart.

Before I run through the example, I want to point you to some great resources from adaptivecards.io which will help you build and test your own AdaptiveCards:

  • The schema explorer provides a breakdown of the constructs you can use to build your AdaptiveCards. Note that there are limitations to the schema so don’t expect to do all the things you can do with regular mark-up.
  • The schema visualizer is a great tool to enable you (and your stakeholders) to give the cards a test drive.

There are many great examples online (start with GitHub), so you can go wild with your own designs.

In this example, we’re going to use an AdaptiveCard to display an ‘About’ card for our bot. Schemas for AdaptiveCards are JSON payloads. Here’s the schema for the card:

This generates the following card (go play in the visualizer):

[Screenshot: the rendered About card]

We’ve got lots of %placeholders% for information the bot will insert at runtime. This information could be sourced, for example, from a configuration file collocated with the bot, or from a service the bot has to invoke.

Next, we need to define the components that will play a role in populating our About card. My examples here will use node.js. The following simple view outlines what we need to create in our Visual Studio Code workspace:

[Screenshot: Visual Studio Code workspace layout]

The about.json file contains the schema for the AdaptiveCard (which is the code in the script block above). I like to create a folder called ‘cards’ in my workspace and store the schemas for each AdaptiveCard there.

The Source Data

I’m going to use dotenv to store the values we need to plug into our AdaptiveCard at runtime. It’s basically a config file (.env) that sits with your bot. Here we declare the values we want inserted into the AdaptiveCard at runtime:

This is fine for the example here but in reality you’ll probably be hitting remote services for records and parsing returned JSON payloads, rendering carousels of cards.

The Class

about.js is the object representation of the card. It provides attributes for each item of source data and a method to generate a card schema for our bot. Here we go:

The constructor simply offloads incoming arguments to class properties. The toCard() method reads the about.json schema and recursively does a find/replace job on the class properties. A card is created and the updated schema is assigned to the card’s content property. The contentType attribute in the JSON payload tells a handling function that the schema represents an AdaptiveCard.

The Bot

In our bot we have a series of event handlers that trigger based on input from the user via the communication app, or from a cognitive service, which distils input from the user into an intent.

For this example, let’s assume that we have an intent called Show.Help. Utterances from the user such as ‘tell me about yourself’ or quite simply ‘help’ might resolve to this intent.

So we need to add a handler (function) in app.js that responds to the Show.Help intent (this is called a triggerAction). The handler deals with the dialog (interaction) between the user and the bot so we need it to both generate the About card and handle any interactions the card supports (such as clicking the Submit Feedback button on the card).

Note that the dialog between user and bot ends when the endDialog function is called, or when the conditions of the cancelAction are met.

Here’s the code for the handler:

The function starts with a check to see if a dialog is in session (i.e. a message was received). If not (the else condition), it’s a new dialog.

We instantiate an instance of the About class and use the toCard() method to generate a card to add to the message the bot sends back to the channel. So you end up with this:

[Screenshot: the bot's response showing the About card]


And there you have it. There are many AdaptiveCard examples online but I couldn’t find any for Node.js that covered the manipulation of cards at runtime. Now, go forth and build fantastic user experiences for your customers!

Sending Events from IoT Devices to Azure IoT Hub using HTTPS and REST

Overview

Different IoT Devices have different capabilities. Whether it is a Micro-controller or Single Board Computer your options will vary. In this post I detailed using MQTT to send messages from an IoT Device to an Azure IoT Hub as well as using the AzureIoT PowerShell Module.

For a current project I needed to send the events from an IoT Device that runs Linux and had Python support. The Azure IoT Hub includes an HTTPS REST endpoint. For this particular application using the HTTPS REST endpoint is going to be much easier than compiling the Azure SDK for the particular flavour of Linux running on my device.

Python isn’t my language of choice so first I got it working in PowerShell then converted it to Python. I detail both scripts here as a guide for anyone else trying to do something similar but also for myself as I know I’m going to need these snippets in the future.

Prerequisites

You'll need to have configured an Azure IoT Hub and registered your IoT Device with it.

Follow this post to get started.

PowerShell Device to Cloud Events using HTTPS and REST Script

Here is the PowerShell version of the script. Update Line 3 for your DeviceID, Line 5 for your IoT Hub Name and Line 11 for your SAS Token.
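The script itself was embedded in the original post; as a rough sketch of the same approach (the device ID, hub name and SAS token below are placeholders, and the api-version may differ for your hub), it looks something like this:

# Minimal sketch - send a device-to-cloud message to Azure IoT Hub over HTTPS/REST
$deviceID = "myIoTDevice"
$iotHubName = "myIoTHub"
$sasToken = "SharedAccessSignature sr=..." # device-scoped SAS token

# Device-to-cloud messages are POSTed to the device's /messages/events endpoint
$iotHubRestURI = "https://$iotHubName.azure-devices.net/devices/$deviceID/messages/events?api-version=2018-06-30"
$headers = @{ "Authorization" = $sasToken }

$body = @{
 deviceId = $deviceID
 messageString = "$deviceID-to-Cloud-$(Get-Date)"
} | ConvertTo-Json

Invoke-RestMethod -Uri $iotHubRestURI -Method Post -Headers $headers -Body $body -ContentType "application/json"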

Using Device Explorer to Monitor the Device on the associated IoT Hub I can see that the message is received.

Device Explorer

Python Device to Cloud Events using HTTPS and REST Script

Here is my Python version of the same script. Again update Line 5 for your IoT DeviceID, Line 7 for your IoT Hub and Line 12 for the SAS Token.

And in Device Explorer we can see the message is received.

Device Explorer Python

Summary

When you have a device that has the ability to run Python you can use the IoT Hub HTTPS REST API to send messages from the Client to Cloud negating the need to build and compile the Azure IoT SDK to generate client libraries.