Using your Voice to Search Microsoft Identity Manager – Part 2

Introduction

Last month I wrote this post detailing how to use your voice to search/query Microsoft Identity Manager. That post demonstrated a working solution (GitHub repository coming next month), but it was still incomplete for production use within an enterprise. I hinted then that there were additional enhancements I was looking to make. One is an auditing/reporting aspect, and that is what I cover in this post.

Overview

The one element of the solution that has visibility of each search scenario is the IoT device (as a potential future enhancement this could also be a bot). For each request I wanted to log/audit:

  • Device the query was initiated from (it is possible to have many IoT devices, physical or bot, leveraging this function)
  • The query
  • The response
  • Date and Time of the event
  • User the query targeted

To achieve this, my solution is to:

  • On my IoT Device the query, target user and date/time are held during the query event
  • At the completion of the query the response along with the earlier information is sent to the IoT Hub using the IoT Hub REST API
  • The event is consumed from the IoT Hub by an Azure Event Hub
  • The message containing the information is processed by Stream Analytics and put into Azure Table Storage and Power BI.

Azure Table Storage provides the logging/auditing trail of what requests have been made and the responses. Power BI provides the reporting aspect. Together, these two services provide visibility into what requests have been made, against whom, when, etc. The graphic below shows this in the bottom portion of the image.

Auditing Reporting Searching MIM with Speech.png

Sending IoT Device Events to IoT Hub

I covered this piece in PowerShell in a previous post here. I converted it from PowerShell to Python to run on my device. For initial end-to-end testing while developing the solution, though, building the body of the message and sending it in PowerShell looks like this:

# Use the current date/time as a unique message ID (strip "/" characters)
[string]$datetime = Get-Date
$datetime = $datetime.Replace("/","-")

# Build the audit event
$body = @{
 deviceId = $deviceID
 messageId = $datetime
 messageString = "$($deviceID)-to-Cloud-$($datetime)"
 MIMQuery = "Does the user Jerry Seinfeld have an Active Directory Account"
 MIMResponse = "Yes. Their LoginID is jerry.seinfeld"
 User = "Jerry Seinfeld"
}

# Convert to JSON and send to the IoT Hub REST API
$body = $body | ConvertTo-Json
Invoke-RestMethod -Uri $iotHubRestURI -Headers $Headers -Method Post -Body $body
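On the device itself the same event is sent from Python. The sketch below mirrors the PowerShell above; note that the SAS-token helper, the api-version value and the hub/device names here are illustrative assumptions rather than the exact code running on my device.

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.parse
import urllib.request


def generate_sas_token(uri, key, policy_name, ttl_seconds=3600):
    """Build a Shared Access Signature for authenticating to the IoT Hub REST API."""
    expiry = int(time.time() + ttl_seconds)
    sign_key = "{}\n{}".format(urllib.parse.quote_plus(uri), expiry)
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), sign_key.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        urllib.parse.quote_plus(uri), urllib.parse.quote_plus(signature), expiry, policy_name
    )


def build_audit_event(device_id, query, response, user):
    """Mirror the PowerShell body: deviceId, messageId, query, response and target user."""
    message_id = time.strftime("%d-%m-%Y %H:%M:%S")
    return {
        "deviceId": device_id,
        "messageId": message_id,
        "messageString": "{}-to-Cloud-{}".format(device_id, message_id),
        "MIMQuery": query,
        "MIMResponse": response,
        "User": user,
    }


def send_event(hub_hostname, device_id, sas_token, event):
    """POST the event to the device-to-cloud messages endpoint.

    The api-version below is an assumption; use the one current for your hub.
    """
    url = "https://{}/devices/{}/messages/events?api-version=2018-06-30".format(
        hub_hostname, device_id
    )
    request = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Authorization": sas_token, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(request)
```

At the completion of a query the device simply calls `build_audit_event` with the captured details and hands the result to `send_event`.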

Event Hub and IoT Hub Configuration

First I created an Event Hub. Then on my IoT Hub I added an Event Subscription and pointed it to my Event Hub.

IoTHub Event Hub.PNG

Streaming Analytics

I then created a Stream Analytics Job. I configured two Inputs. One each from my IoT Hub and from my Event Hub.

Stream Analytics Inputs.PNG

I then created two Outputs: one for Table Storage, for which I used an existing Storage Account in my solution, and the other for Power BI, using an existing Workspace but creating a new Dataset. For the Table Storage output I specified deviceId as the Partition key and messageId as the Row key.

Stream Analytics Outputs.PNG

Finally, as I’m keeping the data I’m sending simple, my query essentially copies from the Inputs to the Outputs: one statement gets the events to Table Storage and the other gets them to Power BI. The query therefore looks like this.

Stream Analytics Query.PNG
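In text form, a pass-through query of the following shape is all that is required (the input and output alias names below are assumptions; use the names you gave your own inputs and outputs):

```sql
-- Copy events from the IoT Hub input into Table Storage
SELECT
    *
INTO
    [TableStorageOutput]
FROM
    [IoTHubInput]

-- Copy the same events into the Power BI dataset
SELECT
    *
INTO
    [PowerBIOutput]
FROM
    [EventHubInput]
```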

Events in Table Storage

After sending through some events I could see rows being added to Table Storage. When I added an additional column to the data the schema-less Table Storage obliged and dynamically added another column to the table.

Table Storage.PNG

A full record looks like this.

Full Record.PNG

Events in Power BI

Just like in Table Storage, in Power BI I could see the dataset and the table with the event data. I could create a report with some nice visuals just as you would with any other dataset. When I added an additional field to the event being sent from the IoT Device it magically showed up in the Power BI Dataset Table.

PowerBI.PNG

Summary

Using the Azure IoT Hub REST API I can easily send information from my IoT Device and then have it processed through Stream Analytics into Table Storage and Power BI. Instant auditing and reporting functionality.

Let me know what you think on twitter @darrenjrobinson

Securing your Web front-end with Azure Application Gateway Part 2

In part one of this series, available here, we looked at configuring an Azure Application Gateway to secure your web application front-end.

In part two we will look at some additional post-configuration tasks and at how to check whether the WAF is blocking any of our application traffic.

First up, we will configure some NSG (Network Security Group) inbound and outbound rules for the subnet that the Application Gateway is deployed within.

The inbound rules that you will require are below.

  • Allow HTTPS Port 443 from any source.
  • Allow HTTP Port 80 from any source (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 from your Web Server subnet.
  • Allow Application Gateway Health API ports 65503-65534. These are required for correct operation of your Application Gateway.
  • Deny all other traffic; set this rule with a high priority value, i.e. 4000 – 4096.

The outbound rules that you will require are below.

  • Allow HTTPS Port 443 from your Application Gateway to any destination.
  • Allow HTTP Port 80 from your Application Gateway to any destination (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 to your Web Server subnet.
  • Allow Internet traffic using the Internet traffic tag.
  • Deny all other traffic; set this rule with a high priority value, i.e. 4000 – 4096.
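As a sketch of what the inbound side can look like with the AzureRM PowerShell module (the rule names, priorities and NSG name below are examples to adapt to your environment; the HTTP and web-server-subnet rules are omitted for brevity):

```powershell
# Allow HTTPS from any source to the Application Gateway subnet
$httpsIn = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTPS-Inbound" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443

# Allow the Application Gateway health API port range
$healthIn = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-AppGW-Health" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 65503-65534

# Deny everything else, using a high priority value
$denyIn = New-AzureRmNetworkSecurityRuleConfig -Name "Deny-All-Inbound" `
    -Access Deny -Protocol * -Direction Inbound -Priority 4000 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange *

New-AzureRmNetworkSecurityGroup -Name "PRD-APPGW-NSG" -ResourceGroupName "PRD-RG" `
    -Location "australiaeast" -SecurityRules $httpsIn, $healthIn, $denyIn
```

The outbound rules can be built the same way with -Direction Outbound.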

Now we need to configure the Application Gateway to write Diagnostic Logs to a Storage Account. To do this open the Application Gateway from within the Azure Portal, find the Monitoring section and click on Diagnostic Logs.

Click on Add diagnostic settings and browse to the Storage Account you wish to write logs to, select all log types and save the changes.

DiagStg

Now that the Application Gateway is configured to store diagnostic logs (we need the ApplicationGatewayFirewallLog), you can start testing your web front-end. To do this, first set the WAF to “Detection” mode, which will log any traffic that would have been blocked. This setting is only recommended for testing purposes and should not be a permanent state.

To change this setting open your Application Gateway from within the Azure Portal and click Web Application Firewall under settings.

Change the Firewall mode to “Detection” for testing purposes. Save the changes.

Now you can start your web front-end testing. Any traffic that would have been blocked will now be allowed; however, it will still create a log entry showing you the details of the traffic that would have been blocked.

Once testing is completed open your Storage Account from within the Azure Portal and browse to the insights-logs-applicationgatewayfirewalllog container, continue opening the folder structure and find the date and time of the testing. The log file is named PT1H.json, download it to your local computer.

Open the PT1H.json file. Any entries for traffic that would be blocked will look similar to the below.

{
"resourceId": "/SUBSCRIPTIONS/....",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T03:30:59Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_1",
  "clientIp": "202.141.210.52",
  "clientPort": "0",
  "requestUri": "/uri",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "942430",
  "message": "Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)",
  "action": "Detected",
  "site": "Global",
  "details": {
    "message": "Warning. Pattern match",
    "file": "rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf",
    "line": "1002"
  },
  "hostname": "website.com.au"
  }
}

This will give you useful information to either fix your application or disable a rule that is blocking traffic; the “ruleId” field of the log entry shows you the rule that requires action. Rules should only be disabled temporarily while you remediate your application. They can be disabled/enabled from within the Web Application Firewall tab within Application Gateway; just make sure the “Advanced Rule Configuration” box is ticked so that you can see them.
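To see at a glance which rules fired during a test run, the downloaded file can be tallied with a short script. This is a sketch assuming the record layout shown above; depending on how the log was written, PT1H.json may contain either one JSON object per line or a single object with a records array, so both shapes are handled:

```python
import json
from collections import Counter


def load_entries(text):
    """Parse PT1H.json content: a {"records": [...]} wrapper or one JSON object per line."""
    try:
        doc = json.loads(text)
    except ValueError:
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    return doc.get("records", [doc]) if isinstance(doc, dict) else doc


def tally_rules(entries):
    """Count firewall log entries per (ruleId, action) pair."""
    counts = Counter()
    for entry in entries:
        props = entry.get("properties", {})
        counts[(props.get("ruleId"), props.get("action"))] += 1
    return counts
```

Calling `tally_rules(load_entries(...)).most_common()` then lists the rule IDs to investigate first.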

This process of testing and fixing code/disabling rules should continue until you can complete a test without any errors showing in the logs. Once no errors occur you can change the Web Application Firewall mode back to “Prevention” mode which will make the WAF actively block traffic that does not pass the rule sets.

Something important to note is the log entry type below, with a ruleId of “0”. This error needs to be resolved by remediating the code, as the default rules cannot be changed within the WAF. Microsoft is working on changing this, however at the time of writing it cannot be done, as the default data length cannot be changed. Sometimes this will occur with a part of the application that cannot be remediated; if this is the case you would need to look at another WAF product, such as a Barracuda appliance.

{
"resourceId": "/SUBSCRIPTIONS/...",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T01:21:44Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_0",
  "clientIp": "1.136.111.168",
  "clientPort": "0",
  "requestUri": "/..../api/document/adddocument",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "0",
  "message": "",
  "action": "Blocked",
  "site": "Global",
  "details": {
    "message": "Request body no files data length is larger than the configured limit (131072).. Deny with code (413)",
    "data": "",
    "file": "",
    "line": ""
  },
  "hostname": "website.com.au"
  }
}

 

In this post we looked at some post-configuration tasks for Application Gateway, such as configuring NSG rules to further protect the network and configuring diagnostic logging, and at how to check the Web Application Firewall logs for application traffic that would be blocked by the WAF. The Application Gateway can be a good alternative to dedicated appliances as it is easier to configure and manage. However, in some cases where more control of WAF rule sets is required, a dedicated WAF appliance may be needed.

Hopefully this two-part series helps you with your decision making when it comes to securing your web front-end applications.

Securing your Web front-end with Azure Application Gateway Part 1

I have just completed a project with a customer who was using Azure Application Gateway to secure their web front-end, and thought it would be good to post some findings.

This is part one in a two part post looking at how to secure a web front-end using Azure Application Gateway with the WAF component enabled. In this post I will explain the process for configuring the Application Gateway once deployed. You can deploy the Application Gateway from an ARM Template, Azure PowerShell or the portal. To be able to enable the WAF component you must use a Medium or Large instance size for the Application Gateway.

Using Application Gateway allows you to remove the need for your web front-end to have a public endpoint assigned to it, for instance if it is a Virtual Machine then you no longer need a Public IP address assigned to it. You can deploy Application Gateway in front of Virtual Machines (IaaS) or Web Apps (PaaS).

An overview of how this will look is shown below. The Application Gateway requires its own subnet which no other resources can be deployed to. The web server (Virtual Machine) can be assigned to a separate subnet, if using a web app no subnet is required.

AppGW

 

The benefits we will receive from using Application Gateway are:

  • Remove the need for a public endpoint from our web server.
  • End-to-end SSL encryption.
  • Automatic HTTP to HTTPS redirection.
  • Multi-site hosting, though in this example we will configure a single site.
  • In-built WAF solution utilising OWASP core rule sets 3.0 or 2.2.9.

To follow along you will require the Azure PowerShell module version 3.6 or later. You can install or upgrade by following this link.

Before starting you need to make sure that an Application Gateway with an instance size of Medium or Large has been deployed with the WAF component enabled and that the web server or web app has been deployed and configured.

Now open PowerShell ISE and login to your Azure account using the below command.


Login-AzureRmAccount

Now we need to set the variables we will work with. These variables are your Application Gateway name, the resource group where your Application Gateway is deployed, your Backend Pool name and IP, your HTTP and HTTPS Listener names, your host name (website name), the HTTP and HTTPS rule names, your front-end (Private) and back-end (Public) SSL names, along with your Private certificate password.

NOTE: The Private certificate needs to be in PFX format and your Public certificate in CER format.

Change these to suit your environment and copy both your pfx and cer certificate files to C:\Temp\Certs on your computer.

# Application Gateway name.
[string]$ProdAppGw = "PRD-APPGW-WAF"
# The resource group where the Application Gateway is deployed.
[string]$resourceGroup = "PRD-RG"
# The name of your Backend Pool.
[string]$BEPoolName = "BackEndPool"
# The IP address of your web server or URL of web app.
[string]$BEPoolIP = "10.0.1.10"
# The name of the HTTP Listener.
[string]$HttpListener = "HTTPListener"
# The name of the HTTPS Listener.
[string]$HttpsListener = "HTTPSListener"
# Your website hostname/URL.
[string]$HostName = "website.com.au"
# The HTTP Rule name.
[string]$HTTPRuleName = "HTTPRule"
# The HTTPS Rule name.
[string]$HTTPSRuleName = "HTTPSRule"
# SSL certificate name for your front-end (Private cert pfx).
[string]$FrontEndSSLName = "Private_SSL"
# SSL certificate name for your back-end (Public cert cer).
[string]$BackEndSSLName = "Public_SSL"
# Password for front-end SSL (Private cert pfx).
[string]$sslPassword = "<Enter your Private Certificate pfx password here.>"

Our first step is to configure the Front and Back end HTTPS settings on the Application Gateway.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Front-end (Private) SSL certificate. If you have any issues with this step you can upload the certificate from within the Azure Portal by creating a new Listener.

Add-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGw `
-Name $FrontEndSSLName -CertificateFile "C:\Temp\Certs\PrivateCert.pfx" `
-Password $sslPassword

Save the certificate as a variable.

$AGFECert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGW `
            -Name $FrontEndSSLName

Configure the front-end port for SSL.

Add-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443" `
-Port 443

Add the back-end (Public) SSL certificate.

Add-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName `
-CertificateFile "C:\Temp\Certs\PublicCert.cer"

Save the back-end (Public) SSL as a variable.

$AGBECert = Get-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
            -Name $BackEndSSLName

Configure back-end HTTPS settings.

Add-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings" `
-Port 443 `
-Protocol Https `
-CookieBasedAffinity Enabled `
-AuthenticationCertificates $AGBECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next stage is to configure the back-end pool to connect to your Virtual Machine or Web App. This example is using the IP address of the NIC attached to the web server VM. If using a web app as your front-end you can configure it to accept traffic only from the Application Gateway by setting an IP restriction on the web app to the Application Gateway IP address.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Backend Pool Virtual Machine or Web App. This can be a URL or an IP address.

Add-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGw `
-Name $BEPoolName `
-BackendIPAddresses $BEPoolIP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next steps are to configure the HTTP and HTTPS Listeners.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the front-end port as a variable – port 80.

$AGFEPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
            -Name "appGatewayFrontendPort"

Save the front-end IP configuration as a variable.

$AGFEIPConfig = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $AppGw `
                -Name "appGatewayFrontendIP"

Add the HTTP Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener `
-Protocol Http `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFEPort `
-HostName $HostName

Save the HTTP Listener for your website as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HTTPListener

Save the front-end SSL port as a variable – port 443.

$AGFESSLPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
               -Name "appGatewayFrontendPort443"

Add the HTTPS Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPSListener `
-Protocol Https `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFESSLPort `
-HostName $HostName `
-RequireServerNameIndication true `
-SslCertificate $AGFECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The final part of the configuration is to configure the HTTP and HTTPS rules and the HTTP to HTTPS redirection.

First configure the HTTPS rule.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the Backend Pool as a variable.

$BEP = Get-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGW `
       -Name $BEPoolName

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Save the back-end HTTPS settings as a variable.

$AGHTTPS = Get-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
           -Name "appGatewayBackendHttpsSettings"

Add the HTTPS rule.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPSRuleName `
-RuleType Basic `
-BackendHttpSettings $AGHTTPS `
-HttpListener $AGSSLListener `
-BackendAddressPool $BEP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Now configure the HTTP to HTTPS redirection and the HTTP rule with the redirection applied.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Add the HTTP to HTTPS redirection.

Add-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-RedirectType Permanent `
-TargetListener $AGSSLListener `
-IncludePath $true `
-IncludeQueryString $true `
-ApplicationGateway $AppGw

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the redirect as a variable.

$Redirect = Get-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
            -ApplicationGateway $AppGw

Save the HTTP Listener as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HttpListener

Add the HTTP rule with redirection to HTTPS.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPRuleName `
-RuleType Basic `
-HttpListener $AGListener `
-RedirectConfiguration $Redirect

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

 
In this post we covered how to configure Azure Application Gateway to secure a web front-end whether running on Virtual Machines or Web Apps. We have configured the Gateway for end-to-end SSL encryption and automatic HTTP to HTTPS redirection removing this overhead from the web server.

In part two we will look at some additional post configuration tasks and how to make sure your web application works with the WAF component.

IT Service Management (ITSM) – Continual Service Improvement (CSI) Process and Approach

Continual Service Improvement (CSI) Process

The purpose of the process is to define specific initiatives aimed at improving services and processes, based on the results of service reviews and process evaluations. The resulting initiatives are either internal initiatives pursued by the service provider on its own behalf, or initiatives which require the customer’s cooperation (from ITIL).

Continual Service Improvement (CSI) Purpose, Goals and Objectives

  • Continually align IT services to changing business needs
  • Identify and implement improvements throughout the service life cycle
  • Determine what to measure, why to measure it and define successful outcomes
  • Implement processes with clearly defined goals, objectives and measures
  • Review service level achievement results
  • Ensure quality management methods are used

Continual-Service-Improvement.png

Continual Service Improvement (CSI) Values

  • Enables continuous monitoring and feedback through all life cycle stages
  • Sets targets for improvement
  • Calculates Return on Investment (ROI)
  • Calculates Value on Investment (VOI)

Business Value of Measurement

Consider the following factors when measuring process or service efficiency.

CSI 1.jpg

  • Why are we monitoring and measuring?
  • When do we stop?
  • Is anyone using the data?
  • Do we still need this?

Metric Types

  • Service metrics
  • Technology metrics
  • Process metrics

Continual Service Improvement (CSI) Supporting Models and Processes

  1. Plan-Do-Check-Act (PDCA) Model
  2. 7-Step Improvement Process
  3. Continual Service Improvement Model

1. Plan-Do-Check-Act (PDCA) Model

PDCA 2.jpg

2. 7-Step Improvement Process

CSi 7 step process.jpg

3. Continual Service Improvement Model

CSI model.jpg

Key Takeaways

  1. Once you have implemented a new process, tool or event, plan for improvement, as end users will be expecting the next level of service.
  2. Obtain feedback from end users, and always encourage them to provide it.
  3. Plan it, do it (implement), check it (assess against metrics) and act (take actions to align or rectify).
  4. Always look to improve your service through benefit, cost, risk and strategy considerations.

Summary

Hopefully you found this useful for your CSI journey.

Getting Started with Adaptive Cards and the Bot Framework

This article will provide an introduction to working with AdaptiveCards and the Bot Framework. AdaptiveCards provide bot developers with an option to create their own card templates to suit a variety of different scenarios. I’ll also show you a couple of tricks with node.js that will help you design smart.

Before I run through the example, I want to point you to some great resources from adaptivecards.io which will help you build and test your own AdaptiveCards:

  • The schema explorer provides a breakdown of the constructs you can use to build your AdaptiveCards. Note that there are limitations to the schema so don’t expect to do all the things you can do with regular mark-up.
  • The schema visualizer is a great tool to enable you (and your stakeholders) to give the cards a test drive.

There are many great examples online (start with GitHub), so you can go wild with your own designs.

In this example, we’re going to use an AdaptiveCard to display an ‘About’ card for our bot. Schemas for AdaptiveCards are JSON payloads. Here’s the schema for the card:
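The schema script block was embedded in the original post; as an illustrative sketch only, an About card using the %placeholder% convention described below might look like this (the exact fields on the real card are assumptions):

```json
{
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    { "type": "TextBlock", "size": "large", "weight": "bolder", "text": "%botName%" },
    { "type": "TextBlock", "wrap": true, "text": "%botDescription%" },
    { "type": "TextBlock", "text": "Version: %version%" }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Submit Feedback", "data": { "action": "feedback" } }
  ]
}
```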

This generates the following card (go play in the visualizer):

dialog2.PNG

We’ve got lots of %placeholders% for information the bot will insert at runtime. This information could be sourced, for example, from a configuration file collocated with the bot, or from a service the bot has to invoke.

Next, we need to define the components that will play a role in populating our About card. My examples here will use node.js. The following simple view outlines what we need to create in our Visual Studio Code workspace:

bot_view.PNG

The about.json file contains the schema for the AdaptiveCard (which is the code in the script block above). I like to create a folder called ‘cards’ in my workspace and store the schemas for each AdaptiveCard there.

The Source Data

I’m going to use dotenv to store the values we need to plug into our AdaptiveCard at runtime. It’s basically a config file (.env) that sits with your bot. Here we declare the values we want inserted into the AdaptiveCard at runtime:
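The .env contents were embedded in the original post; as a sketch (the variable names here are illustrative), it is simply key=value pairs:

```
BOT_NAME=HelpBot
BOT_DESCRIPTION=A bot that can tell you about itself
BOT_VERSION=1.0.0
```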

This is fine for the example here but in reality you’ll probably be hitting remote services for records and parsing returned JSON payloads, rendering carousels of cards.

The Class

about.js is the object representation of the card. It provides attributes for each item of source data and a method to generate a card schema for our bot. Here we go:

The constructor simply offloads incoming arguments to class properties. The toCard() method reads the about.json schema and recursively does a find/replace job on the class properties. A card is created and the updated schema is assigned to the card’s content property. The contentType attribute in the JSON payload tells a handling function that the schema represents an AdaptiveCard.
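The about.js script block was embedded in the original post; the sketch below captures the mechanics described above (the class and property names are assumptions, and the schema is passed in rather than read from about.json, to keep the sketch self-contained). The find/replace walks the schema recursively so placeholders are substituted wherever they appear:

```javascript
// about.js - object representation of the About card (illustrative sketch)
class About {
    constructor(name, description, version) {
        // Offload incoming arguments to class properties
        this.name = name;
        this.description = description;
        this.version = version;
    }

    // Recursively find/replace %placeholders% in the schema with class properties
    substitute(node) {
        if (typeof node === 'string') {
            return node
                .replace(/%botName%/g, this.name)
                .replace(/%botDescription%/g, this.description)
                .replace(/%version%/g, this.version);
        }
        if (Array.isArray(node)) {
            return node.map((item) => this.substitute(item));
        }
        if (node !== null && typeof node === 'object') {
            const result = {};
            for (const key of Object.keys(node)) {
                result[key] = this.substitute(node[key]);
            }
            return result;
        }
        return node;
    }

    // Generate an attachment the bot can add to an outgoing message;
    // the contentType tells the handling function this is an AdaptiveCard
    toCard(schema) {
        return {
            contentType: 'application/vnd.microsoft.card.adaptive',
            content: this.substitute(schema),
        };
    }
}

module.exports = About;
```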

The Bot

In our bot we have a series of event handlers that trigger based on input from the user via the communication app, or from a cognitive service, which distils input from the user into an intent.

For this example, let’s assume that we have an intent called Show.Help. Utterances from the user such as ‘tell me about yourself’ or quite simply ‘help’ might resolve to this intent.

So we need to add a handler (function) in app.js that responds to the Show.Help intent (this is called a triggerAction). The handler deals with the dialog (interaction) between the user and the bot so we need it to both generate the About card and handle any interactions the card supports (such as clicking the Submit Feedback button on the card).

Note that the dialog between user and bot ends when the endDialog function is called, or when the conditions of the cancelAction are met.

Here’s the code for the handler:

The function starts with a check to see if a dialog is in session (i.e. a message was received). If not (the else condition), it’s a new dialog.

We instantiate an instance of the About class and use the toCard() method to generate a card to add to the message the bot sends back to the channel. So you end up with this:

dialog_final.png


And there you have it. There are many AdaptiveCard examples online but I couldn’t find any for Node.js that covered the manipulation of cards at runtime. Now, go forth and build fantastic user experiences for your customers!

Planning Site structure and Navigation in SharePoint Modern Experience Communication and Team sites

If you are planning to implement, or are implementing, Modern Team sites or Communication sites, there has been a change in best practices for planning and managing site structure, site hierarchy and navigation. This is a very common question during my presentations – how do we manage site structures, navigation and content in the Modern experiences?

So, in this blog, we will look at a few strategies for planning site structure and navigation in Modern Experience sites.

1. First and foremost, get rid of nested subsites and site-hierarchy navigation. Microsoft has recently been pushing a flat Site Collection structure with Modern Team and Communication sites, which adds a lot of benefit for managing isolation and content. So, the new approach – flat Site Collections and no subsites. (There are various advantages of flat-structure Site Collections, which will be listed in another upcoming blog.)

2. Secondly, to achieve a hierarchy relationship among sites for navigation, news items, search etc., use Hub sites. Hub sites are the new way of connecting SharePoint site collections together. They also have the added advantage of aggregating information, such as News and Search results, from the associated sites. So, create a Hub site for navigation across related sites.

HubSiteAssociatedTeam

3. The best candidate for Hub sites, in my opinion, is a Communication site. Communication sites have a top navigation that can be easily extended for Hub sites, and they are perfect for publishing information and showing aggregated content. However, it also depends on whether the Hub site is meant for a team or business unit, or for the company as a whole. So, use a Communication site as a Hub site if targeting the whole company or a major group.

QuickLaunchNestedCommunicationSite

4. One navigation structure – the Quick Launch (left-hand navigation) is the top navigation for Communication sites, so there is no need to maintain two navigations. If you ask me, this is a big win and removes a lot of confusion for end users.

QuickLaunchEdit_CommSite

5. The Quick Launch for Modern Team and Communication sites allows a three-level sub-hierarchy, which lets you define a nested custom navigation hierarchy that can differ from the content structure and site structure.

6. Focus on content, not on navigation or the location of content, through new Modern web parts such as Highlighted Content and Quick Links, which allow you to find content anywhere easily.

HighlightedContent

7. Finally, a few limitations of Modern site structure and navigation (as of June 2018) for reference. Hopefully, these will be addressed soon.

    • Permissions management still needs to be handled at each Site Collection; there is no nested structure yet. It is, however, possible to use AD groups for a consistent permissions set-up
    • Office 365 Unified Security Groups cannot have AD or other Office 365 groups nested for Modern Team sites, but SharePoint site permissions can be managed through AD groups
    • Contextual navigation bolding is missing in Hub sites, i.e. if you click a link to move to a child site the navigation is not automatically bolded, but this might be coming soon
    • Navigation headers in Modern sites cannot be blank and need to be set to a link

Conclusion:

In this blog, we looked at an approach to Modern site structures, hierarchy and navigation.

Building a Breakfast Ordering Skill for Amazon Alexa – Part 1

First published at https://nivleshc.wordpress.com

Introduction

At the AWS Summit Sydney this year, Telstra decided to host a breakfast session for some of their VIP clients. This was more of a networking session, to get to know the clients much better. However, instead of having a “normal” breakfast session, we decided to take it up one level 😉

Breakfast ordering is quite “boring” if you ask me 😉 The waitress comes to the table, gives you a menu and asks what you would like to order. She then takes the order, and after some time, your meal arrives.

As it was AWS Summit, we decided to sprinkle a bit of technical fairy dust on the ordering process. Instead of having the waitress take the breakfast orders, we contemplated the idea of using Amazon Alexa instead 😉

I decided to give the Alexa skill development a go. However, not having any prior Alexa skill development experience, I anticipated an uphill battle, having to first learn the product and then developing for it. To my amazement, the learning curve wasn’t too steep and over a weekend, spending just 12 hours in total, I had a working proof of concept breakfast ordering skill ready!

Here is a link to the proof of concept skill https://youtu.be/Z5Prr31ya10

I then spent a week polishing the Alexa skill, giving it more “personality” and adding a more “human” experience.

All the work paid off when I got told that my Alexa skill would be used at the Telstra breakfast session! I was over the moon!

For the final product, to make things even more interesting, I created a business intelligence chart using Amazon QuickSight, showing the popularity of each of the food and drink items on the menu. The popularity was based on the orders that were being received.

BothVisualsSidebySide

Using a laptop, I displayed the chart near the Amazon Echo Dot. This was to help people choose what food or drink they wanted to order (a neat marketing trick 😉). If you would like to know more about Amazon QuickSight, you can read about it at Amazon QuickSight – An elegant and easy to use business analytics tool

Just as a teaser, you can watch one of the ordering scenarios for the finished breakfast ordering skill at https://youtu.be/T5PU9Q8g8ys

In this blog, I will introduce the architecture behind Amazon Alexa and prepare you for creating an Amazon Alexa Skill. In the next blog, we will get our hands dirty with creating the breakfast ordering Alexa skill.

How does Amazon Alexa actually work?

I have heard a lot of people use the name “Alexa” interchangeably for the Amazon Echo devices. As good as that is for Amazon’s marketing team, unfortunately, I have to set the record straight. Amazon Echo devices are the physical devices Amazon sells that interface with the Alexa Cloud. You can see the whole range at https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/b?ie=UTF8&node=9818047011. These devices don’t have any smarts in them. They sit in the background listening for the “wake” command, and then they start streaming the audio to the Alexa Cloud.

The Alexa Cloud is where all the smarts are located. Using speech recognition, machine learning and natural language processing, it converts the audio to text. It then identifies the skill name the user requested, the intent and any slot values it finds (these will be explained further in the next blog). The intent and slot values (if any) are passed to the identified skill. The skill processes the input using some form of compute (AWS Lambda in my case) and passes the output back to the Alexa Cloud. The Alexa Cloud converts the skill output to Speech Synthesis Markup Language (SSML) and sends it to the Amazon Echo device. The device then converts the SSML to audio and plays it to the user.

Below is an overview of the process.

alexa-skills-kit-diagram._CB1519131325_

Diagram is from https://developer.amazon.com/blogs/alexa/post/1c9f0651-6f67-415d-baa2-542ebc0a84cc/build-engaging-skills-what-s-inside-the-alexa-json-request
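To make the request/response flow concrete, here is a minimal sketch (not the actual skill used at the breakfast session) of what a Python AWS Lambda handler behind a skill can look like. The intent name `OrderBreakfastIntent` and slot `FoodItem` are invented for illustration; the JSON envelope follows the documented Alexa Skills Kit request/response format.

```python
def handler(event, context=None):
    """Minimal Alexa skill handler: extract the intent and slot values
    from the request Alexa Cloud sends, and return a text response
    that Alexa Cloud will turn into speech for the Echo device."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome to breakfast ordering. "
                              "What would you like?", end=False)
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        # Flatten the slots into a simple name -> value dict
        slots = {name: slot.get("value")
                 for name, slot in intent.get("slots", {}).items()}
        if intent["name"] == "OrderBreakfastIntent":
            item = slots.get("FoodItem", "something tasty")
            return build_response(f"One {item}, coming right up!")
    return build_response("Sorry, I didn't catch that.")

def build_response(text, end=True):
    """Wrap plain text in the JSON envelope Alexa Cloud expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end,
        },
    }
```

The key idea is that the Lambda never touches audio: it only ever sees the intent and slot values that Alexa Cloud extracted, and it only ever returns text (or SSML) for the cloud to speak.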

Getting things ready

Getting an Alexa enabled device

The first thing to get is an Alexa enabled device. Amazon has released quite a few different varieties of Alexa enabled devices. You can check out the whole family here.

If you are keen to try a side project, you can build your own Alexa device using a Raspberry Pi. A good guide can be found at https://www.lifehacker.com.au/2016/10/how-to-build-your-own-amazon-echo-with-a-raspberry-pi/

You can also try out EchoSim (Amazon Echo Simulator), a browser-based interface to Amazon Alexa. Please ensure you read the limitations of EchoSim on their website; for instance, it cannot stream music.

For developing the breakfast ordering skill, I decided to purchase an Amazon Echo Dot. It’s a nice compact device, which doesn’t cost much and can run off any USB power source. For the Telstra breakfast session, I actually ran it off my portable battery pack 😉

Create an Amazon Account

Now that you have got yourself an Alexa enabled device, you will need an Amazon account to register it with. You can use one that you already have or create a new one. If you don’t have an Amazon account, you can either create one beforehand by going to https://www.amazon.com or you can create it straight from the Alexa app (the Alexa app is used to register the Amazon Echo device).

Setup your Amazon Echo Device

Use the Alexa app to setup your Amazon Echo device. When you login to the app, you will be asked for the Amazon Account credentials. As stated above, if you don’t have an Amazon account, you can create it from within the app.

Create an Alexa Developer Account

To create skills for Alexa, you need a developer account. If you don’t have one already, you can create one by going to https://developer.amazon.com/alexa. There are no costs associated with creating an Alexa developer account.

Just make sure that the username you choose for your Alexa developer account matches the username of the Amazon account to which your Amazon Echo is registered. This will enable you to test your Alexa skills on your Amazon Echo device without having to publish them on the Alexa Skills Store (the skills will show under Your Skills in the Alexa App).

Create an AWS Free Tier Account

In order to process the requests sent to the breakfast ordering Alexa skill, we will make use of AWS Lambda. AWS Lambda provides a cost-effective way to run code because you are only charged for the time the code runs; there is no cost for idle time.

If you already have an AWS account you can use that; otherwise, you can sign up for an AWS Free Tier account by going to https://aws.amazon.com. AWS provides a lot of services for free for the first 12 months under the Free Tier, and some services continue the free allowance even beyond the 12 months (AWS Lambda is one such service). For a full list of Free Tier services, visit https://aws.amazon.com/free/

High Level Architecture for the Breakfast Ordering Skill

Below is the architectural overview for the Breakfast Ordering Skill that I built. I will introduce you to the various components over the next few blogs.

Breakfast Ordering System_HighLevelArchitecture

In the next blog, I will take you through the Alexa Developer console, where we will use the Alexa Skills Kit (ASK) to start creating our breakfast ordering skill. We will define the invocation name, intents and slot names for our Alexa skill. Not familiar with these terms? Don’t worry, I will explain them in the next blog. I hope to see you there.

See you soon.

Using your Voice to Search Microsoft Identity Manager – Part 1

Introduction

Yes, you’ve read the title correctly. Speaking to Microsoft Identity Manager. The concept behind this was born off the back of some other work I was doing with Microsoft Cognitive Services. I figured it shouldn’t be that difficult if I just break down the concept into individual elements of functionality and put together a proof of concept to validate the idea. That’s what I did and this is the first post of the solution as an overview.

Here’s a quick demo.

Overview

The diagram below details the basis of the solution. There are a few extra elements I’m still working on that I’ll cover in a future post if there is any interest in this.

Searching MIM with Speech Overview

The solution works like this;

  1. You speak to a microphone connected to a single board computer, with the query for Microsoft Identity Manager
  2. The spoken phrase is converted to text using Cognitive Speech to Text (Bing Speech API)
  3. The text phrase is;
    1. sent to the Cognitive Services Language Understanding Intelligent Service (LUIS) to identify the target of the query (firstname lastname) and the query entity (e.g. Mailbox)
    2. used to query Microsoft Identity Manager via API Management and the Lithnet REST API for the MIM Service
  4. The result is returned to the single board computer as a text result phrase, which it then converts to audio using Cognitive Services Text to Speech
  5. The result is spoken back
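As a sketch of step 3.1, the snippet below builds a LUIS (v2) prediction URL and pulls the top-scoring intent and the entities out of the JSON that LUIS returns. The region, app id, and the intent/entity names in the test data are placeholders for illustration, not the ones used in my solution; the response shape follows the documented LUIS v2 JSON.

```python
from urllib.parse import quote

# Region is a placeholder; LUIS v2 prediction endpoints are per-region
LUIS_ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps"

def build_luis_url(app_id, subscription_key, query):
    """Build the LUIS v2 prediction URL for a transcribed spoken query."""
    return (f"{LUIS_ENDPOINT}/{app_id}"
            f"?subscription-key={subscription_key}&q={quote(query)}")

def parse_luis_response(luis_json):
    """Extract the top-scoring intent and a type -> value map of the
    entities (e.g. the target user's name) from a LUIS v2 response."""
    intent = luis_json["topScoringIntent"]["intent"]
    entities = {e["type"]: e["entity"] for e in luis_json.get("entities", [])}
    return intent, entities
```

The intent then determines which MIM attribute to query (e.g. Mailbox), while the entities give the firstname/lastname to search for.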

Key Functional Elements

  • The microphone array I’m using is a ReSpeaker Core v1 with a ReSpeaker Mic Array
  • All credentials are stored in an Azure Key Vault
  • An Azure Function App (PowerShell) interfaces with the majority of the Cognitive Services being used
  • Azure API Management is used to front end the Lithnet MIM Webservice
  • The Lithnet REST API for the MIM Service provides easy integration with the MIM Service
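To illustrate the API Management piece, here is a hedged sketch of how a client could construct the search request it sends through Azure API Management to the Lithnet REST API for the MIM Service. The hostname, resource path and XPath filter below are assumptions for illustration only (check the Lithnet documentation for the real endpoint shape); the `Ocp-Apim-Subscription-Key` header is the standard API Management subscription key header. This is not the exact call my Azure Function makes.

```python
from urllib.parse import quote

def build_mim_search_request(apim_host, api_key, first_name, last_name):
    """Construct the HTTPS request a client would send through Azure
    API Management to the Lithnet REST API for the MIM Service.
    The path and XPath filter are illustrative assumptions."""
    xpath = f"/Person[FirstName='{first_name}' and LastName='{last_name}']"
    url = f"https://{apim_host}/api/v2/resources?filter={quote(xpath)}"
    headers = {
        # Standard Azure API Management subscription key header
        "Ocp-Apim-Subscription-Key": api_key,
        "Accept": "application/json",
    }
    return url, headers
```

Fronting the Lithnet web service with API Management this way means the device only ever holds an APIM subscription key (retrieved from Key Vault), never direct credentials to the MIM Service.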

Summary

Leveraging a lot of Serverless (PaaS) Services, a bunch of scripting (Python on the ReSpeaker and PowerShell in the Azure Function) and the Lithnet REST API it was pretty simple to integrate the ReSpeaker with Microsoft Identity Manager. An alternative to MIM could be any other service you have an API interface into. MIM is obviously a great choice as it can aggregate from many other applications/services.

Why a female voice? From a small sample of responses, it was the popular choice.

Let me know what you think on twitter @darrenjrobinson

Your Modern Collaboration Landscape

There are many ways people collaborate within your organisation. You may or may not enjoy the fruits of that collaboration. Does your current collaboration landscape cater for the wide variety of groups that form (organically or inorganically) to build relationships and develop your business?

Moving to the cloud is a catalyst for re-evaluating your collaboration solutions and their value. Platforms like Office 365 are underpinned by search/discovery tools that can traverse and help draw insight from the output of collaboration, including conversations and connections between people and information. Modern applications open up new opportunities to build working groups that include people from outside your organisation, with whom you can freely and securely share content.

I’ve been in many discussions with customers on how enabling technologies play a role in the modern collaborative landscape. Part of this discussion is about identifying the various group archetypes and how characteristics can align or differ. I’ve developed a view that forms these groups into three ‘tiers’, as follows:

collab_landscape

Organisations should consider a solution for each tier, because there are requirements in each tier that are distinct. The challenge for an organisation (as part of a wider Digital Workplace strategy) is to:

  • Understand how existing and prospective solutions will meet collaboration requirements in each tier, and communicate that understanding.
  • Develop a platform where information managed in each tier can be shared with other tiers.

Let’s go into the three tiers in more detail.

Tier One (Intranet)

Most organisations I work with have an established Tier One business solution, like a corporate intranet. These are the first to mature. They are logically represented as a hierarchy of containers (i.e. sites), with a mix of implicit and explicit access control (and associated auditing difficulties). The principal use is to store documents and host authored web content (such as news). Tier One systems are usually dependent on solutions in other tiers to facilitate (and retain) group conversations or discussions.

  • Working groups are hierarchical and long term, based on a need to model the relationships between groups in an organisation (e.g. Payroll sits under Finance, Auditing sits under Payroll)
  • Activity here is closed and formal. Contribution is restricted to smaller groups.
  • Information is one-way and top down. Content is authored and published for group-wide or organisation-wide consumption.
  • To get things done, users will be more dependent on a Service Desk (for example: managing access control, provisioning new containers), at the cost of agility.
  • Groups are established here to work towards a business outcome or goal (deliver a project, achieve our organisation’s objectives for 2019).

Tier Three (Social Network)

Tier Three business solutions represent your organisation’s social network. Maturity here ranges from “We launched [insert platform here] and no-one is using it” to “We’ve seen an explosion in adoption and it’s Giphy city”. They are usually dependent on solutions in other tiers to provide capabilities such as web content/document management (case in point: O365 Groups and Yammer).

  • Tier Three groups here are flattened, and cannot by design model a hierarchy. They tend to be long term, and prone to stagnation.
  • Groups represent communities, capabilities and similar interest groups, all of which are of value to your organisation. At this point you say: “I understand how the ‘Welcome to Yammer’ group is valuable, but what about the ‘Love Island Therapy’ group?”. At this point I say: “Here you have a collection of individuals who are proactively using and adopting your platform”.
  • Unlike in the other tiers, groups here tend to have no business outcome, although they’ll have objectives to gain momentum and visibility.
  • Collaboration here is open (public) and informal, down to the #topics people discuss and the language that is used.
  • A good Tier Three solution will be fully self service, subject to a pre-defined usage policy. There should be no restrictions beyond group level moderation in terms of who can contribute. If it’s public or green it’s fair game!
  • Tier Three groups have the biggest membership, and can support thousands of members.

Tier Two (Workspaces)

Tier Two comes last, because in my experience it’s the capability that is the least developed in organisations I work with and the last to mature.

A Tier Two business solution delivers a collaborative area for teams such as working groups, committees and project teams. They will provide a combination of features inherent in Tier One and Tier Three solutions. For example, the chat/discussion capabilities of a Tier Three solution and the content management capabilities of a Tier One solution.

  • Tier Two groups here are flattened, and cannot by design model a hierarchy. They tend to be short term, in place to support a timeboxed initiative or activity.
  • Groups represent working groups, committees and project teams, with a need to create content and converse. These groups are coalitions, including representation from different organisational groups that need to come together to deliver an outcome.
  • Groups work towards a business outcome, for example: develop a business case, deliver a document.
  • Collaboration here tends to be closed (restricted to a small group) and semi-formal, but such groups can sit anywhere on the spectrum from closed and formal to open and informal.
  • A good Tier Two solution will be fully self service, subject to a pre-defined usage policy. There should be no restrictions beyond group level moderation in terms of who can contribute.
  • Groups represent a small number of individuals, and do not grow to the size of departmental (Tier One) groups or social (Tier Three) groups.

The three-tiers view identifies the different ways collaboration happens within your organisation. It is solution agnostic: you can advocate any technology in any tier if it meets the requirement. The view helps evaluate the diverse needs of your organisation, and determine how effective your current solutions are at meeting requirements for collaboration and information working.


Cloud Operations – Key Service Operation Principles – Consideration

Below are some good IT Service Management operational principles to consider when migrating applications into the cloud. These will help align your operational goals with your organisation’s strategic initiatives.

Principle #1

Organisation’s IT Service Management will govern and lead all IT services utilising strategic processes and technology enablers based on industry best practices.

Implications / Outcomes

  • The selected process and technology will be fit for purpose
  • Suppliers and Service Partners will be required to integrate with strategic processes and technologies
  • Process re-engineering including training will be required
  • Everyone uses or integrates with a central platform
  • Process efficiency through effective process integration
  • Reduced operating cost
  • Ensures contestability of services for Organisation

Principle #2

Contestability between IT Service providers is a key outcome for service management across IT@Organisation, where it does not have a negative impact on the customer experience.

Implications / Outcomes

  • Avoid vendor lock-in
  • Requires strategic platforms
  • Sometimes greater complexity
  • More ownership of process by Organisation
  • Better cost outcomes through competition
  • Improved performance, incumbent advantage is earned
  • Access to innovation
  • Access to capacity

Principle #3

The Organisation’s IT operating model will be based on the principles of Customer-centricity (Organisation’s business and IT), consistency and quality of service and continual improvement of process maturity.

Implications / Outcomes

  • More extensive process integration
  • Possible constraints – cost, time, resources, agility
  • Additional internal expertise
  • Governance as a focal point
  • Continual improvement
  • Improved process alignment with business alignment
  • Quantitative, demonstrable benefits
  • Improved customer satisfaction

Principle #4

Organisation will retain and own all IP for Organisation’s Service Management knowledge assets and processes.

Implications / Outcomes

  • Strong asset, capacity, knowledge management
  • Service provider governance
  • Improved internal capability
  • Service provider independence
  • Reduced risk
  • Exploitation of skills and experience gained
  • Encourage self-healing culture

Principle #5

Changes to existing Organisation processes and procedures will only be made where those changes are necessary to deliver benefits from the Cloud platform.

Implications / Outcomes

  • Vendors adapt to Organisation’s processes
  • Existing process needs to be critically assessed
  • Reduced exposure to risk
  • Reduced levels of disruption
  • Faster adoption of new processes through familiarity
  • Faster Implementation due to less change

Principle #6

Before beginning process design, ownership of the process and its outcomes, resource availability, cost benefit analysis and performance measurements will be defined and agreed.

Implications / Outcomes

  • Ownership of process is known
  • The process is appropriately resourced
  • Alignment of activities with desired outcomes
  • Improved process effectiveness
  • Reduced risk of failure
  • Resourcing cost


Summary

Please note that there will be practical implications for your organisation’s service management processes (typically incident management, problem management, capacity management, service restoration, change management, configuration management and release management). These are some good principles to consider, and they can be customised to suit organisational strategy and priorities.