Measure O365 ATP Safe Attachments Latency using PowerShell

Microsoft Office 365 Advanced Threat Protection (ATP) is a cloud-based security service that is part of the O365 E5 offering, and it can also be added separately to other O365 subscriptions. A lot can be learned about ATP from here, but in this post we’re going to extract data corresponding to one of ATP’s primary features: ATP Safe Attachments.

In short, ATP Safe Attachments scans documents for malicious content and can block these attachments depending on the policy configuration. More details can be learned about ATP Safe Attachments from here. Safe Attachments can react to emails with attachments in multiple ways: it can deliver the email without the attachment, showing just a thumbnail that tells the user ATP is currently scanning the attachment, or it can delay the message until scanning is complete. Microsoft has been improving ATP Safe Attachments with new features such as reduced message delays, support for more file formats, dynamic delivery, previewing, and so on. What we’re interested in for this post is the delay that ATP Safe Attachments introduces when the policy is set to delay emails, since the other delivery options can confuse users and require end-user education if enabled.

But why is there a delay in the first place?

The way ATP Safe Attachments works is by creating a small sandbox and then detonating / opening these attachments in that sandbox. It checks what types of changes the attachment makes to the sandbox and also uses machine learning to decide whether the attachment is safe or not. It is a nice feature, and it actually simulates how users open attachments when they receive them.

So how fast is this thing?

Recently I’ve been working on a project with a customer who wanted to test ATP Safe Attachments with the policy configured to delay messages rather than deliver and scan at the same time (dynamic delivery). The question they had in mind was: how can we ensure that the ATP Safe Attachments delay is acceptable? (I tried hard to find an official document from Microsoft with numbers around this, but couldn’t.) The delay is not that big a deal; it is about a minute or two (it used to be a lot longer when ATP was first introduced). That is really fast considering the amount of work that needs to be done in the backend for this to work. Still, the customer wanted it measured, and I had to think of a way of doing that. I tried to find a direct way to do this, but with all of the advanced reporting that ATP brings to the table, there isn’t a report that tells you how long messages actually take to arrive in the user’s mailbox when an ATP Safe Attachments policy is in place and set to delay messages.

Now let us define ‘delay‘ here. The measurement starts when a message arrives at Exchange Online; any other solutions you have in front of O365 don’t count. So we are going to measure the time from when the message hits Exchange Online Protection (EOP) and ATP until O365 delivers the message to the recipient. Keep in mind that the size of the attachment(s) also plays a role in the process.

The solution!

This assumes you have the Microsoft Exchange Online PowerShell module installed. If you have MFA enabled, you need to authenticate in a different way (that was my case, and I will show you how I did it).

First, jump on to the Office 365 Admin Centre (assuming you’re an admin), then navigate to the Exchange Online admin centre. From the left side, search for ‘Hybrid‘, and then click on the second ‘Configure’ button (this one installs the Exchange Online PowerShell module for users with MFA). Once the file is downloaded, click on it and let it do its magic. Once the module window opens, connect with:


Connect-EXOPSSession -UserPrincipalName yourAccount@testcorp.com 

Now go through the authentication process and let PowerShell load the modules into this session.


Once all of the modules have been loaded into the PowerShell session, and before we do any PowerShell scripting, let us do the following:

  • Send two emails to an address that has an ATP Safe Attachments policy enabled.
  • The first email is simple and has no attachments in it.
  • The second contains at least one attachment (i.e. a DOCX or PDF file).
  • The subject shouldn’t matter, but for the sake of this test, try to name the subjects in a way that lets you tell which email contains the attachment.
  • Wait 5 minutes. Why? Because ATP Safe Attachments should do some magic before releasing that email (assuming you have it turned ON and set to delay messages only).

Let us do some PowerShell scripting 🙂

Declare a variable that contains the recipient email address that has ATP Safe Attachments enabled. Then use the cmdlet (Get-MessageTrace) to fetch emails delivered to that address in the last hour.

$RecipientAddress = 'FirstName.LastName@testcorp.com';
$Messages = Get-MessageTrace -RecipientAddress $RecipientAddress -StartDate (Get-Date).AddHours(-1) -EndDate (get-date) 

Expand the $Messages variable and you should see the message trace entries returned for the last hour.

Here is what we are going to do next. I wish I could explain each line of code, but that would make this blog post too lengthy, so I’ll summarise the steps:

  • Loop through each message (foreach).
  • Use the ‘Get-MessageTraceDetail’ cmdlet to extract details for each message and filter out events related to ATP.
  • Create a custom object containing two main properties:
    • The recipient address (of type String)
    • The message trace ID (of type GUID)
  • Remove all temp variables used in this loop.
$Custom_Object = @()
foreach($Message in $Messages)
{
    $Message_RecipientAddress = $Message.RecipientAddress
    $Message_Detail = $Message | Get-MessageTraceDetail | Where-Object -FilterScript {$PSItem.'Event' -eq "Advanced Threat Protection"}
    if($Message_Detail)
    {
        $Message_Detail = $Message_Detail | Select-Object -Property MessageTraceId -Unique
        $Custom_Object += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$Message_RecipientAddress;'MessageTraceId'=$Message_Detail.'MessageTraceId'})
    } #End if Message_Detail variable
    Remove-Variable -Name Message_Detail,Message_RecipientAddress -ErrorAction SilentlyContinue
} #End foreach Message

The expected outcome is a single row. Why? Because we sent only two messages: one with an attachment (captured by the script above) and one without an attachment (which doesn’t contain ATP events).


Just in case you want to do multiple tests, remember to empty your custom object ($Custom_Object) before retesting, so you do not get duplicate results.
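A quick way to do that (a small sketch, using the same variable name as above) is to re-initialise the collection before each run:

# Reset the collection before re-running the trace loop
$Custom_Object = @()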

The next step is to extract the details for each captured message and measure the time between the first and last trace events, i.e. from when the message hit EOP/ATP until it was delivered to the recipient. Here’s what we are going to do:

  • Loop through each item in that custom object.
  • Run a ‘Get-MessageTraceDetail’ again to extract all of the message events. Sort the data by Date.
  • Measure the time difference between the last event and the first event.
  • Store the result into a new custom object for further processing.
  • Remove the temp variables used in the loop.
$Final_Data = @()
foreach($MessageTrace in $Custom_Object)
{
    $Message = $MessageTrace | Get-MessageTraceDetail | Sort-Object -Property Date
    $Message_TimeDiff = ($Message | Select-Object -Last 1).Date - ($Message | Select-Object -First 1).Date
    $Final_Data += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$MessageTrace.'RecipientAddress';'MessageTraceId'=$MessageTrace.'MessageTraceId';'TotalMinutes'="{0:N3}" -f [decimal]$Message_TimeDiff.'TotalMinutes';'TotalSeconds'="{0:N2}" -f [decimal]$Message_TimeDiff.'TotalSeconds'})
    Remove-Variable -Name Message,Message_TimeDiff -ErrorAction SilentlyContinue
} #End foreach MessageTrace in the custom object

The expected output is one row per message, showing the recipient address, the message trace ID, and the total delay in minutes and seconds.


So here are some tips you can use to extract more valuable data from this:

  • You can try to do this on a Distribution Group to cover more users, but you will need a larger foreach loop for that.
  • If your final data object has more than one result, you can use the ‘Measure-Object’ cmdlet to calculate the average time for you (see the snippet after this list).
  • If a user complains that they are experiencing delays in receiving messages, you can use this script to extract the delay.
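For example, a quick average across all captured messages could look like this (a small sketch; the TotalSeconds values were formatted as strings above, so they are cast back to a number first):

$Final_Data |
    ForEach-Object { [double]$PSItem.TotalSeconds } |
    Measure-Object -Average -Maximum -Minimum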

Just keep in mind that if you’re doing this on a large number of users, it might take some time to process all that data, so patience is needed 🙂

Happy scripting 🙂
 
 

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?
Well why would you want to do that?

  • On-demand compute:
    • You may not have access to a physical server, or your computer isn’t powerful enough to handle heavy data processing, or you definitely do not want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute:
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it’s super cheap.
  • It’s super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details around that. What we want to do is leverage PowerShell to do such things. So here it goes!
Querying against a SQL database, whether it’s in Azure or not, isn’t that complex. In fact, this part of the post is just to get us started. For this part, we’re going to do something simple, because if you want to get things done, you need the fastest way of doing it. Here’s a quick summary of the ways I thought of doing it:

    1. Using ‘Invoke-Sqlcmd2‘. This part of the blog: it’s super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object for more complex SQL querying scenarios. [[ This is where the magic kicks in.. Part 2 of this series ]]

How do we get this done quickly?
For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL Database called ‘myDB‘, inside an Azure SQL server ‘mytestAzureSQL.database.windows.net‘.
  • It’s a simple database containing a single table ‘test_table’. This table has three columns (Id, Name, Age) and contains only two records.
  • We’ve set up ‘Allow Azure Services‘ access on this database in the firewall rules. Here’s how to do that, just in case (a PowerShell alternative is sketched after this list):
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON‘.
  • We have an Azure Automation account set up. We’ll be using that to test our code.
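If you prefer to do the firewall step from PowerShell instead of the portal, something along these lines should work with the AzureRM module (a hedged sketch; the resource group and rule name are assumptions for illustration):

# A rule covering 0.0.0.0 is how Azure-internal traffic is represented,
# which is equivalent to switching 'Allow Azure Services' to ON
New-AzureRmSqlServerFirewallRule -ResourceGroupName 'myResourceGroup' `
    -ServerName 'mytestAzureSQL' `
    -FirewallRuleName 'AllowAzureServices' `
    -StartIpAddress '0.0.0.0' -EndIpAddress '0.0.0.0'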

Now let’s get this up and running.
Start by creating two Automation variables, one containing the SQL server name and the other containing the database name.
Then create an Automation credential object to store your SQL login username and password. You need this because you definitely should not be storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.
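You can create the variables and the credential object in the portal, or from PowerShell with the AzureRM Automation cmdlets. Here’s a hedged sketch (the resource group and account names are assumptions; the asset names match the ones used in the runbook below):

$rg      = 'myResourceGroup'
$account = 'myAutomationAccount'

# Automation variables for the SQL server and database names
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'AzureSQL_ServerName' -Value 'mytestAzureSQL.database.windows.net' -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'AzureSQL_DBname' -Value 'myDB' -Encrypted $false

# Automation credential holding the SQL login
$sqlCred = Get-Credential -Message 'SQL login for myDB'
New-AzureRmAutomationCredential -ResourceGroupName $rg -AutomationAccountName $account `
    -Name 'mySqllogin' -Value $sqlCred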

Now you need to import the ‘invoke-sqlcmd2‘ module in the automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now from the main menu of the automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘. Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook. To do that, click on the pencil icon from the top menu to get into the editing pane.
Inside the pane, paste the following code (I’ll go through the details, don’t worry).

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

#Import the SQL Server Name from the Automation variable
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

#Import the SQL DB name from the Automation variable
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet ‘Get-AutomationPSCredential‘ is to retrieve the automation credential object we just created.
    • The next two cmdlets ‘Get-AutomationVariable‘ are to retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let’s query our database. To do that, paste the code below after the section above.

#Query to execute
$Query = "select * from Test_Table"
"----- Test Result BEGIN "
# Invoke to Azure SQL DB
Invoke-Sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt
"`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is very simple, but you can put whatever you want in there.
    • We’ve executed the cmdlet ‘Invoke-Sqlcmd2‘. If you noticed, we didn’t have to import the module we’ve just installed; Azure Automation takes care of that upon every execution.
    • In the cmdlet parameter set, we specified the SQL server (retrieved from the automation variable) and the database name (an automation variable too). We used the credential object we imported from Azure Automation, and finally, we used the query variable we created. An optional switch parameter ‘-Encrypt’ can be used to encrypt the connection to the SQL server.

Let’s run the code and look at the output!
From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.
Initially the code goes through the following stages of execution:

  • Queuing
  • Starting
  • Running
  • Completed

Now what’s the final result? Look at the black box and you should see something like this.

----- Test Result BEGIN
Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25
 ----- Test Result END

Pretty sweet, right? The output we’re getting here is an object of type ‘DataRow‘. If you wrap the query result in a variable, you can start to do some cool stuff with it, like:
$Result.Count or
$Result.Age or even
$Result | Where-Object -FilterScript {$PSItem.Age -gt 10}
Now imagine if you could do so much more complex things with this.
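For instance, here’s a small sketch of wrapping the call and working with the rows (same variable names as above):

# Capture the DataRow objects instead of writing them straight to output
$Result = Invoke-Sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" `
    -Credential $SQLServerCred -Query "select * from Test_Table" -Encrypt

# Work with the rows like any other PowerShell objects
$Result.Count
$Result | Where-Object -FilterScript { $PSItem.Age -gt 10 } |
    Measure-Object -Property Age -Average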
Quick Hint:

If you include the ‘-Debug’ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with the debugging option ON 🙂

Stay tuned for Part 2!!
 

Automate network share migrations to Sharepoint Online using ShareGate PowerShell

Sharegate supports PowerShell scripting, which can be used to automate and schedule migrations. In this post, I am going to demonstrate an example of end-to-end automation to migrate network shares to SharePoint Online. The process effectively reduces the task of executing migrations to “just flicking a switch”.

Pre-Migration

The following pre-migration activities were conducted before the actual migration:

  1. Analysis of Network Shares
  2. Discussions with stakeholders from different business units to identify content needs
  3. Pilot migrations to identify average throughput capability of migration environment
  4. Identification of acceptable data filtration criteria, and preparation of Sharegate migration template files based on business requirements
  5. Derivation of a migration plan from the above steps

Migration Automation flow

The diagram represents a high-level flow of the process:
 

The migration automation was implemented to execute the following steps:

  1. Migration team indicates that migration(s) are ready to be initiated by updating the list item(s) in the SharePoint list
  2. Updated item(s) are detected by a PowerShell script polling the SharePoint list
  3. The list item data is downloaded as a CSV file, one CSV file per list item. The list item status is updated to “started”, so that it is not read again
  4. The CSV file(s) are picked up by another migration PowerShell script to initiate migration using Sharegate
  5. The required migration template is selected based on the options specified in the migration list item / csv to create a migration package
  6. The prepared migration task is queued for migration with Sharegate, and migration is executed
  7. Information mails are “queued” to be dispatched to migration team
  8. Emails are sent out to the recipients
  9. The migration reports are extracted out as CSV and stored at a network location.

Environment Setup

Software Components

The following software components were utilized for implementing the automation:

  1. SharePoint Online Management shell
  2. SharePoint PnP PowerShell
  3. Sharegate migration Tool

Environment considerations

Master and Migration terminals hosted as virtual machines – Each terminal is a Windows 10 virtual machine. The use of virtual machines provides the following advantages over using desktops:

  • VMs are generally deployed directly in datacenters, hence, near the data source.
  • Are available all the time and are not affected by power outages or manual shutdowns
  • Can be easily scaled up or down based on the project requirements
  • Benefit from having better internet connectivity and separate internet routing can be drawn up based on requirements

Single Master Terminal – A single master terminal is useful to centrally manage all other migration terminals. Using a single master terminal offers the following advantages:

  • Single point of entry to migration process
  • Acts as central store for scripts, templates, aggregated reports
  • Acts as a single agent to execute non-sequential tasks such as sending out communication mails

Multiple Migration terminals – It is optimal to initiate parallel migrations and use multiple machines (terminals) to expedite overall throughput by running in parallel within the available migration window (generally non-business hours). Sharegate has the option to use either 1 or 5 licenses at once during migration. We utilized 5 Sharegate licenses on 5 separate migration terminals.
PowerShell Remoting – Using PowerShell remoting allows opening remote PowerShell sessions to other windows machines. This will allow the migration team to control and manage migrations using just one terminal (Master Terminal) and simplify monitoring of simultaneous migration tasks. More information about PowerShell remoting can be found here.
PowerShell execution policy – The scripts running on the migration terminals will be stored at a network location on the master terminal. This allows changing / updating scripts on the fly without copying them over to the other migration terminals. The script execution policy of the PowerShell window will need to be set to “Bypass” to allow execution of scripts stored in a network location (for quick reference, the command is “Set-ExecutionPolicy -ExecutionPolicy Bypass”).
Windows Scheduled Tasks – The PowerShell scripts are scheduled as tasks through Windows Task schedulers and these tasks could be managed remotely using scripts running on the migration terminals. The scripts are stored at a network location in master terminal.

Basic task in a windows scheduler

PowerShell script file configured to run as a Task

Hardware specifications

Master terminal (Manage migrations)

  • 2 cores, 4 GB RAM, 100 GB HDD
  • Used for managing scripts execution tasks on other terminals (start, stop, disable, enable)
  • Used for centrally storing all scripts and ShareGate property mapping and migration templates
  • Used for Initiating mails (configured as basic tasks in task scheduler)
  • Used for detecting and downloading migration configuration of tasks ready to be initiated (configured as basic tasks in task scheduler)
  • Windows 10 virtual machine installed with the required software.
  • Script execution policy set as “Bypass”

Migration terminals (Execute migrations)

  • 8 cores, 16 GB RAM, 100 GB HDD
  • Used for processing migration tasks (configured as basic tasks in windows task scheduler)
  • Multiple migration terminals may be set up based on the available Sharegate licenses
  • Windows 10 Virtual machines each installed with the required software.
  • Activated Sharegate license on each of the migration terminals
  • PowerShell remoting needs to be enabled
  • Script execution policy set as “Bypass”

Migration Process

Initiate queueing of Migrations

Before migration, the migration team must perform any manual pre-migration tasks defined in the migration process agreed with stakeholders. Some of these pre-migration tasks / checks may be:

  • Inform other teams about a possible network surge
  • Confirming that no other activity is consuming bandwidth (e.g. scheduled updates)
  • Inform the impacted business users about the migration – this would be generally set up as the communication plan
  • Freezing the source data store as “read-only”

A list was created on a SharePoint Online site to enable users to indicate that a migration is ready to be processed. Updates in this list trigger the actual migration downstream. The migration plan is pre-populated in this list as a part of the migration planning phase. The migration team can then update one of the fields (ReadyToMigrate in this case) to initiate the migration, or skip a planned migration if so desired. Migration status is also updated back to this list by the automation process.
The list provides a single point of entry to initiate and monitor migrations. In other words, we are abstracting out migration processing with this list, and it can be an effective tool for the migration and communication teams.
The list was created with the following columns:

  • Source => Network Share root path
  • Destination site => https://yourtenant.sharepoint.com/sites/<Sitename>
  • Destination Library => Destination library on the site
  • Ready to migrate => Indicates that the migration is ready to be triggered
  • Migrate all data => Indicates whether all data from the source is to be migrated (default is No); otherwise only data matching the predefined filter options will be migrated (more on filter options can be found here)
  • Started => updated by automation when the migration package has been downloaded
  • Migrated => updated by automation after migration completion
  • Terminal Name => updated by automation specifying the terminal being used to migrate the task

 

Migration configuration list

 

After the migration team is ready to initiate the migration, the field “ReadyToMigrate” for the migration item in the SharePoint list is updated to “Yes”.


“Flicking the switch”

 
Script to create the migration configuration list
The script below creates the source list in SharePoint Online.
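The original embedded script isn’t reproduced here, but a minimal sketch using the SharePoint PnP PowerShell cmdlets (column names taken from the list described above; the site URL and list title are assumptions) could look like this:

Connect-PnPOnline -Url 'https://yourtenant.sharepoint.com/sites/MigrationCentral' -Credentials (Get-Credential)

# Create the migration configuration list
New-PnPList -Title 'MigrationTasks' -Template GenericList

# Add the columns used by the automation
Add-PnPField -List 'MigrationTasks' -DisplayName 'Source' -InternalName 'Source' -Type Text -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'DestinationSite' -InternalName 'DestinationSite' -Type Text -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'DestinationLibrary' -InternalName 'DestinationLibrary' -Type Text -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'ReadyToMigrate' -InternalName 'ReadyToMigrate' -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'MigrateAllData' -InternalName 'MigrateAllData' -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'Started' -InternalName 'Started' -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'Migrated' -InternalName 'Migrated' -Type Boolean -AddToDefaultView
Add-PnPField -List 'MigrationTasks' -DisplayName 'TerminalName' -InternalName 'TerminalName' -Type Text -AddToDefaultView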


Script to store credentials
This script stores the credentials in a file that can be used by subsequent scripts.
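The original script isn’t included, but a common pattern (and a reasonable sketch here, with an assumed file path) is to export a PSCredential to an XML file and re-import it in later scripts; note the exported file is only readable by the same user on the same machine:

# Capture and store the credentials (run once, interactively)
Get-Credential -Message 'Account used for SharePoint / Sharegate connections' |
    Export-Clixml -Path 'C:\AutomatedMigrationData\credentials\migration.cred.xml'

# Later, in the automation scripts
$cred = Import-Clixml -Path 'C:\AutomatedMigrationData\credentials\migration.cred.xml'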

Queuing the migration tasks

A PowerShell script polls the migration configuration list in SharePoint at regular intervals to determine whether a migration task is ready to be initiated. The available migration configurations are then downloaded as CSV files (one item per file) and stored in a migration packages folder on the master terminal. Each CSV file maps to one migration task to be executed by a migration terminal and ensures that the same migration task is not executed by more than one terminal. It is important that this script runs on a single terminal so that only one migration is executed for each source item.
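As a rough sketch of that polling logic (assuming the PnP cmdlets, the list, folder and column names used above, and the stored credential file):

$cred = Import-Clixml -Path 'C:\AutomatedMigrationData\credentials\migration.cred.xml'
Connect-PnPOnline -Url 'https://yourtenant.sharepoint.com/sites/MigrationCentral' -Credentials $cred

# Find items flagged as ready but not yet started
$readyItems = Get-PnPListItem -List 'MigrationTasks' |
    Where-Object { $_['ReadyToMigrate'] -eq $true -and $_['Started'] -ne $true }

foreach ($item in $readyItems) {
    # One CSV per list item becomes one migration package
    [pscustomobject]@{
        Id                 = $item.Id
        Source             = $item['Source']
        DestinationSite    = $item['DestinationSite']
        DestinationLibrary = $item['DestinationLibrary']
        MigrateAllData     = $item['MigrateAllData']
    } | Export-Csv -Path "\\masterterminal\c$\AutomatedMigrationData\packages\task_$($item.Id).csv" -NoTypeInformation

    # Mark the item as started so it is not picked up again
    Set-PnPListItem -List 'MigrationTasks' -Identity $item.Id -Values @{ 'Started' = $true } | Out-Null
}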
 


 

Execute Migration

The downloaded migration configuration CSV files are detected by migration script tasks executing on each of the migration terminals. Based on the specified source, destination and migration options, the following tasks are executed:

  1. Reads the item from configuration list to retrieve updated data based on item ID
  2. Verifies the source and sends a failure mail if the source is invalid or not available
  3. Revalidates if a migration is already initiated by another terminal
  4. Updates the “TerminalName” field in the SharePoint list to indicate an initiated migration
  5. Checks if the destination site is created. Creates if not already available
  6. Checks if the destination library is created. Creates if not already available
  7. Triggers an information mail informing migration start
  8. Loads the required configurations based on the required migration outcome. The migration configurations specify migration options such as cut over dates, source data filters, security and metadata. More about this can be found here.
  9. Initiates the migration task
  10. Extracts the migration report and stores it as a CSV
  11. Extracts the secondary migration report as a CSV to derive the paths of all files successfully migrated. These CSVs can be read by an optional downstream process.
  12. Triggers an information mail informing migration is completed
  13. Checks for another queued migration to repeat the procedure.

The automation script is given below:
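The full script isn’t embedded here; reduced to a sketch, its core reads a downloaded CSV package and hands the work to Sharegate. The Sharegate cmdlet and parameter names below (Connect-Site, Get-List, Import-Document, -TemplateName) are quoted from memory, and the paths and template names are assumptions, so verify them against the Sharegate PowerShell documentation before relying on this:

Import-Module Sharegate

$package = Import-Csv -Path "\\masterterminal\c$\AutomatedMigrationData\packages\task_12.csv"
$cred    = Import-Clixml -Path 'C:\AutomatedMigrationData\credentials\migration.cred.xml'

# Connect to the destination site and library
$dstSite = Connect-Site -Url $package.DestinationSite -Credential $cred
$dstList = Get-List -Site $dstSite -Name $package.DestinationLibrary

# Pick the template based on the 'Migrate all data' flag, then run the import
$templateName = if ($package.MigrateAllData -eq 'True') { 'All data' } else { 'Filtered Data' }
$result = Import-Document -SourceFolder $package.Source -DestinationList $dstList -TemplateName $templateName

# $result holds the migration outcome; exporting the reports (steps 10 and 11 above)
# uses Sharegate's reporting facilities and is omitted from this sketch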


 


Additional Scripts

Send Mails

The script triggers emails to the required recipients.
This script polls a folder (‘\\masterterminal\c$\AutomatedMigrationData\mails\input’) to check for any files to be sent out as emails. The CSV files specify the subject and body to be sent out as emails to the recipients configured in the script. Processed CSV files are moved to a final folder.
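A minimal sketch of that mail dispatcher (the SMTP server, addresses and the name of the final folder are assumptions):

$inputFolder     = '\\masterterminal\c$\AutomatedMigrationData\mails\input'
$processedFolder = '\\masterterminal\c$\AutomatedMigrationData\mails\processed'

foreach ($file in Get-ChildItem -Path $inputFolder -Filter '*.csv') {
    $mail = Import-Csv -Path $file.FullName

    # Each CSV carries the subject and body for one notification
    Send-MailMessage -SmtpServer 'smtp.yourtenant.com' `
        -From 'migrations@yourtenant.com' -To 'migrationteam@yourtenant.com' `
        -Subject $mail.Subject -Body $mail.Body

    # Move the file so it is not sent twice
    Move-Item -Path $file.FullName -Destination $processedFolder
}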


 

Manage migration tasks (scheduled tasks)

The PowerShell script utilizes PowerShell remoting to manage Windows Task Scheduler tasks configured on the other terminals.
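For example, pausing or resuming the migration task across all terminals from the master could be sketched as follows (the terminal and task names are assumptions):

$terminals = 'MIGTERM01', 'MIGTERM02', 'MIGTERM03', 'MIGTERM04', 'MIGTERM05'

# Pause the migration task on every terminal, e.g. ahead of a maintenance window
Invoke-Command -ComputerName $terminals -ScriptBlock {
    Disable-ScheduledTask -TaskName 'ExecuteMigration'
}

# Resume when ready
Invoke-Command -ComputerName $terminals -ScriptBlock {
    Enable-ScheduledTask -TaskName 'ExecuteMigration'
}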
 


Conclusion

The migration automation process as described above helps in automating the migration project and reduces manual overhead during the migration. Since the scripts utilize pre-configured migration options / templates, the outcome is consistent with the plan. Controlling and monitoring migration tasks utilizing a SharePoint list introduces transparency in the system and abstracts the migration complexity. Business stakeholders can review migration status easily from the SharePoint list and this ensures an effective communication channel. Automated mails informing about migration status provide additional information about the migration. The migration tasks are executed in parallel across multiple migration machines which aids in a better utilization of available migration window.
 
 

Utilizing Sharegate migration Templates for Network share migrations

Sharegate supports PowerShell-based scripting, which can be used to automate and schedule migrations. The purpose of this post is to demonstrate the use of pre-created migration templates to initiate migration tasks in Sharegate using PowerShell scripts. In one of my previous projects, we were migrating network shares to SharePoint Online using Sharegate as the migration tool of choice.

Based on our discussions with business divisions and IT department, the following requirements were identified for most of the divisions:

  1. Office documents, PDFs and image files will be migrated
  2. Include only documents modified after a given date, e.g. January 1, 2016
  3. Permissions should be preserved
  4. There were some exceptions where the identified divisions requested all available data to be migrated.

To address the above business requirements, we created custom migration templates, and these templates were used to trigger the migration tasks. This allowed us to use pre-created configurations for each migration and eliminate manual overhead and the risk of errors. Sharegate migration templates (.sgt files) are XML-based configuration documents and encapsulate configuration options such as:

  • Copy options
  • Content type mappings
  • Allowed extensions
  • Cut-off dates

Migration Templates
I’ll present two (2) migration templates that were prepared to cover off the business requirements:
Template 1 – All data. This migration template configures Sharegate to migrate all data to the destination library. The configuration values to be considered are:

  • CopyPermissions – Specifies that all permissions on the files / folders be copied over [configured as “true”]
  • KeepAuthorsAndTimestamps – Specifies that the file / folder metadata such as Authors, associated time stamps (created date, modified date) be copied over [configured as “true”]
  • ContentFileExtensionFilterType – specifies file extension type filter to copy data [configured as “AllExtensions”]


 
Template 2 – Filtered Data. This migration template configures Sharegate to migrate all documents that were modified / created after 1st January 2016 and match one of the configured extensions to the destination library. The configuration values to be considered are:

  • CopyPermissions – Specifies that all permissions on the files / folders be copied over [configured as “true”]
  • ContentFrom – specifies the cut-over date; anything modified after this date shall be copied [configured as “01/01/2016 10:00:00”]
  • KeepAuthorsAndTimestamps – Specifies that the file / folder metadata such as authors and associated time stamps (created date, modified date) be copied over [configured as “true”]
  • ContentFileExtension – specifies the allowed file types to be copied over [configured as “pdf; xlsx; xls; doc; pptx; docx; ppt; xlsm; rtf; vsd; pub; docm; xlsb; mpp; one; pps; dotx; dot; pot; xlk; vdx; pptm; xlt; xlam; potx; odt; xla; wk4; dic; dotm; xltm; xltx; onetoc2; vss; thmx; vsdx; xps; msg; eml; jpg; cr2; png; tif; psd; bmp; eps; ai; jpeg; gif; indd; dwg; svg; djvu; ico; wmf; dcx; emf; xpm; pdn; cam; ”]
  • ContentFileExtensionFilterType – specifies the file extension type filter to copy data [configured as “LimitTo”]


 
Migration templates in Sharegate user interface
The migration templates presented as XML files above may be imported into the Sharegate user interface to inspect and test. The migration options are available while configuring a migration, as demonstrated in the screenshot below:

 
The screenshots below show the migration options as visible in the Sharegate user interface when the templates are imported into the tool.

Copy Options Screen – Template 1

Content type mapping screen – Template 1

Filter options screen – Template 1

 

Copy Options Screen – Template 2

Content type mapping screen – Template 2

Filter options screen – Template 2

 
PowerShell scripts to initiate migrations
Migrations can be initiated from the PowerShell console by running the following script:
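The original script isn’t embedded here; reduced to its essence, and with the caveat that the Sharegate cmdlet and parameter names (Connect-Site, Get-List, Import-Document, -TemplateName) are quoted from memory and the URLs and paths are assumptions, it boils down to something like:

Import-Module Sharegate

$dstSite = Connect-Site -Url 'https://yourtenant.sharepoint.com/sites/FinanceDocs' -Browser
$dstList = Get-List -Site $dstSite -Name 'Documents'

# 'Filtered Data' is the template shown above; swap in 'All data' for the unfiltered variant
Import-Document -SourceFolder '\\fileserver\finance\shared' -DestinationList $dstList -TemplateName 'Filtered Data'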


 
The coded migration templates demonstrated above have merit in eliminating the need to manually configure options for each migration in the Sharegate user interface.
To take this to a whole new level, the overall migration project can be further split into multiple migration tasks and scheduled as sequential migrations across multiple machines, utilizing common migration templates and scripts stored in a network location, but that is something I’ll cover off as a part of a separate blog post.
Happy Migrating…

Deploying an Active Directory Forest using AWS CloudFormation

First published at https://nivleshc.wordpress.com

Introduction

Wow, it is amazing how time flies. Almost two years ago, I wrote a set of blogs that showed how one can use Azure Resource Manager (ARM) templates and Desired State Configuration (DSC) scripts to deploy an Active Directory Forest automatically.
For those that would like to take a trip down memory lane, here is the link to the blog.
Recently, I have been playing with AWS CloudFormation and I am simply in awe by its power. For those that are not familiar with AWS CloudFormation, it is a tool, similar to Azure Resource Manager, that allows you to “code” your computing infrastructure in Amazon Web Services. Long gone are the days when you would have to sit down, pressing each button and choosing each option to deploy your environment. Cloud computing provides you with a way to interface with the fabric, so that you can script the build of your environment. The benefits of this are enormous. Firstly, it allows you to standardise all your builds. Secondly, it allows you to have a live as-built document (the code is the as-built document). Thirdly, the code is re-useable. Most important of all, since the deployment is now scripted, you can automate it.
In this blog I will show you how to create an AWS CloudFormation template to deploy an AWS Elastic Compute Cloud (EC2) Windows Server instance. The template will also include steps to promote the EC2 instance to a Domain Controller in a new Active Directory Forest.
Guess what the best part is? Once the template has been created, all you will have to do is to load it into AWS CloudFormation, provide a few values and sit back and relax. AWS CloudFormation will do everything for you from there on!
Sounds interesting? Let’s begin.

Creating the CloudFormation Template

A CloudFormation template starts with a definition of the parameters that will be used. The person running the template (let’s refer to them as an operator) will be asked to provide the values for these parameters.
When defining a parameter, you will provide the following:

  • a name for the parameter
  • its type
  • a brief description for the parameter so that the operator knows what it will be used for
  • any constraints you want to put on the parameter, for instance
    • a maximum length (for strings)
    • a list of allowed values (in this case a drop down list is presented to the operator, to choose from)
  • a default value for the parameter

For our template, we will use the following parameters.

Next, we will define some mappings. Mappings allow us to define the values for variables, based on what value was provided for a parameter.
When creating EC2 instances, we need to provide a value for the Amazon Machine Image (AMI) to be used. In our case, we will use the OS version to decide which AMI to use.
To find the subnet into which the EC2 instance will be deployed, we will use the Environment and AvailabilityZone parameters.
The code below defines the mappings we will use.

The next section in the CloudFormation template is Resources. This defines all resources that will be created.
If you have any experience deploying Active Directory Forests, you will know that it is extremely simple to do it using PowerShell scripts. Guess what, we will be using PowerShell scripts as well 😉 Now, after the EC2 instance has been created, we need to provide the PowerShell scripts to it, so that it can run them. We will use AWS Simple Storage Service (S3) buckets to store our PowerShell scripts.
To ensure our PowerShell scripts are stored securely, we will allow access to it only via a certain role and policy.
The code below will create an AWS Identity and Access Management (IAM) role and policy to access the S3 Bucket where the PowerShell scripts are stored.

We will use cfn-init to do all the heavy lifting for us once the EC2 instance has been created. cfn-init is a utility that is present by default in EC2 instances, and we can ask it to perform tasks for us.
To trigger cfn-init, we will use the UserData feature of EC2 instance provisioning. cfn-init, when started, will check the EC2 metadata for the credentials it will use, and it will also check it for all the tasks it needs to perform.
Below is the metadata that will be used. For simplicity, I have hardcoded the URL to the files in the S3 bucket.

As you can see, I have first defined the role that cfn-init will use to access the S3 bucket. Next, the following tasks will be carried out, in the order defined in the configuration set:

  • get-files
    • it will download the files from S3 and place them in the local directory c:\s3-downloads\scripts.
  • configure-instance (the commands in this section are run in alphabetical order; that is why I have prefixed them with a number, to ensure they follow the order I want)
    • It will change the execution policy for PowerShell to unrestricted (please note that this is just for demonstration purposes and the execution policy should not be made this relaxed).
    • next, the name of the server will be changed to what was provided in the Parameters section
    • the following Windows Components will be installed (as defined in the Add-WindowsComponents.ps1 script file)
      • RSAT-AD-PowerShell
      • AD-Domain-Services
      • DNS
      • GPMC
    • the Active Directory Forest will be created, using the Configure-ADForest.ps1 script and the values provided in the Parameters section (a rough sketch of the two helper scripts follows this list)
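The actual Add-WindowsComponents.ps1 and Configure-ADForest.ps1 scripts are linked at the end of this post; conceptually they reduce to something like the following sketch (the domain name and password here are placeholders, not the values used in the template):

# Add-WindowsComponents.ps1 (sketch): install the roles and tools listed above
Install-WindowsFeature -Name RSAT-AD-PowerShell, AD-Domain-Services, DNS, GPMC -IncludeManagementTools

# Configure-ADForest.ps1 (sketch): promote the instance to the first DC of a new forest
$safeModePwd = ConvertTo-SecureString 'Placeholder-P@ssw0rd' -AsPlainText -Force
Install-ADDSForest -DomainName 'corp.example.com' `
    -SafeModeAdministratorPassword $safeModePwd `
    -InstallDns -Force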

In the last part of the CloudFormation template, we will provide the UserData information that will trigger cfn-init to run and do all the configuration. We will also tag the EC2 instance, based on values from the Parameters section.
For simplicity, I have hardcoded the security group that will be attached to the EC2 instance (this is defined as GroupSet under NetworkInterfaces). You can easily create an additional parameter for this, if you want.
Finally, our template will output the instance’s hostname, the environment it has been created in and its private IP. This provides an easy way to identify the EC2 instance once it has been created.
Below is the last part of the template

Now all you have to do is login to AWS CloudFormation, load the template we have created, provide the parameter values and sit back and relax.
AWS CloudFormation will take it from here and do everything for you 😉
How easy was that? Magic 🙂
The complete CloudFormation template is available at https://gist.github.com/nivleshc/867b1a2ca119c7d22cf215b5a9a5de02
The two PowerShell Scripts that are used in the CloudFormation template can be downloaded using the links below
Add-WindowsComponents.ps1
Configure-ADForest.ps1
For anyone deploying an Active Directory Forest in AWS, I hope the above comes in handy.
Enjoy 😉

Enterprise-ready testing in Xamarin

As focus moves towards mobile-first development, enterprise developers should also focus on automated testing. In an enterprise setting, we like to test for merge conflicts, sync issues, authentication failures, etc., and we need to test this on hundreds of devices.
In this post, we will look at how we can set up a UI test project and application code so that we can test against scenarios that we face in real-life situations (no connection, sync issues, authentication failures). We will achieve this by utilizing mock application services in the application. The project is a Xamarin Forms application, where most of the application code resides in a .NET Core project. It does not matter which application pattern we use, e.g. MVC, MVVM or MVP.
In a basic unit test, we test various method of a library by mocking all services that it might depend upon and check against expected behaviour.
In UI tests, our aim is to check that the UI layout works across multiple devices and that the data displayed by the view is correct. We also check that the application behaves as expected in scenarios such as password expiration, no connection, a failure to get the device location, etc. Using the same principles as above, we should be able to simulate multiple conditions/data sets for the UI and test against expected results.
All of the code for the below application is available for download at Sample UI Automation.

MOCKING

The first service to mock will be a network service, i.e. a service that is responsible for interacting with back-end APIs. Like any other project, we have a basic network service contract inside our application, defined as:

Assuming we already have an implementation for the above service called NetworkService (along with its unit tests), we are going to add a mocked service. The code below works in two modes, RECORD and MOCK. In RECORD mode, the service acts similar to a real network service, but also saves those interactions (mostly JSON/XML payloads) to a file. These interactions are then used to replay the scenario inside a UI test.

 
The RECORD mode is used with an Android device so that we are able to save the file on the file system and retrieve it using the ADB shell tool.

Project Setup

Before we use the above service, let’s define a new configuration for our solution called ‘Automation’.
Right click on your project solution and click ‘Options’. Then under Build -> Configurations, select the ‘General’ tab. Now click ‘Add’ and enter the name as ‘Automation’. Do this for each platform, i.e. Any CPU, iPhone, and iPhoneSimulator.
Now go to your main iOS and Android projects and define symbols for ‘Automation’ configuration. Here I have added ‘ENABLED_TEST_CLOUD’ and ‘AUTOMATION’.
For this project, we are using Autofac for IoC. When the configuration selected is Automation, the mock network service will be injected into the application.

The mock service, in RECORD mode, behaves like the real service and also saves those interactions. After loading the service, all that is left to do is test your screen manually. Once the manual testing of the application is finished, we should be able to retrieve the MockRecoding.json file (the name defined in the service above) from the Android device. This file will have all the data that we will need while writing our UI tests.
Retrieve the file from the device using the ADB tools (adb pull). The recording is a JSON file containing the captured interactions that we will replay in our tests.

Backdoor methods

Next, we will see how we can use the above-saved interaction file and use it in our tests. UI tests are mostly commands that we send to the application, such as button clicks and gestures, and we then test the UI for expected results. We need to write backdoor methods in our application that we can later invoke from a UI test.
Below is a ‘LoadApiData’ method, which injects data into the mock service using its exposed public methods. Again, this code only works when the selected configuration is ‘Automation’.

Consider a scenario where we swipe down a list view to refresh it. For testing this scenario, we want to first set the data that the list initially has, then we will want to set the refreshed data, then perform the swipe gesture, and finally check that the list has the expected data in it.
Add the recording file that we saved earlier to the UI test project, inside a folder named ‘Data’. Then right-click on the file and set its ‘Build Action’ to ‘EmbeddedResource’. For more about embedded resources, check out this blog.
The test can be as simple as this. Note how it uses the backdoor method before moving to list screen. Using Xamarin test cloud (now App center), we can run the below test on hundreds of devices and check for UI specifics.

With mocking and by utilizing the test cloud, we are able to test the same scenario on hundreds of devices. Not only that – since we are mocking it, the same MockNetworkService can be used during development too, such as when the APIs are still under development. All we need is the JSON payload, and we can manually edit the recording file.
 
 
 

Automate SharePoint Document Libraries Management using Flow, Azure Functions and SharePoint CSOM

I’ve been working on a client requirement to automate SharePoint library management via scripts, to implement a document lifecycle across many document libraries that have custom content types and require regular housekeeping for ownership and permissions.
Solution overview
To provide a seamless user experience, we decided to do the following:

1. Create a document library template (.stp) with all the prerequisite folders and content types applied.

2. Create a list to store the data about entries for said libraries. Add the owner and contributors for the library as columns in that list.

3. Whenever the title, owners or contributors are changed, the destination document library will be updated.

Technical Implementation
The solution has two main elements to automate this process

1. Microsoft Flow – Trigger when an item is created or modified

2. Two Azure Functions – Create the library and update permissions

The broad steps and code are as follows:
1. When the flow is triggered, we would check the status field to find if it is a new entry or a change.
Note: Since Microsoft flow doesn’t have conditional triggers to differentiate between create and modified list item events, use a text column in the SharePoint list which is set to start, in progress and completed values to identify create and update events.
2. The flow will call an Azure function via an HTTP Post action in a Function. Below is the configuration of this.
AzureFunctionFromFlow
3. For the “Create Library” Azure function, create an HTTP C# Function in your Azure subscription.
4. In the Azure Function, open Properties -> App Service Editor. Then add a folder called bin and then copy two files to it.

  • Microsoft.SharePoint.Client.dll
  • Microsoft.SharePoint.Client.Runtime.dll

KuduTools_AzureFunction
Create Lib App Service Editor
Please make sure to get the latest copy of the SharepointPnPOnlineCSOM NuGet package. To do that, you can set up a VS solution and copy the files from there, or download the NuGet package directly and extract the files from it.
5. After copying the files, reference them in the Azure function using the below code

#r "Microsoft.SharePoint.Client.dll"
#r "Microsoft.SharePoint.Client.Runtime.dll"
#r "System.Configuration"
#r "System.Web"

6. Then create the SharePoint client context and create a connection to the source list.
7. After that, use the ListCreationInformation class to create the Document library from the library template using the code below.

8. After the library is created, break the role inheritance for the library as per the requirement
9. Update the library permissions using the role assignment object
10. To differentiate between People, SharePoint Groups and AD Groups, find the unique ID and add the group as per the script below.

Note: In case you have people objects that are not in AD anymore because they have left the organisation, please refer to this blog for validating them before updating – Resolving “User not found” issue while assigning permissions using SharePoint CSOM

Note: Avoid calling item.Update() from the Azure Function, as that will trigger another Flow run and cause an endless loop; use item.SystemUpdate() instead.
11. After the update is done, return to the Flow with the success value from the Azure Function which will complete the loop.
As shown above, we can automate document library creation from a template, as well as permissions management, using Flow and Azure Functions.

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 3)

Hopefully you’ve had the chance to follow along in parts 1 and 2 where we set up our Lex chatbot to take and validate input. In this blog, we’ll interface with our Active Directory environment to perform the password reset function. To do this, we need to create a Lambda function that will be used as the logic to fulfil the user’s intent. The Lambda function will be packaged with the python LDAP library to modify the AD password attribute for the user. Below are the components that need to be configured.

Active Directory Service Account

To begin, we need to create a service account in Active Directory that has permissions to modify the password attribute for all users. Our Lambda function will then use this service account to perform password resets. Create a service account in your Active Directory domain and perform the following to delegate the required permissions:

  1. Open Active Directory Users and Computers.
  2. Right click the OU or container that contains organisational users and click Delegate Control
  3. Click Next on the Welcome Wizard.
  4. Click Add and enter the service account that will be granted the reset password permission.
  5. Click OK once you’ve made your selection, followed by Next.
  6. Ensure that Delegate the following common tasks is enabled, and select Reset user passwords and force password change at next logon.
  7. Click Next, and Finish.

KMS Key for the AD Service Account

Our Lambda function will need to use the credentials for the service account to perform password resets. We want to avoid storing credentials within our Lambda function, so we’ll store the password as an encrypted Lambda variable and allow our Lambda function to decrypt it using Amazon’s Key Management Service (KMS). To create the KMS encryption key, perform the following steps:

  1. In the Amazon Console, navigate to IAM
  2. Select Encryption Keys, then Create key
  3. Provide an Alias (e.g. resetpw) then select Next
  4. Select Next Step for all subsequent steps then Finish on step 5 to create the key.

IAM Role for the Lambda Function

Because our Lambda function will need access to several AWS services such as SNS to notify the user of their new password and KMS to decrypt the service account password, we need to provide our function with an IAM role that has the relevant permissions. To do this, perform the following steps:

  1. In the Amazon Console, navigate to IAM
  2. Select Policies then Create policy
  3. Switch to the JSON editor, then copy and paste the following policy, replacing the KMS resource with the KMS ARN created above
    {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": "sns:Publish",
           "Resource": "*"
         },
         {
           "Effect": "Allow",
           "Action": "kms:Decrypt",
           "Resource": "<KMS ARN>"
         }
       ]
    }
  4. Provide a name for the policy (e.g. resetpw), then select Create policy
  5. After the policy has been created, select Roles, then Create Role
  6. For the AWS Service, select Lambda, then click Next:Permissions
  7. Search for and select the policy you created in step 5, as well as the AWSLambdaVPCAccessExecutionRole and AWSLambdaBasicExecutionRole policies then click Next: Review
  8. Provide a name for the role (e.g. resetpw) then click Create role

Network Configuration for the Lambda Function

To access our Active Directory environment, the Lambda function will need to run within a VPC or a peered VPC that hosts an Active Directory domain controller. Additionally, we need the function to access the internet to be able to access the KMS and SNS services. Lambda functions provisioned in a VPC are assigned an Elastic network Interface with a private IP address from the subnet it’s deployed in. Because the ENI doesn’t have a public IP address, it cannot simply leverage an internet gateway attached to the VPC for internet connectivity and as a result, must have traffic routed through a NAT Gateway similar to the diagram below.

Password Reset Lambda Function

Now that we’ve performed all the preliminary steps, we can start building the Lambda function. Because the Lambda execution environment uses Amazon Linux, I prefer to build my Lambda deployment package on a local Amazon Linux docker container to ensure library compatibility. Alternatively, you could deploy a small Amazon Linux EC2 instance for this purpose, which can assist with troubleshooting if it’s deployed in the same VPC as your AD environment.
Okay, so let’s get started on building the lambda function. Log in to your Amazon Linux instance/container, and perform the following:

  • Create a project directory and install python-ldap dependencies, gcc, python-devel and openldap-devel
    mkdir ~/resetpw
    sudo yum install python-devel openldap-devel gcc
  • Next, we’re going to download the python-ldap library to the directory we created
    pip install python-ldap -t ~/resetpw
  • In the resetpw directory, create a file called reset_function.py and copy and paste the following script
  • Now, we need to create the Lambda deployment package. As the package size is correlated with the speed of Lambda function cold starts, we need to filter out anything that’s not necessary to reduce the package size. The following zips the script and LDAP library:
    cd ~/resetpw
    zip -r ~/resetpw.zip . -x "setuptools*/*" "pkg_resources/*" "easy_install*"
  • We need to deploy this package as a Lambda function. I’ve got AWSCLI installed in my Amazon Linux container, so I’m using the following CLI to create the Lambda function. Alternatively, you can download the zip file and manually create the Lambda function in the AWS console using the same parameters specified in the CLI below.
    aws lambda create-function --function-name reset_function --region us-east-1 --zip-file fileb://root/resetpw.zip --role resetpw --handler reset_function.lambda_handler --runtime python2.7 --timeout 30 --vpc-config SubnetIds=subnet-a12b3cde,SecurityGroupIds=sg-0ab12c30 --memory-size 128
  • For the script to run in your environment, a number of Lambda variables need to be set which will be used at runtime. In the AWS Console, navigate to Lambda then click on your newly created Lambda function. In the environment variables section, create the following variables:
    • Url – This is the LDAPS URL for your domain controller. Note that it must be LDAP over SSL.
    • Domain_base_dn – The base distinguished name used to search for the user
    • User – The service account that has permissions to change the user password
    • Pw – The password of the service account
  • Finally, we need to encrypt the Pw variable in the Lambda console. Navigate to the Encryption configuration and select Enable helpers for encryption in transit. Select your KMS key for both encryption in transit and at rest, then select the Encrypt button next to the pw variable. This will encrypt and mask the value.

  • Hit Save in the top right-hand corner to save the environment variables.

That’s it! The Lambda function is now configured. A summary of the Lambda function’s logic is as follows:

  1. Collect the Lambda environment variables and decrypt the service account password
  2. Perform a secure AD bind but don’t verify the certificate (I’m using a Self-Signed Cert in my lab)
  3. Collect the user’s birthday, start month, telephone number, and DN from AD
  4. Check the user’s verification input
  5. If the input is correct, reset the password and send it to the user’s telephone number, otherwise exit the function.

Update and test Lex bot fulfillment

The final step is to add the newly created Lambda function to the Lex bot so it’s invoked after input is validated. To do this, perform the following:

  1. In the Amazon Console, navigate to Amazon Lex and select your bot
  2. Select Fulfillment for your password reset intent, then select AWS Lambda function
  3. Choose the recently created Lambda function from the drop down box, then select Save Intent

That should be it! You’ll need to build your bot again before testing…

My phone buzzes with a new SMS…

Success! A few final things worth noting:

  • All Lambda execution logs will be written to CloudWatch logs, so this is the best place to begin troubleshooting if required
  • Modification to the AD password attribute is not possible without a secure bind and will result in an error.
  • The interrogated fields (month started and date of birth) are custom AD attributes in my lab.
  • Password complexity in my lab is relaxed. If you’re using the default password complexity in AD, the randomly generated password in the lambda function may not meet complexity requirements every time.

Hope to see you in part 4, where we bolt on Amazon Connect to field phone calls!

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 2)

Welcome back! Hopefully you had the chance to follow along in part 1 where we started creating our Lex chatbot. In part 2, we attempt to make the conversation more human-like and begin integrating data validation on our slots to ensure we’re getting the correct input.

Creating the Lambda initialisation and validation function

As data validation requires compute, we’ll need to start by creating an AWS Lambda function. Head over to the AWS console, then navigate to the AWS Lambda page. Once you’re there, select Create Function and choose to Author from Scratch then specify the following:
  • Name: ResetPWCheck
  • Runtime: Python 2.7 (it’s really a matter of preference)
  • Role: I use an existing Out of the Box role, “Lambda_basic_execution”, as I only need access to CloudWatch logs for debugging.

Once you’ve populated all the fields, go ahead and select Create Function. The script we’ll be using is provided further down in this blog; however, before we go through it in detail, there are two items worth mentioning.

Input Events and Response Formats

It’s well worth familiarising yourself with the page on Lambda Function Input Event and Response Formats in the Lex Developer Guide. Every time input is provided to Lex, it invokes the Lambda initialisation and validation function. For example, when I tell my chatbot “I need to reset my password”, the Lambda function is invoked and an event describing the state of the conversation is passed to it.
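
Abridged, and with illustrative values rather than anything captured from my environment, that event looks roughly like this (shown as a Python dict for readability):

    # Abridged Lex input event (all values are illustrative)
    event = {
        'messageVersion': '1.0',
        'invocationSource': 'DialogCodeHook',      # initialisation and validation hook
        'userId': 'some-user-id',
        'inputTranscript': 'I need to reset my password',
        'outputDialogMode': 'Text',                # 'Voice' when the user speaks to the bot
        'sessionAttributes': {},
        'bot': {'name': 'UserAdministration', 'version': '$LATEST'},
        'currentIntent': {
            'name': 'ResetPW',
            'slots': {'UserID': None, 'DOB': None, 'MonthStarted': None},
            'confirmationStatus': 'None'
        }
    }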

Amazon Lex expects a response from the Lambda function in JSON format that provides it with the next dialog action.
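
Again abridged and with illustrative values, a response asking Lex to prompt for the next slot might look like this:

    # Abridged response telling Lex to prompt for the next slot
    response = {
        'sessionAttributes': {'Completed': 'None'},    # persisted state, covered in the next section
        'dialogAction': {
            'type': 'ElicitSlot',                      # other types include Delegate, ConfirmIntent and Close
            'intentName': 'ResetPW',
            'slots': {'UserID': None, 'DOB': None, 'MonthStarted': None},
            'slotToElicit': 'UserID',
            'message': {'contentType': 'PlainText',
                        'content': 'What is your six digit user ID?'}
        }
    }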

Persisting Variables with Session Attributes

There are many ways to determine within your Lambda function where you’re up to in the chat dialog; however, my preferred method is to pass state information within the sessionAttributes object of the input event and response as a key/value pair. The sessionAttributes can persist between invocations of the Lambda function (every time input is provided to the chatbot), but you must remember to collect them from the input event and pass them back in each response to ensure they persist.

Input Validation Code

With that out of the way, let’s begin looking at the code. The script below is what I’ve used, which you can simply copy and paste, assuming you’re using the same slot and intent names in your Lex bot that were used in Part 1.
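
Its overall structure is captured in the simplified sketch below; the helper names, prompts and “Completed” state values are illustrative rather than the exact original code, while the intent (ResetPW) and slots (UserID, DOB, MonthStarted) match Part 1.

    # Simplified sketch of the initialisation and validation handler.
    # Helper names, prompts and the 'Completed' state values are illustrative.

    def elicit_slot(session_attributes, intent, slot_to_elicit, message, content_type='PlainText'):
        # Ask Lex to (re)prompt the user for a specific slot.
        return {'sessionAttributes': session_attributes,
                'dialogAction': {'type': 'ElicitSlot', 'intentName': intent['name'],
                                 'slots': intent['slots'], 'slotToElicit': slot_to_elicit,
                                 'message': {'contentType': content_type, 'content': message}}}

    def confirm_intent(session_attributes, intent, message, content_type='PlainText'):
        # Read the collected values back and ask the user to confirm them.
        return {'sessionAttributes': session_attributes,
                'dialogAction': {'type': 'ConfirmIntent', 'intentName': intent['name'],
                                 'slots': intent['slots'],
                                 'message': {'contentType': content_type, 'content': message}}}

    def delegate(session_attributes, intent):
        # Hand control back to Lex to continue the configured dialog (prompt the next slot).
        return {'sessionAttributes': session_attributes,
                'dialogAction': {'type': 'Delegate', 'slots': intent['slots']}}

    def close(session_attributes, message):
        # End the conversation.
        return {'sessionAttributes': session_attributes,
                'dialogAction': {'type': 'Close', 'fulfillmentState': 'Failed',
                                 'message': {'contentType': 'PlainText', 'content': message}}}

    def chkUserId(user_id):
        # An empty slot means Lex rejected non-numeric input; also enforce six digits.
        return user_id is not None and len(user_id) == 6

    def lambda_handler(event, context):
        session_attributes = event.get('sessionAttributes') or {}
        intent = event['currentIntent']
        slots = intent['slots']

        # The user rejected the confirmation prompt: exit and ask them to call back.
        if intent['confirmationStatus'] == 'Denied':
            return close(session_attributes, 'Sorry about that. Please call back and try again.')

        # The user confirmed the details: let Lex move on to fulfilment (next blog).
        if intent['confirmationStatus'] == 'Confirmed':
            return delegate(session_attributes, intent)

        # First invocation: welcome the user and ask for their user ID.
        if 'Completed' not in session_attributes:
            session_attributes['Completed'] = 'None'
            return elicit_slot(session_attributes, intent, 'UserID',
                               'Sure, I can help with that. To verify your identity, what is your six digit user ID?')

        # User ID must be present (numeric) and six digits long.
        if not chkUserId(slots.get('UserID')):
            return elicit_slot(session_attributes, intent, 'UserID',
                               'That does not look right. Please enter your six digit user ID.')
        session_attributes['Completed'] = 'UserID'

        # Date of birth: the slot type is strict, so just make sure it was captured.
        if not slots.get('DOB'):
            return delegate(session_attributes, intent)
        session_attributes['Completed'] = 'DOB'

        # Month started: once captured, read everything back for confirmation.
        if not slots.get('MonthStarted'):
            return delegate(session_attributes, intent)
        session_attributes['Completed'] = 'MonthStarted'

        if event.get('outputDialogMode') == 'Voice':
            # Use SSML so the user ID is read back as individual digits.
            message = ('<speak>Just to confirm, your user ID is '
                       '<say-as interpret-as="digits">{}</say-as>, your date of birth is {} '
                       'and the month you started is {}. Is this correct?</speak>').format(
                           slots['UserID'], slots['DOB'], slots['MonthStarted'])
            return confirm_intent(session_attributes, intent, message, content_type='SSML')

        message = ('Just to confirm, your user ID is {}, your date of birth is {} '
                   'and the month you started is {}. Is this correct?').format(
                       slots['UserID'], slots['DOB'], slots['MonthStarted'])
        return confirm_intent(session_attributes, intent, message)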

Let’s break it down.
When the Lambda function is first invoked, we check to see if any state is set in the sessionAttributes. If not, we can assume this is the first time the Lambda function has been invoked and, as a result, provide a welcoming response while requesting the user’s ID. To ensure the user isn’t welcomed again, we set a session state so the Lambda function knows to move to User ID validation when next invoked. This is done by setting the “Completed” : “None” key/value pair in the response sessionAttributes.

Next, we check the User ID. You’ll notice the chkUserId function checks for two things: that the slot is populated and, if it is, the length of the field. Because the slot type is AMAZON.Number, any non-numeric characters that are entered will be rejected by the slot. If this occurs, the slot will be left empty, hence this is something we’re looking out for. We also want to ensure the User ID is 6 digits, otherwise it is considered invalid. If the input is correct, we set the session state key/value pair to indicate User ID validation is complete and allow the dialog to continue; otherwise we ask the user to re-enter their User ID.

Next, we check the Date of Birth. Because the slot type is strict about input, we don’t do much validation here. An utterance for this slot type generally maps to a complete date in the form YYYY-MM-DD. For validation purposes, we’re just looking for an empty slot. As with the User ID check, we set the session state and allow the dialog to continue if all looks good.

Finally, we check the last slot, which is the Month Started. Assuming the input for the month started is correct, we then confirm the intent by reading all the slot values back to the user and asking if they’re correct. You’ll notice here that there’s a bit of logic to determine whether the user is using voice or text to interact with Lex. If voice is used, we use Speech Synthesis Markup Language (SSML) to ensure the UserID value is read as individual digits, rather than as one full number.
If the user is happy with the slot values, the validation completes and Lex then moves to the next Lambda function to fulfil the intent (next blog). If the user isn’t happy with the slot values, the Lambda function exits, telling the user to call back and try again.

Okay, now that our Lambda function is finished, we need to enable it as a code hook for initialisation and validation. Head over to your Lex bot, select the “ResetPW” intent, then tick the box under Lambda initialisation and validation and select your Lambda function. A prompt will appear asking you to grant permission for your Lex bot to invoke the Lambda function. Select OK.

Let’s hit Build on the chatbot, and test it out.

So, we’ve managed to make the conversation a bit more human-like, and we can now detect invalid input. If you use the microphone to chat with your bot, you’ll notice the User ID value is read back as digits. That’s it for this blog. In the next blog, we’ll integrate Active Directory and actually reset a password and send it via SNS to a mobile phone.

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 1)

“What! Is this guy for real? Does he really think he can replace the front line of IT with pre-canned conversations?” I must admit, it’s a bold statement. The IT Service Desk has been around for years and has been the foot in the door for most of us in the IT industry. It’s the face of IT operations and plays an important role in ensuring an organisation’s staff can perform to the best of their abilities. But what if we could take some of the repetitive tasks the service desk performs and automate them? Not only would we be saving on head count costs, we would be able to ensure correct policies and procedures are followed to uphold security and compliance. The aim of this blog is to provide a working example of the automation of one service desk scenario to show how easy and accessible the technology is, and how it can be applied to various use cases.
To make it easier to follow along, I’ve broken this blog up into a number of parts. Part 1 will focus on the high-level architecture for the scenario and begin creating the Lex chatbot.

Scenario

Arguably, the most common service desk request is the password reset. While this is a pretty simple issue for the service desk to resolve, many service desk staff seem to skip over, or not realise, the importance of user verification. Both the simple resolution and the strict verification requirement make this a prime scenario to automate.

Architecture

So what does the architecture look like? The diagram below depicts the expected process flow. Let’s step through each item in the diagram.

 

Amazon Connect

The process begins when the user calls the service desk and requests to have their password reset. In our architecture, the service desk uses Amazon Connect which is a cloud based customer contact centre service, allowing you to create contact flows, manage agents, and track performance metrics. We’re also able to plug in an Amazon Lex chatbot to handle user requests and offload the call to a human if the chatbot is unable to understand the user’s intent.

Amazon Lex

After the user has stated their request to change their password, we need to begin the user verification process. Their intent is recognised by our Amazon Lex chatbot, which initiates the dialog for the user verification process to ensure they are who they really say they are.

AWS Lambda

After the user has provided verification information, AWS Lambda, which is a compute on demand service, is used to validate the user’s input and verify it against internal records. To do this, Lambda interrogates Active Directory to validate the user.

Amazon SNS

Once the user has been validated, their password is reset to a random string in Active Directory and the password is messaged to the user’s phone using Amazon’s Simple Notification Service. This completes the interaction.

Building our Chatbot

Before we get into the details, it’s worth mentioning that the aim of this blog is to convey the technology’s capability. There are many ways of enhancing the solution or improving validation of user input that I’ve skipped over, so while this isn’t a finished, production-ready product, it’s certainly a good foundation for building an automated call centre.
To begin, let’s start with building our chatbot in Amazon Lex. In the Amazon Console, navigate to Amazon Lex. You’ll notice it’s only available in Ireland and US East. As Amazon Connect and my Active Directory environment are also in US East, that’s the region I’ve chosen.
Go ahead and select Create Bot, then choose to create your own Custom Bot. I’ve named mine “UserAdministration”. Choose an Output voice and set the session timeout to 5 minutes. An IAM Role will automatically be created on your behalf to allow your bot to use Amazon Polly for speech. For COPPA, select No, then select Create.

Once the bot has been created, we need to identify the user action expected to be performed, which is known as an intent. A bot can have multiple intents, but for our scenario we’re only creating one: the password reset intent. Go ahead and select Create Intent, then in the Add Intent window, select Create new intent. My intent name is “ResetPW”. Select Add, which should add the intent to your bot. We now need to specify some sample utterances, which are phrases the user can use to trigger the Reset Password intent. There are quite a few that could be listed here, but I’m going to limit mine to the following:

  • I forgot my password
  • I need to reset my password
  • Can you please reset my password


The next section is the configuration for the Lambda validation function. Let’s skip past this for the time being and move onto the slots. Slots are used to collect information from the user. In our case, we need to collect verification information to ensure the user is who they say they are. The verification information collected is going to vary between environments. I’m looking to collect the following to verify against Active Directory:

  • User ID – In my case, this is a 6-digit employee number that is also the sAMAccountName in Active Directory
  • User’s birthday – This is a custom attribute in my Active Directory
  • Month started – This is a custom attribute in my Active Directory

In addition to this, it’s also worth collecting and verifying the user’s mobile number. This can be done by passing the caller ID information from Amazon Connect, however we’ll skip this, as the bulk of our testing will be text chat and we need to ensure we have a consistent experience.
To define a slot, we need to specify three items:

  • Name of the slot – Think of this as the variable name.
  • Slot type – The data type expected. This is used to train the machine learning model to recognise the value for the slot.
  • Prompt – How the user is prompted to provide the value sought.

Many slot types are provided by Amazon, two of which have been used in this scenario. For “MonthStarted”, I’ve decided to create my own custom slot type, as the built-in “AMAZON.Month” slot type wasn’t strictly enforcing recognisable months. To create your own slot type, press the plus symbol on the left-hand side of the page next to Slot Types, then provide a name and description for your slot type. Select Restrict to Slot values and Synonyms, then enter each month and its abbreviation. Once completed, click Add slot to intent.

Once the custom slot type has been configured, it’s time to set up the slots for the intent. The screenshot below shows the slots that have been configured and the expected order to prompt the user.

The last step (for this blog post) is to have the bot verify that the information collected is correct. Tick the Confirmation Prompt box and, in the Confirm text box provided, enter the following:
Just to confirm, your user ID is {UserID}, your Date of Birth is {DOB} and the month you started is {MonthStarted}. Is this correct?
For the Cancel text box, enter the following:
Sorry about that. Please call back and try again.

Be sure to leave the fulfillment set to Return parameters to client, and hit Save Intent.
Great! We’ve built the bare basics of our bot. It doesn’t really do much yet, but let’s take it for a spin anyway and get a feel for what to expect. In the top right-hand corner there’s a Build button. Go ahead and click it. This takes some time, as building a bot triggers the machine learning that creates the models for your bot. Once completed, the bot should be available for text or voice chat on the right side of the page. As you move through the prompts, you can see the slots at the bottom being populated in the expected format, e.g. 14th April 1983 is converted to 1983-04-14.

So at the moment, our bot doesn’t do much but collect the information we need. Admittedly, the conversation is a bit robotic as well. In the next few blogs, we’ll give the bot a bit more of a personality, we’ll do some input validation, and we’ll begin to integrate with Active Directory. Once we’ve got our bot working as expected, we’ll bolt on Amazon Connect to allow users to dial in and converse with our new bot.
