How far to take response group

I have been working on a Skype for Business (SfB) Enterprise Voice implementation project recently. The client was very keen to use the native Response Group Service to create a corporate IVR for their receptions. The requirement ended up needing 4 workflows, 19 queues and 2 groups, going well beyond the simple 2-level, 4-option IVR case. The whole implementation couldn't be completed in the GUI; Lync PowerShell was the only way to meet the requirement.

I drew the reception IVR workflow below:

RGS

The root-level menu has 7 options, with option 9 looping back to the main menu, and the sub-menus have up to 8 options each to help reception reduce their workload.

I like to start in the GUI to quickly set up the IVR framework with the first 4 options, and then use scripts to extend the options and manage the framework. Taking the "Reception Main Menu" as an example, I used the scripts below to add Option 5, Option 6 and Option 9.

##Create Option 5

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press5sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action5 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer5 = New-CsRgsAnswer -Action $Action5 -DtmfResponse 5 -VoiceResponseList "Option5"

$Question.AnswerList.Add($Answer5)

Set-CsRgsWorkflow -Instance $workflow

##Create Option 6

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press6sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action6 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer6 = New-CsRgsAnswer -Action $Action6 -DtmfResponse 6 -VoiceResponseList "Option6"

$Question.AnswerList.Add($Answer6)

Set-CsRgsWorkflow -Instance $workflow

##Create Option 9

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press9sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action9 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer9 = New-CsRgsAnswer -Action $Action9 -DtmfResponse 9 -VoiceResponseList "Option9"

$Question.AnswerList.Add($Answer9)

Set-CsRgsWorkflow -Instance $workflow
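
After updating the workflow, a quick way to double-check that the options landed correctly (a minimal sketch against the same workflow used above) is:

$Workflow = Get-CsRgsWorkflow -Name "Reception Main Menu"

$Workflow.DefaultAction.Question.AnswerList | Select-Object DtmfResponse, VoiceResponseList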

To manage the business hours of IVR workflows, I used the below scripts to reset/update the business hours:

##Business Hours update

$weekday = New-CsRgsTimeRange -Name "Weekday Hours" -OpenTime 08:30:00 -CloseTime 17:30:00

$x = Get-CsRgsHoursOfBusiness -Identity "service:ApplicationServer:nmlpoolaus01.company.com.au" -Name "Reception Main Menu_434d7c29-9893-4946-afcf-3bb9ac7aad8a"

$x.MondayHours1 = $weekday

$x.TuesdayHours1 = $weekday

$x.WednesdayHours1 = $weekday

$x.ThursdayHours1 = $weekday

$x.FridayHours1 = $weekday

Set-CsRgsHoursOfBusiness -Instance $x

$x

To manage the greeting/announcement of IVR workflows, I used the below scripts to reset/update the IVR greeting:

##greeting/announcement update

$Workflow = Get-CsRgsWorkflow -Name "Reception Main Menu"

$audioFile = Import-CsRgsAudioFile -Identity "service:ApplicationServer:nmlpoolaus01.company.com.au" -FileName "Greeting reception.wma" -Content (Get-Content "C:\temp\Greeting Reception.wma" -Encoding Byte -ReadCount 0)

$prompt = New-CsRgsPrompt -AudioFilePrompt $audioFile

$workflow.DefaultAction.Question.Prompt = $prompt

$workflow.DefaultAction.Question

Set-CsRgsWorkflow -Instance $workflow

The native Lync/SfB Response Group Service is a basic IVR platform that covers most simple cases and can even go as far as a multi-level, multi-option IVR with text-to-speech and speech recognition (the Interactive workflow type). That's not too shabby at all!
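
For example, if you would rather generate a prompt from text than record an audio file, a minimal sketch against the same workflow looks like this (the prompt wording here is just an illustration):

$Workflow = Get-CsRgsWorkflow -Name "Reception Main Menu"

$prompt = New-CsRgsPrompt -TextToSpeechPrompt "Welcome to reception. Please listen carefully to the following options."

$Workflow.DefaultAction.Question.Prompt = $prompt

Set-CsRgsWorkflow -Instance $Workflow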

Hopefully my scripts can help you to extend your Lync IVR RGS workflow. 😊

Xamarin Forms: Microsoft.EntityFrameworkCore.Sqlite issue with physical devices

Introduction

Building Xamarin Forms apps using .NET Standard 2.0 is still fairly new to the industry, and we are just starting to learn how differently Xamarin needs to be configured to get it working compared to PCL-based projects.

I was building a Xamarin Forms app using Microsoft's Entity Framework Core SQLite provider to store the app's data. Entity Framework Core with SQLite is an obvious choice when building an app on .NET Standard 2.0.

Simulator

The app works well on pretty much all simulators without any issue; all read/write operations work fine.

Issue – Physical Device

The app crashes on a physical device when it tries to read or write data from the SQLite database.

Error

System.TypeInitializationException: The type initializer for 'Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions' threw an exception. ---> System.InvalidOperationException: Sequence contains no matching element

Resolution

Change the linker behavior to "Don't Link" (the Linker behavior setting under the iOS Build options, or the equivalent "None" setting under Android Options). The linker appears to strip members that Entity Framework Core resolves via reflection in linked device builds, which is the likely cause of the type-initializer exception; disabling linking keeps those members in place.

Xamarin Forms using .NET Standard 2.0

Introduction

All Xamarin developers, please welcome .NET Standard 2.0. This is the kind of class library we have been waiting for all these years. The .NET Standard 2.0 specification is now complete: it is implemented by .NET Core 2.0, .NET Framework 4.6.1 and later versions, and it can be used with Visual Studio 15.3 and up. .NET Standard 2.0 supports C#, and also F# and Visual Basic.

More APIs

.NET Standard 2.0 is for sharing code across various platforms. It includes the common APIs that all .NET implementations provide, unifying the .NET frameworks to avoid further fragmentation in the future. There are more than 32,000 APIs in .NET Standard 2.0, most of which are already available in the .NET Framework. Microsoft has made it easy to port existing code to .NET Standard 2.0, and it is now easy to move from .NET Standard to .NET Core 2.0 or any versions that come in the future.

NuGet Support

Most NuGet packages currently target the .NET Framework, but not all of them are compatible with a move to .NET Standard 2.0, so a compatibility mode has been added to support them. Even with compatibility mode, only up to around 70% of packages are supported.

Frameworks and Libraries

Below is a table listing all the supported frameworks and their minimum versions. Click here for more details.

.NET Standard              | 1.0  | 1.1  | 1.2   | 1.3  | 1.4   | 1.5        | 1.6        | 2.0
.NET Core                  | 1.0  | 1.0  | 1.0   | 1.0  | 1.0   | 1.0        | 1.0        | 2.0
.NET Framework             | 4.5  | 4.5  | 4.5.1 | 4.6  | 4.6.1 | 4.6.1      | 4.6.2      | 4.6.1
Mono                       | 4.6  | 4.6  | 4.6   | 4.6  | 4.6   | 4.6        | 4.6        | 5.4
Xamarin.iOS                | 10.0 | 10.0 | 10.0  | 10.0 | 10.0  | 10.0       | 10.0       | 10.14
Xamarin.Mac                | 3.0  | 3.0  | 3.0   | 3.0  | 3.0   | 3.0        | 3.0        | 3.8
Xamarin.Android            | 7.0  | 7.0  | 7.0   | 7.0  | 7.0   | 7.0        | 7.0        | 8.0
Universal Windows Platform | 10.0 | 10.0 | 10.0  | 10.0 | 10.0  | 10.0.16299 | 10.0.16299 | 10.0.16299
Windows                    | 8.0  | 8.0  | 8.1   | -    | -     | -          | -          | -
Windows Phone              | 8.1  | 8.1  | 8.1   | -    | -     | -          | -          | -
Windows Phone Silverlight  | 8.0  | -    | -     | -    | -     | -          | -          | -

(The original .NET Framework row also listed 4.6.1/vNext alternatives for .NET Standard 1.5 and 1.6; with the .NET Core 2.0-era tooling, .NET Framework 4.6.1 supports .NET Standard 1.5, 1.6 and 2.0.)

Sample to convert a PCL or Shared project to .NET Standard 2.0

  1. Create a default PCL or Shared based Xamarin Forms application, name it appropriately and wait for the solution to load.
  2. Add a .NET Standard class library, selecting .NET Standard 2.0 as the target. The solution should now contain the new NetStandard20Test library alongside the platform projects.
  3. Remove the PCL or Shared project (VERY important: only after moving all the required project files to the NetStandard20Test library) and compile.
  4. Rename NetStandard20Test to NetStandardTest (the same name as the deleted library), making sure to also rename the default namespace and assembly to NetStandardTest.
  5. Build the project to check whether the build succeeds.
  6. The build should fail with errors caused by the deleted project; we now have to reference the newly created .NET Standard 2.0 library from both the Android and iOS projects.
  7. Edit the references on each platform project to add the newly created project.
  8. Once the references are applied correctly, you will still get some errors, which are resolved in the next step.
  9. Add the Xamarin.Forms NuGet package to all projects.
  10. Build the project again; you should not see any errors.
  11. Microsoft has also released a compatibility NuGet package that ensures existing packages remain compatible with .NET Standard 2.0.
  12. Add the NuGet package Microsoft.NETCore.Portable.Compatibility to the .NET Standard 2.0 project.

Hope this blog is useful to you.

 

Seamless Multi-identity Browsing for Cloud Consultants

If you’re a technical consultant working with cloud services like Office 365 or Azure on behalf of various clients, you have to deal with many different logins and passwords for the same URLs. This is painful, as your default browser instance doesn’t handle multiple accounts and you generally have to resort to InPrivate (IE) or Incognito (Chrome) modes which mean a lot of copying and pasting of usernames and passwords to do your job. If this is how you operate today: stop. There is an easier way.

Two tools for seamless logins

OK, the first one is technically a feature. The most important part of removing the login bottleneck is Chrome Profiles. This essential feature of Chrome lets you maintain completely separate profiles for Chrome, including saved passwords, browser cache, bookmarks, plugins, etc. Fantastic.

Set one up for each customer you have a dedicated account for. Once you've logged in the first time, the credentials will be cached and you'll be able to pass through seamlessly.

This is obviously a great improvement, but only half of the puzzle. It’s when Profiles are combined with another tool that the magic happens…

SlickRun your Chrome sessions

If you haven’t heard of the venerable SlickRun (which must be pushing 20 years if it’s a day) – download it right now. This gives you the godlike power of being able to launch any application or browse to any Url nearly instantaneously. Just hit ALT-Q and input the “magic word” (which autocompletes nicely) that corresponds to the command you want to execute and Bob’s your Mother’s Brother! I tend to hide the SlickRun prompt by default, so it only shows up when I use the global ALT-Q hotkey.

First we have to set up our magic word. If you simply put a URL into the 'Filename or URL' box, SlickRun will open it using your default browser. We don't want that. Instead, put 'chrome.exe' in the box and use the '--profile-directory' command-line switch to target the profile you want, followed by the URL to browse to.

N.B. You don’t seem to be able to reference the profiles by name. Instead you have to put “Profile n” (where n is the number of the profile in the order you created it).
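
As a concrete illustration, the full command that ends up behind a magic word looks like this (the profile number and target URL are just examples to adapt to your own setup):

chrome.exe --profile-directory="Profile 2" https://portal.azure.com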

SlickRun-MagicWord

That's all there is to it. Once you've set up your magic words for the key web apps you need to access for each client (I go with a naming convention of 'clientappname' and extend that further if I have multiple test accounts I need to log in as, etc.), you can get to any of them in seconds, usually as seamlessly as single sign-on would provide.

This is hands-down my favourite productivity trick and yet I've never seen anyone else do it, or seen a better solution to the multiple-logins problem. Hence this post! Hope you find it as awesome a shortcut as I do…

Till next time!

HoloLens – Continuous Integration

Continuous integration is best defined as the process of constantly merging development artifacts produced or modified by different members of a team into a central shared repository. This task of collating changes becomes more and more complex as the size of the team grows. Ensuring the stability of the central repository becomes a serious challenge in such cases.

A solution to this problem is to validate every merge with automated builds and automated testing. Modern code management platforms like Visual Studio Team Services (VSTS) offer built-in tools to perform these operations. Visual Studio Team Services (VSTS) is a hosted service offering from Microsoft which bundles a collection of DevOps services for application developers.

The requirement for a Continuous Integration workflow is important for HoloLens applications considering the agility of the development process. In a typical engagement, designers and developers will work on parallel streams sharing scripts and artifacts which constitute a scene. Having an automated process in place to validate every check-in to the central repository can add tremendous value to the quality of the application. In this blog, we will walk through the process of setting up a Continuous Integration workflow for a HoloLens application using VSTS build and release tools.

Build pipeline

A HoloLens application will have multiple application layers. The development starts with creating the game world using Unity and then proceeds to wiring up backend scripts and services using Visual Studio. To build a HoloLens application package, we need to first build the front-end game world with the Unity compiler and then, the back-end with the visual studio compiler. The following diagram illustrates the build pipeline:

pipeline

In the following sections, we will walk through the process of setting up the infrastructure for building a HoloLens application package using VSTS.

Build agent setup

VSTS uses build agents to perform the task of compiling the application in the central repository. These build agents can either be Microsoft-hosted agents, which are available as a service in VSTS, or custom-deployed agents managed by you. A HoloLens application requires custom build agents, as they need to run custom build tasks to compile the Unity application. Following are the steps for creating a build agent to run the tasks required for building a HoloLens application:

1.      Provision hosting environment for the build agent

The first step in this process is to provision a machine to run the build agent as a service. I'd recommend using an Azure Virtual Machine hosted within an Azure DevTest Lab for this purpose. DevTest Labs comes with built-in features for managing start-up and shut-down schedules for the virtual machines, which are very effective in controlling consumption costs. Following are the steps for setting up the host environment for the build agent in Azure.

  1. Log in to the Azure portal and create a new instance of DevTest Labs.
  2. Add a virtual machine to the lab.
  3. Pick an image with Visual Studio 2017 pre-installed.
  4. Choose hardware with a high number of CPUs and IOPS, as the agents are heavy on disk and compute. I'd advise a D8S_V3 machine for a team of approximately 15 developers.
  5. Select the PowerShell artifacts to be added to the virtual machine.
  6. Provision the virtual machine and remote desktop into it.

2.      Create authorization token

The build agent requires an authorized channel to communicate with the build server, which in our case is the VSTS service. Following are the steps to generate a token:

  1. On the VSTS portal, navigate to the security screen using the profile menu.
  2. Create a personal access token (PAT) for the agent to authorize to the server. Ensure that you have selected 'Agent pools (read, manage)' in the authorized scopes.
  3. Note the generated token. This will be used to configure the agent on the build host virtual machine.

3.      Installing and configuring the agent

Once the token is generated, we are ready to configure the VSTS agent. Following are the steps:

  1. Remote desktop into the build host virtual machine on Azure.
  2. Open the VSTS portal in a browser and navigate to the 'Agent Queues' screen. (https://.visualstudio.com/Utopia/_admin/_AgentQueue)
  3. Click on the 'Download Agent' button.
  4. Click on the 'Download' button to download the installer onto the disk of your VM. Choose the default download location.
  5. Follow the steps listed in the download dialog to configure the agent using PowerShell commands; a sketch of the configuration command is shown after this list. Detailed instructions can be found at the link below:

https://docs.microsoft.com/en-au/vsts/build-release/actions/agents/v2-windows

  6. Once configured, the agent should appear in the agent list within the selected pool.
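
For reference, the unattended agent configuration from step 5, run from an elevated PowerShell prompt in the extracted agent folder, looks roughly like the sketch below (the account URL, pool, agent name and PAT are placeholders to replace with your own values):

cd C:\agent
.\config.cmd --unattended --url https://youraccount.visualstudio.com --auth pat --token <personal-access-token> --pool Default --agent HoloLensBuildAgent01 --runAsService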

This completes the build environment setup. We can now configure a build definition for our HoloLens application.

Build definition

Creating the build definition involves queuing up a sequence of activities to be performed during a build. In our case, this includes the following steps.

  • Performing Unity build
  • Restoring NuGet packages
  • Performing Visual Studio build

Following are the steps to be performed:

  1. Log in to the VSTS portal and navigate to the marketplace.
  2. Search for the 'HoloLens Unity Build' component and install it. Make sure you select the right VSTS project while installing the component.
  3. Navigate to Builds on the VSTS portal and click on the 'New' button under 'Build Definitions'.
  4. Select an empty template.
  5. Add the following build tasks:
    1. HoloLens Unity Build
    2. NuGet
    3. Visual Studio Build

tasks

  6. Select the Unity project folder to configure the Unity build task.
  7. Configure the NuGet task to restore the packages.
  8. Configure the Visual Studio build task by selecting the solution path, platform, and configuration.
  9. Navigate to the 'Triggers' tab and enable the build to be triggered for every check-in.

You should now see a build being fired for every merge into the repository. The whole build process for an empty HoloLens application can take anywhere between four and six minutes on average.

To summarise, in this blog, we learned about the build pipeline for a HoloLens application. We also explored the build agent set up and build definition required to enable continuous integration for a HoloLens application.

Azure Log Analytics and Power BI Desktop for Advanced SharePoint Reporting

In a previous blog post we explored some of the basics around integration of OMS and Power BI to report on user activity. In this blog post we’ll look at this subject in more detail and show what can be achieved with Power BI Desktop, especially with the updates now available in Azure Log Analytics as part of the Operations Management Suite (OMS).

Power BI presents a wealth of data visualisation capability, primarily in two flavours: the online version, which is geared toward sharing and collaboration (accessed at https://app.powerbi.com/), and Power BI Desktop, which is more of a high-powered data import and modelling tool, though both can be used to create visuals.

So, why are we looking at Power BI Desktop? Well, while the online flavour has some definite advantages, such as easy sharing and dashboards as well as the flexibility of being accessible to anyone with a browser, there are a few things I want to do with my data that I need the Desktop version for: specifically, merging multiple queries to create some interesting insights into Office 365 activity.

Our Requirement: Show user activity in SharePoint Online and correlate and filter the data based on the users’ business unit and location.

Now, while the Office 365 Monitoring Solution in OMS provides detailed logging of user activity, aggregating this data to show interesting things like "how many users in each state are active" or "how often are users in the Finance business unit sharing documents externally" is a little difficult. The reason for this is that the Office 365 Activity API logs the user's User Principal Name (UPN), e.g. daisy.smith@contoso.com, but no other identifying information. We want to track internal users, as well as those from outside the organisation who have been invited to collaborate.

For those external users, we want to see what groups they are part of and what they are doing in our tenant.

So, if we want to report on activity that is filtered just for an individual business unit or location we need to gather data from the user’s identity source and merge that with the OMS logs.

As we see from the example below, we can match Office 365 logs using the UPN with the user’s Azure AD account, and with that we can enrich our report with all the attributes of that user. And as we’re dealing with two separate datasets, we’re going to use Power BI Desktop and not the online version.

So, we have a few steps here to do:

  1. Get the OMS log query data into Power BI Desktop
  2. From Power BI Desktop, query the user’s account to gain attributes
  3. Create relationships between these datasets in Power BI, based on UPN
  4. Visualise the data
  5. Publish to Power BI Online for consumption

An overview of the moving parts:

‘The Overview’

Step 1 – get the Azure Log Analytics log query data into Power BI Desktop

Microsoft recently rolled out upgrades for Azure Log Analytics workspaces, and the new iteration integrates quite nicely with Power BI Desktop by exposing a REST API: api.loganalytics.io.

Using Power BI Desktop we can simply drop a query directly into PBI and have it pull directly from your Azure Log Analytics workspace.

Firstly, let's create a query to get some interesting data from Office 365. Let's say we want to find out what external users are up to, so here's a simple query to pull in all logs from external users. Note this is a fairly simple one; in your environment you may want to be more precise and/or aggregate the results into a smaller data set:

OfficeActivity | where UserId contains_cs "#ext#"

Now it's just a couple of simple steps to get this query into Power BI. Click on the little Power BI button above where we enter the query; this will allow us to download the configuration for an M-language query we can use directly in Power BI Desktop.

Azure Log Analytics – Power BI Export

The instructions are included in this file, so I’ll just repeat them here:

The exported Power Query Formula Language (M Language) can be used with Power Query in Excel and Power BI Desktop.

For Power BI Desktop follow the instructions below:

1) Download Power BI Desktop from https://powerbi.microsoft.com/desktop/

2) In Power BI Desktop select: ‘Get Data’ -> ‘Blank Query’->’Advanced Query Editor’

3) Paste the M Language script into the Advanced Query Editor and select ‘Done’

2.    From Power BI Desktop, query the user’s account to gain attributes

To get our user information from Azure Active Directory, we're going to query the Microsoft Graph. You can enter the query into the Advanced Query Editor in the same way as above:

Power BI Advanced Editor

Note: you'll need to authenticate with a Global Admin account for this to work.

3.    Create relationships between these datasets in Power BI, based on UPN

In the home screen of Power BI desktop, select “Manage Relationships”

Select "New", then choose your new Azure Log Analytics query and the Azure AD user query, and match on UserId and UserPrincipalName from the respective queries.

4.    Visualise the data

There's no strict formula here: think about what questions to ask, play around a bit to find ways to show meaningful data, and use visual and page-level filters.

Power BI Desktop

5.    Publish to Power BI Online for consumption

When you're happy with your report, click on the "Publish" button in Power BI Desktop, and select an appropriate workspace in Power BI Web to publish the report to. Once it's in Power BI Web, you can then create your dashboard.

To add pages of the report to the dashboard, open the page you want and use the "pin live page" option at the top of the screen. Add the pages of the report you want to the same dashboard. This is the one I built reasonably quickly from our demo tenant:

Power BI Dashboard

Good luck, and happy reporting!

Azure AD Identity and Access Management & Features

I've been using Azure AD Identity for quite a while now. I thought it would be good to share a summary of the Azure AD Identity features and gather some feedback.

Azure AD Identity

Azure Active Directory: A comprehensive identity and access management cloud solution for your employees, partners, and customers. It combines directory services, advanced identity governance, application access management, and a rich standards-based platform for developers.

Identity and access management license options: Azure Active Directory Premium P2 (E5) or P1 (E3)

“Identity as the Foundation of Enterprise Mobility”

Identity and access management

Protect at the front door: innovative and advanced risk-based conditional access, protect your data against user mistakes, and detect attacks before they cause damage.

Identity and access management in the cloud:

  • 1000s of apps, 1 identity: Provide one persona to the workforce for SSO to 1000s of cloud and on-premises apps.
  • Enable business without borders: Stay productive with universal access to every app and collaboration capability.
  • Manage access at scale: Manage identities and access at scale in the cloud and on-premises, with advanced user lifecycle management and identity monitoring tools
  • Cloud-powered protection: Ensure user and admin accountability with better security and governance

Azure AD portal:

In the Azure AD portal you can:

  • Configure users & groups
  • Configure SaaS application identities (SSO)
  • Configure on-premises applications with Application Proxy
  • Manage licenses
  • Configure password reset, password reset notifications and password reset authentication methods
  • Apply company branding
  • Control whether users can register/consent to applications
  • Control whether users can invite external contacts, and whether guests can invite external contacts
  • Control whether users can register devices with Azure AD
  • Control whether MFA is required
  • Define whether to use pass-through authentication or federated authentication

Azure AD application integration:

3 types of application integration:

  • LOB applications: using Azure AD for authentication
  • SaaS applications: configuring SSO
  • Azure AD Application Proxy: publishing on-premises applications to the internet through the Azure AD Application Proxy

Inbound/outbound user provisioning to SaaS apps

User experience with integrated apps: the Access Panel at https://myapps.microsoft.com. Custom branding can be loaded by appending your organisation's domain: https://myapps.microsoft.com/company_domain_name. From My Apps, users can change their password, edit password reset settings, manage MFA, view account details, view and launch apps, and self-manage groups. Admins can configure apps to be self-service, so users can add apps by themselves.

Authentication (Front End & Back End) & Reporting (reporting access & alerts, reporting API, MFA)

Front End Authentication

Pass-through authentication:

  • Traffic to the backend app is NOT authenticated in Azure AD
  • Useful for NDES, CRLs, etc.
  • Still has the benefit of not exposing backend apps to HTTP-based attacks

Pre-authentication:

  • Users must authenticate to Azure AD to access the backend app
  • Allows the ability to plug into the Azure AD control plane
  • Can also be extended to provide true SSO to the backend app

Back End Authentication

Pass-through authentication:

  • Does not try to authenticate to the backend
  • Useful with forms-based applications
  • Auth headers are returned to the client
  • Can be used with front-end pre-authentication

Kerberos/IWA:

  • Must use pre-authentication on the front end
  • Allows for an SSO experience from Azure AD to the app
  • Support for SPNego (i.e. non-AD Kerberos)

 

Azure AD Connect health

Monitor & Report on ADFS, AAD Sync, ADDS. Advanced logs for configuration troubleshooting.

Azure Identity protection (Azure AD premium P2)

  • AIP dashboard is a consolidated view to examine suspicious user activities and configuration vulnerabilities
  • Remediation recommendations
  • Risk Severity calculation
  • Risk-based policies to protect against future threats

If a user is at risk, we can either block the user or trigger MFA automatically.

AIP can help to identify spoofing attacks, leaked credentials, suspicious sign-in activities, infected devices and configuration vulnerabilities. For example, when a user signs in from an unfamiliar location, we can trigger a password reset, use the user risk condition to allow access to corporate resources only after a password change, or block access straight away. Alternatively, we can configure the alert to send an approval request to an admin.

Identity protection risk types and reports generated:

Azure AD privileged Identity Management

For example, say I am on leave for 2 days and I want a colleague to become Global Admin for only those two days. If I come back from leave and forget to remove the Global Admin permissions from that colleague, he will still be a Global Admin, which puts the company at risk because that Global Admin account could potentially be compromised.

Just-in-time administrative access: we can use this to grant "Global Admin" access for only those 2 days.

Securing privileged access: just-in-time administration

  • Assume breach of existing AD forests may have occurred
  • Provide privileged access through a workflow
  • Access is limited in time and audited
  • Administrative account not used when reading mail/etc.

Result = limited in time & capability

Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement: integrating some Office 365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export methodology; however, through trial & error (more error than trial if I'm being honest), and realising that this method still required some manual interaction, I decided to embark on finding a fully automated solution.

The final solution comprises the below components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure Automation account)

I'm not going to run through the creation of the automation account or the required credentials, as these had already been created; however, there is a great guide to configuring the solution I have used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

What the PowerShell script we are using will achieve is the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear existing PS sessions
Get-PSSession | Remove-PSSession | Out-Null

#Split function to break the mailbox array into smaller parts
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.Count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.Count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = ($i - 1) * $PartSize
        $end = ($i * $PartSize) - 1
        if ($end -ge $inArray.Count) { $end = $inArray.Count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return ,$outArray
}

function Connect-ExchangeOnline {
    param(
        $Creds
    )
    #Connect to Exchange Online
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}

#Create variables
$SplunkHost = "Your Splunk hostname or IP address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk token from the HTTP Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'

#Connect to Azure using the Run As account
Add-AzureRmAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantId -ApplicationId $servicePrincipalConnection.ApplicationId -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials

#Collect the mailbox data
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Get the current date & time
$time = Get-Date -Format s

#Convert the timestamp to Australia/Brisbane time
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')

#Add a Time column to the output
$mailboxes = $mailboxes | Select-Object @{Expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Split the mailbox data into parts for faster processing
$recipients = Split-Array -inArray $mailboxes -parts 5

#Create JSON objects and HTTP POST them to the Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #SSL validation bypass for a self-signed certificate in testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Build the JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #Post to the Splunk HTTP Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Header $header
    }
}
Get-PSSession | Remove-PSSession | Out-Null

 

The final output that can be seen in Splunk looks like the following:

11/13/17
12:28:22.000 PM
{
AddressBookPolicy:
DisplayName: Shane Fisher
ForwardingSmtpAddress:
GrantSendOnBehalfTo:
IsMailboxEnabled: True
PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

Resolving an issue accessing an app published with Barracuda WAF over Azure Express Route

Recently, one of our customers reported that they couldn't access any of the UAT apps from their Melbourne office, although everything worked fine from other offices. When they tried to access the UAT app domains, they got the error below: "The request service is temporarily unavailable. It is either overloaded or under maintenance. Please try later."

WAF error

Due to the UAT environment IP restrictions on the WAF, it was normal behaviour for me to get that error message, since our Kloud office's public IPs are not in the WAFs' whitelist. This confirmed that the web traffic did hit the WAFs. Pinging the URL hostname returned the correct IP with no DNS problems, which meant the web traffic was going to the correct WAF farm, considering the customer has a couple of other WAF farms in other countries. So we could focus the troubleshooting on the AU WAFs.

I pulled out all the WAF access logs and planned to go through them to verify whether the web traffic was hitting the AU WAFs or going somewhere else. I did a log search based on the public IPs provided by the customer: no results were returned for the last 7 days.

Search Result 1

Interesting. Did it mean no traffic from the Melbourne office came in? I did another search based on my own public IPs, and it clearly returned a couple of access logs related to my testing: correct time, correct client IP, correct WAF domain hostname, method GET, status 503, which is correct because my office IP is restricted.

Search Result 2

Since the customer mentioned that all the other offices had no problem accessing the UAT app environment, I asked them to provide me with a public IP from another office. We tested again and verified that people in the India office could successfully open the web app, and I could see their web traffic appear in the WAF logs as well. I believed that when Melbourne staff tried to browse the web app, the traffic should go to the same WAF farm, because the DNS hostname resolved to the same IP whether in Melbourne or in India.

The question was: what exactly happened, and what was the root cause? :/

In order to capture another good example, I noted down the time and asked the customer to browse the website again. This time I did an access log search based on the time instead of the Melbourne public IPs. A couple of results were returned, with some unknown IPs.

Search result 3

I googled the unknown IPs and it turned out they were Microsoft Australian data centre IPs. Now I suspected there was a routing or NAT issue in the customer's network. I contacted the customer and provided the unknown IPs; the customer did a bit of investigation and advised that those unknown IPs were the public IPs of their Azure Express Route interfaces. It all made sense now: because the customer hadn't whitelisted their new Azure public IPs, when web traffic came from those unknown source IPs (the Azure public IPs), the WAF didn't recognise them and blocked them, just like mine. Once I added the new Azure IPs into the app's whitelist, all the access issues were resolved.

Resolving “User not found” issue while assigning permissions using SharePoint CSOM

I was recently working on a SharePoint Online project where we were trying to automate library creation and provide required permissions on those libraries. We had an issue while modifying permissions with CSOM code on SharePoint libraries when the Created By user had left the company.

In this post I will outline the cause and the resolution as there was no online reference for resolving this error.

Issue: The CSOM code was throwing a "User not found" error even when creating a User object via the web.EnsureUser() method.

Cause: The User object returned by the web.EnsureUser() method was empty but not null, and hence couldn't be used when adding it to the permissions after breaking role inheritance.

Resolution: The resolution to this issue was to explicitly load the user object, catch the exception thrown while loading, and set a flag to false which could later be checked to prevent the add method from erroring out. Yes, this is a roundabout way of overcoming the issue, but it works. Hopefully it will save you some hours.

Below is the code that could be used to do that.