Use Azure Health to track active incidents in your Subscriptions

siliconvalve

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The below graphs are a little old, but you can see a clear spike on the left of the graphs, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue…

View original post 268 more words

Azure AD Domain Services

I recently had what I thought was a rather unique requirement from a customer.

The requirement was to build Azure IaaS virtual machines and have them joined to a managed domain, while also being able to authenticate to the virtual machines using Azure AD credentials.

The answer is Azure AD Domain Services!

Azure AD Domain Services provides managed domain services such as domain join, group policy and Kerberos/NTLM authentication without the need for you to deploy and  manage domain controllers in the cloud. For more information see https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-overview

It is not without its limitations though; the main things to call out are that configuring domain trusts and applying schema extensions is not possible with Azure AD Domain Services. For a full list of limitations see: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-comparison

Unfortunately at this point in time you cannot use ARM templates to configure Azure AD Domain Services, so you are limited to the Azure Portal or PowerShell. I am not going to bore you with the details of the deployment steps, as they are quite simple and you can easily follow the steps supplied in the Microsoft documentation: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-enable-using-powershell
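
For a feel of the PowerShell route, here is a minimal sketch that registers the resource provider and creates the managed domain as a generic resource. The resource type, API version and property names are assumptions based on the preview documentation, and all resource names are hypothetical, so verify against the doc linked above before using:

#Sketch only - resource type, API version and property names are assumptions from the preview docs
Login-AzureRmAccount

#Register the resource provider that hosts Azure AD Domain Services
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.AAD

#Create the managed domain in an existing resource group, attached to an existing VNet subnet
New-AzureRmResource -ResourceGroupName "MyResourceGroup" -ResourceName "contoso.com" -ResourceType "Microsoft.AAD/DomainServices" -Location "West US" -ApiVersion "2017-06-01" -Properties @{ domainName = "contoso.com"; subnetId = "/subscriptions/<subscriptionId>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet/subnets/AaddsSubnet" } -Force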

What I would like to do is point out the following learnings that I discovered during my deployment.

  1. In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
  2. If there are any cloud-only user accounts, all users who need to use Azure AD Domain Services must change their passwords after Azure AD Domain Services is provisioned. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
  3. Once a cloud-only user account has changed its password, you will need to wait a minimum of 20 minutes before you will be able to use Azure AD Domain Services (this got me as I was impatient).
  4. Speaking of patience, the provisioning process for Azure AD Domain Services takes about an hour.
  5. Have a dedicated subnet for Azure AD Domain Services to avoid any connectivity issues that may occur with NSGs/firewalls.
  6. You can only have one managed domain connected to your Azure Active Directory.

That’s it, hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.

Understanding Azure’s Container PaaS Capabilities

siliconvalve

If you’ve been using Azure over the past twelve months, you can’t help but have the feeling that it’s become a bit like this…

Containers... Containers Everywhere

.. and you’d be right.

To be fair, though, Containers have been one of the hot topics in computing in general and certainly one that’s been getting the most interest in my recent Azure Open Source Roadshows.

One thing that has struck me though is that people are not clear on the purpose of all the services in Azure that have ‘Containers’ listed as a capability, so in this post I am going to try and review the Azure Platform-as-a-Service offerings that have Container capabilities and cover what the services can be used for.

First, before we begin, let’s quickly get some fundamentals under our belts.

What is a Container?

Containers provide encapsulation and isolation for workloads and remove the need for a complete Operating System image…

View original post 1,698 more words

Azure Application Security Groups

Azure Application Security Groups (ASG) are a new feature, currently in Preview, that allows for configuring network security using an application-centric approach within Network Security Groups (NSG). This approach allows for the grouping of Virtual Machines logically, irrespective of their IP address or subnet assignment within a VNet.

They work by assigning the network interfaces of virtual machines as members of the ASG. ASGs are then used within NSGs as either a source or destination of a rule, and this provides additional options and flexibility for controlling network flows of resources within a subnet.

The following requirements apply to the creation and use of ASGs:

  • All network interfaces used in an ASG must be within the same VNet
  • If ASGs are used in the source and destination, they must be within the same VNet

The following scenario demonstrates a use case where ASGs may be useful. In the below diagram, there are 2 sets of VMs within a single subnet. The blue set of VMs require outbound connectivity on TCP port 443, while the green set of VMs require outbound connectivity on TCP port 1433.

As each VM is within the same subnet, to achieve this with traditional NSG rules would require that each IP address be added to a relevant rule that allows the required connectivity. For example:


NSG1

As virtual machines are added, removed or updated, the management overhead required to maintain the NSG may become quite considerable. This is where ASGs come into play to simplify NSG rule creation and the continued maintenance of the rule. Instead of defining IP prefixes, you create an ASG and use it within the NSG rule. The Azure platform takes care of the rest by determining the IPs that are covered within the ASG.

As network interfaces of VMs are added to the ASG, the effective network security rules are applied without the need to update the NSG rule itself.


NSG2

The following steps will demonstrate this process using 2 virtual machines.

Enable Preview Feature

ASGs are currently in preview and the feature must be enabled. At present these are only available within US West Central.

Check the status of the registration, and wait for the RegistrationState to change to Registered.
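
The registration and status check can be done with the AzureRM PowerShell module. A minimal sketch, assuming the preview feature is named AllowApplicationSecurityGroups (check the preview documentation for the current name):

#Register the ASG preview feature (feature name assumed from the preview docs)
Register-AzureRmProviderFeature -FeatureName AllowApplicationSecurityGroups -ProviderNamespace Microsoft.Network

#Check the status of the registration
Get-AzureRmProviderFeature -FeatureName AllowApplicationSecurityGroups -ProviderNamespace Microsoft.Network

#Once RegistrationState shows Registered, re-register the provider so the feature propagates
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network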


Create Application Security Groups

We will create 2 application security groups

  • WebAsg
  • SqlAsg
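
A minimal sketch of creating these with the AzureRM preview cmdlets (the resource group name is hypothetical; West Central US is used as the preview is only available there):

#Create the two ASGs in the resource group that holds the VMs
$webAsg = New-AzureRmApplicationSecurityGroup -ResourceGroupName "MyResourceGroup" -Name "WebAsg" -Location "westcentralus"
$sqlAsg = New-AzureRmApplicationSecurityGroup -ResourceGroupName "MyResourceGroup" -Name "SqlAsg" -Location "westcentralus"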

Create security rules

In this example, we create rules that use the source as the application security group created in the previous step.
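
A sketch of what those rules might look like, assuming the -SourceApplicationSecurityGroup parameter from the preview cmdlets:

#Allow the web VMs outbound on TCP 443, using the ASG as the source
$webRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-Https-Outbound" -Access Allow -Direction Outbound -Priority 100 -Protocol Tcp -SourceApplicationSecurityGroup $webAsg -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange 443

#Allow the SQL VMs outbound on TCP 1433
$sqlRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-Sql-Outbound" -Access Allow -Direction Outbound -Priority 110 -Protocol Tcp -SourceApplicationSecurityGroup $sqlAsg -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange 1433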

Create Network Security Group

Now that the ASGs are created and the relevant rules scoped to use the ASG as the source, we can create an NSG that uses these rules.
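
Continuing the sketch:

#Create the NSG containing the ASG-scoped rules
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName "MyResourceGroup" -Name "MyNsg" -Location "westcentralus" -SecurityRules $webRule, $sqlRule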

You can verify the rule from PowerShell, using Get-AzureRmNetworkSecurityGroup, and view the SecurityRules section. In there we can see that the reference to the ASG exists in SourceApplicationSecurityGroups:

Assign the NSG to a subnet:
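
For example (the VNet and subnet names are hypothetical, and the subnet’s existing address prefix must be re-stated when updating its config):

#Attach the NSG to the subnet that contains the VMs
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyVnet"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "MySubnet" -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet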

Add network interfaces to ASG

The final step is to add the network interfaces of the VMs to the Application Security Group. The following example updates existing network interfaces to belong to the application security group. As network interfaces are added and removed the traffic flows will be controlled by the security rules applied to the NSG through the use of the ASGs, without further requirement to update the NSG.
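
A sketch for one NIC (names hypothetical; note the ASG membership sits on the NIC’s IP configuration):

#Add an existing network interface to the WebAsg application security group
$nic = Get-AzureRmNetworkInterface -ResourceGroupName "MyResourceGroup" -Name "WebVm1-Nic"
$nic.IpConfigurations[0].ApplicationSecurityGroups = @($webAsg)
$nic | Set-AzureRmNetworkInterface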

You can verify this by viewing the network interface with Get-AzureRmNetworkInterface and checking the IpConfigurations properties. In there we can see the reference to the ASG memberships in ApplicationSecurityGroups.

Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement to integrate some Office 365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export methodology; however, through trial & error (more error than trial, if I’m being honest), and realising that this method still required some manual interaction, I decided to embark on finding a fully automated solution.

The final solution comprises the below components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure automation account)

I’m not going to run through the creation of the automation account or the required credentials, as these had already been created; however, there is a great guide to configuring the solution I have used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

What the PowerShell script we are using will achieve is the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear Existing PS Sessions
Get-PSSession | Remove-PSSession | Out-Null

#Create Split Function for CSV file
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.count) { $end = $inArray.count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return ,$outArray
}

function Connect-ExchangeOnline {
    param($Creds)
    #Connect to Exchange Online
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}

#Create Variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from Http Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'

#Connect to Azure
Add-AzureRMAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials

#Invoke Script
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Get Current Date & Time
$time = Get-Date -Format s

#Convert Timezone to Australia/Brisbane
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')

#Adding Time Column to Output
$mailboxes = $mailboxes | Select-Object @{expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Create Split Array for Mailboxes Spreadsheet
$recipients = Split-Array -inArray $mailboxes -parts 5

#Create JSON objects and HTTP Post to Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Create SSL Validation Bypass for Self-Signed Certificate in Testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Build JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #Post to Splunk Http Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Headers $header
    }
}

#Clean up sessions
Get-PSSession | Remove-PSSession | Out-Null

The final output that can be seen in Splunk looks like the following:

11/13/17
12:28:22.000 PM
{
AddressBookPolicy:
DisplayName: Shane Fisher
ForwardingSmtpAddress:
GrantSendOnBehalfTo:
IsMailboxEnabled: True
PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

‘Generic’ LDAP Connector for Azure AD Connect

I’m working for a large corporate that has a large user account store in Oracle Unified Directory (LDAP). They want to use these existing accounts and synchronise them to Azure Active Directory for Azure application services (such as future Office 365 services).

Microsoft state here that Azure Active Directory Connect (AAD Connect) will, in a ‘Future Release’ version, provide native LDAP support (“Connect to single on-premises LDAP directory”), so timing wise I’m in a tricky position – do I guide my customer to attempt to use the current version (which at the time of writing is v1.1.649.0), or wait for this ‘future release’ version?

This blog may not have a very large lifespan – indeed a new version of AAD Connect might be released at any time with native LDAP tree support, so be sure to research AAD Connect prior to providing a design or implementation.

My customer doesn’t have any requirement for ‘write back’ services (where data is written back from Azure Active Directory to the local directory user store) so this blog post covers just a straight export from the on-premises LDAP into Azure Active Directory.

I contacted Microsoft and they stated it’s supported ‘today’ to provide connectivity from AAD Connect to LDAP, so I’ve spun up a Proof of Concept (PoC) lab to determine how to get it working in this current version of AAD Connect.

Good news first, it works!  Bad news, it’s not very well documented so I’ve created this blog just to outline my learnings in getting it working for my PoC lab.

I don’t have access to the Oracle Unified Directory in my PoC lab, so I substituted in Active Directory Lightweight Directory Services (AD LDS) so my configuration reflects the use of AD LDS.

Tip #1 – You still need an Active Directory Domain Service (AD DS) for installation purposes

During the AAD connect installation wizard (specifically the ‘Connect your directories’ page), it expects to connect to an AD DS forest to progress the installation.  In this version of AAD Connect, the ‘Directory Type’ listbox only shows ‘Active Directory’ – which I’m expecting to include more options when the ‘Future Release’ version is available.

I created a single Domain Controller (forest root domain) and used the local HOST file of my AAD Connect Windows Server to point the forest root domain FQDN e.g. ‘forestAD.internal’ to the IP address of that Domain Controller.

I did not need to ‘join’ my AAD Connect Windows Server to that domain to complete the installation, which will make it easier to decommission this AD DS (if it’s put into Production) if Microsoft releases an AAD Connect version that does not require AD DS.

Screen Shot 2017-10-25 at 10.39.34 am

Tip #2 – Your LDAP Connector service account needs to be a part of the LDAP tree

After getting AAD Connect installed with mostly the default options, I then went into the “Synchronization Engine” to install the ‘Generic LDAP’ connector that is available by clicking on the ‘Connector’ tab and clicking ‘Create’ under the ‘Actions’ heading on the right hand side:

Screen Shot 2017-11-01 at 11.50.41 am

For the ‘User Name’ field, I created and used an account (in this example: ‘CN=FIMLDSAdmin,DC=domain,DC=internal’) that was part of the LDAP tree itself, instead of the installation account created by AD LDS, which is part of the local Windows Server user store.

If I tried to use a local Windows Server user, i.e. ‘Server\username’, it would just give me generic connection errors and not bind to the LDAP tree, even if that user had full administrative rights to the AD LDS tree. I gave up troubleshooting this – so I’ve resigned myself to needing a service account in the LDAP tree itself.

Screen Shot 2017-11-03 at 9.50.55 am

Tip #3 – Copy (and modify) the existing Inbound Rules for AD

When you create a custom Connector, the data mapping rules for that Connector are managed in the ‘Synchronization Rules Editor’ program and are not located in the Synchronization Engine. There are basically two choices at the moment for a custom connector: copy existing rules from another Connector, or create a brand new (blank) rule set for that connector (per object type e.g. ‘User’).

I chose the first option – I copied the three existing ‘Inbound’ Rules from the AD DS connector:

  • In from AD – User Join
  • In from AD – User AccountEnabled
  • In from AD – User Common

If you ‘edit’ each of these existing AD DS rules, you’ll get a choice to create a copy of that rule set and disable the original. Selecting ‘Yes’ will create a copy of that rule, and you can then modify the ‘Connected System’ to use the LDAP Connector instead of the AD DS Connector:

Screen Shot 2017-11-03 at 10.10.51 am

I also modified the priority numbering of the rules (to ‘201’ through ‘203’) to have these new rules applied last in my Synchronization Engine.

I ended up with the configuration of these three new ‘cloned’ rules for my LDAP Connector:

Screen Shot 2017-11-03 at 10.04.27 am

I found I had to edit or remove any rules that required the following:

  • Any rule looking for ‘sAMAccountName’, I modified the text of the rule to look for ‘UID’ instead (which is the schema attribute name most closely resembling that account name field in my custom LDAP)
  • I deleted the following rule from the Transformation section of the ‘In from AD – User Join’ cloned rule.  I found that it was preventing any of my LDAP accounts reaching Azure AD:

Screen Shot 2017-11-03 at 10.32.39 am

  • In the ‘In from AD – User AccountEnabled’ rule, I modified the existing Scoping Filter to not look at the ‘userAccountControl’ bit and instead use the AD LDS attribute ‘msDS-UserAccountDisabled = FALSE’:

Screen Shot 2017-11-03 at 10.35.36 am

There are obviously many, many ways of configuring the rules for your LDAP tree, but I thought I’d share how I did it with AD LDS. The reason I ‘cloned’ existing rules was primarily to protect the data integrity of Azure AD. There are many, many default data mapping rules for Azure AD that come with the AD DS rule set – a lot of them use ‘TRIM’ and ‘LEFT’ functions to ensure the data reaches Azure AD with the correct formatting.

It will be interesting to see how Microsoft tackles these rules sets in a more ‘wizard’ driven approach – particularly since LDAP trees can be highly customised with unique attribute names and data approaches.

Before closing the ‘Synchronization Rules Editor’, don’t forget to re-enable each of the (e.g. AD DS) Connector rules you’ve previously cloned, because the Synchronization Rules Editor assumes you’re not modifying the Connector they’re using. Select the original rule you cloned, and uncheck the ‘Disabled’ box.

Tip #4 – Create ‘Delta’ and ‘Full’ Run Profiles

Lastly, you might be wondering: how does the AAD Connector Scheduler (the one based entirely in PowerShell with seemingly no customisation commands) pickup the new LDAP Connector?

Well, it’s simply a matter of naming your ‘Run Profiles’ in the Synchronization Engine with the text: ‘Delta’ and ‘Full’ where required.  Select ‘Configure Run Profiles’ in the Engine for your LDAP Connector:

Screen Shot 2017-11-03 at 10.16.29 am.png

I then created ‘Run Profiles’ with the same naming convention as the ones created for AD DS and Azure AD:

Screen Shot 2017-11-03 at 10.17.58 am

Next time I ran an ‘Initial’ (which executes ‘Full Import’ and ‘Full Sync.’ jobs) or a ‘Delta’ AD Scheduler job (I’ve previously blogged about the AD Scheduler, but you can find the official Microsoft doc on it here), my new LDAP Connector Run Profiles were executed automatically along with the AD DS and AAD Connector Run Profiles:

Screen Shot 2017-11-03 at 10.19.57 am
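
As an aside, if you want to kick a cycle off manually rather than wait for the scheduler, the ADSync PowerShell module that ships with AAD Connect can do it:

#Inspect the current scheduler settings (interval, next run time, whether the sync cycle is enabled)
Get-ADSyncScheduler

#Trigger a delta cycle on demand; use -PolicyType Initial for a full import/sync
Start-ADSyncSyncCycle -PolicyType Delta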

Before I finish up, my colleague David Minnelli has found IDAMPundit’s blog post about a current bug upgrading to AAD Connect v1.1.649.0 version if you already have an LDAP Connector.  In a nutshell, just open up the existing LDAP Connector and step through each page and re-save it to clear the issue.

** Edit: I have had someone query how I’m authenticating with these accounts, well I’m leveraging an existing SecureAuth service that uses the WS-Federation protocol to communicate with Azure AD.  So ‘Federation’ basically – I’m not extracting passwords out of this LDAP or doing any kind of password hash synchronization.  Thanks for the question though!

Hope this helped, good luck!

Continuous Deployment for Docker with VSTS and Azure Container Registry

siliconvalve

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to many every-day developers as the tooling hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the support of Docker for Windows I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services and given Microsoft’s focus on Docker we didn’t see that there would be any blockers.

Our flow at high level is…

View original post 978 more words

Automatically Provision Azure AD B2B Guest Accounts

Azure ‘Business to Business’ (or the catchy acronym ‘B2B’) has been an area of significant development in the last 12 months when it comes to providing access to Azure based applications and services to identities outside an organisation’s tenancy.

Recently, Ryan Murphy (who has contributed to this blog) and I have been tasked to provide an identity based architecture to share Dynamics 365 services within a large organisation, but across two ‘internal’ Azure AD tenancies.

Dynamics 365 takes its identity store from Azure AD; if you’re assigned a license for Dynamics 365 in the Azure Portal, including in a ‘B2B’ scenario, you’re granted access to the Dynamics 365 application (as outlined here).  Further information about how Dynamic 365 concepts integrate with a single or multiple Azure tenancies is outlined here.

Microsoft provide extensive guidance in completing a typical ‘invitation’ based scenario using the Azure portal (using the links above).  Essentially this involves inviting users using an email which relies on that person manually clicking on the embedded link inside the email to complete the acceptance (and the ‘guest account’ creation in the Dynamics 365 service linked to that Azure AD).

However, this obviously won’t scale when you need to invite thousands of new users initially, and then repeatedly invite new users as part of a Business-As-Usual (BAU) process as they join the organisation (or ‘identity churn’ as we say).

Therefore, to automate the creation of new Guest User Azure AD accounts without involving the user at all, this process can be followed:

  1. Create a ‘service account’ Guest User from the invited Azure AD (has to have the same UPN suffix as the users you’re inviting) to be a member of the resource Azure AD.
  2. Assign the ‘service account’ Guest User to be a member of the ‘Guest Inviter’ role of the resource Azure AD.
  3. Use PowerShell to auto. provision new Guest User accounts using the credentials of the ‘service account’ Guest User.

In this blog, we’ll use the terms ‘Resource Azure AD’ or ‘Resource Tenancy’ for the location of the resources you’re trying to share out, and ‘Invited Azure AD’ or ‘Invited Tenancy’ for where the user accounts (including usernames & passwords) you’re inviting reside. The invited users only ever use their credentials in their own Azure AD or tenancy – never credentials of the ‘Resource’ Azure AD or tenancy. The ‘Guest User’ objects created in the ‘Resource Tenancy’ are essentially just linking objects without any stored password.

A ‘Service Account’ Azure AD account dedicated solely to the automatic creation of Guest Users in the Resource Tenancy will need to be created first in the ‘Invited Azure AD’ – for this blog, we used an existing Azure AD account sourced from a synchronised local Active Directory. This account did not have any ‘special’ permissions in the ‘Invited Azure AD’, but according to some blogs it requires at least ‘read’ access to the user store in the ‘Invited Azure AD’ (which is the default).

This ‘Service Account’ Azure AD account should have a mailbox associated with it, i.e. either an Exchange Online (Office 365) mailbox, or a mail value that has a valid SMTP address for a remote mailbox.  This mailbox is needed to approve the creation of a Guest User account in the Resource Tenancy (only needed for this individual Service Account).

It is strongly recommended that this ‘Service Account’ user in the ‘Invited Azure AD’ has a very strong & complex password, and that any credential used for that account within a PowerShell script be encrypted using the approach in David Lee’s blog.

The PowerShell scripts listed below to create these Guest User accounts could then be actioned by an identity management system, e.g. Microsoft Identity Manager (MIM), or a ‘Runbook’ or workflow system (e.g. SharePoint).

Task 1: Create the ‘Service Account’ Guest User using PowerShell

Step 1: Sign into the Azure AD Resource Tenancy’s web portal: ‘portal.azure.com’, using a Global Admin credential.

Step 2:  When you’re signed in, click on the account profile picture on the top right of the screen and select the correct ‘Resource Tenancy’ (There could be more than one tenant associated with the account you’re using):

Screenshot 2017-09-19 at 9.34.18 AM

Step 3:  Once the tenancy is selected, click on the ‘Azure Active Directory’ link on the left pane.

Step 4:  Click ‘User Settings’ and verify the setting (which is set by default for new Azure AD tenancies):  ‘Members can invite’.

Screenshot 2017-09-19 11.31.51

Step 5:  Using a new PowerShell session, connect and authenticate to the Azure AD tenancy where the Guest User accounts are required to be created into (i.e. the ‘Resource Azure AD’).

Be sure to specify the correct ‘Tenant ID’ of the ‘Resource Azure AD’ using the PowerShell switch ‘-TenantId‘ followed by the GUID value of your tenancy (to find that Tenant ID, follow the instructions here).

$Creds = Get-Credential

Connect-AzureAD -Credential $Creds -TenantId "aaaaa-bbbbb-ccccc-ddddd"

Step 6:  The following PowerShell command should be executed under a ‘Global Admin’ to create the ‘Service Account’ e.g. ‘serviceaccount@invitedtenancy.com’.

New-AzureADMSInvitation -InvitedUserDisplayName "Service Account Guest Inviter" -InvitedUserEmailAddress "serviceaccount@invitedtenancy.com" -SendInvitationMessage $true -InviteRedirectUrl "http://myapps.microsoft.com" -InvitedUserType member

Step 7:  The ‘Service Account’ user will then need to locate the email invitation sent out by this command and click on the link embedded within to authorise the creation of the Guest User object in the ‘Resource Tenancy’.

Task 2: Assign the ‘Service Account’ Guest Inviter Role using Azure Portal

Step 1:  Sign into the Azure web portal: ‘portal.azure.com’ with the same ‘Global Admin’ (or lower permission account) credential used in Task 1 (or re-use the same ‘Global Admin’ session from Task 1).

Step 2:  Click on the ‘Azure Active Directory’ shortcut on the left pane of the Azure Portal.

Step 3:  Click on the ‘All Users’ tab and select the ‘Service Account’ Guest User.

(I’m using ‘demo.microsoft.com’ pre-canned identities in the screen shot below, any names similar to real persons is purely coincidental – an image for ‘serviceaccount@invitedtenancy’ used as the example in Task 1 could not be reproduced)

Screenshot 2017-09-19 09.44.36

Step 4:  Once the ‘Service Account’ user is selected, click on the ‘Directory Role’ on the left pane.  Click to change their ‘Directory Role’ type to ‘Limited administrator’ and select ‘Guest Inviter’ below that radio button.  Click the ‘Save’ button.

Screenshot 2017-09-19 09.43.53

Step 5:  The next step is to test to ensure the ‘Service Account’ Guest User account can invite users from the same ‘UPN/Domain suffix’. Click on the ‘Azure Active Directory’ link on the left pane of the main Azure Portal.

Step 6:  Click ‘Users and groups’ and click ‘Add a guest user’ on the right:

Screenshot 2017-09-19 09.36.02

Step 7:  On the ‘Invite a guest’ screen, send an email invitation to a user from the same Azure AD as the ‘Service Account’ Guest User. For example, if your ‘Service Account’ Guest user UPN/domain suffix is ‘serviceaccount@remotetenant.com’, then invite a user with the same UPN/domain suffix, e.g. ‘jim@remotetenant.com’ (again, only an example – any resemblance to a current or future email address is purely coincidental).

Screenshot 2017-09-19 09.36.03

Step 8:  When the user receives the invitation email, ensure that the following text appears at the bottom of the email:  ‘There is no action required from you at this time’:

image002

Step 9:  If that works, then PowerShell can now automate that invitation process bypassing the need for emails to be sent out.  Automatic Guest Account creation can now leverage the ‘Service Account’ Guest User.

NOTE:  If you try to invite a user with a UPN/domain suffix that does not match the ‘Service Account’ Guest User, the invitation will still be sent, but it will require the recipient to accept it. The invitation will remain in a ‘pending acceptance’ state, and the Guest User object will not be created, until that is done.

Task 3:  Auto. Provision new Guest User accounts using PowerShell

Step 1:  Open Windows PowerShell (or re-use an existing PowerShell session that has rights to the ‘Resource Tenancy’).

Step 2:  Type the following example PowerShell command to send an invitation out, and authenticate when prompted using the ‘Invited Tenancy’ credentials of the ‘Service Account’ Guest User.

In the script, again be sure to specify the ‘Tenant ID’ of the ‘Resource Tenancy’, not the ‘Invited Tenancy’, for the switch -TenantId.

#Connect to Azure AD
$Creds = Get-Credential
Connect-AzureAD -Credential $Creds -TenantId "aaaaa-bbbbb-ccccc-ddddd"

$messageInfo = New-Object Microsoft.Open.MSGraph.Model.InvitedUserMessageInfo
$messageInfo.customizedMessageBody = "Hey there! Check this out. I created and approved my own invitation through PowerShell"

New-AzureADMSInvitation -InvitedUserEmailAddress "ted@invitedtenancy.com" -InvitedUserDisplayName "Ted at Invited Tenancy" -InviteRedirectUrl "https://myapps.microsoft.com" -InvitedUserMessageInfo $messageInfo -SendInvitationMessage $false

Compared to using the Azure portal, this time no email will be sent (the display name and message body will never be seen by the invited user; they are just required for the command to complete). To send a confirmation email to the user, you can change the switch -SendInvitationMessage to $true.

Step 3:  The output of the PowerShell command should show ‘Accepted’ next to ‘Status’ at the end of the text:

image001

This means the Guest User object has automatically been created and approved by the ‘Resource Tenancy’.   That Guest User object created will be associated with the actual Azure AD user object from the ‘Invited Tenancy’.
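
If you’re provisioning at volume, it may be worth capturing the returned object and checking the status programmatically rather than reading the console output. A small sketch, using the same parameters as the command above:

#Capture the invitation result and verify it was auto-accepted
$invite = New-AzureADMSInvitation -InvitedUserEmailAddress "ted@invitedtenancy.com" -InvitedUserDisplayName "Ted at Invited Tenancy" -InviteRedirectUrl "https://myapps.microsoft.com" -InvitedUserMessageInfo $messageInfo -SendInvitationMessage $false
if ($invite.Status -ne "Accepted") {
    Write-Warning "Invitation for ted@invitedtenancy.com is still pending acceptance"
}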

The next steps for this invited Guest User will be then to assign them a Dynamics 365 license and then a Dynamics 365 role in the ‘Resource Tenancy’ (which might be topics of future blogs).

Hope this blog has proven useful.

Ok Google Email me the status of all vms – Part 2

First published at https://nivleshc.wordpress.com

In my last blog, we configured the backend systems necessary for accomplishing the task of asking Google Home “OK Google Email me the status of all vms” and it sending us an email to that effect. If you haven’t finished doing that, please refer back to my last blog and get that done before continuing.

In this blog, we will configure Google Home.

Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.

For our purposes, we will be using the platform IF This Then That or IFTTT for short. IFTTT is a very powerful platform as it lets you create actions based on triggers. This combination of triggers and actions is called a recipe.

Ok, let’s dig in and create our IFTTT recipe to accomplish our task.

1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)

1.2   Login to IFTTT and click on My Applets menu from the top

IFTTT_MyApplets_Menu

1.3   Next, click on New Applet (top right hand corner)

1.4   A new recipe template will be displayed. Click on the blue +this to choose a service

IFTTT_Reicipe_Step1

1.5   Under Choose a Service type “Google Assistant”

IFTTT_ChooseService

1.6   In the results Google Assistant will be displayed. Click on it

1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, login with the Google account that is associated with your Google Home and then approve IFTTT to access it.

IFTTT_ConnectGA

1.8   The next step is to choose a trigger. Click on Say a simple phrase

IFTTT_ChooseTrigger

1.9   Now we will put in the phrases that Google Home should trigger on.

IFTTT_CompleteTrigger

For

  • What do you want to say? enter “email me the status of all vms”
  • What do you want the Assistant to say in response? enter “no worries, I will send you the email right away”

All the other sections are optional, however you can fill them if you prefer to do so

Click Create trigger

1.10   You will be returned to the recipe editor. To choose the action service, click on +that

IFTTT_That

1.11  Under Choose action service, type webhooks. From the results, click on Webhooks

IFTTT_ActionService

1.12   Then for Choose action click on Make a web request

IFTTT_Action_Choose

1.13   Next the Complete action fields screen is shown.

For

  • URL – paste the webhook url of the runbook that you had copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields

Click Create action

1.14   In the next screen, click Finish

IFTTT_Review

Woo hoo. Everything is now complete. Let’s do some testing.

Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries. I will send you the email right away”.

I have noticed some delays in receiving the email, however the most I have had to wait is about 5 minutes. If this is unacceptable, modify the Send-MailMessage command in the runbook script by adding the parameter -Priority High. This sends the email with high priority, which should make things faster. Also, the runbook is currently running in Azure; better performance might be achieved by using Hybrid Runbook Workers.
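
For reference, the amended mail command in the runbook would look like this (same variables as the Part 1 script):

#Send the status email with high priority
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl -Priority High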

To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.

Automation_JobsLog

There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.

If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.

Ok Google Email me the status of all vms – Part 1

First published at https://nivleshc.wordpress.com

Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!

One of the upcoming areas in Computing Science is Artificial Intelligence. What seemed science fiction in the days of Isaac Asimov, when he penned I, Robot, seems closer to reality now.

Lately the market is popping up with virtual assistants from the likes of Apple, Amazon and Google. These are “bots” that use Artificial Intelligence to help us with our daily lives, from telling us about the weather, to reminding us about our shopping lists or letting us know when our next train will be arriving. I still remember my first virtual assistant Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.

I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions, it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well maybe in the next version (Google here’s a tip for your next version 😉 )

Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.

In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines, however I would caution against doing this in your production environment, in case you turn off some machine that is running critical workloads.

In this first blog post, we will setup the backend systems to achieve the tasks and in the next blog post, we will connect it to Google Home.

The diagram below shows how we will achieve what we have set out to do.

Google Home Workflow

Below is a list of tasks that will happen

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service
  3. IFTTT will then trigger the webhooks service to call a webhook url attached to an Azure Automation Runbook
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run, and obtain a status of all vms.
  6. The output will be emailed to the designated recipient

Ok, enough talking 😉 let’s start cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides permissions to access the Azure resources. For our purposes, we will be using a service principal account.

Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to login to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where

DisplayName is the display name for your AD Application eg “Google Home Automation”

HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)

IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation

Password is the password you will give the service principal account

Now, let’s create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure subscription. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is actually the ApplicationId suffixed by your Azure AD domain name. To get the Application ID, run the following, providing the IdentifierUri that was supplied when creating it above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let’s check to ensure we can log in to Azure using the newly created service principal account and the password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where TenantId is your Azure Tenant’s ID

If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.

2.1   Login to the Azure Portal and then click New

AzureMarketPlace_New

2.2   Then type Automation and click search. From the results click the following.

AzureMarketPlace_ResultsAutomation

2.3   In the next screen, click Create

2.4   Next, fill in the appropriate details and click Create

AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately Azure doesn’t provide relay servers that can be used by scripts to email out. Instead you have to either use EOP (Exchange Online Protection) servers or SendGrid to achieve this. SendGrid is an Email Delivery Service that Azure provides, and you need to create an account to use it. For our purposes, we will use the free tier, which allows the delivery of 2500 emails per month, which is plenty for us.

3.1   In the Azure Portal, click New

AzureMarketPlace_New

3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create

AzureMarketPlace_ResultsSendGrid

3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.

SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will be creating a Runbook that will contain our PowerShell script that will do all the work. The script will be using the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside our PowerShell script.

4.1   Go into the Automation Account that you had created.

4.2   Under Shared Resource click Credentials

AutomationAccount_Credentials

4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create

Credentials_Details

4.4   Repeat step 4.3 above to add the SendGrid account

4.5   Now that the Credentials have been stored, under Process Automation click Runbooks

Automation_Runbooks

Then click Add a runbook and in the next screen click Create a new runbook

4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create

Runbook_Details

4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'

Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}

$outputFile = $env:TEMP + "\AzureVmStatus.html"
$vmarray = @()

#Get a list of all vms 
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"

$index = 0
# Add info about VM's to the array
foreach ($vm in $vms){ 
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status
 $vmstatus = Get-AzurermVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
$vmarray | Sort-Object PowerState,OSType -Desc

Write-Output "Converting Output to HTML" 
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"

$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"

Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal Account’ and ‘SendGrid Account’ are the names of the credentials that were created in the Automation Account (include the ‘ ‘ around the name)
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of vms

4.8   Next, we will create a Webhook. A webhook is a special URL that will allow us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password since whoever possesses the webhook can execute the runbook without needing to provide any credentials.

Open the runbook that was just created and from the top menu click on Webhook

Webhook_menu

4.9   In the next screen click Create new webhook

4.10  A security message will be displayed informing that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.

Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.

Once the webhook has expired, you can’t use it to trigger the runbook, however before it expires, you can change the expiry date. For security reasons, it is recommended that you don’t keep the webhook alive for a long period of time.

Webhook_details
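
Once you have the URL saved, you can sanity-check the webhook from any machine with PowerShell – no Azure login needed, which is exactly why the URL must be kept secret. Azure Automation responds with the id of the queued job:

#POST to the webhook URL copied earlier (the token below is a placeholder for your saved URL)
$webhookUrl = "https://s1events.azure-automation.net/webhooks?token=..."
Invoke-RestMethod -Method Post -Uri $webhookUrl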

That’s it folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.

Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉

I will see you in Part 2 of this blog. Ciao 😉