A [brief] intro to Azure Resource Visualiser (ARMVIZ.io)

Another week, another Azure tool that I’ve come by and thought I’d share with the masses. This one isn’t a major revelation or something I’ve added to my Chrome work profile bookmarks bar like I did with the Azure Resource Explorer (as yet, though I may well do so in the very near future), but I certainly have it bookmarked in my Azure folder in Chrome bookmarks.

When working with Azure Resource Manager templates, you’re dealing with long JSON files. These files can get pretty big and span hundreds of lines. Here’s an example of one:

(word of warning: scroll to the bottom, as there is more content after this ~250+ line ARM template sample)

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": {
        "description": "Name of storage account"
      }
    },
    "adminUsername": {
      "type": "string",
      "metadata": {
        "description": "Admin username"
      }
    },
    "adminPassword": {
      "type": "securestring",
      "metadata": {
        "description": "Admin password"
      }
    },
    "dnsNameforLBIP": {
      "type": "string",
      "metadata": {
        "description": "DNS for Load Balancer IP"
      }
    },
    "vmSize": {
      "type": "string",
      "defaultValue": "Standard_D2",
      "metadata": {
        "description": "Size of the VM"
      }
    }
  },
  "variables": {
    "storageAccountType": "Standard_LRS",
    "addressPrefix": "10.0.0.0/16",
    "subnetName": "Subnet-1",
    "subnetPrefix": "10.0.0.0/24",
    "publicIPAddressType": "Dynamic",
    "nic1NamePrefix": "nic1",
    "nic2NamePrefix": "nic2",
    "imagePublisher": "MicrosoftWindowsServer",
    "imageOffer": "WindowsServer",
    "imageSKU": "2012-R2-Datacenter",
    "vnetName": "myVNET",
    "publicIPAddressName": "myPublicIP",
    "lbName": "myLB",
    "vmNamePrefix": "myVM",
    "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('vnetName'))]",
    "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
    "publicIPAddressID": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]",
    "lbID": "[resourceId('Microsoft.Network/loadBalancers',variables('lbName'))]",
    "frontEndIPConfigID": "[concat(variables('lbID'),'/frontendIPConfigurations/LoadBalancerFrontEnd')]",
    "lbPoolID": "[concat(variables('lbID'),'/backendAddressPools/BackendPool1')]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-05-01-preview",
      "location": "[resourceGroup().location]",
      "properties": {
        "accountType": "[variables('storageAccountType')]"
      }
    },
    {
      "apiVersion": "2015-05-01-preview",
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[variables('publicIPAddressName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
        "dnsSettings": {
          "domainNameLabel": "[parameters('dnsNameforLBIP')]"
        }
      }
    },
    {
      "apiVersion": "2015-05-01-preview",
      "type": "Microsoft.Network/virtualNetworks",
      "name": "[variables('vnetName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[variables('addressPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[variables('subnetName')]",
            "properties": {
              "addressPrefix": "[variables('subnetPrefix')]"
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2015-05-01-preview",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[variables('nic1NamePrefix')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]",
        "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[variables('subnetRef')]"
              },
              "loadBalancerBackendAddressPools": [
                {
                  "id": "[concat(variables('lbID'), '/backendAddressPools/BackendPool1')]"
                }
              ],
              "loadBalancerInboundNatRules": [
                {
                  "id": "[concat(variables('lbID'),'/inboundNatRules/RDP-VM0')]"
                }
              ]
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2015-05-01-preview",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[variables('nic2NamePrefix')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[variables('subnetRef')]"
              }
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2015-05-01-preview",
      "name": "[variables('lbName')]",
      "type": "Microsoft.Network/loadBalancers",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]"
      ],
      "properties": {
        "frontendIPConfigurations": [
          {
            "name": "LoadBalancerFrontEnd",
            "properties": {
              "publicIPAddress": {
                "id": "[variables('publicIPAddressID')]"
              }
            }
          }
        ],
        "backendAddressPools": [
          {
            "name": "BackendPool1"
          }
        ],
        "inboundNatRules": [
          {
            "name": "RDP-VM0",
            "properties": {
              "frontendIPConfiguration": {
                "id": "[variables('frontEndIPConfigID')]"
              },
              "protocol": "tcp",
              "frontendPort": 50001,
              "backendPort": 3389,
              "enableFloatingIP": false
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2015-06-15",
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[variables('vmNamePrefix')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
        "[concat('Microsoft.Network/networkInterfaces/', variables('nic1NamePrefix'))]",
        "[concat('Microsoft.Network/networkInterfaces/', variables('nic2NamePrefix'))]"
      ],
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        "osProfile": {
          "computerName": "[variables('vmNamePrefix')]",
          "adminUsername": "[parameters('adminUsername')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "[variables('imagePublisher')]",
            "offer": "[variables('imageOffer')]",
            "sku": "[variables('imageSKU')]",
            "version": "latest"
          },
          "osDisk": {
            "name": "osdisk",
            "vhd": {
              "uri": "[concat('http://',parameters('storageAccountName'),'.blob.core.windows.net/vhds/','osdisk', '.vhd')]"
            },
            "caching": "ReadWrite",
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "properties": {
                "primary": true
              },
              "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nic1NamePrefix'))]"
            },
            {
              "properties": {
                "primary": false
              },
              "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nic2NamePrefix'))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": "true",
            "storageUri": "[concat('http://',parameters('storageAccountName'),'.blob.core.windows.net')]"
          }
        }
      }
    }
  ]
}


So what is this Azure Resource Visualiser?

For those following along at home who are cluey enough to pick up on what the ARM Visualiser is, it’s a tool that helps you visualise the components in an ARM template and the relationships between those components. It does this in a very user-friendly and easy-to-use way.

What you can do is as follows:

  • Play around with ARMVIZ using a pre-built ARM template
  • Import your own ARM template and visualise the characteristics of your components and their relationships
    • This helps validate the ARM template and make sure there are no errors or bugs
  • Get quick access to the ARM quickstart templates library on GitHub
  • View your JSON ARM template online
  • Create a new ARM template from scratch, or by copying one of the quickstart templates, ONLINE in your browser of choice
  • Freely edit that template and flick between the JSON view and the Visualiser view quickly
  • Download a copy of your ARM template once it’s built, tested, visualised and working
  • Provide helpful feedback to the Azure team working on this service
    • Possibly leading to this being rolled into the Azure Portal at some point in the future
  • Move all of the components around the screen as you please
  • Double-click on any of the components, for example the load balancer, to be taken to the relevant line in the ARM template to view or amend the config

What does ARMVIZ.io look like?

Here’s a sample ARM template visualised:

Azure Resource Visualiser

This is a screen grab of the Azure Resource Visualiser site. It shows the default, sample template that can be manipulated and played with.

Final words

If you want a quick, fun ARM template editor and your favourite ISE is just behaving like it did every other day, then spruce up and pimp up your day with the Azure Resource Visualiser. In the coming weeks I’m going to be doing quite a few ARM templates. I can assure you that those will be run through the Azure Resource Visualiser to validate and check for any errors.

Pro-tip: don’t upload an ARM template to Microsoft that might have sensitive information. I know it’s common sense, but, it happens to the best of us. I thought I’d just quickly mention that as a reminder more than anything else.

Hope you enjoy it,

Best,

Lucian

***

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango, or, connect via LinkedIn.

Why are you not using Azure Resource Explorer (Preview)?

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango. Connect on LinkedIn.

***

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and, clicking away, stumbled upon the Azure Resource Explorer, available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac, if you were wondering) and, finally, BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell”. Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer?

This is a [new to me] website, running the Bootstrap HTML, CSS and JavaScript framework (an older version, but, like yours truly here on clouduccino), that provides streamlined and rather well laid out access to the various REST API details/calls for any given Azure subscription. You can log in and view some nice management REST APIs, and make changes to Azure infrastructure in your subscription via REST calls/actions like get, put, post, delete and create.
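
Under the covers these are plain ARM REST calls. As a flavour of the kind of “get” you could fire off yourself, here’s a minimal PowerShell sketch; the token and subscription ID are placeholders, and how you acquire the bearer token is up to you:

# A raw ARM REST "get", of the sort ARE issues behind the scenes
$token = "<bearer token>"                           # placeholder: acquire via your preferred method
$sub = "00000000-0000-0000-0000-000000000000"       # placeholder subscription ID
Invoke-RestMethod -Method Get `
    -Uri "https://management.azure.com/subscriptions/$sub/resourceGroups?api-version=2015-01-01" `
    -Headers @{ Authorization = "Bearer $token" }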

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this of any use across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions mixed in with the REST APIs.

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but, I could easily create one!

Use cases

I’m sure that all looks “interesting”, with an example above with nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start playing and tinkering with the Resource Explorer portal yourselves (go ahead and do that, but after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need is to check one bit of information, say a route table for a VNET. PowerShell can also be time consuming: all that typing and memorising cmdlets and stuff (so 2016…).

A lot of the information you need can be grabbed at a glance from the ARE portal, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET, but, if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you’re not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below highlights a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, covering all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 attached disks, if you were interested).

vmSizes view for all currently available VM sizes in Azure. Handy to grab the programmatic name to then use in PowerShell scripting?!
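
If you’d rather pull the same list from PowerShell, here’s a minimal sketch using the AzureRM module of the day (it assumes you’ve already logged in, and the region is just an example):

# List every VM size available in a given region, with the programmatic names
Login-AzureRmAccount
Get-AzureRmVMSize -Location "australiaeast" |
    Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount |
    Sort-Object Name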

Use case #3 – PowerShell script examples

Mixing the REST API calls with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example: get, set, delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.
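
The screenshot of the generated script doesn’t carry across here, so as a flavour of the sort of thing ARE hands you for VNET management, here’s a rough sketch along the same lines with the AzureRM cmdlets (resource group, VNET and subnet names are made up):

# Read a VNET's config (a "get" action)
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "test-rg" -Name "myVNET"

# Add a subnet locally, then push the change back (a "set" action)
Add-AzureRmVirtualNetworkSubnetConfig -Name "Subnet-2" -AddressPrefix "10.0.1.0/24" -VirtualNetwork $vnet
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet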

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure Utility Belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best,

Lucian

Azure AD Connect pass-through authentication. Yes, no more AD FS required.

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter: @LucianFrango.

***

Yesterday I received a notification email from Alex Simons (Director of PM, Microsoft Identity Division) which started like this:

Today’s news might well be our biggest news of the year. Azure AD Pass-Through Authentication and Seamless Single Sign-on are now both in public preview!

So I thought I’d put together a streamlined overview of what this means for authentication with regards to the Microsoft Cloud and my thoughts on if I’d use it.

Before:

2016-12-09-before-01

What is Azure AD pass-through auth?

When working with the Microsoft Cloud, organisations from small business to enterprise leverage the ability to have common credentials across on-premises directories (ADDS) and the cloud. It’s the best user experience and the best IT management experience as well. The overhead of facilitating this can be quite a large endeavour though. There are all the complexities of AD FS and AADConnect to work through and build with high availability and disaster recovery in mind, as this core identity infrastructure needs to be online 24/7/365.

Azure AD pass-through authentication (public preview) simplifies this down to just Azure AD Connect. This new feature can, YES, do away with AD FS. I’m not in the “I hate AD FS” boat; I think as a tool it does a good job: proxying authentication from external to ADDS and translating from Kerberos to SAML. But for all those of you out there who are, and I know you’re out there, this would be music to your ears.

Moreover, if you wanted to enjoy SINGLE SIGN ON, you needed AD FS. Now, with pass-through authentication, SSO works with just Azure AD Connect. This is a massive win!

So, what do I need?

Nothing too complicated or intricate. The caveat is that support for this public preview feature is limited, as with all preview offerings from Azure. Don’t get too excited and roll this out into production just yet! Dev or test it as much as possible and get an understanding of it, but don’t go replacing AD FS yet.

Azure AD Connect generally needs only a few ports to communicate with ADDS on-premises and Azure AD in the cloud, the key port being TCP 443. With pass-through authentication, there are roughly 17 other ports (10 of which sit in a single range) that need to be opened up for communication. While this could be locked down at the firewall level to just Azure IPs, subnets and hosts, it will still make your security team question and probe for details.

The version of AADConnect also needs to be updated to the latest, released on December 7th (2016). From version 1.1.371.0, the preview feature is publicly available. Consider upgrade requirements as well before taking the plunge.

What are the required components?

Apart from the latest version of Azure AD Connect, I touched on a couple more items required to deploy pass-though authentication. The following rapid fire list outlines what is required:

  • A current, latest version of Azure AD Connect (v1.1.371.0 as mentioned above)
  • Windows Server 2012 R2 or higher is listed as the operating system for Azure AD Connect
    • If you still have Server 2008 R2, get your wiggle on and upgrade!
  • A second Windows Server 2012 R2 instance to run the Azure App Proxy .exe, to leverage high availability
    • Yes, HA, but not what you think… more details below
  • New firewall rules to allow traffic to a couple of wildcard subdomains
  • A bunch of new ports to allow communication with the Azure Application Proxy

For a complete list, check out the docs.microsoft.com article with detailed config.
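
If you want to sanity-check connectivity from the AADConnect box before the wizard does it for you, here’s a rough sketch using Test-NetConnection; the endpoints and ports below are examples only, so take the authoritative list from that article:

# Quick reachability checks from the AADConnect server (endpoints/ports are examples)
foreach ($c in @(
    @{ Host = "login.windows.net";         Port = 443 },
    @{ Host = "login.microsoftonline.com"; Port = 443 }
)) {
    Test-NetConnection -ComputerName $c.Host -Port $c.Port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}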

What is this second server business? AADConnect has HA now?

Much like AD FS being designed to provide high availability for that workload, there is the ability to provide some HA around the connectors required for the pass-through auth process. This is where it gets a little squirrelly. You won’t be deploying a second Azure AD Connect server and load balancing the two.

The second server is actually an additional server, I would imagine a vanilla Windows Server instance, on which a deployment of Azure AD Application Proxy is downloaded, executed and run.

The Azure Application Proxy is enabled in Azure AD, and the client (AADApplicationProxyConnectorInstaller.exe) downloaded and run (as mentioned above). This then introduces two services that run on the Windows Server instance and provide that connector to Azure AD. For more info, review the docs.microsoft.com article that outlines the setup process.

After:

2016-12-09-after-01

Would I use this?

I’ll answer this in two ways, then I’ll give you my final verdict.

Yes, I would use this. Simplifying what was “federated identity” to “hybrid identity” has quite a few benefits, most importantly the reduced complexity and reduced requirements to maintain the solution. This reduced overhead can maintain a higher level of security, with no credentials (or rather passwords) being stored in the cloud at all. No hashes. Nada. Zilch. Zero. Security managers and those tightly aligned to government regulations: a small bit of rejoicing for you.

No, I would not use this. AD FS is a good solution. If I had an existing AD FS solution that tied in with other applications, I would not go out of my way to change that just for Office 365 / Azure AD. Additionally, in-preview functionality does not have the same level of SLA support. In fact, Azure preview features are provided “as is”. That is not ideal for production environments. Sure, I’d put this in a proof of concept, but I wouldn’t recommend rolling it out anytime soon.

Personally, I think this is a positive piece of SSO evolution within the Microsoft stack. I would certainly be trying to get customers to proof-of-concept this and trial it in dev or test environments. It can further streamline identity deployments and could well mean customers never need to deploy AD FS, thus saving on instance cost and the managed services cost of supporting that infrastructure. Happy days!

Final thoughts

Great improvement, great step forward. If this had been announced at Microsoft Ignite a couple of months back, I think it would have been big news. Try it, play with it, send me your feedback. I want to always be learning, improving and finding out new gotchas.

Best,

Lucian

Real world Azure AD Connect: the case for TWO Azure AD Connect servers

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

I was exchanging some emails with an account manager (Andy Walker) at Kloud and thought the exchange would make for some interesting reading. Here’s the outcome in an expanded and much more helpful (to you, dear reader) format…

***

Background

When working with the Microsoft Cloud and in particular with identity, depending on some of the configuration options, it can be quite important to have Azure AD Connect highly available. Unfortunately for us, Microsoft has not developed AADConnect to be highly available. That’s not ideal in today’s 24/7/365 cloud-centric IT world.

When disaster strikes, and yes, that was a deliberate use of the word WHEN, with email, files, real-time communication and everything in between now living up in the sky with those clouds, should your primary data centre be out of action there is a high probability that a considerable chunk of functionality will be affected by your on-premises outage.

Azure AD Connect

AADConnect’s purpose, its only purpose, is to synchronise on-premises directories to Azure AD. There is no other option; you cannot sync one ADDS forest to another with Azure AD Connect. So this simple and lightweight tool only requires a single deployment in any given ADDS forest.

Even when working with multiple forests, you only need one. It can even be deployed on a domain controller, which can save an instance when working with the cloud.

The case for two Azure AD Connect servers

As of Monday April 17th 2017, just 132 days from today (Tuesday December 6th), DirSync and Azure AD Sync, the predecessors to Azure AD Connect, will be deprecated. This is good news. It means there is an opportunity to review your existing synchronisation architecture and take advantage of a feature of AADConnect: deploying two servers.

2016-12-06-two-aadc-v01

Staging mode is fantastic. Yes, short and sweet and to the point. It’s fantastic. Enough said, blog done; use it and we’re all good here? Right?

Okay, I’ll elaborate. Staging mode allows for the following rapid-fire benefits, which greatly improve IT as a result:

1 – Redundancy and disaster recovery, not high availability

I’ve read in certain articles that staging mode offers high availability. I disagree and argue it offers redundancy and disaster recovery. High availability is putting into practice architecture and design around applications, systems or servers that keep those operational and accessible as near as possible to 100% uptime, or always online, mostly through automated solutions.

I see staging mode as offering the ability to manually bring online a secondary AADConnect server should the primary be unavailable. This could be due to a disaster or a scheduled maintenance window. Therefore it makes sense to deploy a secondary server in an alternate data centre, site or geographic location to your primary server.

To manually update the secondary AADConnect server config and bring it online, the following needs to take place:

  • Run Azure AD Connect
    • That is the AzureADConnect.exe, usually located in the Microsoft Azure Active Directory Connect folder in Program Files
  • Select Configure staging mode (current state: enabled), then select next
  • Validate credentials and connect to Azure AD
  • Configure staging mode – un-tick the option
  • Configure and complete the changeover
  • Once this has been run, kick off a full import/full sync process (see the sketch below)

Since this is a manual process which takes time and breaks service availability to complete (importantly: you can’t have two AADConnect servers synchronising to a single Azure AD tenant), it’s not high availability.
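
Parts of this can be checked or driven from the ADSync PowerShell module; a minimal sketch, assuming a 1.1.x build with the built-in scheduler (the staging mode toggle itself is done through the wizard steps above):

# Check whether this AADConnect server is currently in staging mode
Import-Module ADSync
(Get-ADSyncScheduler).StagingModeEnabled

# After un-ticking staging mode in the wizard, kick off the full import/full sync
Start-ADSyncSyncCycle -PolicyType Initial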

2 – Update and test management

AADConnect now offers (since February 2016) an in-place automatic upgrade feature. Updates are rolled out by Microsoft and installed automatically. Cue the alarm bells!

I’m going to take a stab at making an analogy here: you wouldn’t have your car serviced while it’s running and taking you from A to B, so why would you have a production directory synchronisation server upgraded while it’s running? A bit of a stretch, but can you see where I was going there?

Having a secondary AADConnect server allows updates and/or server upgrades without impacting the core functionality of directory synchronisation to Azure AD. While this seems trivial, it’s certainly not trivial in my books. I’d want my AADConnect server functioning correctly all the time with minimal downtime. Having the two servers allows for a 15 minute (depending on how fast you can click) managed changeover from server to server.
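
If you’d like to see, or change, the automatic upgrade state on a given server, the ADSync module has a pair of cmdlets for it; a quick sketch:

# Check the automatic upgrade state on this AADConnect server
Import-Module ADSync
Get-ADSyncAutoUpgrade

# And, if the alarm bells won out, switch it off
Set-ADSyncAutoUpgrade -AutoUpgradeState Disabled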

3 – Improved management and less impact on production

Lastly, this is more of a stretch, but I think it’s good practice not to have user accounts assigned Domain Admin rights. The same applies to logging into production servers to do trivial tasks. Having a secondary AADConnect server allows IT administrators or service desk engineers to complete troubleshooting of sync processes, for the service or for objects, away from the production server. That, again in my books, is better practice and just as efficient as using the primary or production server.

Final words

Staging mode is fantastic. I’ve said it again. Since I found out about staging mode around this time last year, I’ve recommended two servers to every customer I’ve mentioned AADConnect to, referencing the three core arguments listed above. I hope that’s enough to convince you, dear reader, to do so as well.

Best,

Lucian

Real world Azure AD Connect: multi forest user and resource + user forest implementation

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

Disclaimer: During October I spent a few weeks working on this blog post’s solution at a customer and had to do the responsible thing and pull the pin on further time, as I had hit a glass ceiling. I had reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figure this solution out. Full credit and a big high five!

***

tl;dr

  • Azure AD Connect multi-forest design
  • Using AADC to sync user/account + resource shared forest with another user/account only forest
  • Why it won’t work out of the box
  • How to get around the issue and leverage precedence to make it work
  • Visio diagrams on how it all works, for easy digestion

***

In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.

Back to the future

A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.

Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to ever deploy FIM or MIM for all but the most complex environments.

When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.

2016-12-02-aadc-design-02

In the beginning…

The following outlines a common real world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows and, with the wealth and equity, purchases another business to diversify or expand. With that comes integration and the sharing of resources.

We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.

In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.

Over in the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. While we’re in the on-premises world, this works fine. I won’t go into too much more detail, but I’m sure that you, mr or mrs reader, understand the particulars.

In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.

How does a resource forest topology work with Azure AD Connect?

For the better part of two years now, going back to when AADConnect was AADSync, Microsoft has supported multi-forest connectivity. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.

In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…

AADConnect is able to select the two forests’ common identities and merge them before synchronising to Azure AD. This process uses the attributes associated with the user objects, objectSID in the user/account forest and msExchMasterAccountSID in the resource forest, to join the user account and the resource account.

There is also the option for customers to have multiple user forests and a single resource forest. I’ve personally not tried this with more than two forests, so I’m not confident enough to say how additional user/account forests would work out as well. However, please try it out and be sure to let me know via comments below, via Twitter or email me your results!

Quick note: you can also merge two objects by a sAmAccountName-to-sAmAccountName attribute match, or by specifying any ADDS attribute to match between the forests.

Compatibility

aadc-multi-forest

If you’d like to read up on this a little more, here are two articles that reference the above-mentioned topologies in detail:

Why won’t this work in the example shown?

Generally speaking, the first forest to sync in AADConnect in a multi-forest implementation is the user/account forest, which is likely the primary/main forest in an organisation. Let’s assume this is the Contoso forest. This will be the first connector to sync in AADConnect. It will also have the lowest precedence number, as with AADConnect, the lower the designated precedence number, the higher the priority.

When the additional user/account forest(s) or the resource forest is added, those connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn’t seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn’t really matter.

Give me more detail

The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Here’s what happens:

2016-12-02-aadc-whathappens-01

  • #1 – Contoso is synced to AADC first as it was the first forest connected to AADC
    • Adding in Fabrikam first, ahead of Contoso, doesn’t work either
  • #2 – The Fabrikam forest is joined with a second forest connector
  • AADC is configured with “user identities exist across multiple directories”
  • objectSID and msExchMasterAccountSID are selected to merge identities
  • When the objects are merged, sAmAccountName is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • When the objects are merged, mail or primarySMTPaddress is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • Should the two objects not have a completely identical set of attributes, whichever attributes are set are pulled in
    • In this case, most of the user object details come from Fabrikam – #2
    • Attributes like the user’s firstname, lastname, employee ID, branch / office

The result of this standard setup is that Fabrikam users have their resource accounts in Contoso synced, but have their UPN set with the suffix from the Contoso forest. An example would be a UPN of user@contoso.com rather than the desired user@fabrikam.com. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso forest UPN suffix of @contoso.com.

Yes, even with ADDS forest trusts configured correctly and UPN routing and so on all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.

Is there any other way around this?

I’ve touched on and referenced precedence a number of times in this blog post so far, and the solution is indeed precedence. The issue I had experienced came down to a lack of understanding of precedence in AADConnect. It works on connector-rule-level precedence, which is set by AADConnect during the configuration process as forests are connected.

Playing around with precedence was not something I wanted to do, as I didn’t have enough Microsoft Identity Manager or Forefront Identity Manager background to be certain of the outcome of the joining/merging process of user and resource account objects. I knew that FIM/MIM has the option of attribute-level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!

In comes Nigel…

Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and mapped out what was needed in AADConnect. Having got AADConnect set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.

Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:

2016-12-01-syncrules-01

The solution centres around the main precedence rule: rule #1 for Fabrikam (red arrows and yellow highlight) must sit above the highest (and default) Contoso rule (originally #1). When this happened, AADConnect was able to pull the correct sAmAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!

Final words

Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot in a small package! #Impressed

Cheers,

Lucian

Completing an Exchange Online Hybrid individual MoveRequest for a mailbox in a migration batch

I can’t remember for certain, however, I would say that since at least Exchange Server 2010 Hybrid there has been the ability to manually complete a MoveRequest (via PowerShell) from on-premises to Exchange Online for a mailbox within a migration batch. It’s a really important feature for all customers and something I have used on every enterprise migration to Exchange Online.

What are we trying to achieve here?

With enterprise customers and the potential for thousands of mailboxes to move from on-premises to Exchange Online, business analysts get their “kid in a candy store” on and sift through data to come up with relationships between mailboxes, so these mailboxes can be grouped together in migration batches for synchronised cutovers.

This is the tried and true method to ensure that not only business units, departments or teams cut over around the same timeframe, but that any specialised mailbox and shared mailbox relationships cut over too. This then keeps business processes working nicely and with minimal disruption to users.

Sidebar – as of March 2016, it is now possible to have permissions to shared mailboxes work cross-premises, from cloud to on-premises, in hybrid deployments with Exchange Server 2013 or 2016 on-premises. Cloud mailboxes can access on-premises shared mailboxes, which can streamline migrations where not all shared mailboxes need to be cut over with their full access users.

Reference: https://blogs.technet.microsoft.com/mconeill/2016/03/20/shared-mailboxes-in-exchange-hybrid-now-work-cross-premises/

How can we achieve this?

In the past, and for what feels like the last four years, this required one or two PowerShell cmdlets. The main reference I’ve recently gone to is Paul Cunningham’s ExchangeServerPro blog post, which conveniently is also the main Google search result. The two lines of PowerShell are as follows:

Step 1 – Set the SuspendWhenReadyToComplete switch to $false, i.e. turn it off

Get-MoveRequest mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false

Step 2 – Resume the move request to complete the cutover of the mailbox

Get-MoveRequest mailbox@domain.com | Resume-MoveRequest

Often I don’t even do step 1 mentioned above; I just resume the move request. It all worked and got me the result I was after. This was across various migration projects with both Exchange Server 2010 and Exchange Server 2013 on-premises.

Things changed recently!?!? I’ve been working with a customer on their transition to Exchange Online for the last month and we’re now up to moving a pilot set of mailboxes to the cloud. Things were going great, as expected, nothing out of the ordinary. That is, until I wanted to cut over one initial mailbox in our first migration batch.

Why didn’t this work?

It’s been a few months, three quarters of a year really, since I’ve done an Office 365 Exchange Online mail migration. I did the trusted Google and found Paul’s blog post. Had that “oh yeah, that’s it” moment and remembered the PowerShell cmdlets. Our pilot user was cut over! Wrong…

The mailbox went from a status of “synced” to “incrementalsync” and back to “synced” with no success. I ran through the PowerShell again. No bueno. Here’s a screen grab of what I was seeing.

SERIOUS FACE > Some names and identifying details have been changed to protect the privacy of individuals

And yes, that is my PowerShell colour theme. I have to get my “1337 h4x0r” going and make it green and stuff.

I see where this is going. There’s a workaround. So Lucian, hurry up and tell me!

Yes, dear reader, you would be right. After a bit of back and forth with Microsoft, there is indeed a solution. It still involves two PowerShell cmdlets, but there are two additional properties. One of those properties is actually hidden: -PreventCompletion. The other property is a date and time value normally represented like: “28/11/2016 12:00:00 PM”.

Let’s cut to the chase; here are the two cmdlets:

Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -preventCompletion:$false -CompleteAfter 5
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest

After running that, the mailbox, along with another four I used to be sure it worked, had all successfully cut over to Exchange Online. My -CompleteAfter value of “5” was meant to be five minutes. I would say it should really be set with the correct date and time format, taking the current date and time and adding five minutes. For me though, it worked with the above PowerShell.
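
If you’d rather pass a proper timestamp than lean on that quirk, here’s a sketch of the same two steps (the hidden -PreventCompletion switch behaves as described above; the mailbox address is a placeholder):

# Build "now + 5 minutes" as a real DateTime rather than a bare number
$completeAt = (Get-Date).AddMinutes(5)

Get-MoveRequest -Identity mailbox@domain.com |
    Set-MoveRequest -SuspendWhenReadyToComplete:$false -PreventCompletion:$false -CompleteAfter $completeAt
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest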

Final words

I know it’s been some time since I moved mailboxes from on-premises to the cloud, however, it’s like riding a bike: you never forget. In this case, I knew the option was there to do it. I pestered Office 365 support for two days; I hope the awesome people there don’t hate me for it. Although, in the long run, it’s good that I did, as figuring this out and using this method of manual mailbox cutover for synced migration batches is crucial to any transition to the cloud.

Enjoy!

Lucian

How to export user error data from Azure AD Connect with CSExport

A short post is a good post?! The other day I had some problems with users synchronising to Azure AD via Azure AD Connect. Ultimately Azure AD Connect was not able to meet the requirements of the particular solution, as Microsoft Identity Manager (MIM) 2016 has the final 5% of the config required for, as I found out, a complicated user+resource and user forest design.

In saying that though, during my troubleshooting, I was looking at ways to export the error data from Azure AD Connect. I wanted to have the data more accessible as sometimes looking at problematic users one by one isn’t ideal. Having it all in a CSV file makes it rather easy.

So here’s a short blog post on how to get that data out of Azure AD Connect to streamline troubleshooting purposes.

What

Azure AD Connect has a way to make things nice and easy, but, at the same time makes you want to pull your hair out. When digging a little, you can get the information that you want. However, at first, you could be presented with a whole bunch of errors like this:

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [UserPrincipalName user@domain.com]. Correct or remove the duplicate values in your local directory. Please refer to http://support.microsoft.com/kb/2647098 for more information on identifying objects with duplicate attribute values.

I believe it’s Event ID 6941 in Event Viewer as well.

It’s not a complicated error; it’s rather self-explanatory. However, when you have a bunch of them, say anything more than 20 or so, as I said earlier, it’s easier to export it all for quick reference and faster review.
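
As an aside, for that particular duplicate UPN error, a quick way to hunt down the offenders on-premises is to group on the attribute with the ActiveDirectory module; a rough sketch:

# Find on-premises users sharing a UserPrincipalName (requires the ActiveDirectory module / RSAT)
Import-Module ActiveDirectory
Get-ADUser -Filter * -Properties UserPrincipalName |
    Group-Object UserPrincipalName |
    Where-Object { $_.Count -gt 1 -and $_.Name } |
    Select-Object Name, Count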

How

To export that error data to a CSV file, complete the following steps:

Open a cmd prompt
Change the directory to "C:\Program Files\Microsoft Azure AD Sync\bin"
Run: CSExport "[Name of Connector]" %temp%\Errors-Export.xml /f:x – without the [ ]

The name of the connector above can be found in the AADC Synchronisation Service.

Now, to view that data in a nice CSV format, run the following to convert it into something more manageable:

Run: CSExportAnalyzer %temp%\Errors-Export.xml > %temp%\Errors-Export.csv
You now have a file in your %temp% directory named Errors-Export.csv.
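
Putting that together with a made-up connector name, the whole thing looks something like this from the cmd prompt:

rem Connector name below is hypothetical; grab yours from the Synchronisation Service
cd "C:\Program Files\Microsoft Azure AD Sync\bin"
CSExport "contoso.com" %temp%\Errors-Export.xml /f:x
CSExportAnalyzer %temp%\Errors-Export.xml > %temp%\Errors-Export.csv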

Happy days!

Final words

So, a short blog post, but I think a valuable one, in that getting the info into a more easily digestible format should result in faster troubleshooting. In saying that, this doesn’t give you all errors in all areas of AADC. Enjoy!

Best, Lucian


Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

Azure AD Connect: An error occurred executing configure AAD Sync task: user realm discovery failed

Yesterday (Tuesday October 11th, 2016) I started a routine install of Azure AD Connect. This project is for an upgrade from FIM 2010 R2 for a long-time client, if you were wondering.

Unfortunately at the end of the process, when essentially the final part of the install was running, during the “Configure” process, I ran into some trouble.

Strike 1

I received the following error:

2016-10-11-error-01

An error occurred executing Configure AAD Sync task: user_realm_discovery_failed: User realm discovery failed

This happened with the current, as of this blog post, version of Azure AD Connect: 1.1.281.0 (release: Sep 7th 2016).

I did a quick Google, as you do, and found a few articles that matched the same error. The first one I went to was by Mark Parris. His blog post (available here) had all the makings of a solution. I followed those instructions to try and resolve my issue.

The solution required the following steps (with some of my own additions):

  • Open Explorer
  • Navigate to the following path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\config
  • Find the file called machine.config
    • Create a new folder in the same directory called “Original”
    • Copy the file there as a backup in case things go wrong
  • Open machine.config with notepad to edit it
  • At the bottom, before the </configuration>, enter in the following:
<system.net>
  <defaultProxy enabled="false"></defaultProxy>
</system.net>

I followed those instructions and saved the file to my desktop. I removed the .txt extension Notepad added and, before saving the file to the directory, I checked the logs to make sure I was indeed having the same issue. A small error on my part there; I’ll just blame it on the jet lag. It’s something (the log file) I should have checked first. Nevertheless, I checked.

Strange. I didn’t have the same error as Mark. My install had the following error:

Operation failed. The remote server returned an error: (400) Bad Request.. Retrying…Exception = TraceEventType = RegistrationAgentType = NotAnAgentServiceType

I knew my client was using a proxy though, so, for the sake of testing, as this deployment was in staging mode anyway, I copied the machine.config file from the server desktop to the path and overwrote the file. My logic was that the .NET config with the proxy “false” setting would bypass the proxy. Unfortunately that was not the case.

Strike 2

I selected the retry option in the Azure AD Connect installer and waited for the result. Not quite the same error, but, an error nonetheless:

2016-10-11-error-02

An error occurred executing Configure AAD Sync task: the given key was not present in the dictionary.

What a dictionary has to do with Azure AD Connect is beyond me. Technology is complicated, so I didn’t judge it. I went back to a few of the other search results in Google. One of the others was from Tarek El-Touny. His blog post (available here) was similar to Mark’s, but had some different options regarding the proxy settings in the machine.config file.

Here’s what Tarek suggested to enter in that machine.config file:

<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="true">
    <proxy
      usesystemdefault="true"
      proxyaddress="http://<PROXYADDRESS>:<PROXYPORT>"
      bypassonlocal="true"
    />
  </defaultProxy>
</system.net>

Since I did have a proxy, this made more sense. Originally with the proxy “false” setting, I thought that would bypass or disable usage of the proxy. I was wrong.

Here’s a sample of the machine.config file for your reference:

2016-10-11-error-03

After I amended the machine.config file and saved it to the correct location, I started another retry of the installation. This time there was good news! It successfully finalised the installation and went on to configuration.

Final words

Over the last few months I’ve not done any hands-on technical work. Yes, some architecture and design, some work orders and pre-sales, but to get back on the tools was good. Unfortunately for me, I was a little rusty, but thanks to the blogosphere out there and the brains trust of other awesome people, I managed to work my way through the problem.

Best, Lucian


Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.


Azure networking VNET architecture best practice update (post #MSIgnite 2016)

During Microsoft Ignite 2016 I attended a few Azure networking architecture sessions. Towards the end of the week they did overlap on some content, which was not ideal, but a key message was consistent throughout: an interesting bit of reference architecture information.

Of note and relevant to this blog post:

  • Migrate and disaster recover Azure workloads using Operations Management Suite by Mahesh Unnifrishan, Microsoft Program Manager
  • Review ExpressRoute for Office 365 configuration (routing, proxy and network security) by Paul Andrew, Senior Product Marketing Manager
  • Run highly available solutions on Microsoft Azure by Igal Figlin, Principal PM- Availability, Scalability and Performance on Azure
  • Gain insight into real-world usage of the Microsoft cloud using Azure ExpressRoute by Bala Natarajan Microsoft Program Manager
  • Achieve high-performance data centre expansion with Azure Networking by Narayan Annamalai, Principal PM Manager, Microsoft

Background

For the last few years there has been one piece of design around Azure Virtual Networks (VNETs) that caused angst. When designing a reference architecture for VNETs, creating multi-tiered solutions was generally not recommended. For the most part, a single VNET with multiple subnets (from the Microsoft solution architects I spoke with) was the norm. This didn’t scale well across regions and required multiple and repetitive configurations when working at scale; or, to use Microsoft’s buzzword from the conference: hyper-scale.

2016-10-11-example-01

VNet Peering

At Microsoft Ignite, VNET peering was made generally available on September 28th (reference and official statement). VNET peering allows for the connectivity of VNETs in the same region without the need for a gateway. It extends the network segment to essentially allow for all communication between the VNETs as if they were a single network. Each VNET is still managed independently of the other; so NSGs, for example, would need to be managed on each VNET.

Extending across regions is still the biggest issue: VNET peering is currently limited to a single region. When cross-region peering comes, it will be another game changer. Amazon Web Services also has VPC peering, which is likewise limited to a single region, so Microsoft has caught up in this regard.

Interesting and novel designs can now be achieved with VNET peering.

Hub and spoke

I’m not a specialist network guy. I’ve done various Cisco studies and never committed to getting certified, but, did enough to be dangerous!

VNET peering has one major advantage: the ability to centralise shared resources, like, for example, virtual network appliances.

A standard network topology design known as hub and spoke features centralised provisioning of core components in a hub network with additional networks in spokes stemming from the core.

2016-10-11-example-02

Larger customers opt to use virtual firewalls (Palo Alto or F5 firewall appliances) or load balancers (F5 BIG-IPs), as network teams are generally well skilled in these and re-learning practices in Azure is time-consuming and costly.

Now Microsoft, via program managers on several occasions, recommends a new standard practice of using the hub and spoke network topology and leveraging the ability to centrally host shared network components. This could even extend to centrally hosting certain logically segmented areas, for example a DMZ segment.

I repeat: a recommended network design for most environments is generally hub and spoke, leveraging VNET peering and centralising shared resources in the hub VNET.
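
To give a flavour of how little is involved, here’s a minimal sketch of peering a hypothetical hub and spoke pair with the AzureRM cmdlets (names and resource groups are made up; note that a peering link must be created in each direction):

# Fetch the two VNETs (same region; names/resource groups are hypothetical)
$hub   = Get-AzureRmVirtualNetwork -ResourceGroupName "core-rg"  -Name "hub-vnet"
$spoke = Get-AzureRmVirtualNetwork -ResourceGroupName "spoke-rg" -Name "spoke1-vnet"

# Peer hub -> spoke, then spoke -> hub (both links are required)
Add-AzureRmVirtualNetworkPeering -Name "hub-to-spoke1" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id
Add-AzureRmVirtualNetworkPeering -Name "spoke1-to-hub" -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id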

Important

These new possibilities mean awesome network architecture designs can now be achieved. Important to note, though, is that there are limits imposed.

Speaking with various program managers, limits in most services are there as a guide and form a logical understanding of what can be achieved. However, in most cases these can be raised through discussion with Microsoft.

The limits on VNET peering apply to two areas. The first is the number of networks that can be peered to a single network (currently 50). The second is the number of routes able to be advertised when using ExpressRoute and VNET peering. Review the following Azure documentation article for more info on these limits: https://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#networking-limits.

Finally, it’s important to note that not every network is identical and requirements change from customer to customer. Just as important is to implement consistent and proven architecture topologies that leverage the knowledge and experience of others. Basically, stand on the shoulders of giants.

Best,

Lucian

[Updated] Yammer group and user export via Yammer API to JSON, then converted to CSV

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.


Update: awesome pro-tip

Big shout out to Scott Hoag (@ciphertxt on Twitter) for this pro-tip, which will save you having to read this blog post. Yes, you don’t know what you don’t know.

As part of a Yammer network merge (which I am writing a blog post about… WIP), you lose data: posts, files and so on, as that content can’t come across. You can, however, do an export of all that data to, depending on how much there is to export, what is usually a large .zip file. This is where Scott showed me the light. In that export there are also two .csv files: the first contains all the user info and the second all the group info. Knowing this, run that export process and you probably don’t need to read the rest of this blog post. #FacePalm.

HOWEVER, and that is a big however for a reason: the network export process does not export group membership in that groups.csv file. So if you want to export Yammer groups and all their members, the below blog post is one way of doing that process, just a longer way…


Yammer network merges are not pretty. I’m not taking a stab at you (Yammer developers and Microsoft Office 365 developers), but, I’m taking a stab.

There should be an option to allow at least group and group member data to be brought across when there is a network merge. Fair enough not bringing all the data across, as that can certainly be a headache with the vast amount of posts, photos, files and various content that consumes a Yammer network.

However, it would be considerably less painful for customers if at least the groups and all their members could be merged. It would also make my life a little easier, not having to do it.

Let me set the stage here and paint you a word picture. I’m no developer; putting that out there from the start. I am good at problem solving though, and I’m a black belt at finding information online (surfing the interwebs). So, after some deliberation, I found the following, which might help with gathering group and user data to be used for Yammer network merges.
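
As a taste of where this is heading, the Yammer REST API exposes the data we’re after. Here’s a rough PowerShell sketch of pulling the group list (the bearer token is a placeholder you’d generate from a registered Yammer app):

# Call the Yammer REST API for groups (the token below is a placeholder)
$headers = @{ Authorization = "Bearer YOUR-YAMMER-API-TOKEN" }
$groups = Invoke-RestMethod -Uri "https://www.yammer.com/api/v1/groups.json" -Headers $headers
$groups | Select-Object id, full_name | Export-Csv "$env:temp\yammer-groups.csv" -NoTypeInformation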
