How to export user error data from Azure AD Connect with CSExport

A short post is a good post?! – the other day I had some problems with users synchronising to Azure AD via Azure AD Connect. Ultimately, Azure AD Connect was not able to meet the requirements of the particular solution; Microsoft Identity Manager (MIM) 2016 provides the final 5% of the configuration required for what turned out to be a complicated user+resource and user forest design.

In saying that though, during my troubleshooting, I was looking at ways to export the error data from Azure AD Connect. I wanted to have the data more accessible as sometimes looking at problematic users one by one isn’t ideal. Having it all in a CSV file makes it rather easy.

So here’s a short blog post on how to get that data out of Azure AD Connect to streamline troubleshooting purposes.


Azure AD Connect has a way to make things nice and easy, but, at the same time makes you want to pull your hair out. When digging a little, you can get the information that you want. However, at first, you could be presented with a whole bunch of errors like this:

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [UserPrincipalName]. Correct or remove the duplicate values in your local directory. Please refer to for more information on identifying objects with duplicate attribute values.

I believe it's Event ID 6941 in Event Viewer as well.

It’s not a complicated error. It’s rather self-explanatory. However, when you have a bunch of them (say anything more than 20 or so, as I said earlier), it’s easier to export them all for quick reference and faster review.


To export that error data to a CSV file, complete the following steps:

Open a cmd prompt 
CD (change directory) to "C:\Program Files\Microsoft Azure AD Sync\bin" 
Run: "CSExport "[Name of Connector]" [%temp%]\Errors-Export.xml /f:x" - without the [ ]

The name of the connector above can be found in the AADC Synchronisation Service.

Now to view that data in a nice CSV format, the following steps can be run to convert that into something more manageable:

Run: "CSExportAnalyzer [%temp%]\Errors-Export.xml > [%temp%]\Errors-Export.csv" - again, without the [ ] 
You now have a file in your [%temp%] directory named "Errors-Export.csv".
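For reference, here's the whole sequence as a single hedged sketch you could paste into an elevated PowerShell prompt. The connector name "contoso.com - AAD" is a placeholder; use the name shown in your own Synchronisation Service console:

# Paths assume a default Azure AD Connect install; run elevated
cd "C:\Program Files\Microsoft Azure AD Sync\bin"

# Export the connector space errors to XML, using the /f:x switch from the steps above
.\CSExport.exe "contoso.com - AAD" "$env:temp\Errors-Export.xml" /f:x

# Convert the XML export into a CSV for easier review
.\CSExportAnalyzer.exe "$env:temp\Errors-Export.xml" > "$env:temp\Errors-Export.csv"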

Happy days!

Final words

So a short blog post, but I think a valuable one, in that getting the info into a more easily digestible format should result in faster troubleshooting. In saying that, this doesn’t give you all errors in all areas of AADC. Enjoy!

Best, Lucian

Originally posted on Lucian’s blog at Follow Lucian on Twitter @LucianFrango.


Avoiding Windows service accounts with static passwords using GMSAs

One of the benefits of an Active Directory (AD) running with only Windows Server 2012 domain controllers is the use of ‘Group Managed Service Accounts’ (GMSAs).

GMSAs can essentially execute applications and services much like an Active Directory user account running as a ‘service account’.  GMSAs store their 120-character passwords using the Key Distribution Service (KDS) on Windows Server 2012 DCs and periodically refresh these passwords for extra security (and that refresh interval is configurable).

This essentially provides the following benefits:

  1. Eliminates the need for Administrators to store static service account passwords in a ‘password vault’
  2. Increased security as the password is refreshed automatically and that refresh interval is configurable (you can tell it to refresh the password every day if you want to)
  3. The password is not known even to administrators, so there is no chance for attackers to hijack the GMSA account and ‘hide their tracks’ by logging in as that account on other Windows Servers or applications
  4. An extremely long character password which would require a lot of computing power & time to break

There is still overhead in using GMSA versus a traditional AD user account:

  1. Not all applications or services support GMSA, so if the application does not document its supportability, you will need to test its use in a lab
  2. Increased overhead in the upfront configuration and testing to get them working, versus a simple AD user account creation
  3. GMSA bugs (see Appendix)

I recently had some time to develop & run a PowerShell script under Task Scheduler, but I wanted to use GMSA to run the job under a service account whose password would not be known to any administrator and would refresh automatically (every 30 days or so).

There are quite a few blogs out there on GMSA, including this excellent PFE blog from MS from 2012 and the official TechNet library.

My blog is really a ‘beginners guide’ to GMSA in working with it in a simple Task Scheduler scenario.  I had some interesting learnings using GMSA for the first time that I thought would prove useful, plus some sample commands in other blogs are not 100% accurate.

This blog will run through the following steps:

  1. Create a GMSA and link it to two Windows Servers
  2. ‘Install’ the GMSA on the Windows Servers and test it
  3. Create a Task Schedule job and have it execute under the GMSA
  4. Execute a GMSA refresh password and verify Task Schedule job will still execute

An appendix at the end will briefly discuss issues I’m still having though running a GMSA in conjunction with an Active Directory security group (i.e. using an AD Group instead of direct server memberships to the GMSA object).

A GMSA essentially shares many attributes with a computer account in Active Directory, but it still operates as a distinct AD class object. Therefore, its use is still limited to a handful of Windows applications and services. It seems the following apps and services can run under GMSA, but first check and test to ensure your particular workload supports it:

  • A Windows Service
  • An IIS Application Pool
  • SQL 2012
  • ADFS 3.0 (although the creation and use of GMSA using ADFS 3.0 is quite ‘wizard driven’ and invisible to admins)
  • Task Scheduler jobs

This blog will create a GMSA manually, and allow two Windows Servers to retrieve the password to that single GMSA and use it to operate two Task Schedule jobs, one per each server.

Step 1: Create your KDS root key & Prep Environment

A KDS root key is required to work with GMSA.  If you’re in a shared lab, this may already have been generated.  You can check with the PowerShell command (run under ‘Run As Administrator’ with Domain Admin rights):
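A minimal check (assuming the cmdlet the author is referring to is Get-KdsRootKey, from the KDS tooling on Server 2012 and later):

# Run elevated as a Domain Admin; an existing key with an EffectiveTime in the past means you can skip this step
Get-KdsRootKey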


If you get output similar to the following, you may skip this step for the entire forest:


If there is no KDS root key present (or it has expired), the command to create the KDS root key for the entire AD forest (from which all GMSAs derive their passwords) is as follows:

Add-KdsRootKey -EffectiveImmediately

The ‘EffectiveImmediately’ switch is documented as potentially needing up to 10 hours to take effect, to allow for Domain Controller replication; however, you can speed up the process (if you’re in a lab) by following this link.
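For a lab only, the commonly referenced shortcut (and presumably what that link describes) is to backdate the key's effective time so the KDS considers it usable immediately:

# Lab-only shortcut: backdate the root key by 10 hours so it is effective immediately
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))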

The next few steps will assume you have the following configured:

  • Domain Admins rights
  • PowerShell loaded with ‘Run as Administrator’
  • Active Directory PowerShell module loaded with command:
    • import-module activedirectory


Step 2: Create a GMSA and link it to two (or more) Windows Servers

This step creates the GMSA object in AD, and links two Windows Servers to be able to retrieve (and therefore login) as that GMSA on those servers to execute the Task Schedule job.

The following commands will create the GMSA and link both servers to it:

$server1 = Get-ADComputer <Server1 NETBIOS name>

$server2 = Get-ADComputer <Server2 NETBIOS name>

New-ADServiceAccount -name gmsa-pwdexpiry -DNSHostName gmsa-pwdexpiry.domain.lab -PrincipalsAllowedToRetrieveManagedPassword $server1,$server2

Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties PrincipalsAllowedToRetrieveManagedPassword

You should get an output similar to the following:


The important verification step is to ensure the ‘PrincipalsAllowed…’ value contains all LDAP paths to the Windows Servers who wish to use the GMSA (the ones specified as variables).

The GMSA object will get added by default to the ‘Managed Service Accounts’ container object in the root of the domain (unless you specify the ‘-path’ switch to tell it to install it to a custom OU).



  1. To reiterate, many blogs point out that you can use the switch ‘PrincipalsAllowedToRetrieveManagedPassword’ (almost the longest switch name I’ve ever encountered!) to specify an AD group name. I’m having issues using that switch to specify and work with an AD group instead of direct computer account memberships to the GMSA. I run through those issues in the Appendix.
  2. A lot of blogs state you can just specify the server NETBIOS names for the ‘principals’ switch, however I’ve found you need to first retrieve the AD objects using the ‘Get-ADComputer’ commands
  3. I did not specify a Service Principal Name (SPN) as my Task Scheduler job does not require one, however be sure to do so if you’re executing an application or service requiring one
  4. I accepted the default password refresh interval of 30 days without specifying a custom password refresh interval (viewable in the attribute value: ‘msDS-ManagedPasswordInterval’).  Custom refresh intervals can only be specified during GMSA creation from what I’ve read (a topic for a future blog!).
  5. Be sure to separate the two computer account variables with a comma and no space

OPTIONAL Step 2A: Adding or Removing Computers to the GMSA

If you’ve created the GMSA but forgot to add a server account, you will need to modify the computer account membership of the GMSA, and I found the guidance from MS a little confusing here. In my testing I found you cannot really add or remove individual computers without re-specifying every computer that should be in the membership list.

You can use this command to update an existing GMSA, but you will still need to specify EVERY computer that should be able to retrieve the password for that GMSA.

For example, if I wanted to add a third server to use the GMSA I would still need to re-add all the existing servers using the ‘Set-ADServiceAccount’ command:

$server1 = Get-ADComputer <Server1 NETBIOS name>

$server2 = Get-ADComputer <Server2 NETBIOS name>

$server3 = Get-ADComputer <Server3 NETBIOS name>

Set-ADServiceAccount -Identity gmsa-pwdexpiry -PrincipalsAllowedToRetrieveManagedPassword $server1,$server2,$server3

(Also another reason why I want to work with an AD group instead!)

Step 3: ‘Install’ the Service Account

According to Microsoft TechNet, the ‘Install-ADServiceAccount’ “makes the required changes locally that the service account password can be periodically reset by the computer”.

I’m not 100% sure what these local changes to the Windows Server are, but after you run the command, the Windows Server will have permission to reset the password of the GMSA.

You run this command on a Windows Server (which should already be in the ‘PrincipalsAllowed…’ list of computers stored on the GMSA):

Install-ADServiceAccount gmsa-pwdexpiry


After you run this command, verify that both the ‘PrincipalsAllowed…’ switch and ‘Install’ commands are properly configured for this Windows Server:

Test-ADServiceAccount gmsa-pwdexpiry


A value of ‘True’ for the Test command means that this server can now use the GMSA to execute the Task Scheduler.  A value of ‘False’ means that either the Windows Server was not added to the ‘Principals’ list (using either ‘New-ADServiceAccount’ or ‘Set-ADServiceAccount’) or the ‘Install-ADServiceAccount’ command did not execute properly.

Finally, in order to execute Task Scheduler jobs, be sure also to add the GMSA to the local security policy (or GPO) to be assigned the right: ‘Log on as batch job’:



Without this last step, the GMSA account will log in to the Windows Server properly, but the Task Scheduler job will not execute as the GMSA will not have the permission to do so.  If the Windows Server is a Domain Controller, then you will need to use a GPO (either the ‘Default Domain Controllers Policy’ GPO or a new GPO).

Step 4:  Create the Task Schedule Job to run under GMSA

Windows Task Scheduler (at least on Windows Server 2012) does not allow you to specify a GMSA using the GUI.  Instead, you have to create the Task Schedule job using PowerShell.  If you create the job using the GUI, it will prompt you for a password when you go to save it (a password you will never have!).

The following four commands will instead create the Task Schedule job to execute an example PowerShell script and specifies the GMSA object to run under (using the $principal object):

$action = New-ScheduledTaskAction powershell.exe -Argument "-file c:\Scripts\Script.ps1" -WorkingDirectory "C:\WINDOWS\system32\WindowsPowerShell\v1.0"

$trigger = New-ScheduledTaskTrigger -At 12:00 -Daily

$principal = New-ScheduledTaskPrincipal -UserID domain.lab\gmsa-pwdexpiry$ -LogonType Password -RunLevel Highest

Register-ScheduledTask myAdminTask -Action $action -Trigger $trigger -Principal $principal




  1. Be sure to replace the ‘domain.lab’ with the FQDN of your domain and other variables such as script path & name
  2. It’s optional to use the switch: ‘-RunLevel highest’.  This just sets the job to ‘Run with highest privileges’.
  3. Be sure to specify a ‘$’ symbol after the GMSA name for the ‘-UserID’.  I also had to specify the FQDN instead of the NETBIOS for the domain name as well.
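To kick the job off manually and confirm it ran under the GMSA, something like the following should work (cmdlet names from the built-in ScheduledTasks module):

# Run the job immediately rather than waiting for the trigger
Start-ScheduledTask -TaskName "myAdminTask"

# Check the last run time and result code (a LastTaskResult of 0 means success)
Get-ScheduledTaskInfo -TaskName "myAdminTask"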

Step 5: Kick the tyres! (aka test test test)

Yes, when you’re using GMSA you need to be confident that you’re leaving something that is going to work even when the password expires.

Some common tasks that I like to perform to verify the GMSA is working include:

Force a GMSA password change:

You can force the GMSA to reset its password by running the command:

Reset-ADServiceAccountPassword gmsa-pwdexpiry

You can then verify the time date of the last password set by running the command:

Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties passwordlastset

The value will be next to the ‘PasswordLastSet’ field:


After forcing a password reset, I would initiate a Task Schedule job execution and be sure that it operates without failure.

Verify Last Login Time

You can also verify that the GMSA is logging in properly to the server by checking the ‘LastLogonDate’ value:

Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties LastLogonDate

View all Properties

Finally, if you’re curious as to what else that object stores, then this is the best method to review all values of the GMSA:

 Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties *

I would not recommend using ADSIEdit to review most GMSA attributes as I find that GUI is limited in showing the correct values for those objects, e.g. this is what happens when you view the ‘principals…’ value using ADSIEdit (called msDS-GroupMSAMembership in ADSI):


Appendix:  Why can’t I use an AD group with the switch: PrincipalsAllowedTo..?

Simply: you can! Just a word of warning.  I’ve been having intermittent issues in my lab with using AD groups.  I decided to base my blog purely on direct computer account memberships to the GMSA as I’ve not had an issue with that approach.

I find that the commands ‘Install-ADServiceAccount’ and ‘Test-ADServiceAccount’ sometimes fail when I use group memberships.  Feel free to try it, however; it may be due to issues in my lab.  In preparing this blog, I could not provide a screenshot of the issues as they’d mysteriously resolved themselves overnight (the worst kind of bug, an intermittent one!).

You can easily run the command to create a GMSA with a security group membership (e.g. ‘pwdexpiry’) as the sole ‘PrincipalsAllowed…’ object:
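A hedged sketch of what that might look like, assuming a security group named ‘pwdexpiry’ that contains the relevant computer accounts (and that the GMSA doesn't already exist):

# Use an AD group as the sole principal allowed to retrieve the managed password
$group = Get-ADGroup pwdexpiry
New-ADServiceAccount -Name gmsa-pwdexpiry -DNSHostName gmsa-pwdexpiry.domain.lab -PrincipalsAllowedToRetrieveManagedPassword $group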


Then try running ‘Install-ADServiceAccount’ and ‘Test-ADServiceAccount’ on the Windows Servers whose computer accounts are members of that group.

Good luck!











Run Chromium OS without having to buy a Chromebook thanks to CloudReady

Thanks to the good folks at Neverware, you can now run Google’s cloud-centric OS on a wider range of hardware than Chromebooks alone. To enable this, Neverware have repackaged Google’s Chromium operating system, the OS at the core of Google’s range of branded Chromebook laptops, and made it available to all.

CloudReady running on different hardware.

The differences

Where Google build and maintain open source versions of Android and Chromium, their real value proposition is to add proprietary features onto both before selling them on branded devices. Enter CloudReady, based entirely on the open source core of Chromium, making it a vanilla experience. Given its nature, not all features are available in the first release; examples are Powerwash and Trusted Platform Module support. A full list of differences is available from Neverware, of course.

Software updates

Updates to CloudReady are delivered in a similarly transparent manner as with Chromium; however, these updates come from Neverware, not Google. CloudReady also sits several major releases behind Chromium for development reasons. It is worth noting that Neverware have somewhat boldly committed to “indefinite” support for the OS.


Neverware are focused on generating revenue through selling devices and OS licenses, as well as support to education and, at a later date, the enterprise. The caveat, however, is that there is currently no official support for the free version; you will have to look to community support through their user forum.

CloudReady recovery media creator.


CloudReady is available for download from Neverware. Installing it is just a matter of creating a USB-based installer from which to boot and launch the process. This procedure is supported on Chrome OS, Windows or Mac devices. Once you have created the source media, you will need to reboot the target system and have it boot from the relevant USB port by applying the required BIOS settings. Alternatively, CloudReady can also be dual booted alongside other operating systems. Detailed installation instructions are available from their web site.


CloudReady installer.


Neverware have made available a number of useful hardware support lists.

Chromium OS is an open-source project that aims to build an operating system that provides a fast, simple, and more secure computing experience for people who spend most of their time on the web. Here you can review the project’s design docs, obtain the source code, and contribute. –

Neverware is a venture-backed technology company that provides a service to make old PCs run like new. In February 2015 the company launched its second product, CloudReady; an operating system built on Google’s open source operating system Chromium.

CloudReady can be installed on older PCs in order to make them perform like a Chromebook. CloudReady machines can even be managed under the Google Admin console, which is a true line of demarcation from just installing Chrome. It was founded by CEO Jonathan Hefter and currently specializes in the education sector. It is headquartered in the Flatiron District of Manhattan. – Wikipedia


An Approach to DevOps Adoption

Originally posted at Chandra’s blog –

DevOps has been a buzzword for a while now in the tech industry, with many organizations joining the bandwagon and working towards embracing DevOps practices.

Wikipedia describes DevOps as “a practice that emphasizes the collaboration and communication of the IT professionals across the value chain while automating the process of software delivery and infrastructure changes. The aim is to deliver the software quickly and reliably.”

However, in an Enterprise scenario with the complexity involved, the journey to implement DevOps comprehensively is evolutionary.  Hence, it is only sensible to drive along an incremental adoption path. Each increment has to provide the most benefits through the MVP (Minimum Viable Product) delivered towards the DevOps journey.

In this context, this article attempts to explain the initial steps towards the larger DevOps journey and helps to get a head start.

The approach at high-level consists of four major steps:

  1. Value stream mapping – Mapping the existing process workflows
  2. Future state value stream mapping – Identify the immediate goals and visualize the optimized value stream map
  3. Execution – Incremental approach towards the implementation
  4. Retrospection – Review and learn.

OK, let’s get started!

Value Stream Mapping 

Value stream mapping is a lean improvement strategy that maps the processes and information flows of a product from source to delivery. For software delivery, it is the pre-defined path an idea takes to transform into a software product/service delivered to the end customers.

Value stream mapping exercise of the current services delivered serves as the first step towards the DevOps journey. It helps in identifying the overall time taken by the value chain, the process cycle time and the lead time between the processes involved in the software delivery.  It also captures various process specific metrics along the value chain.

Quite obviously, the exercise involves collaboration with multiple stakeholders of the application lifecycle management to gather the details and at the same time align them to a shared goal. In fact, it sets the stage for the larger collaboration between the parties as the journey progresses.

The picture below depicts a typical software development workflow at a high level, agnostic to the development methodology. Depending on the type of change or the product, one or more steps may not be relevant to the application’s lifecycle.

Software Lifecycle

Value stream mapping provides key insights on the overall performance of the value chain. Details include:

  • The process and information flow
  • Overall timeline from an idea generation to release
  • Fastest and slowest processes
  • Shortest and longest lead times.

The insights will pave the path to the definition of strategic goals and the immediate goals that will help optimize the overall value chain. The next steps describe the future state value stream mapping and the execution methodology that focuses on breaking the silos to create a people and technology-driven culture for accelerated processes.

Future state Value Stream Mapping

The future state value stream mapping focuses on defining the immediate goals for optimising the value chain. The optimization focuses on improving the product/service quality delivered through each process, to improve the cycle times and the lead time, etc. Remember, the aim is to deliver quickly and reliably.

While the easiest route is to target those processes that are the most time-consuming, it is imperative to analyse multiple aspects of each process before considering the options. Below are a few metrics that could be considered to evaluate the options and build the future state value stream map. The optimisation options are to be balanced against the metrics to arrive at the final set that will be part of the execution.



Ideation to action – This next step in the journey is broken down into three key aspects:

  1. Backlog grooming
  2. Shared responsibility model and
  3. Implementation.

Backlog grooming

This involves user story detailing and a backlog creation for the areas identified for optimization in collaboration with all the parties involved. The backlog has to be put to action either in sprints or in a Kanban approach depending on how you want to manage the execution.

The following describes how the backlog could be driven forward for execution.

Next steps

Shared responsibility model

It is vital that a culture of collaboration and communication between the stakeholders is nurtured for a successful DevOps journey. A shared responsibility model is equally important. It overlaps the services delivered by each of the stakeholders involved, which earlier used to be in silos. Below is a depiction on how the shared responsibility model evolves with the DevOps adoption.

Shared responsibilities

As you may figure out, during the exercise, most tasks could ideally be delivered by the operations team. However, tasks related to the design optimization, setting up continuous integration, implementing a test automation framework, etc. are part of the Development/QA community’s responsibilities.

The project management related tasks are more focused on nurturing the culture as well as providing the tools/methods for improving the collaboration and to provide the work in process visibility. The key is to bring together the teams involved (including business, development, quality assurance, service delivery, and operations) and build the necessary tools and technologies to drive the agile processes.

Again, depending on the organization, Scrum/Kanban methods can be implemented to execute the user stories/tasks.


Tools and technology implementation is one of the core aspects of a typical DevOps journey. Tooling provides the required automation across the ALM, accelerates adoption and makes the whole process sustainable. Needless to say, the implementation of the tools, and the remaining tasks, is driven through the shared responsibility model and the sprint plans put together.

Review and Retrospection

The last step in the cycle is to measure and map the outcomes achieved and update the value stream map to project the new realities. It is also important to review the improvement in the overall process transparency. A detailed review of the execution provides insights into areas of further development across all the facets of the process.

The last step of the cycle could well also be the first step of another iteration/increment: further optimising the value stream map while nurturing the DevOps culture and driving the journey forward.

Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios which had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target-state production email attributes now and create the Exchange Online mailboxes, we would have a problem nearing migration.

When the new domain is verified in Office365 & Exchange Online, new mail from staff already in Exchange Online would start delivering to the newly created mailboxes for the staff soon to be onboarded.

Not doing this would delay the project, which is something we didn’t want either.

I have proposed the following in order to create a scenario whereby cutover to Exchange Online for the new domain is quick, as well as not causing user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that will allow mail flow to continue in all scenarios up until cutover. (I will come back to this later).


We have configured the AD user objects in the following way

  1. UserPrincipalName – username@localdomainname.local
  2. mail –
  3. targetaddress –

We have configured the remote mailbox objects in the following way

  1. mail –
  2. targetaddress –

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
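Putting the accepted domain settings above into commands, a hedged sketch might look like this ("newdomain.com" is a placeholder; the first command runs in the on-premises Exchange Management Shell, the second in an Exchange Online PowerShell session):

# On-premises Exchange: relay mail for the new domain externally when no local recipient matches
Set-AcceptedDomain -Identity "newdomain.com" -DomainType ExternalRelay

# Exchange Online: treat the new domain as internal relay during co-existence
Set-AcceptedDomain -Identity "newdomain.com" -DomainType InternalRelay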

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office365 without impacting existing or new users. By setting the UPN & mail attributes to, Office365 & Exchange Online do not (yet) reference the newly onboarded domain to these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the mail is configured
      2. The accepted domain is configured as external relay
      3. Because of this, the on-premises exchange environment will forward the message externally.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment to Exchange Online by using a target address of . The project is no longer at risk of being delayed! 🙂

On the night of cutover of MX records to Exchange Online (or, in this case, a 3rd party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects, this time to modify the user accounts for production use. (We are using a different CSV import file to achieve this.)
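That PowerShell isn't reproduced here, but as a hedged sketch of the general idea, a CSV-driven update of the on-premises AD attributes might look like the following (the CSV columns and file name are hypothetical; the remote mailbox/proxy address side would be handled separately with the Exchange cmdlets):

# Assumed CSV columns: SamAccountName, NewUPN, NewPrimarySmtp, NewTargetAddress
Import-Csv .\cutover-users.csv | ForEach-Object {
    Set-ADUser -Identity $_.SamAccountName `
        -UserPrincipalName $_.NewUPN `
        -EmailAddress $_.NewPrimarySmtp `
        -Replace @{ targetAddress = "SMTP:$($_.NewTargetAddress)" }
}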

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName –
  2. mail –
  3. targetaddress –

We have configured the remote mailbox objects in the following way

  1. mail
    1. (primary)
  2. targetaddress –

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay

NOTE: AAD Connect sync is now run and a manual validation completed against both on-premises AD & Exchange, as well as Azure AD & Exchange Online, to confirm that the user updates have been successful.

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.

Fixing the Windows 10 Insider 14946 Bitdefender Update Issue

I have been part of the Windows 10 Insider program for some time now, and as usual the time had come around again to install the latest fast ring update 14946.

However, when I went to download the update via the usual Windows Update channel, I found I could not download the update at all (or rather, the progress bar showed zero progress).

I started to go looking for an explanation and came across the following post on the Microsoft Forum site.

I am running Bitdefender 2016 on my machine, so I guessed this might be the problem. Now, I didn’t want to leave my machine unprotected, so I thought I would see if I could get the problem fixed.

  1. Run a repair of Bitdefender 2016
    • Open control panel and launch Programs & Features
    • Locate Bitdefender 2016 and select Uninstall (This won’t uninstall the product)
    • Choose the repair option in the popup menu
    • Restart the computer when prompted
    • Update Bitdefender once the machine has restarted
  2. Open the Bitdefender control panel from the desktop or taskbar
    • Open the Firewall module using the modules button on the front panel
    • Click the gear icon next to the firewall module
    • Select the adaptors tab
  3. Update the wifi or ethernet connection that is active to the following
    • Network Type – Trusted or home/office
    • Stealth Mode – Off
    • Generic – On
  4. Close the Bitdefender control panel

Note: I needed to toggle the firewall module off/on before I could edit the network adapter configuration. I was running a ping to my local gateway while making changes to the Bitdefender adapter configuration in order to see when the network connection became active.

Hope this helps some of you out as well.



DataWeave: Tips and tricks from the field

DataWeave (DW) has been part of the MuleSoft Anypoint Platform since v3.7.0 and has been a welcome enhancement providing an order of magnitude improvement in performance as well as increased mapping capability that enables more efficient flow design.

However, like most new features of this scope and size (i.e. a brand new transformation engine written from the ground up), early documentation was minimal and often we were left to figure things out ourselves. At times even the simplest mapping scenarios could take an hour or so to solve, something that would have taken 5 mins in DataMapper. But it pays to stick with it and push on through the adoption hurdle, as the payoff is worth it in the end.

For those starting out with DataWeave here are some links to get you going:

In this post I will share some tips and tricks I have picked up from the field with the aim that I can give someone out there a few hours of their life back.

Tip #1 – Use the identity transform to check how DW performs its intermediate parsing

When starting any new DW transform, it pays to capture and understand how DW will parse the inputs and present it to the transform engine. This helps navigate some of the implicit type conversions going on as well as better understand the data structure being traversed in your map. To do this, start off by using the following identity transform with an output type of application/dw.

Previewing a sample invoice xml yields the following output, which gives us insight into the internal data structure and type conversions performed by DW when parsing our sample payload.

and the output of the identity transform

Tip #2 – Handling conditional xml node lists

Mule developers who have been using DW even for a short time will be used to seeing these types of errors displayed in the editor:

Cannot coerce a :string to a :object

These often occur when we expect the input payload to be an array or complex data type, but a simple type (string in this case) is actually presented to the transform engine. In our invoice sample, this might occur when an optional xml nodelist contains no child nodes.

To troubleshoot this we would use the identity transform described above to gain insight into the intermediate form of our input payload. Notice the invoices element is no longer treated as a nodelist but rather a string.

We resolve this by checking if ns0#invoices is of type object and provide alternative output should the collection be empty.

Tip #3 – Explicitly setting the type of input reader DataWeave uses

Occasionally you will hit scenarios where the incoming message (input payload) doesn’t have a mimeType assigned, or DW cannot infer a reader class from the mimeType that is present. In these cases we’ll either get an exception thrown or unpredictable behaviour from our transform. To avoid this, we should be in the habit of setting the mimeType of the input payload explicitly. At present we can’t do this in the graphical editor; we need to edit the configuration xml directly and add the following attribute to the <dw:input-payload> element of our transform shape

<dw:input-payload doc:sample="xml_1.xml" mimeType="application/xml" />

Tip #4 – Register custom formats as types (e.g. datetime formats)

Hopefully we are blessed to be always working against strongly typed message schema where discovery and debate over the data formats of the output fields never happen…yeah right. Too often we need to tweak the output format of data fields a couple of times during the development cycle. In large transforms, this may mean applying the same change to several fields throughout the map, only to come back and change this again the next day. To save you time and better organise your DW maps, we should look to declare common format types in the transform header and reference those throughout the map. If we need to tweak this we apply the change in one central location.

Tip #5 – Move complex processing logic to MEL functions

Sometimes even the most straight forward of transformations can lead to messy chains of functions that are hard to read, difficult to troubleshoot and often error prone. When finding myself falling into these scenarios I look to pull out this logic and move it into a more manageable MEL function. This not only cleans up the DW map but also provides opportunity to place debug points in our MEL function to assist with troubleshooting a misbehaving transform.

Place your MEL function in your flow’s configuration file at the start along with any other config elements.

Call your function as you would if you declared it inline in your map.

Tip #6 – Avoid the graphical drag and drop tool in the editor

One final tip, and something I find myself regularly doing, is to avoid the graphical drag and drop tool in the editor. I’m sure this will be fixed in later versions of DataWeave, but for now I find it creates untidy code that I often end up fighting with the editor to clean up. I would only typically use the graphical editor to map multiple fields en masse and then cut and paste the code into my text editor, clean it up and paste it back into DW. From then on, I am working entirely in the code window.

There we go: in this post I have outlined six tips that I hope will save at least one time-poor developer a couple of hours, which could be much better spent getting on with the business of delivering integration solutions for their customers. Please feel free to contribute more tips in the comments section below.

How to assign and remove user Office365 licenses using the AzureADPreview Powershell Module

A couple of months ago the AzureADPreview module was released. The first cmdlet that I experimented with was Set-AzureADUserLicense. And it didn’t work; there were no working examples, so I gave up and used the Graph API instead.

Since then the AzureADPreview has gone through a number of revisions and I’ve been messing around a little with each update. The Set-AzureADUserLicense cmdlet has been my litmus test. Now that I have both removing and assigning Office 365 licenses working I’ll save others the pain of working it out and give a couple of working examples.


If like me you have been experimenting with the AzureADPreview module you’ll need to force the install of the newest one. And for whatever reason I was getting an error informing me that it wasn’t signed. As I’m messing around in my dev sandpit I skipped the publisher check.
Install-Module -Name AzureADPreview -MinimumVersion -Force -SkipPublisherCheck
Import-Module AzureADPreview -RequiredVersion

Removing an Office 365 License from a User

Removing a license with Set-AzureADUserLicense looks something like this.
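The original snippet isn't reproduced here, so here's a minimal sketch of the pattern (assuming an existing Connect-AzureAD session, a hypothetical user UPN, and the ENTERPRISEPACK/E3 SKU):

# Look up the user and the SkuId of the license to remove
$user  = Get-AzureADUser -ObjectId "jane.citizen@contoso.com"
$skuId = (Get-AzureADSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }).SkuId

# Build an AssignedLicenses object with the SkuId in RemoveLicenses
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.RemoveLicenses = $skuId

Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $licenses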

What if there are multiple licenses? Similar concept, but looping through each one to remove it.
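Again as a hedged sketch, looping over whatever is currently assigned to the user might look like this:

$user = Get-AzureADUser -ObjectId "jane.citizen@contoso.com"

# Remove each assigned license one at a time
foreach ($assigned in $user.AssignedLicenses) {
    $licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
    $licenses.RemoveLicenses = $assigned.SkuId
    Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $licenses
}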

Assigning an Office 365 License to a User

Now that we have the removal of licenses sorted, how about adding licenses?

Assigning a license with Set-AzureADUserLicense looks something like this;
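Here's a corresponding sketch for assignment, with the same assumptions as above (note that the user needs a UsageLocation set before a license can be assigned):

# The user must have a usage location before licensing
Set-AzureADUser -ObjectId "jane.citizen@contoso.com" -UsageLocation "AU"

$skuId = (Get-AzureADSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }).SkuId

# Build an AssignedLicense for the SKU and wrap it in AssignedLicenses.AddLicenses
$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = $skuId
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license

Set-AzureADUserLicense -ObjectId "jane.citizen@contoso.com" -AssignedLicenses $licenses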

Moving forward, this AzureAD PowerShell module will replace the older MSOL module, as I wrote about here. If you’re writing new scripts it’s a good time to start using the new modules.

Follow Darren on Twitter @darrenjrobinson


Azure AD Connect: An error occurred executing configure AAD Sync task: user realm discovery failed

Yesterday (Tuesday October 11th, 2016) I started a routine install of Azure AD Connect. This project is for an upgrade from FIM 2010 R2 for a long-time client, if you were wondering.

Unfortunately, at the end of the process, when essentially the final part of the install (the “Configure” step) was running, I ran into some trouble.

Strike 1

I received the following error:


An error occurred executing Configure AAD Sync task: user_realm_discovery_failed: User realm discovery failed

This happened with the current, as of this blog post, version of Azure AD Connect: (release: Sep 7th 2016).

I did a quick Google, as you do, and found a few articles that matched the same error. The first one I went to was by Mark Parris. His blog post (available here) had all the makings of a solution. I followed those instructions to try and resolve my issue.

The solution required the following steps (with some of my own additions):

  • Open Explorer
  • Navigate to the following path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\config
  • Find the file called machine.config
    • Create a new folder in the same directory called “Original”
    • Copy the file there as a backup in case things go wrong
  • Open machine.config with notepad to edit it
  • At the bottom, before the </configuration>, enter in the following:
<defaultProxy enabled="false"></defaultProxy>
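As a side note, the backup part of those steps can be scripted; a quick hedged sketch in an elevated PowerShell prompt might be:

# Back up machine.config before editing it
$cfgDir = "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\config"
New-Item -ItemType Directory -Path (Join-Path $cfgDir "Original") -Force | Out-Null
Copy-Item -Path (Join-Path $cfgDir "machine.config") -Destination (Join-Path $cfgDir "Original\machine.config")
notepad (Join-Path $cfgDir "machine.config")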

I followed those instructions and saved the file to my desktop. I removed the .txt extension that Notepad had added and, before copying the file back into the directory, I checked the logs to make sure I was indeed having the same issue. Small error on my part there (I’ll just blame it on the jet lag); the log file is something I should have checked first. Nevertheless, I checked.

Strange. I didn’t have the same error as Mark. My install had the following error:

Operation failed. The remote server returned an error: (400) Bad Request.. Retrying…Exception = TraceEventType = RegistrationAgentType = NotAnAgentServiceType

I know my client was using a proxy though, so, for the sake of testing, as this deployment was in staging mode anyway, I copied the machine.config file from the server desktop to the path and overwrote the file. My logic was that the .NET config with proxy “false” setting would bypass the proxy. Unfortunately that was not the case.

Strike 2

I selected the retry option in the Azure AD Connect installer and waited for the result. Not quite the same error, but, an error nonetheless:


An error occurred executing Configure AAD Sync task: the given key was not present in the dictionary.

What a dictionary has to do with Azure AD Connect is beyond me. Technology is complicated, so I didn’t judge it. I went back to a few of the other search results in Google. One of them was from Tarek El-Touny. His blog post (available here) was similar to Mark’s, but had some different options regarding the proxy settings in the machine.config file.

Here’s what Tarek suggested to enter in that machine.config file:

        <defaultProxy enabled="true" useDefaultCredentials="true">

Since I did have a proxy, this made more sense. Originally with the proxy “false” setting, I thought that would bypass or disable usage of the proxy. I was wrong.

Here’s a sample of the machine.config file for your reference:


After I amended the machine.config file and saved it to the correct location, I started another retry of the installation. This time there was good news! It successfully finalised the installation and went on to configuration.

Final words

Over the last few months I’ve not done any hands-on technical work. Yes, some architecture and design, some work orders and pre-sales, but to get back on the tools was good. Unfortunately for me, I was a little rusty, but thanks to the blogosphere out there and the brains trust of other awesome people, I managed to work my way through the problem.

Best, Lucian

Originally posted on Lucian’s blog at Follow Lucian on Twitter @LucianFrango.

Azure networking VNET architecture best practice update (post #MSIgnite 2016)

During Microsoft Ignite 2016 I attended a few Azure networking architecture sessions. Towards the end of the week the sessions did overlap some content, which was not ideal, but a key message was there: an interesting bit of reference architecture guidance.

Of note and relevant to this blog post:

  • Migrate and disaster recover Azure workloads using Operations Management Suite by Mahesh Unnifrishan, Microsoft Program Manager
  • Review ExpressRoute for Office 365 configuration (routing, proxy and network security) by Paul Andrew, Senior Product Marketing Manager
  • Run highly available solutions on Microsoft Azure by Igal Figlin, Principal PM- Availability, Scalability and Performance on Azure
  • Gain insight into real-world usage of the Microsoft cloud using Azure ExpressRoute by Bala Natarajan, Microsoft Program Manager
  • Achieve high-performance data centre expansion with Azure Networking by Narayan Annamalai, Principal PM Manager, Microsoft


For the last few years there has been one piece of design around Azure Virtual Networks (VNETs) that caused angst. When designing a reference architecture for VNETs, creating multi-tiered solutions was generally not recommended. For the most part, a single VNET with multiple subnets was the norm (from the Microsoft solution architects I spoke with). This didn’t scale well across regions and required multiple, repetitive configurations when working at scale, or, to use Microsoft’s buzzword from the conference, hyper-scale.


VNet Peering

At Microsoft Ignite, VNET peering was made generally available on September 28th (reference and official statement).  VNET peering allows for the connectivity of VNETs in the same region without the need for a gateway. It extends the network segment, essentially allowing all communication between the VNETs as if they were a single network. Each VNET is still managed independently of the other; NSGs, for example, would need to be managed on each VNET.

Extending across regions is still the biggest issue: VNET peering is currently limited to VNETs in the same region. When cross-region peering comes, it will be another game changer. Amazon Web Services also has VPC peering, but it too is limited to a single region, so Microsoft has caught up in this regard.

Interesting and novel designs can now be achieved with VNET peering.

Hub and spoke

I’m not a specialist network guy. I’ve done various Cisco studies and never committed to getting certified, but, did enough to be dangerous!

VNET peering has one major advantage: the ability to centralise shared resources, such as network virtual appliances.

A standard network topology design known as hub and spoke features centralised provisioning of core components in a hub network with additional networks in spokes stemming from the core.


Larger customers opt to use virtual firewalls (Palo Alto or F5 firewall appliances) or load balancers (F5 BIG-IPs), as network teams are generally well skilled in these, and re-learning practices in Azure is time-consuming and costly.

Now Microsoft, via program managers on several occasions, recommends a new standard practice of using the hub and spoke network topology and leveraging the ability to centrally host shared network components. This could even extend to centrally hosting certain logically segmented areas, for example a DMZ segment.

I repeat: a recommended network design for most environments is generally a hub and spoke leveraging VNET peering and centralising shared resources in the hub VNET.
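As a hedged sketch of what the peering itself looks like with the 2016-era AzureRM PowerShell module (VNET and resource group names are placeholders), the hub and one spoke would be peered in both directions:

# Look up the hub and spoke VNETs
$hub   = Get-AzureRmVirtualNetwork -Name "vnet-hub" -ResourceGroupName "rg-network"
$spoke = Get-AzureRmVirtualNetwork -Name "vnet-spoke1" -ResourceGroupName "rg-network"

# Peering is created in both directions, one on each VNET
Add-AzureRmVirtualNetworkPeering -Name "hub-to-spoke1" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id -AllowForwardedTraffic
Add-AzureRmVirtualNetworkPeering -Name "spoke1-to-hub" -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id -AllowForwardedTraffic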


These new possibilities allow for some awesome network architecture designs. Important to note, though, is that there are limits imposed.

Speaking with various program managers, the limits in most services are there as a guide and form a logical understanding of what can be achieved. However, in most cases these can be raised through discussion with Microsoft.

The limits on VNET peering apply to two areas. The first is the number of networks able to be peered to a single network (currently 50). The second is the number of routes able to be advertised when using ExpressRoute and VNET peering. Review the following Azure documentation article for more info on these limits:

Finally, it’s important to note that not every network is identical and requirements change from customer to customer. It is just as important to implement consistent and proven architecture topologies that leverage the knowledge and experience of others. Basically, stand on the shoulders of giants.