Scripting the generation & creation of Microsoft Identity Manager Sets/Workflows/Sync & Management Policy Rules with the Lithnet Resource Management PowerShell Module

Introduction

Yes, that title is quite a mouthful. And this post is going to be quite long. But worth the read if you are having to create a number of rules in Microsoft/Forefront Identity Manager, or even more so the same rule in multiple environments (eg. Dev, Staging, Production).

My colleague David Minnelli recently introduced using the Lithnet RMA PowerShell Module and the Import-RMConfig cmdlet for bulk creation of MIM Sets and MPRs. David covers the background on Import-RMConfig and getting started with it; give that a read for a more detailed introduction.

In this post I detail using Import-RMConfig to create a Set, Workflow, Synchronization Rule and Management Policy Rule to populate a Development AD Domain with Users from a Production AD Domain. This process is designed to run on a combined MIM Service/Sync Server. If your roles are separated (as they likely will be in a Production environment) you will need to run these scripts on the MIM Sync Server (so they can query the Management Agents), and you will need to add a line to connect to the MIM Service (eg. Set-ResourceManagementClient) at the beginning of the script.

In my environment I have two Active Directory Management Agents, each connected to an AD Forest as shown below.

On each of the AD MAs I have a Constant Flow Attribute (named Source) configured to flow in a value representing the source AD Forest. I'm doing this in my environment as I have more than one production forest (hence the need for automation); you could simply use the Domain attribute for the Set criteria instead. That attribute is used in the Set later on, so I'm mentioning it up front so it makes sense.

Overview

The Import-RMConfig cmdlet uses XML and XOML files that contain the configuration required to create the Set, Workflow, Sync Rule and MPR in the FIM/MIM Service. The order in which I approach the creation is: Sync Rule, Workflow, Set and finally the MPR.

Each of these objects, as indicated above, leverages an XML and/or XOML input file. I've simplified base templates and included them in the scripts.

The Sync Rule script includes a prompt to choose a folder (you can create one through the GUI presented) to store the XML/XOML files so that Import-RMConfig can use them. Once generated, you can simply reference the files with Import-RMConfig to replicate the creation in another environment.
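
To give a feel for that flow, here is a minimal sketch (not the full script) of the folder prompt and of replaying a saved file with Import-RMConfig. It assumes the Lithnet Resource Management module (LithnetRMA) is installed and, on a split-role deployment, that Set-ResourceManagementClient has already been called.

# Minimal sketch only: prompt for (or create) a folder to hold the generated
# XML/XOML files, then later replay a saved file with Import-RMConfig.
Import-Module LithnetRMA
Add-Type -AssemblyName System.Windows.Forms

$dialog = New-Object System.Windows.Forms.FolderBrowserDialog
$dialog.Description = 'Choose a folder to store the generated XML/XOML files'
$dialog.ShowNewFolderButton = $true
if ($dialog.ShowDialog() -eq [System.Windows.Forms.DialogResult]::OK) {
    $configPath = $dialog.SelectedPath
}

# Replaying a saved configuration in another environment is then trivial.
# -Preview validates the file and reports what would change without committing.
Import-RMConfig -File (Join-Path $configPath 'SyncRule.xml') -Preview -Verbose
Import-RMConfig -File (Join-Path $configPath 'SyncRule.xml') -Verbose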

Creating the Synchronization Rule

For creation of the Sync Rule we need to define which Management Agent will be the target for our Sync Rule. In my script I've automated that too (as I have a number to do): I query the MIM Sync Server for all its Active Directory MAs and then provide a dialog that lets you choose the target MA for the Sync Rule. That dialog simply looks like the one below.
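
A rough sketch of that MA selection is below. It assumes the script runs on the MIM Sync Server with the Lithnet MIIS Automation module available; the name filter and the ID property are assumptions from memory, so adjust them to suit how your MAs are named.

# Sketch: list the Sync Server's management agents and let the operator pick the
# target AD MA. Adjust the Where-Object filter to match your MA naming.
Import-Module LithnetMIISAutomation

$adMAs    = Get-ManagementAgent | Where-Object { $_.Name -like '*AD*' }
$targetMA = $adMAs | Out-GridView -Title 'Select the target Management Agent for the Sync Rule' -OutputMode Single

"Selected MA: $($targetMA.Name) ($($targetMA.ID))"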

Creating the Sync Rule finally asks you to give the Rule a name. This name is then used as the base Display Name for the Set, MPR and Workflow (and a truncated version as the Rule IDs).

The script below, in the $SyncRuleXML section, defines the rules of the Sync Rule. Mine is an Outbound Sync Rule, with a base set of attributes and transformations of the user's UPN and DN (for the differing Development AD namespace). Update lines 42 and 45 for the user's UPN and DN in your namespace.
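
The full template isn't reproduced here, but the pattern looks roughly like the skeleton below: the rule definition sits in a here-string, the chosen name and target MA are substituted in, the file is written to the chosen folder and imported. The element names follow the Lithnet configuration file schema as I recall it, and the XML is a skeleton only; the real file carries the connected system, flow direction, scoping and the UPN/DN transformations discussed above.

# Skeleton sketch of the $SyncRuleXML idea; $configPath and $targetMA come from the
# earlier folder and MA selection steps.
$ruleName = Read-Host 'Enter a name for the rules'

$SyncRuleXML = @"
<Lithnet.ResourceManagement.ConfigSync>
  <Operations>
    <ResourceOperation operation="Add Update" resourceType="SynchronizationRule" id="SyncRule">
      <AnchorAttributes>
        <AnchorAttribute>DisplayName</AnchorAttribute>
      </AnchorAttributes>
      <AttributeOperations>
        <AttributeOperation operation="replace" name="DisplayName">!__$ruleName</AttributeOperation>
        <!-- connected system ($($targetMA.ID)), outbound flow direction, object type
             scoping and the UPN/DN attribute transformations go here -->
      </AttributeOperations>
    </ResourceOperation>
  </Operations>
</Lithnet.ResourceManagement.ConfigSync>
"@

$syncRuleFile = Join-Path $configPath 'SyncRule.xml'
$SyncRuleXML | Out-File -FilePath $syncRuleFile -Encoding UTF8
Import-RMConfig -File $syncRuleFile -Verbose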

Creating the Workflow

The Workflow script is pretty self-explanatory: a simple Action-based workflow, shown below.
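
Rather than reproduce the XOML template, here is roughly what the generated object amounts to, expressed with the LithnetRMA object cmdlets (New-Resource/Save-Resource) instead of the XML file. Attribute and parameter names are from memory, so treat this as a sketch only.

# Sketch: the action WorkflowDefinition the template creates. $configPath and
# $ruleName come from the earlier steps; Workflow.xoml is the generated XOML holding
# the activity that applies the Synchronization Rule as an Expected Rule Entry.
$xoml = Get-Content -Path (Join-Path $configPath 'Workflow.xoml') -Raw

$workflow = New-Resource -ObjectType WorkflowDefinition
$workflow.DisplayName       = "$ruleName Workflow"
$workflow.RequestPhase      = 'Action'
$workflow.RunOnPolicyUpdate = $false
$workflow.XOML              = $xoml
Save-Resource $workflow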

Creating the Set

The Set is the group of objects that will be synchronized to the target management agent. As my Sync Rule is only for Users, my Set also contains only users. As stated in the Overview, I have an attribute that defines the authoritative source for the objects, and I'm using that in my Set criteria.
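
Again as a sketch (using the LithnetRMA object cmdlets rather than the XML template), the generated Set boils down to a criteria-based Set whose filter keys off that Source attribute. 'PRODUCTION' below is a placeholder for the constant-flow value; substitute the Domain attribute if that suits your environment better.

# Sketch: a criteria-based Set of users whose Source attribute matches the
# production forest value flowed in by the constant attribute flow.
$filter = '<Filter xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' +
          'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' +
          'Dialect="http://schemas.microsoft.com/2006/11/XPathFilterDialect" ' +
          'xmlns="http://schemas.microsoft.com/2006/11/ResourceManagement">' +
          "/Person[Source = 'PRODUCTION']</Filter>"

$set = New-Resource -ObjectType Set
$set.DisplayName = "$ruleName Users Set"
$set.Filter      = $filter
Save-Resource $set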

Creating the Management Policy Rule

The MPR ties everything together. Here’s that part of the script.
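
As a sketch of what that part generates (again via the object cmdlets, with attribute names from memory), the MPR is a Set Transition-In rule that fires the action workflow when an object enters the Set created above.

# Sketch: Set Transition-In MPR wiring the transition Set to the action workflow.
# The Get-Resource lookups assume the display names used in the previous sketches.
$set      = Get-Resource -ObjectType Set -AttributeName DisplayName -AttributeValue "$ruleName Users Set"
$workflow = Get-Resource -ObjectType WorkflowDefinition -AttributeName DisplayName -AttributeValue "$ruleName Workflow"

$mpr = New-Resource -ObjectType ManagementPolicyRule
$mpr.DisplayName              = "$ruleName MPR"
$mpr.ManagementPolicyRuleType = 'SetTransition'
$mpr.ActionType               = 'TransitionIn'
$mpr.ResourceFinalSet         = $set.ObjectID
$mpr.ActionWorkflowDefinition = $workflow.ObjectID
$mpr.GrantRight               = $false
$mpr.Disabled                 = $false
Save-Resource $mpr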

Tying them all together

Here is the end-to-end automation, and the raw script that you could use as the basis for automating similar rule generation. The Sync Rule could easily be updated for Contacts or Groups. Remember that the attributes and object classes are case sensitive.

  • Through the Browse for Folder dialog I created a new folder named ProvisionDevAD

  • I provided a Display Name for the rules

  • I chose the target Management Agent

  • The SyncRule, Workflow, Set and MPR are created. The whole thing takes seconds.

  • Script Complete

Let’s take a look at the completed objects through the MIM Portal.

Sync Rule

The Sync Rule is present as we named it, including the !__ prefix so it appears at the top of the list.

Outbound Sync Rule based on an MPR, Set and Workflow

The Resources will be created and, if deleted, de-provisioned.

And our base attribute flows.

Set

Our Set was created.

Our naming aligns with what we input

And a Criteria-based Set. As per the Overview, I have an attribute populated by a Constant flow rule that I based my Set on. You'll want to update this for your environment.

Workflow

The Action Workflow was created

All looks great

And it applies our Sync Rule

MPR

And finally our MPR. Created as a Transition In MPR with Action Workflow

Set Transition and naming all aligned

The Transition Set configured for the Set that was created

And the Workflow configured with the Workflow that was just created

Summary

When you have a lot of Sync Rules to create, or you know you will need to re-create them numerous times, potentially in different environments, automation is key. This just scratches the surface of what can be achieved, and it is made so much easier using the Lithnet PowerShell Modules.

Here's the full script. Note: you'll need to make a couple of minor changes as indicated earlier, but you should be able to create a Provisioning Rule end to end pretty quickly to validate the process. Then customize accordingly for your environment and requirements. Enjoy.

FIM: An object with DN “CN=BLAH” already exists in management agent “BLAH MA”

If you treat the FIM Synchronization Service well and your configuration is good, it will reward you and the ‘magic’ will happen. At my customer site, the ‘magic’ stopped working and I was faced with an increasing number of synchronization errors of the form ‘An object with DN “CN=<blah>” already exists in management agent “<blah> MA”’. For users that were already provisioned correctly, the FIM Synchronization Service was attempting to re-provision a duplicate object in the destination directory, but obviously the account already existed. Why was this occurring, and how could it be resolved? To be honest it stumped me for quite a while, until I remembered I had faced the same scenario a few years ago on another FIM Synchronization Service with many hundreds of thousands of objects.

Before I start, I’ll preface that this blog assumes a very good working knowledge of the Microsoft Forefront Identity Manager product, now known as Microsoft Identity Manager. Let’s delve deeper.

We should all know that, as best practice, when you make changes to any part of the FIM Synchronization Service configuration, or add or update FIM Service Synchronization Rules, you should run a Full Synchronization cycle to ensure the changes and rules are processed against all the objects in the solution. When you are working on a large solution with hundreds of thousands of objects, it is not always possible to run this Full Sync cycle at will; it typically needs to be scheduled as a change over a weekend, maybe even a long weekend! We had attempted a full synchronization cycle on this customer's solution in the past and it took just on two days to complete. So when you don't have the opportunity to perform this cycle, you use your prior knowledge and expertise to judge whether the solution can continue to function after a change has been made. In my case, the change was a new attribute flow in a Synchronization Rule which was not critical to the existing objects in the solution. So I thought I could avoid a Full Sync cycle, and everything appeared to be working as expected.

Over a few weeks these sync errors started to occur, as illustrated below, and they kept increasing until I got to around 40 errors and further troubleshooting action needed to be taken.

The customer had rules in the solution to detect when an existing user account was disabled and perform some functions in the FIM Service. If the user account was re-enabled, the ‘provisioning’ MPR was triggered as the user was back as a member in the ‘provision accounts’ Transition Set. Workflows assigned a handful of Synchronization Rules against the user as Expected Rule Entries (EREs), which had already been applied to the user when it first came into existence and was provisioned. These duplicate EREs, when flowed from the FIM Service to the FIM Synchronization Service, were throwing the errors.

Back in the test environment, the ‘magic’ was still working with no errors thrown. What normally happens in this situation is if you happen to assign a provisioning Sync Rule to a user account, but that user already had one previously successfully ‘applied’ (and was already provisioned), this duplicate ERE would be processed by the Sync Service and also marked as ‘Applied’. It was smart enough to know that the provisioning event was not required because an existing ERE had already been assigned. When the Sync Service flows the status of the duplicate ERE back to the FIM Service, the duplicate is deleted and world order is restored with this ‘magic’. Sorry I find no way of easily describing it without detailing my scenario, so I hope you are still following me. So why was it not working in production?

I’m not sure of the inner workings of the Sync Service, but what I have found is that when a delta import change of a Synchronization Rule flows into the Sync Service and a ‘Delta Sync’ is processed on that change, it seems to have an adverse effect on the other Sync Rules. Sync Rules that have previously been working fine start behaving badly and, to put it simply, the ‘magic’ breaks.

The obvious fix is to perform a Full Synchronization on the whole solution, because a configuration item has changed. However, without the opportunity to do this easily, I needed to perform a Preview/Full Synchronization on each Synchronization Rule object in the solution, as illustrated below. Although no obvious change is shown in the preview window, each Sync Rule seems to be re-processed/reset into good working order and the ‘magic’ is restored.

After performing this change against all Sync Rule objects, all synchronization errors have been cleared and the ‘magic’ now works as expected. I didn’t find much in the forums or public domain about this scenario, so I hope this helps somebody in the future. I’m now working on getting the full synchronization cycle actioned! Let me know in the comments below if this has saved you.

Automating Source IP Address updates on an Azure Network Security Group RDP Access Rule

Recently I migrated a bunch of VirtualBox Virtual Machines to Azure, as detailed here. These VMs are in Resource Groups with an associated Network Security Group that restricts RDP access to them based on a source IP address. All good practice. However, from a usability perspective, when I want to use these VMs I'm not always in the same location, and I'm rarely on a connection with a static IP address.

This post details a simple little script that;

  • Has a couple of variables associated with a Resource Group, Network Security Group, Virtual Machine Name and an RDP Configuration File associated with the VM
  • Gets the public IP Address of the machine I’m running the script from
  • Prompts for Authentication to Azure, and retrieves the NSG associated with the Resource Group
  • Compares the Source IP Address in the ‘RDP’ Inbound Rule to my current IP Address. If they aren’t a match it updates the Source IP Address to be my current public IP Address
  • Starts the Virtual Machine configured at the start of the script
  • Launches Remote Desktop using the RDP Configuration file

The Script

Here’s the raw script. Update lines 2-8 for your environment and away you go. Simple but useful as is often the way.
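
The original script isn't reproduced here, but a cut-down sketch of the same flow using the AzureRM cmdlets of the era looks roughly like this; all names and paths are placeholders, and the line numbers won't match the original.

# Sketch only: update the RDP rule's source address, start the VM, launch RDP.
$resourceGroup = 'MyVMResourceGroup'
$nsgName       = 'MyVM-NSG'
$ruleName      = 'RDP'
$vmName        = 'MyVM'
$rdpFile       = 'C:\RDP\MyVM.rdp'

# Public IP address of the machine running the script
$myIP = (Invoke-RestMethod -Uri 'https://api.ipify.org').Trim()

# Authenticate and retrieve the NSG associated with the Resource Group
Login-AzureRmAccount | Out-Null
$nsg  = Get-AzureRmNetworkSecurityGroup -ResourceGroupName $resourceGroup -Name $nsgName
$rule = $nsg.SecurityRules | Where-Object { $_.Name -eq $ruleName }

# Update the inbound rule if the source address no longer matches the current IP.
# (In later AzureRM.Network versions SourceAddressPrefix is a collection; adjust
# the comparison/assignment accordingly.)
if ($rule.SourceAddressPrefix -ne $myIP) {
    $rule.SourceAddressPrefix = $myIP
    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg | Out-Null
}

# Start the VM and launch Remote Desktop using the saved RDP configuration file
Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $vmName
mstsc $rdpFile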

Diagnosing FIM/MIM ‘kerberos-no-logon-server’ error on an Active Directory Management Agent

Overview

I have a complex customer environment where Microsoft Identity Manager is managing identities across three Active Directory Forests. The Forests all serve different purposes and are contained in different network zones. Accordingly there are firewalls between the zone where the MIM Sync Server is located and two of the other AD Forests as shown in the graphic below.

As part of the project the maintainers of the network infrastructure had implemented rules to allow the MIM Sync server to connect to the other two AD Forests. I had successfully been able to create the Active Directory Management Agents for each of the Forests and perform synchronization imports.

The Error ‘kerberos-no-logon-server’

Everything was going well right up to the point I went to export changes to the two AD Forests that were separated by firewalls. I received the ‘kerberos-no-logon-server’ error as shown below from the run profile output.

I started investigating the error as I hadn't encountered this one before. There were a few posts on the possibilities, mainly dealing with properties of the AD MA's configuration. But I did stumble on a mention of Kerberos being used when provisioning users to Active Directory and setting the initial password. That aligned with what I was doing. I had provided the networking engineers with my firewall port requirements. Those are (no PCNS required for this implementation):

  • 389 TCP – LDAP
  • 636 TCP – LDAPS
  • 88 TCP – Kerberos
  • 464 TCP/UDP – Kerberos
  • 53 TCP – DNS
  • 3268 TCP/UDP – Global Catalog
  • 3269 TCP/UDP – Global Catalog
  • 135 TCP – RPC

My old-school immediate thought was to telnet to each of the ports to see if the firewall was allowing me through. But with a couple of forests to test against, and UDP ports as well, it wasn't going to be that easy. I found a nice little Test-Port function that does both TCP and UDP, and I already had an older script for testing TCP ports via PowerShell, so I combined them.

Identifying the cause

As suspected, connectivity to the forest where my MIM Sync Server was located was all good. The other two, not so much. GC connectivity wouldn't give me the Kerberos error, but not having Kerberos port 464 certainly would.

In the backwards and forwards with the networking team I had to test connectivity many times, so I added a running output as well as a summary output, with the running output highlighting ports that weren't accessible.

Here’s the raw script if you’re in a similar situation. Get the Test-Port Function from the URL in line 1 to test UDP Port connectivity. Add additional ports to the arrays if required (eg. for PCNS), and update the forest names in lines 21-23.
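
If you just want the idea without the full script, a simplified TCP-only sketch using the built-in Test-NetConnection cmdlet is below (the Test-Port function mentioned above is still required for the UDP checks; the forest names are placeholders).

# Sketch: running output flags inaccessible TCP ports; the summary table follows.
$forests  = 'prod.forest1.local', 'prod.forest2.local', 'dev.forest3.local'
$tcpPorts = 389, 636, 88, 464, 53, 3268, 3269, 135

$results = foreach ($forest in $forests) {
    foreach ($port in $tcpPorts) {
        $test = Test-NetConnection -ComputerName $forest -Port $port -WarningAction SilentlyContinue
        if (-not $test.TcpTestSucceeded) {
            Write-Host "$forest : TCP $port NOT accessible" -ForegroundColor Red
        }
        [pscustomobject]@{ Forest = $forest; Port = $port; Open = $test.TcpTestSucceeded }
    }
}

# Summary output
$results | Format-Table -AutoSize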

Summary

I’m sure this is going to become more relevant in a Cloud/Hybrid world where MIM Servers will be in Azure, Active Directory Forests will be in different networks and separated by firewalls and Network Security Groups.

Re-execute the UserData script in an AWS Windows Instance

First published at https://nivleshc.wordpress.com

Bootstrapping is an awesome way of customising your instances in AWS (similar capability exists in Azure).

To enable bootstrapping, while configuring the launch instance, in Step 3: Configure Instance Details scroll down to the bottom and then expand Advanced Details.

You will notice a User data text box. This is where you can provide your bootstrap script. The script will be run when your instance is first launched.

AWS_BootstrapScript

I went ahead and entered my script in the text box and proceeded to complete my instance configuration. Once my instance was running, I initiated a Remote Desktop connection to it to confirm that my script had run. Unfortunately, I couldn't see any customisations (which meant my script didn't run).

Thinking that the instance had not been able to access the user data, I opened up Internet Explorer and then browsed to the following URL (an internal URL that can be used to access the user-data):

http://169.254.169.254/latest/user-data/

I was able to successfully access the user-data, which meant that there were no issues with that. However, when checking the content, I noticed a typo! Aha, that was the reason why my customisations didn't happen.
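
The same check can be done from PowerShell without opening a browser (this assumes the instance still uses the IMDSv1-style endpoint, as was the case at the time):

# Returns the raw user-data the instance was launched with
Invoke-RestMethod -Uri 'http://169.254.169.254/latest/user-data/'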

Unfortunately, according to AWS, user-data is only executed during launch (for those that would like to read up, here is the official AWS documentation). To get the fixed bootstrap script to run, I would have to terminate my instance and launch a new one with the corrected script (I tried rebooting my Windows instance after correcting my typo; however, it didn't run).

I wasn't very happy about terminating my current instance and launching a new one since, for those that might not be aware, AWS EC2 compute charges are rounded up to the next hour. This means that if I terminated my current instance and launched a new one, I would be charged for 2 x 1-hour sessions instead of just 1 x 1 hour!

So I set about trying to find another solution. And guess what, I did find it 🙂

Reading through the volumes of documentation on AWS, I found that when Windows Instances are provisioned, the service that does the customisations using user-data is called EC2Config. This service runs the initial startup tasks when the instance is first started and then disables them. HOWEVER, there is a way to re-enable the startup tasks later on 🙂 Here is the document that gives more information on EC2Config.

The Amazon Windows AMIs include a utility called EC2ConfigService Settings. This allows you to configure EC2Config to execute the user-data on next service startup. The utility can be found under All Programs (or you can search for it).

AWS_EC2ConfigSettings_AllApps

AWS_EC2ConfigSettings_Search

Once Open, under General you will see the following option

Enable UserData execution for next service start (automatically enabled at Sysprep); eg. <script></script> or <powershell></powershell>

AWS_EC2ConfigSettings

Tick this option and then press OK. Then restart your Windows Instance.

After your Windows Instance restarts, EC2Config will execute the userData (bootstrap script) and then automatically remove the tick from the above option so that the userData is not executed on subsequent restarts (or service starts).
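
If you prefer to script the toggle rather than use the GUI, the same setting can be flipped in the EC2Config service's Config.xml. The path, plugin name and service name below are assumptions based on a default EC2Config install, so verify them on your instance before relying on this sketch (and run it elevated).

# Sketch: re-enable the user-data handling plugin for the next service start.
$ec2ConfigXml = 'C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml'   # assumed default path
[xml]$config  = Get-Content -Path $ec2ConfigXml

# Locate the user-data plugin (assumed name) and set it back to Enabled
$plugin = $config.Ec2ConfigurationSettings.Plugins.Plugin | Where-Object { $_.Name -eq 'Ec2HandleUserData' }
$plugin.State = 'Enabled'

$config.Save($ec2ConfigXml)
Restart-Service -Name Ec2Config   # or simply restart the instance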

There you go. A simple way to re-run your bootstrap scripts on an AWS Windows Instance without having to terminate the current instance and launch a new one.

There are other options available in the EC2ConfigService Settings that you can explore as well 🙂

It all comes down to “Requirements”

In my last post, I discussed the basic concepts of Identity Management. In this post I'm going to talk about the need to clearly identify the business requirements for IAM as a whole, and not just for a specific technical need. More often than not, IAM projects are spawned to satisfy a specific need, and realistically these days that is around the adoption of specific cloud technologies, with Office 365 being the most obvious one.

The issue with doing this is that the solution is implemented for a single purpose, without any consideration of the "what else" question, which is always overlooked. Over the course of my career I've discovered that when the next business requirement comes up that needs an Identity Management solution, organisations are often left in a situation that either requires them to completely rebuild the solution that was built to provide the initial service offering, or requires them to set up a completely separate environment, which then complicates and increases the overall support burden for solutions that could, more often than not, have been delivered within a single system.

This brings us back to the title, requirements! So, what are requirements? And why are they so important? Simply put, requirements tell you what you want to achieve within the business with the technology you have or are about to invest in.

When thinking of requirements for any IAM solution, there are basic principles that you should always take into consideration. An IAM solution is built to satisfy four basic functions:

  1. Authentication
  2. Authorization
  3. User Management
  4. Central User Store

When establishing requirements within your business for any IAM solution, it's important to understand these basic functions and what they could mean for your business. So let's break them down.

Authentication

Most would say this is pretty self-explanatory, but there's often a lot more to it than many think. For example, say you want to enable a SaaS application such as TechnologyOne Finance. Do you want to provide a seamless authentication model with Single Sign-On (SSO), or do you want to keep the authentication separate from your local security domains to provide a higher level of security? Or do you want to provide more of a Same Sign-On solution, where users use their local username and password but are forced to log in every time, which can be a less inviting end-user experience?

Authorization

Authorization within IAM, simply put, provides authorization workflows for requests for access to resources, or for the creation of new resources, managed by the IAM solution. Authorizations ensure that access compliance and governance processes are followed for all managed resources.

User Management

User management is simply that: it manages the user objects it knows about. This includes any adds, moves or changes that occur to these objects throughout their lifecycle, and when a user leaves the organisation for whatever reason, they are terminated through the standard business processes established as part of the requirements that have been defined.

Central User Repository

This is effectively where everything is stored. In the Microsoft space it's referred to as the metaverse; in the Novell space it's referred to as the Identity Vault. But it's simply a central repository of all user objects, as well as configuration items such as workflows, policy rules and various other settings.

A common misconception with requirements is that technical teams will look for the technical requirements needed to satisfy the solution being built! But this is where things often go wrong: business requirements come down to simple business logic about the primary business objectives. These business requirements will often create a bunch of technical requirements, but it must start with the basic business requirements.

Summary

So, to summarise, complex IAM solutions all come down to basic principles: what am I trying to achieve as a business? It is for this reason that you need to start with the basics and understand the purpose of the solution being built. You wouldn't build a house without plans, and you wouldn't build a road without understanding where it's going. So why would you do the same with your Identity Management solution?

An alternate method for dealing with Orphaned MetaVerse Objects

Update 21 April ’17. The LithnetMIISAutomation PS Module now has a -Force switch for Delete-CSObject

As often happens in development environments, data changes, configurations change, and at some point you end up with a whole bunch of objects that are in no-man's land. This happened to me today. I had thousands of objects that were basically empty but had previously been triggered to be exported to the MIM Service prior to actually being deleted from the source management agent.

An example of one of the objects. A group with a Pending Export to the MIM Service.

A closer look at the object and there is no attribute data present as the source object had been removed.

And only a single connector, to the MIM Service which it will never reach as it doesn’t contain the mandatory attributes.

Normally, to clean up such a mess, you'd probably be looking at deleting the Connector Space for the MIM Service and then refreshing it from the MIM Service, and these objects would be gone. However, this development environment is rather large, and that wasn't something I had time for or was prepared to do at this point. So here's how I worked around the issue.

Deleting spurious objects from the Connector Space

There are two approaches:

  1. Select each of the errors, select the MIM Service connector and select delete. That would work, but I had thousands.
  2. Automate the process described in point 1. That's the approach I took.

Using the ever-versatile Lithnet MIM Sync PowerShell Module, I retrieved the last run details for my MIM Service MA, grabbed all the errors, inspected them for the ones that were failing creation in the MIM Service, and then deleted the CSObject for each orphan.

Here’s where it got more than a little clink clink cowboy-ish. The Delete-CSObject cmdlet requires confirmation to delete the CSObject. There is not a switch to force the delete, or accept confirmation globally*. I wasn’t going to click Yes or press Enter 5000 times either.

So I wrote a small script that loops, checks for the Confirm disconnection dialog and sends the Enter key to the window.

Here are the two little scripts.

This first script retrieves the last run details and loops through the errors.
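
The original isn't reproduced here, but the core of it looks something like the sketch below. I'm hand-waving the run-details/error gathering into a $failedDNs collection, and the cmdlet parameter names are from memory, so check the LithnetMIISAutomation module help before relying on them.

# Sketch: for each DN that failed creation in the MIM Service, find the orphaned
# connector space object on the MIM Service MA and delete it.
Import-Module LithnetMIISAutomation

$ma        = Get-ManagementAgent | Where-Object { $_.Name -eq 'MIM Service MA' }   # placeholder MA name
$failedDNs = @()   # populate from the last run's error details

foreach ($dn in $failedDNs) {
    $csObject = Get-CSObject -MA $ma -DN $dn
    if ($csObject) {
        # Prompts for confirmation (hence the second script); recent module
        # versions add a -Force switch, per the update note above.
        Delete-CSObject -CSObject $csObject
    }
}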

This second script, which I ran in a separate PowerShell runspace, loops around and presses Enter at the right time.
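
A rough sketch of that helper is below; the window title it looks for is an assumption, so match it to whatever your confirmation prompt is actually titled.

# Sketch: run in a separate PowerShell window; keeps answering the confirmation
# prompt so the first script can churn through the deletes unattended.
$wshell = New-Object -ComObject WScript.Shell
while ($true) {
    # AppActivate returns $true when it finds and focuses a window with this title
    if ($wshell.AppActivate('Confirm disconnection')) {
        $wshell.SendKeys('{ENTER}')
    }
    Start-Sleep -Milliseconds 500
}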

*I’ve submitted an enhancement request to Ryan to add a confirm parameter to Delete-CSObject

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

First published on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter: @LucianFrango or connect via LinkedIn: Lucian Franghiu.


In the real world there are numerous lessons learned, experiences, opinions and vendor recommendations that dictate what constitutes “best practice” when it comes to internet edge security. It's a can of worms that I don't want to open, as I am not claiming to be an expert in that regard. I can say that I do have enough experience to know that not having any security is a really bad idea, and that bank-level security for regular enterprise customers can be excessive.

I've been working with an enterprise customer that falls pretty much in the middle of that dichotomy. They are a regular large enterprise organisation that is concerned about internet security and has little experience with Azure. That said, the built-in tools and software-defined networking principles of Azure don't meet the requirements they've set. So, to accommodate those requirements, moving from Azure NSGs and WAFs and all the goodness that Azure provides to dedicated virtual appliances was not difficult, but it did require a lot of thinking and working with various team members and third parties to get the result.

Cisco Firepower Threat Defence Virtual for Microsoft Azure

From what I understand, Cisco's next-generation firewall has been in the Azure Marketplace for about 4 months now, maybe a little longer. Timelines are not that much of a concern; rather, they are a consideration in that they relate to the maturity of the product. Compared to competitors, there is indeed a lag in some features.

The firewalls themselves, Cisco Firepower Threat Defence Virtual for Microsoft Azure, are Azure-specific Azure Marketplace images of the virtual appliances Cisco has made for some time. The background, again, is not that important; it's just the foundational knowledge for the following:

Cisco FTDv supports 4 x network interfaces in Azure. These interfaces include:

  • A management interface (Nic0) – cannot route traffic over this
  • A diagnostics interface (Nic1) – again, cannot route traffic over this. I found this out the hard way…
  • An external / untrusted interface (Nic2)
  • An internal / trusted interface (Nic3)

So we have a firewall that essentially is an upgraded Cisco ASA (Cisco Adaptive Security Appliance) with expanded feature sets unlocked through licensing. An already robust product with new features.

The design

Availability is key in the cloud. Scale-out dominates scale-up methodologies, and as the old maxim goes: two is better than one. For a customer, I put together the following design to leverage Azure availability sets (to guarantee instance uptime of at least one instance in the set, and to guarantee underlying physical separation of these resources in Azure) and to have a level of availability higher than a single instance. NOTE: Cisco FTDv does not support high availability (out of the box) and is not a stateful appliance in Azure.

Implementation

To deploy a Cisco FTDv in Azure, the quick and easy way is to use the Azure Marketplace and deploy through the portal. It’s a quick and pretty much painless process. To note though, here are some important pieces of information when deploying these virtual appliances from the Azure marketplace:

  • There is essentially only one deployment option for the size of instances – Standard_D3 or Standard_D3v2 – the difference being SSD vs HDD (with other differences between v1 and v2 series coming by way of more available RAM in certain service plans)
  • YOU MUST deploy the firewall in a resource group WITH NO OTHER RESOURCES in that group (from the portal > Marketplace)
  • Each interface MUST be on a SUBNET – so when deploying your VNET, you need to have 4 subnets available
    • ALSO, each interface MUST be on a UNIQUE subnet – again, 4 subnets, can’t double up – even with the management and diagnostic interfaces
  • Deploying the instance in an Azure availability set is NOT AVAILABLE (from the portal > Marketplace)

Going through the wizard is relatively painless and straightforward, and within 15-20 minutes you can have a firewall provisioned and ready to connect to your on-premises management server. Yes, another thing to note is that the appliance is managed from Firepower Management Centre (FMC). The FMC, from what I have read, cannot be deployed in Azure at this time. However, I've not looked into that tidbit too much, so I may be wrong there.

The problem

In my design I have a requirement for two appliances. These appliances would be in a farm, which is supported in the FMC, and the two appliances can have common configuration applied to both devices – stuff like allow/deny rules. In Azure, without an availability set, there is a small chance, but a chance nonetheless, that both devices could somehow be automagically provisioned in the same rack, on the same physical server infrastructure in the Australia East region (my local region).

Availability is a rather large requirement, and ensuring that availability is maintained across upwards of 500+ instances for the customer I was working with was a tricky proposition. Here's how I worked around the problem at hand, given that Cisco do not officially state that they "do not support availability sets".

The solution

Pretty much all resources when working with the Azure Portal have a very handy tab under their properties. I use this tab a lot. It’s the Automation Script section of the properties blade of a resource.

Automation script

After I provisioned a single firewall, I reviewed the Automation Script blade of the instance. There is plenty of good information there. What is particularly handy to know is the following:

 },
 "storageProfile": {
 "imageReference": {
 "publisher": "cisco",
 "offer": "cisco-ftdv",
 "sku": "ftdv-azure-byol",
 "version": "620362.0.0"
 },
 "osDisk": {
 "osType": "Linux",
 "name": "[concat(parameters('virtualMachines_FW1_name'),'-disk')]",
 "createOption": "FromImage",

So with that, we have all the key information to leverage ARM templates to deploy the firewalls. In practice, I copied the entire 850-line Automation Script JSON file and put it into Atom. Then I did the following:

  • Reviewed the JSON file and cleaned it up to match the naming standard for the customer
    • This applied to various resources that were provisioned: NIC, NSGs, route tables etc
  • I copied and added to the file an addition for adding the VM to availability set
    • The code for that is below
  • I then removed the firewall instance and all of the instance-specific resources it created from Azure
    • I manually removed the NICs
    • I manually removed the DISKs from Blob storage
  • I then used Visual Studio to create a new project
  • I copied the JSON file into the azuredeploy.json file
  • Updated my parameters file (azuredeploy.parameters.json)
  • Finally, proceeded to deploy from the template (see the sketch below)
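
The deployment step itself is a one-liner with the AzureRM cmdlets of the time; the deployment, resource group and file names below are placeholders.

# Sketch: deploy the cleaned-up template and parameters file into the firewall's
# resource group.
Login-AzureRmAccount

New-AzureRmResourceGroupDeployment `
    -Name 'ftdv-fw1-deployment' `
    -ResourceGroupName 'FW-RG' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json' `
    -Verbose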

Lo and behold, the firewall instance provisioned just fine and indeed there was an availability set associated with it. Additionally, when I provisioned the second appliance I followed the same process, and both are now in the same availability set. This makes using the Azure Load Balancer nice and easy! Happy days!

For your reference, here's the availability set JSON I added to my file, under "parameters":

 "parameters": {
 "availabilitySetName": {
 "defaultValue": "FW-AS",
 "type": "string"
 }
 },

Then you need to add the following under “resources”:

"resources": [
 {
 "type": "Microsoft.Compute/availabilitySets",
 "name": "[parameters('availabilitySetName')]",
 "apiVersion": "2015-06-15",
 "location": "[resourceGroup().location]",
 "properties": {
 "platformfaultdomaincount": "2",
 "platformupdatedomaincount": "2"
 }
 },

Then, within the resource with "type": "Microsoft.Compute/virtualMachines", you'll also need to add the availability set reference under "properties", plus a matching entry in that resource's "dependsOn" array:

 "properties": {
 "availabilitySet": {
 "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
 },
 ...
 },
 "dependsOn": [
 "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]",
 ...
 ],

Those are really the only things that need to be added to the ARM template. It’s quick and easy!

BUT WAIT, THERE'S MORE!

No, I'm not talking about throwing in a set of steak knives with that, but there is a little more to this that you, dear reader, need to be aware of.

Once you deploy the firewall, the creation process finalises and its state is Running, there is an additional challenge. When deploying via the Marketplace, the firewall enters Advanced User mode and is able to be connected to the FMC. I'm sure you can guess where this is going… When deploying the firewall via an ARM template, the same mode is not entered. You get the following error message:

User [admin] is not allowed to execute /bin/su/ as root on deviceIDhere

After much time digging through Cisco documentation, which I am sorry to say is not up to standard, Cisco TAC were able to help. The following command needs to be run in order to get into the correct mode:

~$ su admin
~$ [password goes here] which is Admin123 (the default admin password, not the password you set)

Once you have entered the correct mode, you can add the device to the FMC with the following:

~$ configure manager add [IP address of FMC] [key - one time use to add the FW, just a single word]

The summary

I appreciate that speciality network vendors provide really good quality products to manage network security. Due to limitations in the Azure fabric, not all of them work 100% as expected. From a purist's point of view, NSGs and the Azure-provided software-defined networking solutions, with the wealth of features provided, work amazingly well out of the box.

The cloud is still new to a lot of people. That trust that network admins place in tried and true vendors and products is just not there yet with BSOD Microsoft. In time I feel it will be. For now though, deploying virtual appliances can be a little tricky to work with.

Happy networking!

Lucian

Calling WCF client proxies in Azure Functions

Azure Functions allow developers to write discrete units of work and run these without having to deal with hosting or application infrastructure concerns. Azure Functions are Microsoft's answer to server-less computing on the Azure Platform and, together with Azure Service Bus, Azure Logic Apps and Azure API Management (to name just a few), have become an essential part of the Azure iPaaS offering.

The problem

Integration solutions often require connecting legacy systems using deprecated protocols such as SOAP and WS-*. It's not all REST, hypermedia and OData out there in the enterprise integration world. Development frameworks like WCF help us deliver solutions rapidly by abstracting much of the boilerplate code away from us. Often these frameworks rely on custom configuration sections that are not available when developing solutions in Azure Functions. In Azure Functions (as of today at least) we only have access to the generic appSettings and connectionStrings sections of the configuration.

How do we bridge the gap and use the old boilerplate code we are familiar with in the new world of server-less integration?

So let's set the scene. Your organisation consumes a number of legacy B2B services exposed as SOAP web services. You want to be able to consume these services from an Azure Function but definitely do not want to be writing any low-level SOAP protocol code. We want to use the generated WCF client proxy so we implement the correct message contracts, transport and security protocols.

In this post we will show you how to use a generated WCF client proxy from an Azure Function.

Start by generating the WCF client proxy in a class library project using Add Service Reference, provide details of the WSDL and build the project.

add_service_reference

Examine the generated bindings to determine the binding we need and what policies to configure in code within our Azure Function.

bindings

In our sample service above we need to create a basic http binding and configure basic authentication.

Create an Azure Function App using an appropriate template for your requirements and follow these steps to call your WCF client proxy:

Add the System.ServiceModel NuGet package to the function via the project.json file so we can create and configure the WCF bindings in our function
project_json

Add the WCF client proxy assembly to the ./bin folder of our function. Use Kudu to create the folder and then upload your assembly using the View Files panel.

upload_wcf_client_assembly

In your function, add references to both the System.ServiceModel assembly and your WCF client proxy assembly using the #r directive

When creating an instance of the WCF client proxy, instead of specifying the endpoint and binding in a config file, create these in code and pass to the constructor of the client proxy.

Your function will look something like this

Lastly, add endpoint address and client credentials to appSettings of your Azure Function App.

Test the function using the built-in test harness to check the function executes ok

test_func

Conclusion

The suite of integration services available on the Azure Platform is developing rapidly, and composing your future integration platform on Azure is a compelling option in a maturing iPaaS marketplace.

In this post we have seen how we can continue to deliver legacy integration solutions using emerging integration-platform-as-a-service offerings.

Adapting to the changes in the AzureAD Preview PowerShell Module ADAL Helper Library

I'm a big proponent of using PowerShell for integration and automation of Azure Active Directory Services using the Azure AD GraphAPI. You may have seen many of my posts leverage the evolving Azure AD Preview PowerShell Module helper libraries. Lines in my scripts that use this look like the one below; in this case I'm using preview version 2.0.0.52.

# the default path to where the ADAL GraphAPI PS Module puts the Libs
Add-Type -Path 'C:\Program Files\WindowsPowerShell\Modules\AzureADPreview\2.0.0.52\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'

The benefit of using this library is the simplification of Authentication to AzureAD, from which we can then receive a token and interact with the GraphAPI via PowerShell using Invoke-RestMethod.

Earlier this week it was brought to my attention that some of my scripts were failing when using the latest v2 releases of the AzureAD PowerShell Module (v2.0.0.98). Looking into it, the last version I had working is v2.0.0.52; v2.0.0.55 doesn't work with my scripts either. So for anything after v2.0.0.52 the following will not work.

What’s Changed?

First up, the PowerShell Module has been renamed. It is no longer AzureADPreview; it is just AzureAD. So the path it gets installed into (depending on the version you have) is now:

'C:\Program Files\WindowsPowerShell\Modules\AzureAD\2.0.0.98\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'

Looking into the updated PowerShell Module there has been a change to the Microsoft.IdentityModel.Clients.ActiveDirectory.dll library.

A number of the methods in the library have changed. I believe this is part of Microsoft transitioning the endpoint to use the GraphAPI. With that understanding, I approached using PowerShell to integrate with the GraphAPI in a way more akin to how I do it when not using the helper library.

Use PowerShell and the ADAL Helper Library to connect to AzureAD via the GraphAPI

Here is the updated script to connect (and retrieve a batch of users). You will need to update lines 4, 17 & 18 for your Tenant name and the username and password (non-MFA enabled) you will be connecting with.
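
That script isn't reproduced in full here, but a simplified sketch of the same pattern is below: the newer library's AcquireTokenAsync replaces the old synchronous AcquireToken, and the token is then used with Invoke-RestMethod against the Azure AD Graph API. The tenant, username and password values are placeholders (the client ID shown is the well-known PowerShell client ID), and the line numbers will not match the original script.

# Sketch only: authenticate with the newer ADAL library and retrieve a batch of users.
Add-Type -Path 'C:\Program Files\WindowsPowerShell\Modules\AzureAD\2.0.0.98\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'

# Placeholders - update for your tenant and a (non-MFA enabled) account
$tenant   = 'yourtenant.onmicrosoft.com'
$resource = 'https://graph.windows.net'
$clientId = '1b730954-1685-4b74-9bfd-dac224a7b894'   # well-known Azure PowerShell client id
$username = "admin@$tenant"
$password = 'P@ssw0rd'

$authContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext("https://login.microsoftonline.com/$tenant")
$credential  = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.UserPasswordCredential($username, $password)

# AcquireTokenAsync returns a Task; .Result blocks until the token comes back
$authResult = [Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContextIntegratedAuthExtensions]::AcquireTokenAsync($authContext, $resource, $clientId, $credential).Result
$headers    = @{ Authorization = "Bearer $($authResult.AccessToken)" }

# Retrieve a batch of users via the Azure AD Graph API
$users = Invoke-RestMethod -Method Get -Headers $headers -Uri "https://graph.windows.net/$tenant/users?api-version=1.6&`$top=100"
$users.value | Select-Object displayName, userPrincipalName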