How to create a PowerShell FIM/MIM Management Agent for AzureAD Groups using Differential Sync and Paged Imports

Introduction

I’ve been working on a project where I need visibility of a large number of Azure AD groups in Microsoft Identity Manager.

In order to make this efficient I need to use the Differential Query function of the AzureAD Graph API, which I’ve detailed before in this post How to create an AzureAD Microsoft Identity Manager Management Agent using the MS GraphAPI and Differential Queries. Due to the number of groups and the number of members in the Azure AD groups, I also needed to implement Paged Imports on my favourite PowerShell Management Agent (the Granfeldt PowerShell MA). I’ve detailed that previously too, in How to configure Paged Imports on the Granfeldt FIM/MIM PowerShell Management Agent.

This post details using these concepts together specifically for AzureAD Groups.

Pre-Requisites

Read the two posts linked to above; they detail Differential Queries and Paged Imports. My solution also utilises another of my favourite PowerShell modules, the Lithnet MIIS Automation PowerShell Module. Download and install it on the MIM Sync Server where you will be creating the MA.

Configuration

Now that you’re up to speed, all you need to do is create your Granfeldt PowerShell Management Agent. That’s also covered in the post linked above, How to create an AzureAD Microsoft Identity Manager Management Agent using the MS GraphAPI and Differential Queries.

What you need is the Schema and Import PowerShell Scripts. Here they are.

Schema.ps1

There are two object classes on the MA, because we need the users that are members of the groups on the same MA: membership is a reference attribute. When you bring the groups into the Metaverse, and assuming you have an Azure AD Users MA using the same anchor attribute, you’ll get the reference link for the members along with their full object details.
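For illustration, here’s a minimal sketch of that two-class schema, using the Granfeldt PSMA convention of emitting one template object per object class. The attribute names are assumptions, and the member reference attribute is shown simplistically; check the PSMA documentation for the exact multi-valued reference syntax.

$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-objectId|String" -Value "1"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "group"
$obj | Add-Member -Type NoteProperty -Name "displayName|String" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "member|Reference" -Value "guid"
$obj

$user = New-Object -Type PSCustomObject
$user | Add-Member -Type NoteProperty -Name "Anchor-objectId|String" -Value "1"
$user | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "user"
$user | Add-Member -Type NoteProperty -Name "userPrincipalName|String" -Value "string"
$user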

Import.ps1

Here is my PSMA Import.ps1 that performs what is described in the overview: it enumerates AzureAD for groups and imports the active ones along with their group membership.
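Stripped right back, the differential query at the heart of it looks roughly like this (tenant name is a placeholder, token acquisition is omitted, and the ‘aad.*’ property names follow the differential query response format covered in the earlier post):

# Illustrative fragment: query AAD Graph for groups, resuming from a stored deltaLink when present
$tenant = "customer.onmicrosoft.com"
$uri = "https://graph.windows.net/$tenant/groups?api-version=1.6"
if ($global:deltaLink) { $uri = $global:deltaLink }
$results = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" }
# $results.value holds the groups; follow 'aad.nextLink' for further pages
# and persist 'aad.deltaLink' so the next import runs as a delta
$global:deltaLink = $results.'aad.deltaLink'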

Summary

This is one solution for managing a large number of Azure AD groups with large memberships via a PS MA. Paged imports show progress as the import runs, and differential queries allow the subsequent delta-sync run profiles to complete quickly.

I’m sure this will help someone else. Enjoy.

Follow Darren on Twitter @darrenjrobinson

Why are you not using Azure Resource Explorer (Preview)?

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango. Connect on LinkedIn.

***

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and, clicking away, stumbled upon the Azure Resource Explorer, available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac if you were wondering) and finally BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell”. Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer

This is a [new to me] website, built on the Bootstrap HTML/CSS/JavaScript framework (an older version, but, like yours truly here on clouduccino), that provides streamlined and rather well laid out access to various REST API details/calls for any given Azure subscription. You can log in and view the management REST APIs, and make changes to Azure infrastructure in your subscription via REST actions like GET, PUT, POST and DELETE.

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this of any use across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions mixed in with the REST APIs.
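As a taste of what’s behind it, the same kind of GET the portal issues can be made directly from PowerShell against the ARM REST API. Here’s a hedged sketch; the subscription ID and bearer token acquisition are placeholders:

# List resource groups via the ARM REST API (illustrative)
$subscriptionId = "<subscription-id>"
$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourcegroups?api-version=2016-09-01"
(Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" } -Method Get).value | Select-Object name, location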

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but I could easily create one!

Use cases

I’m sure that all looks “interesting”, with an example above with nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start tinkering with the Resource Explorer portal yourselves (go ahead and do that, but after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need to do is check one bit of information, say a route table for a VNET. PowerShell can also be time consuming: all that typing and memorising cmdlets and stuff (so 2016…).

A lot of the information you need can be grabbed at a glance from the ARE portal, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET but, if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you are not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below is a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, with all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 attached disks, if you were interested).

The vmSizes view shows all currently available VM sizes in Azure. Handy for grabbing the programmatic name to then use in PowerShell scripting!

Use case #3 – PowerShell script examples

Mixing the REST APIs with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example get, set and delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.
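I won’t reproduce the portal’s generated script verbatim, but a rough approximation of the kind of VNET cmdlets it surfaces looks like this (resource group, VNET and subscription names are placeholders):

# Illustrative VNET management via the AzureRM module
Login-AzureRmAccount
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "test-rg" -Name "test-vnet"
$vnet.Subnets | Select-Object Name, AddressPrefix

# The generic resource cmdlet mirrors the REST resource path shown in ARE
Get-AzureRmResource -ResourceId "/subscriptions/<subscription-id>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/test-vnet"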

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure utility belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best,

Lucian

Running Vue.js on ASP.NET Core Applications

Vue.js has recently attracted a lot of attention, as it is relatively easy to learn and light in size compared to other popular frameworks like Angular 1, Angular 2 or React. By providing a middleware, ASP.NET Core supports front-end frameworks such as Angular 2, Aurelia, React and Knockout out-of-the-box, while Vue has been excluded. Even though we can find many good articles and code samples integrating Vue and ASP.NET Core, they don’t use the basic template that Vue provides, which makes them a little difficult to apply at first glance for developers who want to use both Vue and ASP.NET Core with minimum integration effort. In this post, we are going to integrate Vue.js and ASP.NET Core using their basic templates, with minimum modification.

The code sample used in this post can be found here.

Microsoft.AspNetCore.SpaServices

In order to support front-end development in ASP.NET Core, Visual Studio uses Bower. However, as we are all aware of how fast front-end development environments change, another option like Webpack helps developers work a lot more easily by offering bundling, modularisation and so forth. ASP.NET Core, of course, supports Webpack as a separate extension. Microsoft.AspNetCore.SpaServices helps developers integrate front-end frameworks using Webpack with ease.

Prerequisites

Throughout this post, we need some preparations:

We can also use Visual Studio Code (VSC), but we’ll assume Visual Studio (VS) for this post.

Creating ASP.NET Core Web Application

Open Visual Studio and create a new ASP.NET Core web application project. As this web application likely uses .NET Core version 1.0.1, we need to update it to 1.1 by updating global.json:

Also, update project.json so that all NuGet packages in the web application are compiled against .NET Core 1.1:

As mentioned earlier, we’re using Webpack instead of Bower, so remove the Bower-related configuration files as well as the ASP.NET Core native bundling & minification settings file:

  • .bowerrc
  • bower.json
  • bundleconfig.json

And finally, all directories and files under the wwwroot directory need to be deleted. We’re now all set on the ASP.NET Core application side. Let’s add Vue.

Installing Vue.js

We can use Vue as a stand-alone library by simply including the script from a CDN. However, if we want to use its powerful features, especially componentisation, a module bundler like Webpack should be considered. In order to use Webpack for Vue.js, assuming node.js and npm have already been installed, install vue-cli:
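For reference, this is the standard global npm install:

npm install -g vue-cli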

Then, install the basic template by running the following command:
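Assuming the full webpack template (my-vue-app is a placeholder project name):

vue init webpack my-vue-app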

During installation, just accept the default answer for each question, and finally the template gets installed. Now, run the following commands to install all npm packages and run the Vue app:
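Typically (using the placeholder project name from above):

cd my-vue-app
npm install
npm run dev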

Since the Vue dev server uses port 8080, we can see the result as expected. However, this is just a Vue app running independently, not one integrated with ASP.NET Core. Let’s move on.

Integrating Vue.js with ASP.NET Core

This is the main part of this post. We need to touch configuration on both sides.

Configuring Vue.js

First things first. We need to install the npm package aspnet-webpack to enable communication between the front-end framework and the back-end application through Webpack:
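For reference:

npm install --save-dev aspnet-webpack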

Once installed, update config/index.js to set up the root directory:

And finally, create webpack.config.js and save it to the root directory of the ASP.NET Core project:

The Vue.js setup is now done. What’s next?

Configuring ASP.NET Core

The ASP.NET Core counterpart of aspnet-webpack is Microsoft.AspNetCore.SpaServices. Install it through the NuGet package manager and register the middleware in Startup.cs:

We also need to add a routing configuration for the SPA, as above. The ASP.NET Core side is relatively simple, isn’t it? Let’s hit F5 to run the app and see how it goes.

We can now see a different port number, handled by the ASP.NET Core app. So, technically, we have completed the integration between Vue.js and ASP.NET Core! How can we verify that they are actually working together? Let’s implement a basic AJAX call to see whether they get along with each other. Of course, further modification is required. Who can stop us?

Handling AJAX Request/Response

As the Vue.js core is lightweight, if we want to implement more sophisticated features like handling AJAX requests/responses, we need to install another npm extension called vue-resource.

Once installed, add it to the Vue instance for use. Update src/main.js like:

Now we need to implement the AJAX request/response code. Open src/components/Hello.vue and modify it like:

What we can expect from this change is that the message Welcome to Your Vue.js App will be replaced with the value from res.body.message. Now we need to implement the Web API. Here’s the Web API controller and action:

All good. Build the app, press the F5 key and confirm the result.

We can see Hello World, instead of the original message.

Side note: another benefit of using Webpack is Hot Module Replacement (HMR). As long as the app is up and running on our local machine, we can instantly change something and check the result in no time, without reloading the page. And if we install the .NET Core Watcher tool, we don’t even need to rebuild and rerun the app itself.

Deploying App to Azure

We’ve built an ASP.NET Core web application with Vue.js. Now it’s time for deployment. We don’t use the View features provided by ASP.NET Core; instead we just use a static wwwroot/index.html. In local development we don’t really have to worry about that, as our development environment automatically detects it. However, in a production environment we have to specify that we’re using the static index.html page; otherwise the Azure web app can’t recognise it. To enable this, we need to update Startup.cs:

Make sure that the UseDefaultFiles() call always comes before the UseStaticFiles() call. Finally, update the deployment/publish related settings in project.json for Vue.js:

Everything is done! Deploy this web app to Azure and see the result. Can we see it as expected?

So far, we have installed Vue.js, integrated it with ASP.NET Core, verified that both are working together, and deployed the app to an Azure web app instance. Some may think this looks too complicated, but actually it’s not that tricky. Rather, we’ve got another option for ASP.NET Core application development using a different front-end framework. As long as a new front-end framework emerges and supports Webpack, we can make it work using this approach. Now, for the next step, let’s build a real business solution!

An Azure Timer Function App to retrieve files via FTP and Remote PowerShell

Introduction

In an age of web services and APIs, it’s almost a forgotten world where FTP servers exist. However, most recently I’ve had to travel back in time and interact with an FTP server to get a set of files that are produced by other systems on a daily basis. These files are needed for some flat-file imports into Microsoft Identity Manager.

Getting files off an FTP server is pretty simple. But needing to do it across a number of different environments (Development, Staging and Production) meant I was looking for an easy approach that I could also replicate quickly across multiple environments. As I already had Remote PowerShell set up on my MIM Servers for other Azure Function Apps, I figured I’d use an Azure Function for obtaining the FTP files as well.

Overview

My PowerShell Timer Function App performs the following:

  • Starts a Remote PowerShell session to my MIM Sync Server
  • Imports the PSFTP PowerShell Module
  • Creates a local directory to put the files into
  • Connects to the FTP Server
  • Gets the files and puts them into the local directory
  • Ends the session
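As a rough sketch, the body of that remote session could look something like this. Server, credentials and paths come from the Function App settings described below, and the PSFTP cmdlet names are from memory, so verify them against the module’s documentation:

# Connect to the FTP server (PSFTP module)
Import-Module PSFTP
$securePass = ConvertTo-SecureString $FTPPassword -AsPlainText -Force
$ftpCred = New-Object System.Management.Automation.PSCredential($FTPUsername, $securePass)
Set-FTPConnection -Credentials $ftpCred -Server $FTPServer -Session FTPSession -UsePassive
$session = Get-FTPConnection -Session FTPSession

# Create a local directory for today's files and pull them down
$localDir = Join-Path $FTPTargetDirectory (Get-Date -Format 'dd-MM-yyyy')
New-Item -ItemType Directory -Path $localDir -Force | Out-Null
Get-FTPChildItem -Session $session -Path $FTPSourceDirectory | ForEach-Object {
    Get-FTPItem -Session $session -Path $_.FullName -LocalPath $localDir
}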

Pre-requisites

From the overview above there are a number of pre-requisites, and other blog posts I’ve written already detail the steps involved to appropriately set them up and configure them. So I’m going to link to those. Namely;

  • Configure your Function App for your timezone so the schedule is correct for when you want it to run. Checkout the WEBSITE_TIME_ZONE note in this post.

  • You’ll need to configure your Server that you are going to put the files onto for Remote PowerShell. Follow the Enable Powershell Remoting on the FIM/MIM Sync Server section of this blogpost.
  • The credentials used to connect to the MIM Server are secured as detailed in the Using an Azure Function to query FIM/MIM Service section of this blog post.
  • Create a Timer PowerShell Function App. Follow the Creating your Azure App Service section of this post but choose a Timer Trigger PowerShell App.
    • I configured my Schedule for 1030 every day using the following CRON configuration
      0 30 10 * * *
  • On the Server you’ll be connecting to in order to run the FTP processes you’ll need to copy the PSFTP Module and files to the following directories. I unzipped the PSFTP files and copied the PSFTP folder and its contents to;
    • C:\Program Files\WindowsPowerShell\Modules
    • C:\Windows\System32\WindowsPowerShell\v1.0\Modules

     

Configuring the Timer Trigger Function App

With all the pre-requisites in place it’s time to configure the Timer Function App that you created in the pre-requisites.

The following settings are configured in the Function App Application Settings;

  • FTPServer (the server you will be connecting to, to retrieve files)
  • FTPUsername (the username to connect to the FTP Server with)
  • FTPPassword (password for the username above)
  • FTPSourceDirectory (FTP directory to get the files from)
  • FTPTargetDirectory (the root directory under which the files will be put)


  • You’ll also need Application Settings for a Username and Password associated with a user that exists on the Server that you’ll be connecting to with Remote PowerShell. In my script below these application settings are MIMSyncCredUser and MIMSyncCredPassword

Function App Script

Finally, here is a raw script. You’ll need to add appropriate error handling for your environment. You’ll also want to change lines 48 and 51 for the naming of the files you are looking to acquire, and line 59 for the server name you’ll be executing the process on.

Summary

A pretty quick and simple little Azure Function App that will run each day and obtain daily/nightly extracts from an FTP server. Cleanup of the resulting folders and files I’m doing with other on-box processes.

This post is cross-blogged on both the Kloud Blog and Darren’s Blog.


Monitor SharePoint Changelog in Azure Function

Azure Functions have officially reached ‘hammer’ status

I’ve been enjoying the ease with which we can now respond to events in SharePoint and perform automation tasks, thanks to the magic of Azure Functions. So many nails, so little time!

The seminal blog post that started so many of us down that road was, of course, John Liu’s Build your PnP Site Provisioning with PowerShell in Azure Functions and run it from Flow, and that pattern is fantastic for many event-driven scenarios.

One place where it currently (at time of writing) falls down is when dealing with a list item delete event. MS Flow can’t respond to this, and nor can a SharePoint Designer workflow.

Without wanting to get into Remote Event Receivers (errgh…), the other way to deal with this is after the fact, via the SharePoint change log (if the delete isn’t time sensitive). In my use case it wasn’t; I just needed to be able to clean up some associated items in other lists.

SharePoint Change Logs

SharePoint has had an API for getting a log of changes of certain types, against certain objects, in a certain time window since the dawn of time. The best post for showing how to query it from the client side is (in my experience) Paul Schaeflin’s Reading the SharePoint change log from CSOM, and it was my primary reference for the PowerShell-based Function below.

In my case, I am only interested in items deleted from a single list, but this could easily be scoped to an entire site and capture more/different event types (see Paul’s post for the specifics).

The biggest challenge in getting this working was persisting the change token to Azure Storage, and even that wasn’t difficult in and of itself; it’s just that the PowerShell bindings for Azure Functions are as yet woefully under-documented (TIP: Get-Content and Set-Content are the key to the PowerShell bindings… easy when you know how). In my case I have an input and an output binding to a single Blob Storage blob (to persist the change token for reference the next time the Function runs) and another output binding to Queue Storage, to trigger another Function that actually does the cleanup of the other list items linked to the one now sitting in the recycle bin. The whole thing is triggered by an hourly timer. If nothing has been deleted, then no action is taken (other than updating the persisted token blob).
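To make the binding trick concrete, here’s a minimal sketch of the shape of the thing. The binding names, list title and CSOM calls are illustrative; see Paul’s post for the real change-log query details:

# Read the persisted change token from the blob input binding (empty on the first run)
$lastToken = Get-Content $lastTokenBlob -ErrorAction SilentlyContinue

# Query the list's change log for item deletes since that token (CSOM via PnP PowerShell)
Connect-PnPOnline -Url $siteUrl -Credentials $cred
$list = Get-PnPList -Identity "MyList"
$ctx = Get-PnPContext
$query = New-Object Microsoft.SharePoint.Client.ChangeQuery($false, $false)
$query.Item = $true
$query.DeleteObject = $true
if ($lastToken) {
    $query.ChangeTokenStart = New-Object Microsoft.SharePoint.Client.ChangeToken
    $query.ChangeTokenStart.StringValue = $lastToken
}
$changes = $list.GetChanges($query)
$ctx.Load($changes)
Invoke-PnPQuery

# Queue a message for the clean-up Function, then persist the new token for next time
if ($changes.Count -gt 0) { $changes | ConvertTo-Json | Set-Content $outputQueueItem }
Set-Content -Path $outputBlob -Value $list.CurrentChangeToken.StringValue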

A Note on Scaling

Note that if multiple delete events occurred since the last check, then these are all deposited in one message. This won’t cause a problem in my use case (there will never be more than a handful of items deleted in one pass of the Function), but it obviously doesn’t scale well, as too many deletes handled by the receiving Function would threaten to bump up against the 5 minute execution time limit. I wanted to use the OOTB message queue binding for simplicity, but if you needed to push multiple messages, you could simply use the Azure Storage PowerShell cmdlets instead of an output binding.

Code Now Please

Here’s the Function code (following the PnP PowerShell Azure Functions implementation as per John’s article above and liberally stealing from Paul’s guide above).

Going Further

This is obviously a simple example with a single objective, but you could take this pattern and ramp it up as high as you like. By targeting the web instead of a single list, you could push a lot of automation through this single pipeline, perhaps ramping the execution recurrence up to every 5 minutes or less if you needed that level of reduced latency. Although watch out for your Functions consumption if you turn the executions up too high!


Back to Basics – Design Patterns – Part 2

In the previous post, we discussed design patterns, their structure and usage. Then we discussed the three fundamental types and started off with the first one – Creational Patterns.

Continuing with creational patterns, we will now discuss the Abstract Factory pattern, which is considered to be a superset of Factory Method.

Abstract Factory

In Factory Method, we discussed how it targets a single family of subclasses using a corresponding set of factory method classes, or a single factory method class via a parametrised/static factory method. But if we target families of related classes (multiple abstractions, each having their own subclasses) and need to interact with them using a single abstraction, then Factory Method will not work.

A good example could be the creation of doors and windows for a room. A room could offer a combination of a wooden door, sliding door etc. and a wooden window, glass window etc. The client will, however, interact with a single abstraction (the abstract factory) to create the desired door and window combination based on selection/configuration. This makes it a good candidate for an Abstract Factory.

So an abstract factory allows the instantiation of families (plural) of related classes using a single interface (the abstract factory), independent of the underlying concrete classes.

Reasons

When a system needs to use families of related or dependent classes, it might need to instantiate several subclasses. This leads to code duplication and complexity. Taking the above example of a room, the client would need to instantiate the door and window classes for one combination and then do the same for the others, one by one. This breaks the abstraction of those classes, exposes their encapsulation, and puts the instantiation complexity on the client. Even if we used a factory method for every single family of classes, this would still require several factory methods, unrelated to each other, and managing them to make combination offerings (rooms) would clutter the code.

We will use an abstract factory when:

  • A system is using families of related or dependent objects without any knowledge of their concrete types.

  • The client does not need to know the instantiation details of the subclasses.

  • The client does not need to use the subclasses in a concrete way.

Components

There are four components of this pattern:

  • Abstract Factory: the abstraction the client interacts with to create door and window combinations. This is the core factory that provides the interfaces the individual factories implement.

  • Concrete Factories: the concrete factories (CombinationFactoryA, CombinationFactoryB) that create the concrete products (doors and windows).

  • Abstract Products: the abstract products that will be visible to the client (AbstractDoor & AbstractWindow).

  • Concrete Products: the concrete implementations of the products offered; WoodenDoor, WoodenWindow etc.

Sample code

Using the above example, our implementation would be:

    public interface Door
    {
        double GetPrice();
    }

    class WoodenDoor : Door
    {
        public double GetPrice()
        {
            return 0; // return price
        }
    }

    class GlassDoor : Door
    {
        public double GetPrice()
        {
            return 0; // return price
        }
    }

    public interface Window
    {
        double GetPrice();
    }

    class WoodenWindow : Window
    {
        public double GetPrice()
        {
            return 0; // return price
        }
    }

    class GlassWindow : Window
    {
        public double GetPrice()
        {
            return 0; // return price
        }
    }

The concrete classes and factories should ideally have protected or private constructors and should have appropriate access modifiers. e.g.

    protected WoodenWindow()
    {
    }

The factories would be like:

    public interface AbstractFactory
    {
        Door GetDoor();
        Window GetWindow();
    }

    class CombinationA : AbstractFactory
    {
        public Door GetDoor()
        {
            return new WoodenDoor();
        }

        public Window GetWindow()
        {
            return new WoodenWindow();
        }
    }

    class CombinationB : AbstractFactory
    {
        public Door GetDoor()
        {
            return new GlassDoor();
        }

        public Window GetWindow()
        {
            return new GlassWindow();
        }
    }

And the client:

    public class Room
    {
        Door _door;
        Window _window;

        public Room(AbstractFactory factory)
        {
            _door = factory.GetDoor();
            _window = factory.GetWindow();
        }

        public double GetPrice()
        {
            return this._door.GetPrice() + this._window.GetPrice();
        }
    }

    AbstractFactory woodFactory = new CombinationA();
    Room room1 = new Room(woodFactory);
    Console.Write(room1.GetPrice());

    AbstractFactory glassFactory = new CombinationB();
    Room room2 = new Room(glassFactory);
    Console.Write(room2.GetPrice());

The above showcases how an abstract factory can be utilised to instantiate and use related or dependent families of classes via their respective abstractions, without having to know or understand the corresponding concrete classes.

The Room class only knows about the Door and Window abstractions, and lets the configuration/client code input dictate which combination to use at runtime.

Sometimes abstract factory also uses Factory Method or Static Factory Method for factory configurations:

    public static class FactoryMaker
    {
        public static AbstractFactory GetFactory(string type)   // some configuration
        {
            // configuration switches
            if (type == "wood")
                return new CombinationA();
            else if (type == "glass")
                return new CombinationB();
            else   // default or fault config
                return null;
        }
    }

Which changes the client:

    AbstractFactory factory = FactoryMaker.GetFactory("wood");   // configuration or input
    Room room1 = new Room(factory);

As can be seen, polymorphism is at the core of these factories, as is the usage of related families of classes.

Advantages

Creational patterns, particularly Factory, can work alongside other creational patterns. Abstract Factory offers:

  • Isolation of the creation mechanics from their usage for related families of classes.

  • Adding new products/concrete types affects the configuration/factory code rather than the client code.

  • A way for the client to work with abstractions instead of concrete types. This gives the client code flexibility across the related use cases.

  • Reduced dependencies across components, and increased maintainability, through the use of abstractions.

  • A natural evolution path: designs often start with Factory Method and evolve towards Abstract Factory (or other creational patterns) as the families of classes expand and their relationships develop.

Drawbacks

Abstract Factory does introduce some disadvantages into a system:

  • It has a fairly complex implementation, and as the families of classes grow, so does the complexity.

  • Relying heavily on polymorphism requires expertise for debugging and testing.

  • It introduces factory classes, which can be seen as added overhead without any direct purpose except the instantiation of other classes, particularly in bigger systems.

  • Factory structures are tightly coupled to the relationships between the families of classes. This introduces maintainability issues and a rigid design.

For example, adding a new type of window or door in the above example would not be as easy. Adding another family of classes, like Carpet and its subtypes, would be even more complex, although this does not affect the client code.

Conclusion

Abstract Factory is a widely used creational pattern, particularly because of its ability to handle the instantiation mechanism of several families of related classes. This is helpful in real-world solutions where entities are often interrelated and work in combination across a variety of use cases.

Abstract Factory simplifies designs targeting business processes by eliminating the concrete types and replacing them with abstractions, while maintaining their encapsulation and removing the added complexity of object creation. This also reduces a lot of duplicate code on the client side, making business processes testable, robust and independent of the underlying concrete types.

In the third part on creational patterns, we will discuss a pattern slightly similar to Abstract Factory: the Builder. Sometimes the two can be competitors in design decisions, but they differ in real-world applications.

Further reading

http://www.dofactory.com/net/abstract-factory-design-pattern

http://www.oodesign.com/abstract-factory-pattern.html


Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration

Last week in a customer development environment I had one of those oh shit moments where I thought I’d lost a couple of weeks of work: a couple of weeks of development around multiple Management Agents, MV schema changes etc. Luckily for me I had just been connecting to an older VM image, but it got me thinking. It would be nice to have an automated process that each night would;

  • Export each Management Agent on a FIM/MIM Sync Server
  • Export the FIM/MIM Synchronisation Server Configuration
  • Take a copy of the Extensions Folder (where I keep my PowerShell Management Agents scripts)
  • Export the FIM/MIM Service Server Configuration

And that is what this post covers.

Overview

My automated process performs the following;

  1. An Azure PowerShell Timer Function App is triggered at 2330 each night
  2. The Azure Function App initiates a Remote PowerShell session to my Dev MIM Sync Server (which is also a MIM Service Server)
  3. In the Remote PowerShell session the script;
    1. Creates a new subfolder under c:\backup with the current date and time (dd-MM-yyyy-hh-mm)
    2. Creates further subfolders for each of the backup elements (MAExports, ServerExport, MAExtensions, PortalExport)
    3. Utilizes the Lithnet MIIS Automation PowerShell Module to enumerate each of the Management Agents on the FIM/MIM Sync Server, export each Management Agent to the MAExports folder, and export the FIM/MIM Sync Server configuration to the ServerExport folder
    4. Copies the Extensions folder and subfolder contents to the MAExtensions folder
    5. Utilizes the FIM/MIM Export-FIMConfig cmdlet to export the FIM Service configuration to the PortalExport folder
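As an indicative sketch of part of the remote script body (paths are examples, the Extensions path assumes a default MIM Sync install, and I’ve left the Lithnet MA/server export calls as a comment since the exact cmdlet names should be checked against the module’s documentation):

# Create the dated backup folder structure
$backupRoot = "C:\backup\$(Get-Date -Format 'dd-MM-yyyy-hh-mm')"
"MAExports","ServerExport","MAExtensions","PortalExport" | ForEach-Object {
    New-Item -ItemType Directory -Path (Join-Path $backupRoot $_) -Force | Out-Null
}

# Enumerate and export each Management Agent and the Sync Server configuration here,
# using the Lithnet MIIS Automation PowerShell Module (see the module documentation)

# Copy the Extensions folder (default MIM Sync installation path)
Copy-Item "C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions" -Destination "$backupRoot\MAExtensions" -Recurse

# Export the FIM/MIM Service (Portal) configuration
Add-PSSnapin FIMAutomation
$policy = Export-FIMConfig -PolicyConfig -PortalConfig -SchemaConfig
$policy | ConvertFrom-FIMResource -File "$backupRoot\PortalExport\PolicyExport.xml"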

Implementing the FIM/MIM Backup Process

The majority of the setup to get this to work I’ve covered in other posts, particularly around Azure PowerShell Function Apps and Remote PowerShell into a FIM/MIM Sync Server.

Pre-requisites

  • I created a C:\Backup Folder on my FIM/MIM Server. This is where the backups will be placed (you can change the path in the script).
  • I installed the Lithnet MIIS Automation PowerShell Module on my MIM Sync Server
  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file, uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 2330 every night using the following CRON configuration

0 30 23 * * *

  • I set the Azure Function App timezone to my timezone so that the nightly backup happens at the correct time relative to my timezone. I got my timezone index from here. I set the following variable in my Azure Function Application Settings to my timezone name, AUS Eastern Standard Time:

    WEBSITE_TIME_ZONE

The Function App Script

With all the pre-requisites met, the only thing left is the Function App script itself. Here it is. Update lines 2, 3 and 6 if your variables and password key file are different. The path to your password key file will be different on line 6 anyway.

Update line 25 if you want the backups to go somewhere else (maybe a DFS share).
If your MIM Service Server is not on the same host as your MIM Sync Server, change line 59 to that hostname. You’ll need to get the FIM/MIM Automation PS Modules onto your MIM Sync Server too. Details on how to achieve that are here.

Running the Function App I have limited output, but enough to see it run. The first part of the script runs very quickly; the Export-FIMConfig is what takes the majority of the time. That said, it’s less than a minute to get a nice point-in-time backup that is auto-magically executed nightly. Sorted.

Summary

The script itself can be run standalone, and you could implement it as a Scheduled Task on your FIM/MIM Server. However, I’m using Azure Functions for a number of things, and having something that is easily portable, repeatable and centralised with other functions (pun not intended) keeps things organised.

I now have a daily backup of the configurations associated with my development environment. I’m sure this will save me some time in the near future.

Follow Darren on Twitter @darrenjrobinson


How to configure Paged Imports on the Granfeldt FIM/MIM PowerShell Management Agent

Introduction

In the last 12 months I’ve lost count of the number of PowerShell Management Agents I’ve written to integrate Microsoft Identity Manager with a plethora of environments. The majority though have not been of huge scale (<50k objects), and the import of the managed entities into the Connector Space/Metaverse runs through pretty quickly.

However, this week I’ve been working on an AzureAD Groups PS MA for an environment with 40k+ groups. That in itself isn’t that large, but when you start processing group memberships as well, the import process can take an hour for a Full Sync. During this time, before the results are passed to the Sync Engine, you don’t have any visibility of where the import is up to (other than debug logging), and stopping the MA requires a restart of the Sync Engine Server.

I’ve wanted to mess with paging the imports for some time, but it hadn’t been a necessity. Now it is, so I looked at working out how to achieve it. The background information on Paged Imports is available at the bottom of the PSMA documentation page here. However, there are no working examples. I contacted Soren and he had misplaced his demo scripts for the time being. With some time at hand (in between coats of paint on the long weekend renovation), I therefore worked it out for myself. I detail how to implement Paged Imports in this blogpost.

This post uses an almost identical Management Agent to the one described in this post. Review that post to get an understanding of the AzureAD Differential Queries. I’m not going to cover those elements, or setting up the MA, in this post.

Getting Started

There are two things you need to do in preparation for enabling Paged Imports on your PowerShell Management Agent;

  1. Enable Paged Imports (if your Import.ps1 is checking for this setting)
  2. Configure Page Size on your Import Run Profiles

The first is as simple as ticking the checkbox on the Global Parameters tab of your PS MA, as shown below.

The second is in your Run Profile. By default the page size will be 100. For my “let’s figure this out” process I dropped the page size to 50 on one Run Profile and 10 on another.

Import Script

With Paged Imports set up on the MA, the rest of the logic goes into your Import script. In the param section at the start of the script, $usepagedimport and $pagesize are the variables that reflect the two settings you enabled above.

$usepagedimport is either True or False. Your Import.ps1 script can check whether it is set and process accordingly. In this example I’m not even checking whether it is set, and doing Paged Imports anyway. For completeness, in a production example you should check what the intention of the MA is.

$pagesize is the page size from the Run Profile (100 by default, or whatever you changed yours to).

param (
    $Username,
    $Password,
    $Credentials,
    $OperationType,
    [bool] $usepagedimport,
    $pagesize
 )

 

An important consideration to keep in mind is that the Import.ps1 will be called multiple times (i.e. number of calls = number of objects / page size).

So anything that in any other MA you would normally expect to be processed only once per import run, you need to explicitly limit to running only once.

Essentially the way I’ve approached it is: retrieve all the objects that will be processed and put them in a global variable. If the variable does not have any values/data, then it is the first run, so go and get the source data. If the global variable has values/data in it, then we must be on a subsequent loop, so there is no need to process that part again; just page through the import.

In my example below this appears as;

if (!$global:tenantObjects) {
    # Authenticate
    # Search and get the users
    # Do some rationalisation on the results (if required)
    # setup some global variables so we know where we are with processing the data
} # Finish the one time tasks

As you’ll see in the full Import.ps1 script below, there are more lines that could be moved into this section so they don’t get processed each time. In a production implementation I would do that.

For the rest of the Import.ps1 script we expect it to run multiple times. This is where we do our logic and process our objects to send through to the Sync Engine/Connector Space. We need to keep track of where we are up to in processing the dataset and continue on from where we left off. We also need to know how many objects we have processed in relation to the pagesize from the Run Profile, so we know when we’ve finished.

When we reach the page size but know we have more objects to process, we set $global:MoreToImport to $true and break out of the foreach loop.

When we have processed all our objects we set $global:MoreToImport = $false and break out of the foreach loop to finish.
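Reduced to a skeleton, the paging logic looks something like this (illustrative only; the attribute names must match your schema, and $global:tenantObjects and $global:nextIndex are set up in the one-time block above; the full script below uses a foreach, where this sketch uses a while for brevity):

$count = 0
$global:MoreToImport = $false
while ($global:nextIndex -lt $global:tenantObjects.Count) {
    $group = $global:tenantObjects[$global:nextIndex]
    $global:nextIndex++
    $count++

    # Emit the object to the Sync Engine as a hashtable
    $obj = @{}
    $obj.Add("objectClass","group")
    $obj.Add("objectId",$group.objectId)
    $obj

    # Page boundary reached but objects remain: tell the MA to call us again
    if ($count -ge $pagesize -and $global:nextIndex -lt $global:tenantObjects.Count) {
        $global:MoreToImport = $true
        break
    }
}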

With that explanation out of the way here is a working example. I’ve left in debugging output to a log file so you can see what is going on.

You can get the associated Schema.ps1 from the Management Agent described in this post. You’ll need to update your tenant name on line 29 and your directory paths on lines 10 and 47. If you are using a different version of the AzureADPreview PowerShell Module, you’ll need to change line 26 as well.

Everything else is in the comments within the example script below and should make sense.

Summary

For managing a large number of objects on a PS MA we can now see progress as the import processes the objects, and we can now stop an MA if required.

I’m sure this will help someone else. Enjoy.

Follow Darren on Twitter @darrenjrobinson

Integrating Microsoft Flow with Azure Functions for Non-IT People

Microsoft Flow (Flow) creates automated workflows between various apps and services so that users can get notifications, collect data and more. This is similar to Azure Logic Apps (Logic Apps), but targets different audiences such as marketing, sales and other non-IT people. This document provides high-level comparisons between Flow, Logic Apps and Azure Functions.

Flow contains a comprehensive number of pre-defined workflows called templates, so we can simply choose one of them, provide the necessary information and use it. If there is no template suitable for our purpose, we can create a new one from scratch using pre-defined triggers and actions. And if there is no suitable pre-defined trigger or action, we can use a simple HTTP trigger built with Azure Functions. In this post, we are going to have a look at how to use Azure Functions, an HTTP Trigger in particular, to integrate with Flow.

As a Marketing Staff, I Want to …

Let’s say there is someone from a marketing department. They want to search all Twitter posts with a given hashtag, #ausopen for example, and have those posts fetched into their marketing Slack channel. This can be easily accomplished by using a pre-defined template.

We can easily set the hashtag they want to follow and the Slack channel to fetch the posts into, like:

This is all set! Too easy! Now, as we are on the Free plan, this Flow runs every five minutes. If we want to run the flow more frequently, we should upgrade to one of the paid plans, like Flow Plan 1 (runs every 3 minutes) or Flow Plan 2 (runs every minute). Once the flow runs, the marketing channel in Slack will receive all the tweets like:

We’ve so far created a Flow item as an example.

As a …, I Want to Handle those Tweets in a Different Way

Perhaps the marketing staff need more sophisticated analysis, storing those tweets in a database, or want to do something else that the pre-defined actions/triggers don’t support out-of-the-box. In this case we can introduce an HTTP Trigger Function to do so. Let’s create an HTTP Trigger Function.

Of course, we should implement more complex logic in the function. However, this is just an example, so for now we only log how Flow passes the data to the Azure Function. When the function is ready like above, we know its endpoint URL, like https://my-function-app.azurewebsites.net/api/TwitterWebhoook?code=XXXXXX.
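For instance, if the function were a v1 PowerShell HTTP trigger (the sample here may well be in another language; this is just an assumed equivalent, where the runtime supplies $req and $res as file paths), logging the incoming payload looks like:

# Functions v1 PowerShell HTTP trigger: $req and $res are file paths supplied by the runtime
$requestBody = Get-Content $req -Raw
Write-Output "Payload received from Flow: $requestBody"

# Return a simple response so Flow knows we got it
Out-File -Encoding ascii -FilePath $res -InputObject "Received"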

Copy this endpoint URL for Flow. Now we need to modify the existing Flow item like:

When a new tweet with the hashtag #ausopen is found, the entire tweet object is passed to Azure Functions through the POST method, then the tweet is posted to the Slack channel. Wait for up to five minutes (we’re on the Free Plan!).

Slack channel has finally been updated.

This is the log from Flow:

And this is the log from Azure Functions:

So far, we have integrated Azure Functions (HTTP Trigger) with Microsoft Flow so that we can do more complex jobs through it. The code used in this post was very simple but, depending on the complexity of the requirements, the function can handle jobs in a more sophisticated way.

Adding YAML Settings into ASP.NET Core Apps

Unlike traditional ASP.NET web apps, which use web.config for configuration, ASP.NET Core apps support various file formats for it. When we actually look at the source code, configuration supports XML, JSON, INI, Azure Key Vault, in-memory collections, command line arguments and environment variables. However, another popular format, YAML, is not officially supported in ASP.NET Core at the time of writing. In this post, we are going to walk through how we can import a YAML settings file into an ASP.NET Core web application.

You can find the sample code used in this post here.

Analysis of ConfigurationBuilder

When we first create a new ASP.NET Core application project, whether it is a web app or an API app, we can always see how ConfigurationBuilder is implemented.

As we can see above, appsettings.json is included as a default, followed by the secret settings if we’re in development mode, then environment variables. If we want, we can add an XML file by adding builder.AddXmlFile("appsettings.xml"); or an INI file by adding builder.AddIniFile("appsettings.ini");, or others from various sources. In this case, we must keep in mind that the order in which settings are added is very important: settings defined in a later file always overwrite settings defined previously. For example, if the JSON settings are loaded first and the XML settings later, and the same key exists in both settings files, the value from the XML settings will always be considered.

AddYamlSettings()

As YAML settings are not supported out-of-the-box, we need an extension. As there’s already a NuGet package for this extension, we will just use it. Here are the sample appsettings.json and appsettings.yml files:

Now, we’re modifying the existing Startup.cs file to load YAML settings.

The builder.AddYamlFile() method has been added right after the builder.AddJsonFile() method. Therefore, we expect the clientId value to come FROM appsettings.yml. Let’s check it out. First of all, we need to inject the settings into a controller. Here’s a code bit to deserialise the settings into a strongly-typed instance and inject it into the built-in IoC container.

Now, we need to resolve that instance from the controller level, which is a typical DI stuff.

We’re all set. Let’s run the app and see how it’s going. Can we see the result like below?

Order is Important!

As we saw above, later settings take precedence for the same key. Now, let’s swap the order of the JSON and YAML settings like:

Then try the web app again. What can we expect?

So far we have looked at how to import/load YAML settings into our ASP.NET Core web apps. Can we use YAML for our ASP.NET Core web app now? Yes, we can!