Azure Load Balancer – Add/Remove VMs

 

Still stuck on Azure Service Manager (ASM)? Have load balancers in your environment that you often need to reconfigure to add or remove VMs? Not a worry. When it comes to load balancer configuration in ASM we are pretty much tied to PowerShell, but in this post I will show you how you can use simple PowerShell scripts to configure your load balancer.

Azure Load Balancer is a layer 4 (TCP, UDP) load balancer that distributes incoming traffic for load and availability. The Azure classic portal does not provide any functionality for administrators to configure a load balancer via the portal; the only option we have is PowerShell.

In real-world scenarios you will often need to take your Azure VMs out of a load balancer to perform updates or to troubleshoot production issues, and that's where the capability to configure your load balancers comes in handy. Let's look at a simple example scenario where you have two Azure VMs, Web01 and Web02, in a subscription named Myazuresubscription, both configured behind an external load balancer in Azure named ExtLB. The VMs have cloud service names of Webserv01 and Websrv02 respectively. Let's get started:

Remove a VM from the Load Balancer

Let’s first log into our subscription using the following PowerShell commands.
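
A minimal sketch using the classic (ASM) Azure PowerShell module, with the subscription name from our example:

    Add-AzureAccount
    Select-AzureSubscription -SubscriptionName "Myazuresubscription"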

Once you are logged into your subscription, it's time to take your VM out of the load balancer. It's worth mentioning that this essentially means we are going to remove the VM's endpoints that are associated with the load balancer. Typically, a VM behind a load balancer will be a web server, meaning we will have endpoints configured for HTTP and HTTPS, so we will need to remove both of these endpoints to take it off the load balancer. Your scenario may differ, but in this example I will assume we have an endpoint configured for each protocol.

Let’s inspect the existing endpoints of vm Web01:
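
Something along these lines (the cloud service name is the one our VM lives in):

    Get-AzureVM -ServiceName "Webserv01" -Name "Web01" | Get-AzureEndpoint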

An important thing to note is that you will need to know the cloud service name of your VM. You can view this under your VM's dashboard in ASM; in ARM it will be the name of the resource group in which the VM resides.

 

vm_endpoints

 

The LBSetName highlighted in red represents the name of the load balancer, and the name highlighted in green represents the name of the endpoint. We will use the endpoint name in the following PowerShell.

To remove the HTTP and HTTPS endpoints from the load balancer we will run the following command for each endpoint. In this example we will run it twice: once for HTTP and once for HTTPS.
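
A sketch of the removal; the endpoint names here are placeholders, so use the names returned by the inspection above:

    Get-AzureVM -ServiceName "Webserv01" -Name "Web01" |
        Remove-AzureEndpoint -Name "Http" |
        Update-AzureVM

    Get-AzureVM -ServiceName "Webserv01" -Name "Web01" |
        Remove-AzureEndpoint -Name "Https" |
        Update-AzureVM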

 

This will remove the VM from the load balancer. To verify, you can rerun the command we used above to inspect the VM endpoints and you will see the endpoints removed in the output. Once you have removed all of the VM's endpoints configured with the load balancer you can work on your VM, and once you are ready it's time to add it back.

An important thing to consider is that you should not remove both web servers from the load balancer at the same time, as it may result in service loss.

 

Add VM to Azure Load Balancer

To add a VM to an Azure load balancer, the following PowerShell script can be used. Again, you will need to run this script twice, once each for the HTTP and HTTPS endpoints.
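
A sketch using Add-AzureEndpoint with the LB set name from our example; the endpoint name and probe settings are placeholders to adjust for your environment (run it again with the HTTPS name and port 443):

    Get-AzureVM -ServiceName "Webserv01" -Name "Web01" |
        Add-AzureEndpoint -Name "Http" -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName "ExtLB" -ProbeProtocol tcp -ProbePort 80 |
        Update-AzureVM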

 

 

 

And we are done. We have successfully added a VM to an Azure load balancer for both HTTP and HTTPS endpoints. An important thing to remember is that if your VMs are deployed in ARM, you can add/remove VMs from a load balancer using the Azure portal as well as PowerShell.

Also, if you are looking to configure your load balancer's distribution mode, have a read of another fantastic blog written by one of our Kloudies.

 

Azure Load Balancer – Set Distribution Mode

 

 

How to create a PowerShell FIM/MIM Management Agent for AzureAD Groups using Differential Sync and Paged Imports

Introduction

I’ve been working on a project where I need visibility of a large number of Azure AD groups in Microsoft Identity Manager.

In order to make this efficient I need to use the Differential Query function of the AzureAD Graph API. I’ve detailed that before in this post: How to create an AzureAD Microsoft Identity Manager Management Agent using the MS GraphAPI and Differential Queries. Due to the number of groups and the number of members in the Azure AD groups, I also needed to implement Paged Imports on my favourite PowerShell Management Agent (the Granfeldt PowerShell MA). I’ve previously detailed that here: How to configure Paged Imports on the Granfeldt FIM/MIM PowerShell Management Agent.

This post details using these concepts together specifically for AzureAD Groups.

Pre-Requisites

Read the two posts linked above. They detail Differential Queries and Paged Imports. My solution also utilises another of my favourite PowerShell modules, the Lithnet MIIS Automation PowerShell Module. Download and install that on the MIM Sync Server where you will be creating the MA.

Configuration

Now that you’re up to speed, all you need to do is create your Granfeldt PowerShell Management Agent. That’s also covered in the post linked above  How to create an AzureAD Microsoft Identity Manager Management Agent using the MS GraphAPI and Differential Queries.

What you need is the Schema and Import PowerShell Scripts. Here they are.

Schema.ps1

There are two object classes on the MA, because we need the users that are members of the groups on the same MA: membership is a reference attribute. When you bring the groups through into the Metaverse, and assuming you have an Azure AD users MA using the same anchor attribute, you’ll get the reference link for the members and their full object details.
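
The script itself isn't reproduced here, but the shape the Granfeldt PSMA expects from Schema.ps1 is roughly this (attribute names are illustrative, and the multi-valued reference syntax should be checked against the PSMA documentation):

    # User object class (anchor and attributes declared as "name|type" note properties)
    $obj = New-Object -TypeName PSCustomObject
    $obj | Add-Member -MemberType NoteProperty -Name "Anchor-id|String" -Value "1"
    $obj | Add-Member -MemberType NoteProperty -Name "objectClass|String" -Value "user"
    $obj | Add-Member -MemberType NoteProperty -Name "displayName|String" -Value "Display Name"
    $obj

    # Group object class, with member as a multi-valued reference attribute
    $obj = New-Object -TypeName PSCustomObject
    $obj | Add-Member -MemberType NoteProperty -Name "Anchor-id|String" -Value "1"
    $obj | Add-Member -MemberType NoteProperty -Name "objectClass|String" -Value "group"
    $obj | Add-Member -MemberType NoteProperty -Name "displayName|String" -Value "Display Name"
    $obj | Add-Member -MemberType NoteProperty -Name "member|Reference[]" -Value ""
    $obj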

Import.ps1

Here is my PSMA Import.ps1 that performs what is described in the overview: enumerate AzureAD for groups, and import the active ones along with their group membership.
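
The full script isn't reproduced here, but its rough skeleton is below. This is a sketch only: token acquisition, deltaLink persistence and the per-group membership lookups are omitted, and the parameter list follows the Granfeldt PSMA convention.

    param ($Username, $Password, $Credentials, $OperationType, $usepagedimport, $pagesize)

    # Full import: enumerate all groups. Delta: append the stored deltaLink so the
    # AzureAD Graph API returns only groups/memberships changed since the last run.
    $uri = "https://graph.windows.net/mytenant.onmicrosoft.com/groups?api-version=1.6"
    $results = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = $global:accessToken }

    # Emit each group to the sync engine as a hashtable; when $usepagedimport is true,
    # stop at $pagesize objects per page and persist your position between pages.
    foreach ($group in $results.value) {
        $obj = @{}
        $obj.Add("objectClass", "group")
        $obj.Add("id", $group.objectId)
        $obj.Add("displayName", $group.displayName)
        # "member" would be added here as an array of member object IDs
        $obj
    }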

Summary

This is one solution for managing a large number of Azure AD groups with large memberships via a PowerShell MA. Paged imports show progress during a run, and differential sync allows subsequent delta-sync run profiles to complete quickly.

I’m sure this will help someone else. Enjoy.

Follow Darren on Twitter @darrenjrobinson

Why are you not using Azure Resource Explorer (Preview)?

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango. Connect on LinkedIn.

***

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and, clicking away, stumbled upon the Azure Resource Explorer, available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac if you were wondering) and finally BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell”. Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer?

This is a [new to me] website, running the Bootstrap HTML, CSS and JavaScript framework (an older version, but, like yours truly here on clouduccino), that provides streamlined and rather well laid out access to various REST API details/calls for any given Azure subscription. You can log in and view some nice management REST APIs, and make changes to Azure infrastructure in your subscription via REST calls/actions like get, put, post, delete and create.

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this of any use across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions mixed in with the REST APIs.

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but, I could easily create one!

Use cases

I’m sure that all looks “interesting”; with an example above with nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start playing with and tinkering with the Resource Explorer portal yourselves (go ahead and do that, but, after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need is to check one bit of information, say a route table for a VNET. PowerShell can also be time-consuming: all that typing and memorising cmdlets and stuff (so 2016..).

A lot of the information you need can be grabbed at a glance from the ARE portal, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET, but if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you are not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below is a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, with all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 disks attached, if you were interested).

vmSizes view for all currently available VM sizes in Azure. Handy to grab the programmatic name to then use in PowerShell scripting?!

Use case #3 – PowerShell script examples

Mixing the REST API calls with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example: get, set, delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.
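
For example, something along these lines (a sketch using the AzureRM cmdlets; the resource group and VNET names are placeholders):

    # Log in and inspect a VNET's configuration, as the ARE portal's PowerShell samples suggest
    Login-AzureRmAccount
    Get-AzureRmVirtualNetwork -ResourceGroupName "test-rg" -Name "test-vnet"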

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure utility belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best,

Lucian

 

 

 

Introduction to MIM Advanced Workflows with MIMWAL

Introduction

Microsoft late last year introduced the ‘MIMWAL’, or to say it in full: (inhales) ‘Microsoft Identity Manager Workflow Activity Library’ – an open source project that extends the default workflows & functions that come with MIM.

Personally I’ve been using a version of MIMWAL for a number of years, as have my colleagues, in working on MIM projects with Microsoft Consulting. This is the first time, however, that it’s been available publicly to all MIM customers, so I thought it’d be a good idea to introduce how to source it, install it and work with it.

Microsoft (I believe for legal reasons) doesn’t host a compiled version of MIMWAL; instead it hosts the source code on GitHub for customers to source, compile and potentially extend. The front page of Microsoft’s MIMWAL GitHub library can be found here: http://microsoft.github.io/MIMWAL/

Compile and Deploy

Now, the official deployment page is fine (GitHub) but I personally found Matthew’s blog to be an excellent process to use (ithinkthereforeidam.com). Ordinarily, when it comes to installing complex software, I usually combine multiple public and private sources and write my own process, but this blog is so well done I couldn’t fault it.

…however, some minor notes and comments about the overall process:

  • I found that I needed to copy the gacutil.exe and sn.exe utilities you extract from the old FIM patch into the ‘Solution Output’ folder as well. The process mentions they need to be in ‘src\Scripts’ (Step 6), but they also need to be in ‘Solution Output’, which you can see in the last screenshot of that Explorer folder in Step 8 (of the ‘Configure Build/Developer Computer’ process).
  • I found the slowest tasks in the entire process was sourcing and installing Visual Studio, and extracting the required FIM files from the patch download.  I’d suggest keeping a saved Windows Server VM somewhere once you’ve completed these tasks so you don’t have to repeat them in case you want to compile the latest version of MIMWAL in the future (preferably with MIM installed so you can perform the verification as well).
  • Be sure to download the ‘AMD 64’ version of the FIM patch file if you’re installing MIMWAL onto a Windows Server 64-bit O/S (which pretty much everyone is).  I had forgotten that old 64 bit patches used to be titled after the AMD 64-bit chipset, and I instead wasted time looking for the newer ‘x64’ title of the patch which doesn’t exist for this FIM patch.

 

‘Bread and Butter’ MIMWAL Workflows

I’ll go through two examples of MIMWAL based Action Workflows here that I use for almost every FIM/MIM implementation.

These action workflows have been part of previous versions of the Workflow Activity Library, and you can find them in the MIMWAL Action Workflow templates:

I’ll now run through real world examples in using both Workflow templates.

 

Update Resource Workflow

I use the Update Resource MIMWAL action workflow all the time to link two different objects together – many times linking a user object with a new, custom ‘location’ object.

For new users, I execute this MIMWAL workflow when a user first ‘Transitions In’ to a Set whose dynamic membership is “User has Location Code”.

For users changing location, I also execute this workflow using a Request-based MPR that fires on the Synchronization Engine changing the “Location Code” for a user.

This workflow looks like the following:

location1

The XPath Filter is:  /Location[LocationCode = '[//Target/LocationCode]']

When you target the Workflow at the User object, it will use the Location Code stored in the User object to find the equivalent Location object and store it in a temporary ‘Query’ object (referenced by calling [//Queries]):

Location2.jpg

The full value expression used above, for example, sending the value of the ‘City’ attribute stored in the Location object into the User object is:

IIF(IsPresent([//Queries/Location/City]),[//Queries/Location/City],Null())

This custom expression determines whether there is a value stored in the ‘[//Queries]’ object (i.e. a copy of the Location object found earlier by the query). If there is a value, it sends it to the City attribute of the user object, i.e. the ‘target’ of the workflow. If there is no value, it sends a null to wipe out the existing value (in case a user changes location and the new location doesn’t have a value for one of the attributes).

It is also a good idea (not seen in this example) to send the Location’s Location Code to the User object and store it in a ‘Reference’ attribute (‘LocationReference’).  That way in future, you can directly access the Location object attributes via the User object using an example XPath:  [//Person/LocationReference/City].

 

Generate Unique Value from AD (e.g. for sAMAccountName, CN, mailnickname)

I’ve previously worked in complex Active Directory and Exchange environments, where there can often be a lot of conflict when it comes to the following attributes:

  • sAMAccountName (used progressively less and less these days)
  • User Principal Name (used progressively more and more these days, although communicated to the end user as ’email address’)
  • CN (or ‘common name’, which forms part of the LDAP Distinguished Name (DN) value). Side note: this is the attribute most commonly mistaken for the ‘Display Name’ by admins when they view it in AD Users & Computers.
  • Mailnickname (used by some Exchange environments to generate a primary SMTP address or ‘mail’ attribute values)

All AD environments require a unique sAMAccountName for any AD account to be created (otherwise you’ll get a MIM export error into AD if there’s already an account with it). They also require a unique CN value in the same OU as other objects, otherwise the object cannot be created. CN uniqueness particularly matters if you export all user accounts for a large organisation to the same OU, where there is a greater chance of a conflict happening.

UPNs are generally unique if you copy a person’s email address, but sometimes not – sometimes it’s best to combine a unique mailnickname, append a suffix and send that value to the UPN value.  Again, it depends on the structure and naming of your AD, and the applications that integrate with it (Exchange, Office 365 etc.).

Note: the default MIMWAL Generate Unique Value template assumes the FIM Service account has the permissions required to perform LDAP lookups against the LDAP path you specify.  There are ways to enhance the MIMWAL to add in an authentication username/password field in case there is an ‘air gap’ between the FIM server’s joined domain and the target AD you’re querying (a future blog post).

In this example of using the ‘Generate Unique Value’ MIMWAL workflow, I tend to execute it as part of a multi-step workflow, such as the one below (Step 2 of 3):

sam1

I use the workflow to generate a query of the LDAP to look for existing accounts, and then send the resulting value to the [//WorkflowData/AccountName] attribute.

The LDAP filter used in this example looks at all existing sAMAccountNames across the entire domain to look for an existing account:   (&(objectClass=user)(objectCategory=person)(sAMAccountName=[//Value]))

The workflow will also query the FIM Service database for existing user accounts (that may not have been provisioned to AD yet) using the XPath filter:  /Person[AccountName = '[//Value]']

The Uniqueness Key Seed in this example is ‘2’, which essentially means that if you cannot resolve a conflict using other attribute values (such as a user’s middle name, or more letters of a first or last name) then you can use this ‘seed’ number to break the conflict as a last resort. This number increments by 1 for each conflict, so if there’s a ‘michael.pearn’ and a ‘michael.pearn2’, for example, the next one to test will be ‘michael.pearn3’ and so on.

sam2

The second half of the workflow shows the rules used to generate sAMAccountName values, in the order in which they break the conflict. In this (very simple) example, I use an employee’s ‘ID number’ to generate an AD account. If there is already an account for that ID number, then this workflow will generate a new account with the string ‘-2’ added to the end of it:

Value Expression 1 (highest priority): NormalizeString([//Target/EmployeeID])

Value Expression 2 (lowest priority):  NormalizeString([//Target/EmployeeID] + "-" + [//UniquenessKey])

NOTE: The function ‘NormalizeString’ is a new MIMWAL function that is also used to strip out any diacritic characters. More information can be found here: https://github.com/Microsoft/MIMWAL/wiki/NormalizeString-Function

sam3

Microsoft has posted other examples of Value Expressions that you could follow here: https://github.com/Microsoft/MIMWAL/wiki/Generate-Unique-Value-Activity

My preference is to use as many value expressions as you can to break the conflict before having to use the uniqueness key. Note: the sAMAccountName has a default 20-character limit, so the ‘Left’ function is often used to trim the number of characters taken from a person’s name, e.g. the left 8 characters of a first name combined with the left 11 characters of a last name (not forgetting to save a character for the seed value deadlock breaker!).

Once the workflow step is executed, I then send the value to the AD Sync Rule (using [//WorkflowData/AccountName]) to pass to the outbound ‘AccountName –> sAMAccountName’ AD rule flow:

sam4

 

More ideas for using MIMWAL

In my research on MIMWAL, I’ve found some very useful links to sample complex workflow chains that use the MIMWAL ‘building block’ action workflows and combine them to do complex tasks.

Some of those ideas, from Microsoft’s own connector_space blog on MSDN, can be found here: https://blogs.msdn.microsoft.com/connector_space/2016/01/15/the-mimwal-custom-workflow-activity-library/

These include:

  • Create Employee IDs
  • Create Home Directories
  • Create Admin Accounts

I particularly like the idea of using the ‘Create Employee ID’ example workflow, something that I’ve only previously done outside of FIM/MIM, for example with a SQL Trigger that updates a SQL database with a unique number.

 

 

Running Vue.js on ASP.NET Core Applications

Vue.js has recently been getting a lot of attention, as it is relatively easy to learn and light in size compared to other popular frameworks like Angular 1, Angular 2 or React. By providing middleware, ASP.NET Core supports front-end frameworks such as Angular 2, Aurelia, React, Knockout, etc., while Vue has been excluded out-of-the-box. Even though we can find many good articles and code samples integrating Vue and ASP.NET Core, they don’t use the basic template that Vue provides, so they’re a little difficult to apply at first glance for developers who want to use both Vue and ASP.NET Core with minimum integration effort. In this post, we are going to integrate Vue.js and ASP.NET Core using their basic templates, with minimum modification.

The code sample used in this post can be found here.

Microsoft.AspNetCore.SpaServices

In order to support front-end development in ASP.NET Core, Visual Studio uses Bower. However, as we are all aware of how fast the front-end development environment changes, an option like Webpack makes developers’ work a lot easier by offering bundling, modules and so forth. ASP.NET Core, of course, supports Webpack as a separate extension. Microsoft.AspNetCore.SpaServices helps developers integrate front-end frameworks with Webpack with ease.

Prerequisites

Throughout this post, we need some preparations: Visual Studio, the .NET Core 1.1 SDK, and node.js with npm installed.

We can also use Visual Studio Code (VSC), but we’re assuming VS for this post.

Creating ASP.NET Core Web Application

Open Visual Studio and create a new ASP.NET Core web application project. As this web application likely uses .NET Core version 1.0.1, we need to update it to 1.1 by updating global.json:

Also, update project.json so that all NuGet packages in the web application can be compiled against .NET Core 1.1:

As mentioned earlier, we’re using Webpack instead of Bower, so remove Bower-related configuration files as well as ASP.NET Core native bundling & minification setting files:

  • .bowerrc
  • bower.json
  • bundleconfig.json

And finally, all directories and files under the wwwroot directory need to be deleted. We’re all set now at the ASP.NET Core application side. Let’s add Vue.

Installing Vue.js

We can use Vue as a stand-alone library by simply including the script from CDN. However, if we want to use its powerful features especially for componentising, module bundlers like Webpack should be considered. In order to use Webpack for Vue.js, if node.js and npm have already been installed, install vue-cli:
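
Assuming node.js and npm are already on the PATH, from a PowerShell (or any) prompt:

    npm install -g vue-cli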

Then, install the basic template by running the following command:
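
The template and project names below are whichever you choose; the official webpack template is a common choice:

    vue init webpack my-vue-app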

During installation, just leave all the questions at their default values and the template gets installed. Now, run the following commands to install all npm packages and run the Vue app:
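
Something like the following (the project directory name matches whatever you chose above):

    cd my-vue-app
    npm install
    npm run dev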

Since Vue’s dev server uses port 8080, we can see the result as expected. However, this is just a Vue app running independently, not one integrated with ASP.NET Core. Let’s move on.

Integrating Vue.js with ASP.NET Core

This is the main part of this post. We need to touch configuration on both sides.

Configuring Vue.js

First things first. We need to install the npm package aspnet-webpack to enable communication between the front-end framework and the back-end application through Webpack:
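
That is, from the project root:

    npm install aspnet-webpack --save-dev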

Once installed, update config/index.js to setup the root directory:

And finally, create webpack.config.js and save it to the root directory of the ASP.NET Core project:

We’ve now got the Vue.js setup done. What’s next?

Configuring ASP.NET Core

The counterpart package of aspnet-webpack on the ASP.NET Core side is Microsoft.AspNetCore.SpaServices. Install it through the NuGet package manager and register the middleware in Startup.cs:

We also need to add a routing configuration for the SPA, as above. Isn’t the ASP.NET Core side relatively simple? Let’s punch the F5 key to run the app and see how it’s going.

We can now see a different port number, one handled by the ASP.NET Core app. So, technically we have completed the integration between Vue.js and ASP.NET Core! How can we be sure they are actually working together? Let’s implement a basic AJAX call to see whether they get along with each other. Of course further modification is required. Who can stop us?

Handling AJAX Request/Response

As the Vue.js core is lightweight, if we want to implement more sophisticated work like handling AJAX requests/responses, we need to install another npm extension called vue-resource:
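
That is:

    npm install vue-resource --save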

Once installed, add it to the Vue instance for use. Update src/main.js like:

Now we need to implement AJAX request/response codes. Open /src/components/Hello.vue and modify it like:

What we can expect from this change is that the message Welcome to Your Vue.js App will be replaced with the value from res.body.message. Now we need to implement the Web API. Here’s the Web API controller and action:

All good. Build the app, press the F5 key and confirm the result.

We can see Hello World, instead of the original message.

Side note: Another benefit of using Webpack is Hot Module Replacement (HMR). As long as the app is up and running on our local machine, we can instantly change something and check the result in no time, without reloading the page. And if we install the .NET Core Watcher tool, we don’t even need to rebuild and rerun the app itself either.

Deploying App to Azure

We’ve built an ASP.NET Core web application with Vue.js. Now it’s time for deployment. We don’t use the View features provided by ASP.NET Core; instead we just use a static(?) wwwroot/index.html. In local development we don’t really have to worry about that, as our development environment automatically detects it. In a production environment, however, we have to specify that we’re using the static index.html page, otherwise the Azure web app can’t recognise it. To enable this feature, we need to update Startup.cs:

Make sure that the UseDefaultFiles() method always comes before the UseStaticFiles() method. Finally, update the deployment/publish related settings in project.json for Vue.js:

Everything is done! Deploy this web app to Azure and see the result. Can we see it as expected?

So far, we have installed Vue.js, integrated it with ASP.NET Core, verified that both are working together, and deployed the app to an Azure web app instance. Some may think this looks too complicated, but actually it’s not that tricky. Rather, we’ve got another option for ASP.NET Core application development using a different front-end framework. As long as a new front-end framework emerges and supports Webpack, we can make it work using this approach. Now, for the next step, let’s build a real business solution!

An Azure Timer Function App to retrieve files via FTP and Remote PowerShell

Introduction

In an age of web services and APIs, it’s almost a forgotten world where FTP servers exist. However, most recently I’ve had to travel back in time and interact with an FTP server to get a set of files that are produced by other systems on a daily basis. These files are needed for some flat-file imports into Microsoft Identity Manager.

Getting files off an FTP server is pretty simple. But needing to do it across a number of different environments (Development, Staging and Production) meant I was looking for an easy approach that I could also replicate quickly across multiple environments. As I already had Remote PowerShell set up on my MIM servers for other Azure Function Apps, I figured I’d use an Azure Function for obtaining the FTP files as well.

Overview

My PowerShell Timer Function App performs the following:

  • Starts a Remote PowerShell session to my MIM Sync Server
  • Imports the PSFTP PowerShell Module
  • Creates a local directory to put the files into
  • Connects to the FTP Server
  • Gets the files and puts them into the local directory
  • Ends the session

Pre-requisites

The overview above includes a number of pre-requisites, and other blog posts I’ve written nicely detail the steps involved to appropriately set them up and configure them. So I’m going to link to those. Namely:

  • Configure your Function App for your timezone so the schedule is correct for when you want it to run. Checkout the WEBSITE_TIME_ZONE note in this post.


  • You’ll need to configure your Server that you are going to put the files onto for Remote PowerShell. Follow the Enable Powershell Remoting on the FIM/MIM Sync Server section of this blogpost.
  • The credentials used to connect to the MIM Server are secured as detailed in the Using an Azure Function to query FIM/MIM Service section of this blog post.
  • Create a Timer PowerShell Function App. Follow the Creating your Azure App Service section of this post but choose a Timer Trigger PowerShell App.
    • I configured my Schedule for 1030 every day using the following CRON configuration
      0 30 10 * * *
  • On the Server you’ll be connecting to in order to run the FTP processes you’ll need to copy the PSFTP Module and files to the following directories. I unzipped the PSFTP files and copied the PSFTP folder and its contents to;
    • C:\Program Files\WindowsPowerShell\Modules
    • C:\Windows\System32\WindowsPowerShell\v1.0\Modules

     

Configuring the Timer Trigger Function App

With all the pre-requisites in place it’s time to configure the Timer Function App that you created in the pre-requisites.

The following settings are configured in the Function App Application Settings:

  • FTPServer (the server you will be connecting to, to retrieve files)
  • FTPUsername (username to connect to the FTP Sever with)
  • FTPPassword (password for the username above)
  • FTPSourceDirectory (FTP directory to get the files from)
  • FTPTargetDirectory (the root directory under which the files will be put)

ApplicationSettings

  • You’ll also need Application Settings for a Username and Password associated with a user that exists on the Server that you’ll be connecting to with Remote PowerShell. In my script below these application settings are MIMSyncCredUser and MIMSyncCredPassword

Function App Script

Finally, here is a raw script. You’ll need to add appropriate error handling for your environment. You’ll also want to change lines 48 and 51 for the naming of the files you are looking to acquire, and line 59 for the server name you’ll be executing the process on.
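
A condensed sketch of the approach (the server name and file selection are placeholders, error handling is omitted, and the PSFTP cmdlet usage should be verified against the module's documentation):

    # Credentials for the Remote PowerShell session, from Application Settings
    $securePassword = ConvertTo-SecureString $env:MIMSyncCredPassword -AsPlainText -Force
    $credentials = New-Object System.Management.Automation.PSCredential ($env:MIMSyncCredUser, $securePassword)

    # Capture FTP settings locally so they can be passed into the remote session
    $ftpServer = $env:FTPServer
    $ftpUser = $env:FTPUsername
    $ftpPassword = $env:FTPPassword
    $ftpSource = $env:FTPSourceDirectory
    $ftpTarget = $env:FTPTargetDirectory

    # Remote PowerShell to the MIM Sync Server (placeholder name)
    $session = New-PSSession -ComputerName "MIMSYNCSERVER" -Credential $credentials

    Invoke-Command -Session $session -ScriptBlock {
        Import-Module PSFTP

        # Local directory for this run's files
        $target = Join-Path $using:ftpTarget (Get-Date -Format "yyyyMMdd")
        New-Item -ItemType Directory -Path $target -Force | Out-Null

        # Connect to the FTP server
        $ftpSecure = ConvertTo-SecureString $using:ftpPassword -AsPlainText -Force
        $ftpCred = New-Object System.Management.Automation.PSCredential ($using:ftpUser, $ftpSecure)
        Set-FTPConnection -Credentials $ftpCred -Server $using:ftpServer -Session FTPRun -UsePassive
        $ftp = Get-FTPConnection -Session FTPRun

        # Get the files and put them into the local directory (filter to your file names here)
        foreach ($file in (Get-FTPChildItem -Session $ftp -Path $using:ftpSource)) {
            Get-FTPItem -Session $ftp -Path $file.FullName -LocalPath $target | Out-Null
        }
    }

    Remove-PSSession $session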

Summary

A pretty quick and simple little Azure Function App that will run each day and obtain daily/nightly extracts from an FTP server. I’m doing cleanup of the resulting folders and files with other on-box processes.

 

This post is cross-blogged on both the Kloud Blog and Darren’s Blog.


Monitor SharePoint Changelog in Azure Function

Azure Functions have officially reached ‘hammer’ status

I’ve been enjoying the ease with which we can now respond to events in SharePoint and perform automation tasks, thanks to the magic of Azure Functions. So many nails, so little time!

The seminal blog post that started so many of us on that road was of course John Liu’s Build your PnP Site Provisioning with PowerShell in Azure Functions and run it from Flow, and that pattern is fantastic for many event-driven scenarios.

One place where it currently (at time of writing) falls down is when dealing with a list item delete event. MS Flow can’t respond to this, and neither can a SharePoint Designer workflow.

Without wanting to get into Remote Event Receivers (errgh…), the other way to deal with this is after the fact via the SharePoint change log (if the delete isn’t time sensitive). In my use case it wasn’t – I just needed to be able to clean up some associated items in other lists.

SharePoint Change Logs

SharePoint has had an API for getting a log of changes of certain types, against certain objects, in a certain time window since the dawn of time. The best post for showing how to query it from the client side is (in my experience) Paul Schaeflin’s Reading the SharePoint change log from CSOM, which was my primary reference for the PowerShell-based Function below.

In my case, I am only interested in items deleted from a single list, but this could easily be scoped to an entire site and capture more/different event types (see Paul’s post for the specifics).

The biggest challenge in getting this working was persisting the change token to Azure Storage, and this wasn’t that difficult in and of itself – it’s just that the PowerShell bindings for Azure Functions are as yet woefully under-documented (TIP: Get-Content and Set-Content are the key to the PowerShell bindings… easy when you know how). In my case I have an input and an output binding to a single Blob Storage blob (to persist the change token for reference the next time the Function runs), and another output to Queue Storage to trigger another Function that actually does the cleanup of the other list items linked to the one now sitting in the recycle bin. The whole thing is triggered by an hourly timer. If nothing has been deleted, then no action is taken (other than the persisted token blob update).

A Note on Scaling

Note that if multiple delete events occurred since the last check, these are all deposited in one message. This won’t cause a problem in my use case (there will never be more than a handful of items deleted in one pass of the Function), but it obviously doesn’t scale well, as too many deletes being handled by the receiving Function would threaten to bump up against the 5-minute execution time limit. I wanted to use the OOTB message queue binding for simplicity, but if you needed to push multiple messages, you could simply use the Azure Storage PowerShell cmdlets instead of an output binding.

Code Now Please

Here’s the Function code (following the PnP PowerShell Azure Functions implementation as per John’s article above and liberally stealing from Paul’s guide above).
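
Below is a condensed sketch of the moving parts rather than the full Function: the CSOM assembly paths, credential handling, list name and binding names ($lastTokenIn, $lastTokenOut, $outputQueue) are all illustrative and must match your own setup and function.json.

    # Load the SharePoint CSOM assemblies deployed alongside the Function (paths are placeholders)
    Add-Type -Path "D:\home\site\wwwroot\MyFunction\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "D:\home\site\wwwroot\MyFunction\Microsoft.SharePoint.Client.Runtime.dll"

    $ctx = New-Object Microsoft.SharePoint.Client.ClientContext($env:SiteUrl)
    $securePwd = ConvertTo-SecureString $env:SPPassword -AsPlainText -Force
    $ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($env:SPUser, $securePwd)

    $list = $ctx.Web.Lists.GetByTitle("My List")   # placeholder list name

    # Build a change query scoped to item deletions
    $query = New-Object Microsoft.SharePoint.Client.ChangeQuery($false, $false)
    $query.Item = $true
    $query.DeleteObject = $true

    # Resume from the token persisted by the previous run (blob input binding)
    $lastToken = Get-Content $lastTokenIn -ErrorAction SilentlyContinue
    if ($lastToken) {
        $query.ChangeTokenStart = New-Object Microsoft.SharePoint.Client.ChangeToken
        $query.ChangeTokenStart.StringValue = $lastToken
    }

    $changes = $list.GetChanges($query)
    $ctx.Load($changes)
    $ctx.ExecuteQuery()

    # Queue the deleted item IDs for the follow-up Function (queue output binding)
    $deletedIds = $changes | ForEach-Object { $_.ItemId }
    if ($deletedIds) {
        Set-Content -Path $outputQueue -Value ($deletedIds -join ",")
    }

    # Persist the list's current change token for the next run (blob output binding)
    $ctx.Load($list)
    $ctx.ExecuteQuery()
    Set-Content -Path $lastTokenOut -Value $list.CurrentChangeToken.StringValue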

Going Further

This is obviously a simple example with a single objective, but you could take this pattern and ramp it up as high as you like. By targeting the web instead of a single list, you could push a lot of automation through this single pipeline, perhaps ramping up the execution recurrence to every 5 mins or less if you needed that level of reduced latency. Although watch out for your Functions consumption if you turn up the executions too high!

UX Process: A groundwork for effective design teams

User Experience practice is about innovating and finding solutions to real-world problems, which means we need to find problems and validate them before trying to fix them. So how do we go about doing all this? Read on…

I’ve been asked to explain a “good UX process” numerous times over the years in consulting. Customers want a formula, so to speak, that can solve all their design problems. But unfortunately, it doesn’t exist: there is no set UX process that applies to all.

Every organisation and its problems are unique. They all require different sets of UX activities to determine a positive outcome.

However, there are some general guidelines on:

  • What type of UX artefacts can we deliver?
  • Who do we engage and collaborate with?
  • What kind of UX activities / workshops can we suggest?

To answer the above, I put together a general UX design process with the help of my colleagues a few years back. So here it goes.

 


Phase I – Discovery

People Involved
  1. Product Owner (Whoever is funding the project)
  2. Project Manager (Whoever is overseeing the project)
  3. Business Analyst (Whoever is managing different teams)
  4. Analytics
  5. Marketing
  6. Information Technology
  7. User Experience Designer
Deliverables
  1. Problem Statements
  2. User Needs
  3. Design principles
  4. Benchmarking
  5. Personas
  6. Service maps
  7. Hypotheses
Activities
  1. Discover pain-points
  2. Discuss solutions to pain points (utopia)
  3. Analyse competitors in similar space
  4. Discover potential constraints (IT or culture related)
  5. Come up with a basic information architecture (homepage elements, navigation and unique pages)

Phase II – Ideation and Concept

People Involved
  1. Product Owner
  2. Project Manager
  3. Business Analyst
  4. Developers
  5. Information Technology
  6. User Experience Designer
  7. SEO
Deliverables
  1. Concept Vision
  2. High Level Requirements
  3. UX Estimates
  4. Dev Estimates
  5. UX Epics
  6. Story Boards
  7. Experience Maps
  8. Navigation
Activities
  1. User Testing
  2. Feasibility Prototyping
  3. Workshop Facilitations

Phase III – Design and Build

People Involved
  1. Project Manager
  2. Business Analyst
  3. Marketing
  4. Developers
  5. User Experience Designer
  6. Visual Designer
  7. SEO
Deliverables
  1. Wireframes
  2. Visual Designs
  3. User Interface Specs
  4. Process Flows
Activities
  1. Collaborative Design Sessions
  2. 6-ups Designs
  3. Rapid User Testing
  4. Wireframes
  5. UI Trends

Phase IV – Measure and Respond

People Involved
  1. Project Manager
  2. Analytics
  3. Developers
  4. SEO
Deliverables
  1. UI Improvements
  2. UX Enhancements
Activities
  1. Advanced Analytics
  2. Collaborative Design Sessions
  3. User Testing (A/B Testing)

The best way to use this UX process is, after understanding your client’s requirements, to extract the best bits that suit your needs and take it from there.

I hope you guys find this useful!


Back to Basics – Design Patterns – Part 2

In the previous post, we discussed design patterns, their structure and usage. Then we discussed the three fundamental types and started off with the first one – Creational Patterns.

Continuing with creational patterns, we will now discuss the Abstract Factory pattern, which is considered to be a superset of Factory Method.

Abstract Factory

In Factory Method, we discussed how it targets a single family of subclasses using a corresponding set of factory method classes, or a single factory method class via a parametrised/ static factory method. But if we target families of related classes (multiple abstractions, each having their own subclasses) and need to interact with them using a single abstraction, then Factory Method will not work.

A good example could be the creation of doors and windows for a room. A room could offer a combination of a wooden door, sliding door etc. and a wooden window, glass window etc. The client will, however, interact with a single abstraction (the abstract factory) to create the desired door and window combination based on selection/ configuration. This could be a good candidate for an Abstract Factory.

So Abstract Factory allows instantiation of families (plural) of related classes using a single interface (the abstract factory), independent of the underlying concrete classes.

Reasons

When a system needs to use families of related or dependent classes, it might need to instantiate several subclasses. This leads to code duplication and complexity. Taking the above example of a room, the client would need to instantiate classes for doors and windows for one combination and then do the same for the others, one by one. This breaks the abstraction of those classes, exposes their encapsulation, and puts the instantiation complexity on the client. Even if we use a factory method for every single family of classes, this still requires several factory methods, unrelated to each other, and managing them to make combination offerings (rooms) will clutter the code.

We will use Abstract Factory when:

  • A system uses families of related or dependent objects without any knowledge of their concrete types.
  • The client does not need to know the instantiation details of the subclasses.
  • The client does not need to use the subclasses in a concrete way.

Components

There are four components of this pattern.

  • Abstract Factory – the abstraction the client interacts with to create door and window combinations. This is the core factory that provides the interface the individual factories implement.
  • Concrete Factories – the concrete factories (CombinationFactoryA, CombinationFactoryB) that create the concrete products (doors and windows).
  • Abstract Products – the abstract products that will be visible to the client (AbstractDoor & AbstractWindow).
  • Concrete Products – the concrete implementations of the products offered: WoodenDoor, WoodenWindow etc.

 

drawing1

 

Sample code

Using the above example, our implementation would be:

    public interface Door
    {
        double GetPrice();
    }

    class WoodenDoor : Door
    {
        public double GetPrice()
        {
            return 100; // illustrative price
        }
    }

    class GlassDoor : Door
    {
        public double GetPrice()
        {
            return 80; // illustrative price
        }
    }

    public interface Window
    {
        double GetPrice();
    }

    class WoodenWindow : Window
    {
        public double GetPrice()
        {
            return 60; // illustrative price
        }
    }

    class GlassWindow : Window
    {
        public double GetPrice()
        {
            return 40; // illustrative price
        }
    }

The concrete classes and factories should ideally have protected or private constructors and should have appropriate access modifiers. e.g.

    protected WoodenWindow()
    {
    }

The factories would be like:

    public interface AbstractFactory
    {
        Door GetDoor();
        Window GetWindow();
    }

    class CombinationA : AbstractFactory
    {
        public Door GetDoor()
        {
            return new WoodenDoor();
        }

        public Window GetWindow()
        {
            return new WoodenWindow();
        }
    }

    class CombinationB : AbstractFactory
    {
        public Door GetDoor()
        {
            return new GlassDoor();
        }

        public Window GetWindow()
        {
            return new GlassWindow();
        }
    }

And the client:

    public class Room
    {
        Door _door;
        Window _window;

        public Room(AbstractFactory factory)
        {
            _door = factory.GetDoor();
            _window = factory.GetWindow();
        }

        public double GetPrice()
        {
            return this._door.GetPrice() + this._window.GetPrice();
        }
    }

    AbstractFactory woodFactory = new CombinationA();
    Room room1 = new Room(woodFactory);
    Console.Write(room1.GetPrice());

    AbstractFactory glassFactory = new CombinationB();
    Room room2 = new Room(glassFactory);
    Console.Write(room2.GetPrice());

The above showcases how abstract factory could be utilised to instantiate and use related or dependent families of classes via their respective abstractions without having to know or understand the corresponding concrete classes.

The Room class only knows about the Door and Window abstractions and lets the configuration/ client code input dictate which combination to use, at runtime.

Sometimes abstract factory also uses Factory Method or Static Factory Method for factory configurations:

    public static class FactoryMaker
    {
        public static AbstractFactory GetFactory(string type) // some configuration
        {
            // configuration switches
            if (type == "wood")
                return new CombinationA();
            else if (type == "glass")
                return new CombinationB();
            else // default or fault config
                return null;
        }
    }

Which changes the client:

    AbstractFactory factory = FactoryMaker.GetFactory("wood"); // configurations or inputs
    Room room1 = new Room(factory);

As can be seen, polymorphic behaviour is at the core of these factories, as is the usage of related families of classes.

Advantages

Creational patterns, particularly Factory, can work along with other creational patterns. Abstract Factory offers several advantages:

  • Isolation of the creation mechanics from their usage for related families of classes.
  • Adding new products/ concrete types does not affect the client code, only the configuration/ factory code.
  • It provides a way for the client to work with abstractions instead of concrete types. This gives the client code flexibility across the related use cases.
  • Using abstractions reduces dependencies across components and increases maintainability.
  • A design often starts with Factory Method and evolves towards Abstract Factory (or other creational patterns) as the families of classes expand and their relationships develop.

Drawbacks

Abstract factory does introduce some disadvantages in the system.

  • It has a fairly complex implementation, and as the families of classes grow, so does the complexity.
  • Relying heavily on polymorphism requires expertise for debugging and testing.
  • It introduces factory classes, which can be seen as added workload without any direct purpose except the instantiation of other classes, particularly in bigger systems.
  • Factory structures are tightly coupled with the relationships of the families of classes, which introduces maintainability issues and a rigid design.

For example, adding a new type of window or door in the above example would not be as easy; adding another family of classes, like Carpet and its subtypes, would be even more complex. But this does not affect the client code.

Conclusion

Abstract Factory is a widely used creational pattern, particularly because of its ability to handle the instantiation mechanism of several families of related classes. This is helpful in real-world solutions where entities are often interrelated and work in combination across a variety of use cases.

Abstract Factory ensures the simplification of design targeting business processes by eliminating the concrete types and replacing them with abstractions, while maintaining their encapsulation and removing the added complexity of object creation. This also reduces a lot of duplicate code at the client side, making business processes testable, robust and independent of the underlying concrete types.

In the third part of creational patterns, we will discuss a pattern slightly similar to Abstract Factory: the Builder. Sometimes the two can be competitors in design decisions, but they differ in real-world applications.

 

Further reading

 

http://www.dofactory.com/net/abstract-factory-design-pattern

http://www.oodesign.com/abstract-factory-pattern.html


Back to Basics – Design Patterns – Part 1

Design Patterns

Design patterns are reusable solutions to recurring problems of software design and engineering in the real world. Patterns make it easier to reuse proven techniques to resolve design and architectural complications, and to communicate and document them with better understanding, making them more accessible to developers in an abstract way.


Design patterns enhance the classic techniques of object-oriented programming by encouraging the reusability and communication of solutions to common problems at abstract levels, and improve the maintainability of the code as a by-product at implementation levels.

The “Y” in Design Patterns

Apart from the obvious advantage of providing better techniques to map the real world into programming models, a prime objective of OOP is design and code reusability. However, this is easier said than done. In reality, designing reusable classes is hard and takes time, and few devs write code with long-term reusability in mind. This becomes obvious when dealing with recurring problems in design and implementation, and this is where design patterns come into the picture: dealing with problems that seem to appear again and again. Any proven technique that provides a solution to a recurring problem in an abstract, reusable way, independent of implementation barriers like programming language details and data structures, is categorised as a design pattern.

The design patterns:

  • help analyse common problems in a more abstract way.
  • provide proven solutions to those problems.
  • help decrease overall coding time at the implementation level.
  • encourage code reusability by providing common solutions.
  • increase code lifetime and maintainability by enhancing the capacity for change.
  • increase the understanding of the solutions to recurring problems.

A pattern will describe a problem, provide its solution at an abstract level, and elaborate the result.

Problem

The problem part of a pattern describes the issue(s) a program/ piece of code is facing, along with its context. It might highlight an inflexible class structure, or issues related to the usage of an object, particularly at runtime.

Solution

A solution is always defined as a template, an abstract design that will describe its element(s), their relationships, and responsibilities and will detail out how this abstract design will address the problem at hand. A pattern never provides any concrete implementation of the solution, enhancing its reusability and flexibility. The actual implementation of a pattern might vary in different programming languages.

Understanding software design patterns requires respectable knowledge of object oriented programming concepts like abstraction, inheritance, and polymorphic behaviour.

Types of Design Patterns

Design patterns are often divided into three fundamental types.

  • Creational – deals with the creation/ instantiation of objects specific to business use cases. Polymorphic concepts, along with inheritance, are the core of these patterns.
  • Structural – targets the structure and composition of classes. Heavily relies upon the inheritance and composition concepts of OOP.
  • Behavioural – underlines the interaction between classes, separation and delegation of responsibilities.


Creational Patterns

Often the implementation and usage of a class, or a group of classes, is tightly coupled with the way objects of those classes are created. This decreases the flexibility of those classes, particularly at runtime. For example, say we have a group of cars providing the functionality of driving. The creation of each car requires a piece of code – a new constructor call in common modern languages. This infuses the creation of cars with their usage:

Holden car1 = new Holden();
car1.Drive();

Mazda car2 = new Mazda();
car2.Drive();

Even if we use the base type:

Car car1 = new Holden();
car1.Drive();

Car car2 = new Mazda();
car2.Drive();

If you look at the above examples, you will notice that the actual class being instantiated is selected at compile time. This creates problems when designing common functionality and forces hardwired code into the usage based on concrete types. It also exposes the constructors of the classes, which breaks their encapsulation.

Creational patterns provide the means of creating and using objects of related classes without having to identify their concrete types or expose their creational mechanism. This gives more flexibility in the usage of the instantiated objects at runtime without worrying about their types. It also results in less code and eliminates the creational complexity at the usage site, allowing the code to focus on what to do rather than what to create.

CarFactory[] factories = <Create factories>;

foreach (CarFactory factory in factories) {
    Car car = factory.CreateCar();
    car.Drive();
}

The above code removes the creational logic and delegates it to subclasses and factories. This gives flexibility in the usage of classes independent of their creation, and lets the runtime dictate the instantiated types while making the code independent of the number of concrete types (cars) to be used. This code will work regardless of the number of concrete types, enhancing reusability and the separation of creation from usage. We will discuss the above example in detail in the Factory Method pattern.

The two common traits of creational patterns are:

  • Encapsulation of concrete types and exposure using common abstractions.
  • Encapsulation of instantiation and encouraging polymorphic behaviour.

The system leveraging creational patterns does not need to know, or understand concrete types; it handles abstractions only (interfaces, abstract classes). This gives flexibility in configuring a set of related classes at runtime, based on use cases and requirements without having to alter the code.

There are five fundamental creational patterns:

  • Factory method
  • Abstract factory
  • Prototype
  • Singleton
  • Builder

Factory Method

This pattern specifies a way of creating instances of related classes but lets the subclasses decide which concrete type to instantiate at runtime; this is also called Virtual Construction. The pattern encourages the use of interfaces and abstract classes over concrete types. The decision is based upon the input supplied by either the client code or configuration.

Reasons

When client code instantiates a class, it knows the concrete type of that class. This breaks through the polymorphic abstraction when dealing with a family of classes. We may use factory method when:

  • Client code doesn’t know the concrete types of the subclasses to create.
  • The instantiation needs to be deferred to subclasses.
  • The job needs to be delegated to subclasses and client code does not need to know which subclass is doing the job.

Components

factorymethod1

Abstract <Product> (Car)

The interface/ abstraction the client code understands. This describes the family of subclasses to be instantiated.

<Product> (Holden, Mazda, …)

This implements the abstract <product>: the subclass(es) that need to be created.

Abstract Factory (CarFactory)

This provides the interface/ abstraction for the creation of the abstract product, called the factory method. This might also use configuration for creating a default product.

ConcreteFactory (HoldenFactory, MazdaFactory, …)

This provides the instantiation of the concrete subclass/ product by implementing the factory method.

Sample code

Going back to our example of cars earlier, we will provide a detailed implementation of the factory method.

    public abstract class Car // could be an interface instead, if no default behaviour is required
    {
        public virtual void Drive()
        {
            Console.Write("Driving a car");
        }
    }

    public class Holden : Car
    {
        public override void Drive()
        {
            Console.Write("Driving Holden");
        }
    }

    public class Mazda : Car
    {
        public override void Drive()
        {
            Console.Write("Driving Mazda");
        }
    }

    public interface ICarFactory // this will be a class/ abstract class if there is a default factory implementation
    {
        Car CreateCar();
    }

    public class HoldenFactory : ICarFactory
    {
        public Car CreateCar()
        {
            return new Holden();
        }
    }

    public class MazdaFactory : ICarFactory
    {
        public Car CreateCar()
        {
            return new Mazda();
        }
    }

Now the client code could be:

    var factories = new ICarFactory[2];
    factories[0] = new HoldenFactory();
    factories[1] = new MazdaFactory();

    foreach (var factory in factories)
    {
        var car = factory.CreateCar();
        car.Drive();
    }

 

Now we can keep introducing new Car types, and the client code will behave the same way for this use case. The creation of factories could be further abstracted by using a parametrised factory method. This replaces the ICarFactory interface with a concrete factory class:

 

    public class CarFactory
    {
        public virtual Car CreateCar(string type) // any configuration or input from client code
        {
            // switches based on the configuration or input from client code
            if (type == "holden")
                return new Holden();
            else if (type == "mazda")
                return new Mazda();
            // ...
            else // default instantiation/ fault condition etc.
                return null; // default/ throw etc.
        }
    }

 

Which will change the client code:

 

    CarFactory factory = new CarFactory();
    Car car1 = factory.CreateCar("holden"); // configuration/ input etc.
    Car car2 = factory.CreateCar("mazda"); // configuration/ input etc.

 

The above code shows the flexibility of the factory method, particularly the Factory class.

Advantages

Factory method is the simplest of the creational patterns that targets the creation of a family of classes.

  • It separates the client code and a family of classes by weakening the coupling and abstracting the concrete subclasses.
  • It reduces the changes in client code due to changes in concrete types.
  • It provides a configuration mechanism for subclasses by removing their instantiation from the client code and into the factory method(s).
  • The default constructors of the subclasses could be marked protected/ private to shield direct creation.

Drawbacks

There are some disadvantages that should be considered before applying factory method:

  • Factory Method demands an individual implementation of a factory method for every subclass in the family. This might introduce unnecessary complexity.
  • A concrete parametrised factory leverages switch conditions to identify the concrete type to instantiate. This introduces cluttered and hardwired code in the factory. Any change in the configuration/ client code input, or in the implementation of the concrete types, demands a review of the factory method.
  • The subtypes must be of the same base type. Factory Method cannot handle subtypes of different base types; that would require more complex creational patterns.

Conclusion

Factory Method is simple to understand and implement in a variety of languages. The most important consideration is to look for many subclasses of the same base type/ interface being handled at that abstract level in a system. Conditions like the below could warrant a factory method implementation.

    Car car1 = new Holden();
    car1.GetDiscount();

    Car car2 = new Mazda();
    car2.GetDiscount();
    . . .

 

On the other hand, if you have some concrete usage of subclasses like:

 

    if (car.GetType() == typeof(Holden))
        ((Holden)car).GetHoldenDiscount();

 

Then Factory Method might not be the answer; perhaps reconsider the class hierarchy.

In part 2, we will discuss the next creational pattern, which is considered a superset of Factory Method: Abstract Factory.

 

Further Reading

https://msdn.microsoft.com/en-us/library/orm-9780596527730-01-05.aspx

http://www.oodesign.com/abstract-factory-pattern.html

http://www.dofactory.com/net/factory-method-design-pattern