Send mail to Office 365 via an Exchange Server hosted in Azure

Those of you who have attempted to send mail to Office 365 from Azure know that sending outbound mail directly from an email server hosted in Azure is not supported, due to the elastic nature of public cloud service IPs and the potential for abuse. Azure IP address blocks are therefore added to public block lists, with no exceptions to this policy.

To be able to send mail from an Azure hosted email server to Office 365 you need to send mail via an SMTP relay. There are a number of different SMTP relays you can utilise, including Exchange Online Protection. More information can be found here: https://blogs.msdn.microsoft.com/mast/2016/04/04/sending-e-mail-from-azure-compute-resource-to-external-domains

To configure Exchange Server 2016 hosted in Azure to send mail to Office 365 via SMTP relay to Exchange Online Protection, you need to do the following;

  1. Create a connector in your Office 365 tenant
  2. Configure accepted domains on your Exchange Server in Azure
  3. Create a send connector on your Exchange Server in Azure that relays to Exchange Online Protection

Create a connector in your Office 365 tenant

  1. Log in to the Exchange Online Admin Center
  2. Click mail flow | connectors
  3. Click +
  4. Select from: “Your organisation’s email server” to: “Office 365”
  5. Enter a Name for the Connector | Click Next
  6. Select “By verifying that the IP address of the sending server matches one of these IP addresses that belong to your organization”
  7. Add the public IP address of your Exchange Server in Azure

Configure accepted domains on your Exchange Server in Azure

  1. Open Exchange Management Shell
  2. Execute the following PowerShell command for each domain you want to send mail to in Office 365 (a sketch for looping over several domains follows this list);
  3. New-AcceptedDomain -DomainName Contoso.com -DomainType InternalRelay -Name Contoso
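
If you have several accepted domains to add, a short loop saves some typing. Below is a minimal sketch using the same New-AcceptedDomain cmdlet as above; the domain names are placeholders for your own, and Get-AcceptedDomain is just a sanity check.

# Hypothetical list of domains hosted in Office 365 that this server relays for
$domains = @("contoso.com", "fabrikam.com")

foreach ($domain in $domains) {
    # InternalRelay allows mail for recipients that don't exist on this server to relay onwards
    New-AcceptedDomain -DomainName $domain -DomainType InternalRelay -Name ($domain.Split('.')[0])
}

# Confirm the accepted domains were created
Get-AcceptedDomain | Format-Table Name, DomainName, DomainType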

Create a send connector on your Exchange Server in Azure that relays to Exchange Online Protection

  1. Execute the following PowerShell command;
  2. New-SendConnector -Name “My company to Office 365” -AddressSpaces * -CloudServicesMailEnabled $true -RequireTLS $true -SmartHosts yourdomain-com.mail.protection.outlook.com -TlsAuthLevel CertificateValidation
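
To confirm the connector was created with the settings that matter for relaying to Exchange Online Protection, a quick check such as the following can help. This is just a sanity-check sketch, not part of the original steps; the connector name matches the example above.

# Review the relay connector settings - the smart host should be your MX endpoint
# (yourdomain-com.mail.protection.outlook.com) and CloudServicesMailEnabled should be True
Get-SendConnector "My company to Office 365" | Format-List Name, AddressSpaces, SmartHosts, RequireTLS, TlsAuthLevel, CloudServicesMailEnabled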

Exchange Server 2016 in Azure

I recently worked on a project where I had to install Exchange Server 2016 on an Azure VM. I chose a D2 sized Azure VM (2 cores, 7GB RAM) thinking that would suffice; that was a big mistake.

The installation made it to the last step before a warning appeared informing me that the server was low on memory resources; the installation eventually terminated, leaving it incomplete.

Let this be a warning to the rest of you: choose a D3 or larger sized Azure VM to save yourself a whole lot of agony.

To try and salvage the Exchange install I attempted to re-run the installation, as setup detects an incomplete installation and tries to pick up where it previously failed. This did not work.

I then tried to uninstall Exchange completely by running the command “Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms”. This also did not work, as it was trying to uninstall an Exchange role that never got installed. This left me one option: manually remove Exchange from Active Directory and rebuild the Azure VM.

To remove the Exchange organisation from Active Directory I had to complete the following steps;

  1. On a Domain Controller | Open ADSI Edit
  2. Connect to the Configuration naming context
  3. Expand Services
  4. Delete CN=Microsoft Exchange and CN=Microsoft Exchange Autodiscover
  5. Connect to the Default naming context
  6. Under the root OU delete OU=Microsoft Exchange Security Groups and CN=Microsoft Exchange System Objects
  7. Open Active Directory Users and Computers
  8. Select the Users OU
  9. Delete the following:
    • DiscoverySearchMailbox{GUID}
    • Exchange Online-ApplicationAccount
    • Migration.GUID
    • SystemMailbox{GUID}

After Exchange was completely removed from Active Directory and my Azure VM was rebuilt with a D3 size I could successfully install Exchange Server 2016.

Exchange Server 2016 install error: “Active Directory could not be contacted”

I recently worked on a project where I had to install Exchange Server 2016 on an Azure VM and received error “Active Directory could not be contacted”.

To resolve the issue, I had to complete the following steps;

  1. Remove the Azure VM public IP address
  2. Disable IPv6 on the NIC
  3. Set the IPv4 DNS suffix to point to your domain. If a public address is being used it will be set to reddog.microsoft.com by default.

Once this was done the installation could proceed and Active Directory was contactable.

A [brief] intro to Azure Resource Visualiser (ARMVIZ.io)

Another week, another Azure tool that I’ve come across and thought I’d share with the masses. This one isn’t a major revelation or something that I’ve added to my Chrome work profile bookmarks bar like I did with the Azure Resource Explorer (as yet, though I may well add it in the very near future), but I certainly have it bookmarked in my Azure folder in Chrome bookmarks.

When working with Azure Resource Manager templates, you’re dealing with long JSON files. These files can get pretty big and span hundreds of lines. Here’s an example of one:

(word of warning: scroll to the bottom as there is more content after this ~250+ line ARM template sample)

{
 "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
 "contentVersion": "1.0.0.0",
 "parameters": {
 "storageAccountName": {
 "type": "string",
 "metadata": {
 "description": "Name of storage account"
 }
 },
 "adminUsername": {
 "type": "string",
 "metadata": {
 "description": "Admin username"
 }
 },
 "adminPassword": {
 "type": "securestring",
 "metadata": {
 "description": "Admin password"
 }
 },
 "dnsNameforLBIP": {
 "type": "string",
 "metadata": {
 "description": "DNS for Load Balancer IP"
 }
 },
 "vmSize": {
 "type": "string",
 "defaultValue": "Standard_D2",
 "metadata": {
 "description": "Size of the VM"
 }
 }
 },
 "variables": {
 "storageAccountType": "Standard_LRS",
 "addressPrefix": "10.0.0.0/16",
 "subnetName": "Subnet-1",
 "subnetPrefix": "10.0.0.0/24",
 "publicIPAddressType": "Dynamic",
 "nic1NamePrefix": "nic1",
 "nic2NamePrefix": "nic2",
 "imagePublisher": "MicrosoftWindowsServer",
 "imageOffer": "WindowsServer",
 "imageSKU": "2012-R2-Datacenter",
 "vnetName": "myVNET",
 "publicIPAddressName": "myPublicIP",
 "lbName": "myLB",
 "vmNamePrefix": "myVM",
 "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('vnetName'))]",
 "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
 "publicIPAddressID": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]",
 "lbID": "[resourceId('Microsoft.Network/loadBalancers',variables('lbName'))]",
 "frontEndIPConfigID": "[concat(variables('lbID'),'/frontendIPConfigurations/LoadBalancerFrontEnd')]",
 "lbPoolID": "[concat(variables('lbID'),'/backendAddressPools/BackendPool1')]"
 },
 "resources": [
 {
 "type": "Microsoft.Storage/storageAccounts",
 "name": "[parameters('storageAccountName')]",
 "apiVersion": "2015-05-01-preview",
 "location": "[resourceGroup().location]",
 "properties": {
 "accountType": "[variables('storageAccountType')]"
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/publicIPAddresses",
 "name": "[variables('publicIPAddressName')]",
 "location": "[resourceGroup().location]",
 "properties": {
 "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
 "dnsSettings": {
 "domainNameLabel": "[parameters('dnsNameforLBIP')]"
 }
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/virtualNetworks",
 "name": "[variables('vnetName')]",
 "location": "[resourceGroup().location]",
 "properties": {
 "addressSpace": {
 "addressPrefixes": [
 "[variables('addressPrefix')]"
 ]
 },
 "subnets": [
 {
 "name": "[variables('subnetName')]",
 "properties": {
 "addressPrefix": "[variables('subnetPrefix')]"
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/networkInterfaces",
 "name": "[variables('nic1NamePrefix')]",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]",
 "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]"
 ],
 "properties": {
 "ipConfigurations": [
 {
 "name": "ipconfig1",
 "properties": {
 "privateIPAllocationMethod": "Dynamic",
 "subnet": {
 "id": "[variables('subnetRef')]"
 },
 "loadBalancerBackendAddressPools": [
 {
 "id": "[concat(variables('lbID'), '/backendAddressPools/BackendPool1')]"
 }
 ],
 "loadBalancerInboundNatRules": [
 {
 "id": "[concat(variables('lbID'),'/inboundNatRules/RDP-VM0')]"
 }
 ]
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/networkInterfaces",
 "name": "[variables('nic2NamePrefix')]",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]"
 ],
 "properties": {
 "ipConfigurations": [
 {
 "name": "ipconfig1",
 "properties": {
 "privateIPAllocationMethod": "Dynamic",
 "subnet": {
 "id": "[variables('subnetRef')]"
 }
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "name": "[variables('lbName')]",
 "type": "Microsoft.Network/loadBalancers",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]"
 ],
 "properties": {
 "frontendIPConfigurations": [
 {
 "name": "LoadBalancerFrontEnd",
 "properties": {
 "publicIPAddress": {
 "id": "[variables('publicIPAddressID')]"
 }
 }
 }
 ],
 "backendAddressPools": [
 {
 "name": "BackendPool1"
 }
 ],
 "inboundNatRules": [
 {
 "name": "RDP-VM0",
 "properties": {
 "frontendIPConfiguration": {
 "id": "[variables('frontEndIPConfigID')]"
 },
 "protocol": "tcp",
 "frontendPort": 50001,
 "backendPort": 3389,
 "enableFloatingIP": false
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-06-15",
 "type": "Microsoft.Compute/virtualMachines",
 "name": "[variables('vmNamePrefix')]",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
 "[concat('Microsoft.Network/networkInterfaces/', variables('nic1NamePrefix'))]",
 "[concat('Microsoft.Network/networkInterfaces/', variables('nic2NamePrefix'))]"
 ],
 "properties": {
 "hardwareProfile": {
 "vmSize": "[parameters('vmSize')]"
 },
 "osProfile": {
 "computername": "[variables('vmNamePrefix')]",
 "adminUsername": "[parameters('adminUsername')]",
 "adminPassword": "[parameters('adminPassword')]"
 },
 "storageProfile": {
 "imageReference": {
 "publisher": "[variables('imagePublisher')]",
 "offer": "[variables('imageOffer')]",
 "sku": "[variables('imageSKU')]",
 "version": "latest"
 },
 "osDisk": {
 "name": "osdisk",
 "vhd": {
 "uri": "[concat('http://',parameters('storageAccountName'),'.blob.core.windows.net/vhds/','osdisk', '.vhd')]"
 },
 "caching": "ReadWrite",
 "createOption": "FromImage"
 }
 },
 "networkProfile": {
 "networkInterfaces": [
 {
 "properties": {
 "primary": true
 },
 "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nic1NamePrefix'))]"
 },
 {
 "properties": {
 "primary": false
 },
 "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nic2NamePrefix'))]"
 }
 ]
 },
 "diagnosticsProfile": {
 "bootDiagnostics": {
 "enabled": "true",
 "storageUri": "[concat('http://',parameters('StorageAccountName'),'.blob.core.windows.net')]"
 }
 }
 }
 }
 ]
} 


So what is this Azure Resource Visualiser?

For those following along at home who are cluey enough to pick up on what the ARM Visualiser is, it’s a means to help visualise the components in an ARM template and the relationships between those components. It does this in a very user friendly and easy to use way.

What you can do is as follows:

  • Play around with ARMVIZ with a pre-built ARM template
  • Import your own ARM template and visualise the characteristics of your components and their relationships
    • This helps validate the ARM template and make sure there are no errors or bugs
  • You can have quick access to the ARM Quickstart templates library in Github
  • You can view your JSON ARM template online
  • You can create a new ARM template from scratch or via copying one of the quick start templates ONLINE in your browser of choice
  • You can freely edit that template and go from JSON view to the Visualiser view quickly
  • You can then download a copy of your ARM template that is built, tested, visualised and working
  • You can provide helpful feedback to the team from Azure working on this service
    • Possibly leading to this being rolled up into the Azure Portal at some point in the future
  • All of the components are able to be moved around the screen as well
  • If you double click on any of the components, for example the Load Balancer, you’ll be taken to the line in the ARM template to view or amend the config

What does ARMVIZ.io look like?

Here’s a sample ARM template visualised

Azure Resource Visualiser

This is a screen grab of the Azure Resource Visualiser site. It shows the default, sample template that can be manipulated and played with.

Final words

If you want a quick, fun ARM template editor and your favourite ISE is just behaving like it did every other day, then spruce up and pimp up your day with the Azure Resource Visualiser. In the coming weeks I’m going to be doing quite a few ARM templates. I can assure you that those will be run through the Azure Resource Visualiser to validate and check for any errors.

Pro-tip: don’t upload an ARM template to Microsoft that might have sensitive information. I know it’s common sense, but, it happens to the best of us. I thought I’d just quickly mention that as a reminder more than anything else.

Hope you enjoy it,

Best,

Lucian

 

 

***

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango, or, connect via LinkedIn.

Why are you not using Azure Resource Explorer (Preview)?

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango. Connect on LinkedIn.

***

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and clicking away stumbled upon the Azure Resource Explorer; available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac if you were wondering) and finally BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell”. Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer

This is a [new to me] website, running the Bootstrap HTML, CSS and Javascript framework (an older version, but, like yours truly here on clouduccino), that provides streamlined and rather well laid out access to various REST API details/calls for any given Azure subscription. You can log in and view some nice management REST APIs, and make changes to Azure infrastructure in your subscription via REST calls/actions like get, put, post, delete and create.

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this of any use across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions mixed in with the REST APIs.

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but, I could easily create one!

Use cases

I’m sure that all looks “interesting”, with an example above that has nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start playing and tinkering with the Resource Explorer portal yourselves (go ahead and do that, but, after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need to do is check one bit of information, say a route table for a VNET. PowerShell can also be time consuming, what with all that typing and memorising cmdlets and stuff (so 2016…).

A lot of the information you need can be grabbed at a glance from the ARE portal, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET, but, if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you are not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below is a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, with all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 disks attached, if you were interested).

vmSizes view for all currently available VM sizes in Azure. Handy to grab the programmatic name to then use in PowerShell scripting?!

Use case #3 – PowerShell script examples

Mixing the REST API calls with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example: get, set, delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.
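
As a rough illustration of the pattern (not a copy of the portal’s own script), here is a hedged sketch that pulls a VNET’s configuration with the AzureRM PowerShell module; the subscription ID, resource group and VNET names are placeholders.

# Log in with the AzureRM module and select the subscription (IDs are placeholders)
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "00000000-0000-0000-0000-000000000000"

# Equivalent of the GET action ARE shows for a VNET: pull back its full configuration
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVNET"

# The same JSON you see in the Resource Explorer blade
$vnet | ConvertTo-Json -Depth 10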

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure utility belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best,

Lucian

 

 

 

An Azure Timer Function App to retrieve files via FTP and Remote PowerShell

Introduction

In an age of Web Services and APIs, it’s almost a forgotten world where FTP Servers exist. However, most recently I’ve had to travel back in time and interact with an FTP server to get a set of files that are produced by other systems on a daily basis. These files are needed for some flat-file imports into Microsoft Identity Manager.

Getting files off an FTP server is pretty simple. But needing to do it across a number of different environments (Development, Staging and Production) meant I was looking for an easy approach that I could also replicate quickly across multiple environments. As I already had Remote PowerShell set up on my MIM Servers for other Azure Function Apps, I figured I’d use an Azure Function for obtaining the FTP files as well.

Overview

My PowerShell Timer Function App performs the following:

  • Starts a Remote PowerShell session to my MIM Sync Server
  • Imports the PSFTP PowerShell Module
  • Creates a local directory to put the files into
  • Connects to the FTP Server
  • Gets the files and puts them into the local directory
  • Ends the session

Pre-requisites

From the overview above there are a number of pre-requisites, and other blog posts I’ve written detail nicely the steps involved to appropriately set them up and configure them. So I’m going to link to those. Namely;

  • Configure your Function App for your timezone so the schedule is correct for when you want it to run. Checkout the WEBSITE_TIME_ZONE note in this post.

    WEBSITE_TIME_ZONE

  • You’ll need to configure your Server that you are going to put the files onto for Remote PowerShell. Follow the Enable Powershell Remoting on the FIM/MIM Sync Server section of this blogpost.
  • The credentials used to connect to the MIM Server are secured as detailed in the Using an Azure Function to query FIM/MIM Service section of this blog post.
  • Create a Timer PowerShell Function App. Follow the Creating your Azure App Service section of this post but choose a Timer Trigger PowerShell App.
    • I configured my Schedule for 1030 every day using the following CRON configuration
      0 30 10 * * *
  • On the Server you’ll be connecting to in order to run the FTP processes you’ll need to copy the PSFTP Module and files to the following directories. I unzipped the PSFTP files and copied the PSFTP folder and its contents to;
    • C:\Program Files\WindowsPowerShell\Modules
    • C:\Windows\System32\WindowsPowerShell\v1.0\Modules

     

Configuring the Timer Trigger Function App

With all the pre-requisites in place it’s time to configure the Timer Function App that you created in the pre-requisites.

The following settings are configured in the Function App Application Settings;

  • FTPServer (the server you will be connecting to, to retrieve files)
  • FTPUsername (username to connect to the FTP Sever with)
  • FTPPassword (password for the username above)
  • FTPSourceDirectory (FTP directory to get the files from)
  • FTPTargetDirectory (the root directory under which the files will be put)


  • You’ll also need Application Settings for a Username and Password associated with a user that exists on the Server that you’ll be connecting to with Remote PowerShell. In my script below these application settings are MIMSyncCredUser and MIMSyncCredPassword

Function App Script

Finally, here is the raw script. You’ll need to add appropriate error handling for your environment. You’ll also want to change lines 48 and 51 for the naming of the files you are looking to acquire, and line 59 for the server name you’ll be executing the process on.
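
The script itself is embedded above as a gist, so the line numbers referenced won’t line up here; the overall shape is roughly as follows. Treat this as a sketch only: the PSFTP cmdlet names and parameters, the *.csv file filter and the MIM Sync Server name are assumptions for illustration, the credential handling is simplified (the pre-requisites above use an encrypted password file), and error handling is omitted.

# Read configuration from the Function App Application Settings
$ftpServer   = $env:FTPServer
$ftpUser     = $env:FTPUsername
$ftpPassword = $env:FTPPassword
$ftpSource   = $env:FTPSourceDirectory
$ftpTarget   = $env:FTPTargetDirectory

# Credentials for the Remote PowerShell session to the MIM Sync Server (simplified here)
$mimPassword = ConvertTo-SecureString $env:MIMSyncCredPassword -AsPlainText -Force
$mimCred = New-Object System.Management.Automation.PSCredential ($env:MIMSyncCredUser, $mimPassword)

# Hypothetical server name - change for your environment
$session = New-PSSession -ComputerName "MIMSYNCSERVER" -Credential $mimCred

Invoke-Command -Session $session -ScriptBlock {
    param($ftpServer, $ftpUser, $ftpPassword, $ftpSource, $ftpTarget)

    Import-Module PSFTP

    # Create a dated local directory to put the files into
    $targetDir = Join-Path $ftpTarget (Get-Date -Format "dd-MM-yyyy")
    New-Item -ItemType Directory -Path $targetDir -Force | Out-Null

    # Connect to the FTP Server
    $ftpSecurePwd = ConvertTo-SecureString $ftpPassword -AsPlainText -Force
    $ftpCred = New-Object System.Management.Automation.PSCredential ($ftpUser, $ftpSecurePwd)
    Set-FTPConnection -Credentials $ftpCred -Server $ftpServer -Session FTPSession -UsePassive
    $ftp = Get-FTPConnection -Session FTPSession

    # Get the files (the *.csv filter is illustrative) and put them into the local directory
    Get-FTPChildItem -Session $ftp -Path $ftpSource |
        Where-Object { $_.Name -like "*.csv" } |
        ForEach-Object { Get-FTPItem -Session $ftp -Path ($ftpSource + "/" + $_.Name) -LocalPath $targetDir }

} -ArgumentList $ftpServer, $ftpUser, $ftpPassword, $ftpSource, $ftpTarget

Remove-PSSession $session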

Summary

A pretty quick and simple little Azure Function App that will run each day and obtain daily/nightly extracts from an FTP Server. Cleanup of the resulting folders and files I’m doing with other on-box processes.

 

This post is cross-blogged on both the Kloud Blog and Darren’s Blog.


Monitor SharePoint Changelog in Azure Function

Azure Functions have officially reached ‘hammer’ status

I’ve been enjoying the ease with which we can now respond to events in SharePoint and perform automation tasks, thanks to the magic of Azure Functions. So many nails, so little time!

The seminal blog post that started so many of us on that road, was of course John Liu’s Build your PnP Site Provisioning with PowerShell in Azure Functions and run it from Flow and that pattern is fantastic for many event-driven scenarios.

One where it currently (at time of writing) falls down is when dealing with a list item delete event. MS Flow can’t respond to this and nor can a SharePoint Designer workflow.

Without wanting to get into Remote Event Receivers (errgh…), the other way to deal with this is after the fact via the SharePoint change log (if the delete isn’t time sensitive). In my use case it wasn’t – I just needed to be able to clean up some associated items in other lists.

SharePoint Change Logs

SharePoint has had an API for getting a log of changes of certain types, against certain objects, in a certain time window since the dawn of time. The best post for showing how to query it from the client side is (in my experience) Paul Schaeflin’s Reading the SharePoint change log from CSOM, which was my primary reference for the PowerShell-based Function below.

In my case, I am only interested in items deleted from a single list, but this could easily be scoped to an entire site and capture more/different event types (see Paul’s post for the specifics).

The biggest challenge in getting this working was persisting the change token to Azure Storage, and even this wasn’t that difficult in and of itself – it’s just that the PowerShell bindings for Azure Functions are as yet woefully under-documented (TIP: Get-Content and Set-Content are the key to the PowerShell bindings… easy when you know how). In my case I have an input and an output binding to a single Blob Storage blob (to persist the change token for reference the next time the Function runs) and another output to Queue Storage to trigger another Function that actually does the cleanup of the other list items linked to the one now sitting in the recycle bin. The whole thing is triggered by an hourly timer. If nothing has been deleted, then no action is taken (other than the persisted token blob update).

A Note on Scaling

Note that if multiple delete events occurred since the last check, then these are all deposited in one message. This won’t cause a problem in my use case (there will never be more than a handful of items deleted in one pass of the Function), but it obviously doesn’t scale well, as too many being handled by the receiving Function would threaten to bump up against the 5 min execution time limit. I wanted to use the OOTB message queue binding for simplicity, but if you needed to push multiple messages, you could simply use the Azure Storage PowerShell cmdlets instead of an output binding.

Code Now Please

Here’s the Function code (following the PnP PowerShell Azure Functions implementation as per John’s article above and liberally stealing from Paul’s guide above).
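
The code in the original is embedded as a gist, so as a stand-in here is a heavily hedged sketch of its shape. The binding names ($inputBlob, $outputBlob, $outputQueueItem), the site URL, list name and credential settings are all assumptions, and the change-query portion follows Paul’s CSOM approach rather than the author’s exact code.

# Read the persisted change token from the Blob input binding (may be empty on the first run)
$lastChangeToken = Get-Content $inputBlob -Raw -ErrorAction SilentlyContinue

# Connect to the site with PnP PowerShell (the credential Application Settings are assumptions)
$securePwd = ConvertTo-SecureString $env:SPPassword -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($env:SPUsername, $securePwd)
Connect-PnPOnline -Url "https://tenant.sharepoint.com/sites/mysite" -Credentials $cred

$ctx  = Get-PnPContext
$list = Get-PnPList "My List"

# Build a CSOM change query scoped to item deletes on this list
$query = New-Object Microsoft.SharePoint.Client.ChangeQuery($false, $false)
$query.Item = $true
$query.DeleteObject = $true
if ($lastChangeToken) {
    $query.ChangeTokenStart = New-Object Microsoft.SharePoint.Client.ChangeToken
    $query.ChangeTokenStart.StringValue = $lastChangeToken
}

$changes = $list.GetChanges($query)
$ctx.Load($changes)
Invoke-PnPQuery

# Queue a message for the clean-up Function if anything was deleted since the last run
$deletedIds = $changes |
    Where-Object { $_.ChangeType -eq [Microsoft.SharePoint.Client.ChangeType]::DeleteObject } |
    ForEach-Object { $_.ItemId }
if ($deletedIds) {
    ($deletedIds -join ",") | Set-Content $outputQueueItem
}

# Make sure the list's current change token is loaded, then persist it via the Blob output binding
Get-PnPProperty -ClientObject $list -Property CurrentChangeToken | Out-Null
$list.CurrentChangeToken.StringValue | Set-Content $outputBlob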

Going Further

This is obviously a simple example with a single objective, but you could take this pattern and ramp it up as high as you like. By targeting the web instead of a single list, you could push a lot of automation through this single pipeline, perhaps ramping up the execution recurrence to every 5 mins or less if you needed that level of reduced latency. Although watch out for your Functions consumption if you turn up the executions too high!


Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration

Last week in a customer development environment I had one of those “oh shit” moments where I thought I’d lost a couple of weeks of work: a couple of weeks of development around multiple Management Agents, MV Schema changes etc. Luckily for me I was just connected to an older VM image, but it got me thinking. It would be nice to have an automated process that each night would;

  • Export each Management Agent on a FIM/MIM Sync Server
  • Export the FIM/MIM Synchronisation Server Configuration
  • Take a copy of the Extensions Folder (where I keep my PowerShell Management Agents scripts)
  • Export the FIM/MIM Service Server Configuration

And that is what this post covers.

Overview

My automated process performs the following;

  1. An Azure PowerShell Timer Function WebApp is triggered at 2330 each night
  2. The Azure Function App initiates a Remote PowerShell session to my Dev MIM Sync Server (which is also a MIM Service Server)
  3. In the Remote PowerShell session the script;
    1. Creates a new subfolder under c:\backup with the current date and time (dd-MM-yyyy-hh-mm)
    2. Creates further subfolders for each of the backup elements
      • MAExports
      • ServerExport
      • MAExtensions
      • PortalExport
    3. Utilizes the Lithnet MIIS Automation PowerShell Module to;
      1. Enumerate each of the Management Agents on the FIM/MIM Sync Server and export each Management Agent to the MAExports Folder
      2. Export the FIM/MIM Sync Server Configuration to the ServerExport Folder
    4. Copies the Extensions folder and subfolder contents to the MAExtensions Folder
    5. Utilizes the FIM/MIM Export-FIMConfig cmdlet to export the FIM Server Configuration to the PortalExport Folder

Implementing the FIM/MIM Backup Process

The majority of the setup to get this to work I’ve covered in other posts, particularly around Azure PowerShell Function Apps and Remote PowerShell into a FIM/MIM Sync Server.

Pre-requisites

  • I created a C:\Backup Folder on my FIM/MIM Server. This is where the backups will be placed (you can change the path in the script).
  • I installed the Lithnet MIIS Automation PowerShell Module on my MIM Sync Server
  • I configured my MIM Sync Server to accept Remote PowerShell Sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file and uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 2330 every night using the following CRON configuration

0 30 23 * * *

  • I set the Azure Function App Timezone to my timezone so that the nightly backup happened at the correct time relative to my timezone. I got my timezone index from here. I set the following variable in my Azure Function Application Settings to my timezone name, AUS Eastern Standard Time.

    WEBSITE_TIME_ZONE

The Function App Script

With all the pre-requisites met, the only thing left is the Function App script itself. Here it is. Update lines 2, 3 & 6 if your variables and password key file are different. The path to your password keyfile will be different on line 6 anyway.

Update line 25 if you want the backups to go somewhere else (maybe a DFS Share).
If your MIM Service Server is not on the same host as your MIM Sync Server change line 59 for the hostname. You’ll need to get the FIM/MIM Automation PS Modules onto your MIM Sync Server too. Details on how to achieve that are here.
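
Since the full script is embedded as a gist (hence the line-number references above), here is an illustrative sketch of its overall structure only, not the author’s actual code. The server name, key file path and Extensions path are placeholders, the Lithnet MA/server export step is reduced to a comment, and error handling is omitted.

# Build the credential for the Remote PowerShell session from the Application Settings and key file
$user = $env:MIMSyncCredUser
$password = $env:MIMSyncCredPassword | ConvertTo-SecureString -Key (Get-Content "D:\home\site\wwwroot\MIMBackup\keys\passkey.txt")
$cred = New-Object System.Management.Automation.PSCredential ($user, $password)

# MIM Sync (and Service) server name is a placeholder
$session = New-PSSession -ComputerName "MIMDEVSERVER" -Credential $cred

Invoke-Command -Session $session -ScriptBlock {

    # Dated backup folder and a subfolder for each backup element
    $backupRoot = "c:\backup\" + (Get-Date -Format "dd-MM-yyyy-hh-mm")
    "MAExports", "ServerExport", "MAExtensions", "PortalExport" |
        ForEach-Object { New-Item -ItemType Directory -Path (Join-Path $backupRoot $_) -Force | Out-Null }

    # Management Agent and Sync Server configuration exports via the Lithnet MIIS Automation module
    Import-Module LithnetMiisAutomation
    # (enumerate each MA and export it to $backupRoot\MAExports, then export the server
    #  configuration to $backupRoot\ServerExport, using the module's export cmdlets - omitted here)

    # Copy the Extensions folder (PowerShell Management Agent scripts); default install path assumed
    Copy-Item -Path "C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions" `
              -Destination (Join-Path $backupRoot "MAExtensions") -Recurse

    # Export the FIM/MIM Service (Portal) configuration
    Add-PSSnapin FIMAutomation
    Export-FIMConfig -Uri "http://localhost:5725/ResourceManagementService" -policyConfig -portalConfig -schemaConfig |
        Export-Clixml -Path (Join-Path $backupRoot "PortalExport\PortalExport.xml")
}

Remove-PSSession $session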

Running the Function App I have limited output, but enough to see it run. The first part of the script runs very quickly; the Export-FIMConfig is what takes the majority of the time. That said, it’s less than a minute to get a nice point-in-time backup that is auto-magically executed nightly. Sorted.

 

Summary

The script itself can be run standalone, and you could implement it as a Scheduled Task on your FIM/MIM Server. However, I’m using Azure Functions for a number of things, and having something that is easily portable, repeatable and centralised with my other functions (pun not intended) keeps things organised.

I now have a daily backup of the configurations associated with my development environment. I’m sure this will save me some time in the near future.

Follow Darren on Twitter @darrenjrobinson

 

 

 

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 3 – Azure MFA Integration

In Part 1 and Part 2 of this series we covered the migration from ADFS v3 to ADFS 2016. In this post we will continue our venture by configuring Azure MFA in ADFS 2016.

Azure MFA – What is it about?

It is a bit confusing when we mention that we need to enable Azure MFA on ADFS. Technically, this method is actually integrating Azure MFA with ADFS. MFA itself is authenticating on Azure AD; ADFS is just prompting you to enter an MFA code, which will be verified against Azure AD to sign you in.

In theory, this by itself is not multi-factor authentication. When users choose to log in with multi-factor authentication on ADFS, they’re not prompted to enter a password; they simply log in with the six digit code they receive on their mobile devices.

Is this secure enough? Arguably. Of course users had to previously set up their MFA to be able to log in by choosing this method, but if someone has control or possession of your device they could simply log in with the six digit code. Assuming the device is not locked, or MFA is set up to receive calls or messages (on some phones message notifications will appear on the main display), almost anyone could log in.

Technically, this is how Azure MFA will look once integrated with the ADFS server. I will outline the process below, and show you how we got this far.


Once you select Azure Multi-Factor Authentication you will be redirected to another page


And when you click on “Sign In” you will simply sign in to the Office or Azure Portal, without any other prompt.

The whole idea here is not much about security as much as it is about simplicity.

Integrating Azure MFA on ADFS 2016

Important note before you begin: before integrating Azure MFA on ADFS 2016, please be aware that users should already have set up MFA using the Microsoft Authenticator mobile app, or they can do it while first signing in, after being redirected to the ADFS page. The aim of this post is to use the six digit code generated by the mobile app.

If users have MFA set up to receive calls or texts, the configuration in this blog (making Azure MFA the primary method) will not support that. To continue using SMS or a call whilst using Azure MFA, “Azure MFA” needs to be configured as a secondary authentication method under “Multi-Factor”, and “Azure MFA” under “Primary” should be disabled.

Integrating Azure MFA on ADFS 2016 couldn’t be any easier. All that is required is running a few PowerShell cmdlets and enabling the authentication method.

Before we do so however, let’s have a look at our current authentication methods.


As you may have noticed, we couldn’t enable Azure MFA without first configuring the Azure AD tenant.

The steps below are intended to be performed on all AD FS servers in the farm.

Step 1: Open PowerShell and connect to your tenant by running the following:

Connect-MsolService

Step 2: Once connected, you need to run the following cmdlet to configure the AAD tenant:

$cert = New-AdfsAzureMfaTenantCertificate -TenantID swayit.onmicrosoft.com

When successful, head to the Certificates snap-in and check that a certificate with the name of your tenant has been added to the Personal store.


Step 3: In order to enable the AD FS servers to communicate with the Azure Multi-Factor Auth Client, you need to add the credentials to the SPN for the Azure Multi-Factor Auth Client. The certificate that we generated in the previous step will serve as these credentials.

To do so run the following cmdlet:

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $cert


Note that the GUID 981f26a1-7f43-403b-a875-f8b09b8cd720 is not made up; it is the GUID for the Azure Multi-Factor Auth Client. So you can basically copy/paste the cmdlet as is.

Step 4: When you have completed the previous steps, you can now configure the ADFS Farm by running the following cmdlet:

Set-AdfsAzureMfaTenant -TenantId swayit.onmicrosoft.com -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

Note how we used the same GUID from the previous step.


When that is complete, restart the ADFS service on all your ADFS farm servers.

net stop adfssrv

net start adfssrv

Head back to your ADFS Management Console and open the Authentication Methods, and you will notice that Azure MFA has been enabled and the message prompt has disappeared.


If the Azure MFA Authentication methods were not enabled, then enable them manually and restart the services again (on all your ADFS servers in the farm).
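
If you’d rather enable the methods from PowerShell instead of the console, something like the following should do it. This is an extra, hedged suggestion rather than part of the original steps, and it assumes the provider name Azure MFA registers as is AzureMfaAuthentication.

# Make Azure MFA available as a primary authentication method for intranet and extranet,
# keeping Forms authentication alongside it
Set-AdfsGlobalAuthenticationPolicy `
    -PrimaryIntranetAuthenticationProvider @("FormsAuthentication", "AzureMfaAuthentication") `
    -PrimaryExtranetAuthenticationProvider @("FormsAuthentication", "AzureMfaAuthentication")

# Check what is currently enabled
Get-AdfsGlobalAuthenticationPolicy | Format-List Primary*, AdditionalAuthenticationProvider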

Now that you have completed all the steps, when you try and access Office 365 or the Azure Portal you will be redirected to the pages posted above.

Choose Azure Multi-Factor Authentication


Enter the six digit code you have received.


And then you’re signed in.


By now you have completed migrating from ADFS v3 to ADFS 2016, and in addition have integrated Azure MFA authentication with your ADFS farm.

The last part in this series will be about the WAP 2012 R2 upgrade to WAP 2016, so please make sure to come back tomorrow and check out the upgrade process in detail.

I hope you’ve enjoyed the posts so far. For any feedback or questions, please leave a comment below.

 

 


How to create an Azure Function App to Simultaneously Start|Stop all Virtual Machines in a Resource Group

Just on a year ago I wrote this blog post that detailed a method to “Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group”. It’s a simple script that I use quite a lot and I’ve received a lot of positive feedback on it.

One year on, though, there are a few enhancements I’ve been wanting to make to it. Namely;

  • host the script in an environment that is in a known state. Often I’m authenticated to different Azure Subscriptions: my personal, my employer’s and my customers’.
  • prioritize the order the virtual machines startup|shutdown
  • allow for a delay between starting each VM (to account for environments where the VMs have roles with cross dependencies; e.g. a Domain Controller, an SQL Server, Application Servers). You want the DC to be up and running before the SQL Server, and so forth
  • and if I do all those, the most important;
    • secure it so not just anyone can start|stop my environments at their whim

Overview

This blog post executes the first of those enhancements: implementing the script in an environment that is in a known state, aka implementing it as an Azure Function App. This won’t be a perfect implementation as you will see, but it will set the foundation for the other enhancements. Subsequent posts (as I make time to develop the enhancements) will add the new functionality. This post covers;

  • Creating the Azure Function App
  • Creating the foundation for automating management of Virtual Machines in Azure using Azure Function Apps
  • Starting | Stopping all Virtual Machines in an Azure Resource Group

Create a New Azure Function App

First up we are going to need a Function App. Through your Azure Resource Manager Portal create a new Function App.

For mine I’ve created a new Resource Group and a new Storage Account as this solution will flesh out over time and I’d like to keep everything organised.

Now that we have the Azure App Service plan set up, create a new PowerShell HTTP Trigger Function App.

Give it a name and hit Create.

 

Create Deployment Credentials

In order to get some of the dependencies into the Azure Function we need to create deployment credentials so we can upload them. Head to the Function App Settings and choose Go to App Service Settings.

Create a login and give it a password. Record the FTP/Deployment username and the FTP hostname along with your password as you’ll need this in the next step.

Upload our PowerShell  Modules and Dependencies

Just as my original PowerShell script did I’m using the brilliant Invoke Parallel Powershell Script from Rambling Cookie Monster. Download it from that link and save it to your local machine.

Connect to your Azure Function App using your favourite FTP Client using the credentials you created earlier. I’m using WinSCP. Create a new sub-directory under /site/wwwroot/ named “bin” as shown below.

Upload the Invoke-Parallel.ps1 file from wherever you extracted it to on your local machine to the bin folder you just created in the Function App.

We are also going to need the AzureRM Powershell Modules. Download those via Powershell to your local machine (eg. Save-Module -Name AzureRM -Path c:\temp\azurerm). There are a lot of modules obviously and you’re not going to need them all. At a minimum for this solution you’ll need;

  • AzureRM
  • AzureRM.profile
  • AzureRM.Compute

Upload them under the bin directory also as shown below.

Test that our script dependencies are accessible

Now that we have our dependent modules uploaded, let’s test that we can load and utilise them. Below are the commands to load the Invoke-Parallel script and test that it has loaded by getting its help.

# Load the Invoke-Parallel Powershell Script
. "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1"

# See if it is loaded by getting some output
Get-Help Invoke-Parallel -Full

Put those lines into the code section, hit Save and Run and select Logs to see the output. If successful you’ll see the help. If you don’t you probably have a problem with the path to where you put the Invoke-Parallel script. You can use the Kudu Console from the Function App Settings to get a command line and verify your path.

Mine worked successfully. Now to test our AzureRM Module Loads. Update the Function to load the AzureRM Profile PSM as per below and test you have your path correct.

# Import the AzureRM Powershell Module
import-module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1'
Get-Help AzureRM

Success. Fantastic.

Create an Azure Service Principal

In order to automate the access and control of the Azure Virtual Machines we are going to need to connect to Azure using a Service Principal with the necessary permissions to manage the Virtual Machines.

The following script does just that. You only need to run this as part of the setup for the Azure Function so we have an account we can use for our automation tasks. Update line 6 for your naming and the password you want to use. I’m assigning the Service Principal the “DevTest Labs User” Azure Role (Line 17) as that allows the ability to manage the Virtual Machines. You can find a list of the available roles here.
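
The script itself is embedded as a gist, so the line numbers above refer to that. For readers following along, a condensed sketch of the same idea is below; the application name and password are placeholders, and the AzureRM cmdlets and parameters reflect the versions around at the time of writing, so check them against your module version.

# Log in to the subscription that holds the Virtual Machines
Login-AzureRmAccount

# Placeholder name and password for the Azure AD application / Service Principal
$appName  = "VMStartStopAutomation"
$password = "P@ssw0rd-ChangeMe!"

# Create the AAD application and an associated Service Principal
$app = New-AzureRmADApplication -DisplayName $appName -HomePage "https://$appName" `
        -IdentifierUris "https://$appName" -Password $password
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Give the Service Principal rights to manage the Virtual Machines (DevTest Labs User role)
Start-Sleep -Seconds 20   # allow the Service Principal to replicate before assigning the role
New-AzureRmRoleAssignment -RoleDefinitionName "DevTest Labs User" -ServicePrincipalName $app.ApplicationId

# The two values to record for the Function App Application Settings
$app.ApplicationId
(Get-AzureRmSubscription).TenantId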

Take note of the key outputs from this script. You will need to note the;

  • ApplicationID
  • TenantID

I’m also securing the credential that has the permissions to Start|Stop the Virtual Machines using the example detailed here in Tao’s post.

For reference, here is an example to generate the keyfile. Update your path in line 5 if required and make sure the password you supply in line 18 matches the password you supplied for the line in the script (line 6) when creating the Service Principal.
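
Again, the keyfile script is a gist; the general pattern (an AES key file plus an encrypted password string, per Tao’s post referenced above) looks something like the sketch below, with the path and password as placeholders.

# Generate a random 256-bit AES key and save it - this is the key file the Function App will use
$keyPath = "C:\temp\AES.key"   # placeholder path
$key = New-Object Byte[] 32
[Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key)
$key | Out-File $keyPath

# Encrypt the Service Principal password with that key and output the encrypted string
$password = "P@ssw0rd-ChangeMe!"   # must match the password used when creating the Service Principal
$secure = ConvertTo-SecureString $password -AsPlainText -Force
$encrypted = ConvertFrom-SecureString $secure -Key (Get-Content $keyPath)

# Record this value - it becomes the AzureAutomationPWD Application Setting
$encrypted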

Take note of the password encryption string from the end of the script to pair with the ApplicationID and TenantID from the previous steps. You’ll need these shortly in Application Settings.

Additional Dependencies

I created another sub-directory under the function app site named ‘keys’ again using WinSCP. Upload the passkey file created above into that directory.

Whilst we’re there I also created a “logs” directory for any erroneous output (aka logfiles created when you don’t specify them) from the invoke-parallel script.

Application Variables

Using the identity information you have created and generated, we will populate variables in the Function App Application Settings that we can then leverage in our Function App. Go to your Azure Function App | Application Settings and add an application setting (with the respective values you have gathered in the previous steps) for each of;

  • AzureAutomationPWD
  • AzureAutomationAppID
  • AzureAutomationTennatID (bad speed typing there)

Don’t forget to click Save up the top of the Application Settings screen.

 

The Function App Script

Below is the sample script for your testing purposes. If you plan to use something similar in a production environment you’ll want to add more logging and error handling.
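
That script is embedded as a gist; as a stand-in, here is a hedged sketch of the overall flow of the HTTP-triggered PowerShell Function. The module paths, versions and key file path are illustrative, and each Invoke-Parallel runspace needs the AzureRM modules and context available, so treat this as a shape to follow rather than a drop-in implementation.

# Read the request body passed to the HTTP trigger (expects 'mode' and 'resourcegroup')
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$mode = $requestBody.mode
$resourceGroup = $requestBody.resourcegroup

# Load the dependencies uploaded to the bin folder (paths/versions are illustrative)
. "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1"
Import-Module "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1"
Import-Module "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.Compute\2.4.0\AzureRM.Compute.psm1"

# Build the Service Principal credential from the Application Settings and the uploaded key file
$key = Get-Content "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\keys\AES.key"
$securePwd = $env:AzureAutomationPWD | ConvertTo-SecureString -Key $key
$cred = New-Object System.Management.Automation.PSCredential ($env:AzureAutomationAppID, $securePwd)

# Log in as the Service Principal (the setting name matches the Application Settings above)
Login-AzureRmAccount -ServicePrincipal -Credential $cred -TenantId $env:AzureAutomationTennatID

# Enumerate the VMs in the Resource Group and start|stop them in parallel
$vms = Get-AzureRmVM -ResourceGroupName $resourceGroup

$vms | Invoke-Parallel -RunspaceTimeout 30 -ImportVariables -ImportModules -ScriptBlock {
    if ($mode -eq "start") {
        Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $_.Name
    }
    elseif ($mode -eq "stop") {
        Stop-AzureRmVM -ResourceGroupName $resourceGroup -Name $_.Name -Force
    }
}

# Return something to the caller via the HTTP output binding
"Mode '$mode' submitted for VMs in Resource Group '$resourceGroup'" | Out-File -Encoding Ascii $res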

Testing the Function

Select the Test option from the right-hand side pane and update the request body with what the Function takes (mode and resourcegroup), as below. Select Run and watch the logs. You will need to select Expand to get more screen real estate for them.

You will see the VMs enumerate and then the script starting them all up. My script has a 30 second timeout for the Invoke-Parallel runspace, as the VMs will take longer than 30 seconds to start up. And you pay for use, so we want to keep this lean. Increase the timeout if you have more VMs or latency that doesn’t see all your VMs’ state transitioning.

Checking in the Azure Portal I can see my VM’s all starting up (too fast on the screenshot for the spfarm-mim host).

 

Sample Remote PowerShell Invoke Script

Below is a sample PowerShell script that remotely calls the Azure Function, providing the info the Function takes (mode and resourcegroup), the same as we did in the Test Request Body in the Azure Function Portal. This time it’s to stop the VMs.
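
That sample is also a gist in the original; a minimal equivalent, assuming a Function URL and function key of your own, would be along these lines.

# Function URL including its function key - placeholder values
$functionUrl = "https://myfunctionapp.azurewebsites.net/api/RG-Start-Stop-VirtualMachines?code=<your-function-key>"

# Same body as the Test pane: stop all the VMs in the nominated Resource Group
$body = @{ mode = "stop"; resourcegroup = "MyResourceGroup" } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $functionUrl -Body $body -ContentType "application/json"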

Looking in the Azure Portal and we can see all the VMs shutting down.

 

Summary

A foundational implementation of an Azure Function App to perform orchestration of Azure Virtual Machines.

The Function App is rudimentary in that the script exits (as described with the runspace timeout) after 30 seconds, which is prior to the VMs fully transitioning after starting|stopping. This is because the Function App will time out after 5 mins anyway.

Now to workout the enhancements to it.

Finally, yes I have renewed/changed the Function Key so no-one else can initiate my Function 🙂

Follow Darren Robinson on Twitter