Azure MFA Server – International Deployment

Hi all – this blog will cover some information to assist with multilingual/international deployments of Azure MFA Server. There are some nuances of the product that make ongoing management of language preferences a little challenging, and some MFA methods are preferable to others in international scenarios due to carrier variances.

Language Preferences

Ideally when a user is on-boarded, their language preferences for the various MFA Methods should be configured to their native language. This can easily be achieved using MFA Server, however there are some things to know:

  1. Language settings are defined in Synchronisation Items.
  2. Synchronisation Items can be set in a hierarchy so that settings in the most precedent Synchronisation Item apply.
  3. Language settings set explicitly in Synchronisation Items apply at FIRST SYNC ONLY. Adding, removing or changing Sync Items with explicitly set language configurations will have no effect on existing MFA Server users.
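To make the precedence rule concrete, here's a rough Python sketch of how a "first matching Sync Item wins" hierarchy behaves (the structure is hypothetical, not an MFA Server API – and remember that explicitly set languages only apply at first sync):

```python
def resolve_language(user_ou, sync_items, default="en"):
    """Return the language of the first (most precedent) Sync Item whose
    OU filter matches the user; fall back to a default otherwise."""
    for item in sync_items:                  # ordered most- to least-precedent
        if user_ou.startswith(item["ou"]):
            return item["language"]
    return default

# Hypothetical Sync Items mirroring the example below: an overriding
# item for the Italians ahead of a catch-all default for the Aussies.
sync_items = [
    {"ou": "OU=Italy", "language": "it"},
    {"ou": "OU=",      "language": "en"},
]
```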

I’ll cover how to get around item 3 a bit later with non-explicitly configured language settings. To demonstrate hierarchical application of Sync Items, first create two Sync Items with different explicitly set language settings, as below. I’ve used Active Directory OUs to differentiate user locations; however, AD security groups can be used too, as can an LDAP filter:

Add Synchronization Item Dialog - Australia

Add Synchronization Item Dialog - Italy

Then set them in the required hierarchy, in this case our default language is English for the Aussies, then we have an overriding Sync Item for the Italians – easy:

Multi-Factor Authentication Server - Synchronization tab

The above is all well and good if you can guarantee that users will be in the correct OU or group, or have the user filter apply, when MFA Server synchronises. If they are subsequently modified and a different Sync Item now applies, their language settings will not be updated. I’ll cover how to get around this next.

Use of Language Attributes to control Language Settings

MFA Server has the option to specify a language attribute in user accounts to contain the short code (e.g. en = English) for language preferences. When the account is imported, the user’s default language can be set based on this attribute. Also unlike explicitly configured language settings, when a Sync Item is configured to use language attributes it will update existing users if the language short code in their account changes.

To use this feature, first define an AD attribute that stores the language short code via the Attributes tab under Directory Integration. As shown below, I’m using an AD Extension attribute to store this code:

Multi-Factor Authentication Server - setting Attributes

Then create a Sync Item that covers all your users, and configure “(use language attribute)” instead of explicitly setting a language – it’s the last item on the list of languages, so unless you’re Zulu it’s easy to miss. Incidentally, the short codes shown next to each language in this drop-down list are the values to use in the language attribute. Once user accounts have been configured with the language short code in the specified attribute, their language will be updated on the next sync.

Edit Synchronization Item Dialog

Other Stuff to Keep in Mind

When dealing with international phone carriers, not all are created equal. Unfortunately, Microsoft has little control over what happens to messages once they’re sent from Azure MFA Services. Some issues that have been experienced include:

  1. Carriers not accepting two way SMS, thus MFA requests time out as user One Time Password responses are not processed.
  2. Carriers not accepting the keypad response for Phone call MFA, again resulting in MFA request time out.
  3. Carriers spamming users immediately after SMS MFA messages have been sent.
  4. Users being charged international SMS rates for two-way SMS (makes sense, but often forgotten).

In my experience SMS has proven to be the least reliable; unless the Mobile MFA App/OATH can be used, Phone is the better method for international users.

Often overlooked is the ability to export and import user settings using the MFA Server Console. You can export by selecting File/Export within the console. The CSV can then be updated and re-imported – this will modify existing user accounts including MFA Method preference and language defaults.
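That export/update/re-import loop is easy to script as well. A rough Python sketch, assuming hypothetical Username and Language column headers (check your exported CSV for the real ones):

```python
import csv
import io

def set_language(csv_text, username, language):
    """Read an exported MFA user CSV, update one user's language
    preference, and return the updated CSV text for re-import."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        if row["Username"] == username:
            row["Language"] = language
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys(), lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Hypothetical two-user export
exported = "Username,Language\nmario,en\nbruce,en\n"
updated = set_language(exported, "mario", "it")
```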

One Last Thing

For Apple iPhone users, Notifications must be enabled for the MFA Mobile App, as the App will fail to activate with a timeout error if Notifications are disabled. Notifications can be enabled in iOS by navigating to Settings/Notifications/Multi-Factor and enabling “Allow Notifications”. Changing this setting can take a little while to apply, so if the Mobile App still doesn’t activate after enabling Notifications, wait a bit and then try again.

Setting Instance Level Public IPs on Azure VMs

Originally posted on siliconvalve:

Since October 2014 it has been possible to add a public IP address to a virtual machine in Azure so that it can be directly connected to by clients on the internet. This bypasses the load balancing in Azure and is primarily designed for those scenarios where you need to test a host without the load balancer, or you are deploying a technology that may require a connection type that isn’t suited to Azure’s Load Balancing technology.

This is all great, but the current implementation provides you with dynamic IP addresses only, which is not great unless you can wrap a DNS CNAME over the top of them. Reading the ILPIP documentation suggested that a custom FQDN was generated for an ILPIP, but for the life of me I couldn’t get it to work!

I went around in circles a bit based on the documentation Microsoft supplies as it looked…


Programmatically interacting with Yammer via PowerShell – Part 1

For my latest project I was asked to automate some Yammer activity. I’m the first to concede that I don’t have much of a Dev background, but I instantly fired up PowerShell ISE in tandem with Google, only to find… well, not a lot! After a couple of weeks fighting a steep learning curve, I thought it best to blog my findings – it’s good to share ‘n all that!

    It’s worth mentioning at the outset, if you want to test this out you’ll need an E3 Office 365 Trial and a custom domain. It’s possible to trial Yammer, but not with the default * domain.

First things first: there isn’t a PowerShell module for Yammer. I suspect it’s been on the to-do list in Redmond since the 2012 acquisition. So instead, the REST API is our interaction point. There is some very useful documentation, along with examples of the JSON queries, over at the Yammer developer site, linked here.

The site also covers the basics of how to interact using the REST API. Following the instructions, you’ll want to register your own application. This is covered perfectly in the link here.

When registering you’ll need to provide an Expected Redirect. For this I simply put my Yammer site address; for the purposes of my testing, I’ve not had any issues with this setting. This URL is important and you’ll need it later, so make sure to take a note of it. From the registration you should also grab your Client ID and Client secret.

While we’ve now got what appear to be the necessary tools to authenticate, we actually need to follow some steps to retrieve our Admin token.

    It is key to point out that I use the Yammer Verified Admin. This will be more critical for Part 2 of my post, but it’s always good to start as you mean to go on!

The following script will load Internet Explorer and compile the necessary URL. You will of course simply change the entries in the variables to the ones you created during your app registration. I have obfuscated some of the details in my examples, for obvious reasons :)

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$RedirURL = ""

$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
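What the script is compiling is a standard OAuth authorisation request. Sketched in Python for clarity – the endpoint path and parameter names here follow Yammer's published OAuth documentation, so verify them against the developer site before relying on them:

```python
from urllib.parse import urlencode

def build_authorize_url(client_id, redirect_url):
    """Build the Yammer OAuth authorisation URL the browser is sent to.
    The user logs in and authorises the app, and Yammer redirects back
    to redirect_url carrying an authorisation code."""
    params = urlencode({"client_id": client_id, "redirect_uri": redirect_url})
    return "https://www.yammer.com/dialog/oauth?" + params

# Obfuscated client ID and a hypothetical redirect, as in the post
url = build_authorize_url("fvIPx________GoqqV4A", "https://contoso.yammer.com")
```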

From the IE Window, you should login with your Yammer Verified Admin and authorise the app. Once logged in, proceed to this additional code…

if ($ie.LocationURL.EndsWith("algo")) {
    $url = $ie.LocationURL.TrimStart("")
    $Authcode = $url.TrimEnd("/Threads/index?type=algo")
}
else {
    $url = $ie.LocationURL.TrimStart("")
    $Authcode = $url.TrimEnd("/Threads/index?type=my_all")
}
$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true

This script simply captures the 302 return and extracts the $Authcode which is required for the token request. It will then launch an additional Internet Explorer session and prompt you to download an access_token.json. Within here you will find your Admin Token which does not expire and can be used for all further admin tasks. I found it useful to load this into a variable using the code below…

$Openjson = $(Get-Content 'C:\Tokens\access_token.json' ) -join "`n" | ConvertFrom-Json
$token = $Openjson.access_token.token
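As an aside, the TrimStart/TrimEnd approach to extracting $Authcode is brittle. If your redirect carries the standard OAuth code query-string parameter, it can be parsed more robustly; a Python sketch (the redirect URL shown is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_url):
    """Pull the OAuth 'code' parameter out of a redirect URL,
    regardless of what other parameters or paths surround it."""
    query = parse_qs(urlparse(redirect_url).query)
    return query["code"][0]

# Hypothetical post-login redirect for illustration
code = extract_auth_code("https://contoso.yammer.com/cb?code=abc123&state=x")
```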

OK, so we seem to be getting somewhere, but our Yammer page is still looking rather empty! Now that all the prerequisites are complete, we can make our first post. A good example JSON to use is posting a message, which is detailed in the link here.

I started with this one mainly because all we need is the Group_ID of one of the groups in Yammer and the message body in json format. I created a group manually and then just grabbed the Group_ID from the end of the URL in my browser. I have provided an example below…

$uri = ""

$Payloadjson = '{
"body": "Posting to Yammer!",
"group_id": 59***60
}'

$Headers = @{
"Accept" = "*/*"
"Authorization" = "Bearer " + $token
"accept-encoding" = "gzip"
}

Invoke-RestMethod -Method Post -Uri $uri -Headers $Headers -Body $Payloadjson
Yammer Result

Yammer Post

    It’s at this stage you’ll notice that I’ve only used my second cmdlet, Invoke-RestMethod. Both this and ConvertFrom-Json were introduced in PowerShell 3.0 and specifically designed for REST web services like this.

A key point to highlight here is the Authorisation attribute in the $Headers. This is where the $Token is passed to Yammer for authentication. Furthermore, this $Header construct is all you need going forward. It’s simply a case of changing the -Method, the $uri and the $Payload and you can play around with all the different json queries listed on the Yammer Site.
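For comparison, here's the same header construct sketched in Python using only the standard library (the URI, token and group_id are placeholders, not real values):

```python
import json
from urllib.request import Request

def make_yammer_request(uri, token, payload):
    """Build an authenticated POST request: the Bearer token in the
    Authorization header is what authenticates us to Yammer."""
    headers = {
        "Accept": "*/*",
        "Authorization": "Bearer " + token,
        "Accept-Encoding": "gzip",
        "Content-Type": "application/json",
    }
    return Request(uri, data=json.dumps(payload).encode(),
                   headers=headers, method="POST")

# Hypothetical endpoint, token and group id for illustration
req = make_yammer_request("https://www.yammer.com/api/v1/messages.json",
                          "TOKEN123",
                          {"body": "Posting to Yammer!", "group_id": 12345})
```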

While this was useful for me, it soon became apparent that I wanted to perform actions on behalf of other users. This is something I’ll look to cover in Part 2 of this Blog, coming in the next few days!

Azure Active Directory Connect high-availability using ‘Staging Mode’

With the Azure Active Directory Connect product (AAD Connect) being announced as generally available to the market (more here, download here), there is a new feature available that will provide a greater speed of recovery of the AAD Sync component. This feature was not available with the previous AAD Sync or DirSync tools and there is little information about it available in the community, so hopefully this model can be considered for your synchronisation design.

Even though the AAD Sync component within the AAD Connect product is based on the Forefront Identity Manager (FIM) synchronisation service, it does not take on the same recovery techniques as FIM. For AAD Sync, you prepare two servers (ideally in different data centres) and install AAD Connect. Your primary server would be configured to perform the main role to synchronise identities between your Active Directory and Azure Active Directory, and the second server installed in much the same way but configured with the setting ‘enable staging mode’ being selected. Both servers are independent and do not share any components such as SQL Server, and the second server is performing the same tasks as the primary except for the following:

  • No exports occur to your on-premises Active Directory
  • No exports occur to Azure Active Directory
  • Password synchronisation and password writeback are disabled.

Should the primary server go offline for a long period of time or become unrecoverable, you can enable the second server by simply running the installation wizard again and disabling staging mode. When the task schedule next runs, it will perform a delta import and synchronisation and identify any differences between the state of the previous primary server and the current server.

Some other items you might want to consider with this design.

  • Configure the task schedule on the second server so that it runs soon after the primary server completes. By default the task schedule runs every 3 hours, launching at a time based on when it was installed, so the second server’s task schedule can launch up to a few hours after the primary server runs. Based on the average amount of work the primary server does, configure the task schedule on the second server to launch, say, 5-10 minutes later.
  • AAD Sync includes a tool called CSExportAnalyzer which displays changes that are staged to be exported to Active Directory or Azure Active Directory. This tool is useful to report on pending exports while the server is in ‘staging mode’
  • Consider using ‘custom sync groups’ which are located in your Active Directory domain. The default installation of the AAD Sync component will create the following groups locally on the server: ADSyncAdmins, ADSyncOperators, ADSyncBrowse and ADSyncPasswordSet. With more than one AAD Sync server, these groups need to be managed on the servers and kept in sync manually. Having the groups in your Active Directory will simplify this administration.
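The stagger suggested in the first bullet above is simple arithmetic: start the standby's schedule just after the primary's average cycle completes. A trivial sketch:

```python
from datetime import datetime, timedelta

def standby_start(primary_start, avg_cycle_minutes, buffer_minutes=10):
    """Launch the standby's task schedule shortly after the primary
    server's sync cycle usually finishes."""
    return primary_start + timedelta(minutes=avg_cycle_minutes + buffer_minutes)

# If the primary kicks off at 02:00 and averages a 20-minute cycle,
# start the standby's schedule at 02:30.
start = standby_start(datetime(2015, 7, 1, 2, 0), avg_cycle_minutes=20)
```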

    NOTE: This feature is not yet working with the current AAD Connect download and this blog will be updated when working 100%.

The last two items will be detailed in future blogs.

Use Excel, PowerQuery and Yahoo Finance to manage your Portfolio

There are some new and powerful Business Intelligence options available to all of us, from scalable cloud platforms to Excel on the desktop. So it’s a good time to demonstrate some of them and show where BI and Application Development meet.

Power Query and the language M have been quietly making their way into the Microsoft Business Intelligence stack, and in particular Excel, for some time. The set of “Power” components (Power Query, Power Pivot and Power BI) all sound a bit like the marketing guys won the naming battle again, and don’t really help explain the technology nor where each fits, so I avoided them for a long time. That is, until this latest version of Office, where you can’t really avoid it. So it came time to take Power Query and M by the scruff and learn them.

First let’s put these new “Power” technologies in their relative places:

  • M is a new language that takes on the ET from ETL (Extract, Transform and Load). It is a scripting language which looks like an odd mix of VBScript, PowerShell and “R”. It has constructs for importing and manipulating Tables, Lists and Records, including the ability to do joining and grouping.
  • Power Query is a scripting environment, delivered as an add-on to Excel, that adds GUI support for creating and editing M scripts. It hides away the M language from the punter, but you’ll quickly find the need to drop into the M script editor for all but the most basic scripting chores.
  • Power Pivot is a multidimensional analysis engine that doesn’t require the full MSSQL OLAP Analysis Services. Power Pivot is available as a plug-in for Excel but executes over what is called the Data Model, an in-memory database capable of much larger datasets than Excel.
  • Power BI Designer is a standalone BI development environment which includes Power Query and PowerPivot. It’s for the power business user to develop reports from data without the need to drop into or distribute Excel.

So I set myself a goal to create a spreadsheet that auto updates with stock prices.

First, get a version of Excel (2010 or above) which supports Power Query, and download the Power Query installer (it’s already embedded in the later versions of Excel but may need an update). After installing you should get a set of new buttons like this. The New Query button replaces the previous Get External Data features of Excel and is the entry point to load data into the Power Query environment.

Yahoo Finance

Yahoo Finance have kindly exposed a number of APIs for requesting stock price information (if used for non-commercial purposes), current and historical for just about every instrument you can think of. These APIs are simple web queries but predate the REST API conventions so don’t follow nice RESTful best practice. The two services we’ll use are:

This will return historical prices between two dates.

This will return the latest quote for the stock CPU.AX with a number of fields identified by the f parameter.

Both APIs are very well documented (counter-intuitively) at Google Code.
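To make the request shapes concrete, here's a Python sketch of how the quote query string is assembled (the Yahoo Finance host and paths have long since been retired, so treat the endpoint as historical):

```python
from urllib.parse import urlencode

# The legacy quotes endpoint (retired; shown for illustration only)
QUOTE_BASE = "http://download.finance.yahoo.com/d/quotes.csv"

def build_quote_url(symbols, fields):
    """s = comma-separated list of stock symbols,
    f = concatenated list of field codes to return."""
    return QUOTE_BASE + "?" + urlencode({"s": ",".join(symbols),
                                         "f": "".join(fields)})

# Name, last trade price and last trade date for two ASX stocks
url = build_quote_url(["CPU.AX", "SEK.AX"], ["n", "l1", "d1"])
```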

Pulling this data into Power Pivot is really easy from which it’s even easier to post process and report.

Historical Query

So let’s import one of these services. The New Query button offers a huge number of different data sources. For this purpose we’ll simply use “Web”, and import the historical data service by pasting in the above URL. This will query the service, display the data and launch the Power Query editor.

Here’s our first glimpse of the new data staging area. There are a whole lot of buttons here for transforming and massaging the returned data, and everything that is done with the buttons in the GUI becomes M statements in an M query, which can be a great way to learn the language. There is also a data grid (not Excel – this one is part of Power Query) to show the query results.

Try hitting the button “Advanced Editor”.

    let
        Source = Csv.Document(Web.Contents(""),[Delimiter=",",Encoding=1252]),
        #"First Row as Header" = Table.PromoteHeaders(Source),
        #"Changed Type" = Table.TransformColumnTypes(#"First Row as Header",{{"Date", type date}, {"Open", type number}, {"High", type number}, {"Low", type number}, {"Close", type number}, {"Volume", Int64.Type}, {"Adj Close", type number}})
    in
        #"Changed Type"

Ah, now we get our first glimpse of M. The Web import wizard created a set of M code that queried the URL, determined the type of data returned (CSV), determined that the first row is headers, had a guess at the column types of the data, and returned that data as a Table.

Breaking down the code:

  • A query starts with (a very BASIC-looking) “let”
  • A set of case-sensitive statements (or Steps) separated by commas follows
  • Variables look like #”variablename”; they are created by assignment and adopt the returned type (much like a PowerShell variable, or var in C#)
  • M is optimised for speed and therefore uses short-circuit evaluation; that is, a statement line won’t execute if it isn’t required for the end value
  • A query ends with an “in” statement that represents the data returned from the query.


Now choose Close and Load To…. This is the point where Power Query tips into Excel, or whichever other tool you may be using to display data. The choice here is either to load the query into an Excel Table, into the Power Pivot Data Model, or simply to save the code as a stored query “Connection” for later use.

Choose Table and you should end up with something like this.


Now wouldn’t it be good if we could parameterise that query so it could be used for other stocks and dates? We can upgrade the original query to be a function. Open the Advanced Editor again on the query and overwrite it with this.

    let historical = (symbol as text, days as number) =>
    let
        #"toDate" = DateTime.LocalNow(),
        #"fromDate" = Date.AddDays(#"toDate", 0-days),
        #"fromDateString" = "&a=" & Text.From(Date.Month(fromDate)-1) & "&b=" & Text.From(Date.Day(fromDate)) & "&c=" & Text.From(Date.Year(fromDate)),
        #"toDateString" = "&d=" & Text.From(Date.Month(toDate)-1) & "&e=" & Text.From(Date.Day(toDate)) & "&f=" & Text.From(Date.Year(toDate)),
        Source = Csv.Document(Web.Contents("" & symbol & #"fromDateString" & #"toDateString" & "&ignore=.csv"),[Delimiter=",",Encoding=1252]),
        #"SourceHeaders" = Table.PromoteHeaders(Source),
        #"Typed" = Table.TransformColumnTypes(#"SourceHeaders",{{"Date", type date}, {"Open", type number}, {"High", type number}, {"Low", type number}, {"Close", type number}, {"Volume", Int64.Type}, {"Adj Close", type number}})
    in
        #"Typed"
    in
        historical

First thing to note is the code structure now has a query in a query. The outer query defines a function which injects some variables into the inner query. We’ve parameterised the stock symbol, and the number of days of history. We also get to use some of the useful M library functions as defined in here Power Query Formula Library Specification to manipulate the dates.
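The fiddly detail in those date strings is that Yahoo's month parameters (a and d) were zero-based, hence the Date.Month(...)-1. The same logic sketched in Python:

```python
from datetime import date

def history_params(from_date, to_date):
    """Build Yahoo's a/b/c (from) and d/e/f (to) URL parameters.
    Months are zero-based in this legacy API, days and years are not."""
    return ("&a=%d&b=%d&c=%d" % (from_date.month - 1, from_date.day, from_date.year)
          + "&d=%d&e=%d&f=%d" % (to_date.month - 1, to_date.day, to_date.year))

# January maps to a=0, June to d=5
params = history_params(date(2015, 1, 31), date(2015, 6, 30))
```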
Now rename the new “function” as fHistorical and choose Home=>Close and Load.

At this point it’s easy to test the function using the Invoke command, which will prompt for the parameters. To execute it from Excel we’ll provide the data from Excel cells. In Excel, use Insert=>Table to insert a table with two columns named Symbol and Days, and name the table “HistoricalParameters”. Note we have just created an Excel Table, which is different to a Power Query Table, although you can use a Power Query Table to load data into and read data from an Excel Table (confused yet?).

Now it’s time for some more M. Choose New Query=>Other Sources=>Blank Query and paste in this code.

    let
        Source = Excel.CurrentWorkbook(){[Name="HistoricalParameters"]}[Content],
        #"Symbol" = Source{0}[Symbol],
        #"Days" = Source{0}[Days],
        #"Historical" = fHistorical(#"Symbol",#"Days")
    in
        #"Historical"

This takes the HistoricalParameters table we created in Excel, pulls out the data from the Symbol and Days columns, and uses that to call our fHistorical function. Close and Load that query into an Excel table called Historical. I put both the Historical and the HistoricalParameters tables on the same worksheet so you can update the Symbol and the Days and see the result. It looks like this. To get the query to run with new data, either hit the Data Refresh button in Excel or right-click and Refresh on the Historical table.

Note that once you have data in an Excel Table like this, it’s easy to add additional calculated columns to the table, which auto-update when the query does.

Formula Firewall

One of the areas that can be confusing is the concept of Privacy Levels. Power Query is built with sharing of queries in mind. Users can share queries with others, but a Data Source must be declared with its privacy level to ensure that data from secure sources isn’t combined with insecure sources. This is done via the very fancy-sounding Formula Firewall.

You can control the privacy level of every data source, but when developing a query the restrictions imposed by the firewall can be confusing and can lead to the dreaded error message:

"Formula.Firewall: Query 'QueryName' (step 'StepName') references other queries or steps and so may not directly access a data source. Please rebuild this data combination."


So for now I’m going to turn it off. In the Power Query editor window go to File=>Options and Settings=>Options and set the Ignore Privacy Levels option.


Portfolio Query

OK, now onto something more complex. The quote Yahoo URL takes two parameters, s and f: “s” is a comma-delimited list of stock symbols, and “f” is a list of fields to return. There are actually 90 fields which can be returned for a stock, requested by concatenating the alphanumeric codes for the fields. What would be good is to create a query that can request a number of stocks with a user-defined set of columns. The set of codes is well described on this page, so let’s take that data and add it to an Excel worksheet: paste the data in, highlight the cells and select Insert=>Table, which will convert it to an Excel Table, and rename it “FieldLookup”. Now rename a couple of columns and add a “Type” and a “Display” column, so you end up with something like this.

The “Display” column is used to determine which fields are to be queried and displayed. The “Type” column is used to encourage Excel to use the right column type when displaying (everything will still work using the default Excel column type, but Numeric and Date columns won’t sort or filter properly).
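The row-filtering and concatenation the query will perform over FieldLookup is easy to mirror in Python (the column names follow the worksheet described above):

```python
def build_f_param(field_lookup):
    """Concatenate the 'f' codes of fields flagged Display=True,
    mirroring the DisplayFields/fParam steps of the M query below."""
    return "".join(row["f"] for row in field_lookup if row["Display"])

# A hypothetical three-row FieldLookup table
field_lookup = [
    {"Name": "Name",       "f": "n",  "Type": "text",   "Display": True},
    {"Name": "Last Trade", "f": "l1", "Type": "number", "Display": True},
    {"Name": "Bid",        "f": "b",  "Type": "number", "Display": False},
]
```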

Now, in a new Excel sheet, add a table called Portfolio with a single column, Symbol, and add some stock code symbols. It is this table which will be filled with data for all the required stocks, using all of the required display fields.

Now create a new Query:

    let
        PortfolioSource = Excel.CurrentWorkbook(){[Name="Portfolio"]}[Content],
        #"sParam" = Text.Combine(Table.ToList(Table.SelectColumns(PortfolioSource, "Symbol")),","),

        #"Fields" = Excel.CurrentWorkbook(){[Name="FieldLookup"]}[Content],
        #"DisplayFields" = Table.SelectRows(#"Fields",each [Display]=true),
        #"fParam" = Text.Combine(Table.ToList(Table.SelectColumns(#"DisplayFields", "f")),""),

        #"DisplayColumns" = Table.ToList(Table.SelectColumns(#"DisplayFields", "Name")),
        #"TypeNames" = Table.SelectRows(Table.SelectColumns(#"DisplayFields", {"Name","Type"}), each [Type]="number"),
        #"ColumnTypes" = Table.AddColumn( #"TypeNames", "AsType", each type number),
        #"ColumnTypesList" = Table.ToRows( Table.SelectColumns(#"ColumnTypes",{"Name","AsType"})),

        #"YahooSource"= Csv.Document(Web.Contents("" & #"sParam" & "&f=" & #"fParam"), #"DisplayColumns"),
        #"TypedYahooSource" = Table.TransformColumnTypes(#"YahooSource",  #"ColumnTypesList")
    in
        #"TypedYahooSource"

Here’s what is happening:

  • PortfolioSource: Get the Portfolio table and take just the Symbol column to create a single column Table
  • #”sParam”: Now combine all the rows of that table with commas separating. This will be the “s” parameter of the yahoo Quotes request
  • #”Fields”: Now get the table content from the FieldLookup table
  • #”DisplayFields”: Select just the rows that have a Display column value of true. Here we see the first use of an inline function, executed for every row using the “each” keyword (a bit like a C# Linq query).
  • #”fParam”: Combine all the rows with no separator to create the “f” parameter.
  • #”DisplayColumns”: Choose the Name column from the table to be used later
  • #”TypeNames”: Choose the Name and Type column where the type is a number
  • #”ColumnTypes”: Add a new column of type number. This is the first time we see the type system of M; there are all the simple types plus some additional ones for Record, List, Table etc.
  • #”ColumnTypesList”: Create a list of lists where each list item is a column name and the number type
  • #”YahooSource”: Now make the query to yahoo finance with our preprepared “s” and “f” parameters
  • #”TypedYahooSource”: Run through the returned table and change the column types to number for all required number columns

Now Close and Load the Query. This will generate a query like this:

,MRM.AX,CAB.AX,ARI.AX,SEK.AX,NEC.AX&f=m4m3c1m5m7p2er1j4e7ql1nt8r5rr2p6r6r7p1p5s6s7st7yvkj

And a Portfolio table like this

Every column that has a “true” in the Display column of FieldLookup table will get a column returned with data. To add and remove stocks from the portfolio, just add, remove or edit them right there in the Symbol column of the Portfolio table. Hit Refresh and the data is updated!

Here’s the final spreadsheet: YahooFinanceStocks

As I was feeling around Power Query and M, I was left with a nagging feeling. Do we need another language? And another development environment, without debugging, intellisense or syntax highlighting? Definitely not!

M is reasonably powerful but seems to have snuck into the Microsoft stack before what I call the “open era”. That is Microsoft’s new model of embracing open source rather than competing against it. Which brings me to R. R can do all of that and more and there is much more community backing. R is now a language appearing across interesting new Microsoft products like Azure Machine Learning and even inside SQL Server.

So I’ll crack that nut next time.

Azure Internal Load Balancing – Setting Distribution Mode

I’m going to start by saying that I totally missed that setting the distribution mode on Azure’s Internal Load Balancer (ILB) service is possible. This is mostly because you don’t set the distribution mode at the ILB level – you set it at the Endpoint level (which in hindsight makes sense, because that’s how you do it for public load balancing too).

There is an excellent blog on the Azure site that covers distribution modes for public load balancing, and the good news is that they apply to internal load balancing as well. Let’s take a look.

In the example below we’ll use the following parameters:

  • Cloud Service: apptier
  • Two VMs: apptier01, apptier02
  • VNet subnet named ‘appsubnet’
  • An internal load balancer with a static IP address
  • HTTP traffic balanced based on Source and Destination IP.

Here’s the PowerShell to achieve this setup.

# Assumes you have set up your PS subscription and user account.

# Add Load Balancer to Cloud Service wrapping VMs
# (the -StaticVNetIPAddress value has been omitted here)
Add-AzureInternalLoadBalancer -ServiceName apptier `
    -InternalLoadBalancerName apptierplb -SubnetName appsubnet

# Add Endpoints to VMs
# VM1
Get-AzureVM -ServiceName apptier -Name apptier01 | `
    Add-AzureEndpoint -LBSetName 'HttpIn' -Name 'HttpIn' `
    -DefaultProbe -InternalLoadBalancerName 'apptierplb' -Protocol tcp `
    -PublicPort 80 -LocalPort 80 -LoadBalancerDistribution sourceIP | `
    Update-AzureVM

# VM2
Get-AzureVM -ServiceName apptier -Name apptier02 | `
    Add-AzureEndpoint -LBSetName 'HttpIn' -Name 'HttpIn' `
    -DefaultProbe -InternalLoadBalancerName 'apptierplb' -Protocol tcp `
    -PublicPort 80 -LocalPort 80 -LoadBalancerDistribution sourceIP | `
    Update-AzureVM

# You can check which distribution mode is set
Get-AzureVM -ServiceName apptier -Name apptier01 | Get-AzureEndpoint


The Secrets of SharePoint Site Mailbox

A Site Mailbox serves as a central filing cabinet, providing a place to file project emails and documents that can only be accessed and edited by SharePoint site members.

It can be used from a SharePoint team site to store and organise team email or via Outlook 2013 for team email, and as a way to quickly store attachments and retrieve documents from the team site.

Users will not see a Site Mailbox in their Outlook client unless they are an owner or member of that Site in SharePoint.

Secret 1: Remove Site Mailbox from Outlook client

Outlook 2013 has been enhanced to support Site Mailboxes, with a really nice integration point being the automated rollout of Site Mailboxes directly to users’ Outlook profiles based on each user’s permissions on the SharePoint site.

Anyone in the default owners or members groups for the site (anyone with Contribute permissions) can use the Site Mailbox. People can be listed in the owners or members group as individuals or as part of a security group. Once the Site Mailbox is provisioned, it will automatically be added to Outlook, with the reverse being true if the user’s permission is removed.

There are two ways to manage visibility of the Site Mailbox in Outlook:

A. Managing from Outlook manually

Users can simply right-click on their personal mailbox and select ‘Manage All Site Mailboxes’. They will then be directed to a list of all Site Mailboxes they have access to, and can easily pin and unpin them from there.

Manage all Site Mailboxes in Outlook

B. Group Membership in SharePoint

If you don’t want users to see the Site Mailbox in Outlook at all, here is the tip: do not add users to the default members or owners group in SharePoint. Instead, create a separate SharePoint group with Edit permission and add users to that new group; they can still access the Site Mailbox through the web, but the mailbox will not appear in Outlook.

Secret 2: Rename an existing Site Mailbox

Here is another tip. What happens if users are not happy with the Site Mailbox name that shows in the Global Address List (GAL)? It takes a lot of effort to delete the Site Mailbox and reassign the permissions if all we want to do is rename it.

I have great news – here is the workaround:

  1. In the Office 365 Admin Center > Active users, find the Site Mailbox
  2. Select the Site Mailbox and edit its display name
  3. Then go to the Site with the mailbox, go to Site Settings > Title, description and logo
  4. Update the Title with the new Site Mailbox name too.

Note: if you do not update the Site name (the last step), Exchange will revert the Site Mailbox name back to the Site name – smart enough, isn’t it :).
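The same rename can be scripted. This is a hedged sketch assuming an Exchange Online remote PowerShell session and the SharePoint Online Management Shell; the mailbox identity, site URL and new name are placeholders:

```powershell
# Rename the Site Mailbox in Exchange Online
Set-SiteMailbox -Identity "SMO-TeamSite" -DisplayName "Project X Mail"

# Update the SharePoint site title to match, otherwise Exchange
# will revert the mailbox name back to the site name
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
Set-SPOSite -Identity "https://contoso.sharepoint.com/sites/teamsite" -Title "Project X Mail"
```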

So there we have a couple of useful tips – hopefully they’ve helped you out!

Hybrid Exchange Migration: Mailbox to Mail-User Conversion Fails

Occasionally after migrating a mailbox from an on-premises Exchange server to Exchange Online the user is unable to access their mailbox using Outlook, however the Office 365 Outlook Web Access (OWA) application is functional. Often (but not always) the migration batch report will contain users that have “Completed with Errors” or “Completed with Warnings”.

Commonly this is caused by the migration process failing to update the on-premises object and convert it into a mail-enabled user, often due to issues with inheritable permissions or unsupported characters. This critical step retains relevant information to populate the Global Address List, while clearing some attributes that block Outlook from performing an autodiscover operation to locate the mailbox on Office 365.

Additionally the process sets the targetAddress attribute to allow Exchange to correctly route mail to the Exchange Online mailbox. In this situation it’s likely that you will need to complete the conversion from mailbox to a mail enabled user manually. Microsoft has made the KB2745710 article available recommending a process for resolving the issue.

Unfortunately, running “Disable-Mailbox” will result in the user losing their proxy and X500 (LegacyExchangeDN) addresses which will likely create more problems. I have put the following simple script together that will convert an on-premises User Mailbox into a Remote Mailbox (mail-enabled user).

Note: Do not use the script ‘as is’ against any other mailbox type (shared, resource, etc)!

This script will set the following attributes to NULL:

  • homeMDB = NULL
  • homeMTA = NULL
  • msExchHomeServerName = NULL

and then sets the following attributes:

  • msExchVersion = “44220983382016” (this is relevant for mailboxes previously hosted on Exchange 2003)
  • msExchRecipientDisplayType = “-2147483642” (SyncedMailboxUser)
  • msExchRecipientTypeDetails = “2147483648” (RemoteUserMailbox)
  • msExchRemoteRecipientType = “4” (migrated)
  • targetAddress =

To run the script, save it as a PowerShell ps1 file. The samAccountName of the affected mailbox is a required parameter, as in the example below:

.\Complete_MailUser_Conversion.ps1 -samAccountName smithj

param(
    [Parameter(Mandatory = $true)]
    [string]$samAccountName
)

Import-Module ActiveDirectory

# Global Variables & Constants
$targetDomain = "*" # Update to match your own

$user = Get-ADUser $samAccountName -Properties ProxyAddresses, UserPrincipalName
$ProxyAddresses = $user.ProxyAddresses
$UPN = $user.UserPrincipalName
$target = ($ProxyAddresses -like $targetDomain | Select-Object -First 1) -replace 'smtp:', 'SMTP:'

# Script START
if ($target) {
    Get-ADUser -Filter 'UserPrincipalName -eq $UPN' |
        Set-ADUser -Clear homeMDB, homeMTA, msExchHomeServerName -Replace @{
            msExchVersion              = "44220983382016"
            msExchRecipientDisplayType = "-2147483642"
            msExchRecipientTypeDetails = "2147483648"
            msExchRemoteRecipientType  = "4"
            targetAddress              = $target
        }
}
# Script END
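Once the script has run, it is worth sanity-checking the result. A quick way (again assuming the ActiveDirectory module, with `smithj` as a placeholder account) is to read back the attributes the script touches:

```powershell
# homeMDB should now be empty and targetAddress should point
# at the Exchange Online routing address
Get-ADUser smithj -Properties homeMDB, msExchRecipientTypeDetails,
    msExchRemoteRecipientType, targetAddress |
    Select-Object homeMDB, msExchRecipientTypeDetails, msExchRemoteRecipientType, targetAddress
```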

FIM 2010 R2 and the Missing Log File

Anyone who has had anything to do with FIM will probably have experienced moments where you question what is taking place and ask yourself if you really understand what FIM is doing at a specific point in time. This is partly due to FIM’s extraordinarily unpredictable error handling and logging.

While working on a long-running FIM 2010 R2 project, we chose to make heavy use of PowerShell within action and authorisation workflows, making use of some of the PowerShell extensions for FIM 2010 R2 available on CodePlex.

Enabling FIM to execute PowerShell let us get FIM to do all kinds of things it did not otherwise have out-of-the-box capability for. Furthermore, it made FIM’s interactions with other systems what I like to call “System Administrator” friendly: most system administrators these days are pretty comfortable with PowerShell and can at least follow the logic inside a PowerShell ps1 script.

So, this worked well for us and allowed us to pick the “FIM Extensions PowerShell Activity” from inside of a workflow and execute our very own PowerShell Scripts as part of Action or Authorisation Workflows.


This is awesome until something unexpected happens inside your scripts. In a complex environment where many PowerShell scripts may exist within close proximity of each other troubleshooting can be a less than pleasant experience.

While the concept of logging is nothing new, we experimented with a few methods before arriving at one that made working with the FIM PowerShell extension more friendly and predictable, requiring standard analytical / system administration skills rather than a specialty in clairvoyance.

Originally we logged to a custom application log inside the Windows event log. This, however, was slow and cumbersome to view, particularly during the development and testing stages of our project. In the end we found it more helpful to have a single PowerShell text log file that captured the output of all our scripts’ activities as executed by FIM workflows, allowing an easier-to-read view of what has taken place and, importantly, in what order.

So here are my learnings from the field:

Start by creating a function library. Here you can stash any functions and reduce the need for repetitive code within your PowerShell scripts. We called our library “fimlib.ps1”.

Inside fimlib.ps1 we wrote a function for logging to a txt file. The function allows us to define a severity ($severity), event code ($code) and a message ($message):

$logfile = "C:\EPASS\Logs\PowerShell" + (Get-Date -f yyyy-MM-dd) + ".log"

# Severities to write to the log (we defined $loginclude elsewhere
# in fimlib.ps1 - adjust the list to taste)
$loginclude = @("JOBINFO", "JOBWARNING", "JOBERROR")

# A code to identify individual threads, based on the current PID and a random number
$handle = $pid.ToString() + "." + (Get-Random -Minimum 1000 -Maximum 2000).ToString()

function fimlog($severity, $code, $message) {
    if ($loginclude -contains $severity) {
        $date = Get-Date -Format u
        $user = whoami
        $msg = "{" + $handle + "} " + $date + " [" + (Get-PSCallStack)[1].Command + "/" + $code + "/" + $severity + " as " + $user + "] - " + $message
        Add-Content $logfile $msg
    }
}
Take note of the use of the "$handle" variable: here we are creating a code to identify individual threads, based on the current PID and a random-ish number.

The end result is a text based log file with events like the following:


I like to use NotePad++ to view the log as it has some nice features, like automatic notification when there are new entries in the log. The handle I mentioned earlier makes it easy to sort the log and isolate individual activities by finding all occurrences of a handle. Typically each PowerShell script executed as a result of a workflow will have a unique handle.
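If you don’t have NotePad++ handy, the same isolation can be done from PowerShell itself. The handle value below is a placeholder – take it from any log entry of interest:

```powershell
# Pull every log entry written by a single script execution
Select-String -Path $logfile -Pattern "{1234.1567}" -SimpleMatch
```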

So how do you make this work:

Firstly you’ll need to reference your library at the start of each of your PowerShell scripts. Add something like this to the start of your script:

# Load FIMLib.ps1
. C:\Lib\FIMLib.ps1

Whenever you need to log something you can simply call your “fimlog” function as follows:

fimlog "JOBINFO" 100 "This is something I'd like to log"

While this is nothing revolutionary, it helped me understand what was actually taking place in reality. Hopefully it can help you too.

Tips on moving your Visual Studio Online from Microsoft to Organisational Accounts

If like me you’ve been a keen user of Visual Studio Online since it first came into existence way back in 2012 you’ve probably gotten used to using it with Microsoft Accounts (you know, the ones everyone writes “formerly Live ID” after), and when, in 2014, Microsoft enabled the use of Work (or Organisational) Accounts you either thought “that’s nice” and immediately got back to writing code, or went ahead and migrated to Work Accounts.

If you are yet to cutover your Visual Studio Online (VSO) tenant to use Work Accounts, here are a few tips and gotchas to be aware of as part of your switch.

The VSO owner Microsoft Account must be in Azure AD

Yes, you read that correctly.

Azure Active Directory supports the invitation of users from other Azure AD instances as well as users with Microsoft Accounts (MSAs).

If you haven’t added the MSA that is currently listed as “Account owner” on the Settings page of your VSO tenant to the Azure Active Directory instance you intend to use for Work Account authentication, then you will have problems.

For the duration of the cutover you also need to have the account as an Azure Co-admin. This is because the directory connection process for VSO which you perform via the Azure Management Portal assumes that the user you are logged into the Portal as is also the Account owner of your VSO tenant. If this isn’t the case you’ll see this error:

HTTP403 while loading resource VS27043 from visualstudio : User “” is not the account owner of “yourvsotenant”.

Users with MSAs that match their Work Accounts are OK

Many organisations worked around the use of non-Azure AD accounts early on by having users create MSAs with logins matching Work Accounts. As I previously blogged about, it isn’t always possible to do this, but if your organisation has then you’re in a good state (with one drawback).

After you cut over to Azure AD, these users will no longer be able to use their MSA and will have to log out and then log back in. When they do so they will be required to select whether they wish to use the MSA or the Work Account for access, which, while painful, is hardly the end of the world.

On the upside, the existing Team Project membership the user had with their old MSA is retained as their user identifier (UPN / email address) hasn’t changed.

On the downside, the user will need to create new Alternative Credentials in order to access Git repositories they previously had access to. Note that you can’t reuse the old Git credentials you had.

In summary on this one – it is quite a subtle change, particularly as the username is the same, but the password (and login source) changes. Expect to do some hand holding with people who don’t understand that they were not previously using an Azure AD-based identity.

Don’t forget they’ll need to change their Visual Studio settings too by using the ‘Switch User’ option on the bottom left of the “Connect to Team Foundation Server” dialog as shown below.

Switch User on Dialog

Don’t delete MSAs until you have mapped user access

Seems simple in hindsight, but deleting an MSA before you know what Team Projects it has access to will mean you can no longer determine which projects the new Work Account needs to be added to.

The easiest way to determine which groups to assign Work Accounts to is to switch to the VSO Admin Security Page and open the User view, which will show you Team Project membership. Then for each MSA view the groups it is a member of and add the new Work Account to the same. This is maybe a good time to pull access from old projects nobody really uses any more ;).

TFSVC – new user = new workspace

Ah yes, this old chestnut :).

As the Work Account represents a new user it isn’t possible to reuse a previous workspace which does mean… re-syncing code down into a new workspace on your machine again.

The tip here (and for Git also) is that you need to plan the cutover and ensure that developers working on the source trees in your tenant either check-in pending changes, or shelve them for later retrieval and re-application to a new workspace.
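For TFVC, the shelve step can be done with tf.exe from a developer command prompt or PowerShell session on each developer’s machine. The shelveset name below is a placeholder:

```powershell
# Shelve all pending changes recursively before the cutover,
# for retrieval and re-application in the new workspace afterwards
tf shelve PreAADCutover /recursive
```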

Get external users into Azure AD before the cutover

If you need to continue to have MSA logins for customers or third party services, make sure you add them to Azure AD prior to the cutover as this will make life afterwards so much easier.

If you have a large number of developers using MSAs that can’t be disrupted then you should consider adding their MSAs temporarily to Azure AD and then migrate them across gradually.

So there we have it, hopefully some useful content for you to use in your own migration. Do you have tips you want to share? Please go ahead and add them in the comments below.