Resolving the ‘Double Auth’ prompt issue in ADFS with Azure AD Conditional Access MFA

As mentioned in my previous post, Using ADFS on-premises MFA with Azure AD Conditional Access, if you have implemented Azure AD Conditional Access to enforce MFA for all your Cloud Apps, and you are using the SupportsMFA=true parameter to direct MFA execution to your on-premises ADFS MFA server, you may have encountered what I call the ‘Double Auth’ prompt issue.

While this doesn’t happen across all Cloud Apps, you will see it on the odd occasion (in particular with the Intune Company Portal and the Azure AD PowerShell cmdlets), and it has the following symptoms:

  1. User signs into Azure AD App (e.g. Azure AD Powershell with Modern Auth support)
  2. User sees an auth prompt and enters their username, which redirects them to ADFS
  3. User enters credentials and presses Enter
  4. It looks like the sign-in succeeds, but then ADFS reappears and the user is prompted to enter their credentials again
  5. After the second successful attempt, the user is then prompted for MFA as expected



Understanding why this happens relies on two things:

  1. The background I provided in the blog post referenced above: specifically, that when SupportsMFA is being used, Azure AD sends two requests to ADFS instead of one as part of the authentication process when MFA is involved.
  2. The configuration and behaviour of Azure AD’s prompt=login parameter, which is discussed in this Microsoft Docs article.

So to delve into this, let’s crack out our trusty Fiddler tool to look at what’s happening:


Highlighted in the image above is the culprit. You’ll see in the request strings that the user is being sent to ADFS with two key parameters: wauth=… and wfresh=0. What is happening here is that this particular Azure AD application has decided that, as part of sign-in, it wants to ensure ‘fresh credentials’ are provided (say, to ensure the correct user credentials are used). It does this by telling Azure AD to generate a request with prompt=login; however, as noted in the article referenced, because some legacy ADFS systems don’t understand this ‘modern’ parameter, the default behaviour is for Azure AD to pre-emptively translate the request into two ‘WS-Fed’ parameters that they can understand. In particular, wfresh=0 as per the WS-Fed spec means:

…  If specified as “0” it indicates a request for the IP/STS to re-prompt the user for authentication before issuing the token….

The problem, of course, is that ADFS sees the wfresh=0 parameter in both requests and will dutifully abide by it, prompting the user for credentials each time!
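For illustration, the pre-emptively translated request to ADFS looks something like this (the hostname here is hypothetical; the key parts are the wauth and wfresh=0 parameters):

```
GET https://adfs.contoso.com/adfs/ls/?wa=wsignin1.0
    &wtrealm=urn:federation:MicrosoftOnline
    &wauth=urn:oasis:names:tc:SAML:1.0:am:password
    &wfresh=0
```

Both requests in the trace carry that wfresh=0, which is exactly what triggers the repeated credential prompt.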

So the fix for this is fairly simple and is in fact (very vaguely) called out in the article I’ve referenced above: ensure that Azure AD uses the NativeSupport configuration, so that it sends the parameter as-is for ADFS to interpret instead of pre-emptively translating it.

The specific command to run is:

Set-MsolDomainFederationSettings -DomainName <your federated domain> -PromptLoginBehavior NativeSupport
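You can confirm the change has applied (allowing for the propagation time mentioned below) with, for example (domain name is a placeholder):

```powershell
# check the current prompt=login behaviour for the federated domain
Get-MsolDomainFederationSettings -DomainName contoso.com |
    Select-Object PromptLoginBehavior
```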

The prerequisite for this fix is that you are running either:

  • ADFS 2016
  • ADFS 2012 R2 with the July 2016 update rollup

Once this update is applied (remember that these DomainFederationSettings changes can take 15-30 minutes to take effect), you’ll be able to see the difference in Fiddler: ADFS is now sent a prompt=login parameter instead, and only on the first request, so the credential prompt occurs only once.


Hope that saves a few hairs for anyone out there who’s come across this issue!

[UPDATE 12/09/17]  Looks like there’s a Microsoft KB article around this issue now!  Helpful for those who need official references:

Using ADFS on-premises MFA with Azure AD Conditional Access

With the recent announcement of General Availability of the Azure AD Conditional Access policies in the Azure Portal, it is a good time to reassess your current MFA policies particularly if you are utilising ADFS with on-premises MFA; either via a third party provider or with something like Azure MFA Server.

Prior to conditional MFA policies being possible, when utilising on-premises MFA with Office 365 and/or Azure AD, the MFA rules were generally enabled on the ADFS relying party trust itself.  The main limitation with this, of course, is the inability to define different MFA behaviours for the various services behind that relying party trust.  That is, within Office 365 (Exchange Online, SharePoint Online, Skype for Business Online etc.) or across the different Azure AD apps that may have been added via the app gallery (e.g. ServiceNow, SalesForce etc.).  In some circumstances you may have been able to define some level of granularity using custom authorisation claims, such as bypassing MFA for ActiveSync and legacy authentication scenarios, but that method was reliant on special client headers or the authentication endpoints being used, and hence was quite limited in its use.

Now with Azure AD Conditional Access policies, the definition and logic of when to trigger MFA can, and should, be driven from the Azure AD side given the high level of granularity and varying conditions you can define. This doesn’t mean though that you can’t keep using your on-premises ADFS server to perform the MFA, you’re simply letting Azure AD decide when this should be done.

In this article I’ll show you the method I like to use to ‘migrate’ from on-premises MFA rules to Azure AD Conditional Access.  Note that this is only applicable for the MFA rules for your Azure AD/Office 365 relying party trust.  If you are using ADFS MFA for other SAML apps on your ADFS farm, they will remain as is.


At a high level, the process is as follows:

  1. Configure Azure AD to pass ‘MFA execution’ to ADFS using the SupportsMFA parameter
  2. Port your existing ADFS MFA rules to an Azure AD Conditional Access (CA) Policy
  3. Configure ADFS to send the relevant claims
  4. “Cutover” the MFA execution by disabling the ADFS MFA rules and enabling the Azure AD CA policy

The ordering here is important, as by doing it like this, you can avoid accidentally forcing users with a ‘double MFA’ prompt.

Step 1:  Using the SupportsMFA parameter

The crux of this configuration is the use of the SupportsMFA parameter within your MSOLDomainFederationSettings configuration.

Setting this parameter to True will tell Azure AD that your federated domain is running an on-premises MFA capability and that whenever it determines a need to perform MFA, it is to send that request to your STS IDP (i.e. ADFS) to execute, instead of triggering its own ‘Azure Cloud MFA’.

Performing this step is a simple MSOnline PowerShell command:

Set-MsolDomainFederationSettings -DomainName <your federated domain> -SupportsMFA $true

Pro Tip:  This setting can take up to 15-30 minutes to take effect, so make sure you factor this into your change plan.  If you don’t wait for it to kick in before cutting over, your users will get ‘double MFA’ prompts.
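Before cutting over, you can verify the setting has landed with, for example (domain name is a placeholder):

```powershell
# confirm SupportsMFA is now True for the federated domain
Get-MsolDomainFederationSettings -DomainName contoso.com |
    Select-Object SupportsMFA
```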

Step 2:  Porting your existing MFA Rules to Azure AD Conditional Access Policies

There’s a whole article in itself talking about what Azure AD CA policies can do nowadays, but for our purposes let’s use the two most common examples of MFA rules:

  1. Bypass MFA for users that are a member of a group
  2. Bypass MFA for users on the internal network*

Item 1 is pretty straightforward; just ensure your Azure AD CA policy has the following:

  • Assignment – Users and Groups:
    • Include:  All Users
    • Exclude:  Bypass MFA Security Group  (simply reuse the one used for ADFS if it is synced to Azure AD)


Item 2 requires the use of the Trusted Locations feature.  Note that at the time of writing this is still the ‘old’ MFA Trusted IPs feature hosted in the Azure Classic Portal.   *Note:  if you are using Windows 10 Azure AD Joined machines, this feature doesn’t work.  Why this is the case will be an article in itself, so I’ll add a link here when I’ve written that up.

So within your Azure AD CA policy do the following:

  • Conditions – Locations:
    • Include:  All Locations
    • Exclude:  All Trusted IPs


Then make sure you click on Configure all trusted locations to be taken to the Azure Classic Portal.  From there you must set Skip multi-factor authentication for requests from federated users on my intranet.


This effectively tells Azure AD that a ‘trusted location’ is any authentication request that comes in with an InsideCorporateNetwork claim.

Note:  If you don’t use ADFS or an IDP that can send that claim, you can always use the actual ‘Trusted IP addresses’ method.

Now you can define exactly which Azure AD apps you want MFA to be enabled for, instead of all of them as you had originally.


Pro Tip:  If you are going to enable MFA on All Cloud Apps to start off with, check the end of this article for some extra caveats you should account for, or else you’ll start breaking things.

Finally, to make this Azure AD CA policy actually perform MFA, set the access controls:


For now, don’t enable the policy just yet as there is more prep work to be done.

Step 3:  Configure ADFS to send all the relevant claims

So now that Azure AD is ready for us, we have to configure ADFS to actually send the appropriate claims across to ‘inform’ it of what is happening or what it is doing.

The first step is to make sure we send the InsideCorporateNetwork claim so Azure AD can apply the ‘bypass for all internal users’ rule.  This is well documented everywhere, but the short version is: within your Microsoft Office 365 Identity Platform relying party trust in ADFS, add a new Issuance Transform Rule to pass through the Inside Corporate Network claim:
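If you prefer the claim rule language view over the GUI, a pass-through rule along these lines does the job (a sketch using the standard claim type URI):

```
@RuleTemplate = "PassThroughClaims"
@RuleName = "Pass through Inside Corporate Network"
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
 => issue(claim = c);
```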


Fun fact:   The Inside Corporate Network claim is automatically generated by ADFS when it detects that the authentication was performed on the internal ADFS server, rather than through the external ADFS proxy (i.e. WAP).  This is why it’s a good idea to always use an ADFS proxy rather than simply reverse-proxying your ADFS; without it you can’t easily tell whether it was an ‘internal’ or ‘external’ authentication request (plus it’s more secure).

The other important claim to send through is the authnmethodsreferences claim.  You may already have this if you were following the Microsoft TechNet documentation when setting up ADFS MFA; if so, you can skip this step.

This claim is what is generated when ADFS successfully performs MFA.  So think of it as a way for ADFS to tell Azure AD that it has performed MFA for the user.
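If you do need to add it, the pass-through rule looks much the same (again a sketch using the standard claim type URI):

```
@RuleTemplate = "PassThroughClaims"
@RuleName = "Pass through authnmethodsreferences"
c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
 => issue(claim = c);
```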


Step 4: “Cutover” the MFA execution

So now that everything is prepared, the ‘cutover’ can be performed by doing the following:

  1. Disable the MFA rules on the ADFS Relying Party Trust
    Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $null
  2. Enable the Azure AD CA Policy

Now if it all goes as planned, what should happen is this:

  1. User attempts sign into an Azure AD application.  Since their domain is federated, they are redirected to ADFS to sign in.
  2. User will perform standard username/password authentication.
    • If internal, this is generally ‘SSO’ with Windows Integrated Auth (WIA).  Most importantly, this user will get an ‘InsideCorporateNetwork’ = true claim
    • If external, this is generally a Forms Based credential prompt
  3. Once successfully authenticated, they are redirected back to Azure AD with a SAML token.  This is actually when Azure AD assesses the CA policy rules and determines whether the user requires MFA.
  4. If they do, Azure AD generates a new ADFS sign-in request, this time specifically stating via the wauth parameter to use multipleauthn.  This effectively tells ADFS to execute MFA using its configured providers.
  5. Once the user successfully completes MFA, they go back to Azure AD with a new SAML token containing a claim that tells Azure AD MFA has now been performed, and the user is subsequently let through.
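That second, MFA-only request in step 4 looks something like this in the query string (hostname hypothetical; note the multipleauthn value in wauth):

```
GET https://adfs.contoso.com/adfs/ls/?wa=wsignin1.0
    &wtrealm=urn:federation:MicrosoftOnline
    &wauth=http://schemas.microsoft.com/claims/multipleauthn
```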

This is what the above flow looks like in Fiddler:


This is what your end-state SAML token should look like as well:


The main takeaway is that step 4 is the new auth flow introduced by moving MFA evaluation into Azure AD.  Prior to this, step 2 would simply have performed both username/password authentication and MFA in the same instance, rather than over two requests.

Extra Considerations when enabling MFA on All Cloud Apps

If you decide to take an ‘exclusion’ approach to MFA enforcement for Cloud Apps, be very careful with this.  In fact, you’ll even see Microsoft giving you a little extra warning about it.


The main difference with taking this approach compared to just doing MFA enforcement at the ADFS level is that you are now enforcing MFA on all cloud identities as well!  This may very well unintentionally break some things, particularly if you’re using ‘cloud identity’ service accounts (e.g. for provisioning scripts or the like).  One thing that will definitely break is the AADConnect account that is created for directory synchronisation.

So at a very minimum, make sure you remember to add the On-Premises Directory Synchronization Service Account(s) into the exclusion list for your Azure AD MFA CA policy.

The very last thing to call out is that some Azure AD applications, such as the Intune Company Portal and the Azure AD PowerShell cmdlets, can cause a ‘double ADFS prompt’ when MFA evaluation is being done in Azure AD.   The reason for this, and the fix, are covered in my next article, Resolving the ‘double auth’ prompt issue with Azure AD Conditional Access MFA and ADFS, so make sure you check that out as well.


Do It Yourself Cloud Accelerator – Part II BranchCache

In the last post I introduced the idea of breaking the secure transport layer between cloud provider and employee with the intention to better deliver those services to employees using company provided infrastructure.

In short, we deployed a server which re-presents the cloud secure URLs using a new trusted certificate. This enables us to do some interesting things, like providing centralised and shared caching across multiple users. The Application Request Routing (ARR) module is designed for delivering massively scalable content delivery networks on the Internet; turned on its head, it can be used to deliver cloud service content efficiently to internal employees. That’s a great solution where we have cacheable content like images, JavaScript, CSS, etc. But can we do any better?

Yes we can, and it’s all possible because we now own the traffic and the servers delivering it. To test the theory I’ll be using a SharePoint Online home page, which by itself is 140K; the total page size with all resources uncached is a whopping 1046K.


Surprisingly, when you look at a Fiddler trace of a SharePoint Online page, the main page content coming from the SharePoint servers is not compressed (the static content, however, is), and it is also marked as not cacheable (since it can change with each request). That means we have a large page download occurring for every page, which is particularly expensive if (as many organisations do) you have the Intranet home page as the default on the browser opening.

Since we are using Windows Server IIS to host the Application Request Router we get to take a free ride on some of the other modules that have been built for IIS like, for instance, compression. There are two types of compression available in IIS, static compression which can be used to pre-calculate the compressed output of static files, or dynamic compression which will compress the output of dynamically generated pages on the fly. This is the compression module we need to compress the home page on the way through our router.

Install the Dynamic Compression component of the Web Server (IIS) role.

Configuring compression is simple: first, make sure Dynamic Compression is enabled at the IIS server level and also at the Default Web Site level.
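These steps can also be scripted; for example (a sketch assuming Windows Server with the WebAdministration module available on the accelerator box):

```powershell
# install the Dynamic Compression component of the Web Server (IIS) role
Install-WindowsFeature Web-Dyn-Compression

# enable dynamic compression at the server level and for the Default Web Site
Import-Module WebAdministration
Set-WebConfigurationProperty -Filter system.webServer/urlCompression `
    -Name doDynamicCompression -Value $true -PSPath 'IIS:\'
Set-WebConfigurationProperty -Filter system.webServer/urlCompression `
    -Name doDynamicCompression -Value $true -PSPath 'IIS:\Sites\Default Web Site'
```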

By enabling dynamic compression we are allowing the Cloud Accelerator to step in between server and client and inject gzip encoding on anything that isn’t already compressed. On our example home page the effect is to reduce the download content size from a whopping 142K down to 34K

We’ve added compression to uncompressed traffic which will help the experience for individuals on the end of low bandwidth links, but is there anything we can do to help the office workers?


BranchCache is a Windows Server role and Windows service that has been around since Server 2008/Win 7 and despite being enormously powerful has largely slipped under the radar. BranchCache is a hosted or peer to peer file block sharing technology much like you might find behind torrent style file sharing networks. Yup that’s right, if you wanted to, you could build a huge file sharing network using out of the box Windows technology! But it can be used for good too.

BranchCache operates deep under the covers of Windows operating systems when communicating using one of the BranchCache-enabled protocols: HTTP, SMB (file access), or BITS (Background Intelligent Transfer Service). When a user on a BranchCache-enabled device accesses files on a BranchCache-enabled file server, or accesses web content on a BranchCache-enabled web server, the hooks in the HTTP.SYS and SMB stacks kick in before transferring all the content from the server.

HTTP BranchCache

So how does it work with HTTP?

When a request is made from a BranchCache-enabled client there is an extra header in the request, Accept-Encoding: peerdist, which signifies that this client not only accepts normal HTML responses but also accepts another form of response: content hashes.

If the server has the BranchCache feature enabled it may respond with Content-Encoding: peerdist along with a set of hashes instead of the actual content. Here’s what a BranchCache response looks like:
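In text form, the interesting parts of such a response look roughly like this (an illustrative reconstruction, using the sizes from the example discussed below; the X-P2P-PeerDist header carries the protocol version and the length of the original content):

```
HTTP/1.1 200 OK
Content-Encoding: peerdist
Content-Length: 308
X-P2P-PeerDist: Version=1.1, ContentLength=89510
```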

Note that if there was no BranchCache operating at the server a full response of 89510 bytes of javascript would have been returned by the server. Instead a response of just 308 bytes was returned which contains just a set of hashes. These hashes point to content that can then be requested from a local BranchCache or even broadcast out on the local subnet to see if any other BranchCache enabled clients or cache host servers have the actual content which corresponds to those hashes. If the content has been previously requested by one of the other BranchCache enabled clients in the office then the data is retrieved immediately, otherwise an additional request is made to the server (with MissingDataRequest=true) for the data. Note that this means some users will experience two requests and therefore slower response time until the distributed cache is primed with data.

It’s important at this point to understand the distinction between the BranchCache and the normal HTTP caching that operates under the browser. The browser cache will cache whole HTTP objects where possible as indicated by cache headers returned by the server. The BranchCache will operate regardless of HTTP cache-control headers and operates on a block level caching parts of files rather than whole files. That means you’ll get caching across multiple versions of files that have changed incrementally.

BranchCache Client Configuration

First up, note that BranchCache client functionality is not available on all Windows versions; it is only available in the Enterprise and Ultimate editions of Windows 7/8, which may restrict some potential usage.

There are a number of ways to configure BranchCache on the client, including Group Policy and netsh commands; however, the easiest is to use PowerShell. Launch an elevated PowerShell command window and execute any of the BranchCache cmdlets:

  • Enable-BCLocal: Sets up this client as a standalone BranchCache client; that is it will look in its own local cache for content which matches the hashes indicated by the server.
  • Enable-BCDistributed: Sets up this client to broadcast out to the local network looking for other potential Distributed BranchCache clients.
  • Enable-BCHostedClient: Sets up this client to look at a particular static server nominated to host the BranchCache cache.
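For example, to set up distributed mode and confirm the client picked it up (Get-BCStatus is part of the same BranchCache module):

```powershell
# enable distributed caching mode on this client (elevated prompt)
Enable-BCDistributed

# review the resulting BranchCache configuration and cache usage
Get-BCStatus
```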

While you can use a local cache, the real benefits come from distributed and hosted mode, where the browsing actions of a single employee can benefit the whole office. For instance, if Employee A and Employee B are sitting in the same office and both browse to the same site, most of the content for Employee B will be retrieved directly from Employee A’s laptop rather than re-downloaded from the server. That’s really powerful, particularly where there are bandwidth constraints in the office and common sites used by all employees. But it requires that the web server serving the content participates in the BranchCache protocol by installing the BranchCache feature.

HTTP BranchCache on the Server

One of the things you lose when moving to the SharePoint Online service from an on-premises server is the ability to install components and features on the server, including BranchCache. However, by routing requests via our Cloud Accelerator that ability is available again, simply by installing the Windows Server BranchCache feature.

Installing the BranchCache feature on the Cloud Accelerator immediately turns the SharePoint Online service into a BranchCache-enabled service, so the size of the body items downloaded to the browser goes from this:

To this:

So there are some restrictions and configuration to be aware of. First, you won’t normally see any peerdist hash responses for a content body size of less than 64KB. You’ll also need a latency of about 70 ms between client and server before BranchCache bothers stepping in. You can change these parameters, but it’s not obvious from the public APIs: the settings are stored under this registry key (HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\PeerDistKM\Parameters) and are picked up the next time you start the BranchCache service on the server. Changing these parameters can have a big effect on performance, depending on the exact bandwidth and latency environment the clients are operating in. In the above example I changed MinContentLength from the default 64K (which would miss most of the content from SharePoint) to 4K. The effect on bandwidth is quite dramatic, but it will penalise those on a high-latency link due to the many requests for small pieces of data not already available in your cache peers.
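For example, the tweak described above can be applied on the accelerator with something like this (the value name is as per the registry key mentioned; restart the BranchCache service to pick it up):

```powershell
# lower the minimum content size BranchCache will hash from 64KB to 4KB
Set-ItemProperty -Path 'HKLM:\SYSTEM\ControlSet001\Services\PeerDistKM\Parameters' `
    -Name MinContentLength -Value 4096 -Type DWord

# restart the BranchCache service so the new parameters take effect
Restart-Service PeerDistSvc
```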

The following chart shows the effect of our Cloud Accelerator on the SharePoint Online home page for two employees in a single office. Employee A browses to the site first, then Employee B on another BranchCache enabled client browses to the same page.


  • Office 365: Out of the box raw service
  • Caching: With caching headers modified by our Cloud Accelerator
  • Compression: With compression added to dynamic content (like the home page)
  • BranchCache 64K: With BranchCache enabled for >64K data
  • BranchCache 4K: With BranchCache enabled for >4K data

So while adopting a cloud-based service is often a cost-effective solution for businesses, if the result negatively impacts users and user experience then it’s unlikely to gather acceptance, and may actually be avoided in preference for old on-premises habits like local file shares and USB drives. The Cloud Accelerator gives us back ownership of the traffic and the ability to implement powerful features to bring content closer to users. In the next post we’ll show how the accelerator can be scaled out ready for production.

Do It Yourself Fiddler Service

I recently upgraded to Windows 8.1 which required a full install (upgraded from the 8.1 Preview which annoyingly didn’t support upgrades). A full install of my laptop is getting easier and easier as more of the things I use are delivered as services. The install list is getting smaller due to the combined effect of software as a service and a simpler working life.

I still had to install these:

  • Microsoft Office 2013
  • Microsoft Visio 2013
  • Microsoft Project (yes yes I know but there really is no good alternative yet)
  • LastPass
  • Visual Studio 2013
  • And…Fiddler!

The install list from bare metal to productive machine is getting smaller and smaller (arguably Office is available online, but slightly under-featured for my liking). The world of development is moving online too. There is a significant move away from the personal development environment backed by shared source control and a build machine, towards a development environment that is team-first, always integrated, and continuously saved to a repository and built by default. The editing and debugging experience is moving online too. Take a look at jsFiddle for a window into the online development experience. I think we’ll see a lot more in this space, with much stronger interaction with continuously integrated source code repositories.


Anyway, I digress. What I really wanted to highlight is Fiddler, that invaluable tool that injects itself as a proxy between your browser and the Internet and even performs a man-in-the-middle attack on your secure traffic. I use it all the time for everything from solving performance problems to reverse engineering web sites. But what does the Fiddler of the future look like? How can it be relevant in a service-oriented world where the PC is no longer the centre of your universe but just a dumb terminal? In a cloud and service-oriented world there are a great many conversations happening between machines hosted remotely that we are interested in listening in on. When developing a modern cloud solution which aggregates a number of RESTful services, sometimes the most interesting conversations are happening not between browser and server, but between machines in a data centre far, far away.

What I need is a Fiddler as a Cloud Service


Alert readers of my previous post may have noticed something: there was a technology mentioned in the introduction that I didn’t really use in the implementation of the WebAPI Proxy, “SignalR”. SignalR is one of those cool new technologies built over the WebSockets protocol, which is a technological leap forward in Internet technologies and a game changer for implementing complex interactive websites, significantly augmenting the page-request-based model of the Internet we have been stuck with since the 80’s. Broadly speaking, the WebSockets API enables a server to open and maintain a connection to the browser which can be used to push information to the client without user interaction. This is much more like the traditional old thick-client application model, where TCP connections are held open to create an interactive real-time user experience. While this technology has opened the door for a whole new style of web-based applications, it invites back all those scalability problems of managing client state which were so artfully dodged by the disconnected stateless model the Internet is built on today.

So what’s all this got to do with Fiddler?

What if we could deploy to the cloud a WebAPI Proxy as described in the previous blog, but when passing traffic through, use SignalR to feed the logging information back to a browser? Then we’d have something much like Fiddler, but not installed on the local machine; rather, a service in the cloud able to intercept and debug traffic for anyone choosing to use it as a proxy. Sounds good in theory, but how do we implement it?

Again, as I demonstrated with the proxy, it’s relatively easy to plug handlers into the pipeline to modify the behaviour of the proxy, and that includes putting in a logging handler to watch all traffic passing through. Something like this:

 protected override async System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
 {
    //add a unique Id so we can match up request and response later
    request.Properties["Id"] = System.Guid.NewGuid().ToString();
    //log the request
    await LogRequest(request);
    //start a timer
    Stopwatch stopWatch = Stopwatch.StartNew();
    //pass the request through the proxy
    var response = await base.SendAsync(request, cancellationToken);
    //stop the timer
    stopWatch.Stop();
    //log the response and the time taken
    await LogResponse(request, response, stopWatch);
    return response;
 }

Now comes the interesting part. Adding SignalR from NuGet adds a folder into your WebAPI project called Hubs. A SignalR “Hub” is your opportunity to set up the relationship between an incoming browser request and an outgoing signal (or push) to the browser. It is in this hub that you can add handlers for JavaScript functions on the client, and use the Context.ConnectionId to identify the incoming request so it can later be used to push data out to the same client.

Client-side JavaScript:

 $("#start").click(function () {
     loggingHub.server.start();
 });

Server-side hub handler:

 public void Start()
 {
     Trace.WriteLine("connection", Context.ConnectionId);
 }
So that’s pretty cool: we can call functions on the server from the client JavaScript (once you understand the naming conventions that deal with the case differences between the languages). But then, you could always call functions on the server from the client; that’s what HTTP is. The difference here is that in the server-side function “Start” there is a Context.ConnectionId: a GUID that represents the caller and can be used later to asynchronously call the user back.

Fiddler as a Service

A service isn’t a service unless it’s multi-tenant. So what we’ll do is build the ability for a client to register interest in just some of the logs passing through the proxy. For simplicity we’ll just pick up the user’s IP address and push logs at them for anything we see running through the proxy from that IP address. That way we get something similar to the functionality of local Fiddler, where we set the Internet proxy for browsing and see the logs for anything running through the proxy from that machine.

Remember (from the previous post) that the code written for this proxy is designed to be hosting-platform agnostic; that is, it can be run in IIS but can also be run on the new simple hosting platforms such as OWIN. This throws up some interesting challenges when trying to sniff out HTTP constants like the client IP address, which are represented through different APIs. So I use some code like this to stay generic:

 public static string GetClientIp(this HttpRequestMessage request)
 {
    if (request.Properties.ContainsKey("MS_HttpContext"))
    {
      //hosted in IIS
      return ((dynamic)request.Properties["MS_HttpContext"]).Request.UserHostAddress as string;
    }
    else if (request.Properties.ContainsKey("MS_OwinContext"))
    {
      //hosted in OWIN
      return ((dynamic)request.Properties["MS_OwinContext"]).Request.RemoteIpAddress as string;
    }
    else if (request.Properties.ContainsKey(RemoteEndpointMessageProperty.Name))
    {
      //self-hosted
      var prop = (RemoteEndpointMessageProperty)request.Properties[RemoteEndpointMessageProperty.Name];
      return prop.Address;
    }
    throw new Exception("Could not get client IP");
 }

Next we need a page on the proxy that the client can request to register their interest in logs. This is called logging.html and, apart from including the necessary SignalR JavaScript glue, just has a couple of buttons to Start and Stop logging and a list to accept the logs coming back to the browser. (Serving that static page was a bit of a challenge at the time of writing, since there was no official static file handler for OWIN, so I wrote my own.)

Now here’s where the interesting part of SignalR shows itself. There is some JavaScript on the page which defines a function called “logUrl”, and some code on the server which calls that function from the WebAPI handler. So whenever a request or response comes through the proxy, we look up the IP address, determine which registered hub connection it maps to, and call the appropriate client’s logUrl function, which logs the data in the browser window.

Server-side logging (C#):

 private static async System.Threading.Tasks.Task LogRequest(HttpRequestMessage request)
 {
   string connection;
   if (connections.TryGetValue(request.GetClientIp(), out connection))
   {
     var loggingHubContext = GlobalHost.ConnectionManager.GetHubContext<LoggingHub>();
     await loggingHubContext.Clients.Client(connection).LogUrl(request.RequestUri.ToString());
   }
 }

Client-side logging (JavaScript):

 loggingHub.client.logUrl = function (url) {
   $('#logs').append('<li>' + url + '</li>');
 };
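The `connections` map used by LogRequest has to be populated somewhere. A minimal sketch of the hub side (the method names and IP lookup are assumptions, not the verbatim Busker code):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class LoggingHub : Hub
{
    // Maps a client's IP address to their SignalR connection id so the
    // proxy can route log lines back to the right browser.
    public static readonly ConcurrentDictionary<string, string> Connections =
        new ConcurrentDictionary<string, string>();

    public void StartLogging()
    {
        // Register interest in logs originating from the caller's own IP.
        var ip = Context.Request.Environment["server.RemoteIpAddress"] as string;
        Connections[ip] = Context.ConnectionId;
    }

    public void StopLogging()
    {
        var ip = Context.Request.Environment["server.RemoteIpAddress"] as string;
        string removed;
        Connections.TryRemove(ip, out removed);
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // Drop any registration pointing at this connection.
        foreach (var pair in Connections)
        {
            if (pair.Value == Context.ConnectionId)
            {
                string removed;
                Connections.TryRemove(pair.Key, out removed);
            }
        }
        return base.OnDisconnected(stopCalled);
    }
}
```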

Anyway, what we should end up with is something that looks like this:


What better name for Fiddler as a Service than “Busker”? Busker Proxy is running as a WorkerRole in Azure. (I can’t use the free Azure Websites because the proxy effectively serves traffic for any url, but Azure Websites uses host headers to determine which site to send traffic to.) While I’m still hosting the service for a limited time, try this:

Set your internet proxy to port:8080

Browse to

Hit Start, then open a new browser window and browse to a (non-secure) website. You should see something like this in your first browser window.

For now it’s an interesting debugging tool and could do with quite a bit more work. More importantly, it demonstrates how the technologies now available to us as developers help bridge the gap between web client and server, delivering better, more interactive software as a service from a remote, deployed, multi-tenant environment and replacing the need to install anything locally other than a browser.

On the todo list for Busker:

  • There are some very interesting certificate tricks that Fiddler plays on your local machine to implement the man-in-the-middle attack, which will be quite hard to implement securely in a Fiddler as a Service (I might give that a go in a later blog).
  • The logging screen should be able to nominate an IP address rather than just using the one you come from, so you can sniff traffic that doesn’t originate from your machine
  • The logging view needs to properly match up requests and responses so you can see things like response time and outstanding requests

Check out the code here

Let me know if there are any cool features I should consider adding.

Happy Busking

Visualise Azure Table Storage with Excel and Fiddler

Today I came across an interesting problem:

I’m a big fan of Table Storage but its potential is yet to be realised because the tool support just isn’t a match for databases. We’ve got a solution which lays down a lot of data into Azure Table Storage, but the options to view that data are limited. There are plenty of available viewers including Visual Studio, Azure Storage Explorer and others. The problem with all of these viewers is that they are limited to plain old tabular data views.

What if we want to “visualise” the data? Is there a tool to provide a graphical view of my Table Storage data?

Sure, we could open Visual Studio, grab a charting control and deploy a WebRole to build images from the data, but I’m no fan of writing code and prefer to build solutions on the shoulders of others. This, naturally, got me thinking about Excel!

Microsoft, in their wisdom and with a view to the future of an “Open Data” world, chose oData as the interface to Table Storage. oData is a REST protocol built for interacting with data stores over HTTP. The entire Azure Data Market is presented through oData, and there are some great examples of presenting data through oData (like the Netflix API).

With that said, shouldn’t it be easy to point Excel at the oData interface and chart the data?

Yes and No.

Yes, Excel has the ability to interface with oData, both through the PowerPivot add-in and natively in the latest version, Excel 2013.

But if you point Excel directly at your Table Storage URL endpoint you’ll get an access denied message. The problem is there is no way to express your Table Storage credentials when connecting to the data store. And it’s more complicated than just adding the storage key onto the request, because Azure Table Storage requires an Authorization header carrying a hash of certain request parameters, signed with the storage key. (This is sensible, since your secret storage key never needs to cross the wire.) The “SharedKey” and “SharedKeyLite” schemes are well described here.
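To make the hashing concrete, here’s a sketch of SharedKeyLite signing for the Table service (class and method names are mine; the scheme itself is the request date plus the canonicalized resource, HMAC-SHA256’d with the storage key):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SharedKeyLiteSigner
{
    // Build a SharedKeyLite Authorization header value for the Table service.
    // stringToSign = Date + "\n" + CanonicalizedResource (e.g. "/myaccount/Tables")
    public static string CreateAuthorizationHeader(
        string accountName, string base64Key, string dateHeader, string canonicalizedResource)
    {
        string stringToSign = dateHeader + "\n" + canonicalizedResource;
        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64Key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            return "SharedKeyLite " + accountName + ":" + signature;
        }
    }
}

// Usage: set x-ms-date to the current UTC time in RFC1123 format, then sign:
//   string date = DateTime.UtcNow.ToString("R");
//   request.Headers["x-ms-date"] = date;
//   request.Headers["Authorization"] = SharedKeyLiteSigner.CreateAuthorizationHeader(
//       "myaccount", storageKey, date, "/myaccount/Tables");
```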

So what we really need is a local proxy that will intercept the requests from Excel (or other clients) and insert the required SharedKeyLite Authorization header. One of the best and most powerful local proxy servers is … Fiddler. Fiddler provides an “Extension” plugin architecture that gives us just the hooks required to implement our authentication logic as requests are intercepted.
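For the curious, the skeleton of such an extension looks roughly like this (IAutoTamper is Fiddler’s real extension interface; BuildSharedKeyLiteHeader is a hypothetical helper standing in for the HMAC signing the Table service requires):

```csharp
using System;
using Fiddler;

// Drop the compiled assembly into ...\Fiddler2\Scripts and Fiddler
// will call AutoTamperRequestBefore for every intercepted request.
public class AzureAuthExtension : IAutoTamper
{
    public void OnLoad() { /* add the "Azure Auth..." Tools menu item here */ }
    public void OnBeforeUnload() { }

    public void AutoTamperRequestBefore(Session oSession)
    {
        // Only touch Table Storage traffic.
        if (!oSession.hostname.EndsWith("table.core.windows.net")) return;

        string date = DateTime.UtcNow.ToString("R");
        oSession.oRequest["x-ms-date"] = date;
        // Hypothetical helper wrapping the SharedKeyLite HMAC-SHA256 signing.
        oSession.oRequest["Authorization"] = BuildSharedKeyLiteHeader(oSession, date);
    }

    public void AutoTamperRequestAfter(Session oSession) { }
    public void AutoTamperResponseBefore(Session oSession) { }
    public void AutoTamperResponseAfter(Session oSession) { }
    public void OnBeforeReturningError(Session oSession) { }

    private string BuildSharedKeyLiteHeader(Session oSession, string date)
    {
        /* sign date + canonicalized resource with the configured storage key */
        return "";
    }
}
```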

If you’re interested in how to build such a Fiddler extension, build the source; otherwise just grab the assembly and drop it into \Program Files (x86)\Fiddler2\Scripts. Next time you start Fiddler there will be a new Tools menu item, “Azure Auth…”, to set up your account and storage keys.

After which, the extension will watch for any requests to and will intercept and add the required Authorization headers for Azure Table Storage. To test it out, launch a browser (for IE you’ll want to turn off the default feed reading view as per this article) and browse to the URL This will return all the tables available in your account.

Now try it in Excel with one of your tables, using Data -> From Other Sources -> From oData Data Feed

And choose your table

Finish the wizard and then choose one of the Excel data presentations

Now you can chart your Azure Table Storage data with no code!

Note: updated now to include oData paging.
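On that last point: the Table service returns at most 1,000 entities per request and signals that more data is available via continuation headers, which the client must echo back as query parameters on the next request. A sketch of following a continuation (the helper name is mine):

```csharp
using System;
using System.Collections.Specialized;
using System.Web; // HttpUtility.ParseQueryString

public static class ContinuationHelper
{
    // If the response carried continuation headers, build the URL of the
    // next page of results; otherwise return null.
    public static string GetNextPageUrl(Uri requestUri, NameValueCollection responseHeaders)
    {
        string nextPartition = responseHeaders["x-ms-continuation-NextPartitionKey"];
        string nextRow = responseHeaders["x-ms-continuation-NextRowKey"];
        if (nextPartition == null && nextRow == null) return null;

        var query = HttpUtility.ParseQueryString(requestUri.Query);
        if (nextPartition != null) query["NextPartitionKey"] = nextPartition;
        if (nextRow != null) query["NextRowKey"] = nextRow;

        var builder = new UriBuilder(requestUri) { Query = query.ToString() };
        return builder.Uri.ToString();
    }
}
```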