Remote Access to Local ASP.NET Core Applications from Mobile Devices

One of the most popular tools for ASP.NET or ASP.NET Core application development is IIS Express. We can’t deny it. Unless we have specific requirements, IIS Express is the de-facto web server for debugging on developers’ local machines. With IIS Express, we can easily access our local web applications with no problem while debugging.

There are, however, always cases where we need to access our locally running website from other web browsers, such as those on mobile devices. Because localhost is the loopback address, it can’t be used outside our dev box, and simply replacing the loopback address with a physical IP address doesn’t work either. We need to adjust our dev box to allow this traffic. In this post, we’re going to solve this issue by looking at two different approaches.

At the time of writing this post, we’re using Visual Studio (VS) 2015, as VS 2017 will be launched on March 7, 2017.

Network Sharing Options and Windows Firewall

Note that all screenshots for this section are taken from Windows 10. Open the currently connected network (either wireless or wired).

Make sure that the “Make this PC discoverable” option is turned on.

This option puts our network into “Private” mode on Windows Firewall:

WARNING!!!: If our PC is currently connected to a public network, for better security we need to turn off the private network settings; otherwise our PC will be vulnerable to malicious attacks.

Update Windows Firewall Settings

In this example, the locally running web app uses port number 7314. Therefore, we need to register a new inbound firewall rule to allow access through that port. Open “Windows Firewall with Advanced Security” through Control Panel and create a new rule with the options below (a PowerShell equivalent follows the list):

  • Rule Type: Port
  • Protocol: TCP
  • Port Number: 7314
  • Action: Allow the Connection
  • Profile: Private (Domain can also be selected if our PC is joined to a domain)
  • Name: Any self-descriptive name, e.g. IIS Express Port Opener
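
If clicking through the wizard gets tedious, the same rule can be created from an elevated PowerShell session. This is a minimal sketch using the built-in NetSecurity module; the rule name and port simply mirror the values above:

# Run from an elevated PowerShell session.
New-NetFirewallRule `
    -DisplayName "IIS Express Port Opener" `
    -Direction Inbound `
    -Protocol TCP `
    -LocalPort 7314 `
    -Action Allow `
    -Profile Private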

All traffic through this port number is now allowed. So far, we’ve completed the basic environment settings, including the firewall. Let’s move on to the first option, using IIS Express itself.

1. Updating IIS Express Configurations Directly

When we install VS, IIS Express is also installed at the same time. Its default configuration file lives under Documents\IISExpress\config, but each solution that VS 2015 creates has its own settings that override the default, stored in the solution’s .vs\config folder like:

Open applicationhost.config for update.

Add another binding with our local IP address like:
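
A sketch of what the site’s bindings end up looking like in applicationhost.config (the site name and id are placeholders; 7314 and 192.168.1.3 are the values used in this example):

<site name="MyWebApp" id="2">
  <bindings>
    <binding protocol="http" bindingInformation="*:7314:localhost" />
    <!-- New binding so other devices on the network can reach the site -->
    <binding protocol="http" bindingInformation="*:7314:192.168.1.3" />
  </bindings>
</site>

Note that IIS Express needs permission to listen on a non-localhost address: either run Visual Studio as administrator or reserve the URL up front (e.g. netsh http add urlacl url=http://192.168.1.3:7314/ user=everyone).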

We can easily find our local IP address by running the ipconfig command. We’re using 192.168.1.3 for now.

IIS Express is now set up. Let’s access the local dev website from a mobile web browser using the IP address.

All good! It seems to be working now. However, if we have more web applications in our dev environment, every time we create a new web application project we have to register the port number allocated by IIS Express with Windows Firewall. No good. Too repetitive. Is there any other convenient way? Of course there is.

2. Conveyor – Visual Studio Extension

Conveyor can sort out this hassle. At the time of this writing, its version is 1.3.2. After installing this extension, start debugging by pressing F5 and we will be able to see a new window like:

The Remote URL is what we’re going to use. In general, the IP address will look like 192.168.xxx.xxx if we’re in a small network (home, for example), or be a different type of address if we’re in a corporate network. This is the IP address that the mobile devices use. Another important point is that Conveyor uses port numbers starting from 45455. Whatever port number IIS Express assigns the web application project, Conveyor forwards it to 45455. If 45455 is already taken, it increments the port number one by one until it finds a free one. Thanks to this behaviour, we can easily predict the port number range, instead of dealing with the random nature of IIS Express. Therefore, we can register a port number range starting from 45455 up to whatever we want, 45500 for example (see the sketch below).
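
With that predictable range, one firewall rule covers every future project. Again a hedged PowerShell sketch, assuming the NetSecurity module and the example range above:

# Run from an elevated PowerShell session.
New-NetFirewallRule `
    -DisplayName "Conveyor Ports" `
    -Direction Inbound `
    -Protocol TCP `
    -LocalPort "45455-45500" `
    -Action Allow `
    -Profile Private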

Now, we can access our local dev website using this port number pool like:

If we’re developing a web application over an HTTPS connection, that isn’t an issue either. If no self-signed certificate is installed on our local dev machine, Conveyor will install one, and that’s it. Visiting the website again over HTTPS will display the initial certificate warning and finally load the page.

We’ve so far discussed how to remotely access our local dev website using either IIS Express configuration or Conveyor. Conveyor gets rid of the repetitive firewall registration, so it’s worth installing for our web app development.

Publish to a New Azure Website from behind a Proxy

One of the great things about Azure is the ease with which you can spin up a new cloud-based website using PowerShell. From there you can quickly publish any web-based solution from Visual Studio to the Azure hosted site.

To show how simple this is: after configuring PowerShell to use an Azure subscription, I’ve created a new Azure hosted website in the new Melbourne (Australia Southeast) region:

Creating a new website in PowerShell
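
The command was essentially the following – a sketch using the classic Azure Service Management cmdlets, where the website name must be globally unique (mykloudtestapp is just this post’s example):

# Sign in and select the subscription first.
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "My Subscription"

# Create the new website in the Melbourne region.
New-AzureWebsite -Name "mykloudtestapp" -Location "Australia Southeast"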

That was extremely easy. What next? Publish your existing ASP.NET MVC application from Visual Studio to the website. For this test, I’ve used Microsoft Visual Studio Ultimate 2013 Update 3 (VS2013). VS2013 offers a simple way, via the built-in Publish Web dialogue, to select your newly created (or existing) websites.

Web Publish to Azure Websites

This will require that you have already signed in with your Microsoft account linked with a subscription, or you have already imported your subscription certificate to Visual Studio (you can use the same certificate generated for PowerShell). Once your subscription is configured you can select the previously created WebSite:

Select Existing Azure Website

The Publish Web dialogue appears, but at this point you may experience a failure when you attempt to validate the connection or publish the WebSite. If you are behind a proxy, the error will show as destination not reachable:

Unable to publish to an Azure Website from behind a proxy

Could not connect to the remote computer ("mykloudtestapp.scm.azurewebsites.net"). On the remote computer, make sure that Web Deploy is installed and that the required process ("Web Management Service") is started. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_DESTINATION_NOT_REACHABLE. Unable to connect to the remote server

The version of Web Deploy included with VS2013 is not able to publish via a Proxy. Even if you configure the msbuild.exe.config to have the correct proxy settings as documented by Microsoft, it will still fail.

Luckily, in August 2014 Web Deploy v3.6 BETA3 was released, which fixes this issue. To resolve the error, download the Web Deploy beta and patch your VS2013 installation. After patching Visual Studio, you can modify the proxy settings used by msbuild.exe (msbuild.exe.config, typically found under %ProgramFiles(x86)%\MSBuild\12.0\Bin for VS2013) to use the system proxy:

<system.net>
	<defaultProxy useDefaultCredentials="true" />
</system.net>

You should now be able to publish to your Azure WebSite from behind a proxy with VS2013 Web Deploy.

Deploy an Ultra High Availability MVC Web App on Microsoft Azure – Part 2

In the first post in this series we set up our scenario and looked at how we can build out an ultra highly available Azure SQL Database layer for our applications. In this second post we’ll go through setting up the MVC Web Application we want to deploy so that it can leverage the capabilities of the Azure platform.

MVC project changes

This is actually pretty straightforward – you can take the sample MVC project from Codeplex and apply these changes easily. The sample Github repository has the resulting project (you’ll need Visual Studio 2013 Update 3 and the Azure SDK v2.4).

The changes are summarised as:

  • Open the MvcMusicStore Solution (it will be upgraded to the newer Visual Studio project / solution format).
  • Right-click on the MvcMusicStore project and select
    Convert – Convert to Microsoft Azure Cloud Project.
    Converting a Project
  • Add new Solution Configurations – one for each Azure Region to deploy to. This allows us to define Entity Framework and ASP.Net Membership / Role Provider database connection strings for each Region. I copied mine from the Release configuration.
    Project Configurations
  • Add web.config transformations for the two new configurations added in the previous step. Right click on the web.config and select “Add Config Transform”. Two new items will be added (as shown below in addition to Debug and Release).
    Configuration Transforms
  • Switch the Role deployment to use a minimum of two Instances by double-clicking on the MvcMusicStore.Azure – Roles – MvcMusicStore node in Solution Explorer to open up the properties page for the Role. Change the setting as shown below.
    Set Azure Role Instance Count

At this stage the basics are in place. A few additional items are required and they will be very familiar to anyone who has had to run ASP.Net web applications in a load balanced farm. These changes all relate to application configuration and are achieved through edits to the web.config.

Configure Membership and Role Provider to use SQL Providers.


    <membership defaultProvider="SqlMembershipProvider" userIsOnlineTimeWindow="15">
      <providers>
        <clear/>
        <add name="SqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="AspNetDbProvider" applicationName="MVCMusicStoreWeb" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" passwordFormat="Hashed"/>
      </providers>
    </membership>
    <roleManager enabled="true" defaultProvider="SqlRoleProvider">
      <providers>
        <add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="AspNetDbProvider" applicationName="MVCMusicStoreWeb"/>
      </providers>
    </roleManager>

Add a fixed Machine Key (you can use a range of online tools to generate one for you if you want, or generate one locally as sketched after the snippet). This allows all Instances to handle Forms authentication and other shared state that will require encryption / decryption between requests.


<machineKey
 validationKey="E9D17A5F58DE897D9161BB8D9AA995C59102AEF75F0224183F1E6F67737DE5EBB649BA4F1622CD52ABF2EAE35F9C26D331A325FC9EAE7F59A19F380E216C20F7"
 decryptionKey="D6F541F7A75BB7684FD96E9D3E694AB01E194AF6C9049F65"
 validation="SHA1"/>
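
If you’d rather not trust an online generator, here’s a quick local PowerShell sketch; the byte lengths match the keys above (64 bytes for the SHA1 validation key, 24 bytes for the decryption key):

function New-KeyHex([int]$byteLength) {
    # Fill a buffer with cryptographically strong random bytes and hex-encode it.
    $bytes = New-Object byte[] $byteLength
    [System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($bytes)
    ($bytes | ForEach-Object { $_.ToString("X2") }) -join ""
}

New-KeyHex 64   # validationKey
New-KeyHex 24   # decryptionKey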

Define a new connection string for our SQL-based Membership and Role Providers


<add name="AspNetDbProvider" 
connectionString="{your_connection_string}" 
providerName="System.Data.SqlClient"/>

Phew! Almost there!

The last piece of the puzzle is to add configuration transformations for our two Azure Regions so we talk to the Azure SQL Database in each Region. Repeat this in each Region’s transform, replacing the Azure SQL Database Server name with the one appropriate to that Region (note that your secondary will be read-only at this point).


<connectionStrings>
  <add name="MusicStoreEntities"
       xdt:Transform="SetAttributes"
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=mvcmusicstore;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" />
  <add name="AspNetDbProvider"
       xdt:Transform="SetAttributes"
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=aspnetdb;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" />
</connectionStrings>

Build Deployment Packages

Now that we have a project ready to deploy we will need to publish the deployment packages locally ready for the remainder of this post.

In Visual Studio, right-click on the MvcMusicStore.Azure project and select Package as shown below.

Package Azure Solution

Choose the appropriate configuration and click ‘Package’.

Package with config choice.

After packaging is finished a Windows Explorer window will open at the location of the published files.

Setup Cloud Services

Now that we have all the project packaging work out of the way, let’s go ahead and provision a Cloud Service which will be used to host our Web Role Instances. The sample script below shows how we can use the Azure PowerShell Cmdlets to do this. The use of an Affinity Group allows us to ensure that our Blob Storage and Cloud Service are co-located closely enough to be at a low latency inside the Azure Region.

Note: Just about everything you deploy in Azure requires a globally unique name. If you run all these scripts using the default naming scheme I’ve included the chances are that they will fail because someone else may have already run them (hint: you should change them).
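
A sketch of that provisioning script using the classic (Service Management) cmdlets – every name below is an example only; repeat the same steps for the Secondary Region (Australia Southeast):

# An Affinity Group keeps storage and compute close together inside the Region.
$affinityGroup = "mvcmusicstore-east-ag"
New-AzureAffinityGroup -Name $affinityGroup -Location "Australia East"

# Storage account that will hold the uploaded deployment package.
New-AzureStorageAccount -StorageAccountName "mvcmusicstoreeast" -AffinityGroup $affinityGroup

# The Cloud Service that will host the Web Role Instances.
New-AzureService -ServiceName "mvcmusicstore-east" -AffinityGroup $affinityGroup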

Deploy Application to Cloud Services

Once the above Cloud Service script has run successfully we have the necessary pieces in place to actually deploy our MVC Application. The sample PowerShell script below does just this – it utilises the output of our packaging exercise from above, uploads the package to Azure Blob Storage and then deploys using the appropriate configuration. The reason we have two packages is that the web.config is deployed with the package, and making changes to it is not supported post-deployment.
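
Here’s a sketch of that deployment script (classic cmdlets again; the service, storage account and file names are examples carried over from the provisioning sketch above):

# Make the storage account current so the package can be uploaded to it.
Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName "mvcmusicstoreeast"

# New-AzureDeployment uploads the local package to Blob Storage and deploys it.
New-AzureDeployment -ServiceName "mvcmusicstore-east" `
    -Package ".\MvcMusicStore.Azure.cspkg" `
    -Configuration ".\ServiceConfiguration.AustraliaEast.cscfg" `
    -Slot Production `
    -Label "MvcMusicStore"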

Once the above script has finished successfully you should be able to open a web browser and connect to the cloudapp.net endpoints in each Region. The Primary Region should give you full read/write access, whereas the Secondary Region will work but will likely throw exceptions for any action that requires a database write (this is expected behaviour). Note that these cloudapp.net endpoints resolve to a load-balanced endpoint that fronts the individual Instances in the Cloud Service.

Configure Traffic Manager

The final piece of this infrastructure puzzle is to deploy Traffic Manager, which is Azure’s service offering for controlling where inbound requests are routed. Traffic Manager is not a load balancer, but it can provide services such as least-latency and failover routing, and as of October 2014 it supports nested profiles (i.e. Traffic Manager managing traffic for another Traffic Manager – all very Inception-like!)

For our purposes we are going to use a Failover configuration that will use periodic health checks to determine if traffic should continue to be routed to the Primary Endpoint or fail over to the Secondary.
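
A sketch of that configuration with the classic cmdlets – the profile and endpoint names are examples only, and the Primary Endpoint is deliberately added first:

# Create a Failover profile that probes each endpoint over HTTP.
$tmProfile = New-AzureTrafficManagerProfile -Name "mvcmusicstore" `
    -DomainName "mvcmusicstore.trafficmanager.net" `
    -LoadBalancingMethod Failover -Ttl 30 `
    -MonitorProtocol Http -MonitorPort 80 -MonitorRelativePath "/"

# Endpoint ordering matters for Failover: the first endpoint added is the Primary.
$tmProfile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile `
    -DomainName "mvcmusicstore-east.cloudapp.net" -Type CloudService -Status Enabled
$tmProfile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile `
    -DomainName "mvcmusicstore-southeast.cloudapp.net" -Type CloudService -Status Enabled

Set-AzureTrafficManagerProfile -TrafficManagerProfile $tmProfile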

Notes:

When defining a Failover configuration the ordering of the Endpoints matters. The first Endpoint added is considered as the Primary (you can change this later if you wish though).

You might want to consider a different “MonitorRelativePath” and utilise a custom page (or view in MVC’s case) that performs some simple diagnostics and returns a 200 OK response code if everything is working as expected – see the controller sketch below.
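
A hypothetical example of such a diagnostics endpoint in MVC (the controller name and the checks themselves are assumptions, not part of the sample project):

using System.Web.Mvc;

public class HealthCheckController : Controller
{
    // Traffic Manager probes this action via MonitorRelativePath (e.g. /HealthCheck).
    public ActionResult Index()
    {
        // Perform simple dependency checks here (database connectivity, etc.).
        // Returning any non-200 status tells Traffic Manager this endpoint is unhealthy.
        return new HttpStatusCodeResult(200, "OK");
    }
}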

But wait, there’s more!

A free set of steak knives!

No, not really, but if you’ve read this far you may as well qualify for some!

There are a few important things to note with this setup.

  • It won’t avoid data loss: the nature of Geo-replication means there is a high likelihood you will not see all Primary Database transactions play through to the Secondary in the event of a failure in the Primary. The window should be fairly small (depending on how geographically dispersed your databases are), but there will be a gap.
  • Failover requires manual intervention: you need to use the Stop-AzureSqlDatabaseCopy Cmdlet to force termination of the relationship between the Primary and Secondary databases (a one-liner sketched after this list). Until you do this the Secondary is read-only. Use external monitoring to find out when the Primary service goes down and then either leverage the Azure REST API to invoke the termination or script the Cmdlet. Note that once you break the copy process it isn’t automatically recreated.
  • Users will notice the cutover: there will be a combination of things they’ll notice depending on how long you wait to terminate the database copy. Active sessions will lose some data as they are backed by the database. User actions that require write access to the database will fail until you terminate the copy.
  • What about caching? We didn’t look at caching in this post. If we leverage the new Redis Cache we could theoretically set up a Slave in our Secondary Region. I haven’t tested this, so your mileage may vary! (I’d love to hear from you if you have.)
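
For reference, the forced termination mentioned above is a one-liner; the server and database names follow the earlier sketches, and -ForcedTermination is what allows the break when the Primary is unreachable:

Stop-AzureSqlDatabaseCopy -ServerName "{secondary_db_server}" `
    -DatabaseName "mvcmusicstore" -ForcedTermination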

The big benefit of this setup is that you can quickly recover from a failure in one Region. You may choose to use a holding page as your Secondary failover and then manually manage the complete cutover to the secondary active application once you are satisfied that the outage in the Primary Region will be of a long duration.

You should be monitoring the setup, checking that both Primary and Secondary cloudapp.net endpoints are healthy and that your Traffic Manager service is healthy. Any issue with the Primary cloudapp.net endpoint will be the trigger for you to intervene and potentially switch off the geo-replication.

Anyway, this has been a big couple of posts – thanks for sticking around and I hope you’ve found the content useful.

Deploy an Ultra High Availability MVC Web App on Microsoft Azure – Part 1

As public cloud platforms such as Microsoft Azure mature, it is becoming easier to build deployment architectures that are substantially resilient to platform faults – faults which, thanks to that same maturity, are increasingly unlikely to ever eventuate!

We’ll take a look at how we can deploy an ultra highly available database-backed ASP.Net MVC Website using Microsoft Azure across this post and my next one.

Desired State

The diagram below shows what we will be aiming to achieve with our setup.

Ultra High Availability Design

This setup can best be summarised as:

  • Two Azure SQL Database Servers in two Regions utilising Active Geo-replication to provide a read-only replica in a secondary Azure Region.
  • Cloud Services in each Region containing dual Web Roles in an Availability Set. Web Roles are configured to communicate with the Azure SQL Database in their Region. The Primary Region will be read/write and the Secondary read-only.
  • A Traffic Manager instance deployed using the two Regional Cloud Services as Endpoints, using a Failover configuration.

Locally AND Geographically Redundant

If you don’t have a deep familiarity with how Azure Cloud Services and Azure SQL Database work then you may not realise the level of local (intra-Region) resiliency you get “out of the box” when you sign up. Here’s a summary based on the setup we are deploying:

  • Cloud Services with two instances will take advantage of Fault and Upgrade Domains to ensure that at least one Instance remains operational at all times (this is required to qualify for Azure Availability SLAs too).
  • Azure SQL Database runs three local replicas concurrently – one primary and two secondaries ensuring that intra-Region hardware failures are less likely to impact you.

Now this is probably more than sufficient for most businesses. In the years that I’ve worked in the public cloud space I’ve never seen an entire Region in any provider go completely dark. While it is highly unlikely to ever occur, this is not the same as saying it won’t ever occur (hence this post!)

Costs

A word to the wise: these setups don’t come cheap (example: Azure SQL Database Premium P3 ~AUD4,267.49/mo in AU East). This should be expected given the level of redundancy and service availability you get. Having said this, in comparison with having to build this setup out on-premises you’d be hard pushed to beat the price points (and flexibility) you’ll get from a cloud deployment like this one.

Sample Code

In order to wrap your head around this post I’ve created a sample project hosted on Github that you can use to test out the design covered here. The project is a modified version of the sample MVC Music Store hosted on Codeplex. I had trouble convincing Codeplex to authenticate or register me, so I’ve set this sample up on Github. We’ll cover code changes in more detail in the second part of this post.

Database Tier

We’re going to start at the database tier as we’ll require configuration information from our database deployment in order to successfully deploy our web tier. The MVC Music Store sample application requires us to deploy two databases: the core data model supporting the MVC Music Store product catalogue and shipping; and the standard ASP.Net SQL Membership and Role Provider database.

The ASP.Net databases can be deployed using aspnet_regsql.exe (if you have problems running it with Azure SQL Database check out this Connect item for help) but I’m going to use some SQL scripts from the mentioned Connect item so that I can manage creation of the database.

The MVC Music Store sample application can also be allowed to initialise its own database using the following snippet:

protected void Application_Start()
{
   System.Data.Entity.Database.SetInitializer(
       new MvcMusicStore.Models.SampleData()
       );
}

but in my case I want to control how and where the database is created so I’m using the supplied SQL script and commenting out the above snippet in my solution.

Get the show on the road

Firstly we’re going to initialise our Azure SQL Database Servers, Databases and geo-replication. Now we could do this via the standard Azure Management Portal but it’s much easier (and repeatable) if we use PowerShell.

The basic approach is:

  • Setup Database Server in Primary Region (in our case Australia East) and open up the Firewall as required.
  • Setup Database Server in Secondary Region (in our case Australia Southeast) and open up the Firewall on it as well.
  • Create empty databases for the MVC store and ASP.Net Membership and Role Providers on the Primary server.
  • Enable geo-replication.
  • Create schemas and data in databases on Primary Server using SQL Server DDL.

I’m a bit bummed, as I was unable to do all this via PowerShell leveraging the Invoke-SqlCmd Cmdlet to run the DDL. It looks like there’s a known issue floating around that means the SqlPs and Azure PowerShell Modules have a bit of a spat over some shared code. Rather than get you to edit core PowerShell settings, I’m just going to run the DDL using the Azure SQL Database console.
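
The provisioning itself can still be scripted. Here’s a sketch using the classic Azure SQL cmdlets – the logins, passwords and the wide-open firewall range are placeholders (tighten the range for real use):

# Create the Primary and Secondary servers.
$primary = New-AzureSqlDatabaseServer -Location "Australia East" `
    -AdministratorLogin "storeadmin" -AdministratorLoginPassword "{your_password_here}"
$secondary = New-AzureSqlDatabaseServer -Location "Australia Southeast" `
    -AdministratorLogin "storeadmin" -AdministratorLoginPassword "{your_password_here}"

# Open the firewalls (placeholder range - restrict this to your own IPs).
New-AzureSqlDatabaseServerFirewallRule -ServerName $primary.ServerName `
    -RuleName "AllowAll" -StartIpAddress "0.0.0.0" -EndIpAddress "255.255.255.255"
New-AzureSqlDatabaseServerFirewallRule -ServerName $secondary.ServerName `
    -RuleName "AllowAll" -StartIpAddress "0.0.0.0" -EndIpAddress "255.255.255.255"

# Create the two empty databases on the Primary server.
$ctx = New-AzureSqlDatabaseServerContext -ServerName $primary.ServerName -Credential (Get-Credential)
New-AzureSqlDatabase -ConnectionContext $ctx -DatabaseName "mvcmusicstore" -Edition "Premium"
New-AzureSqlDatabase -ConnectionContext $ctx -DatabaseName "aspnetdb" -Edition "Premium"

# Enable Active Geo-replication (continuous copy) to the Secondary server.
Start-AzureSqlDatabaseCopy -ServerName $primary.ServerName -DatabaseName "mvcmusicstore" `
    -PartnerServer $secondary.ServerName -ContinuousCopy
Start-AzureSqlDatabaseCopy -ServerName $primary.ServerName -DatabaseName "aspnetdb" `
    -PartnerServer $secondary.ServerName -ContinuousCopy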

Once you’ve run the above script you’ll have your underlying SQL infrastructure ready to go. Now we need to build the schemas and populate with data. We’ll do this using the old Azure Management Portal so we can avoid having to install any SQL Server tooling locally if we don’t need to.

  1. Open up a browser and log into the Azure Management Portal and navigate to the SQL Server that is your Primary Server.
  2. On the SQL Databases page, click ‘Servers’ and highlight your primary server by clicking anywhere on the line other than the Server Name.
  3. Click ‘Manage’ in the footer. This will open the Silverlight-based admin console.
  4. Log into the server with the credentials you put into your PowerShell setup script.
  5. Select the empty database instance you want to run your DDL SQL Script in and then click ‘Open’ in the navigation bar (your screen should look something like the sample below).
    Running a SQL Script
  6. Repeat for both databases (MVC Music Store and ASP.Net).
    The ASP.Net scripts can be found in the solution on Github. All you need to run is InstallCommon.Sql, InstallMembership.Sql and InstallRoles.Sql (in that order).

Once completed, you have set up the foundation database tier for our sample application. The geo-replication will play out your DDL changes into the Secondary Region, and you’re already in a good position having your data in at least four different locations.

This completes the first post in this two post series. In our next post we will look at setting up the web tier.

ASP.NET Web API Integration Testing with One Line of Code

A very popular post about integration testing ASP.NET Web API was published quite some time ago. Since then, however, OWIN has been released, and OWIN makes integration testing ASP.NET Web API much simpler. This post describes what is required to set up an OWIN-based integration testing framework.

This, believe it or not, only requires a single line of code with OWIN self-hosting! It assumes that your web API project is powered by ASP.NET Web API 2.2 with OWIN.

The basic idea behind this particular approach is that the integration test project self-hosts the web API project. Consequently, the integration tests are executed against the self-hosted web app. In order to achieve this, the web API project must be modified a little bit. Fortunately, this is easy with OWIN.

A web app powered by OWIN has two entry points: Application_Start() and Startup.Configuration(IAppBuilder). There isn’t a big difference between the entry points but it is important to know that Startup.Configuration is invoked a bit later than Application_Start().

So that the integration test project can self-host the web API project, all web API initialization logic must be moved to Startup.Configuration(IAppBuilder) as follows:

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var configuration = new HttpConfiguration();
        WebApiConfig.Register(configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        // Execute any other ASP.NET Web API-related initialization, i.e. IoC, authentication, logging, mapping, DB, etc.
        ConfigureAuthPipeline(app);
        app.UseWebApi(configuration);
    }
}

Not much is left in the old Application_Start() method:

public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}

Now, in order to test your web API project, add the following OWIN NuGet packages to your tests project:

PM> Install-Package Microsoft.Owin.Hosting
PM> Install-Package Microsoft.Owin.Host.HttpListener

Following is the single line of code that is required to self-host your web API project:

using (var webApp = WebApp.Start<Startup>("http://*:9443/"))
{
    // Execute tests against the web API here.
    // webApp is disposed automatically at the end of the using block,
    // so no explicit Dispose() call is needed.
}

That single line starts the web API up.

It is better to set this up once for all integration tests in your test project, using MSTest’s AssemblyInitialize attribute as follows:

[TestClass]
public class WebApiTests
{
    private static IDisposable _webApp;

    [AssemblyInitialize]
    public static void SetUp(TestContext context)
    {
        _webApp = WebApp.Start<Startup>("http://*:9443/");
    }

    [AssemblyCleanup]
    public static void TearDown()
    {
        _webApp.Dispose();
    }
}

Now, the test code for testing a web API, which is self-hosted at localhost:9443 and protected using token-based authentication, can be written as follows:

[TestMethod]
public async Task TestMethod()
{
    using (var httpClient = new HttpClient())
    {
        var accessToken = GetAccessToken();
        httpClient.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        var requestUri = new Uri("http://localhost:9443/api/values");
        await httpClient.GetStringAsync(requestUri);
    }
}

The above approach has a wide range of advantages:

  • You don’t have to deploy the web API project.
  • The test and web API code run in the same process.
  • Breakpoints can be added to the web API code.
  • The entire ASP.NET pipeline can be tested with routing, filters, configuration, etc.
  • Any web API dependencies that would otherwise be injected from a DI container can be faked from the test code.
  • Tests are quick to run!

On the other hand, there are a few limitations, since there are behavioural differences between IIS hosting and self-hosting. For example, HttpContext.Current is null when the web API is self-hosted. Asynchronous code might also result in deadlocks when it runs in IIS even though it runs fine in the self-hosted web app. There are also complexities with setting up the SSL certificate for the self-hosted web app. Please note that Visual Studio should run with elevated privileges, since HttpListener needs them to listen on a wildcard prefix like http://*:9443/.

Nevertheless, with careful consideration, integration tests can be authored to run against a self-hosted or cloud-hosted web API.

This provides a tremendous productivity boost, given the easy ability to run integration tests in process, with a setup cost that is negligible as described in this post.

Update:

It was suggested in the comments to utilize in-memory hosting instead of the self-hosting solution (see details here).

Another NuGet package should be referenced: Microsoft.Owin.Testing. The resulting in-memory API call is executed as follows:

using (var server = TestServer.Create<Startup>())
{
    // Execute test against the web API.
    var result = await server.HttpClient.GetAsync("/api/values/");
}

Unfortunately, the in-memory solution doesn’t work for me out of the box. It seems it can’t handle authentication: I use UseJwtBearerAuthentication (JWT bearer token middleware) and my API calls result in 401.