In the first post in this series we set up our scenario and looked at how we can build out an ultra highly available Azure SQL Database layer for our applications. In this second post we’ll go through setting up the MVC web application we want to deploy so that it can leverage the capabilities of the Azure platform.
MVC project changes
This is actually pretty straightforward – you can take the sample MVC project from CodePlex and apply these changes easily. The sample GitHub repository has the resulting project (you’ll need Visual Studio 2013 Update 3 and the Azure SDK v2.4).
The changes are summarised as:
- Open the MvcMusicStore Solution (it will be upgraded to the newer Visual Studio project / solution format).
- Right-click on the MvcMusicStore project and select
Convert – Convert to Microsoft Azure Cloud Project.
- Add new Solution Configurations – one for each Azure Region to deploy to. This allows us to define Entity Framework and ASP.Net Membership / Role Provider database connection strings for each Region. I copied mine from the Release configuration.
- Add web.config transformations for the two new configurations added in the previous step. Right-click on the web.config and select “Add Config Transform”. Two new items will be added (in addition to Debug and Release, as shown below).
- Switch the Role deployment to use a minimum of two Instances by double-clicking on the MvcMusicStore.Azure – Roles – MvcMusicStore node in Solution Explorer to open up the properties page for the Role. Change the setting as shown below.
At this stage the basics are in place. A few additional items are required and they will be very familiar to anyone who has had to run ASP.Net web applications in a load balanced farm. These changes all relate to application configuration and are achieved through edits to the web.config.
Configure Membership and Role Provider to use SQL Providers.
[code language="xml" gutter="false"]
<membership defaultProvider="SqlMembershipProvider" userIsOnlineTimeWindow="15">
  <providers>
    <clear/>
    <add name="SqlMembershipProvider"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="AspNetDbProvider"
         applicationName="MVCMusicStoreWeb"
         enablePasswordRetrieval="false"
         enablePasswordReset="false"
         requiresQuestionAndAnswer="false"
         requiresUniqueEmail="true"
         passwordFormat="Hashed"/>
  </providers>
</membership>
<roleManager enabled="true" defaultProvider="SqlRoleProvider">
  <providers>
    <add name="SqlRoleProvider"
         type="System.Web.Security.SqlRoleProvider"
         connectionStringName="AspNetDbProvider"
         applicationName="MVCMusicStoreWeb"/>
  </providers>
</roleManager>
[/code]
Add a fixed Machine Key (you can use a range of online tools to generate one for you if you want). This allows all Instances to handle Forms Authentication and other shared state that requires encryption / decryption between requests.
[code language="xml" gutter="false"]
<machineKey
  validationKey="E9D17A5F58DE897D9161BB8D9AA995C59102AEF75F0224183F1E6F67737DE5EBB649BA4F1622CD52ABF2EAE35F9C26D331A325FC9EAE7F59A19F380E216C20F7"
  decryptionKey="D6F541F7A75BB7684FD96E9D3E694AB01E194AF6C9049F65"
  validation="SHA1"/>
[/code]
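If you’d rather not trust an online generator, you can produce your own keys locally. This is a minimal sketch (the function name is mine, not from any library) that emits random hex strings of the right sizes – 64 bytes for a SHA1 validationKey and 24 bytes for a 3DES decryptionKey, matching the example above:

[code language="powershell" gutter="false"]
# New-KeyHex is a hypothetical helper: returns $byteLength random bytes as hex
function New-KeyHex([int]$byteLength) {
    $bytes = New-Object byte[] $byteLength
    $rng = New-Object System.Security.Cryptography.RNGCryptoServiceProvider
    $rng.GetBytes($bytes)
    ($bytes | ForEach-Object { $_.ToString("X2") }) -join ""
}

"validationKey=$(New-KeyHex 64)"
"decryptionKey=$(New-KeyHex 24)"
[/code]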
Define a new connection string for our SQL-based Membership and Role Providers
[code language="xml" gutter="false"]
<add name="AspNetDbProvider"
     connectionString="{your_connection_string}"
     providerName="System.Data.SqlClient"/>
[/code]
Phew! Almost there!
The last piece of the puzzle is to add configuration transformations for our two Azure Regions so that we talk to the Azure SQL Database in each Region. Repeat this in each Region’s transform, replacing the Azure SQL Database server name with the one appropriate to that Region (note that your secondary will be read-only at this point).
[code language="xml" gutter="false"]
<connectionStrings>
  <add name="MusicStoreEntities"
       xdt:Transform="SetAttributes"
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=mvcmusicstore;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
  />
  <add name="AspNetDbProvider"
       xdt:Transform="SetAttributes"
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=aspnetdb;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
  />
</connectionStrings>
[/code]
Build Deployment Packages
Now that we have a project ready to deploy, we need to publish the deployment packages locally, ready for the remainder of this post.
In Visual Studio, right-click on the MvcMusicStore.Azure project and select Package as shown below.
Choose the appropriate configuration and click ‘Package’.
After packaging is finished a Windows Explorer window will open at the location of the published files.
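If you’d rather script the packaging step (for a build server, say), the Azure SDK supports the same operation via MSBuild. A sketch, assuming MSBuild 12.0 (Visual Studio 2013) and hypothetical configuration names of ProductionEast and ProductionWest for the two Regions:

[code language="powershell" gutter="false"]
# TargetProfile selects which ServiceConfiguration.*.cscfg goes into the package.
# Run once per Region configuration; output lands under bin\<config>\app.publish.
$msbuild = "${env:ProgramFiles(x86)}\MSBuild\12.0\Bin\MSBuild.exe"
& $msbuild .\MvcMusicStore.Azure.ccproj /t:Publish /p:TargetProfile=ProductionEast
& $msbuild .\MvcMusicStore.Azure.ccproj /t:Publish /p:TargetProfile=ProductionWest
[/code]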
Setup Cloud Services
Now that we have all the project packaging work out of the way, let’s go ahead and provision a Cloud Service which will be used to host our Web Role Instances. The sample script below shows how we can use the Azure PowerShell Cmdlets to provision a Cloud Service. The use of an Affinity Group ensures that our Blob Storage and Cloud Service are co-located within the Azure Region, keeping the latency between them low.
Note: Just about everything you deploy in Azure requires a globally unique name. If you run all these scripts using the default naming scheme I’ve included, the chances are that they will fail because someone else may have already run them (hint: you should change them).
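The provisioning boils down to three calls against the classic Azure Service Management cmdlets. A minimal sketch, using hypothetical resource names for the Primary Region (repeat with different names and a different Location for the Secondary):

[code language="powershell" gutter="false"]
# Hypothetical names – change these; service and storage names must be globally unique
$affinityGroup  = "mvcmusicstore-east-ag"
$storageAccount = "mvcmusicstoreeast"     # lowercase letters/numbers only
$serviceName    = "mvcmusicstore-east"

# Affinity Group keeps the Cloud Service and Storage Account close together
New-AzureAffinityGroup -Name $affinityGroup -Location "East US"

# Storage Account to hold the uploaded deployment package
New-AzureStorageAccount -StorageAccountName $storageAccount -AffinityGroup $affinityGroup

# The Cloud Service that will host our Web Role Instances
New-AzureService -ServiceName $serviceName -AffinityGroup $affinityGroup
[/code]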
Deploy Application to Cloud Services
Once the above Cloud Service script has been run successfully we have the necessary pieces in place to actually deploy our MVC application. The sample PowerShell script below does just this – it utilises the output of our packaging exercise from above, uploads the package to Azure Blob Storage and then deploys using the appropriate configuration. The reason we have two packages is that the web.config is deployed with the package, and making changes to it post-deployment is not supported.
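The deployment itself is one cmdlet: New-AzureDeployment uploads the local .cspkg to the subscription’s current storage account and deploys it with the matching .cscfg. A sketch, using the same hypothetical names as the provisioning step and placeholder paths you’ll need to fill in:

[code language="powershell" gutter="false"]
$serviceName = "mvcmusicstore-east"
$packagePath = "<path-to-publish-output>\MvcMusicStore.Azure.cspkg"
$configPath  = "<path-to-publish-output>\ServiceConfiguration.ProductionEast.cscfg"

# Point the subscription at the storage account created earlier so the
# package upload has somewhere to go
Set-AzureSubscription -SubscriptionName "<your subscription>" `
    -CurrentStorageAccountName "mvcmusicstoreeast"

# Upload the package and deploy it with this Region's configuration
New-AzureDeployment -ServiceName $serviceName -Package $packagePath `
    -Configuration $configPath -Slot Production -Label "MvcMusicStore"
[/code]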
Once the above script has finished successfully you should be able to open a web browser and connect to the cloudapp.net endpoints in each Region. The Primary Region should give you full read/write access, whereas the Secondary Region will work but will likely throw exceptions for any action that requires a database write (this is expected behaviour). Note that these cloudapp.net endpoints resolve to a load balanced endpoint that frontends the individual Instances in the Cloud Service.
Configure Traffic Manager
The final piece of this infrastructure puzzle is to deploy Traffic Manager, which is Azure’s service offering for controlling where inbound requests are routed. Traffic Manager is not a load balancer but can provide services such as least-latency and failover routing, and as of October 2014 supports nested profiles (i.e. Traffic Manager managing traffic for another Traffic Manager – all very Inception-like!).
For our purposes we are going to use a Failover configuration that will use periodic health checks to determine if traffic should continue to be routed to the Primary Endpoint or failover to the Secondary.
Notes:
When defining a Failover configuration the ordering of the Endpoints matters. The first Endpoint added is considered as the Primary (you can change this later if you wish though).
You might want to consider a different “MonitorRelativePath” and utilise a custom page (or view in MVC’s case) that performs some form of simple diagnostics and returns a 200 OK response code if everything is working as expected.
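Putting those notes together, the Failover profile can be set up with the classic Traffic Manager cmdlets. A sketch with hypothetical profile and service names – note the Primary endpoint is added first, per the ordering rule above:

[code language="powershell" gutter="false"]
# Hypothetical names – the trafficmanager.net domain must also be globally unique
$profile = New-AzureTrafficManagerProfile -Name "mvcmusicstore-tm" `
    -DomainName "mvcmusicstore.trafficmanager.net" `
    -LoadBalancingMethod Failover `
    -Ttl 30 -MonitorProtocol Http -MonitorPort 80 -MonitorRelativePath "/"

# Endpoint order matters for Failover: first added = Primary
$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile `
    -DomainName "mvcmusicstore-east.cloudapp.net" -Type CloudService -Status Enabled
$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile `
    -DomainName "mvcmusicstore-west.cloudapp.net" -Type CloudService -Status Enabled

# Commit the endpoint changes to the profile
$profile | Set-AzureTrafficManagerProfile
[/code]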
But wait, there’s more!
A free set of steak knives!
No, not really, but if you’ve read this far you may as well qualify for some!
There are a few important things to note with this setup.
- It won’t avoid data loss: the nature of Geo-replication means there is a high likelihood you will not see all Primary Database transactions play through to the Secondary in the event of a failure in the Primary. The window should be fairly small (depending on how geographically dispersed your databases are), but there will be a gap.
- Failover requires manual intervention: you need to use the Stop-AzureSqlDatabaseCopy Cmdlet to force termination of the relationship between the Primary and Secondary databases. Until you do this the Secondary is read-only. Use external monitoring to find out when the Primary service goes down and then either leverage the Azure REST API to invoke the termination or script the Cmdlet. Note that once you break the copy process it isn’t automatically recreated.
- Users will notice the cutover: there will be a combination of things they’ll notice depending on how long you wait to terminate the database copy. Active sessions will lose some data as they are backed by the database. User actions that require write access to the database will fail until you terminate the copy.
- What about caching? We didn’t look at caching in this post. If we leverage the new Redis Cache we could theoretically setup a Slave in our Secondary Region. I haven’t tested this so your mileage may vary! (Would love to hear if you have)
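For reference, the manual failover mentioned above is a single forced-termination call per database. A sketch only – run it against the Secondary server once you’re confident the Primary outage will be prolonged, because the copy relationship is not automatically recreated afterwards:

[code language="powershell" gutter="false"]
# Forced termination breaks the geo-replication relationship and makes the
# Secondary databases writable. {secondarywest_db_server} is a placeholder.
Stop-AzureSqlDatabaseCopy -ServerName "{secondarywest_db_server}" `
    -DatabaseName "mvcmusicstore" -ForcedTermination
Stop-AzureSqlDatabaseCopy -ServerName "{secondarywest_db_server}" `
    -DatabaseName "aspnetdb" -ForcedTermination
[/code]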
The big benefit of this setup is that you can quickly recover from a failure in one Region. You may choose to use a holding page as your Secondary failover and then manually manage the complete cutover to the secondary active application once you are satisfied that the outage in the Primary Region will be of a long duration.
You should be monitoring this setup, checking that both the Primary and Secondary cloudapp.net endpoints are healthy and that your Traffic Manager service is healthy. Any issue with the Primary cloudapp.net endpoint is your trigger to intervene and potentially switch off the geo-replication.
Anyway, this has been a big couple of posts – thanks for sticking around and I hope you’ve found the content useful.