Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well why would you want to do that?

  • On demand compute:
    • You may not have access to a physical server, or your computer isn’t powerful enough to handle huge data processing. Or you definitely do not want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it’s super cheap.
  • It’s super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details around that. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying against a SQL database, whether it’s in Azure or not, isn’t that complex. In fact, this part of the post is just to get us started. For this part, we’re going to do something simple, because if you want to get things done, you need the fastest way of doing it, and that is what we are going to do. Here’s a quick summary of the ways I thought of doing it:

    1. Using ‘Invoke-SqlCmd2‘. This part of the blog: it’s super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object to handle complex SQL querying scenarios. [[ This is where the magic kicks in.. Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL Database called ‘myDB‘, inside an Azure SQL Server ‘mytestAzureSQL.database.windows.net‘.
  • It’s a simple database containing a single table ‘test_table’. The table has three columns (Id, Name, Age) and contains only two records.
  • We’ve set up ‘Allow Azure Services‘ access on this database in the firewall rules. Here’s how to do that, just in case:
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON‘.
  • We have an Azure Automation account set up. We’ll be using that to test our code.

Now let’s get this up and running.

Start by creating two variables, one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL login username and password. You need this because you definitely should not be storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.
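If you would rather script the asset creation than click through the portal, here’s a minimal sketch, assuming the AzureRM Automation cmdlets are available. The resource group and Automation account names are placeholders; the asset names match the ones used in the runbook below.

$rg  = "myResourceGroup"        # placeholder
$acc = "myAutomationAccount"    # placeholder

#Create the two Automation variables holding the SQL server name and database name
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $acc -Name "AzureSQL_ServerName" -Value "mytestAzureSQL.database.windows.net" -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $acc -Name "AzureSQL_DBname" -Value "myDB" -Encrypted $false

#Create the Automation credential holding the SQL login (you'll be prompted for the username and password)
$sqlCred = Get-Credential
New-AzureRmAutomationCredential -ResourceGroupName $rg -AutomationAccountName $acc -Name "mySqllogin" -Value $sqlCred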

Now you need to import the ‘Invoke-SqlCmd2‘ module into the Automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now from the main menu of the Automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘. Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook; to do that, click on the pencil icon in the top menu to get into the editing pane.

Inside the pane, paste the following code (I’ll go through the details, don’t worry).

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

#Import the SQL Server name from the Automation variable
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

#Import the SQL DB name from the Automation variable
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"

    • The first cmdlet ‘Get-AutomationPSCredential‘ is to retrieve the automation credential object we just created.
    • The next two cmdlets ‘Get-AutomationVariable‘ are to retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let’s query our database. To do that, paste the code below after the section above.

#Query to execute
$Query = "select * from Test_Table"

"----- Test Result BEGIN "

#Invoke against the Azure SQL DB
Invoke-SqlCmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

"`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is very simple, but you can put whatever you want in there.
    • We’ve executed the cmdlet ‘Invoke-SqlCmd2‘. If you noticed, we didn’t have to import the module we just installed; Azure Automation takes care of that on every execution.
    • In the cmdlet’s parameters, we specified the SQL server (retrieved from the Automation variable) and the database name (also an Automation variable). We then used the credential object we imported from Azure Automation, and finally the query variable we created. An optional switch parameter ‘-Encrypt’ can be used to encrypt the connection to the SQL server.

Let’s run the code and look at the output!

From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.

Initially, the code goes through the following stages of execution:

  • Queuing
  • Starting
  • Running
  • Completed

Now, what’s the final result? Look at the output (the black box) and you should see something like this.

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet, right? The output we’re getting here is an object of type ‘DataRow‘. If you wrap this query into a variable, you can start to do some cool stuff with it, like:

$Result.Count, or

$Result.Age, or even

$Result | Where-Object -FilterScript {$PSItem.Age -gt 10}
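For example, a small sketch of capturing the rows into a variable first (same parameters as the runbook above):

$Result = Invoke-SqlCmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt
$Result | Where-Object -FilterScript { $PSItem.Age -gt 10 }   # e.g. filter the returned rows client-side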

Now imagine if you could do so much more complex things with this.

Quick Hint:

If you include the ‘-Debug’ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

 

Auto-redirect ADFS 4.0 home realm discovery based on client IP

As I mentioned in my previous post here, I will explain how to auto-redirect the home realm discovery page to an ADFS namespace (claims provider trust) based on the client’s IP, so here I am.

Let’s say you have many ADFS servers (claims provider trusts) linked to a central ADFS 4.0 server, and you want to auto-redirect the user to a linked ADFS server’s login page based on the user’s IP, instead of letting the user choose the respective ADFS server from the list on the home realm discovery page, as shown in the request flow diagram below.

You can do so by doing some customization as mentioned below:

  1. Create a database of IP ranges mapped to ADFS namespaces
  2. Develop a Web API which returns the relevant ADFS namespace based on request IP
  3. Add custom code in onload.js file on the central ADFS 4.0 server to call the Web API and do the redirection

It is assumed that all the boxes, including the central ADFS, linked ADFS, web server and SQL Server, are set up, and that all the nitty-gritty details are sorted out in terms of firewall rules, DNS lookups and SSL certificates. If not, you can get help from an infrastructure guy on that.

Let’s perform the required actions on the SQL, Web and ADFS servers.

SQL Server

Perform the following actions on the SQL Server:

  1. Create a new database
  2. Create a new table called Registration as shown below (a scripted sketch follows the example record)
  3. Insert some records in the table for the linked ADFS server IP range, for example:

Start IP: 172.31.117.1, End IP: 172.31.117.254, Redirect Name: http://adfs.adminlab.com/adfs/services/trust
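The original table design was shown in a screenshot, so here is a hedged sketch of what it could look like, created from PowerShell with Invoke-Sqlcmd. The database name and column names (Id, StartIP, EndIP, RedirectName) are assumptions based on the example record above.

Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "AdfsRedirectDB" -Query @"
CREATE TABLE dbo.Registration (
    Id           int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    StartIP      varchar(15)  NOT NULL,
    EndIP        varchar(15)  NOT NULL,
    RedirectName varchar(255) NOT NULL
);

INSERT INTO dbo.Registration (StartIP, EndIP, RedirectName)
VALUES ('172.31.117.1', '172.31.117.254', 'http://adfs.adminlab.com/adfs/services/trust');
"@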

Web Server

Perform the following actions for the Web API development and deployment:

  1. Create a new ASP.NET MVC Web API project using Visual Studio
  2. Create a new class called Redirect.cs as shown below (I would have used the same name as the database table, ‘Registration’, but it’s OK for now)
  3. Insert a new Web API controller class called ResolverController.cs as shown below. What we are doing here is getting the request IP address, fetching the IP ranges from the database, and comparing the request IP with those ranges by converting both to long IP addresses. If the request IP is in range, the matching redirect object is returned.
  4. Add a connection string in the web.config named DbConnectionString pointing to the database we created above.
  5. Deploy this Web API project to the web server IIS.
  6. Configure the HTTPS binding for this Web API project using the SSL certificate.
  7. Note down the URL of the Web API, something like ‘https://{Web-Server-Web-API-URL}/api/resolver/get’; this will be used in onload.js (a quick test example follows this list).
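Before touching onload.js it is worth confirming the Web API responds. Here is a hedged check from PowerShell; the URL placeholder is as above, and the RedirectName property is an assumption based on the controller described earlier.

$redirect = Invoke-RestMethod -Uri "https://{Web-Server-Web-API-URL}/api/resolver/get"
$redirect.RedirectName   # should return the ADFS namespace for the calling client's IP range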

Central ADFS 4.0 Server

Perform the following actions on the central ADFS 4.0 server:

  1. Run the following PowerShell command to export current theme to a location

Export-AdfsWebTheme -Name default -DirectoryPath D:\Themes\Custom

  2. Run the following PowerShell command to create a new custom theme based on the current theme

New-AdfsWebTheme -Name custom -SourceName default 

  3. Update the onload.js file exported in step 1 (at D:\Themes\Custom\theme\script) with the following code added at the end of the file. What we are doing here is calling the Web API, which returns the matched Redirect object with RedirectName as the ADFS namespace, and setting HRD.selection to that redirect name.

  4. Run the following PowerShell command to update the onload.js file back into the theme

Set-AdfsWebTheme -TargetName custom -AdditionalFileResource @{Uri='/adfs/portal/script/onload.js';path="D:\Themes\Custom\theme\script\onload.js"}

  5. Run the following PowerShell command to make the custom theme your default theme

Set-AdfsWebConfig -ActiveThemeName custom -HRDCookieEnabled $false

Now when you test from your linked ADFS server, or from a client machine joined to a linked ADFS server (which is in turn linked to the central ADFS server), the auto-redirect kicks in from onload.js. It calls the Web API, which takes the client IP, matches it against the relevant ADFS the request came from, and redirects the user to the relevant ADFS login page, instead of the user selecting the relevant ADFS namespace from the list on the home realm discovery page.

If the relevant match is not found, the default home realm discovery page with list of available ADFS namespaces is shown.

UPDATED: Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager

Earlier this week I posted this blog post that showed a working example of using a custom Pwned Password FIM/MIM Management Agent to flag a boolean attribute in the MIM Service to indicate whether a user’s password is in the pwned password dataset or not. If you haven’t read that post this won’t make a lot of sense, so read that then come back.

The solution, on receiving a new password for a user (via the Microsoft Password Change Notification Service), checked it against the Have I Been Pwned API. The disclaimer at the start of that blog post detailed why this is a bad idea for production credentials. The intent was to show a working example of what could be achieved.

This update post shows a working solution that you can implement internal to a network. Essentially taking the Pwned Password Datasets available here and loading them into a local network SQL Server and then querying that from the FIM/MIM Pwned Password Management Agent rather than calling the external public API.

Creating an SQL Server Database for the Pwned Passwords

On my SQL Server using SQL Server Management Studio I right-clicked on Databases and chose New Database. I gave it the name PwnedPasswords and told it where I wanted my DB and Logs to go to.

Then in a query window in SQL Server Management Studio I used the following script to create a table (dbo.pwnedPasswords).

use PwnedPasswords;
CREATE TABLE dbo.pwnedPasswords
( password_id int IDENTITY(1,1) NOT NULL,
  passwords varchar(max) NOT NULL,
  CONSTRAINT passwords_pk PRIMARY KEY (password_id)
);

Again using a query window in SQL Server Management Studio I used the following script to create an index for the passwords.

USE [PwnedPasswords]
GO
SET ANSI_PADDING ON

GO
CREATE UNIQUE NONCLUSTERED INDEX [PasswordIndex] ON [dbo].[pwnedPasswords]( [password_id] ASC)INCLUDE ( [passwords]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO

The last thing I did on the DB was to take the MIM Sync Server Active Directory Service Account (that was already in the SQL Server Logins) and give that account Reader Access to my new PwnedPasswords Database. I gave this account access as I’m using Integrated Authentication for login to SQL and as the MA is initiated by the MIM Sync Service Account, that is the account that needs the access.

Getting the Pwned Password Datasets into the new Database

I’m far from a DBA; I’m an identity guy. So, using the tools I was most familiar with (PowerShell), I created a simple script to open the password dump files as a stream (as Get-Content wasn’t going to handle the file sizes), read in the lines, convert the format and insert the rows into SQL. I performed the inserts in batches of 1,000, and I ran the script locally on the SQL Server.

In order to get the content from the dump file, add another column, and get it into a format to quickly insert into the SQL DB, I used the Out-DataTable function available from here.

The script could probably be improved, as I only spent about 20-30 minutes on it. It opens and closes a connection to the SQL DB each time it inserts 1,000 rows; that could be moved outside the Insert2DB function, and maybe the batch size increased. Either way it is a starting point, and I used it to write millions of rows into the DB successfully.
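The original loader script was published separately, but a minimal sketch of the same idea (stream the dump file, insert in batches) looks something like the below. The server name, file path and batch handling are assumptions; the author’s script used the Out-DataTable function and an Insert2DB function instead.

$connectionString = "Server=SQLSERVER01;Database=PwnedPasswords;Integrated Security=SSPI"   # placeholder server

function Insert-Batch ($rows) {
    # Open a connection, insert the batch, then close the connection (as described above)
    $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    $connection.Open()
    foreach ($row in $rows) {
        $command = $connection.CreateCommand()
        $command.CommandText = "INSERT INTO dbo.pwnedPasswords (passwords) VALUES (@pwd)"
        [void]$command.Parameters.AddWithValue("@pwd", $row)
        [void]$command.ExecuteNonQuery()
    }
    $connection.Close()
}

$reader = New-Object System.IO.StreamReader("D:\Dumps\pwned-passwords-1.0.txt")   # placeholder path
$batch  = New-Object System.Collections.Generic.List[string]

while (($line = $reader.ReadLine()) -ne $null) {
    $batch.Add($line.Trim())
    if ($batch.Count -ge 1000) { Insert-Batch $batch; $batch.Clear() }   # insert in batches of 1000
}
if ($batch.Count -gt 0) { Insert-Batch $batch }
$reader.Close()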

Updated FIM/MIM Pwned Passwords Management Agent Password.ps1 script

This then is the only other change to the solution: the Password.ps1 script, rather than querying the PwnedPasswords API, queries the SQL DB and sets the pwned boolean flag accordingly.
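The updated Password.ps1 itself is in the linked solution; a hedged sketch of the lookup it performs might look like this. It assumes the passwords column holds the SHA-1 hashes exactly as they appear in the downloaded dump files (upper-case hex), and that $password holds the incoming clear-text password from PCNS; the server name is a placeholder.

#Hash the incoming password the same way the dataset is stored (assumption: SHA-1, upper-case hex)
$sha1  = [System.Security.Cryptography.SHA1]::Create()
$bytes = [System.Text.Encoding]::UTF8.GetBytes($password)
$hash  = ([System.BitConverter]::ToString($sha1.ComputeHash($bytes))) -replace '-', ''

#Look the hash up in the local PwnedPasswords database
$connection = New-Object System.Data.SqlClient.SqlConnection("Server=SQLSERVER01;Database=PwnedPasswords;Integrated Security=SSPI")
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = "SELECT COUNT(1) FROM dbo.pwnedPasswords WHERE passwords = @pwd"
[void]$command.Parameters.AddWithValue("@pwd", $hash)
$pwned = ([int]$command.ExecuteScalar()) -gt 0    # $true if the password appears in the dataset
$connection.Close()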

Summary

This enhancement shows a working concept that will be more appealing to security officers within corporate organisations, if you have an appetite to know what your potential exposure is based on your Active Directory users’ passwords.

Setting up Always On Availability Groups for SQL Server in Azure

Almost all organisations rely on data, be it in the form of files or databases (SQL, Oracle…). The most challenging aspects of managing data for any organisation are high availability and disaster recovery. Starting with the release of SQL Server 2012 (Enterprise Edition), Microsoft introduced “Always On Availability Groups”.

This powerful capability ensures your databases are synchronised through one or more replicas (SQL Servers).

While Azure offers an Infrastructure as a Service capability, the mechanisms used to setup many of the requirements for an Always On Availability Group are different to those in a traditional environment. From the assignment of IP addresses through to load balancing across listeners, special consideration needs to be taken in order to establish a functional SQL environment. In this blog I will discuss how to setup an Always On Availability Group in Microsoft Azure.

This blog assumes that you already have basic knowledge of Azure infrastructure services and that you have created your Azure subscription, and configured the Resource Group, Storage account, VNET, and subnets.

It also assumes that you have basic SQL knowledge and have established your Domain Controller running on Windows Server 2012 R2.

Prerequisites:

  • 1x Domain Controller
  • 1x File Server (Used for file share witness)
  • 2x DS3_v2 servers for installation of SQL Server 2014 Enterprise Edition

Part 1 – Preparing your environment

Step 1 – Availability Set

Before deploying a SQL Server VM in Azure, you will need to create an availability set (this is to be used with the Azure Internal Load Balancer (ILB) at a later stage, and also to protect your VMs from a single point of failure).

Note: It is important to create an Availability Set before provisioning a VM. In AzureRM, adding a VM (after it’s been created) to an Availability Set is not supported as of this writing.

What is an Availability Set?

An Availability Set is a grouping across racks of servers, just like in a traditional datacentre. When you place two or more VMs in an availability set, the VMs will be deployed across two different racks. A rack usually contains its own power equipment, network equipment, etc.

Why is this important?

Microsoft from time to time performs planned maintenance on Azure; although there’s usually no direct impact on VMs, some VMs might need to be rebooted to complete the update. Microsoft also allows for the possibility that a rack might go down.

More on Availability Sets and best practices can be found here.

Create an Availability Set

In Azure’s sidebar, click on “New” and search for “Availability Set”. Create your availability set with a name that makes sense to you, e.g. AZ-AS-SQL (a PowerShell equivalent follows the naming example):

  • AZ = Azure
  • AS = Availability Set
  • SQL = Defining the purpose
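If you prefer scripting it, a one-line AzureRM equivalent might look like this (the resource group and location are placeholders):

New-AzureRmAvailabilitySet -ResourceGroupName "AZ-RG-SQL" -Name "AZ-AS-SQL" -Location "Australia East"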

Step 2 – Provisioning VMs

In Azure, select your defined Resource Group and click “Add”, choose Windows, and then select Windows Server 2012 R2.

For production, it is best to choose a DS3-type VM; for larger environments, choose F8. Also, choose Premium disks. For dev/test workloads, you could go with Standard disks rather than Premium.

While creating your SQL VM, choose the availability set you have created in step 1.

If you need more than 1,024 GB (1 TB) of disk, you will need to create multiple disks, then create a storage pool and virtual disk in Windows Server 2012 R2 to combine those disks into one, since Azure gives you a maximum of 1,024 GB per disk. Combining disks will also give you better performance and higher IOPS.

When creating disks, you will need to configure them as follows, assuming you’re creating one disk for each SQL service.

Note: Name your disks in a way you’d recognise. Should you delete the VM, you might need to re-attach the disks to a new VM. When deleting a VM, your disks remain and are not deleted with the VM (unless the storage account or resource group is deleted). A good example for naming a disk might be “VMName-Data01”.

Disk Type | Purpose      | Host Caching
Premium   | SQL Binaries | None
Premium   | TempDB       | Read
Premium   | Data         | Read
Premium   | Transaction  | None
Premium   | Backup       | None

You could choose a “Standard” Disk type for backup.

Your disks should have the same drive letters assigned to them on both SQL Servers.

Once your SQL Server has been created, join it to the domain.

Part 2 – SQL Server – Installation

Prerequisite:

.NET Framework 3.5

Install-WindowsFeature Net-Framework-Core 

After the SQL Server VM has joined the domain, start with the installation of SQL Server 2014 Enterprise.

Note: It is recommended to create gMSA accounts for both SQL Server Agent service and SQL Server Database Engine service.

These accounts should have “Log on as a Service” permissions. This could be standardised through the use of Group Policies.
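As a hedged example, the gMSA accounts could be created along these lines with the ActiveDirectory module. The account names and DNS names are placeholders, and the KDS root key shortcut shown is for labs only; the node names match the cluster nodes used later in this post.

#One-off per domain: create a KDS root key (the -10 hours trick is for labs only)
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

#Create a gMSA for the Database Engine service and one for the SQL Server Agent service
New-ADServiceAccount -Name "gMSA-SQLDB" -DNSHostName "gMSA-SQLDB.test.local" -PrincipalsAllowedToRetrieveManagedPassword 'Test-AZ-DB-01$', 'Test-AZ-DB-02$'
New-ADServiceAccount -Name "gMSA-SQLAgent" -DNSHostName "gMSA-SQLAgent.test.local" -PrincipalsAllowedToRetrieveManagedPassword 'Test-AZ-DB-01$', 'Test-AZ-DB-02$'

#On each SQL Server, install (link) the accounts
Install-ADServiceAccount -Identity "gMSA-SQLDB"
Install-ADServiceAccount -Identity "gMSA-SQLAgent"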

A quick note on how I have provisioned my SQL Server disks.

I have combined SQL Binaries and TempDB in one disk.

  • SQL Binaries and TempDB = E
  • Data = M
  • Transaction = T
  • Backup = U

Feature Selection

Each environment is different and may require different features to be installed.

In this scenario, we only require:

  • Database Engine Services
  • SQL Server Replication
  • Management Tools – Basic
    • Management Tools – Complete

Drive and Path selections

Part 3 – Failover Cluster

Before we start installing Failover Clustering, we first need to choose a server which will host our File Share Witness. This will be set up at a later stage, though.

Why a file share witness and not a disk?

Windows Server 2008 and higher provided the option to use a file share witness as a quorum configuration. In traditional clustering you would have needed a shared disk, configured through the clustering setup. In this case you don’t need a shared disk to create a Windows Server Failover Cluster for Always On Availability Groups.

On the servers where you have deployed SQL Server, open PowerShell and install the “Failover Clustering” feature. (You could also do this step prior to installing SQL Server.)

Failover Cluster – Installation and Configuration

After launching the Failover Cluster Manager (console), right click on Failover Cluster Manager and choose “Create Cluster”

#Creating a cluster

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

#The following cmdlet runs all cluster validation tests on computers that are named Test-AZ-DB-01 and Test-AZ-DB-02.

Test-Cluster -Node Test-AZ-DB-01, Test-AZ-DB-02

#The Test-Cluster cmdlet outputs the results to a log file in the current working directory, for example C:\Users\username\AppData\Local\Temp.

#The following cmdlet creates the failover cluster without adding eligible storage to the failover cluster, as eligible storage isn't required

New-Cluster -Name AZ-WSFC-01 -Node Test-AZ-DB-01, Test-AZ-DB-02 -StaticAddress 192.168.162.12 -NoStorage

#The following cmdlet creates a cluster that is named AZ-WSFC-01 in the SQL OU of the domain (the distinguished name must be quoted).

New-Cluster -Name "CN=AZ-WSFC-01,OU=SQL,DC=test,DC=local" -Node Test-AZ-DB-01, Test-AZ-DB-02

#Creating a FileShareWitness - Make sure the folder is already created, and permissions are assigned.

Set-ClusterQuorum -FileShareWitness \\FileServer\ClusterShareWitness

Or, if you prefer, follow the GUI:

Enter your “Nodes” – These are your SQL servers.

Run your validation in the next step. It is all right for the validation to give you warnings, since we’re not specifying any storage disks.

After adding your servers in the above steps, you should see them in the “Confirmation” window.

Uncheck the “Add all eligible storage to the cluster”.

In the next screen, you will specify the name of your Cluster Server, e.g: AZ-WSFC-01

Click next, until completion.

Once your cluster is created, you need to check the IP address, since a DHCP address will have been allocated to the cluster (unlike the PowerShell method above, where we specified a static address).

To do that,

  1. Click on your Cluster Server Name “AZ-WSFC-01”
  2. Scroll down to “Cluster Core Resources”
  3. Right-click the IP Address
  4. Select “Static IP Address”.

Remember our File Share Witness? It’s time to create the folder on the server which will host the File Share Witness.

Follow the steps below if you haven’t used the PowerShell cmdlets above (a scripted version of the share creation follows this list).

In my case, I am using the File Server as my File Share Witness.

  • Create a folder in any drive you have available, call it “ClusterWitnessShare”.
  • Share the folder with the cluster computer object. This is done through “Advanced Sharing permissions”.
  • Confirm that the folder is shared, and firewall is allowing access should you have one.
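Here is a small sketch of those steps in PowerShell, run on the file server; the drive letter and domain name are placeholders, and the cluster computer object is the one created earlier.

New-Item -Path "F:\ClusterWitnessShare" -ItemType Directory
New-SmbShare -Name "ClusterWitnessShare" -Path "F:\ClusterWitnessShare" -FullAccess 'TEST\AZ-WSFC-01$'
#Also grant the cluster computer object NTFS permissions on the folder if required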

Now that you have added the nodes, specified an IP Address, and configured your File Share Witness, it is time to add a Quorum for your failover cluster.

To do that through PowerShell

Set-ClusterQuorum -FileShareWitness \\FileServer\ClusterShareWitness

To do that through GUI

  • Right click on your cluster name AZ-WSFC-01
  • Click on “More Actions”
  • Click on “Configure Cluster Quorum Settings”.

Follow these steps:

In the next step, you could click “Browse” and enter the name of your server hosting the “ClusterWitnessShare”, and the shared folder will appear.

Continue clicking “next” until your Quorum is created.

No additional tasks are required for the Quorum after that.

Cluster and SQL Server Permissions

In order to be able to create a listener (in the next steps) you will need to make sure that the following permissions are set.

For the SQL Servers Computer Objects:

In Active Directory Users and Computers, enable “Advanced Features” (from the View menu). Go to the OU in which your SQL Servers reside; this is where the cluster computer object will be.

  1. Right-click on the Cluster Computer Object
  2. Security
  3. Add your SQL Servers – Alternatively, you could create a security group in which your SQL Servers are a member of.
  4. Assign the following permissions – For each server, if you didn’t create a security group:
Allow Permissions
Read
Allowed To Authenticate
Change Password
Receive As
Reset Password
Send As
Validate write To DNS Host Name
Validate Write To Service Principal Name
Read Account Restrictions
Write Account Restrictions
Read DNS Host Name Attributes
Read MS-TS-GatewayAccess
Read Personal Information
Read Public Information

For the WSFC Computer Object:

  • In Active Directory go to Users
  • Click on Domain Admins
  • Add the WSFC computer objects (this should automatically be removed once a listener is created. You may have to re-add it if you’re creating more than one listener).

If this permission isn’t set, your listener will fail to be created in AD and DNS, and as a result your Availability Group will not come online. More on this below.

Part 4 – Availability Groups

What is an Availability Group?

Microsoft describes an Availability Group as follows: An availability group supports a failover environment for a discrete set of user databases, known as availability databases, that fail over together. An availability group supports a set of primary databases and one to eight sets of corresponding secondary databases. Secondary databases are not backups. Continue to back up your databases and their transaction logs on a regular basis.

Documentation for Availability Groups is located here. It’s also worth checking the earlier sections of that document, specifically the prerequisites and restrictions for an Always On Availability Group. Although it is a fairly straightforward task, you need to decide how data is going to be committed across your SQL instances (asynchronously or synchronously), and whether your databases can tolerate any data loss in the case of a disaster.

Creating an Always On Availability Group.

Now that you have your Failover Cluster ready, you will need to launch the SQL Server Configuration Manager. This is where you first have to “turn on” the Always On Availability Group.

To do this through PowerShell:

First run the following cmdlet to enable AlwaysOn on all the nodes in the cluster:

foreach ($node in Get-ClusterNode) {Enable-SqlAlwaysOn -ServerInstance $node -Force} 

To enable AlwaysOn on each server one by one instead, run the following cmdlet:

Enable-SqlAlwaysOn -ServerInstance MSSQLSERVER -Force 

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") |
Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") |
Out-Null

$SqlServerPrimName = "SQL01"
$SqlServerSecName = "SQL02"
$SqlAgName = "TestAg"
$SqlAgDatabase = "TestDB"
$HadrEndpointName = "HadrEndpoint"
$HadrEndpointPort = 5022
$BackupDirectory = "\\SQL01\Backup"

$SqlServerPrim = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerPrimName)
$SqlServerSec = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerSecName)

#Backup the database on the primary replica/server (full database backup)

$DbBackup = New-Object Microsoft.SqlServer.Management.Smo.Backup
$DbBackup.Database = $SqlAgDatabase
$DbBackup.Action = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Database
$DbBackup.Initialize = $true
$DbBackup.Devices.AddDevice("$BackupDirectory\$($SqlAgDatabase)_AgSetup_full.bak", [Microsoft.SqlServer.Management.Smo.DeviceType]::File)

$DbBackup.SqlBackup($SqlServerPrim)

#Backup the database (transaction log backup)
$DbBackup = New-Object Microsoft.SqlServer.Management.Smo.Backup
$DbBackup.Database = $SqlAgDatabase
$DbBackup.Action = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Log
$DbBackup.Initialize = $true
$DbBackup.Devices.AddDevice("$BackupDirectory\$($SqlAgDatabase)_AgSetup_log.trn", [Microsoft.SqlServer.Management.Smo.DeviceType]::File)

$DbBackup.SqlBackup($SqlServerPrim)

#Restore the database on the secondary replica/server (full database restore)
$DbRestore = New-Object Microsoft.SqlServer.Management.Smo.Restore
$DbRestore.Database = $SqlAgDatabase
$DbRestore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Database
$DbRestore.Devices.AddDevice("$BackupDirectory\$($SqlAgDatabase)_AgSetup_full.bak", [Microsoft.SqlServer.Management.Smo.DeviceType]::File)
$DbRestore.NoRecovery = $true
$DbRestore.SqlRestore($SqlServerSec)

#Restore the database on the secondary replica/server (transaction log restore)

$DbRestore = New-Object Microsoft.SqlServer.Management.Smo.Restore
$DbRestore.Database = $SqlAgDatabase
$DbRestore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
$DbRestore.Devices.AddDevice("$BackupDirectory\$($SqlAgDatabase)_AgSetup_log.trn", [Microsoft.SqlServer.Management.Smo.DeviceType]::File)

$DbRestore.NoRecovery = $true
$DbRestore.SqlRestore($SqlServerSec)
#Create the endpoint on the primary replica
$EndpointPrim = $SqlServerPrim.Endpoints |
Where-Object {$_.EndpointType -eq [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring}
if(!$EndpointPrim) {
$EndpointPrim = New-Object Microsoft.SqlServer.Management.Smo.Endpoint($SqlServerPrim, $HadrEndpointName)
$EndpointPrim.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$EndpointPrim.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$EndpointPrim.Protocol.Tcp.ListenerPort = $HadrEndpointPort
$EndpointPrim.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::All
$EndpointPrim.Payload.DatabaseMirroring.EndpointEncryption = [Microsoft.SqlServer.Management.Smo.EndpointEncryption]::Required
$EndpointPrim.Payload.DatabaseMirroring.EndpointEncryptionAlgorithm = [Microsoft.SqlServer.Management.Smo.EndpointEncryptionAlgorithm]::Aes

$EndpointPrim.Create()
$EndpointPrim.Start()
}

#Create the endpoint on the secondary replica
$EndpointSec = $SqlServerSec.Endpoints |
Where-Object {$_.EndpointType -eq [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring}
if(!$EndpointSec) {
$EndpointSec = New-Object Microsoft.SqlServer.Management.Smo.Endpoint($SqlServerSec, $HadrEndpointName)
$EndpointSec.EndpointType = [Microsoft.SqlServer.Management.Smo.EndpointType]::DatabaseMirroring
$EndpointSec.ProtocolType = [Microsoft.SqlServer.Management.Smo.ProtocolType]::Tcp
$EndpointSec.Protocol.Tcp.ListenerPort = $HadrEndpointPort
$EndpointSec.Payload.DatabaseMirroring.ServerMirroringRole = [Microsoft.SqlServer.Management.Smo.ServerMirroringRole]::All
$EndpointSec.Payload.DatabaseMirroring.EndpointEncryption = [Microsoft.SqlServer.Management.Smo.EndpointEncryption]::Required
$EndpointSec.Payload.DatabaseMirroring.EndpointEncryptionAlgorithm = [Microsoft.SqlServer.Management.Smo.EndpointEncryptionAlgorithm]::Aes

$EndpointSec.Create()
$EndpointSec.Start()

}

$AvailabilityGroup = New-Object Microsoft.SqlServer.Management.Smo.AvailabilityGroup($SqlServerPrim, $SqlAgName)

#Create the primary replica object
$PrimaryReplica = New-Object Microsoft.SqlServer.Management.Smo.AvailabilityReplica($AvailabilityGroup, $SqlServerPrimName)
$PrimaryReplica.EndpointUrl = "TCP://$($SqlServerPrim.NetName):$($EndpointPrim.Protocol.Tcp.ListenerPort)"
$PrimaryReplica.FailoverMode = [Microsoft.SqlServer.Management.Smo.AvailabilityReplicaFailoverMode]::Automatic
$PrimaryReplica.AvailabilityMode = [Microsoft.SqlServer.Management.Smo.AvailabilityReplicaAvailabilityMode]::SynchronousCommit
$AvailabilityGroup.AvailabilityReplicas.Add($PrimaryReplica)

#Create the secondary replica object
$SecondaryReplica = New-Object Microsoft.SqlServer.Management.Smo.AvailabilityReplica($AvailabilityGroup, $SqlServerSecName)
$SecondaryReplica.EndpointUrl = "TCP://$($SqlServerSec.NetName):$($EndpointSec.Protocol.Tcp.ListenerPort)"
$SecondaryReplica.FailoverMode = [Microsoft.SqlServer.Management.Smo.AvailabilityReplicaFailoverMode]::Automatic
$SecondaryReplica.AvailabilityMode = [Microsoft.SqlServer.Management.Smo.AvailabilityReplicaAvailabilityMode]::SynchronousCommit
$AvailabilityGroup.AvailabilityReplicas.Add($SecondaryReplica)

#Create the availability group database object
$AvailabilityDb = New-Object Microsoft.SqlServer.Management.Smo.AvailabilityDatabase($AvailabilityGroup, $SqlAgDatabase)
$AvailabilityGroup.AvailabilityDatabases.Add($AvailabilityDb)
#Create the availability group
$SqlServerPrim.AvailabilityGroups.Add($AvailabilityGroup)
$AvailabilityGroup.Create()

#Join the secondary replica to the AG, and join the database to the AG
$SqlServerSec.JoinAvailabilityGroup($SqlAgName)
$SqlServerSec.AvailabilityGroups[$SqlAgName].AvailabilityDatabases[$SqlAgDatabase].JoinAvailablityGroup()

To use the GUI, search for “SQL Server Configuration Manager” and open it.

  • Select the SQL Server service, and then select the SQL Server instance (if you have more than one instance, select the one you’re configuring the Always On Availability Group for).
  • Right-click the SQL Server Instance (MSSQLSERVER in this case) and go to properties.
  • Select the “Always On Availability Group” tab, and in there you should see the Failover Cluster Computer Name (created in previous step).
  • Tick or select “Enable Always On Availability Group” and click “OK”.

You will be prompted to restart the service for changes to be saved, restart the service of your instance (MSSQLSERVER).

Once the service has been restarted, navigate to “SQL Server Management Studio”, launch it and connect to your preferred database engine.

Prerequisite to create an Availability Group:

  • A database. This could even be a test Database.
  • Full back-up performed for the Database.

Creating an Availability Group:

  • Right-click on “Always On High Availability”

  • Select ‘New Availability Group Wizard’
  • Name your Availability Group, e.g. AG01

  • Select your DB.
  • Note: If your DB doesn’t meet the requirements, SQL will suggest an action, e.g. performing a full backup of the DB.

  • Specify your replicas.

    Note: Do not configure a listener at this stage.

  • Select your Synchronization method.

    Note: You could also select “Full”. Make sure however, you already have a shared network location accessible by your SQL servers.

  • You will receive a warning in the validation section; that’s because we haven’t configured a listener. It will not prevent you from creating your availability group, though.

  • Once you click “Next” SQL will create the Availability Group with no issues.

  • Click “Close”.

Part 5 – Internal Load Balancer and creating the listener

The last step in getting a SQL Server Always On Availability Group working in Azure is where we will create and configure our Azure Load Balancer and create the listener.

Azure networking is different from on-premises networking, and this is the main reason why we need to create an Azure Internal Load Balancer for our Availability Group listener.

In Azure, click on “New” and search for “Load Balancer”, select the Load Balancer and click on Create.

Internal Load Balancer (ILB) configuration

In this section we will discuss best practice in configuring the ILB for your SQL Availability Listener.

Setting              | Field                               | Value
Backend pool         | Name                                | SQLILB
SQLLBBE              | Availability set                    | AZ-AS-SQL
SQLLBBE              | Virtual machines                    | AZ-DB-01, AZ-DB-02
SQLLBBE              | Used by                             | SQLAlwaysOnEndPointListener
Probe                | Name                                | SQLAlwaysOnEndPointProbe
Probe                | Protocol                            | TCP
Probe                | Port                                | 59999 (you can use any unused port)
Probe                | Interval                            | 5
Probe                | Unhealthy threshold                 | 2
Probe                | Used by                             | SQLAlwaysOnEndPointListener
Load balancing rules | Name                                | SQLAlwaysOnEndPointListener
Load balancing rules | Protocol                            | TCP
Load balancing rules | Port                                | 1433 (the SQL Server default port)
Load balancing rules | Backend Port                        | 1433
Load balancing rules | Probe                               | SQLAlwaysOnEndPointProbe
Load balancing rules | Session Persistence                 | None
Load balancing rules | Idle Timeout                        | 4
Load balancing rules | Floating IP (direct server return)  | Enabled
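As a hedged AzureRM sketch of the same configuration: the resource group, VNET, subnet and location names are placeholders, the pool/probe/rule names come from the table above, and the frontend IP matches the static ILB address used later in this post.

$vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName "AZ-RG-SQL" -Name "AZ-VNET-01"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "SQL-Subnet"

$feIp   = New-AzureRmLoadBalancerFrontendIpConfig -Name "SQLILBFrontEnd" -PrivateIpAddress "172.19.2.10" -SubnetId $subnet.Id
$bePool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "SQLLBBE"
$probe  = New-AzureRmLoadBalancerProbeConfig -Name "SQLAlwaysOnEndPointProbe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
$rule   = New-AzureRmLoadBalancerRuleConfig -Name "SQLAlwaysOnEndPointListener" -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 -FrontendIpConfiguration $feIp -BackendAddressPool $bePool -Probe $probe -IdleTimeoutInMinutes 4 -EnableFloatingIP

New-AzureRmLoadBalancer -ResourceGroupName "AZ-RG-SQL" -Name "SQLILB" -Location "Australia East" -FrontendIpConfiguration $feIp -BackendAddressPool $bePool -Probe $probe -LoadBalancingRule $rule

#Remember to add the NICs of AZ-DB-01 and AZ-DB-02 to the SQLLBBE backend pool afterwards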

Failover Cluster Configuration

The first step you need to do here is to create a Client Access Point resource.

To do that through PowerShell:


Add-ClusterResource -Name LSTR-01 -ResourceType "IP Address" -Group "Cluster Group"

To do that through GUI:

  1. Open the WSFC Manager
  2. Expand the cluster name
  3. Click on “Roles”
  4. Right click on your role (availability group name)
  5. Click “Add Resource”
  6. Choose “Client Access Point”

The “Client Access Point” is what will be your listener object, e.g. LSTR-01, as used in this example.

  1. Click “Next” and then click “Finish”.

Do not bring the listener or resource online at this point.

In Failover Cluster Manager, note the name of the cluster network. To determine the cluster network name in Failover Cluster Manager, click Networks in the left pane.

Note: You will use this name in the $ClusterNetworkName variable in the PowerShell script.

Click the Resources tab, then expand the Client Access Point you just created. Right-click the IP resource and click properties. Note the name of the IP address (Note that you can enter any value you like).

Note: You will use this name in the $IPResourceName variable in the PowerShell script.

Under IP Address click “Static IP Address” and set the static IP address to the same address that you used on the Azure Portal for “SQLILB” load balancer. Enable NetBIOS for this address and click “OK”.

Note: You will also use this same IP address in the $ILBIP variable in the PowerShell script.

On the cluster node that currently hosts the primary replica, open an elevated PowerShell ISE and paste the following commands into a new script – Don’t forget to change the variables.

 

$ClusterNetworkName = "SQL Failover Cluster Network" # the cluster network name (use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)
$IPResourceName = "SQLAGLSTR" # the IP Address resource name
$ILBIP = "172.19.2.10" # the IP Address of the Internal Load Balancer (ILB). This is the static IP address for the load balancer you configured in the Azure portal.

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"="59999";"SubnetMask"="255.255.255.0";"Network"="$ClusterNetworkName";"EnableDhcp"=0} 

Once the script has run successfully, you will need to:

  • Edit some dependencies for the role to come online,
  • Make sure your listener is registered in the OU (computer objects gets created in the same OU your SQL servers and Cluster are),
  • And configure the listener port in SQL.

To do that, follow the steps below:

Navigate back to Failover Cluster Manager. Expand Roles and then highlight your Availability Group (your role). On the Resources tab, right-click the listener name and click Properties.

Click the Dependencies tab. If there are multiple resources listed, verify that the IP addresses have OR, not AND, dependencies.

Note: If you changed the name of your IP address resource to something like “lstrIP”, that’s what you will see and need to set in the dependencies.

Right-click the listener name and click Bring Online.

Once the listener is online, from the Resources tab, right-click the availability group and click Properties.

Create a dependency on the listener name resource (the computer object, not the IP address resource’s name). You may have to take the availability group offline to change the dependency. Once the dependency is changed, click “OK” and bring it back online.

Note: In this situation, your primary replica might have changed; fail back to your primary replica before changing the port.

Launch SQL Server Management Studio and connect to the primary replica.

Navigate to Always On High Availability > Availability Groups > Availability Group Listeners.

You should now see the listener name that you created in Failover Cluster Manager. Right-click the listener name and click Properties.

In the Port box, specify the port number for the availability group listener by using the $EndpointPort you used earlier (1433 was the default), then click “OK”.

Test the connection to the listener

You can do that through telnet:

Telnet lstr-01 1433

Or, through PowerShell, you can test failover by moving the cluster group to the other node:

Move-ClusterGroup "Cluster Group" -Node Test-AZ-DB-02
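Another quick connectivity check from any domain-joined machine (a simple sketch using the built-in cmdlet):

Test-NetConnection -ComputerName lstr-01 -Port 1433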

Thanks for reading, and I hope you found this useful. And please feel free to add your comments below.

Deploy an Ultra High Availability MVC Web App on Microsoft Azure – Part 2

In the first post in this series we setup our scenario and looked at how we can build out an ultra highly available Azure SQL Database layer for our applications. In this second post we’ll go through setting up the MVC Web Application we want to deploy so that it can leverage the capabilities of the Azure platform.

MVC project changes

This is actually pretty straightforward – you can take the sample MVC project from Codeplex and apply these changes easily. The sample Github repository has the resulting project (you’ll need Visual Studio 2013 Update 3 and the Azure SDK v2.4).

The changes are summarised as:

  • Open the MvcMusicStore Solution (it will be upgraded to the newer Visual Studio project / solution format).
  • Right-click on the MvcMusicStore project and select
    Convert – Convert to Microsoft Azure Cloud Project.
    Converting a Project
  • Add new Solution Configurations – one for each Azure Region to deploy to. This allows us to define Entity Framework and ASP.Net Membership / Role Provider database connection strings for each Region. I copied mine from the Release configuration.
    Project Configurations
  • Add web.config transformations for the two new configurations added in the previous step. Right click on the web.config and select “Add Config Transform”. Two new items will be added (as shown below in addition to Debug and Release).
    Configuration Transforms
  • Switch the Role deployment to use a minimum of two Instances by double-clicking on the MvcMusicStore.Azure – Roles – MvcMusicStore node in Solution Explorer to open up the properties page for the Role. Change the setting as shown below.
    Set Azure Role Instance Count

At this stage the basics are in place. A few additional items are required and they will be very familiar to anyone who has had to run ASP.Net web applications in a load balanced farm. These changes all relate to application configuration and are achieved through edits to the web.config.

Configure Membership and Role Provider to use SQL Providers.


    <membership defaultProvider="SqlMembershipProvider" userIsOnlineTimeWindow="15">
      <providers>
        <clear/>
        <add name="SqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="AspNetDbProvider" applicationName="MVCMusicStoreWeb" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" passwordFormat="Hashed"/>
      </providers>
    </membership>
    <roleManager enabled="true" defaultProvider="SqlRoleProvider">
      <providers>
        <add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="AspNetDbProvider" applicationName="MVCMusicStoreWeb"/>
      </providers>
    </roleManager>

Add a fixed Machine Key (you can use a range of online tools to generate one for you if you want). This allows all Instances to handle Forms authentication and other shared state that will require encryption / decryption between requests.


<machineKey
 validationKey="E9D17A5F58DE897D9161BB8D9AA995C59102AEF75F0224183F1E6F67737DE5EBB649BA4F1622CD52ABF2EAE35F9C26D331A325FC9EAE7F59A19F380E216C20F7"
 decryptionKey="D6F541F7A75BB7684FD96E9D3E694AB01E194AF6C9049F65"
 validation="SHA1"/>

Define a new connection string for our SQL-based Membership and Role Providers


<add name="AspNetDbProvider" 
connectionString="{your_connection_string}" 
providerName="System.Data.SqlClient"/>

Phew! Almost there!

Last piece of the puzzle is to add configuration transformations for our two Azure Regions so we talk to the Azure SQL Database in each Region. Repeat this in each Region’s transform and replace the Azure SQL Database Server name with the one appropriate to that Region (note that your secondary will be read-only at this point).


<connectionStrings>
  <add name="MusicStoreEntities"
       xdt:Transform="SetAttributes" 
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=mvcmusicstore;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
    />
  <add name="AspNetDbProvider"
       xdt:Transform="SetAttributes" 
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=aspnetdb;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
    />
  </connectionStrings>

Build Deployment Packages

Now that we have a project ready to deploy we will need to publish the deployment packages locally ready for the remainder of this post.

In Visual Studio, right-click on the MvcMusicStore.Azure project and select Package as shown below.

Package Azure Solution

Choose the appropriate configuration and click ‘Package’.

Package with config choice.

After packaging is finished a Windows Explorer window will open at the location of the published files.

Setup Cloud Services

Now we have all the project packaging work out-of-the-way let’s go ahead and provision up a Cloud Service which will be used to host our Web Role Instances. The sample script below shows how we can use the Azure PowerShell Cmdlets to provision a Cloud Service. The use of an Affinity Group allows us to ensure that our Blob Storage and Cloud Service are co-located closely enough to be at a low latency inside of the Azure Region.

Note: Just about everything you deploy in Azure requires a globally unique name. If you run all these scripts using the default naming scheme I’ve included the chances are that they will fail because someone else may have already run them (hint: you should change them).
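The provisioning script itself lives in the sample repository; as a hedged stand-in, the key classic (ASM) cmdlets look like this. The names below are placeholders and, as noted above, need to be globally unique.

#Affinity group keeps the storage account and cloud service close together inside the Region
New-AzureAffinityGroup -Name "mvcmusicstore-east" -Location "Australia East"

#Storage account and cloud service names must be globally unique
New-AzureStorageAccount -StorageAccountName "mvcmusicstoreeaststore" -AffinityGroup "mvcmusicstore-east"
New-AzureService -ServiceName "mvcmusicstore-east" -AffinityGroup "mvcmusicstore-east"

#Repeat for the Secondary Region (e.g. Australia Southeast) with its own names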

Deploy Application to Cloud Services

Once the above Cloud Service script has been run successfully we have the necessary pieces in place to actually deploy our MVC application. The sample PowerShell script below does just this – it utilises the output of our packaging exercise from above, uploads the package to Azure Blob Storage and then deploys using the appropriate configuration. The reason we have two packages is that the web.config is deployed with the package, and making changes to it is not supported post-deployment.
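The full deployment script is in the sample repository; a hedged sketch of the key steps is below. The package and configuration file names depend on the Solution Configurations you created earlier, so treat them as placeholders.

#Point the subscription at the storage account the package will be uploaded to
Set-AzureSubscription -SubscriptionName "MySubscription" -CurrentStorageAccountName "mvcmusicstoreeaststore"

#Deploy the package and the Region-specific service configuration to the Production slot
New-AzureDeployment -ServiceName "mvcmusicstore-east" -Slot "Production" -Label "MvcMusicStore Primary" -Package ".\app.publish\MvcMusicStore.Azure.cspkg" -Configuration ".\app.publish\ServiceConfiguration.PrimaryEast.cscfg"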

Once the above script has finished successfully you should now be able to open a web browser and connect to the cloudapp.net endpoints in each Region. The Primary Region should give you full read/write access, whereas the Secondary Region will work but will likely throw exceptions for any action that requires a database write (this is expected behaviour). Note that these cloudapp.net endpoints resolve to a load balanced endpoint that fronts the individual Instances in the Cloud Service.

Configure Traffic Manager

The final piece of this infrastructure puzzle is to deploy Traffic Manager which is Azure’s service offering for controlling where inbound requests are routed. Traffic Manager is not a load balancer but can provide services such as least-latency and failover routing and as of October 2014 can now support nested profiles (i.e. Traffic Manager managing traffic for another Traffic Manager – all very Inception-like!)

For our purposes we are going to use a Failover configuration that will use periodic health checks to determine if traffic should continue to be routed to the Primary Endpoint or failover to the Secondary.

Notes:

When defining a Failover configuration the ordering of the Endpoints matters. The first Endpoint added is considered as the Primary (you can change this later if you wish though).

You might want to consider a different “MonitorRelativePath” and utilise a custom page (or view in MVC’s case) that performs some form a simple diagnostics and returns a 200 OK response code if everything is working as expected.
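A hedged sketch of the Failover profile with the classic cmdlets follows; the profile name, DNS prefix and endpoint addresses are placeholders.

$profile = New-AzureTrafficManagerProfile -Name "mvcmusicstore" -DomainName "mvcmusicstore.trafficmanager.net" -LoadBalancingMethod Failover -Ttl 30 -MonitorProtocol Http -MonitorPort 80 -MonitorRelativePath "/"

#Endpoint order matters for Failover: the first endpoint added is treated as the Primary
$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile -DomainName "mvcmusicstore-east.cloudapp.net" -Type CloudService -Status Enabled
$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile -DomainName "mvcmusicstore-southeast.cloudapp.net" -Type CloudService -Status Enabled

Set-AzureTrafficManagerProfile -TrafficManagerProfile $profile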

But wait, there’s more!

A free set of steak knives!

No, not really, but if you’ve read this far you may as well qualify for some!

There are a few important things to note with this setup.

  • It won’t avoid data loss: the nature of Geo-replication means there is a high likelihood you will not see all Primary Database transactions play through to the Secondary in the event of a failure in the Primary. The window should be fairly small (depending on how geographically dispersed your databases are), but there will be a gap.
  • Failover requires manual intervention: you need to use the Stop-AzureSqlDatabaseCopy Cmdlet to force termination of the relationship between the Primary and Secondary databases. Until you do this the Secondary is read-only. Use external monitoring to find out when the Primary service goes down and then either leverage the Azure REST API to invoke the termination or script the Cmdlet (see the example after this list). Note that once you break the copy process it isn’t automatically recreated.
  • Users will notice the cutover: there will be a combination of things they’ll notice depending on how long you wait to terminate the database copy. Active sessions will lose some data as they are backed by the database. User actions that require write access to the database will fail until you terminate the copy.
  • What about caching? We didn’t look at caching in this post. If we leverage the new Redis Cache we could theoretically setup a Slave in our Secondary Region. I haven’t tested this so your mileage may vary! (Would love to hear if you have)
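As a hedged example of the forced termination mentioned above (the server name placeholder follows the convention used elsewhere in this series):

#Force termination of the continuous copy so the Secondary becomes writable
Stop-AzureSqlDatabaseCopy -ServerName "{secondary_db_server}" -DatabaseName "mvcmusicstore" -ForcedTermination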

The big benefit of this setup is that you can quickly recover from a failure in one Region. You may choose to use a holding page as your Secondary failover and then manually manage the complete cutover to the secondary active application once you are satisfied that the outage in the Primary Region will be of a long duration.

You should be running monitoring of the setup and check that both Primary and Secondary cloudapp.net endpoints are healthy and that your Traffic Manager service is healthy. Any issue with the Primary cloudapp.net endpoint will be the trigger for you to intervene and potentially switch off the geo-replication.

Anyway, this has been a big couple of posts – thanks for sticking around and I hope you’ve found the content useful.

Deploy an Ultra High Availability MVC Web App on Microsoft Azure – Part 1

As public cloud platforms such as Microsoft Azure mature, it is becoming easier to build deployment architectures that are substantially resilient to faults in the cloud platform – faults which, thanks to that same maturity, are increasingly unlikely to ever eventuate!

We’ll take a look at how we can deploy an ultra highly available database-backed ASP.Net MVC Website using Microsoft Azure across this post and my next one.

Desired State

The diagram below shows what we will be aiming to achieve with our setup.

Ultra High Availability Design

This setup can best be summarised as:

  • Two Azure SQL Database Servers in two Regions utilising Active Geo-replication to provide a read-only replica in a secondary Azure Region.
  • Cloud Services in each Region containing dual Web Roles in an Availability Set. Web Roles are configured to communicate with the Azure SQL Database in their Region. The Primary Region will be read/write and the Secondary read-only.
  • A Traffic Manager instance deployed using the two Regional Cloud Services as Endpoints, using a Failover configuration.

Locally AND Geographically Redundant

If you don’t have a deep familiarity with how Azure Cloud Services and Azure SQL Database work then you may not realise the level of local (intra-Region) resiliency you get “out of the box” when you sign up. Here’s a summary based on the setup we are deploying:

  • Cloud Services with two instances will take advantage of Fault and Upgrade Domains to ensure that at least one Instance remains operational at all times (this is required to qualify for Azure Availability SLAs too).
  • Azure SQL Database runs three local replicas concurrently – one primary and two secondaries ensuring that intra-Region hardware failures are less likely to impact you.

Now this is probably more than sufficient for most businesses. In the years that I’ve worked in the public cloud space I’ve never seen an entire Region in any provider go completely dark. While it is highly unlikely to ever occur this is not the same as saying it won’t ever occur (hence this post!)

Costs

A word to the wise: these setups don’t come cheap (example: Azure SQL Database Premium P3 ~AUD4,267.49/mo in AU East). This should be expected given the level of redundancy and service availability you get. Having said this, in comparison with having to build this setup out on-premises you’d be hard pushed to beat the price points (and flexibility) you’ll get from a cloud deployment like this one.

Sample Code

In order to wrap your head around this post I’ve created a sample project hosted on Github that you can use to test out the design covered here. The project is a modified version of the sample MVC Music Store hosted on Codeplex. I had trouble convincing Codeplex to authenticate or register me, so consequently I’ve set this sample up on Github. We’ll cover code changes in more detail in the second part of this post.

Database Tier

We’re going to start at the database tier as we’ll require configuration information from our database deployment in order to successfully deploy our web tier. The MVC Music Store sample application requires us to deploy two databases: the core data model supporting the MVC Music Store product catalogue and shipping; and the standard ASP.Net SQL Membership and Role Provider database.

The ASP.Net databases can be deployed using aspnet_regsql.exe (if you have problems running it with Azure SQL Database check out this Connect item for help) but I’m going to use some SQL scripts from the mentioned Connect item so that I can manage creation of the database.

The MVC Music Store sample application can also be allowed to initialise its own database using the following snippet:

protected void Application_Start()
{
   System.Data.Entity.Database.SetInitializer(
       new MvcMusicStore.Models.SampleData()
       );
}

but in my case I want to control how and where the database is created so I’m using the supplied SQL script and commenting out the above snippet in my solution.

Get the show on the road

Firstly we’re going to initialise our Azure SQL Database Servers, Databases and geo-replication. Now we could do this via the standard Azure Management Portal but it’s much easier (and repeatable) if we use PowerShell.

The basic approach is (a hedged PowerShell sketch follows this list):

  • Setup Database Server in Primary Region (in our case Australia East) and open up the Firewall as required.
  • Setup Database Server in Secondary Region (in our case Australia Southeast) and open up the Firewall on it as well.
  • Create empty databases for the MVC store and ASP.Net Membership and Role Providers on the Primary server.
  • Enable geo-replication.
  • Create schemas and data in databases on Primary Server using SQL Server DDL.
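The setup script is in the sample repository; a hedged sketch of the classic Azure SQL cmdlets it relies on is shown below. The admin credentials and firewall ranges are placeholders, the server names are generated for you by Azure, and the DDL step is covered separately below.

$cred = Get-Credential   # SQL administrator login for both servers

#Primary and Secondary servers
$primary   = New-AzureSqlDatabaseServer -AdministratorLogin $cred.UserName -AdministratorLoginPassword $cred.GetNetworkCredential().Password -Location "Australia East"
$secondary = New-AzureSqlDatabaseServer -AdministratorLogin $cred.UserName -AdministratorLoginPassword $cred.GetNetworkCredential().Password -Location "Australia Southeast"

#Open the firewalls (0.0.0.0 - 0.0.0.0 allows Azure services; add your own ranges as needed)
New-AzureSqlDatabaseServerFirewallRule -ServerName $primary.ServerName -RuleName "AllowAzureServices" -StartIpAddress 0.0.0.0 -EndIpAddress 0.0.0.0
New-AzureSqlDatabaseServerFirewallRule -ServerName $secondary.ServerName -RuleName "AllowAzureServices" -StartIpAddress 0.0.0.0 -EndIpAddress 0.0.0.0

#Empty databases on the Primary server
New-AzureSqlDatabase -ServerName $primary.ServerName -DatabaseName "mvcmusicstore" -Edition Premium
New-AzureSqlDatabase -ServerName $primary.ServerName -DatabaseName "aspnetdb" -Edition Premium

#Active geo-replication: continuous copies to the Secondary server
Start-AzureSqlDatabaseCopy -ServerName $primary.ServerName -DatabaseName "mvcmusicstore" -PartnerServer $secondary.ServerName -ContinuousCopy
Start-AzureSqlDatabaseCopy -ServerName $primary.ServerName -DatabaseName "aspnetdb" -PartnerServer $secondary.ServerName -ContinuousCopy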

I’m a bit bummed as I was unable to do all this via PowerShell leveraging the Invoke-SqlCmd Cmdlet to run the DDL. It looks like there’s a known issue floating around that means the SqlPs and Azure PowerShell Modules have a bit of a spat over some shared code. Rather than get you to edit core PowerShell settings I’m just going to run the DDL using the Azure SQL Database console.

Once you’ve run the above script you’ll have your underlying SQL infrastructure ready to go. Now we need to build the schemas and populate with data. We’ll do this using the old Azure Management Portal so we can avoid having to install any SQL Server tooling locally if we don’t need to.

  1. Open up a browser and log into the Azure Management Portal and navigate to the SQL Server that is your Primary Server.
  2. On the SQL Databases page, click ‘Servers’ and highlight your primary server by clicking on the line anywhere other than the Server Name.
  3. Click ‘Manage’ in the footer. This will open the Silverlight-based admin console.
  4. Log into the server with the credentials you put into your PowerShell setup script.
  5. Select the empty database instance you want to run your DDL SQL script in and then click ‘Open’ in the navigation bar (your screen should look something like the sample below).
     Running a SQL Script
  6. Repeat for both databases (MVC Music Store and ASP.Net).
    The ASP.Net scripts can be found in the solution on Github. All you need to run is InstallCommon.Sql, InstallMembership.Sql and InstallRoles.Sql (in that order).

Once completed you have setup the foundation database tier for our sample application. The geo-replication will play out your DDL changes into the Secondary Region and you’re already in a good position now having your data in at least four different locations.

This completes the first post in this two post series. In our next post we will look at setting up the web tier.

How to fix 403 errors when managing Azure SQL Database from Visual Studio

I was recently trying to manage Azure SQL Databases via Visual Studio in a new Azure subscription and was unable to open the SQL Databases node at all and received the following error message.

Screenshot of Visual Studio error dialog.

The text reads:

Error 0: Failed to retrieve all server data for subscription ‘GUID’ due to error ‘Error code: 403 Message: The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription.’.

and my Server Explorer window looked like this:

How Server Explorer Looked

I must admit that I don’t often manage my Azure assets via Visual Studio so it had been a while since I’d used this tooling. I tried a few ways to get this to work and double checked that I had the right details for the subscription registered on my local machine. Storage worked fine, Virtual Machines worked fine… everything looked good except SQL Databases!

(At this point I’d say… hint: I should have read the error message more closely!)

After some additional troubleshooting it turns out that unlike many other Azure offerings, Azure SQL Database does not support OAuth-based connections and instead uses certificates (you know, like the error message says…).

Unfortunately, it turns out that if you have an expired or otherwise invalid certificate for any Azure subscription registered then Visual Studio will fail to enumerate SQL Database instances in the subscription you are currently using even if its certificate is fine.

The use of a subscription GUID isn’t that helpful when troubleshooting because I completely missed that the problematic certificate wasn’t even from the subscription I was currently trying to use!

You can fix this issue by managing your registered Azure subscriptions from within Visual Studio as follows.

Manage Certificates in Visual Studio

  • Right-click on the top Azure node in Server Explorer.
  • Select Manage Subscriptions… from the menu.
  • Click on the “Certificates” tab and find the Subscription with the GUID matching the error.
  • Click “Remove…” and then close the dialog.

You should now be able to open the Azure SQL Database node in Server Explorer and manage them as you expect!

HTH.