Connecting to MS SQL Server with MuleSoft

MuleSoft provides an extensive list of connectors which come in handy when implementing integration solutions. In many integration scenarios, we need to connect directly to a database to get or update data. MuleSoft provides a database connector which allows JDBC connectivity with relational databases to execute SQL operations such as Select, Insert, Update, Delete, and Stored Procedures.

However, the database connector only provides out-of-the-box connectivity with Oracle, MySQL, and Derby databases. To connect to Microsoft SQL Server, a few additional steps are required. In this post I will go through the required setup and show how to connect to a MS SQL Server database. I will implement some basic integration scenarios like querying data, updating data, and polling updated records from a SQL database.

For this post I will be using MuleSoft 3.7.2 EE and SQL Server 2014; however, you can follow the same steps to connect to other versions of SQL Server.

Pre-requisites to connect to MS SQL Server with MuleSoft

Because MS SQL Server is not one of the supported out-of-the-box databases, the following steps are required:

  • Download and install the Microsoft JDBC Drivers for SQL Server. As detailed in the install instructions at the link, download and run the installer. It will prompt for an installation directory, for which %ProgramFiles% is suggested. For my exercise I used the latest version (4.2).
  • Make sure TCP/IP is enabled in your SQL Server configuration as described here.
  • Add the required references to your MuleSoft project.
    • Right click on your project -> Build Path -> Add External Archives…

    • Select the corresponding jar file from the installation directory specified before (e.g. “C:\Program Files (x86)\Microsoft JDBC Driver 4.2 for SQL Server\sqljdbc_4.2\enu”).

    • Validate that the library is being referenced as shown in the figure.

Now we are ready to create connections to SQL Server from AnyPoint Studio.

Create a Global Configuration Element for your SQL Server database

Once MS SQL Server JDBC libraries are referenced by your MuleSoft project, you can create a Configuration Element to connect to a SQL Server database. For this exercise I will be using the AdventureWorks2014 Sample Database. The steps to create the global configuration element are described as follows:

  • Create a new Generic Database Configuration.

  • Set the Generic Database Configuration
    • Specify a Name.
    • Specify your connection URL using the format: jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password}
    • Specify the Driver Class Name as: com.microsoft.sqlserver.jdbc.SQLServerDriver
    • Test connection.

The resulting XML of the Generic Database Global Configuration element should be like the following:

<db:generic-config name="Generic_Database_Configuration"
    url="jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password}"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
    doc:name="Generic Database Configuration"/>
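A matching properties file (loaded, for example, via a Spring <context:property-placeholder location="mssql.properties"/> element) might look like this; all values below are illustrative:

```properties
# Illustrative connection properties for the Generic Database Configuration
mssql.server=localhost
mssql.port=1433
mssql.database=AdventureWorks2014
mssql.user=muleuser
mssql.password=secret
```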

Note that I am using a Property Placeholder to store my connection properties. Once the Global Configuration Element has been created, we are ready to start interacting with the database. In the following sections, I will show how to implement three common integration scenarios with a SQL Server database:

  1. Querying data from a database.
  2. Updating data in a database.
  3. Polling updated records from a database.

Querying Data from a SQL Server Database

The first scenario we will go through is querying data from a SQL Server database based on an HTTP GET request. The requirement for this scenario is to get all employees from the database with a first name containing a specified string. To implement this scenario, I have created the following flow containing the steps listed below:

  • HTTP Listener: which expects a GET call with a query param specifying the name to search for.
  • Extract Query Param: a Set Flow Variable transformer which extracts the query param specifying the name to search for. Because I will be using a LIKE operator in my SQL query, I default the variable to '%%' to bring back all available records when no filter is specified. I set the variable value as follows:
    #[message.inboundProperties.'http.query.params'.name == empty ? '%%' : '%' + message.inboundProperties.'http.query.params'.name + '%']
  • Get Employees from DB: Database connector to get employees matching the criterion configured as follows:
    • Connector Configuration: the Generic Database Configuration Element configured previously.
    • Operation: Select
    • Query Type: Parametrized

    • Parametrized query: to query employees based on the filter criterion, I will use a simple SELECT statement with the flow variable as the filter, as shown below:

      SELECT  Person.BusinessEntityID
      FROM    Person.Person
      JOIN    Person.EmailAddress
           ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
      JOIN    HumanResources.Employee
           ON Person.BusinessEntityID = Employee.BusinessEntityID
      WHERE    Person.FirstName LIKE #[flowVars.employeeName]

  • Convert Object to JSON: to convert the default java.util.LinkedList object to JSON.

At the end of the flow, the JSON payload will be returned to the caller. The resulting configuration XML is similar to this (paths and attribute values simplified):

<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081"
   doc:name="HTTP Listener Configuration"/>
<flow name="getEmployees_Flow">
   <http:listener config-ref="HTTP_Listener_Configuration" path="/employee"
      doc:name="Listener GET Employees"/>
   <set-variable variableName="employeeName"
      value="#[message.inboundProperties.'http.query.params'.name == empty ? '%%' : '%' + message.inboundProperties.'http.query.params'.name + '%' ]"
      doc:name="Extract Query Param"/>
   <db:select config-ref="Generic_Database_Configuration" doc:name="Get Employees from DB">
      <db:parameterized-query><![CDATA[
         SELECT Person.BusinessEntityID
         FROM Person.Person
         JOIN Person.EmailAddress ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
         JOIN HumanResources.Employee ON Person.BusinessEntityID = Employee.BusinessEntityID
         WHERE Person.FirstName LIKE #[flowVars.employeeName]
      ]]></db:parameterized-query>
   </db:select>
   <json:object-to-json-transformer doc:name="Convert Object to JSON"/>
</flow>

Now I should be ready to test this. I will use Postman to call my endpoint.

When I call the endpoint without specifying a name using this URL http://localhost:8081/employee, I get a JSON payload with all employees.

When I call the endpoint specifying a name as a query param using this URL: http://localhost:8081/employee?name=sha, I get a JSON payload containing 3 employee records with a first name containing the string “sha”.

All good with my first scenario.

Updating Data in a SQL Server Database

In this scenario, I will update records in a database based on an HTTP PATCH request. The requirement is to update an employee: the Employee ID is sent as a URI param, and the body carries the values to update.

To achieve this, I have created the following flow containing the steps listed below:

  • HTTP Listener: which expects a PATCH call with a URI param specifying the EmployeeID to update and the values to be updated specified in the body as a JSON payload. The expected payload looks like this:

       {
           "FirstName": "Paco",
           "MiddleName": "",
           "LastName": "de la Cruz",
           "LoginID": "adventure-works\\paco",
           "EmailAddress": ""
       }
  • Extract URI Param: A Set Flow Variable Transformer which extracts the URI param specifying the EmployeeID to update.
  • Update Employee: Database connector to update the employee configured as follows:
    • Connector Configuration: the Generic Database Configuration Element configured previously.
    • Operation: Stored Procedure
    • Query Type: Parametrized
    • Parametrized query (I previously created this stored procedure on the AdventureWorks2014 database):

      This is the syntax we have to use to call the stored procedure from MuleSoft using a parametrised query.

      {CALL uspUpdateEmployee (:BusinessEntityID, :FirstName, :MiddleName , :LastName, :EmailAddress, :LoginID)}

      Note the configured parameters in the picture above.

      Here, it is worth mentioning that if you are using MuleSoft version 3.7.0 or 3.7.1, you might want to update to 3.7.2 or higher to avoid this bug ("Database does not supports streaming on stored procedures (java.lang.IllegalArgumentException)") when calling parametrised stored procedures.

  • Choice: return the corresponding message to the caller, notifying whether the employee was updated or not.
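As a reference, a hypothetical sketch of uspUpdateEmployee consistent with the CALL signature above could look like the following; the column sizes and the exact tables updated are illustrative, not the post's actual procedure:

```sql
-- Hypothetical sketch only: updates the three AdventureWorks tables
-- touched by the fields in the request payload.
CREATE PROCEDURE uspUpdateEmployee
    @BusinessEntityID INT,
    @FirstName        NVARCHAR(50),
    @MiddleName       NVARCHAR(50),
    @LastName         NVARCHAR(50),
    @EmailAddress     NVARCHAR(50),
    @LoginID          NVARCHAR(256)
AS
BEGIN
    UPDATE Person.Person
    SET    FirstName = @FirstName,
           MiddleName = @MiddleName,
           LastName = @LastName
    WHERE  BusinessEntityID = @BusinessEntityID;

    UPDATE Person.EmailAddress
    SET    EmailAddress = @EmailAddress
    WHERE  BusinessEntityID = @BusinessEntityID;

    UPDATE HumanResources.Employee
    SET    LoginID = @LoginID
    WHERE  BusinessEntityID = @BusinessEntityID;
END
```

Note that the flow's Choice router inspects the update count returned to the connector, so the procedure should not suppress row counts (leave NOCOUNT off).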

The resulting configuration XML is similar to this (parameter bindings simplified):

<flow name="updateEmployees_Flow">
   <http:listener config-ref="HTTP_Listener_Configuration" path="/employee/{id}"
      doc:name="Listener PATCH Employee"/>
   <set-variable variableName="businessEntityID"
      value="#[message.inboundProperties.'http.uri.params'.id == empty ? 0 : message.inboundProperties.'http.uri.params'.id]"
      doc:name="Extract URI Param"/>
   <db:stored-procedure config-ref="Generic_Database_Configuration" doc:name="Update Employee">
      <db:parameterized-query><![CDATA[{CALL uspUpdateEmployee (:BusinessEntityID, :FirstName, :MiddleName, :LastName, :EmailAddress, :LoginID)}]]></db:parameterized-query>
      <db:in-param name="BusinessEntityID" value="#[flowVars.businessEntityID]"/>
      <!-- remaining in-params (FirstName, MiddleName, etc.) bound from the JSON payload -->
   </db:stored-procedure>
   <choice doc:name="Choice">
      <when expression="#[payload.values().toArray()[0] == 1]">
         <set-payload value="#['Employee: ' + flowVars.businessEntityID + ' has been updated.']"
            doc:name="Employee has been updated"/>
      </when>
      <otherwise>
         <set-payload value="#['Employee: ' + flowVars.businessEntityID + ' was not found.']"
            doc:name="Employee was not found"/>
      </otherwise>
   </choice>
</flow>

Using Postman, I will test my new endpoint by making a PATCH call and adding the "Content-Type" header with the value "application/json; charset=UTF-8". I will send the payload below to update the record with EmployeeID = 1:

   {
       "FirstName": "Paco",
       "MiddleName": "",
       "LastName": "de la Cruz",
       "LoginID": "adventure-works\\paco",
       "EmailAddress": ""
   }

When I call the endpoint using this URL http://localhost:8081/employee/1, I get the message that the record has been updated. When I check the database, I am now the new CEO of Adventure Works.

When I call the endpoint using this URL http://localhost:8081/employee/0, I get the message that the Employee was not found.

All done with this scenario.

Polling updated records from a SQL Server Database

The last integration scenario is very common, particularly when implementing the Pub-Sub pattern, in which changes in a source system have to be published to one or more subscribers. The good news is that MuleSoft supports polling with watermarks, which comes in very handy and is easy to implement. Below I explain how to do this with a SQL database.

I created the following flow with the steps listed below to implement the polling scenario.

  • Poll Scope: the Poll scope requires changing the Processing Strategy of the flow to "synchronous".

    The polling scope won't work if you leave the default processing strategy, which is asynchronous. If you get the error below, you know what to do:

    Message : Watermarking requires synchronous polling
    Code : MULE_ERROR-344

    For this scenario I configured the polling to occur once a day. To get only updated records, I am implementing watermarking utilising the ModifiedDate in the payload as shown below.

  • Select Updated Records: inside the Poll scope, I implemented a database connector to select the updated records as shown in the figure. To get only the updated records, I filter for those whose ModifiedDate is greater (later) than the flow variable DateTimeWatermark, the watermark created on the Poll scope.

  • Filter empty payload: To stop the flow when no updated records are obtained, using the following expression:

    #[payload.size() > 0]

  • Convert Object to JSON: to get a JSON payload out of the result set.
  • Logger: just as a way to test the flow.

The resulting configuration XML is similar to this (scheduler frequency and watermark expressions simplified):

<flow name="publishUpdatedEmployees_Flow" processingStrategy="synchronous">
   <poll doc:name="Poll">
      <fixed-frequency-scheduler frequency="1" timeUnit="DAYS"/>
      <watermark variable="DateTimeWatermark"
         default-expression="#[server.dateTime.format('yyyy-MM-dd HH:mm:ss.SSS')]"
         selector="MAX" selector-expression="#[payload.ModifiedDate]"/>
      <db:select config-ref="Generic_Database_Configuration" doc:name="Select Updated Employees">
         <db:parameterized-query><![CDATA[
            SELECT Person.BusinessEntityID, Person.ModifiedDate
            FROM Person.Person
            JOIN Person.EmailAddress
              ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
            JOIN HumanResources.Employee
              ON Person.BusinessEntityID = Employee.BusinessEntityID
            WHERE Person.ModifiedDate > CAST(#[flowVars.DateTimeWatermark] AS DATETIME)
            ORDER BY BusinessEntityID
         ]]></db:parameterized-query>
      </db:select>
   </poll>
   <expression-filter expression="#[payload.size() > 0]" doc:name="Filter empty payload"/>
   <json:object-to-json-transformer doc:name="Object to JSON"/>
   <logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>

To test this scenario, I called the API to update data on my SQL database previously implemented (PATCH request to http://localhost:8081/employee/{id}) with different IDs, so different employees were updated. Then, I ran my solution and the polling started getting only the updated records. As simple and beautiful as that!

In this post I have shown you how to prepare your environment and Anypoint Studio to work with Microsoft SQL Server, how to create a Global Configuration Element for a SQL database, how to query and poll changes from a SQL database, and how to update records as well. I hope you now have a better idea of how to connect to MS SQL Server from your MuleSoft solutions. Thanks for reading!

Highly Available SQL 2012 across Azure VNET (Part 2)

Part 1 can be found here.

In this Part 2 we will discuss:

  • Create DC on Azure and confirm VNET to VNET connectivity
  • SQL VMs
  • Configure WSFC and lastly configure AAG

DC and Connectivity VNET to VNET

First things first, we need VMs for the Domain Controller (DC) and SQL Server 2012. I used my script below to create a few VMs.

I created 2 DCs, one on each VNET: AZSEDC001 and AZUSDC001, and registered both as DNS servers on Azure. The next step is to allow ICMP in the Windows Firewall (wf.msc), as we are going to test ping between both servers.




Great. Now we have confirmed connectivity between both DCs, and between SEVNET and USVNET.


I created 2 SQL VMs (AZSEDB001 and AZSEDB002) under one Cloud Service (AZSEDB) on the Azure-Backend subnet of SEVNET, and domain-joined both SQL servers.

Configure WSFC

For this scenario, I created three extra accounts on AD:

1. kloud\kloudinstall – for the failover cluster and AG. Grant it Allow for Read all properties and Create Computer objects in AD, and assign local admin permissions on both SQL Servers.

2. kloud\svcsql1 and kloud\svcsql2

Next, add the Failover Clustering feature on both servers and install the hotfix for Windows 2012 cluster-based nodes.

1. Create the WSFC Cluster:


2. Create a multi-node cluster on azsedc001 (add all VMs using the wizard; it is smart enough to detect the multi-subnet configuration), and do not choose the option requiring support from Microsoft for this cluster.


3. Configure a File Share Witness quorum on another machine; I configured it on the SEVNET DC.
4. Change the cluster IP address (WSFC will use the azsedc001 IP) to an unused IP on each subnet: one for SEVNET and one for USVNET.
5. Bring the cluster online:
You can test failover to USVNET by using PowerShell command below:
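For example, with the FailoverClusters PowerShell module (node name illustrative):

```powershell
# Illustrative: move the core cluster group to the node in USVNET to test a failover
Import-Module FailoverClusters
Move-ClusterGroup -Name "Cluster Group" -Node "AZUSDB001"
```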

Click here for more details regarding multi-subnet WSFC

Configure AAG

1. Launch wf.msc to allow firewall inbound rules (all SQL Servers). Program: %ProgramFiles%\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
2. Enable AlwaysOn Availability Groups (all SQL servers): launch SQL Server Configuration Manager > SQL Server Services > right-click SQL Server (MSSQLSERVER) > Properties > tick Enable AlwaysOn Availability Groups > restart the service



3. Launch SQL Server Management Studio. Add a new security login for NT AUTHORITY\SYSTEM; under Securables, Grant: Alter any availability group, Connect SQL, and View server state. Add the installer account with the SysAdmin server role.


4. Change the SQL service account from NT Service\MSSQLSERVER to the domain accounts. In this case: svcsql1 for AZSEDB001, svcsql2 for AZSEDB002, and svcsql3 for AZUSDB001.



AAG Steps:

1. Attach an extra disk on AZSEDB001, format the drive, and create a folder named backup. Share the backup folder as below:



2. Go to AZSEDB001, Run SQL Management Studio and create new Database: kloud1.

3. Take a full backup of the database: right-click the kloud1 database > Tasks > Back Up. Remove the default destination and set it to \\azsedb001\backup\kloud1.bak



4. Take a transaction log backup, using the same destination file path.

5. Restore the full and transaction log backups on azsedb002 and azusdb001. In SQL Server Management Studio (SSMS), right-click Databases and select Restore Database. Click Device, add the backup media with the backup file location \\azsedb001\backup, and choose the backup file kloud1.bak


6. Click Options, select Restore with No Recovery in Recovery State, and then click OK to restore the database.


7. Now the fun stuff. We will run the AAG wizard: right-click AlwaysOn High Availability in SSMS and follow the wizard.


8. In the AAG wizard's Specify Replicas page, follow the instructions below.

What we have here: two replicas – one (AZSEDB002) in SEVNET and one (AZUSDB001) in USVNET. The detailed configuration:


Note: AZUSDB001 is configured with asynchronous data replication, since AZUSDB001 is hosted hundreds of miles away from the Southeast Asia data centre and latency would hurt performance.

9. On the Select Initial Data Synchronization page, select Join only and click Next, since we have already done the backup and restore operations – the recommended best practice, especially for enterprise databases.

10. Follow the wizard – typical click, click, next. Ignore the listener configuration warning. Finish.

The AAG dashboard by now:


More details can be found here.

Configure the Listener:

Next we will create the AAG listener:

1. Create load-balanced Azure VM endpoints for each Azure VM

2. Install KB2854082 on Windows Server 2008 R2 and Windows Server 2012 cluster nodes.

3. Open the firewall to allow inbound traffic on port 59999, specified earlier in Step 1.

4. Create the AG listener:
Open Failover Cluster Manager > Expand the AAG cluster name > Roles >
Right click the AG name and select Add Resource > Client Access Point
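The load-balanced endpoints from step 1 can be created with the classic service-management cmdlets; a sketch with illustrative endpoint and load-balancer set names (probe port 59999 as above):

```powershell
# Illustrative: add a load-balanced endpoint for the listener on a SQL VM in the cloud service
Get-AzureVM -ServiceName "AZSEDB" -Name "AZSEDB001" |
    Add-AzureEndpoint -Name "ListenerEndpoint" -Protocol tcp `
        -LocalPort 1433 -PublicPort 1433 -LBSetName "ListenerLB" `
        -ProbePort 59999 -ProbeProtocol tcp -DirectServerReturn $true |
    Update-AzureVM
```

Repeat for AZSEDB002; the USVNET replica gets its own endpoint in its own cloud service.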




Click the Resources tab, right-click the listener > Properties, and note the IP Address Name and Network Name.

Get the Cloud Service VIP on both SEVNET and USVNET, then run the script below on AZSEDB001.
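The script follows the standard pattern of pointing the listener's cluster IP resource at the cloud service VIP with a probe port; a sketch with illustrative names and addresses:

```powershell
# Illustrative: bind the listener IP resource to the cloud service VIP
# (resource/network names and IPs are placeholders; probe port matches the endpoints)
$ClusterNetworkName = "Cluster Network 1"
$IPResourceName     = "AAG Listener IP Address"   # the IP resource noted above
$CloudServiceVIP    = "191.0.2.10"                # VIP of the cloud service in this VNET

Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"              = $CloudServiceVIP
    "ProbePort"            = "59999"
    "SubnetMask"           = "255.255.255.255"
    "Network"              = $ClusterNetworkName
    "OverrideAddressMatch" = 1
    "EnableDhcp"           = 0
}
```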

Once completed:


Create a dependency on the listener name resource: right-click the Availability Group and click Properties.


Launch SSMS > go to AlwaysOn High Availability > Availability Groups > AAG listener name > Properties, and specify Port: 1433.

And that's it. We have a highly available SQL 2012 AAG across Azure VNETs.

Follow this link for more details on how to configure AlwaysOn in Azure.



Highly Available SQL 2012 across Azure VNET (Part 1: VNET Peering)

Just over a year ago, Microsoft announced support for SQL Server AlwaysOn Availability Groups (AAG) on Microsoft Azure IaaS. Last month, Microsoft announced support for SQL AAG between Azure regions. This is great news for a great technology like SQL Server 2012 in high availability and disaster recovery scenarios. SQL AAG was released in SQL 2012 and enhanced in SQL 2014; AAG will detect anomalies which impact SQL availability. We will discuss how to do this in two blog posts:

  • Part 1: Design SQL 2012 AAG across Azure VNET and how to create Microsoft Azure VNET to VNET peering
  • Part 2: SQL, WSFC, configure quorum and voting (SQL), and configure AAG

Part 1: SQL 2012 AAG across Azure VNET

SQL 2012 AAG is designed to provide high availability for SQL databases, and Azure IaaS is a great place for this technology to live. There are a few benefits of using Azure IaaS for this scenario:

  • Security features from Azure as a cloud provider. The security whitepaper can be found here
  • Azure VM security extensions, which means we can rest assured that when a VM is deployed, it is protected from day 1. Details can be found here
  • Azure ILB to provide a load balancing solution
  • 99.90% SLA for VNET connectivity (VPN). This feature is backed by two "hidden" Azure VMs in an active-passive configuration
  • ExpressRoute (MPLS) for higher bandwidth requirements – we won't discuss or use this feature in these blog posts

The architecture components for this scenario: two VNETs in two different regions to avoid a single point of region failure. We will call these VNETs SEVNET (Southeast Asia region) and USVNET (US West region); they will be peered. A DC on each VNET provides AD and DNS services; the first DC in the Southeast Asia region will also be used as the File Share Witness. There are 3 SQL Servers in the AAG: 2 in SEVNET in an Azure Availability Set, and 1 in USVNET. The constraints for this scenario:

  • A Cloud Service cannot span Azure VNETs. For this scenario, two Cloud Services will be used for the SQL VMs
  • An Azure Availability Set (AS) cannot span VNETs or Cloud Services. Two SQL Availability Sets will be deployed
  • Azure ILB cannot span VNETs. Only the primary and secondary SQL will be load balanced, in SEVNET

Diagram below illustrates the architecture:


The diagram above shows SQL AAG configured across two Azure VNETs. This configuration gives resiliency against a full Azure region failure. AAG will be configured with two replicas (primary at SEVNET, one replica at SEVNET for automatic failover, and the other replica across regions at USVNET configured for manual failover and disaster recovery in case of a region failure at SEVNET). The listener will be configured on SEVNET to route connections to the primary replica. The scenario above also allows offloading read workloads from the primary replica to readable secondary replicas in the Azure region closer to the source of the read workloads (for example, for reporting, BI, or backup purposes).

Microsoft Azure VNET to VNET Peering

Now let's create the 2 VNETs in the Southeast Asia region and US West region. Below is the configuration:

  • SEVNET | Southeast Asia Region | Storage: sevnetstor | Address Space: | DNS:
  • USVNET | US West Region | Storage: usvnetstor | Address Space: | DNS:

We will use a Regional Virtual Network instead of an Affinity Group for this scenario, which will enable us to use ILB in the future. My colleague Scott Scovell wrote a blog about this a while ago. Create 2 storage accounts:


Create the DNS server entry – I registered it with the DC name AZSEDC001.

Create VNET Peering

We will use the Azure GUI to create the VNET-to-VNET peering. Create the first VNET in the SE Asia region: go to New > Network Services > Virtual Network > Custom Create > enter the VNET name SEVNET and select region Southeast Asia > Next > select DNS server AZSEDC001 > check Configure a site-to-site VPN > on Local Network choose Specify a new Local Network > Next



Enter the name of the local network as USVNET and specify the address space. For the VPN Device IP Address, just use a temporary one; we will replace it later with the actual public IP address of the gateway.


Next, we will configure the IP range of SEVNET. The important bit: click Add Gateway Subnet.


Next, we need to configure USVNET the same way: configure site-to-site connectivity with SEVNET as the local network, using its address space. Both VNETs will look like below:


Next, we need to create Dynamic Routing VPN gateways for both VNETs. Static routing is not supported.


Once completed, get the gateway IP address for both VNETs and replace the temporary VPN IP addresses on the local networks with the actual gateway IP addresses we just obtained.



The last step: set the IPsec/IKE pre-shared keys for both VNETs. We will use Azure PowerShell for this configuration. First we will get the pre-shared keys to be used in our PowerShell script.


Please ensure you are on the right subscription. It is always a good habit to run the Select-AzureSubscription -Default cmdlet before executing an Azure PowerShell script.
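The key is then applied to both gateways with the service-management cmdlets; a sketch (subscription name and key value illustrative – both sides must use the same key):

```powershell
# Illustrative: set the same IPsec/IKE pre-shared key on both sides of the VNET-to-VNET connection
Select-AzureSubscription -SubscriptionName "Kloud Solutions" -Default
Set-AzureVNetGatewayKey -VNetName "SEVNET" -LocalNetworkSiteName "USVNET" -SharedKey "A1b2C3d4E5"
Set-AzureVNetGatewayKey -VNetName "USVNET" -LocalNetworkSiteName "SEVNET" -SharedKey "A1b2C3d4E5"
```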


And that's it! We should see the successful VNET-to-VNET peering:


In Part 2 we will deep-dive into configuring the SQL AAG across both VNETs.

New backup to Azure feature in SQL Server 2014

SQL Server 2014 went RTM a couple of weeks ago and today it hits general availability for on-premises users via MSDN download, and for the cloud, where you can now spin one up as a VM from the Azure image gallery.  So let’s saddle up and take Microsoft’s new Pegasi workhorse for a flying gallop through the hybrid cloud. Giddyup!


First to download the SQL Server 2014 Developer Edition (x64) from MSDN.  It’s a hefty 2.6 GB image, meaning it’s time to put the new Kloud HQ 100 Mbps fibre link through its paces… four minutes later and we’re done, nice.  If you don’t have access to an MSDN subscription there’s also a fully featured 180 day evaluation version available for your convenience.

Installation of a default instance “on-premises” to my steadfast 4 GB MacBook Air is smooth, uneventful and very much like every other preceding version of SQL Server for the last 15 years.


Now to fire up the new SQL Server Management Studio and connect to my shiny new instance.  I’m going to need a database, so rather than spending time creating my own I’m going to download and attach the sample AdventureWorks database from Microsoft.


Backup to Azure Blob Storage

The first new cloud feature I want to try is backing up directly to Azure.  I’m guessing this’ll be an additional option in the Back Up Database dialog, so that’s where I’ll head.  Already there’re a couple of tantalising new options further down the context menu to deploy my database into the Azure SQL Database service or a virtual machine.  I’ll come back to explore these in a later blog post.


It’s not immediately obvious where to select Azure as the backup destination, but expanding the Back up to dropdown reveals a new option for backing up to a URL.  To me, URL feels like a strange hypernym given the address must point to an Azure storage blob endpoint – maybe a Cloud media type would’ve been more descriptive.


Before going any further at this stage, I’ll need to create a new Azure storage account and blob container to hold my database backups.  I could do this through the Azure management portal but PowerShell is quicker and more fun.

New-AzureStorageAccount `
     -StorageAccountName "sql2014kloud" `
     -Location "Southeast Asia"

Set-AzureSubscription `
     -SubscriptionName "Kloud Solutions" `
     -CurrentStorageAccount "sql2014kloud"

New-AzureStorageContainer -Name "sqlbackups"

Now that’s done, I’ll continue configuring my Azure backup destination.  I’ll stick with the default backup filename, then move on to authentication where again the UI is slightly unintuitive. The SQL credential field refers to an object SQL Server uses to store authentication information for connecting to an external resource, in this case an Azure account.  I’ll click Create to set up a new one.


These credentials can be either an Azure management certificate or a storage access key, but the Create Credential dialog only offers the certificate option. Here, I can either create my own self-signed management certificate using makecert.exe and upload the public key to Azure, or, much easier, have Azure automatically create a management certificate and then download it in a publishing profile, which is what I'll do.

To generate and download the publish profile settings, I'll go directly to the Azure Management Portal, but I could also invoke the Get-AzurePublishSettingsFile cmdlet to launch a browser to the same page.  This prompts a file called [Subscription].[Date]-credentials.publishsettings to start downloading, which contains my new management certificate and details of all the Azure subscriptions it's linked to. It's important to look after this file carefully because it's an unsecured key to all my Azure subscriptions.

Next, I’ll Browse to my publish settings file from the Create Credential window and select an Azure subscription from the dropdown.  Now I can open the storage account dropdown, triggering SQL Server to verify my management certificate with Azure and retrieve a list of storage accounts, from which I’ll choose the sql2014kloud account created above.


Lastly, I’ll choose the sqlbackups storage container I created earlier, and we’re good to go.


I’ll click OK to start the backup, and a minute or so later it’s finished.
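The same backup can also be scripted in T-SQL; a sketch using the storage account and container created above (the credential name, secret placeholder, and backup file name are illustrative):

```sql
-- Create a SQL credential holding the storage account name and its access key
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'sql2014kloud',
     SECRET   = '<storage access key>';

-- Back up the database directly to the blob container
BACKUP DATABASE AdventureWorks
TO URL = 'https://sql2014kloud.blob.core.windows.net/sqlbackups/AdventureWorks.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
```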


Restoring from Azure Blob Storage

To make sure the backup exists in blob storage, I'll use PowerShell to list the contents of my sqlbackups Azure blob container.

Get-AzureStorageBlob -Container "sqlbackups"


And there it is, in my Azure storage account. To verify the backup worked properly, I’ll drop the AdventureWorks database completely and attempt to restore it again from Azure.


The Restore Database feature now has a new device media type of URL, which I’ll select, then Add my Azure storage account:


For this, I need my Azure storage access key, which is available from the Azure management portal or can be easily grabbed using PowerShell.

Get-AzureStorageKey -StorageAccountName "sql2014kloud"

Now I’ll select the storage account where I saved my backup, and paste in my storage account key.


Once connected to my storage account I can see the backup file in the blob container and select it to be restored.


And it’s done, easy as that, my AdventureWorks database is fully restored to my on-premises SQL Server instance directly from a blob stored in Azure.
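For reference, the scripted equivalent of this restore looks like the following (URL, credential name, and file name are illustrative, matching a backup taken to that container):

```sql
-- Restore directly from the blob, replacing the local database if it exists
RESTORE DATABASE AdventureWorks
FROM URL = 'https://sql2014kloud.blob.core.windows.net/sqlbackups/AdventureWorks.bak'
WITH CREDENTIAL = 'AzureBackupCredential', REPLACE, STATS = 10;
```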



No more messing about with physical backup disks, or even worse, tapes.  Although the SSMS user interface is a bit clunky in places, everything is scriptable with T-SQL, making the option to back up and restore a database directly to Azure blob storage very convenient.  Each Azure storage account can hold up to 200 TB, with a maximum blob size of 1 TB, giving me a lot of low-cost, off-site, geo-redundant archive storage at my fingertips.