Secure Azure Virtual Network Defense In Depth using Network Security Groups, User Defined Routes and Barracuda NG Firewall

Security Challenge on Azure

There are a few common security-related questions that come up when we start planning a migration to Azure:

  • How can we restrict ingress and egress traffic on Azure?
  • How can we route traffic on Azure?
  • Can we have a firewall, Intrusion Prevention System (IPS), Network Access Control, Application Control and Anti-Malware on an Azure DMZ?

The intention of this blog post is to answer the above questions using the following Azure features, combined with a security virtual appliance available on the Azure Marketplace:

  • Azure Virtual Network (VNET)
  • Azure Network Security Groups (NSGs)
  • Azure Network Security Rule
  • Azure Forced Tunnelling
  • Azure Route Table
  • Azure IP Forwarding
  • Barracuda NG Firewall available on Azure Marketplace

One of the most common attacker profiles is the script kiddie (also known as skiddie, script bunny or script kitty). Script kiddie attacks have always been among the most frequent, and they still are. However, attacks have evolved into something more advanced, sophisticated and far more organized. The diagram below illustrates the evolution of attacks:

evolution of attacks

 

From the lowest sophistication level to the most advanced, the main target of these attacks is our data. Data loss = financial loss. Securing our cloud environment is a responsibility we share with our cloud provider. This blog post will focus on the Azure environment.

Defense in Depth

According to the SANS Institute, defense in depth is the concept of protecting a computer network with layered defensive mechanisms. There are various defensive mechanisms and countermeasures we can use to protect our Azure environment, because there are many attack scenarios and attack methods available.

In this post we will use a combination of Azure Network Security Groups to establish the Security Zones discussed in my previous blog post, deploy a network firewall including an Intrusion Prevention System on our Azure network to implement an additional high-security layer, and route traffic through our security kit. In the Secure Azure Network post we learned how to establish a simple Security Zone model on our Azure VNET. The underlying concept behind the zone model is an increasing level of trust from the outside into the centre. On the outside is the Internet: zero trust, which is where the script kiddies and other attackers reside.

The diagram below illustrates the simple scenario we will implement on this post:

Barracuda01

There are four main configuration tasks we need to complete in order to establish the solution in the diagram above:

  • Azure VNET Configuration
  • Azure NSG and Security Rules
  • Azure User Defined Routes and IP Forwarding
  • Barracuda NG Firewall Configuration

In this post we will focus on the last two items. This tutorial link will assist readers in creating an Azure VNET, and my previous blog post will assist readers in establishing Security Zones using Azure NSGs.

Barracuda NG Firewall

The Barracuda NG Firewall fills the functional gaps between cloud infrastructure security and a Defense-in-Depth strategy by providing protection where our applications and data reside on Azure, rather than solely where the connection terminates.

The Barracuda NG Firewall can intercept all Layer 2 through 7 traffic and apply policy-based controls, authentication, filtering and other capabilities. Just like its physical counterpart, the Barracuda NG Firewall running on Azure has traffic management and bandwidth optimisation capabilities.

The main features:

  • PAYG – Pay as you go / BYOL – Bring your own license
  • ExpressRoute Support
  • Network Firewall
  • VPN
  • Application Control
  • IDS – IPS
  • Anti-Malware
  • Network Access Control Management
  • Advanced Threat Detection
  • Centralized Management

The features above are necessary to establish a virtual DMZ in Azure and implement our Defense-in-Depth and Security Zoning strategy.

Choosing the right size of Barracuda NG Firewall determines the level of support and throughput for our Azure environment. Details of the datasheet can be found here.

I wrote the handy little script below to deploy the Barracuda NG Firewall Azure VM with two additional Ethernet interfaces:
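The original script was embedded in the blog, so here is a minimal sketch of such a deployment using the ASM-era Azure PowerShell module. The subscription, cloud service, VNET, subnet and credential names are assumptions for illustration, as is the Marketplace image filter; the static IPs follow the addresses used later in this post, and multi-NIC deployment requires a VM size that supports it (for example ExtraLarge / A4, which supports up to four NICs).

    # Sketch only: all names below are illustrative assumptions
    Select-AzureSubscription -SubscriptionName "MySubscription" -Default

    # Locate the Barracuda NG Firewall image published to the Marketplace
    $image = Get-AzureVMImage |
        Where-Object { $_.Label -like "*Barracuda NG Firewall*" } |
        Select-Object -First 1

    # Build the VM configuration: primary NIC plus Ethernet 1 and Ethernet 2
    $vm = New-AzureVMConfig -Name "AZBARRACUDA01" -InstanceSize "ExtraLarge" -ImageName $image.ImageName |
        Add-AzureProvisioningConfig -Linux -LinuxUser "azureuser" -Password "P@ssw0rd!" |
        Set-AzureSubnet -SubnetNames "Frontend" |
        Set-AzureStaticVNetIP -IPAddress "192.168.0.54" |
        Add-AzureNetworkInterfaceConfig -Name "Ethernet1" -SubnetName "Frontend" -StaticVNetIPAddress "192.168.0.55" |
        Add-AzureNetworkInterfaceConfig -Name "Ethernet2" -SubnetName "Frontend" -StaticVNetIPAddress "192.168.0.56"

    # Deploy into the VNET (VNET name assumed)
    New-AzureVM -ServiceName "AZBARRACUDA01" -Location "Southeast Asia" -VNetName "SEVNET" -VMs $vm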

User Defined Routes in Azure

Azure allows us to re-define the routing in our VNET, which we will use to redirect traffic through our Barracuda NG Firewall. We will enable IP forwarding for the Barracuda NG Firewall virtual appliance, then create and configure a routing table for the backend networks so that all traffic is routed through the Barracuda NG Firewall.

There are some caveats when using the Barracuda NG Firewall on Azure:

  • User-defined routing at the time of writing cannot be used for two Barracuda NG Firewall units in a high availability cluster
  • After the Azure routing table has been applied, the VMs in the backend networks are only reachable via the NG Firewall. This also means that existing Endpoints allowing direct access no longer work

Step 1: Enable IP Forwarding for Barracuda NG Firewall VM

In order to forward traffic, we must enable IP forwarding on the primary network interface and the other network interfaces (Ethernet 1 and Ethernet 2) of the Barracuda NG Firewall VM.

Enable IP Forwarding:
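A minimal PowerShell sketch, assuming both the cloud service and the VM role are named AZBARRACUDA01:

    # Enable IP forwarding on the VM's primary network interface
    Set-AzureIPForwarding -ServiceName "AZBARRACUDA01" -RoleName "AZBARRACUDA01" -Enable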

Enable IP Forwarding on Ethernet 1 and Ethernet 2:
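And the same for the two additional NICs, using the interface names assumed in the deployment script above:

    # Enable IP forwarding on each secondary network interface
    Set-AzureIPForwarding -ServiceName "AZBARRACUDA01" -RoleName "AZBARRACUDA01" -NetworkInterfaceName "Ethernet1" -Enable
    Set-AzureIPForwarding -ServiceName "AZBARRACUDA01" -RoleName "AZBARRACUDA01" -NetworkInterfaceName "Ethernet2" -Enable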

On the Azure networking side, our Azure Barracuda NG Firewall VM is now allowed to forward IP packets.

Step 2: Create Azure Routing Table

By creating a routing table in Azure, we will be able to redirect all Internet outbound connectivity from Mid and Backend subnets of the VNET to the Barracuda NG Firewall VM.

Firstly, create the Azure Routing Table:
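A sketch with the ASM cmdlets; the route table name and location are assumptions:

    # Create an empty route table for the Mid subnet
    New-AzureRouteTable -Name "MidRouteTable" -Location "Southeast Asia" -Label "Routes Mid subnet traffic via the NG Firewall"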

Next, we need to add the Route to the Azure Routing Table:
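For example, a default route (0.0.0.0/0) whose next hop is the firewall's primary interface:

    # Add a default route pointing at the Barracuda NG Firewall
    $routeTable = Get-AzureRouteTable -Name "MidRouteTable"
    Set-AzureRoute -RouteTable $routeTable -RouteName "DefaultRoute" -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress "192.168.0.54"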

As we can see, the next hop IP address for the default route is the IP address of the primary network interface of the Barracuda NG Firewall (192.168.0.54). We have two extra network interfaces which can be used for other routing (192.168.0.55 and 192.168.0.56).

Lastly, we will need to assign the Azure Routing Table we created to our Mid or Backend subnet.
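Assuming the subnet is simply named Mid inside a VNET named SEVNET (both names are assumptions), the association looks like this:

    # Apply the route table to the Mid subnet
    Set-AzureSubnetRouteTable -VNetName "SEVNET" -SubnetName "Mid" -RouteTableName "MidRouteTable"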

Step 3: Create Access Rules on the Barracuda NG Firewall

By default, all outgoing traffic from the Mid or Backend subnet is blocked by the NG Firewall. Create an access rule to allow access to the Internet.

Download Barracuda NG Admin to manage our Barracuda NG Firewall running on Azure, and log in to the Barracuda NG Admin console:

barra01

 

Create a PASS access rule:

  • Source – Enter our mid or backend subnet
  • Service – Select Any
  • Destination – Select Internet
  • Connection – Select Dynamic SNAT
  • Click OK and place the access rule higher than other rules blocking the same type of traffic
  • Click Send Changes and Activate

barra02

Our VMs in the Mid or Backend subnet can now access the Internet via the Barracuda NG Firewall. To verify, I RDP to my VM sitting on the Mid subnet (192.168.1.4) and browse to Google.com:

barra03

Let’s have a quick look at Barracuda NG Admin Logs 🙂

barra04

We are now good to go, using the same method to configure the rest and protect our Azure environment:

  • Backend traffic passes through our Barracuda NG Firewall before reaching the Mid subnet, and vice versa
  • Mid traffic passes through our Barracuda NG Firewall before reaching the Frontend subnet, and vice versa

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.


Azure VNET gateway: basic, standard and high performance

Originally posted on Lucian’s blog over at Clouduccino.com. Follow Lucian on twitter @Lucianfrango.


I’ve been working a lot with Azure virtual network (VNET) virtual private network (VPN) gateways of late. The project I’m working on at the moment requires two sites to connect to a multi-site dynamic routing VPN gateway in Azure. This is for redundancy when connecting to the Azure cloud as there is a dedicated link between the two branch sites.

Setting up a multi-site VPN is a relatively streamlined process and Matt Davies has written a great article on how to run through that process via the Azure portal on the Kloud blog.

Read More

Mule ESB DEV/TEST environments in Microsoft Azure

Agility in delivery of IT services is what cloud computing is all about. Week in, week out, projects on-board and wind-up, developers come and go. This places enormous stress on IT teams with limited resourcing and infrastructure capacity to provision developer and test environments. Leveraging public cloud for integration DEV/TEST environments is not without its challenges though. How do we develop our interfaces in the cloud yet retain connectivity to our on-premises line-of-business systems?

In this post I will demonstrate how we can use Microsoft Azure to run Mule ESB DEV/TEST environments using point-to-site VPNs for connectivity between on-premises DEV resources and our servers in the cloud.

MuleSoft P2S

Connectivity

A point-to-site VPN allows you to securely connect an on-premises server to your Azure Virtual Network (VNET). Point-to-site connections don’t require a VPN device. They use the Windows VPN client and must be started manually whenever the on-premises server (point) wishes to connect to the Azure VNET (site). Point-to-site connections use Secure Socket Tunnelling Protocol (SSTP) with certificate authentication. They provide a simple, secure connectivity solution without having to involve the networking boffins to stand up expensive hardware devices.

I will not cover the setup of the Azure Point-to-site VPN in this post, there are a number of good articles already covering the process in detail including this great MSDN article.

A summary of steps to create the Point-to-site VPN are as follows:

  1. Create an Azure Virtual Network (I named mine AUEastVNet and used address range 10.0.0.0/8)
  2. Configure the Point-to-site VPN client address range  (I used 172.16.0.0/24)
  3. Create a dynamic routing gateway
  4. Configure certificates (upload root cert to portal, install private key cert on on-premise servers)
  5. Download and install client package from the portal on on-premise servers

Once we have established the point-to-site VPN we can verify connectivity by running ipconfig /all and checking we have been assigned an IP address from the range we configured on our VNET.

IP address assigned from P2S client address range

Testing our Mule ESB Flow using On-premises Resources

In our demo, we want to test the interface we developed in the cloud with on-premises systems, just as we would if our DEV environment were located within our own organisation.

Mule ESB Flow

The flow above listens for HL7 messages using the TCP-based MLLP transport and processes them using two async pipelines. The first pipeline maps the HL7 message into an XML message for a LOB system to consume. The second writes a copy of the received message for auditing purposes.

MLLP connector showing host running in the cloud

The HL7 MLLP connector is configured to listen on port 50609 of the network interface used by the Azure VNET (10.0.1.4).

FILE connector showing on-premise network share location

The first FILE connector is configured to write the output of the XML transformation to a network share on our on-premises server (across the point-to-site VPN). Note the IP address used is the one assigned by the point-to-site VPN connection (from the client IP address range configured on our Azure VNET).

P2S client IP address range

To test our flow we launch an MLLP client application on our on-premises server and establish a connection across the point-to-site VPN to our Mule ESB flow running in the cloud. We then send an HL7 message for processing and verify we receive an HL7 ACK and that the transformed XML output message has also been written to the configured on-premises network share location.

Establishing the connection across the point-to-site VPN…

On-premises MLLP client showing connection to host running in the cloud

Sending the HL7 request and receiving an HL7 ACK response…

MLLP client showing successful response from Mule flow

Verifying the transformed xml message is written to the on-premises network share…

On-premises network share showing successful output of transformed message

Considerations

  • Connectivity – Point-to-site VPNs provide a relatively simple connectivity option that allows traffic between your Azure VNET (site) and your nominated on-premises servers (the point inside your private network). You may already be running workloads in Azure and have a site-to-site VPN or MPLS connection between the Azure VNET and your network, and as such do not require the point-to-site VPN connection. You can connect up to 128 on-premises servers to your Azure VNET using point-to-site VPNs.
  • DNS – To provide name resolution of servers in Azure to on-premises servers, or of on-premises servers to servers in Azure, you will need to configure your own DNS servers within the Azure VNET. The IP address of on-premises servers will likely change every time you establish the point-to-site VPN, as the IP address is assigned from a range of IP addresses configured on the Azure VNET.
  • Web Proxies – SSTP does not support the use of authenticated web proxies. If your organisation uses a web proxy that requires HTTP authentication then the VPN client will have issues establishing the connection. You may need the network boffins after all to bypass the web proxy for outbound connections to your Azure gateway IP address range.
  • Operating System Support – Point-to-site VPNs only support the use of the Windows VPN client on Windows 7/Windows 2008 R2 64 bit versions and above.

Conclusion

In this post I have demonstrated how we can use Microsoft Azure to run a Mule ESB DEV/TEST environment using point-to-site VPNs for simple connectivity between on-premises resources and servers in the cloud. Provisioning integration DEV/TEST environments on demand increases infrastructure agility, removes those long lead times whenever projects kick-off or resources change and enforces a greater level of standardisation across the team which all improve the development lifecycle, even for integration projects!

Secure Azure Virtual Network and create DMZ on Azure VNET using Network Security Groups (NSG)

At TechEd Europe 2014, Microsoft announced the General Availability of Network Security Groups (NSGs), which add a security feature to Azure’s Virtual Networking capability. Network Security Groups provide access control on Azure Virtual Networks, a feature that is very compelling from a security point of view and one that Enterprise customers have been waiting for.

What are Network Security Groups and how to use them?

Network Security Groups allow us to control traffic (ingress and egress) on our Azure VNET using rules we define, and provide segmentation within a VNET by applying Network Security Groups to subnets as well as access control to VMs.

What’s the difference between Network Security Groups and Azure endpoint-based ACLs? Azure endpoint-based ACLs work only on a VM’s public port endpoints. NSGs can work on one or more VMs and control all ingress and egress traffic on the VM. In addition, an NSG can be associated with a subnet and thereby with all VMs in that subnet.

NSG Features and Constraints

NSG features and constraints are as follows:

  • 100 NSGs per Azure subscription
  • One VM / Subnet can only be associated with One NSG
  • One NSG can contain up to 200 Rules
  • A Rule has characteristics as follow:
    • Name
    • Type: Inbound/Outbound
    • Priority: Integer between 100 and 4096
    • Source IP Address: CIDR of Source IP Range
    • Source Port Range: Integer or range between 0 and 65535
    • Destination IP Range: CIDR of Destination IP Range
    • Destination Port Range: Integer or range between 0 and 65535
    • Protocol: TCP, UDP or use * for Both
    • Access: Allow/Deny
    • Rules are processed in priority order; a rule with a lower priority number is processed before rules with higher priority numbers

Security Zone Model

Designing isolated Security Zones within an Enterprise network is an effective strategy for reducing many types of risk, and this applies to Azure as well. We need to work together with Microsoft as our cloud vendor to secure our Azure environment, and our on-premises knowledge of Security Zone models can be applied to it.

As a demonstration I will pick the simplest Security Zone model and apply it to my test Azure environment, just to get some idea of how NSGs work. I will create a three-layer Security Zone model for my test Azure environment. This simple Security Zone model is for demo purposes only and might not be suitable for your Enterprise environment.

  • Internet = Attack Vectors / Un-trusted
  • Front-End = DMZ
  • App / Mid-Tier = Trusted Zone
  • DB / Back-end = Restricted Zone

Based on the Security Zone model above, I created my test Azure VNET:

Azure VNET: SEVNET
Address Space: 10.0.0.0/20
Subnets:
Azure-DMZ – 10.0.2.0/25
Azure-App – 10.0.0.0/25
Azure-DB – 10.0.1.0/25

Multi Site Connectivity to: EUVNET (172.16.0.0/16) and USVNET (192.168.0.0/20).

The diagram below illustrates above scenario:

Security Zone 1

Lock ‘Em Down

After deciding on our simple Security Zone model, it’s time to lock the zones down and secure them.

The diagram below illustrates how the traffic flow will be configured:

Security Zone 2

At a high level, the traffic flow is as follows:

  • Allow Internet ingress and egress on DMZ
  • Allow DMZ – App ingress and egress
  • Allow App – DB ingress and egress
  • Deny DMZ-DB ingress and egress
  • Deny App-Internet ingress and egress
  • Deny DB-Internet ingress and egress
  • Deny EUVNET-DB on SEVNET ingress and egress
  • Deny USVNET-DB on SEVNET ingress and egress

The section below shows examples of what the Azure NSG rules tables will look like.

NSG Rules Table

Azure DMZ NSG Rules Table

Name Source IP Source Port Destination IP Destination Port Protocol Type Action Priority
RDPInternet-DMZ * 63389 10.0.2.0/25 63389 TCP Inbound Allow 347
Internet-DMZSSL * 443 10.0.2.0/25 443 TCP Inbound Allow 348
Internet-DMZDRS * 49443 10.0.2.0/25 49443 TCP Inbound Allow 349
USVNET-DMZ 192.168.0.0/20 * 10.0.2.0/25 * * Inbound Allow 400
EUVNET-DMZ 172.16.0.0/16 * 10.0.2.0/25 * * Inbound Allow 401
DMZ-App 10.0.2.0/25 * 10.0.0.0/25 * * Outbound Allow 500
DMZ-DB 10.0.2.0/25 * 10.0.1.0/25 * * Outbound Deny 600
Allow VNET Inbound Virtual_Network * Virtual_Network * * Inbound Allow 65000
Allow Azure Internal Load Balancer Inbound Azure_LoadBalancer * * * * Inbound Allow 65001
Deny All Inbound * * * * * Inbound Deny 65500
Allow VNET Outbound Virtual_Network * Virtual_Network * * Outbound Allow 65000
Allow Internet Outbound * * INTERNET * * Outbound Allow 65001
Deny All Outbound * * * * * Outbound Deny 65500

Azure App NSG Rules Table

Name Source IP Source Port Destination IP Destination Port Protocol Type Action Priority
DMZ-App 10.0.2.0/25 * 10.0.0.0/25 * * Inbound Allow 348
USVNET-App 192.168.0.0/20 * 10.0.0.0/25 * * Inbound Allow 400
EUVNET-App 172.16.0.0/16 * 10.0.0.0/25 * * Inbound Allow 401
App-DMZ 10.0.0.0/25 * 10.0.2.0/25 * * Outbound Allow 500
App-DB 10.0.0.0/25 * 10.0.1.0/25 * * Outbound Allow 600
App-Internet 10.0.0.0/25 * INTERNET * * Outbound Deny 601
Allow VNET Inbound Virtual_Network * Virtual_Network * * Inbound Allow 65000
Allow Azure Internal Load Balancer Inbound Azure_LoadBalancer * * * * Inbound Allow 65001
Deny All Inbound * * * * * Inbound Deny 65500
Allow VNET Outbound Virtual_Network * Virtual_Network * * Outbound Allow 65000
Allow Internet Outbound * * INTERNET * * Outbound Allow 65001
Deny All Outbound * * * * * Outbound Deny 65500

Azure DB NSG Rules Table

Name Source IP Source Port Destination IP Destination Port Protocol Type Action Priority
App-DB 10.0.0.0/25 * 10.0.1.0/25 * * Inbound Allow 348
USVNET-DB 192.168.0.0/20 * 10.0.1.0/25 * * Inbound Deny 400
EUVNET-DB 172.16.0.0/16 * 10.0.1.0/25 * * Inbound Deny 401
DB-DMZ 10.0.1.0/25 * 10.0.2.0/25 * * Outbound Deny 500
DB-App 10.0.1.0/25 * 10.0.0.0/25 * * Outbound Allow 600
DB-Internet 10.0.1.0/25 * INTERNET * * Outbound Deny 601
Allow VNET Inbound Virtual_Network * Virtual_Network * * Inbound Allow 65000
Allow Azure Internal Load Balancer Inbound Azure_LoadBalancer * * * * Inbound Allow 65001
Deny All Inbound * * * * * Inbound Deny 65500
Allow VNET Outbound Virtual_Network * Virtual_Network * * Outbound Allow 65000
Allow Internet Outbound * * INTERNET * * Outbound Allow 65001
Deny All Outbound * * * * * Outbound Deny 65500

The tables above give us some idea of how to plan our Azure NSGs in order to establish our Security Zones.

Get Started using NSGs!

At the time this post was written, NSGs are exposed only through PowerShell and the REST API. To use PowerShell, we need version 0.8.10 of the Azure PowerShell module.

The commands are as follows:

  • Get-AzureNetworkSecurityGroup
  • Get-AzureNetworkSecurityGroupConfig
  • Get-AzureNetworkSecurityGroupForSubnet
  • New-AzureNetworkSecurityGroup
  • Remove-AzureNetworkSecurityGroup
  • Remove-AzureNetworkSecurityGroupConfig
  • Remove-AzureNetworkSecurityGroupFromSubnet
  • Remove-AzureNetworkSecurityRule
  • Set-AzureNetworkSecurityGroupConfig
  • Set-AzureNetworkSecurityGroupToSubnet
  • Set-AzureNetworkSecurityRule

Here are some examples:
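As a sketch, here is how the Internet-DMZSSL rule from the DMZ table above could be created and applied. The NSG name and location are assumptions, and the source port is relaxed to * since clients connect from ephemeral ports:

    # Create the NSG for the DMZ subnet (name and location are illustrative)
    New-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" -Location "Southeast Asia" -Label "NSG for the SEVNET DMZ subnet"

    # Allow HTTPS from the Internet into the DMZ subnet
    Get-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" |
        Set-AzureNetworkSecurityRule -Name "Internet-DMZSSL" -Type Inbound -Priority 348 -Action Allow -SourceAddressPrefix "INTERNET" -SourcePortRange "*" -DestinationAddressPrefix "10.0.2.0/25" -DestinationPortRange "443" -Protocol TCP

    # Associate the NSG with the Azure-DMZ subnet of SEVNET
    Set-AzureNetworkSecurityGroupToSubnet -Name "Azure-DMZ-NSG" -VirtualNetworkName "SEVNET" -SubnetName "Azure-DMZ"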

Personally, I will be recommending Azure NSGs for every Azure production deployment I perform in the future!

Leave a comment below or contact us if you have any questions regarding Azure especially in more complex Enterprise scenarios.

http://www.wasita.net

Highly Available SQL 2012 across Azure VNET (Part 1: VNET Peering)

Just over a year ago Microsoft announced support for SQL Server AlwaysOn Availability Groups (AAG) on Microsoft Azure IaaS. Last month, Microsoft announced support for SQL AAG between Azure Regions. This is great news for a great technology like SQL Server 2012 in high availability and disaster recovery scenarios. SQL AAG was released in SQL 2012 and enhanced in SQL 2014. AAG detects anomalies which impact SQL availability. We will discuss how to do this in two blog posts:

  • Part 1: Design SQL 2012 AAG across Azure VNETs, and how to create Microsoft Azure VNET-to-VNET peering
  • Part 2: SQL, WSFC, configure Quorum and Voting (SQL), and configure AAG

Part 1: SQL 2012 AAG across Azure VNET

SQL 2012 AAG is designed to provide high availability for SQL databases, and Azure IaaS is a great place for this technology to live. There are a few benefits of using Azure IaaS for this scenario:

  • Security features from Azure as our cloud provider. The security whitepaper can be found here
  • Azure VM Security Extensions, which mean we can rest assured that when a VM is deployed it is protected from day 1. Details can be found here
  • Azure ILB to provide a load balancing solution
  • 99.90% SLA for VNET connectivity (VPN). This feature is backed by two “hidden” Azure VMs in an active-passive configuration
  • ExpressRoute (MPLS) for higher bandwidth requirements – we won’t discuss or use this feature in these blog posts

The architecture components for this scenario: two VNETs in two different regions, to avoid a single point of region failure, which we will call SEVNET (Southeast Asia Region) and USVNET (US West Region) and which will be peered; a DC on each VNET to provide AD and DNS services, with the first DC in the Southeast Asia region also used as the File Share Witness; and three SQL Servers in the AAG, two at SEVNET in an Azure Availability Set and one at USVNET. The constraints for this scenario:

  • A Cloud Service cannot span Azure VNETs. For this scenario two Cloud Services will be used for the SQL VMs
  • An Azure Availability Set (AS) cannot span VNETs or Cloud Services. Two SQL AS will be deployed
  • Azure ILB cannot span VNETs. Only the Primary and Secondary SQL at SEVNET will be load balanced

The diagram below illustrates the architecture:

SQLAAGacrossVNET

The diagram above shows SQL AAG configured across two Azure VNETs. This configuration gives resiliency against a full Azure region failure. AAG will be configured with two secondary replicas: the Primary at SEVNET, one replica at SEVNET for automatic failover, and the other replica across regions at USVNET configured for manual failover and disaster recovery in case of a region failure at SEVNET. The listener will be configured on SEVNET and will route connections to the primary replica. The scenario above also allows offloading read workloads from the Primary replica to readable secondary replicas in the Azure region closer to the source of the read workloads (for example: reporting, BI or backup purposes).

Microsoft Azure VNET to VNET Peering

Now let’s create the two VNETs in the Southeast Asia and US West regions. Below is the configuration:

  • SEVNET | Southeast Asia Region | Storage: sevnetstor | Address Space: 10.0.0.0/20 | DNS:  10.0.0.4
  • USVNET | US West Region | Storage: usvnetstor | Address Space: 192.168.0.0/20 | DNS: 10.0.0.4

We will use a Regional Virtual Network instead of an Affinity Group for this scenario, which will enable us to use ILB in the future. My colleague Scott Scovell wrote a blog about this a while ago. Create the two storage accounts:

2 storage

Create the DNS Server entry 10.0.0.4 – I registered it with the DC name AZSEDC001.

Create VNET Peering

We will use the Azure GUI to create the VNET-to-VNET peering. Create the first VNET in the SE Asia region: go to NEW > Network Services > Virtual Network > Custom Create > enter VNET name: SEVNET and select region Southeast Asia > Next > select DNS Servers: AZSEDC001 > check Configure a site-to-site VPN > on Local Network choose Specify a new Local Network > Next.

sevnet1

 

Enter the name of the local network as USVNET and specify the address space. For the VPN Device IP Address just use a temporary one; we will replace it with the actual public IP address of the gateway later.

sevnet2

Next we will configure the IP range of SEVNET. The important bit: click on Add Gateway Subnet.

sevnet3

Next we need to configure USVNET in the same way: configure site-to-site connectivity with SEVNET as the local network, using its address space. Both VNETs will look like the below:

vnet4

Next, we will need to create Dynamic Routing VPN gateways for both VNETs. Static routing is not supported.

vnet5
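The same can be done with PowerShell; a sketch, assuming both VNETs and their local network definitions already exist as configured above:

    # Create dynamic routing gateways for both VNETs
    New-AzureVNetGateway -VNetName "SEVNET" -GatewayType DynamicRouting
    New-AzureVNetGateway -VNetName "USVNET" -GatewayType DynamicRouting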

Once completed, get the gateway IP address for both VNETs and replace the temporary VPN IP addresses on the Local Networks with the actual gateway IP addresses we just obtained.
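The gateway VIPs can also be read from PowerShell, for example:

    # Retrieve the public VIP of each VNET gateway
    Get-AzureVNetGateway -VNetName "SEVNET" | Select-Object VIPAddress
    Get-AzureVNetGateway -VNetName "USVNET" | Select-Object VIPAddress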

 

vnet6

The last step: set the IPsec/IKE pre-shared keys for both VNETs. We will use Azure PowerShell for this configuration. First, we will get the pre-shared keys to be used in our PowerShell script.

vnet7

Please ensure you are on the right subscription. It is always a good habit to use the Select-AzureSubscription -Default cmdlet before executing an Azure PowerShell script.

vnet8
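For reference, a sketch of that key configuration; the shared key value is illustrative, and the same key must be set on both sides of the peering:

    # Set a matching IPsec/IKE pre-shared key on each gateway
    Set-AzureVNetGatewayKey -VNetName "SEVNET" -LocalNetworkSiteName "USVNET" -SharedKey "MyS3cretShar3dKey"
    Set-AzureVNetGatewayKey -VNetName "USVNET" -LocalNetworkSiteName "SEVNET" -SharedKey "MyS3cretShar3dKey"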

And that’s it! We should see the successful VNET-to-VNET peering:

vnet9

In Part 2 we will deep dive into how to configure the SQL AAG across both VNETs.

Bad Request: Internal Load Balancer usage not allowed for this deployment

Microsoft released a number of new networking features for the Azure platform this week:

  • Multiple Site-to-Site VPNs
  • VNET-to-VNET Secure Connectivity
  • Reserved IPs
  • Instance level Public IPs
  • Internal Load Balancing

Announcement details can be found on Scott Gu’s blog post.

Internal load balancing (ILB) was a much-needed networking feature that enables the design of highly available environments in hybrid infrastructure scenarios. Until now, third-party solutions were required to load balance workloads on IaaS virtual machines when accessed by on-premises (internal) clients across the site-to-site VPN. Using the existing public load balancer and exposing public endpoints was not a palatable option for many organisations, even when hardened using network ACLs.

internal load balancer

ILB now provides a highly available load balancing solution using an internally addressable endpoint within your cloud service or VNET. However, the ILB feature will not work out-of-the-box for you. Like most new features in preview, documentation is sparse and spread out across a number of locations. I have pulled together a list of considerations you’ll need to address before using this feature:

  1. ILB is not available from the Portal (Scott Gu’s post), you must use PowerShell or REST API
  2. You cannot create an ILB instance for a cloud service or virtual network if there are no external endpoints configured on any of the resident virtual machines. (MSDN)
  3. ILB is available only with new deployments and new virtual networks. (Scott Gu’s post)

After installing the latest PS Azure cmdlets you will have a number of cmdlets used to manage ILBs in Azure:

  • Get-AzureInternalLoadBalancer
  • Add-AzureInternalLoadBalancer
  • Remove-AzureInternalLoadBalancer
  • Add-AzureEndpoint (now includes the -InternalLoadBalancerName parameter)

However, you may receive the following error when trying to provision a new instance of the ILB despite redeploying virtual machines into a new VNET and cloud service:

Add-AzureInternalLoadBalancer : BadRequest: Internal Load Balancer usage not allowed for this deployment.
At line:1 char:1
+ Add-AzureInternalLoadBalancer -ServiceName $svc -InternalLoadBalancerName $ilb – …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : CloseError: (:) [Add-AzureInternalLoadBalancer], CloudException
+ FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.AddAzureInternalLoadBalancer

The solution lies in the definition of “…new virtual network” (from point 3 above). Re-creating the VNET using the portal provisions the “old” type of VNET. As part of this release, a new type of VNET was introduced, Regional Virtual Network. Regional VNETs span the entire regional data centre instead of being pinned to an Affinity Group. This is a good thing for a number of reasons well documented in this post by Narayan Annamalai. Also mentioned in the post was the key to unlocking this:

Following are the key scenarios enabled by Regional Virtual Networks:

  • New VM Sizes like A8 and A9 can be deployed into a virtual network which can also contain other sizes.
  • New features like Reserved IPs, Internal load balancing, Instance level Public IPs
  • The Virtual Network can scale seamlessly to use the capacity in the entire region, but we still restrict to maximum of 2048 virtual machines in a VNet.
  • You are not required to create an affinity group before creating a virtual network.
  • Deployments going into a VNet does not have to be in the same affinity group.

To provision an ILB instance we need to create a Regional Virtual Network. Again this is not currently supported in the Portal (either version). Fortunately, creating this new type of VNET is just a matter of changing our exported VNET configuration file as follows:

Original VNET config:

<VirtualNetworkSite name="lab-vnet" AffinityGroup="lab-ag">

New VNET config

<VirtualNetworkSite name="lab-vnet" Location="Southeast Asia">
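As a sketch, the round trip with the ASM cmdlets looks like this (the file path is illustrative):

    # Export the current VNET configuration to a file
    Get-AzureVNetConfig -ExportToFile "C:\temp\vnet-config.xml"

    # ... edit the VirtualNetworkSite element as shown above ...

    # Re-import the modified configuration
    Set-AzureVNetConfig -ConfigurationPath "C:\temp\vnet-config.xml"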

Important: You will need to un-deploy all resources in the current VNET before you can upgrade to the new regional VNET, as the old VNET is deleted and the new VNET created. Not an insignificant task for those looking to take advantage of this new feature in existing DEV/TEST environments. I can’t stress enough the importance of automating your cloud provisioning and deployment operations. It will save you many days of Click > Click > Next-ing.

After the VNET is upgraded and environment re-provisioned the ILB feature can be provisioned as follows:

  1. Create the ILB instance using the Add-AzureInternalLoadBalancer cmdlet
    Add-AzureInternalLoadBalancer -ServiceName $svc -InternalLoadBalancerName $ilb -SubnetName $subnet -StaticVNetIPAddress $IP
  2. Add internal endpoints to the ILB instance on each VM to be load balanced using the Add-AzureEndpoint cmdlet
    Get-AzureVM -ServiceName $svc -Name $vmname | `
        Add-AzureEndpoint -Name $epname -Protocol $prot -LocalPort $locport -PublicPort $pubport -DefaultProbe -InternalLoadBalancerName $ilb | `
        Update-AzureVM
    

    Note: the InternalLoadBalancerName parameter.

  3. Create a DNS A record that points to the ILB virtual IP (VIP) that clients will use to communicate with the ILB instance. Use the Get-AzureInternalLoadBalancer cmdlet to get the ILB instance VIP

    Get-AzureService -ServiceName $svc | `
    	Get-AzureInternalLoadBalancer
    

With the new Internal Load Balancer feature we can now achieve a highly available load balancing solution that on-premises clients can access across the site-to-site VPN.