I discussed Azure ExpressRoute via Equinix Cloud Exchange (ECX) in my previous blog. In this post I am going to focus on AWS Direct Connect, which ECX also provides. This means you can share the same physical link (1Gbps or 10Gbps) between Azure and AWS!
ECX also provides connectivity to AWS at speeds of less than 1Gbps. AWS Direct Connect provides dedicated, private connectivity between your WAN or datacenter and AWS services such as Amazon Virtual Private Cloud (VPC) and Amazon Elastic Compute Cloud (EC2).
AWS Direct Connect via Equinix Cloud Exchange is Exchange (IXP) provider based, allowing us to extend our infrastructure with connectivity that is:
Private: the connection is dedicated and bypasses the public Internet, which means better performance, increased security and consistent throughput, and enables hybrid cloud use cases (even hybrid across Azure and AWS when both connections use Equinix Cloud Exchange)
Redundant: if we configure a second AWS Direct Connect connection, traffic will fail over to the second link automatically. Enabling Bidirectional Forwarding Detection (BFD) is recommended when configuring your connections to ensure fast detection and failover. AWS does not offer an SLA for Direct Connect at the time of writing.
High Speed and Flexible: ECX provides a flexible range of virtual circuit speeds: 50, 100, 200, 300, 400 and 500Mbps.
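On the redundancy point above, BFD is enabled per BGP neighbour on our edge routers. A minimal sketch, assuming Cisco IOS edge routers; the interface, VLAN tag, timers, peer address and ASNs are illustrative (7224 is the ASN AWS commonly uses for Direct Connect):

```
! Enable BFD on the Dot1Q subinterface facing the ECX virtual circuit
interface GigabitEthernet0/1.100
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65000
 neighbor 169.254.255.1 remote-as 7224
 ! Tie the BGP session to BFD so link failures are detected quickly
 neighbor 169.254.255.1 fall-over bfd
```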
The only tagging mechanism supported by AWS Direct Connect is 802.1Q (Dot1Q); AWS always uses Dot1Q on the Z-side (sell side) of ECX.
ECX pre-requisites for AWS Direct Connect
The pre-requisites for connecting to AWS via ECX are:
Physical ports on ECX. Two physical ports on two separate ECX chassis are required if redundancy is needed.
Virtual Circuits on ECX. Two virtual circuits are also required for redundancy.
Buy-side (A-side) Dot1Q and AWS Direct Connect
The following diagram illustrates the network setup required for AWS Direct Connect using Dot1Q ports on ECX:
The Dot1Q VLAN tag on the A-side is assigned by the buyer (A-side). The Dot1Q VLAN tag on the seller side (Z-side) is assigned by AWS.
There are a few steps to note when configuring AWS Direct Connect via ECX:
We need our AWS Account ID to request ECX Virtual Circuits (VCs)
Create separate Virtual Interfaces (VIs) for public and private peering in the AWS Management Console. For redundant private or public peering we need two ECX VCs and two AWS VIs.
We can accept the Virtual Connection either from the ECX Portal after requesting the VCs or in the AWS Management Console.
Configure our on-premises edge routers for BGP sessions. We can download a sample router configuration for our BGP sessions from the AWS Management Console
Attach the AWS Virtual Private Gateway (VGW) to the Route Table associated with our VPC
Verify the connectivity.
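The VGW attachment and route propagation step above can be sketched with the AWS CLI; the resource IDs are placeholders:

```
# Attach the Virtual Private Gateway to our VPC (IDs are placeholders)
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0example --vpc-id vpc-0example

# Let routes learned over Direct Connect propagate into the VPC route table
aws ec2 enable-vgw-route-propagation --route-table-id rtb-0example --gateway-id vgw-0example
```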
Please refer to the AWS Direct Connect User Guide on how to configure edge routers for BGP sessions. Once we have configured the above we will need to make sure any firewall rules are modified so that traffic can be routed correctly.
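The downloaded router configuration will run broadly along these lines, assuming a Cisco IOS edge router; the VLAN tag, addresses, ASNs, key and advertised prefix are illustrative:

```
! Dot1Q subinterface towards ECX; the VLAN tag comes from the ECX virtual circuit
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 ip address 169.254.255.2 255.255.255.252
!
router bgp 65000
 neighbor 169.254.255.1 remote-as 7224
 ! BGP MD5 key as provided in the configuration downloaded from AWS
 neighbor 169.254.255.1 password <BGP-auth-key>
 ! Advertise our on-premises prefix to AWS
 network 10.0.0.0 mask 255.255.0.0
```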
I hope you’ve found this post useful – please leave any comments or questions below!
Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network (NSP) based and Exchange (IXP) based with each allowing us to extend our infrastructure by providing connectivity that is:
Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active-active router configuration.
High Speed: speeds differ between NSP and IXP connections, but range from 10Mbps up to 10Gbps. ECX provides three choices of virtual circuit speed in Australia: 200Mbps, 500Mbps and 1Gbps.
Microsoft provides a handy comparison table of all the different types of Azure connectivity in this blog post.
ExpressRoute with Equinix Cloud Exchange
Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers which includes Microsoft Azure. ECX’s main features are:
On Demand (once you’re signed up)
One physical port supports many Virtual Circuits (VCs)
Supports 1Gbps and 10Gbps fibre-based Ethernet ports. Azure supports virtual circuits of 200Mbps, 500Mbps and 1Gbps
Orchestration using an API for automated provisioning, which provides almost instant provisioning of a virtual circuit.
We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS Direct Connect. This is supported as long as we use the same tagging mechanism for both, based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the sell side (Z-side) to connect to ECX.
ECX pre-requisites for Azure ExpressRoute
The pre-requisites for connecting to Azure, regardless of the tagging mechanism, are:
Two Physical ports on two separate ECX chassis for redundancy.
A primary and secondary virtual circuit per Azure peer (public or private).
Buy-side (A-side) Dot1Q and Azure ExpressRoute
The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:
Tags on the Primary and Secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-Side use one VLAN tag per circuit request. This VLAN tag should be the same VLAN tag used when setting up the Private or Public BGP sessions on Azure using Azure PowerShell.
There are a few things that need to be noted when using Dot1Q in this context:
The same Service Key can be used to order separate VCs for private or public peerings on ECX.
Get-AzureDedicatedCircuit returns the following output.
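A sketch of the call and its output, using the classic Azure Service Management PowerShell module; the circuit name, location and (redacted) service key are illustrative:

```
PS> Get-AzureDedicatedCircuit

Bandwidth                        : 1000
CircuitName                      : MyECXCircuit
Location                         : Sydney
ServiceKey                       : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
ServiceProviderName              : Equinix
ServiceProviderProvisioningState : NotProvisioned
Status                           : Enabled
```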
As we can see, the ServiceProviderProvisioningState is NotProvisioned.
Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.
Two physical ports need to be provisioned for redundancy on ECX. You will get a notification from Equinix NOC engineers once the physical ports have been provisioned.
Submit one virtual circuit request for each of the private and public peers on the ECX Portal; each request needs a separate VLAN ID along with the Service Key. That is, submit one request for private peering (two VCs: primary and secondary) and one request for public peering (two VCs: primary and secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned.
Next we need to configure BGP for exchanging routes between our on-premises network and Azure, but we will come back to this after a quick look at using QinQ with Azure ExpressRoute.
Buy-side (A-side) QinQ and Azure ExpressRoute
The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:
C-TAGs (inner tags) identify private or public peering traffic on Azure, while the primary and secondary virtual circuits are set up across separate ECX chassis identified by unique S-TAGs (outer tags). The A-side buyer (us) can choose to use either the same or different VLAN IDs to identify the primary and secondary VCs, and the same pair of primary and secondary VCs can be used for both private and public peering towards Azure.
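On a Cisco IOS-style edge router the QinQ arrangement can be sketched as below; the S-TAG (100), C-TAGs (100 for private, 101 for public) and addresses are illustrative assumptions:

```
! Outer S-TAG 100 identifies the ECX virtual circuit (primary chassis);
! the inner C-TAG selects the Azure peering: 100 = private, 101 = public
interface GigabitEthernet0/1.100100
 encapsulation dot1Q 100 second-dot1q 100
 ip address 192.168.1.1 255.255.255.252
!
interface GigabitEthernet0/1.100101
 encapsulation dot1Q 100 second-dot1q 101
 ip address 192.168.1.5 255.255.255.252
```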
The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:
Submit only one request on the ECX Portal covering both private and public peers, since the same pair of primary and secondary virtual circuits serves both peerings in this setup.
ExpressRoute uses BGP for routing, and you require four /30 subnets: one each for the primary and secondary BGP sessions of both the private and public peerings. The IP prefixes used for BGP cannot overlap with IP prefixes in either your on-premises or cloud environments. Example routing subnets and VLAN IDs:
Primary Private: 192.168.1.0/30 (VLAN 100)
Secondary Private: 192.168.2.0/30 (VLAN 100)
Primary Public: 192.168.1.4/30 (VLAN 101)
Secondary Public: 192.168.2.4/30 (VLAN 101)
The first available IP address of each subnet is assigned to the local router and the second is automatically assigned to the router on the Azure side; for example, on 192.168.1.0/30 our router uses 192.168.1.1 and the Azure-side router uses 192.168.1.2.
To configure BGP sessions for both private and public peering on Azure use the Azure PowerShell Cmdlets as shown below.
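A minimal sketch with the classic Azure Service Management module, using the example subnets and VLAN IDs above; the service key and ASN are placeholders:

```
# Service key of our ExpressRoute circuit (placeholder)
$serviceKey = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Private peering: primary/secondary /30s, inner VLAN tag 100
New-AzureBGPPeering -ServiceKey $serviceKey -AccessType Private `
    -PeerAsn 65000 -PrimaryPeerSubnet "192.168.1.0/30" `
    -SecondaryPeerSubnet "192.168.2.0/30" -VlanId 100

# Public peering: separate /30s, inner VLAN tag 101
New-AzureBGPPeering -ServiceKey $serviceKey -AccessType Public `
    -PeerAsn 65000 -PrimaryPeerSubnet "192.168.1.4/30" `
    -SecondaryPeerSubnet "192.168.2.4/30" -VlanId 101
```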
Once we have configured the above we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.
I hope you’ve found this post useful – please leave any comments or questions below!