SSL Tunneling with socat in Docker to safely access Azure Redis on port 6379

Redis Cache is an advanced key-value store that we should have all come across in one way or another by now. Azure, AWS and many other cloud providers have fully managed offerings for it, which is “THE” way we want to consume it.  As a little bit of insight, Redis itself was designed for use within a trusted private network and does not support encrypted connections. Public offerings like Azure use TLS reverse proxies to overcome this limitation and provide security around the service.

However, some Redis client libraries out there do not speak TLS. This becomes a problem when they are part of other tools that you want to compose your applications with.

Solution? We bring in something that can help us do protocol translation.

socat – Multipurpose relay (SOcket CAT)

Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

https://linux.die.net/man/1/socat

In short: it is a tool that can establish a communication channel between two points and handle protocol translation between them.

An interesting fact: socat is currently used in Kubernetes to port-forward docker exec sessions onto nodes. It does this by creating a tunnel from the API server to the node.

Packaging socat into a Docker container

One of the great benefits of Docker is that it allows you to work in sandbox environments. These environments are then fully transportable and can eventually become part of your overall application.

The following procedure prepares a container that includes the socat binary and common certificate authorities required for public TLS certificate chain validation.

We first create our docker-socat/Dockerfile:
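
A minimal sketch of what that Dockerfile can look like is below. The Alpine base image and package names are my assumptions; any image that ships the socat binary plus the ca-certificates bundle and ends with the same ENTRYPOINT will behave the same way.

    # docker-socat/Dockerfile (illustrative sketch; the Alpine base is an assumption)
    FROM alpine:3.18
    # socat provides the relay; ca-certificates provides the public CAs
    # needed to validate the Azure Redis TLS certificate chain
    RUN apk add --no-cache socat ca-certificates
    # invoke socat directly so the "docker run" arguments become socat arguments
    ENTRYPOINT ["socat"]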

Now we build a local Docker image by executing docker build -t socat-local docker-socat. You are free to push this image to a Docker Registry at this point.

Creating a TLS tunnel into Azure Redis

To access Azure Redis we need two things:

  1. The FQDN: XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    where all the X’s represent your DNS name.
  2. The access key, found under the Access Keys menu of your Cache instance. I will call it THE-XXXX-PASSWORD

Let’s start our socat tunnel by spinning up a container from the image we just built. Notice I am binding port 6379 to my desktop so that I can connect to the tunnel at localhost:6379 on my machine:
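
Here is a sketch of that docker run command, reconstructed from the arguments described below; the container name socat is an assumption that matches the docker logs socat command used later:

    # illustrative sketch; substitute your own Azure Redis FQDN
    docker run -d --name socat -p 6379:6379 socat-local \
      -v \
      TCP-LISTEN:6379,fork,reuseaddr \
      openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380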

Now let’s have a look at the arguments I am passing to socat (which is automatically invoked thanks to the ENTRYPOINT ["socat"] instruction we included when building the container image).

  1. -v
    Enables verbose logging, which is handy when checking output with docker logs socat
  2. TCP-LISTEN:6379,fork,reuseaddr
    – Start a socket listener on port 6379
    – fork to allow subsequent connections (otherwise the listener handles a single connection and exits)
    – reuseaddr to allow socat to restart and bind the same port (in case the previous socket is still held by the OS)

  3. openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    – Create a TLS connect tunnel to the Azure Redis Cache.

Testing connectivity to Azure Redis

Now I will test my tunnel using redis-cli, which I can also run from a container. In this case THE-XXXX-PASSWORD is the Redis access key:
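
A sketch of that test is below; using the official redis image to get redis-cli is my choice here, and any Redis client would do:

    # illustrative sketch; the redis image bundles redis-cli
    docker run -it --rm --net host redis \
      redis-cli -h localhost -p 6379 -a THE-XXXX-PASSWORD ping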

The thing to notice here is the --net host flag. This instructs Docker not to create a new virtual NIC and network namespace to isolate the container, but instead to use the host’s (my desktop’s) interfaces. This means that localhost in the container is really localhost on my desktop.

If everything is set up properly and outbound connections to Azure on port 6380 are allowed, you should get a PONG message back from Redis.

Happy hacking!

Exchange in Azure: NIC disabled/in error state

I recently had the need to build my own Exchange server within Azure and connect it to my Office 365 tenant.
I loosely followed the steps in this Microsoft article: https://technet.microsoft.com/library/mt733070(v=exchg.160).aspx to get my Azure (ARM) VMs and infrastructure deployed.

I initially decided to utilise an A1 Azure VM for my Exchange server to reduce my costs; however, upon successfully installing Exchange it was extremely slow, and basic things like the EAC and creating mailboxes would not function correctly due to the lack of resources. I found that resizing my VM to an A3 Azure VM resolved my issues and Exchange then functioned correctly.

It was after I powered down the Azure VM to a stopped (deallocated) state that I encountered issues.

I found that after I powered the VM back up I could no longer connect to it, and once I enabled boot diagnostics I discovered that the NIC was disabled/in an error state.

After going down multiple troubleshooting paths such as redeploying the VM, resizing the VM, changing subnets and so on, I discovered that patience was the key: after about 20 minutes the NIC re-enabled itself and all was well.

I have run multiple tests with an A3 Azure VM and found that in some cases it could take anywhere from 20–40 minutes to boot up successfully, with 10 minutes being the quickest boot-up time.

Hopefully this assists someone out there banging their head against the wall trying to get this to work!

Azure VNET gateway: basic, standard and high performance

Originally posted @ Lucian.Blog. Follow Lucian on twitter @Lucianfrango.


I’ve been working a lot with Azure virtual network (VNET) virtual private network (VPN) gateways of late. The project I’m working on at the moment requires two sites to connect to a multi-site dynamic routing VPN gateway in Azure. This is for redundancy when connecting to the Azure cloud as there is a dedicated link between the two branch sites.

Setting up a multi-site VPN is a relatively streamlined process and Matt Davies has written a great article on how to run through that process via the Azure portal on the Kloud blog.

Read More

AWS Direct Connect in Australia via Equinix Cloud Exchange

I discussed Azure ExpressRoute via Equinix Cloud Exchange (ECX) in my previous blog. In this post I am going to focus on AWS Direct Connect, which ECX also provides. This means you can share the same physical link (1 Gbps or 10 Gbps) between Azure and AWS!

ECX also provides connectivity services to AWS for connection speeds of less than 1 Gbps. AWS Direct Connect provides dedicated, private connectivity between your WAN or datacenter and AWS services such as AWS Virtual Private Cloud (VPC) and AWS Elastic Compute Cloud (EC2).

AWS Direct Connect via Equinix Cloud Exchange uses the Exchange (IXP) provider model, allowing us to extend our infrastructure with connectivity that is:

  • Private: The connection is dedicated and bypasses the public Internet, which means better performance, increased security and consistent throughput, and it enables hybrid cloud use cases (even hybrid with Azure, when both connections use Equinix Cloud Exchange)
  • Redundant: If we configure a second AWS Direct Connect connection, traffic will fail over to the second link automatically. Enabling Bidirectional Forwarding Detection (BFD) is recommended when configuring your connections to ensure fast detection and failover. AWS does not offer any SLA at the time of writing
  • High Speed and Flexible: ECX provides a flexible range of speeds: 50, 100, 200, 300, 400 and 500 Mbps.

The only tagging mechanism supported by AWS Direct Connect is 802.1Q (Dot1Q). AWS always uses 802.1Q (Dot1Q) on the Z-side of ECX.

ECX pre-requisites for AWS Direct Connect

The pre-requisites for connecting to AWS via ECX:

  • Physical ports on ECX. Two physical ports on two separate ECX chassis are required if redundancy is needed.
  • Virtual Circuits on ECX. Two virtual circuits are also required for redundancy.

Buy-side (A-side) Dot1Q and AWS Direct Connect

The following diagram illustrates the network setup required for AWS Direct Connect using Dot1Q ports on ECX:

[Diagram: AWS Direct Connect network setup using Dot1Q ports on ECX]

The Dot1Q VLAN tag on the A-side is assigned by the buyer (A-side). The Dot1Q VLAN tag on the seller side (Z-side) is assigned by AWS.

There are a few steps worth noting when configuring AWS Direct Connect via ECX:

  • We need our AWS Account ID to request ECX Virtual Circuits (VCs)
  • Create separate Virtual Interfaces (VIs) for Public and Private Peering in the AWS Management Console. We need two ECX VCs and two AWS VIs for redundant Private or Public Peering.
  • We can accept the Virtual Connection either from the ECX Portal after requesting the VCs or in the AWS Management Console.
  • Configure our on-premises edge routers for BGP sessions. We can download the router configuration to use for our BGP sessions from the AWS Management Console
  • Attach the AWS Virtual Private Gateway (VGW) to the Route Table associated with our VPC (a CLI sketch follows this list)
  • Verify the connectivity.
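
As a rough sketch of that routing step using the AWS CLI (the resource IDs are placeholders, and enabling route propagation is one common way to get the VGW routes into the route table):

    # illustrative sketch; IDs are placeholders
    # attach the Virtual Private Gateway to the VPC
    aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-11111111 --vpc-id vpc-22222222
    # propagate routes learned over Direct Connect into the VPC route table
    aws ec2 enable-vgw-route-propagation --route-table-id rtb-33333333 --gateway-id vgw-11111111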

Please refer to the AWS Direct Connect User Guide on how to configure edge routers for BGP sessions. Once we have configured the above we will need to make sure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Amazon Web Services (AWS) networking: public IP address and subnet list

Originally posted on Lucian’s blog over at Lucian.Blog.


Amazon Web Services (AWS) has many data centres across many continents and countries all over the world. AWS has two key ways of grouping these data centres: regions and availability zones.

It can be very handy to reference the IP address or subnet of a particular service in, say, a proxy server to streamline connectivity. This is good practice to avoid unnecessary latency from proxy authentication requests. Below is an output of Amazon Web Services IP address and subnet details, split into the key categories as listed by AWS via their published IP address JSON file, available here.
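
If you want to pull the list yourself, a quick sketch (assuming curl and jq are installed, and using the publicly documented ip-ranges.json location) looks like this:

    # illustrative sketch; filters the published ranges down to one region
    curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
      | jq -r '.prefixes[] | select(.region == "ap-southeast-2") | "\(.service) \(.ip_prefix)"'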

Sidebar: Click here to read up more on regions and availability zones, or click here or here. These references also include information about the DNS endpoints for services, which are therefore IP address agnostic. Also, if you’d like more details about the JSON file, click here.

Read More

Connection Options When Building An Azure Hybrid Cloud Solution

If your business is migrating workloads to Azure, chances are that at some point you will want to create some form of private interconnect with Azure. There is more than one way to achieve this, so in this post I’ll take a look at the options you have and the most appropriate scenarios for each.

We’ll work through the connection types from simplest (and quickest to provision) to more complex (where you’ll need IP networking expertise and hardware).

Hybrid Connection

This is your baseline interconnect option and is tied to the BizTalk Services offering within Azure. At time of writing the only Azure-based services that can leverage Hybrid Connections are Web Apps (formerly Websites) and Mobile Apps (formerly Mobile Services).

Hybrid Connections are a great way to quickly get access to on-premises resources without the complexity involved in firewall or VPN setups. If you look at the official documentation you’ll see there is no mention of firewall rules or VPN setup!

Your on-premises resources must be running on Windows Server 2008 R2 or above in order to leverage this service offering, which at its most restricted can work over standard HTTP(S) ports and nothing more.

  • Benefits:
    • Quick to setup (requires no changes on-prem)
    • Typically “just works” with most corporate network edge configurations
    • Doesn’t require a Virtual Network to be configured in Azure
    • Can be shared between multiple Web and Mobile Apps
    • Good for exposing single on-prem resources to consumers in Azure (i.e. DB on-prem / web in Azure).
  • Drawbacks:
    • Your security team may be unhappy with you 🙂
    • Performance may not meet your needs beyond simple use cases
    • TCP services requiring dynamic ports aren’t supported (think FTP)
    • Tied to BizTalk Services and utilises a range of other Azure services such as ACS and Azure SQL Database
    • In Preview (no SLA) and not available in all Azure Regions
    • Limited use cases in Azure (Web and Mobile Apps).

Point-to-Site VPN

The next step up from Hybrid Connections is Point-to-Site (P2S) VPN connections. These connections allow you to use a downloaded client to provide an SSTP VPN between a single on-premises (or Azure based) resource and a Virtual Network (VNet) in Azure.

This VPN setup is a good way to test out simple-to-medium complexity hybrid scenarios or proof-of-concepts without the need for dedicated VPN hardware in your corporate environment.

When setting up a P2S VPN there are a few items you need in place to succeed:

  • the IP address range that will be used for clients when they connect
  • a subnet defined on your VNet for the Azure Gateway that will host VPN connections
  • a running Gateway instance that will allow your VPN clients to connect
  • a valid x509 certificate that will be used by the VPN client.

As you can see, there are quite a few extra steps involved beyond the Hybrid Connection! You can have up to 128 on-prem clients connected to an Azure VNet if needed.

  • Benefits:
    • Does not require dedicated on-premises networking hardware
    • SSTP can usually connect OK with most corporate network edge configurations
    • Can co-exist with Site-to-Site connections
    • Allows you to expose services on a single on-prem resource to an entire Azure VNet.
  • Drawbacks:
    • You’ll need to understand IP networking to setup a VNet in Azure
    • Performance is still relatively limited due to the nature of SSTP
    • Isn’t an ‘always on’ proposition as it requires an interactive user session on-prem to run the client
    • Only supports connection from a single on-prem resource running on Windows
    • You’ll need an x509 certificate for use with the VPN client.

Site-to-Site VPN

Now we start to get serious!

If you want to run a Site-to-Site (S2S) connection you will need to have dedicated VPN hardware or Windows Server 2012 (or above) running RRAS on-prem and some private IP address space for your Azure environment that doesn’t overlap with the on-premises network you’ll be connecting with.

This option is the first to really offer you a true hybrid environment where two networks can connect via the VPN. This is often the first step we see many enterprises take when adopting Azure as it is relatively quick to stand up and typically most customers have the necessary devices (or ones that meet Azure’s VPN requirements) available already.

When you set up your Gateway in Azure, the Azure platform will even handily provide you with a configuration script/template for whichever on-prem device you’ve selected.
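
As an illustration of what that amounts to on the Azure side, here is a sketch using the current Azure CLI (which post-dates this post); resource names, addresses and the pre-shared key are all placeholders:

    # illustrative sketch; names, prefixes and key are placeholders
    # describe the on-prem VPN device and the address space behind it
    az network local-gateway create --resource-group MyRG --name OnPremGW \
      --gateway-ip-address 203.0.113.10 --local-address-prefixes 10.0.0.0/16
    # create the S2S connection between the Azure VPN gateway and the on-prem device
    az network vpn-connection create --resource-group MyRG --name AzureToOnPrem \
      --vnet-gateway1 MyAzureVpnGateway --local-gateway2 OnPremGW \
      --shared-key 'MyPreSharedKey'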

  • Benefits:
    • Provides full network-to-network connectivity
    • Supports a growing number of standard VPN appliances
    • Foundation of support for multi-site connectivity
    • Can use Windows Server 2012 RRAS if you don’t have an appliance.
  • Drawbacks:
    • Maximum throughput of 100 Mbps
    • Doesn’t support redundant single site to single VNet connections.

Be aware: Forced Tunnelling

Before we move on to the next Azure connection type we do need to talk about Forced Tunnelling. The current generation Azure VNet has a default route for all public Internet traffic, which goes out over Azure’s managed Internet infrastructure (it’s just there and you can’t manage it or turn it off). On some other public cloud platforms you can disable public Internet traffic by not adding an Internet Gateway – on Azure that option isn’t currently available.

In order to mitigate some challenges around controlling the path public traffic takes from an Azure VNet, Microsoft introduced Forced Tunnelling, which can be used to force traffic bound for the Internet back over your VPN and into your on-prem environment.

You must plan your subnets appropriately and only apply Forced Tunnelling to those where it is required. This is especially important if you will consume any of Azure’s PaaS offerings other than Web or Worker Roles, which can be added to an Azure VNet.
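
One way to scope it, sketched with the current Azure CLI (names are placeholders), is a user-defined route table applied only to the subnets that need it:

    # illustrative sketch; names are placeholders
    az network route-table create --resource-group MyRG --name ForceTunnelRT
    # send Internet-bound traffic back through the VPN gateway
    az network route-table route create --resource-group MyRG \
      --route-table-name ForceTunnelRT --name DefaultToOnPrem \
      --address-prefix 0.0.0.0/0 --next-hop-type VirtualNetworkGateway
    # associate the route table with just the subnets that require forced tunnelling
    az network vnet subnet update --resource-group MyRG --vnet-name MyVNet \
      --name AppSubnet --route-table ForceTunnelRT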

Almost all of Azure’s PaaS services (even Blob Storage) are exposed as secured public Internet endpoints, which means any call to these from a VNet configured for Forced Tunnelling will result in that traffic heading back to your on-prem network and out your own Internet Gateway. Performance will take a hit, you will pay data egress charges on those calls, and the calls will appear to originate from your on-prem Internet Gateway.

ExpressRoute

The grandpappy of all of them – and the one that requires the most planning and commitment. If you find yourself starting your hybrid journey here then either you have an existing investment in an MPLS WAN or you’re already co-located in an exchange that is providing ExpressRoute services.

There are two connection options for ExpressRoute:

  • Network Service Provider (NSP): utilises a new or existing MPLS WAN cross-connect into one or more Azure Regions. Speeds of 10 Mbps to 1 Gbps are supported.
  • Exchange Provider (IXP): uses a new paired cross-connect in a data centre location where the IXP’s and Microsoft’s routers are co-located. Speeds of 200 Mbps to 10 Gbps are supported.

The officially supported list of NSPs and IXPs is pretty small, but you can quite often work with your existing provider to get a connection into an IXP, or look to leverage offerings such as Equinix’s Cloud Exchange as a shortcut (for example, in Sydney 130+ network service providers provide services into Equinix).

Once you’re operating at this level you will definitely need the networking team in your organisation involved as you’ll be doing heavy lifting that requires solid knowledge of IP networking and specifically BGP.

  • Benefits:
    • A single ExpressRoute circuit can connect to multiple Azure VNets (up to 10) and across multiple Azure Subscriptions (also up to 10)
    • Redundant connection by default (a pair is provided when you connect)
    • Two peerings are provided: one for Azure public services and one for your private services. You can choose not to use either peering if you wish
    • Can support bursting to higher bandwidth levels (provider depending)
    • Offers an SLA for availability.
  • Drawbacks:
    • Requires that you have a relationship with an NSP or IXP in addition to Azure.
    • NSP bandwidth maximum is 1 Gbps
    • Maximum 4,000 route prefixes for BGP on all peers on a connection.

If you’re unsure how to get started here, but you have an existing WAN or co-location facility it may be worth talking to them about how to get a connection into Azure.

Be Aware: Default Routes and Public Peering

This topic falls under the same category as the earlier section on Forced Tunnelling for S2S VPNs.

When using ExpressRoute you can use BGP to advertise a default route so that all Internet-bound traffic from your Azure VNets heads back over your ExpressRoute connection to your on-prem environment. Unlike the VPN connection scenario, though, where all Azure PaaS calls will route back out through your on-prem Internet gateway, with ExpressRoute you can use the public peering as the shortcut back to Azure.

While this is a better option than you get with a VPN, it still means you are pushing Azure-bound calls back to your ExpressRoute gateway, so you will potentially see a performance hit and will see that data included in your billed usage if you are using an IXP connection.

Conclusion

So there we have it – a quick rundown of the techniques you have at your disposal when looking to create a private hybrid network environment that allows you to connect your existing locations with Azure.

HTH.

IPv6 – Are we there yet??

The topic of IPv6 seems to come up every couple of years. The first time I recall there being a lot of hype about IPv6 was way back in the early 2000s. Ever since then, the topic seems to get attention every once in a while and then disappears into insignificance alongside more exciting IT news.

The problem with IPv4 is that there are only about 3.7 billion public IPv4 addresses. Whilst this may initially sound like a lot, take a moment to think about how many devices you currently have that connect to the Internet. Globally we have already experienced a rapid uptake of Internet connected smart-phones and the recent hype surrounding the Internet of Things (IoT) promises to connect an even larger array of devices to the Internet. With a global population of approx. 7 billion people we just don’t have enough to go around.

Back in the early 2000s there was limited hardware and software support for IPv6. So now that we have widespread hardware and software support for IPv6, why is it that we haven’t all switched?

Like most things in the world, it’s often determined by the capacity to monetise an event. Surprisingly, not all carriers/ISPs are on board, and some are reluctant to spend money to drive the switch. Network address translation (NAT) and Classless Inter-Domain Routing (CIDR) have made it much easier to live with IPv4. NAT, used on firewalls and routers, lets many nodes in a network sit behind a single public IP address. CIDR, sometimes referred to as supernetting, is a way to allocate and specify the Internet addresses used in inter-domain routing in a much more flexible manner than the original system of Internet Protocol (IP) address classes. As a result, the number of available Internet addresses has been greatly increased, and service providers have been able to conserve addresses by divvying up pieces of a full range of IP addresses to multiple customers.

Perceived risk by consumers also comes into play. It is plausible that many companies view the introduction of IPv6 as somewhat unnecessary and potentially risky in terms of the effort required to implement it and the loss of productivity during implementation. Most corporations are simply not feeling any pain with IPv4, so it’s not on their short-term radar as being of any level of criticality to their business. When considering IPv6 implementation from a business perspective, the successful adoption of a new technology is typically accompanied by some form of reward or competitive advantage for early adopters. The potential for financial reward is often what drives significant change.

To IPv6’s detriment, from the layperson’s perspective it has little to distinguish itself from IPv4 in terms of services and service costs. Many of IPv4’s shortcomings have been addressed, and financial incentives to commence widespread deployment just don’t exist.

We have all heard the doom and gloom stories associated with the impending end of IPv4. Surely this should be reason enough for accelerated implementation of IPv6? Why isn’t everyone rushing to implement IPv6 and mitigate future risk? The scenario where exhaustion of IPv4 addresses causes a rapid escalation in costs to consumers hasn’t really happened yet, so it has not been a significant factor in encouraging further deployment of IPv6 across the Internet.

Another factor to consider is backward compatibility. IPv4 hosts are unable to address IP packets directly to an IPv6 host and vice-versa.

This means that it is not realistic to simply switch a network over from IPv4 to IPv6. When implementing IPv6, a significant period of dual-stack IPv4 and IPv6 coexistence needs to take place, where IPv6 is turned on and run in parallel with the existing IPv4 network. To most IT decision makers this just sounds like two networks instead of one, with double the administrative overhead.

Networks need to provide continued support for IPv4 for as long as there are significant levels of IPv4 only networks and services still deployed. Many IT decision makers would rather spend their budget elsewhere and ignore the issue for another year.

Only once the majority of the Internet supports a dual stack environment can networks start to turn off their continued support for IPv4. Therefore, while there is no particular competitive advantage to be gained by early adoption of IPv6, the collective internet wide decommissioning of IPv4 is likely to be determined by the late adopters.

So what should I do?

It’s important to understand where you are now and arm yourself with enough information to plan accordingly.

  • Check if your ISP is currently supporting IPv6 by visiting a website like http://testmyipv6.com/. There is a dual stack test which will let you know if you are using IPv4 alongside IPv6 (a quick command-line check is also sketched after this list).
  • Understand if the networking equipment you have in place supports IPv6.
  • Understand if all your existing networked devices (everything that consumes an IP address) supports IPv6.
  • Ensure that all new device acquisitions are fully supportive of IPv6.
  • Understand if the services you consume support IPv6. (If you are making use of public cloud providers, check whether the services you consume support IPv6 or have a roadmap to IPv6.)
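
As a quick command-line complement to the first check above (a sketch for a Linux host; the test endpoint is just an example):

    # does this host have a global IPv6 address?
    ip -6 addr show scope global
    # can it reach the Internet over IPv6 only?
    curl -6 -s https://ifconfig.co || echo "No IPv6 connectivity detected"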

The reality is that IPv6 isn’t going away and as IT decision makers we can’t postpone planning for its implementation indefinitely. Take the time now to understand where your organisation is at. Make your transition to IPv6 a success story!!