Akamai Cloud-based DNS Black Magic

Let us start with traditional DNS hosting with any DNS hoster or ISP. How does traditional DNS name resolution work? When you type a human-readable name such as http://www.anydomain.com into the address bar of Internet Explorer, that name is resolved to an Internet Protocol (IP) address hosted by an Internet Service Provider (ISP), and the browser presents the website to the user. In doing so, the website exposes its public IP address to everyone. Good and bad guys alike know the IP address and can trace it globally. A state-sponsored hacker or a private individual can launch a distributed denial of service (DDoS) attack on a website whose public IP address is known and traceable. The bad guys can send an overwhelming number of fake service requests to the original IP address behind http://www.anydomain.com and shut the website down. In this situation, the DNS server hosting the DNS record for http://www.anydomain.com will also stop serving genuine DNS requests, resulting in a distributed denial of service (DDoS).

Akamai introduced Fast DNS, a dynamic DNS service with a presence in almost every country, state, territory and region, to mitigate the risk of DDoS attacks and DNS hijacking.

Akamai Fast DNS offloads domain name resolution from on-premises infrastructure and traditional domain name providers to an intelligent, secure and authoritative DNS service. Akamai has successfully prevented DDoS attacks, DNS forgery and manipulation through complex dynamic DNS hosting and spoof IP addresses.

As of today, Akamai has more than 150,000 servers in more than 2,000 locations around the world, well connected across 1,200+ networks in 700+ cities in 92 countries; in most cases an Akamai edge server is just a hop away from the end user.

How does it work?

  1. The user requests http://www.anydomain.com
  2. The user’s ISP responds to the DNS name query for http://www.anydomain.com
  3. The user’s ISP resolves the http://www.anydomain.com DNS name to http://www.anydomain.com.edgekey.net, hosted by Akamai
  4. Akamai Global DNS checks the CNAME http://www.anydomain.com.edgekey.net and the region the request is coming from
  5. Akamai Global DNS then forwards the request to the Akamai regional DNS server, for example Sydney, Australia
  6. The Akamai regional DNS server forwards the request to the Akamai edge server nearest the user’s location, for example Melbourne, Australia
  7. The Akamai local DNS server, for example Melbourne, Australia, resolves the original CNAME http://www.anydomain.com to http://www.anydomain.com.edgekey.net
  8. http://www.anydomain.com.edgekey.net resolves to the cached (if cached) website http://www.anydomain.com served by Akamai, which is then presented to the user’s browser

Since Akamai uses dynamic DNS servers, it is extremely difficult for a bad guy to track down the real IP address of the website and its origin host. In Akamai terminology, .au or .uk means that the website is hosted in that country (au or uk), but the response is served to the user from his/her own geolocation, so the IP address of the website will always be presented from the Akamai edge server in the user’s geolocation. In plain English, the origin host and its IP address vanish inside Akamai’s complex dynamic DNS infrastructure. For example,

  1. http://www.anydomain.com.edgekey.net resolves to a spoof IP address hosted by an Akamai DNS server
  2. The original IP address of http://www.anydomain.com is never exposed by the Akamai DNS servers or by the ISP hosting http://www.anydomain.com

Implementing Akamai Fast DNS:

  1. Create a Host A record at your ISP for http://www.anydomain.com and point it to the 201.17.xx.xx public IP (the VIP of Azure Web Services or any other web service)
  2. Create an origin host record, or CNAME record, for http://www.anydomain.com and point it to xyz9013452bcf.anydomain.com
  3. Now request Akamai to work their black magic on http://www.anydomain.com and point it to http://www.anydomain.com.edgekey.net
  4. Once Akamai completes the black magic, request your ISP to create another CNAME record for xyz9013452bcf.anydomain.com and point it to http://www.anydomain.com.edgekey.net (the chain can then be verified as shown below)
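Once the records are in place, the CNAME chain can be checked from any client. A minimal sketch using PowerShell’s Resolve-DnsName cmdlet (www.anydomain.com is a placeholder domain, so substitute your own):

# Sketch only: www.anydomain.com is a placeholder domain
# Show the CNAME chaining the friendly name to the Akamai edge hostname
Resolve-DnsName -Name "www.anydomain.com" -Type CNAME

# Resolve the edge hostname to the edge server IP served for your geolocation
Resolve-DnsName -Name "www.anydomain.com.edgekey.net" -Type A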

Testing Akamai Fast DNS: I am using http://www.akamai.com as the DNS name instead of a real DNS record belonging to any of my clients.

Go to mxtoolbox.com and run a DNS lookup on http://www.akamai.com; you will see

CNAME http://www.akamai.com resolves to http://www.akamai.com.edgekey.net

Open a command prompt and ping http://www.akamai.com.edgekey.net

Since I am pinging from Sydney, Australia, my ping is answered by the Akamai edge server in Sydney. The result is:

Ping http://www.akamai.com.edgekey.net

Pinging e1699.dscc.akamaiedge.net [118.215.118.16] with 32 bytes of data:

Reply from 118.215.118.16: bytes=32 time=6ms TTL=56

Reply from 118.215.118.16: bytes=32 time=3ms TTL=56

Open a browser and go to http://www.kloth.net/services/dig.php and trace e1699.dscc.akamaiedge.net

; <<>> DiG 9 <<>> @localhost e1699.dscc.akamaiedge.net A

; (1 server found)

;; global options: +cmd

.                                            375598   IN            NS           d.root-servers.net.

.                                            375598   IN            NS           c.root-servers.net.

.                                            375598   IN            NS           i.root-servers.net.

.                                            375598   IN            NS           j.root-servers.net.

.                                            375598   IN            NS           k.root-servers.net.

.                                            375598   IN            NS           m.root-servers.net.

.                                            375598   IN            NS           a.root-servers.net.

.                                            375598   IN            NS           l.root-servers.net.

.                                            375598   IN            NS           e.root-servers.net.

.                                            375598   IN            NS           f.root-servers.net.

.                                            375598   IN            NS           b.root-servers.net.

.                                            375598   IN            NS           g.root-servers.net.

.                                            375598   IN            NS           h.root-servers.net.

;; Received 228 bytes from 127.0.0.1#53(127.0.0.1) in 3 ms

net.                                       172800   IN            NS           a.gtld-servers.net.

net.                                       172800   IN            NS           b.gtld-servers.net.

net.                                       172800   IN            NS           c.gtld-servers.net.

net.                                       172800   IN            NS           d.gtld-servers.net.

net.                                       172800   IN            NS           e.gtld-servers.net.

net.                                       172800   IN            NS           f.gtld-servers.net.

net.                                       172800   IN            NS           g.gtld-servers.net.

net.                                       172800   IN            NS           h.gtld-servers.net.

net.                                       172800   IN            NS           i.gtld-servers.net.

net.                                       172800   IN            NS           j.gtld-servers.net.

net.                                       172800   IN            NS           k.gtld-servers.net.

net.                                       172800   IN            NS           l.gtld-servers.net.

net.                                       172800   IN            NS           m.gtld-servers.net.

;; Received 512 bytes from 2001:7fd::1#53(2001:7fd::1) in 8 ms

akamaiedge.net.                  172800   IN            NS           la1.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           la3.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           lar2.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns3-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns6-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns7-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns5-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a12-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a28-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a6-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a1-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a13-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a11-192.akamaiedge.net.

;; Received 504 bytes from 2001:503:a83e::2:30#53(2001:503:a83e::2:30) in 14 ms

dscc.akamaiedge.net.          8000       IN            NS           n7dscc.akamaiedge.net.

dscc.akamaiedge.net.          4000       IN            NS           n0dscc.akamaiedge.net.

dscc.akamaiedge.net.          6000       IN            NS           a0dscc.akamaiedge.net.

dscc.akamaiedge.net.          6000       IN            NS           n3dscc.akamaiedge.net.

dscc.akamaiedge.net.          4000       IN            NS           n2dscc.akamaiedge.net.

dscc.akamaiedge.net.          6000       IN            NS           n6dscc.akamaiedge.net.

dscc.akamaiedge.net.          4000       IN            NS           n5dscc.akamaiedge.net.

dscc.akamaiedge.net.          8000       IN            NS           n1dscc.akamaiedge.net.

dscc.akamaiedge.net.          8000       IN            NS           n4dscc.akamaiedge.net.

;; Received 388 bytes from 184.85.248.194#53(184.85.248.194) in 8 ms

e1699.dscc.akamaiedge.net. 20        IN            A             23.74.181.249

;; Received 59 bytes from 77.67.97.229#53(77.67.97.229) in 5 ms

Now run tracert 23.74.181.249 at a command prompt:

Tracert 23.74.181.249

Tracing route to a23-74-181-249.deploy.static.akamaitechnologies.com [23.74.181.249]

over a maximum of 30 hops:

1     1 ms     1 ms     1 ms  172.28.67.2

2     4 ms     1 ms     4 ms  172.28.2.10

3     *        *        *     Request timed out.

4     *        *        *     Request timed out.

5     *        *        *     Request timed out.

6     *        *        *     Request timed out.

7     *        *        *     Request timed out.

8     *      125 ms    75 ms  bundle-ether1.sydp-core04.sydney.reach.com [203.50.13.90]

9   172 ms   160 ms   165 ms  i-52.tlot-core02.bx.telstraglobal.net [202.84.137.101]

10   152 ms   192 ms   164 ms  i-0-7-0-11.tlot-core01.bi.telstraglobal.net [202.84.251.233]

11   163 ms   183 ms   176 ms  gtt-peer.tlot02.pr.telstraglobal.net [134.159.63.182]

12   151 ms   157 ms   155 ms  xe-2-2-0.cr2-lax2.ip4.gtt.net [89.149.129.234]

13   175 ms   160 ms   154 ms  as5580-gw.cr2-lax2.ip4.gtt.net [173.205.59.18]

14   328 ms   318 ms   317 ms  ae21.edge02.fra06.de.as5580.net [78.152.53.219]

15   324 ms   325 ms   319 ms  78.152.48.250

16   336 ms   336 ms   339 ms  a23-74-181-249.deploy.static.akamaitechnologies.com [23.74.181.249]

Now open the hosts file on a Windows machine (C:\WINDOWS\system32\drivers\etc\hosts) and add the Akamai spoof IP: 172.233.15.98   http://www.akamai.com (reference)

Browse to the http://www.akamai.com website in Internet Explorer and it will take you to 172.233.15.98.

Open a command prompt and run nslookup 172.233.15.98

Server:  lon-resolver.telstra.net

Address:  203.50.2.71

Name:    a172-233-15-98.deploy.static.akamaitechnologies.com

Address:  172.233.15.98

In conclusion, Akamai tricked the web browser into going to an Akamai edge server in Sydney, Australia instead of the original Akamai server hosted in the USA. A user will never know the original IP address of the http://www.akamai.com website. Abracadabra, the original IP address has vanished…

Enterprise Cloud Take Up Accelerating Rapidly According to New Study By McKinsey

A pair of studies published a few days ago by global management consulting firm McKinsey & Company, entitled IT as a service: From build to consume, shows enterprise adoption of Infrastructure as a Service (IaaS) accelerating rapidly over the next two years into 2018.

Of the two, one examined the ongoing migrations of 50 global businesses. The other interviewed a large number of CIOs, from small businesses up to Fortune 100 companies, on the progress of their transitions, and the results speak for themselves.

1. Compute and storage are shifting massively to cloud service providers.

“The data reveals that a notable shift is under way for enterprise IT vendors, with on-premise shipped server instances and storage capacity facing compound annual growth rates of –5 percent and –3 percent, respectively, from 2015 to 2018.”

With on-premise storage and server sales growth going into negative territory, it’s clear the next couple of years will see the hyperscalers of this world consume an ever increasing share of global infrastructure hardware shipments.

2. Companies of all sizes are shifting to off-premise cloud services.

“A deeper look into cloud adoption by size of enterprise shows a significant shift coming in large enterprises (Exhibit 2). More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

The report also anticipates that the number of enterprises hosting at least one workload on an IaaS platform will increase by 41% in the three-year period to 2018, while the equivalent figure for small and medium-sized businesses will increase by a somewhat less aggressive 12% and 10% respectively.

3. A fundamental shift is underway from a build to consume model for IT workloads.

“The survey showed an overall shift from build to consume, with off-premise environments expected to see considerable growth (Exhibit 1). In particular, enterprises plan to reduce the number of workloads housed in on-premise traditional and virtualized environments, while dedicated private cloud, virtual private cloud, and public infrastructure as a service (IaaS) are expected to see substantially higher rates of adoption.”

Another takeaway is that the share of traditional and virtualized on-premise workloads will shrink significantly, from 77% and 67% in 2015 to 43% and 57% respectively in 2018, while virtual private cloud and IaaS will grow from 34% and 25% in 2015 to 54% and 37% respectively in 2018.

Cloud adoption will have far-reaching effects

The report concludes “McKinsey’s global ITaaS Cloud and Enterprise Cloud Infrastructure surveys found that the shift to the cloud is accelerating, with large enterprises becoming a major driver of growth for cloud environments. This represents a departure from today, and we expect it to translate into greater headwinds for the industry value chain focused on on-premise environments; cloud-service providers, led by hyperscale players and the vendors supplying them, are likely to see significant growth.”

About McKinsey & Company

McKinsey & Company is a worldwide management consulting firm. It conducts qualitative and quantitative analysis in order to evaluate management decisions across the public and private sectors. Widely considered the most prestigious management consultancy, McKinsey counts among its clientele 80% of the world’s largest corporations as well as an extensive list of governments and non-profit organizations.

Web site: McKinsey & Company
The full report: IT as a service: From build to consume

Create a Cloud Strategy For Your Business

Let’s be clear: today’s cloud, as a vehicle for robust and flexible enterprise-grade IT, is here and it’s here to stay. Figures published by IDG Research’s 2015 Enterprise Cloud Computing Survey predict that in 2016, 25% of total enterprise IT budgets will be allocated to cloud computing.

Steady increase in cloud utilisation. Source: IDG Enterprise Cloud Computing Survey.

They also reported that the average cloud spend across the enterprises surveyed would reach $2.87M in the following year, and that 72% of enterprises already have at least one application running in the cloud, compared to 57% in 2012. IDC, in the meantime, predicts that public cloud adoption will increase from 22% in 2016 to 32.1% within the next 18 months, which equates to no less than 45.8% growth.

Wide range in cloud investments. Source: 2015 IDG Enterprise Cloud Computing Survey.

Now, any organization looking to leverage the cloud needs a governing strategy. Research has shown that businesses with a vision are likely to be more efficient and successful, and to keep their costs down further, than those without one. So let’s consider why a well-rounded cloud strategy should be a priority for any business that doesn’t yet have one.

What are the key benefits of a cloud strategy?

A well thought out strategy will help an organization integrate cloud-based IT into its business model in a more structured way, one that has given proper consideration to all of its requirements.

1. Stay in control of your business in the era of on-demand cloud services

With the proliferation of cloud services available today, shadow IT (that is to say, the unsanctioned use of cloud services inside an organization) is a growing problem which, if left unchecked, runs the risk of getting out of hand. Control it, before it controls you.

2. Better prepared infrastructure

Using a structured approach lets you fully consider the potential ramifications and benefits that present themselves. By properly mapping all of the requirements of its infrastructure, network, storage and compute resources, a business will manage its migration with greater efficiency.

3. Increased benefit

Whether it’s the change in consumption models from owned to “pay-as-you-go” and the resulting shift from CAPEX to OPEX, or the potential for greater flexibility, efficiency and choice offered by the cloud over so-called traditional IT, it seems obvious that having a well thought out strategy will magnify the value of these benefits.

4. Increased opportunity

Having a solid strategy is also likely to make for a bigger opportunity, as an organization maps out its business and carefully thinks through the possibilities before making any changes.

Strategy formulation

Creating a migration strategy requires mapping out the suitability of all existing applications, weighing up their value against the costs and savings cloud services may offer and choosing which to prioritize.

1. Evaluate your applications

The first step is a business-wide evaluation of all existing applications, categorized by two factors: business value and flexibility, with business value equating to the place and importance an asset holds in an organization, and flexibility meaning its suitability for migration. The evaluation should seek to understand how applications are deployed, how critical they are and whether moving them to the cloud will be cost effective or not.

2. Choose the right cloud model

The second is to determine the right cloud model for your requirements.

Private clouds, whether owned or leased, consist of closed IT infrastructures accessible only to a single business, which then makes resources available to its own internal customers. Private clouds are often home to core applications where control is essential to the business. They can also offer economies of scale where companies can afford larger, long-term investments and have the ability either to run these environments themselves or to pay for a managed service. Private cloud investments tend to operate on a CAPEX model.

Public clouds are shared platforms for services made available by third parties to their customers on a pay-as-you go basis. Public cloud environments are best suited to all but the most critical and expensive applications to run. They offer the significant benefit of not requiring large upfront capital investments because they operate on an OPEX model.

Hybrid clouds are made up of a mix of both types of resources working together across secured, private network connections. They can offer the benefits of both models but run the risk of additional complexity and can lessen the benefits of working at scale.

Why is a clear strategy vital for your business today?

The benefits of the cloud are real. Maximizing their value is therefore key to any business looking to leverage them. And the best way to do that is to have a well thought out strategy.

Azure reference architecture

Originally posted here on Lucian’s blog, clouduccino.com. Follow Lucian on Twitter @LucianFrango. Like the Facebook page here.


tl;dr

  • What is a reference architecture
    • My definition of a reference architecture
  • I stop using the word architecture after the first 3 paragraphs – word overkill
  • What are some important topics to cover in said document
  • Is it easy to write? NO
  • Final words – don’t jump into Azure without a reference architecture

I’m not going to lie to you. This is not a quick topic to write about. When it comes to Azure, you absolutely, 100% cannot dive straight in and consume services if you’re planning on doing that for pretty much any size organisation. The only way this could be averted is in a development environment, or a home lab. Period.

Without order nothing exists.

-someone awesome

This is where an Azure reference architecture comes in. Let’s define a reference architecture, or most commonly a reference architecture document (or series of documents):

Within IT: A reference architecture is a set of standards, best practices and guidelines for a given architecture that architects, consultants, administrators or managers refer to when making decisions on future implementations in that environment.

Since I think I’ve reached the word quota limit for “architect” or “architecture”, I will attempt to limit the use of those from this point forward. If necessary, I’ll refer to either of those as just the “a-word“.

Read More

Secure Azure Virtual Network Defense In Depth using Network Security Groups, User Defined Routes and Barracuda NG Firewall

Security Challenge on Azure

There are a few common security-related questions when we start planning a migration to Azure:

  • How can we restrict the ingress and egress traffic on Azure?
  • How can we route the traffic on Azure?
  • Can we have a firewall kit, Intrusion Prevention System (IPS), Network Access Control, Application Control and Anti-Malware on an Azure DMZ?

The intention of this blog post is to answer the above questions using the following Azure features, combined with a security virtual appliance available on the Azure Marketplace:

  • Azure Virtual Network (VNET)
  • Azure Network Security Groups (NSGs)
  • Azure Network Security Rule
  • Azure Forced Tunnelling
  • Azure Route Table
  • Azure IP Forwarding
  • Barracuda NG Firewall available on Azure Marketplace

One of the most common attacker profiles is the Script Kiddie (also known as skiddie, script bunny or script kitty). Script kiddie attacks have always been among the most frequent, and they still are. However, attacks have evolved into something more advanced, sophisticated and far more organized. The diagram below illustrates the evolution of attacks:

evolution of attacks

 

The main target of attacks, from the lowest level of sophistication to the most advanced, is our data. Data loss = financial loss. We share the responsibility of securing our cloud environment with our cloud provider. This blog post will focus on the Azure environment.

Defense in Depth

According to the SANS Institute, defense in depth is the concept of protecting a computer network with multiple layers of defensive mechanisms. There are various defensive mechanisms and countermeasures available to protect our Azure environment, because there are many possible attack scenarios and attack methods.

In this post we will use a combination of Azure Network Security Groups to establish the Security Zones discussed in my previous blog post, deploy a network firewall including an Intrusion Prevention System on our Azure network to add a further layer of security, and route the traffic through our security kit. In the Secure Azure Network blog post we learned how to establish a simple Security Zone on our Azure VNET. The underlying concept behind the zone model is an increasing level of trust from the outside into the center. On the outside is the Internet, the zero-trust zone where the script kiddies and other attackers reside.

The diagram below illustrates the simple scenario we will implement on this post:

Barracuda01

There are four main configurations we need to complete in order to establish the solution shown in the diagram above:

  • Azure VNET Configuration
  • Azure NSG and Security Rules
  • Azure User Defined Routes and IP Forwarding
  • Barracuda NG Firewall Configuration

In this post we will focus on the last two items. This tutorial link will assist readers in creating an Azure VNET, and my previous blog post will assist readers in establishing Security Zones using Azure NSGs.

Barracuda NG Firewall

The Barracuda NG Firewall fills the functional gaps between cloud infrastructure security and a Defense-in-Depth strategy by providing protection where our applications and data reside on Azure, rather than solely where the connection terminates.

The Barracuda NG Firewall can intercept all Layer 2 through 7 traffic and apply policy-based controls, authentication, filtering and other capabilities. Just like its physical counterpart, the Barracuda NG Firewall running on Azure has traffic management capabilities and bandwidth optimizations.

The main features:

  • PAYG – Pay as you go / BYOL – Bring your own license
  • ExpressRoute Support
  • Network Firewall
  • VPN
  • Application Control
  • IDS – IPS
  • Anti-Malware
  • Network Access Control Management
  • Advanced Threat Detection
  • Centralized Management

The above features are necessary to establish a virtual DMZ in Azure and implement our Defense-in-Depth and Security Zoning strategy.

Choosing the right size of Barracuda NG Firewall will determine the level of support and throughput available to our Azure environment. The datasheet with the details can be found here.

I wrote a handy little script to deploy the Barracuda NG Firewall Azure VM with two additional Ethernet interfaces:
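A rough sketch of such a deployment, assuming the classic (Service Management) Azure PowerShell cmdlets and using placeholder service, VNET, subnet, image and IP values (verify cmdlet and parameter names against your installed module version):

# Sketch only: all names, sizes and IP addresses below are placeholders
# Find a Barracuda NG Firewall image in the Azure Marketplace image list
$image = Get-AzureVMImage | Where-Object { $_.Label -like "*Barracuda NG Firewall*" } | Select-Object -First 1

# Build the VM configuration with a primary NIC plus two extra NICs (the instance size must support multiple NICs)
$vm = New-AzureVMConfig -Name "BarracudaNG01" -InstanceSize "ExtraLarge" -ImageName $image.ImageName |
      Add-AzureProvisioningConfig -Linux -LinuxUser "azureuser" -Password "Placeholder-P@ssw0rd" |
      Set-AzureSubnet -SubnetNames "Frontend" |
      Set-AzureStaticVNetIP -IPAddress "192.168.0.54" |
      Add-AzureNetworkInterfaceConfig -Name "Ethernet1" -SubnetName "Frontend" -StaticVNetIPAddress "192.168.0.55" |
      Add-AzureNetworkInterfaceConfig -Name "Ethernet2" -SubnetName "Frontend" -StaticVNetIPAddress "192.168.0.56"

# Create the VM in a new cloud service attached to the VNET
New-AzureVM -ServiceName "kloud-barracuda" -Location "Southeast Asia" -VNetName "MyVNET" -VMs $vm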

User Defined Routes in Azure

Azure allows us to redefine the routing in our VNET, which we will use to redirect traffic through our Barracuda NG Firewall. We will enable IP forwarding for the Barracuda NG Firewall virtual appliance and then create and configure a routing table for the backend networks so that all traffic is routed through the Barracuda NG Firewall.

There are some points to note when using the Barracuda NG Firewall on Azure:

  • User-defined routing at the time of writing cannot be used for two Barracuda NG Firewall units in a high availability cluster
  • After the Azure routing table has been applied, the VMs in the backend networks are only reachable via the NG Firewall. This also means that existing Endpoints allowing direct access no longer work

Step 1: Enable IP Forwarding for Barracuda NG Firewall VM

In order to forward traffic, we must enable IP forwarding on the primary network interface and on the other network interfaces (Ethernet 1 and Ethernet 2) of the Barracuda NG Firewall VM.

Enable IP Forwarding:
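For example, a minimal sketch using the classic cmdlets (the cloud service and VM names are placeholders):

# Sketch only: enable IP forwarding on the VM's primary network interface
Get-AzureVM -ServiceName "kloud-barracuda" -Name "BarracudaNG01" | Set-AzureIPForwarding -Enable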

Enable IP Forwarding on Ethernet 1 and Ethernet 2:
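And similarly for the secondary interfaces, assuming the same cmdlet accepts a network interface name parameter (check Get-Help Set-AzureIPForwarding on your module version):

# Sketch only: the -NetworkInterfaceName parameter is an assumption; verify against your installed module
Get-AzureVM -ServiceName "kloud-barracuda" -Name "BarracudaNG01" |
    Set-AzureIPForwarding -NetworkInterfaceName "Ethernet1" -Enable
Get-AzureVM -ServiceName "kloud-barracuda" -Name "BarracudaNG01" |
    Set-AzureIPForwarding -NetworkInterfaceName "Ethernet2" -Enable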

On the Azure networking side, our Azure Barracuda NG Firewall VM is now allowed to forward IP packets.

Step 2: Create Azure Routing Table

By creating a routing table in Azure, we will be able to redirect all Internet outbound connectivity from Mid and Backend subnets of the VNET to the Barracuda NG Firewall VM.

Firstly, create the Azure Routing Table:
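For example (a sketch with placeholder names; verify cmdlet names against your classic Azure PowerShell module):

# Sketch only: create a route table for the Mid/Backend subnets
New-AzureRouteTable -Name "MidBackendRouteTable" -Location "Southeast Asia" -Label "Routes all traffic via the NG Firewall"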

Next, we need to add the Route to the Azure Routing Table:
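A sketch of adding a default route whose next hop is the NG Firewall's primary interface (parameter names follow the classic user-defined routing cmdlets of that era and should be verified):

# Sketch only: send all traffic (0.0.0.0/0) to the NG Firewall as a virtual appliance
Set-AzureRoute -RouteTable "MidBackendRouteTable" -RouteName "DefaultViaNGFirewall" `
    -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress "192.168.0.54"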

As we can see, the next hop IP address for the default route is the IP address of the default network interface of the Barracuda NG Firewall (192.168.0.54). We have two extra network interfaces which can be used for other routing (192.168.0.55 and 192.168.0.55).

Lastly, we will need to assign the Azure Routing Table we created to our Mid or Backend subnet.
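A sketch of that association, with placeholder VNET and subnet names:

# Sketch only: associate the route table with the Mid (or Backend) subnet
Set-AzureSubnetRouteTable -VirtualNetworkName "MyVNET" -SubnetName "Mid" -RouteTableName "MidBackendRouteTable"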

Step 3: Create Access Rules on the Barracuda NG Firewall

By default all outgoing traffic from the mid or backend is blocked by the NG Firewall. Create an access rule to allow access to the Internet.

Download Barracuda NG Admin to manage our Barracuda NG Firewall running on Azure, and log in to the Barracuda NG Admin console:

barra01

 

Create a PASS access rule:

  • Source – Enter our mid or backend subnet
  • Service – Select Any
  • Destination – Select Internet
  • Connection – Select Dynamic SNAT
  • Click OK and place the access rule higher than other rules blocking the same type of traffic
  • Click Send Changes and Activate

barra02

Our VMs in the mid or backend subnet can now access the Internet via the Barracuda NG Firewall. RDP to my VM sitting in the Mid subnet (192.168.1.4) and browse to Google.com:

barra03

Let’s have a quick look at Barracuda NG Admin Logs 🙂

barra04

And we are good to go, using the same method to configure the rest and protect our Azure environment:

  • Backend traffic passes through our Barracuda NG Firewall before hitting the Mid tier, and vice versa
  • Mid traffic passes through our Barracuda NG Firewall before hitting the Frontend, and vice versa

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

 

 

 

AWS Direct Connect in Australia via Equinix Cloud Exchange

I discussed Azure ExpressRoute via Equinix Cloud Exchange (ECX) in my previous blog post. In this post I am going to focus on AWS Direct Connect, which ECX also provides. This means you can share the same physical link (1Gbps or 10Gbps) between Azure and AWS!

ECX also provides a connectivity service to AWS for connection speeds of less than 1Gbps. AWS Direct Connect provides dedicated, private connectivity between your WAN or datacenter and AWS services such as AWS Virtual Private Cloud (VPC) and AWS Elastic Compute Cloud (EC2).

AWS Direct Connect via Equinix Cloud Exchange is Exchange (IXP) provider based, allowing us to extend our infrastructure with connectivity that is:

  • Private: The connection is dedicated and bypasses the public Internet, which means better performance, increased security, consistent throughput and support for hybrid cloud use cases (even hybrid with Azure, when both connections use Equinix Cloud Exchange)
  • Redundant: If we configure a second AWS Direct Connect connection, traffic will fail over to the second link automatically. Enabling Bidirectional Forwarding Detection (BFD) is recommended when configuring your connections to ensure fast detection and failover. AWS does not offer any SLA at the time of writing
  • High Speed and Flexible: ECX provides a flexible range of speeds: 50, 100, 200, 300, 400 and 500Mbps

The only tagging mechanism supported by AWS Direct Connect is 802.1Q (Dot1Q). AWS always uses 802.1Q (Dot1Q) on the Z-side of ECX.

ECX pre-requisites for AWS Direct Connect

The pre-requisites for connecting to AWS via ECX:

  • Physical ports on ECX. Two physical ports on two separate ECX chassis are required if redundancy is needed.
  • Virtual Circuits on ECX. Two virtual circuits are also required for redundancy

Buy-side (A-side) Dot1Q and AWS Direct Connect

The following diagram illustrates the network setup required for AWS Direct Connect using Dot1Q ports on ECX:

AWSDot1Q

The Dot1Q VLAN tag on the A-side is assigned by the buyer (A-side). The Dot1Q VLAN tag on the seller side (Z-side) is assigned by AWS.

There are a few steps to note when configuring AWS Direct Connect via ECX:

  • We need our AWS Account ID to request ECX Virtual Circuits (VCs)
  • Create separate Virtual Interfaces (VIs) for Public and Private Peering in the AWS Management Console. We need two ECX VCs and two AWS VIs for redundant Private or Public Peering.
  • We can accept the Virtual Connection either from the ECX Portal after requesting the VCs or in the AWS Management Console.
  • Configure our on-premises edge routers for BGP sessions. We can download the router configuration to use for our BGP sessions from the AWS Management Console
    awsdot1q_a
  • Attach the AWS Virtual Gateway (VGW) to the Route Table associated with our VPC
  • Verify the connectivity (see the sketch below).
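A minimal verification sketch using the AWS Tools for PowerShell; the route table and gateway IDs are placeholders, and cmdlet availability should be checked against your installed module version:

# Sketch only: list Direct Connect connections and virtual interfaces and check their state
Get-DCConnection
Get-DCVirtualInterface

# Propagate routes learned from the Virtual Private Gateway into the VPC route table (IDs are placeholders)
Enable-EC2VgwRoutePropagation -RouteTableId "rtb-0123456789abcdef0" -GatewayId "vgw-0123456789abcdef0"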

Please refer to the AWS Direct Connect User Guide on how to configure edge routers for BGP sessions. Once we have configured the above we will need to make sure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Azure ExpressRoute in Australia via Equinix Cloud Exchange

Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network (NSP) based and Exchange (IXP) based with each allowing us to extend our infrastructure by providing connectivity that is:

  • Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
  • Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active – active router configuration.
  • High Speed: speeds differ between NSP and IXP connections, but go from 10Mbps up to 10Gbps. ECX provides three choices of virtual circuit speeds in Australia: 200Mbps, 500Mbps and 1Gbps.

Microsoft provided a handy table comparison between all different types of Azure connectivity on this blog post.

ExpressRoute with Equinix Cloud Exchange

Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers which includes Microsoft Azure. ECX’s main features are:

  • On Demand (once you’re signed up)
  • One physical port supports many Virtual Circuits (VCs)
  • Available Globally
  • Supports 1Gbps and 10Gbps fibre-based Ethernet ports. Azure supports virtual circuits of 200Mbps, 500Mbps and 1Gbps
  • Orchestration using API for automation of provisioning which provides almost instant provisioning of a virtual circuit.

We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS DirectConnect. This is supported as long as we use the same tagging mechanism based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the Sell side (Z-side) to connect to ECX.

ECX pre-requisites for Azure ExpressRoute

The pre-requisites for connecting to Azure, regardless of the tagging mechanism, are:

  • Two Physical ports on two separate ECX chassis for redundancy.
  • A primary and secondary virtual circuit per Azure peer (public or private).

Buy-side (A-side) Dot1Q and Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:

Dot1Q setup

Tags on the Primary and Secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-Side use one VLAN tag per circuit request. This VLAN tag should be the same VLAN tag used when setting up the Private or Public BGP sessions on Azure using Azure PowerShell.

There are a few things to note when using Dot1Q in this context:

  1. The same Service Key can be used to order separate VCs for private or public peerings on ECX.
  2. Order a dedicated Azure circuit using the Azure PowerShell Cmdlet (shown below) and obtain the Service Key, then use this to raise virtual circuit requests with Equinix: https://gist.github.com/andreaswasita/77329a14e403d106c8a6

    Get-AzureDedicatedCircuit returns the following output.

    As we can see the status of ServiceProviderProvisioningState is NotProvisioned.

    Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.

  3. Two physical ports need to be provisioned for redundancy on ECX – you will get the notification from Equinix NOC engineers once the physical ports have been provisioned.
  4. Submit one virtual circuit request for each of the private and public peers on the ECX Portal. Each request needs a separate VLAN ID along with the Service Key. Go to the ECX Portal and submit one request for private peering (2 VCs – Primary and Secondary) and one request for public peering (2 VCs – Primary and Secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned (a PowerShell sketch follows below).
    expressroute03
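A rough sketch of ordering the circuit and checking its status with the classic ExpressRoute PowerShell module; the circuit name, bandwidth and location are placeholders, and parameter names should be verified against your module version:

# Sketch only: order a dedicated circuit through Equinix and capture the Service Key
$circuit = New-AzureDedicatedCircuit -CircuitName "KloudECXCircuit" -ServiceProviderName "Equinix" -Bandwidth 1000 -Location "Sydney"
$circuit.ServiceKey

# Re-check the circuit after the ECX virtual circuits have been provisioned
Get-AzureDedicatedCircuit -ServiceKey $circuit.ServiceKey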

Next we need to configure BGP for exchanging routes between our on-premises network and Azure, but we will come back to this after a quick look at using QinQ with Azure ExpressRoute.

Buy-side (A-side) QinQ Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:

QinQ setup

C-TAGs identify private or public peering traffic on Azure, and the primary and secondary virtual circuits are set up across separate ECX chassis identified by unique S-TAGs. The A-Side buyer (us) can choose to use either the same or different VLAN IDs to identify the primary and secondary VCs. The same pair of primary and secondary VCs can be used for both private and public peering towards Azure. The inner tags identify whether the session is Private or Public.

The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:

  1. Submit only one request on the ECX Portal for both private and public peers. The same pair of primary and secondary virtual circuits can be used for both private and public peering in this setup.

Configuring BGP

ExpressRoute uses BGP for routing, and you require four /30 subnets: primary and secondary routes for both private and public peering. The IP prefixes for BGP cannot overlap with IP prefixes in either your on-premises or cloud environments. Example routing subnets and VLAN IDs:

  • Primary Private: 192.168.1.0/30 (VLAN 100)
  • Secondary Private: 192.168.2.0/30 (VLAN 100)
  • Primary Public: 192.168.1.4/30 (VLAN 101)
  • Secondary Public: 192.168.2.4/30 (VLAN 101)

The first available IP address of each subnet will be assigned to the local router and the second will be automatically assigned to the router on the Azure side.

To configure BGP sessions for both private and public peering on Azure use the Azure PowerShell Cmdlets as shown below.

Private peer:
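A sketch, assuming the classic ExpressRoute module and the subnets and VLAN IDs listed above (the ASN and shared key are placeholders; verify parameter names against your module version):

# Sketch only: create the private peering (VLAN 100) for the circuit ordered earlier
New-AzureBGPPeering -AccessType Private -ServiceKey $circuit.ServiceKey `
    -PrimaryPeerSubnet "192.168.1.0/30" -SecondaryPeerSubnet "192.168.2.0/30" `
    -VlanId 100 -PeerAsn 65000 -SharedKey "PlaceholderKey"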

Public peer:
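And similarly for the public peer, under the same assumptions:

# Sketch only: create the public peering (VLAN 101)
New-AzureBGPPeering -AccessType Public -ServiceKey $circuit.ServiceKey `
    -PrimaryPeerSubnet "192.168.1.4/30" -SecondaryPeerSubnet "192.168.2.4/30" `
    -VlanId 101 -PeerAsn 65000 -SharedKey "PlaceholderKey"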

Once we have configured the above we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Automate your Cloud Operations Part 2: AWS CloudFormation

Stacking the AWS CloudFormation

Part 1 of the Automate your Cloud Operations blog series gave us a basic understanding of how to automate an AWS stack using CloudFormation.

This post will show the reader how to layer a stack on top of an existing AWS CloudFormation stack, using AWS CloudFormation instead of modifying the base template. AWS resources can be added into an existing VPC by using the outputs describing the resources from the main VPC stack, instead of having to modify the main template.

This allows us to compartmentalize and separate out the components of our AWS infrastructure, and to version the infrastructure code for each component independently.
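As a rough illustration of this layering, assuming the AWS Tools for PowerShell and placeholder stack names, output keys, parameter keys and template URL, the outputs of the base VPC stack can be read and passed to a new stack as parameters:

# Sketch only: read outputs of the existing base VPC stack
$baseStack = Get-CFNStack -StackName "base-vpc-stack"
$vpcId    = ($baseStack.Outputs | Where-Object { $_.OutputKey -eq "VpcId" }).OutputValue
$subnetId = ($baseStack.Outputs | Where-Object { $_.OutputKey -eq "PublicSubnetId" }).OutputValue

# Feed those outputs into the layered (bastion) stack as parameters
$params = @(
    (New-Object Amazon.CloudFormation.Model.Parameter -Property @{ ParameterKey = "VpcId"; ParameterValue = $vpcId }),
    (New-Object Amazon.CloudFormation.Model.Parameter -Property @{ ParameterKey = "SubnetId"; ParameterValue = $subnetId })
)

New-CFNStack -StackName "bastion-stack" -TemplateURL "https://s3.example.com/templates/bastion.template" `
    -Parameter $params -Capability CAPABILITY_IAM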

Note: The template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

The diagram below helps to illustrate the concept:

CloudFormation3

Bastion Stack

Previously (in Part 1) we created the initial stack, which provides us with the base VPC. Next, we will provision the bastion stack, which will create a bastion host on top of our base VPC. Below are the components of the bastion stack:

  • Create an IAM user that can query information about the stack and has permissions to create KeyPairs and perform related actions.
  • Create the bastion host instance with an AWS Security Group that enables SSH access via port 22
  • Use CloudFormation Init to install packages, create files and run commands on the bastion host instance; it also takes the credentials created for the IAM user and sets them up to be used by the scripts
  • Use the EC2 UserData to run the cfn-init command that actually does the above via a bash script
  • A wait condition handle: completion of the instance depends on the scripts running properly; if the scripts fail, the CloudFormation stack will error out and fail

Below is the CloudFormation template to build the bastion stack:

Following are the high level steps to layer the bastion stack on top of the initial stack:

I put together the following video on how to use the template:

NAT Stack

It is important to design the VPC with security in mind. I recommend designing your Security Zones and network segregation; I have written a blog post on how to Secure an Azure Network, and the same approach can also be implemented in an AWS environment using VPCs, subnets and security groups. At the very minimum we will segregate the Private subnet and Public subnet in our VPC.

A NAT instance will be added to our initial VPC’s “public” subnets so that future private instances can use the NAT instance for communication outside the initial VPC. We will use exactly the same method as we did for the bastion stack.

The diagram below helps to illustrate the concept:

CloudFormation4

The components of the NAT stack:

  • An Elastic IP address (EIP) for the NAT instance
  • A Security Group for the NAT instance: allowing ingress TCP and UDP traffic on ports 0-65535 from the internal subnet; allowing egress TCP traffic on ports 22, 80, 443 and 9418 to anywhere, egress UDP traffic on port 123 to the Internet, and egress traffic on ports 0-65535 to the internal subnet
  • The NAT instance
  • A private route table
  • A private route using the NAT instance as the default route for all traffic

Following is the CloudFormation template to build the stack:

The steps to layer this stack are similar to the previous ones:

Hopefully, after reading Part 1 and Part 2 of this blog series, readers will have gained a basic understanding of how to automate AWS cloud operations using AWS CloudFormation.

Please contact Kloud Solutions if you need help with automating your AWS production environment.

http://www.wasita.net

Automate your Cloud Operations Part 1: AWS CloudFormation

Operations

What is Operations?

In the IT world, Operations refers to a team or department within IT which is responsible for the running of a business’ IT systems and infrastructure.

So what kind of activities does this team perform on a day-to-day basis?

Building, modifying, provisioning and updating systems, software and infrastructure to keep them available, performing and secure, which ensures that users can be as productive as possible.

When moving to public cloud platforms the areas of focus for Operations are:

  • Cost reduction: if we design it properly and apply good practices when managing it (scale down / switch off)
  • Smarter operation: Use of Automation and APIs
  • Agility: faster provisioning of infrastructure or environments by automating everything
  • Better Uptime: Plan for failover, and design effective DR solutions more cost effectively.

If Cloud is the new normal then Automation is the new normal.

For this blog post we will focus on automation using AWS CloudFormation. The template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

AWS CloudFormation

AWS CloudFormation provides developers, system administrators and DevOps an easy way to create and manage a collection of related AWS resources, including provisioning and updating them in an orderly and predictable fashion. AWS provides various CloudFormation templates, snippets and reference implementations.

Let’s talk about versioning before diving deeper into CloudFormation. It is extremely important to version your AWS infrastructure in the same way as you version your software. Versioning will help you to track changes within your infrastructure by identifying:

  • What changed?
  • Who changed it?
  • When was it changed?
  • Why was it changed?

You can tie this version to a service management or project delivery tools if you wish.

You should also put your templates into source control. Personally I am using Github to version my infrastructure code, but any system such as Team Foundation Server (TFS) will do.

AWS Infrastructure

The below diagram illustrates the basic AWS infrastructure we will build and automate for this blog post:

CloudFormation1

Initial Stack

Firstly we will create the initial stack. Below are the components for the initial stack:

  • A VPC with a CIDR block of 192.168.0.0/16 : 65,536 IPs
  • Three Public Subnets across 3 Availability Zones : 192.168.10.0/24, 192.168.11.0/24,  192.168.12.0/24
  • An Internet Gateway attached to the VPC to allow public Internet access. This is a routing construct for VPC and not an EC2 instance
  • Routes and Route tables for three public subnets so EC2 instances in those public subnets can communicate
  • Default Network ACLs to allow all communication inside of the VPC.

Below is the CloudFormation template to build the initial stack.

The template can be downloaded here: https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template

I put together the following video on how to use the template:

Understanding a CloudFormation template

AWS CloudFormation is pretty neat and FREE. You only need to pay for the AWS resources provisioned by the CloudFormation template.

The next bit is understanding the structure of the template. Typically a CloudFormation template will have 5 sections:

  • Headers
  • Parameters
  • Mappings
  • Resources
  • Outputs

Headers: Example:

Parameters: values supplied at provision time, similar to command-line options. Example:

Mappings: conditional lookups, similar to case statements. Example:

Resources: All resources to be provisioned. Example:

Outputs: Example:
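As a rough illustration only (not the exact template used in this post), a minimal skeleton containing all five sections might look like the following, wrapped in a PowerShell here-string and validated with Test-CFNTemplate from the AWS Tools for PowerShell (the AMI ID is a placeholder):

# Sketch only: the Headers section maps to AWSTemplateFormatVersion and Description
$template = @"
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal skeleton showing the five template sections",

  "Parameters" : {
    "KeyName" : { "Type" : "AWS::EC2::KeyPair::KeyName", "Description" : "EC2 KeyPair for SSH access" }
  },

  "Mappings" : {
    "RegionMap" : { "ap-southeast-2" : { "AMI" : "ami-12345678" } }
  },

  "Resources" : {
    "DemoInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "AMI" ] },
        "InstanceType" : "t2.micro",
        "KeyName" : { "Ref" : "KeyName" }
      }
    }
  },

  "Outputs" : {
    "InstanceId" : { "Value" : { "Ref" : "DemoInstance" } }
  }
}
"@

# Validate the template syntax against the CloudFormation service
Test-CFNTemplate -TemplateBody $template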

Note: not all AWS resources can be provisioned using AWS CloudFormation, and the service has some limitations.

In Part 2 we will dive deeper into AWS CloudFormation and automate the EC2 layer, including the configuration of the NAT and bastion host instances.

http://www.wasita.net

Secure Azure Virtual Network and create DMZ on Azure VNET using Network Security Groups (NSG)

At TechEd Europe 2014, Microsoft announced the General Availability of Network Security Groups (NSGs), which add a security feature to Azure’s Virtual Networking capability. Network Security Groups provide access control on Azure Virtual Networks, a feature that is very compelling from a security point of view and one that Enterprise customers have been waiting for.

What are Network Security Groups and how to use them?

Network Security Groups allow us to control traffic (ingress and egress) on our Azure VNET using rules we define, and provide segmentation within a VNET by applying Network Security Groups to our subnets as well as access control to VMs.

What’s the difference between Network Security Groups and Azure endpoint-based ACLs? Azure endpoint-based ACLs work only on a VM’s public port endpoints. NSGs can work on one or more VMs and control all ingress and egress traffic on the VM. In addition, an NSG can be associated with a subnet and thereby with all VMs in that subnet.

NSG Features and Constraints

NSG features and constraints are as follows:

  • 100 NSGs per Azure subscription
  • One VM / Subnet can only be associated with One NSG
  • One NSG can contain up to 200 Rules
  • A Rule has the following characteristics:
    • Name
    • Type: Inbound/Outbound
    • Priority: Integer between 100 and 4096
    • Source IP Address: CIDR of Source IP Range
    • Source Port Range: Range between 0 and 65000
    • Destination IP Range: CIDR of Destination IP Range
    • Destination Port Range: Integer or Range between 0 and 65000
    • Protocol: TCP, UDP or use * for Both
    • Access: Allow/Deny
    • Rules are processed in order of priority. A rule with a lower priority number is processed before rules with higher priority numbers.

Security Zone Model

Designing isolated Security Zones within an Enterprise network is an effective strategy for reducing many types of risk, and this applies in Azure as well. We need to work together with Microsoft, as our cloud vendor, to secure our Azure environment, and our on-premises knowledge of creating Security Zone models can be applied to our Azure environment.

As a demonstration I will pick the simplest Security Zone model and apply it to my test Azure environment, just to get some idea of how NSGs work. I will create a 3-layer Security Zone model for my test Azure environment. This simple security zone model is for demo purposes only and might not be suitable for your Enterprise environment.

  • Internet = Attack Vectors / Un-trusted
  • Front-End = DMZ
  • App / Mid-Tier = Trusted Zone
  • DB / Back-end = Restricted Zone

Based on the Security Zone model above, I created my test Azure VNET:

Azure VNET: SEVNET
Address Space: 10.0.0.0/20
Subnets:
Azure-DMZ – 10.0.2.0/25
Azure-App – 10.0.0.0/25
Azure-DB – 10.0.1.0/25

Multi Site Connectivity to: EUVNET (172.16.0.0/16) and USVNET (192.168.0.0/20).

The diagram below illustrates the above scenario:

Security Zone 1

Lock ‘Em Down

After we have decided on our simple Security Zone model, it’s time to lock the zones down and secure them.

The diagram below illustrates how the traffic flow will be configured:

Security Zone 2

At a high level the traffic flow is as follows:

  • Allow Internet ingress and egress on DMZ
  • Allow DMZ – App ingress and egress
  • Allow App – DB ingress and egress
  • Deny DMZ-DB ingress and egress
  • Deny App-Internet ingress and egress
  • Deny DB-Internet ingress and egress
  • Deny EUVNET-DB on SEVNET ingress and egress
  • Deny USVNET-DB on SEVNET ingress and egress

The section below shows examples of what the Azure NSG rules tables will look like.

NSG Rules Table

Azure DMZ NSG Rules Table

Name Source IP Source Port Destination IP Destination Port Protocol Type Action Priority
RDPInternet-DMZ * 63389 10.0.2.0/25 63389 TCP Inbound Allow 347
Internet-DMZSSL * 443 10.0.2.0/25 443 TCP Inbound Allow 348
Internet-DMZDRS * 49443 10.0.2.0/25 49443 TCP Inbound Allow 349
USVNET-DMZ 192.168.0.0/20 * 10.0.2.0/25 * * Inbound Allow 400
EUVNET-DMZ 172.16.0.0/16 * 10.0.2.0/25 * * Inbound Allow 401
DMZ-App 10.0.2.0/25 * 10.0.0.0/25 * * Outbound Allow 500
DMZ-DB 10.0.2.0/25 * 10.0.1.0/25 * * Outbound Deny 600
Allow VNET Inbound Virtual_Network * Virtual_Network * * Inbound Allow 65000
Allow Azure Internal Load Balancer Inbound Azure_LoadBalancer * * * * Inbound Allow 65001
Deny All Inbound * * * * * Inbound Deny 65500
Allow VNET Outbound Virtual_Network * Virtual_Network * * Outbound Allow 65000
Allow Internet Outbound * * INTERNET * * Outbound Allow 65001
Deny All Outbound * * * * * Outbound Deny 65500

Azure App NSG Rules Table

Name Source IP Source Port Destination IP Destination Port Protocol Type Action Priority
DMZ-App 10.0.2.0/25 * 10.0.0.0/25 * * Inbound Allow 348
USVNET-App 192.168.0.0/20 * 10.0.0.0/25 * * Inbound Allow 400
EUVNET-App 172.16.0.0/16 * 10.0.0.0/25 * * Inbound Allow 401
App-DMZ 10.0.0.0/25 * 10.0.2.0/25 * * Outbound Allow 500
App-DB 10.0.0.0/25 * 10.0.1.0/25 * * Outbound Allow 600
App-Internet 10.0.0.0/25 * INTERNET * * Outbound Deny 601
Allow VNET Inbound Virtual_Network * Virtual_Network * * Inbound Allow 65000
Allow Azure Internal Load Balancer Inbound Azure_LoadBalancer * * * * Inbound Allow 65001
Deny All Inbound * * * * * Inbound Deny 65500
Allow VNET Outbound Virtual_Network * Virtual_Network * * Outbound Allow 65000
Allow Internet Outbound * * INTERNET * * Outbound Allow 65001
Deny All Outbound * * * * * Outbound Deny 65500

Azure DB NSG Rules Table

Name Source IP Source Port Destination IP Destination Port Protocol Type Action Priority
App-DB 10.0.0.0/25 * 10.0.1.0/25 * * Inbound Allow 348
USVNET-DB 192.168.0.0/20 * 10.0.1.0/25 * * Inbound Deny 400
EUVNET-DB 172.16.0.0/16 * 10.0.1.0/25 * * Inbound Deny 401
DB-DMZ 10.0.1.0/25 * 10.0.2.0/25 * * Outbound Deny 500
DB-App 10.0.1.0/25 * 10.0.0.0/25 * * Outbound Allow 600
DB-Internet 10.0.1.0/25 * INTERNET * * Outbound Deny 601
Allow VNET Inbound Virtual_Network * Virtual_Network * * Inbound Allow 65000
Allow Azure Internal Load Balancer Inbound Azure_LoadBalancer * * * * Inbound Allow 65001
Deny All Inbound * * * * * Inbound Deny 65500
Allow VNET Outbound Virtual_Network * Virtual_Network * * Outbound Allow 65000
Allow Internet Outbound * * INTERNET * * Outbound Allow 65001
Deny All Outbound * * * * * Outbound Deny 65500

The tables above give us some idea of how to plan our Azure NSGs in order to establish our Security Zones.

Get Started using NSGs!

At the time this post was written, NSGs are exposed only through PowerShell and the REST API. To use PowerShell, we need version 0.8.10 of the Azure PowerShell module.

The commands are as follows:

  • Get-AzureNetworkSecurityGroup
  • Get-AzureNetworkSecurityGroupConfig
  • Get-AzureNetworkSecurityGroupForSubnet
  • New-AzureNetworkSecurityGroup
  • Remove-AzureNetworkSecurityGroup
  • Remove-AzureNetworkSecurityGroupConfig
  • Remove-AzureNetworkSecurityGroupFromSubnet
  • Remove-AzureNetworkSecurityRule
  • Set-AzureNetworkSecurityGroupConfig
  • Set-AzureNetworkSecurityGroupToSubnet
  • Set-AzureNetworkSecurityRule

Here are some examples:
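A minimal sketch of what such examples might look like, using the classic (Service Management) cmdlets listed above; the NSG name, location and label are placeholders, and the rule mirrors the Internet-DMZSSL entry from the DMZ table (verify parameter names against your module version):

# Sketch only: create an NSG for the DMZ subnet of SEVNET
New-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" -Location "Southeast Asia" -Label "NSG for the Azure-DMZ subnet"

# Add the Internet-DMZSSL rule (values taken from the DMZ rules table above)
Get-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" |
    Set-AzureNetworkSecurityRule -Name "Internet-DMZSSL" -Type Inbound -Priority 348 -Action Allow `
        -SourceAddressPrefix "*" -SourcePortRange "443" `
        -DestinationAddressPrefix "10.0.2.0/25" -DestinationPortRange "443" -Protocol TCP

# Associate the NSG with the Azure-DMZ subnet of SEVNET
Get-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" |
    Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName "SEVNET" -SubnetName "Azure-DMZ"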

Personally, I will be recommending Azure NSGs for every Azure production deployment I perform in the future!

Leave a comment below or contact us if you have any questions regarding Azure especially in more complex Enterprise scenarios.

http://www.wasita.net