Building websites with Ionic Framework, Angular and Microsoft Azure App Services

The Ionic Framework (https://ionicframework.com/) is an Angular 4 based framework designed to build beautiful applications quickly and easily, targeting native platforms as well as Progressive Web Apps (PWAs). In this blog post, I'll walk through the steps to start your own Ionic PWA hosted on Azure App Services, which will then serve your application.

What is Microsoft Azure App Services?

Microsoft Azure is a cloud platform that lets you take server workloads you'd previously have hosted in your own data centre, or on a server somewhere, and run them in an environment offering massive scale and availability at an hourly rate. This suits our application well because it only needs to serve static HTML and assets, which carries very little CPU overhead; we'll host it on a free Azure plan and it will be fine. Azure App Services are Platform-as-a-Service web applications: your code, hosted on Azure's infrastructure. You don't worry about the operating system or the web server, only the code you want to write.

What is Angular?

Angular is a browser-based framework that allows complex single-page applications to be built and run using a common set of libraries and structure. Angular 4 is the current version of the framework and uses TypeScript as the language you write components in, along with SCSS for styling and HTML for templates.

What is Ionic Framework?

Ionic Framework takes the Angular framework and pairs it with Cordova so that web application developers can build native apps. This means you have one common language and framework set that everyone can use to develop apps, whether native or web based. It also recently added support for building applications as PWAs, or Progressive Web Apps, which are web-based applications that behave very much like native apps. It's this capability that we will take advantage of in this tutorial.

Prerequisites

You'll need a Microsoft Azure account to create the Web App. You should have some knowledge of git, TypeScript and web application development, but we'll step through most of it. You also need to install Node.js and npm (https://nodejs.org/en/download/), which you'll use to develop the application. You can check that Node.js and npm are working correctly by opening a terminal or command prompt and typing "npm -v", which will show you the current version of npm.

Steps to take

  1. First you need to install the Ionic Framework and Cordova on your machine. You do this by opening a command prompt or terminal window and running: "npm install -g ionic cordova"
  2. Once you've done this, you'll be able to run the command "ionic", which prints the Ionic CLI help and confirms the install worked.
  3. Once this is done, you will need to create a directory and create your Ionic app. From your newly created directory, run "ionic start myApp blank". You'll be asked if you want to integrate your new app with Cordova, which means you'd like to make a native application. In this instance we don't (we're creating a PWA), so type "n" and press enter. This will download and install the required code for your application. Wait patiently – this will take a few minutes.
  4. Once you’ve seen the success message, then your app is ready to serve locally. Change directory to “./myApp” and then run “ionic serve” and you should see your browser open with your app running. If you get an error saying “Sorry! ionic serve can only be run in an ionic project directory” you aren’t in the right folder.
  5. Now that your application is created, it's ready for you to develop. All you do is edit the code in your src folder and build what you need. There are great generators and assistants that you can use to structure your app.
  6. At this point we need to ready our code for production – this means we minify, AoT-compile and tree-shake unused code from the TypeScript, and remove debug source maps, to reduce the size of the application delivered to users. To do this we run "ionic build --prod", which produces our production-ready output.
  7. It's worth noting the "--prod" flag in the above build. This does the magic of reducing your code size. If you don't use it, the app will be several megabytes, as you will ship all of Angular and Ionic and their dependencies, most of which you won't need. Try checking the size of the "www" folder with and without the flag – mine went down from 11.1 MB to 2.96 MB.
  8. Our code is ready to commit to git. In fact, Ionic has already initialised the repository for you; there are only a few other items to check in – so run "git add ." and then git commit -m "initial build" and you'll be all good.
  9. The next step is to create your web app in Azure. Go to portal.azure.com and click Add -> Web App, then enter the details and choose your plan (note you will need to force the "free" plan in the settings).
  10. Once you've deployed, your app can be viewed at https://{yourwebappnamehere}.azurewebsites.net/ – in this case https://bradleytest.azurewebsites.net/.
  11. Now it's just a matter of getting our working Ionic code from our local machine (or build server, if you use continuous integration/delivery) to our application. I've got a remote GitHub repository I'm going to push this code to (https://github.com/bsmithb2/ionicdemo), so I'll run "git remote add origin https://github.com/bsmithb2/ionicdemo.git" and then "git push origin master". The whole command sequence is summarised below.
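
For reference, here is the full local command sequence from the steps above in one place (the app name, repository URL and commit message are just the examples used in this post):

    npm install -g ionic cordova
    ionic start myApp blank          # answer "n" to the Cordova prompt - we are building a PWA
    cd ./myApp
    ionic serve                      # run the app locally while developing
    ionic build --prod               # minified, AoT-compiled production output lands in ./www
    git add .
    git commit -m "initial build"
    git remote add origin https://github.com/bsmithb2/ionicdemo.git
    git push origin master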

Connecting git to Microsoft Azure

In this part, we'll use git to connect to Microsoft Azure and continuously build and deploy using Visual Studio Team Services.

  1. Go to your web application you created in the Microsoft Azure portal, and then choose the “Continuous Delivery (preview)” menu option.
  2. Choose your source control repository (GitHub in my case) in the first stage.
  3. Now select Build, then configure Continuous Delivery. This will set up your build in Visual Studio Team Services. You'll need to select Node.js as your build type. It will take a few minutes to set up the build and perform the first build and deploy. At this stage your app won't work, but don't worry – we'll fix that next.
  4. Once your build is set up, click on "Build Definition". We need to make a change here, as the build isn't yet running the npm production build, and the folder we want to package and deploy is actually the "www" subdirectory.
  5. In the build process, add a new task – choose npm. Change the Command to "Custom" and then add "npm build --prod" to the arguments. This matches the build you did with "ionic build --prod" in step 6.
  6. Once done, click the "Archive files" task, and add "/www" to the Root folder (or files) to archive. This tells VSTS to only package our output directory, which is all we need.
  7. Save the build. You can queue a build now if you like, or wait and queue one once we’ve tweaked the release.
  8. Go to Releases, then choose your release (if you are unsure which, there is a "Release Definition" link in the Azure Portal near the Build Definition one).
  9. Turn off the web.config creation in File Transforms & Variable Substitution Options.
  10. Turn on the “Remove Additional Files” setting in Additional Deployment Options.
  11. Save the Release Definition.
  12. At this point, you can trigger a build. It should take a few minutes. You can do this in VSTS, or alternatively change a file in your local git repository and push to GitHub.
  13. Once the build has completed, open your web application again, and you’ll see your Ionic application!


Conclusion

Ionic is a great solution for building cross-platform, responsive, mobile applications. Serving these applications is incredibly easy using Visual Studio Team Services and Microsoft Azure. We get great benefits in separation of concerns, while our scalability, security and cost management processes stay simple because we've only deployed our consumer-side code to this service, and its secured and managed infrastructure saves us time and risk. In this tutorial, we've built our first Ionic application, pushed it to GitHub and then set up our continuous delivery system in a few easy steps.

Running Containers on Azure

Running Containers in public cloud environments brings advantages beyond the realm of "fat" virtual machines: easy deployments through a registry of Images, better use of resources and orchestration are but a few examples.

Azure is embracing containers in a big way (Brendan Burns, one of the primary instigators of Kubernetes while at Google, joined Microsoft last year, which might have contributed to it!).

Running Containers nowadays is almost always synonymous with running an orchestrator which allows for automatic deployments of multi-Container workloads.

Here we will explore the use of Kubernetes and Docker Swarm which are arguably the two most popular orchestration frameworks for Containers at the moment. Please note that Mesos is also supported on Azure but Mesos is an entirely different beast which goes beyond the realm of just Containers.

This post is not about comparing the merits of Docker Swarm and Kubernetes, but rather a practical introduction to running both on Azure, as of August 2017.

VMs vs ACS vs Container Instances

When it comes to running containers on Azure, you have the choice of running them on Virtual Machines you create yourself or via the Azure Container Service which takes care of creating the underlying infrastructure for you.

We will explore both ways of doing things with a Kubernetes example running on ACS and a Docker Swarm example running on VMs created by Docker Machine. Note that at the time of writing, ACS does not support the relatively new Swarm mode of Docker, but things move extremely fast in the Container space… for instance, Azure Container Instances are a brand new service allowing users to run Containers directly, billed on a per-second basis.

In any case, both Docker Swarm and Kubernetes offer a powerful way of managing the lifecycle of Containerised application environments alongside storage and network services.

Kubernetes

Kubernetes is a one-stop solution for running a Container cluster and deploying applications to said cluster.

Although the WordPress example found in the official Kubernetes documentation is not specifically geared at Azure, it will run easily on it thanks to the level of standardisation Kubernetes (aka K8s) has achieved.

This example deploys a MySQL instance Container with a persistent volume to store data and a separate Container which combines WordPress and Apache, sporting its own persistent volume.

The PowerShell script below leverages the "az acs create" Azure CLI 2.0 command to spin up a two node Kubernetes cluster. Yes, one command is all it takes to create a K8s cluster! If you need ten agent nodes, just change the "--agent-count" value to 10.

It is invoked as follows:
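
The original invocation isn't reproduced here; as a sketch, with a hypothetical script name and resource group (only the azureSshKey and kubeWordPressLocation parameters are named in the post), it would look something like:

    .\New-AcsKubernetesWordPress.ps1 -ResourceGroupName "k8s-demo-rg" `
        -AzureSshKey "$HOME\.ssh\id_rsa" `
        -KubeWordPressLocation "C:\git\examples\mysql-wordpress-pd"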

The azureSshKey parameter points to a private SSH key (the corresponding public key must exist as well) and kubeWordPressLocation is the path to the git clone of the Kubernetes WordPress example.

Note that you need to have ssh connectivity to the Kubernetes Master i.e. TCP port 22.

Following the creation of the cluster, the script leverages kubectl (“az acs kubernetes install-cli” to install it) to deploy the aforementioned WordPress example.

“The” script:
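
The embedded script isn't reproduced here, but a minimal sketch along the same lines follows. The resource group, cluster name and file locations are placeholders, and the optional OMS agent installation is omitted; the essential parts are "az acs create", the kubectl install/credentials commands and the kubectl calls against the WordPress example.

    param(
        [string]$ResourceGroupName     = "k8s-demo-rg",
        [string]$ClusterName           = "k8s-demo",
        [string]$AzureSshKey           = "$HOME\.ssh\id_rsa",
        [string]$KubeWordPressLocation = "C:\git\examples\mysql-wordpress-pd"
    )

    # Resource group plus a two node Kubernetes cluster - one command does the heavy lifting
    az group create --name $ResourceGroupName --location eastus
    az acs create --orchestrator-type kubernetes `
        --resource-group $ResourceGroupName `
        --name $ClusterName `
        --agent-count 2 `
        --ssh-key-value "$AzureSshKey.pub"

    # Install kubectl and pull down the cluster credentials (needs SSH, i.e. TCP 22, to the master)
    az acs kubernetes install-cli
    az acs kubernetes get-credentials --resource-group $ResourceGroupName --name $ClusterName --ssh-key-file $AzureSshKey

    # Deploy the WordPress example: the MySQL password secret plus the two deployments and services
    # (depending on the revision of the example you may also need to create persistent volumes first)
    kubectl create secret generic mysql-pass --from-literal=password=ChangeMe123!
    kubectl create -f "$KubeWordPressLocation\mysql-deployment.yaml"
    kubectl create -f "$KubeWordPressLocation\wordpress-deployment.yaml"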

Note that if you specify an OMS workspace ID and key, the script will install the OMS agent on the underlying VMs to monitor the infrastructure and the Containers (more on this later).

Once the example has been deployed, we can check the status of the cluster using the following command (be patient as it takes nearly 20 minutes for the service to be ready):
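
The original screenshot showed the output of a standard service listing, along these lines:

    kubectl get services --watch    # wait until the wordpress service shows an EXTERNAL-IP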


Note the value for EXTERNAL-IP (here 13.82.180.10). Entering this value in the location bar of your browser once both containers are running gives you access to the familiar WordPress install screen.


This simple K8s visualizer, written a while back by Brendan Burns, shows the various containers running on the cluster.


Docker Swarm

Docker Swarm is the "other" Container orchestrator, delivered by Docker themselves. Since Docker version 1.12, the so-called "Swarm mode" does not require any discovery service like Consul, as discovery is handled by the Swarm itself. As mentioned in the introduction, Azure Container Service does not support Swarm mode yet, so we will run our Swarm on VMs created by Docker Machine.

It is now possible to use Docker Compose against Docker Swarm in Swarm mode in order to deploy complex applications making the trifecta Swarm/Machine/Compose a complete solution competing with Kubernetes.


The new-ish Docker Swarm mode handles service discovery itself

As with the Kubernetes example, I have created a script which automates the steps to create a Swarm and then deploys a simple nginx container on it.

As Swarm mode is not supported by Azure ACS yet, I have leveraged Docker Machine, the Docker-sanctioned tool for creating "docker-ready" VMs in all kinds of public or private clouds.

The following PowerShell script (easily translatable to a Linux flavour of shell) leverages the Azure CLI 2.0, Docker Machine and Docker to create a Swarm and deploy an nginx Container with an Azure load balancer in front. Docker Machine and Docker were installed on my Windows 10 desktop using Chocolatey.
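
Again the embedded script isn't reproduced here; a trimmed-down sketch of the same approach follows. The resource group, VM names and location are placeholders and the Azure load balancer plumbing is left out – the essential parts are the Docker Machine Azure driver, "docker swarm init/join" and "docker service create".

    $subscriptionId = az account show --query id -o tsv

    # Two docker-ready VMs in Azure via Docker Machine
    foreach ($node in "swarm-master", "swarm-worker1") {
        docker-machine create --driver azure `
            --azure-subscription-id $subscriptionId `
            --azure-resource-group "swarm-rg" `
            --azure-location "australiaeast" `
            --azure-open-port 80 `
            $node
    }

    # Initialise the Swarm on the master and join the worker
    $masterIp  = docker-machine ip swarm-master
    docker-machine ssh swarm-master "docker swarm init --advertise-addr $masterIp"
    $joinToken = docker-machine ssh swarm-master "docker swarm join-token -q worker"
    docker-machine ssh swarm-worker1 "docker swarm join --token $joinToken ${masterIp}:2377"

    # Deploy an nginx service published on port 80 (reachable on any node thanks to the routing mesh)
    docker-machine ssh swarm-master "docker service create --name nginx --replicas 2 --publish 80:80 nginx"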

Kubernetes vs Swarm

Compared to Kubernetes on ACS, Docker Swarm takes a bit more effort and is not as compact, but it would probably be more portable as it does not leverage specific Azure capabilities, save for the load balancer. Since Brendan Burns joined Microsoft last year, it is understandable that Kubernetes is seeing additional focus.

It is a fairly recent development, but deploying multi-Container applications on a Swarm can be done via "docker stack deploy" using a version 3 docker compose file (compose files for older revisions need some work, as seen here).
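
As an illustration, assuming a version 3 docker-compose.yml already describes the services (the stack name is a placeholder), deploying the whole stack against the Swarm manager is a single command:

    docker-machine env --shell powershell swarm-master | Invoke-Expression   # point the local docker CLI at the manager
    docker stack deploy --compose-file docker-compose.yml mystack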

Azure Registry Service

I will explore this in more detail in a separate post, but a post about Docker Containers would not be complete without some thoughts about where the source of Images for your deployments lives (namely the registry).

Azure offers a Container Registry service (ACR) to store and retrieve Images, which is an easy option to use and is well integrated with the rest of Azure.

It might be obvious to point out, but deploying Containers based on Images downloaded from the internet can be a security hazard, so creating and vetting your own Images is a must-have in enterprise environments, hence the criticality of running your own Registry service.

Monitoring containers with OMS

My last interesting bit for Containers on Azure is around monitoring. If you deploy the Microsoft Operations Management Suite (OMS) agent on the underlying VMs running Containers and have the Docker marketplace extension for Azure as well as an OMS account, it is very easy to get useful monitoring information. It is also possible to deploy OMS as a Container but I found that I wasn’t getting metrics for the underlying VM that way.


This allows you not only to grab performance metrics from Containers and the underlying infrastructure, but also to collect logs from applications – here the web logs, for instance.


I hope you found this post interesting and I am looking forward to the next one on this topic.

An Identity Consultant's Summary of the recent Cloud Identity Summit 2017

I’ve just returned from Chicago and the Cloud Identity Summit that was held at the Sheraton Grand Chicago. It was my first CIS conference and reminded me a lot of the now defunct Quest Experts Conference and The Burton Group Conference, both in terms of the content and scale. It definitely had a more intimate feel than the massive Microsoft Ignite category of event which attracts 25k+ attendees. 1400 attendees at CIS was a record for this event, but it still meant you got the 1:1 time with vendors and speakers which is fantastic.

Just like the Quest "The Experts Conference" (TEC) and The Burton Group Conference, if you pick your sessions based on the synopsis and the speaker, the sessions can be highly technical (400+ level) and worthy of the 30-hour journey to get to the conference. I focused on my particular subject of Identity, so this summary is biased towards that track.

A summary of my takeaways that I’ll briefly detail in the post are:

  • ID Pro
  • Strong Authentication / Goodbye Passwords
  • PAM and IGA
  • SCIM 2.0
  • FIDO 2.0

And before I forget, CIS is dead; long live CIS, now known as Identiverse which will be in Boston in June (24-27) 2018. Ping Identity have renamed the conference moving forward.

 

ID Pro

Ian Glazer, in his keynote on Tuesday, announced what has been missing from the IDAM community: a professional organisation to represent it. Named ID Pro, with all the details available here, it is a professional organisation for IDAM exponents. Join now here for US$150. Supported by the Kantara Initiative, this organisation already has the support and backing of the industry.

 

Strong Authentication / Goodbye Passwords

There were numerous sessions around this topic, and it was fantastic to see that the ecosystem to support the holy-grail future of no passwords, but strong authentication, is now present. Alex Simons summed it up nicely in his keynote on Wednesday by setting the goal of 1 billion logins (without passwords) by 2018 and launching the hashtag to go with it, #1Billionby2018. Check out the FIDO 2.0 summary further below.

 

Privileged Account Management and Identity Governance & Access

Privileged Account Management and Identity Governance & Access are better together. We knew this anyway, and I've been approaching it this way with my solutions. It was therefore refreshing to be entertained by Kelly Grizzle in his session on when PAM meets IGA through their mutual friend SCIM. In essence, SailPoint have been working heavily on their IGA offering, but with the help of SCIM, now at 2.0, they've been working with PAM vendors such as CyberArk to provide the integration and visibility the two need. Kelly's entertaining and informative presentation can be found here.

 

SCIM 2.0

Mentioned above in the PAM and IGA summary, SCIM 2.0 is now ready for prime time. Whilst SCIM has been around for some time it hasn’t seen widespread adoption in my circles. But that’s about to change. Microsoft have been using it as a primary integration method with Azure AD with the likes of Facebook for Work. Microsoft also have a SCIM MA for Microsoft Identity Manager. I’ll be experimenting with it in the near future.

 

FIDO 2.0

FIDO first came onto my radar about four years ago. It is in a lot of the workflows we do every day (if you have a modern operating system – Windows 8+ – and biometrics on your device or a FIDO-compliant token). With FIDO 2.0, U2F v1.1 and UAF v1.1 now complete, the foundation and enabling services for strong authentication are ready to go.

 

Summary of the Summary

I've tried hard not to make this too wordy, but the takeaway is this: identity is the foundation of who you are and what you do. With all the other disruption in the IT industry around cloud and mobility, identity is always the enabler. Get it right and you can make life easier for your users, give better visibility to your information security officers and meet your compliance requirements with a clear audit trail. Just keep up, as it's moving very fast.

Akamai Cloud based DNS Black Magic

Let us start with traditional DNS hosting with any DNS host or ISP. How does traditional DNS name resolution work? When you type a human-readable name such as http://www.anydomain.com into the address bar of Internet Explorer, that name is resolved to an Internet Protocol (IP) address hosted by an Internet Service Provider (ISP), and the browser presents the website to the user. By doing so, the website exposes its public IP address to everyone. Both the good and the bad guys know the IP address and can trace it globally. A state-sponsored hacker or a private individual can launch a distributed denial of service (DDoS) attack on a website whose public IP address is known and traceable. The bad guys can send an overwhelming number of fake service requests to the original IP address behind the human-readable name http://www.anydomain.com and shut the website down. In this situation, the DNS server hosting the DNS record for http://www.anydomain.com will stop serving genuine DNS requests, resulting in a distributed denial of service.

Akamai introduced Fast DNS, a dynamic DNS service located in almost every country, state, territory and region, to mitigate the risk of DDoS and DNS hijacking.

Akamai Fast DNS offloads domain name resolution from on-premises infrastructure and traditional domain name providers to an intelligent, secure and authoritative DNS service. Akamai has successfully prevented DDoS attacks, DNS forgery and manipulation through complex dynamic DNS hosting and spoofed IP addresses.

As of today, Akamai has more than 150,000 servers in over 2,000 locations around the world, well connected across 1,200+ networks in 700+ cities in 92 countries; in most cases, an Akamai edge server is just a hop away from the end user.

How does it work?

  1. The user requests http://www.anydomain.com
  2. The user's ISP responds to the DNS name query for http://www.anydomain.com
  3. The user's ISP resolves the DNS name http://www.anydomain.com to http://www.anydomain.com.edgekey.net, hosted by Akamai
  4. Akamai Global DNS checks the CNAME http://www.anydomain.com.edgekey.net and the region the request is coming from
  5. Akamai Global DNS then forwards the request to the Akamai regional DNS server, for example Sydney, Australia
  6. The Akamai regional DNS server forwards the request to the Akamai edge server nearest the user's location, for example Melbourne, Australia
  7. The Akamai local DNS server, for example Melbourne, Australia, resolves the original CNAME http://www.anydomain.com to http://www.anydomain.com.edgekey.net
  8. http://www.anydomain.com.edgekey.net resolves to the (possibly cached) website http://www.anydomain.com, which is then presented to the user's browser

Since Akamai uses dynamic DNS servers, it is extremely difficult for a bad guy to track down the real IP address and origin host of the website. In Akamai terminology, .au or .uk means that the website is hosted in that country (au or uk), but the response comes to the user from his/her own geolocation, hence the IP address of the website will always be presented from the Akamai edge server closest to the user. In plain English, the origin host and IP address vanish into Akamai's complex dynamic DNS infrastructure. For example:

  1. http://www.anydomain.com.edgekey.net resolves to a spoofed IP address hosted by an Akamai DNS server
  2. The original IP address of http://www.anydomain.com is never resolved by the Akamai DNS servers or by the ISP hosting http://www.anydomain.com

Implementing Akamai Fast DNS:

  1. Create a host (A) record for http://www.anydomain.com with your ISP and point it to the 201.17.xx.xx public IP (the VIP of Azure Web Services or any web service)
  2. Create an origin host record or CNAME record for http://www.anydomain.com and point it to xyz9013452bcf.anydomain.com
  3. Now request Akamai to black magic http://www.anydomain.com and point it to http://www.anydomain.com.edgekey.net
  4. Once Akamai completes the black magic, request your ISP to create another CNAME record, xyz9013452bcf.anydomain.com, and point it to http://www.anydomain.com.edgekey.net

Testing Akamai Fast DNS: I am using http://www.akamai.com as the DNS name instead of a real DNS record of any of my clients.

Go to mxtoolbox.com and do a DNS lookup for http://www.akamai.com; you will see:

CNAME http://www.akamai.com resolves to http://www.akamai.com.edgekey.net

Open a command prompt and ping http://www.akamai.com.edgekey.net

Since I am pinging from Sydney, Australia, my ping is answered by the Akamai edge server in Sydney; the result is:

Ping http://www.akamai.com.edgekey.net

Pinging e1699.dscc.akamaiedge.net [118.215.118.16] with 32 bytes of data:

Reply from 118.215.118.16: bytes=32 time=6ms TTL=56

Reply from 118.215.118.16: bytes=32 time=3ms TTL=56

Open a browser and go to http://www.kloth.net/services/dig.php and trace e1699.dscc.akamaiedge.net

; <<>> DiG 9 <<>> @localhost e1699.dscc.akamaiedge.net A

; (1 server found)

;; global options: +cmd

.                                            375598   IN            NS           d.root-servers.net.

.                                            375598   IN            NS           c.root-servers.net.

.                                            375598   IN            NS           i.root-servers.net.

.                                            375598   IN            NS           j.root-servers.net.

.                                            375598   IN            NS           k.root-servers.net.

.                                            375598   IN            NS           m.root-servers.net.

.                                            375598   IN            NS           a.root-servers.net.

.                                            375598   IN            NS           l.root-servers.net.

.                                            375598   IN            NS           e.root-servers.net.

.                                            375598   IN            NS           f.root-servers.net.

.                                            375598   IN            NS           b.root-servers.net.

.                                            375598   IN            NS           g.root-servers.net.

.                                            375598   IN            NS           h.root-servers.net.

;; Received 228 bytes from 127.0.0.1#53(127.0.0.1) in 3 ms

net.                                       172800   IN            NS           a.gtld-servers.net.

net.                                       172800   IN            NS           b.gtld-servers.net.

net.                                       172800   IN            NS           c.gtld-servers.net.

net.                                       172800   IN            NS           d.gtld-servers.net.

net.                                       172800   IN            NS           e.gtld-servers.net.

net.                                       172800   IN            NS           f.gtld-servers.net.

net.                                       172800   IN            NS           g.gtld-servers.net.

net.                                       172800   IN            NS           h.gtld-servers.net.

net.                                       172800   IN            NS           i.gtld-servers.net.

net.                                       172800   IN            NS           j.gtld-servers.net.

net.                                       172800   IN            NS           k.gtld-servers.net.

net.                                       172800   IN            NS           l.gtld-servers.net.

net.                                       172800   IN            NS           m.gtld-servers.net.

;; Received 512 bytes from 2001:7fd::1#53(2001:7fd::1) in 8 ms

akamaiedge.net.                  172800   IN            NS           la1.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           la3.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           lar2.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns3-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns6-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns7-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           ns5-194.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a12-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a28-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a6-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a1-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a13-192.akamaiedge.net.

akamaiedge.net.                  172800   IN            NS           a11-192.akamaiedge.net.

;; Received 504 bytes from 2001:503:a83e::2:30#53(2001:503:a83e::2:30) in 14 ms

dscc.akamaiedge.net.          8000       IN            NS           n7dscc.akamaiedge.net.

dscc.akamaiedge.net.          4000       IN            NS           n0dscc.akamaiedge.net.

dscc.akamaiedge.net.          6000       IN            NS           a0dscc.akamaiedge.net.

dscc.akamaiedge.net.          6000       IN            NS           n3dscc.akamaiedge.net.

dscc.akamaiedge.net.          4000       IN            NS           n2dscc.akamaiedge.net.

dscc.akamaiedge.net.          6000       IN            NS           n6dscc.akamaiedge.net.

dscc.akamaiedge.net.          4000       IN            NS           n5dscc.akamaiedge.net.

dscc.akamaiedge.net.          8000       IN            NS           n1dscc.akamaiedge.net.

dscc.akamaiedge.net.          8000       IN            NS           n4dscc.akamaiedge.net.

;; Received 388 bytes from 184.85.248.194#53(184.85.248.194) in 8 ms

e1699.dscc.akamaiedge.net. 20        IN            A             23.74.181.249

;; Received 59 bytes from 77.67.97.229#53(77.67.97.229) in 5 ms

Now tracert 23.74.181.249 on a command prompt

Tracert 23.74.181.249

Tracing route to a23-74-181-249.deploy.static.akamaitechnologies.com [23.74.181.249]

over a maximum of 30 hops:

1     1 ms     1 ms     1 ms  172.28.67.2

2     4 ms     1 ms     4 ms  172.28.2.10

3     *        *        *     Request timed out.

4     *        *        *     Request timed out.

5     *        *        *     Request timed out.

6     *        *        *     Request timed out.

7     *        *        *     Request timed out.

8     *      125 ms    75 ms  bundle-ether1.sydp-core04.sydney.reach.com [203.50.13.90]

9   172 ms   160 ms   165 ms  i-52.tlot-core02.bx.telstraglobal.net [202.84.137.101]

10   152 ms   192 ms   164 ms  i-0-7-0-11.tlot-core01.bi.telstraglobal.net [202.84.251.233]

11   163 ms   183 ms   176 ms  gtt-peer.tlot02.pr.telstraglobal.net [134.159.63.182]

12   151 ms   157 ms   155 ms  xe-2-2-0.cr2-lax2.ip4.gtt.net [89.149.129.234]

13   175 ms   160 ms   154 ms  as5580-gw.cr2-lax2.ip4.gtt.net [173.205.59.18]

14   328 ms   318 ms   317 ms  ae21.edge02.fra06.de.as5580.net [78.152.53.219]

15   324 ms   325 ms   319 ms  78.152.48.250

16   336 ms   336 ms   339 ms  a23-74-181-249.deploy.static.akamaitechnologies.com [23.74.181.249]

Now open the hosts file of a Windows machine (C:\WINDOWS\system32\drivers\etc\hosts) and add the Akamai spoofed IP: 172.233.15.98   http://www.akamai.com (reference)

Browse to the http://www.akamai.com website in Internet Explorer; it will point you to 172.233.15.98.

Open a command prompt and run nslookup 172.233.15.98:

Server:  lon-resolver.telstra.net

Address:  203.50.2.71

Name:    a172-233-15-98.deploy.static.akamaitechnologies.com

Address:  172.233.15.98

In conclusion, Akamai tricks the web browser into going to the Akamai edge server in Sydney, Australia instead of the original Akamai server hosted in the USA. A user will never know the original IP address of the http://www.akamai.com website. Abracadabra – the original IP address has vanished…

Enterprise Cloud Take Up Accelerating Rapidly According to New Study By McKinsey

A pair of studies published a few days ago by global management consulting firm McKinsey & Company, entitled IT as a service: From build to consume, show enterprise adoption of Infrastructure as a Service (IaaS) accelerating rapidly over the next two years into 2018.

Of the two, one examined the on-going migrations of 50 global businesses. The other saw a large number of CIOs, from small businesses up to Fortune 100 companies, interviewed on the progress of their transitions and the results speak for themselves.

1. Compute and storage is shifting massively to cloud service providers.


“The data reveals that a notable shift is under way for enterprise IT vendors, with on-premise shipped server instances and storage capacity facing compound annual growth rates of –5 percent and –3 percent, respectively, from 2015 to 2018.”

With on-premise storage and server sales growth going into negative territory, it’s clear the next couple of years will see the hyperscalers of this world consume an ever increasing share of global infrastructure hardware shipments.

2. Companies of all sizes are shifting to off-premise cloud services.


“A deeper look into cloud adoption by size of enterprise shows a significant shift coming in large enterprises (Exhibit 2). More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

The report also anticipates that the number of large enterprises hosting at least one workload on an IaaS platform will increase by 41% in the three-year period to 2018, while that of small and medium-sized businesses will increase by a somewhat less aggressive 12% and 10% respectively.

3. A fundamental shift is underway from a build to consume model for IT workloads.


“The survey showed an overall shift from build to consume, with off-premise environments expected to see considerable growth (Exhibit 1). In particular, enterprises plan to reduce the number of workloads housed in on-premise traditional and virtualized environments, while dedicated private cloud, virtual private cloud, and public infrastructure as a service (IaaS) are expected to see substantially higher rates of adoption.”

Another takeaway is that the share of traditional and virtualized on-premise workloads will shrink significantly from 77% and 67% in 2015 to 43% and 57% respectively in 2018, while virtual private cloud and IaaS will grow from 34% and 25% in 2015 to 54% and 37% respectively in 2018.

Cloud adoption will have far-reaching effects

The report concludes “McKinsey’s global ITaaS Cloud and Enterprise Cloud Infrastructure surveys found that the shift to the cloud is accelerating, with large enterprises becoming a major driver of growth for cloud environments. This represents a departure from today, and we expect it to translate into greater headwinds for the industry value chain focused on on-premise environments; cloud-service providers, led by hyperscale players and the vendors supplying them, are likely to see significant growth.”

About McKinsey & Company

McKinsey & Company is a worldwide management consulting firm. It conducts qualitative and quantitative analysis in order to evaluate management decisions across the public and private sectors. Widely considered the most prestigious management consultancy, McKinsey’s clientele includes 80% of the world’s largest corporations, and an extensive list of governments and non-profit organizations.

Web site: McKinsey & Company
The full report: IT as a service: From build to consume

Create a Cloud Strategy For Your Business

Let's be clear: today's cloud, as a vehicle for robust and flexible enterprise-grade IT, is here and it's here to stay. Figures published by IDG Research's 2015 Enterprise Cloud Computing Survey predict that in 2016, 25% of total enterprise IT budgets will be allocated to cloud computing.


Steady increase in cloud utilisation. Source: IDG Enterprise Cloud Computing Survey.

They also reported that the average cloud spend for all the enterprises surveyed would reach 2.87M in the following year, and that 72% of enterprises have at least one application running in the cloud already, compared to 57% in 2012. IDC, in the meantime, predicts that public cloud adoption will increase from 22% in 2016 to 32.1% within the next 18 months, which equates to no less than 45.8% growth.


Wide range in cloud investments. Source: 2015 IDG Enterprise Cloud Computing Survey.

Now, any organization looking to leverage the cloud needs a governing strategy, and research has shown that businesses with a vision are likely to be more efficient, more successful and better at keeping costs down than those without one. So let's consider why a well-rounded cloud strategy should be a priority for any business that doesn't have one.

What are the key benefits of a cloud strategy?

A well-thought-out strategy will help an organization integrate cloud-based IT into its business model in a more structured way – one that has given proper consideration to all of its requirements.

1. Stay in control of your business in the era of on-demand cloud services

With the proliferation of cloud services available today, shadow IT – that is to say, the unsanctioned use of cloud services inside an organization – is a growing problem which, if left unchecked, runs the risk of getting out of hand. Control it before it controls you.

2. Better prepared infrastructure

A structured approach lets you fully consider the potential ramifications and benefits on offer. By properly mapping all of the requirements of its infrastructure, network, storage and compute resources, a business will manage its migration with greater efficiency.

3. Increased benefit

Whether it's the change in consumption models from owned to "pay-as-you-go" and the resulting shift from CAPEX to OPEX, or the potential for greater flexibility, efficiency and choice offered by the cloud over so-called traditional IT, it seems obvious that having a well-thought-out strategy will magnify the value of these benefits.

4. Increased opportunity

Having a solid strategy is also likely to make for a bigger opportunity, as the organization maps out its business and carefully thinks through the possibilities before making any changes.

Strategy formulation

Creating a migration strategy requires mapping out the suitability of all existing applications, weighing up their value against the costs and savings cloud services may offer and choosing which to prioritize.

1. Evaluate your applications

The first step is a business-wide evaluation of all existing applications, categorized by two factors: business value and flexibility, with business value equating to the place and importance an asset holds in the organization and flexibility meaning its suitability for migration. The evaluation should seek to understand how applications are deployed, how critical they are and whether moving them to the cloud will be cost effective or not.

2. Choose the right cloud model

The second is to determine the right cloud model for your requirements.

Private clouds, whether owned or leased, consist of closed IT infrastructures accessible only to one business, which then makes resources available to its own internal customers. Private clouds are often home to core applications where control is essential to the business; they can also offer economies of scale where companies can afford larger, long-term investments and have the ability either to run these environments themselves or to pay for a managed service. Private cloud investments tend to operate on a CAPEX model.

Public clouds are shared platforms for services made available by third parties to their customers on a pay-as-you-go basis. Public cloud environments are best suited to all but the most critical and expensive applications to run. They offer the significant benefit of not requiring large upfront capital investments because they operate on an OPEX model.

Hybrid clouds are made up of a mix of both types of resources working together across secured, private network connections. They can offer the benefits of both models but run the risk of additional complexity and can lessen the benefits of working at scale.

Why a clear strategy is vital for your business today

The benefits of the cloud are real. Maximizing their value is therefore key to any business looking to leverage them. And the best way to do that is to have a well thought out strategy.

Azure reference architecture

tl;dr

  • What is a reference architecture
    • My definition of a reference architecture
  • I stop using the word architecture after the first 3 paragraphs – word overkill
  • What are some important topics to cover in said document
  • Is it easy to write? NO
  • Final words – don’t jump into Azure without a reference architecture

I’m not going to lie to you. This is not a quick topic to write about. When it comes to Azure, you absolutely, 100% cannot dive straight in and consume services if you’re planning on doing that for pretty much any size organisation. The only way this could be averted is in a development environment, or a home lab. Period.

Without order nothing exists.

-someone awesome

This is where an Azure reference architecture comes in. Let’s define a reference architecture, or most commonly a reference architecture document (or series of documents);

Within IT: A reference architecture is a set of standards, best practices and guidelines for a given architecture that architects, consultants, administrators or managers refer to when making decisions on future implementations in that environment.

Since I think I’ve reached the word quota limit for “architect” or “architecture”, I will attempt to limit the use of those from this point forward. If necessary, I’ll refer to either of those as just the “a-word“.


Secure Azure Virtual Network Defense In Depth using Network Security Groups, User Defined Routes and Barracuda NG Firewall

Security Challenge on Azure

There are a few common security-related questions when we start planning a migration to Azure:

  • How can we restrict the ingress and egress traffic on Azure?
  • How can we route the traffic on Azure?
  • Can we have a firewall kit, Intrusion Prevention System (IPS), Network Access Control, Application Control and Anti-Malware on an Azure DMZ?

This blog post's intention is to answer the above questions using the following Azure features, combined with a security virtual appliance available on the Azure Marketplace:

  • Azure Virtual Network (VNET)
  • Azure Network Security Groups (NSGs)
  • Azure Network Security Rule
  • Azure Forced Tunnelling
  • Azure Route Table
  • Azure IP Forwarding
  • Barracuda NG Firewall available on Azure Marketplace

One of the most common methods of attack is the Script Kiddie / Skiddie / Script Bunny / Script Kitty. Script Kiddie attacks have always been among the most frequent, and still are. However, attacks have evolved into something more advanced, sophisticated and far more organized. The diagram below illustrates the evolution of attacks:

evolution of attacks

 

The main target of the attacks, from the lowest sophistication level to the most advanced, is our data. Data loss = financial loss. We work together and share the responsibility with our cloud provider to secure our cloud environment. This blog post will focus on the Azure environment.

Defense in Depth

According to the SANS Institute, defense in depth is the concept of protecting a computer network with layered defensive mechanisms. There are various defensive mechanisms and countermeasures we can use to protect our Azure environment, because there are many attack scenarios and attack methods available.

In this post we will use a combination of Azure Network Security Groups to establish the Security Zones discussed in my previous blog, deploy a network firewall including an Intrusion Prevention System on our Azure network to implement an additional high-security layer, and route the traffic through our security kit. In the Secure Azure Network blog we learned how to establish simple Security Zones on our Azure VNET. The underlying concept behind the zone model is an increasing level of trust from the outside into the centre. On the outside is the Internet – zero trust – which is where the Script Kiddies and other attackers reside.

The diagram below illustrates the simple scenario we will implement in this post:


There are four main configurations we need to complete in order to establish the solution as per the diagram above:

  • Azure VNET Configuration
  • Azure NSG and Security Rules
  • Azure User Defined Routes and IP Forwarding
  • Barracuda NG Firewall Configuration

In this post we will focus on the last two items. This tutorial link will assist readers with creating an Azure VNET, and my previous blog post will assist readers with establishing Security Zones using Azure NSGs.

Barracuda NG Firewall

The Barracuda NG Firewall fills the functional gaps between cloud infrastructure security and a Defense-In-Depth strategy by providing protection where our applications and data reside on Azure, rather than solely where the connection terminates.

The Barracuda NG Firewall can intercept all Layer 2 through 7 traffic and apply policy-based controls, authentication, filtering and other capabilities. Just like its physical counterpart, the Barracuda NG Firewall running on Azure has traffic management and bandwidth optimization capabilities.

The main features:

  • PAYG – Pay as you go / BYOL – Bring your own license
  • ExpressRoute Support
  • Network Firewall
  • VPN
  • Application Control
  • IDS – IPS
  • Anti-Malware
  • Network Access Control Management
  • Advanced Threat Detection
  • Centralized Management

The above features are necessary to establish a virtual DMZ in Azure and implement our Defense-In-Depth and Security Zoning strategy.

Choosing the right size of Barracuda NG Firewall determines the level of support and throughput for our Azure environment. Details of the datasheet can be found here.

I wrote the handy little script below to deploy the Barracuda NG Firewall Azure VM with two additional Ethernet interfaces:
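
The script itself isn't reproduced here; a rough sketch using the classic (Service Management) PowerShell cmdlets is shown below. The marketplace image name, cloud service, VNET, subnet names, credentials and IP addresses are all placeholders – adjust them (and the instance size, which must support three NICs, with each static IP falling inside the corresponding subnet) for your own environment.

    $imageName = "<Barracuda NG Firewall image name from the Azure Marketplace>"
    $password  = "<admin password>"

    # VM configuration with a static IP on the primary NIC plus two extra NICs
    $vm = New-AzureVMConfig -Name "NGFW01" -InstanceSize "Standard_D3" -ImageName $imageName |
        Add-AzureProvisioningConfig -Linux -LinuxUser "azureuser" -Password $password |
        Set-AzureSubnet -SubnetNames "Frontend" |
        Set-AzureStaticVNetIP -IPAddress "192.168.0.54" |
        Add-AzureNetworkInterfaceConfig -Name "Ethernet1" -SubnetName "Mid" -StaticVNetIPAddress "192.168.0.55" |
        Add-AzureNetworkInterfaceConfig -Name "Ethernet2" -SubnetName "Backend" -StaticVNetIPAddress "192.168.0.56"

    # Create the VM in its cloud service, attached to the existing VNET
    New-AzureVM -ServiceName "kloud-ngfw" -Location "Australia East" -VNetName "kloud-vnet" -VMs $vm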

User Defined Routes in Azure

Azure allows us to redefine the routing in our VNET, which we will use to redirect traffic through our Barracuda NG Firewall. We will enable IP forwarding for the Barracuda NG Firewall virtual appliance and then create and configure the routing table for the backend networks so all traffic is routed through the Barracuda NG Firewall.

There are some notes on using the Barracuda NG Firewall on Azure:

  • User-defined routing at the time of writing cannot be used for two Barracuda NG Firewall units in a high availability cluster
  • After the Azure routing table has been applied, the VMs in the backend networks are only reachable via the NG Firewall. This also means that existing Endpoints allowing direct access no longer work

Step 1: Enable IP Forwarding for Barracuda NG Firewall VM

In order to forward the traffic, we must enable IP forwarding on the primary network interface and the other network interfaces (Ethernet 1 and Ethernet 2) of the Barracuda NG Firewall VM.

Enable IP forwarding on the primary network interface, then on Ethernet 1 and Ethernet 2:
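
The original screenshots aren't reproduced here; using the classic (Service Management) PowerShell module, enabling forwarding on the VM and its extra interfaces is roughly the following (cloud service, VM and interface names match the placeholders used earlier):

    # Primary network interface of the Barracuda NG Firewall VM
    Get-AzureVM -ServiceName "kloud-ngfw" -Name "NGFW01" | Set-AzureIPForwarding -Enable

    # Additional network interfaces
    Get-AzureVM -ServiceName "kloud-ngfw" -Name "NGFW01" | Set-AzureIPForwarding -NetworkInterfaceName "Ethernet1" -Enable
    Get-AzureVM -ServiceName "kloud-ngfw" -Name "NGFW01" | Set-AzureIPForwarding -NetworkInterfaceName "Ethernet2" -Enable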

On the Azure networking side, our Azure Barracuda NG Firewall VM is now allowed to forward IP packets.

Step 2: Create Azure Routing Table

By creating a routing table in Azure, we will be able to redirect all outbound Internet connectivity from the Mid and Backend subnets of the VNET to the Barracuda NG Firewall VM.

Firstly, create the Azure Routing Table:
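
A sketch with the classic (Service Management) cmdlets; the table name, label and location are placeholders:

    New-AzureRouteTable -Name "NGFWRouteTable" -Location "Australia East" -Label "Route via Barracuda NG Firewall"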

Next, we need to add the Route to the Azure Routing Table:
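
Then add a default route that sends all outbound traffic to the firewall's primary interface (same placeholder names):

    Set-AzureRoute -RouteTable "NGFWRouteTable" -RouteName "DefaultRoute" -AddressPrefix "0.0.0.0/0" `
        -NextHopType VirtualAppliance -NextHopIpAddress "192.168.0.54"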

As we can see, the next hop IP address for the default route is the IP address of the default network interface of the Barracuda NG Firewall (192.168.0.54). We have two extra network interfaces which can be used for other routing (192.168.0.55 and 192.168.0.56).

Lastly, we will need to assign the Azure Routing Table we created to our Mid or Backend subnet.
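
Associating the table with a subnet (the VNET and subnet names are placeholders) is one more cmdlet:

    Set-AzureSubnetRouteTable -VirtualNetworkName "kloud-vnet" -SubnetName "Mid" -RouteTableName "NGFWRouteTable"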

Step 3: Create Access Rules on the Barracuda NG Firewall

By default, all outgoing traffic from the Mid or Backend subnet is blocked by the NG Firewall. Create an access rule to allow access to the Internet.

Download Barracuda NG Admin to manage the Barracuda NG Firewall running on Azure and log in to the NG Admin console.


 

Create a PASS access rule:

  • Source – Enter our mid or backend subnet
  • Service – Select Any
  • Destination – Select Internet
  • Connection – Select Dynamic SNAT
  • Click OK and place the access rule higher than other rules blocking the same type of traffic
  • Click Send Changes and Activate


Our VMs in the Mid or Backend subnet can now access the Internet via the Barracuda NG Firewall. I RDP to my VM sitting on the Mid subnet (192.168.1.4) and browse to Google.com.


Let’s have a quick look at Barracuda NG Admin Logs 🙂


And we are good to go, using the same method to configure the rest and protect our Azure environment:

  • Backend traffic goes through our Barracuda NG Firewall before reaching the Mid subnet, and vice versa
  • Mid traffic goes through our Barracuda NG Firewall before reaching the Frontend subnet, and vice versa

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

 

 

 

AWS Direct Connect in Australia via Equinix Cloud Exchange

I discussed Azure ExpressRoute via Equinix Cloud Exchange (ECX) in my previous blog. In this post I am going to focus on AWS Direct Connect, which ECX also provides. This means you can share the same physical link (1Gbps or 10Gbps) between Azure and AWS!

ECX also provides a connectivity service to AWS for connection speeds of less than 1Gbps. AWS Direct Connect provides dedicated, private connectivity between your WAN or datacentre and AWS services such as AWS Virtual Private Cloud (VPC) and AWS Elastic Compute Cloud (EC2).

AWS Direct Connect via Equinix Cloud Exchange is Exchange (IXP) provider based, allowing us to extend our infrastructure with connectivity that is:

  • Private: The connection is dedicated and bypasses the public Internet, which means better performance, increased security, consistent throughput and support for hybrid cloud use cases (even hybrid with Azure, when both connections use Equinix Cloud Exchange)
  • Redundant: If we configure a second AWS Direct Connect connection, traffic will failover to the second link automatically. Enabling Bidirectional Forwarding Detection (BFD) is recommended when configuring your connections to ensure fast detection and failover. AWS does not offer any SLA at the time of writing
  • High Speed and Flexible: ECX provides a flexible range of speeds: 50, 100, 200, 300, 400 and 500Mbps.

The only tagging mechanism supported by AWS Direct Connect is 802.1Q (Dot1Q). AWS always uses 802.1Q (Dot1Q) on the Z-side of ECX.

ECX pre-requisites for AWS Direct Connect

The pre-requisites for connecting to AWS via ECX:

  • Physical ports on ECX. Two physical ports on two separate ECX chassis are required if redundancy is needed.
  • Virtual Circuits on ECX. Two virtual circuits are also required for redundancy.

Buy-side (A-side) Dot1Q and AWS Direct Connect

The following diagram illustrates the network setup required for AWS Direct Connect using Dot1Q ports on ECX:


The Dot1Q VLAN tag on the A-side is assigned by the buyer (A-side). The Dot1Q VLAN tag on the seller side (Z-side) is assigned by AWS.

There are a few steps worth noting when configuring AWS Direct Connect via ECX:

  • We need our AWS Account ID to request ECX Virtual Circuits (VCs)
  • Create separate Virtual Interfaces (VIs) for public and private peering in the AWS Management Console. We need two ECX VCs and two AWS VIs for redundant private or public peering.
  • We can accept the virtual connection either from the ECX Portal after requesting the VCs, or in the AWS Management Console.
  • Configure our on-premises edge routers for BGP sessions. We can download the router configuration, which we can use to configure our BGP sessions, from the AWS Management Console
  • Attach the AWS Virtual Gateway (VGW) to the Route Table associated with our VPC
  • Verify the connectivity.

Please refer to the AWS Direct Connect User Guide on how to configure edge routers for BGP sessions. Once we have configured the above we will need to make sure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Azure ExpressRoute in Australia via Equinix Cloud Exchange

Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network (NSP) based and Exchange (IXP) based with each allowing us to extend our infrastructure by providing connectivity that is:

  • Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
  • Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active – active router configuration.
  • High Speed: speeds differ between NSP and IXP connections – but go from 10Mbps up to 10Gbps. ECX provides three choices of virtual circuit speeds in Australia: 200Mbps, 500Mbps and 1Gbps.

Microsoft provides a handy comparison table of the different types of Azure connectivity in this blog post.

ExpressRoute with Equinix Cloud Exchange

Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers, including Microsoft Azure. ECX's main features are:

  • On Demand (once you’re signed up)
  • One physical port supports many Virtual Circuits (VCs)
  • Available Globally
  • Supports 1Gbps and 10Gbps fibre-based Ethernet ports. Azure supports virtual circuits of 200Mbps, 500Mbps and 1Gbps
  • Orchestration using an API for automation, which provides almost instant provisioning of a virtual circuit.

We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS Direct Connect. This is supported as long as we use the same tagging mechanism based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the Sell side (Z-side) to connect to ECX.

ECX pre-requisites for Azure ExpressRoute

The pre-requisites for connecting to Azure regardless the tagging mechanism are:

  • Two Physical ports on two separate ECX chassis for redundancy.
  • A primary and secondary virtual circuit per Azure peer (public or private).

Buy-side (A-side) Dot1Q and Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:

Dot1Q setup

Tags on the primary and secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-side, use one VLAN tag per circuit request. This VLAN tag should be the same VLAN tag used when setting up the private or public BGP sessions on Azure using Azure PowerShell.

There are a few things to note when using Dot1Q in this context:

  1. The same Service Key can be used to order separate VCs for private or public peerings on ECX.
  2. Order a dedicated Azure circuit using the Azure PowerShell cmdlet (shown below), obtain the Service Key and use this to raise virtual circuit requests with Equinix (a sketch of the cmdlets follows this list): https://gist.github.com/andreaswasita/77329a14e403d106c8a6

    Get-AzureDedicatedCircuit returns output showing the circuit details, including the ServiceProviderProvisioningState.

    As we can see the status of ServiceProviderProvisioningState is NotProvisioned.

    Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.

  3. Two physical ports need to be provisioned for redundancy on ECX – you will get the notification from Equinix NOC engineers once the physical ports have been provisioned.
  4. Submit one virtual circuit request for each of the private and public peers on the ECX Portal. Each request needs a separate VLAN ID along with the Service Key. Go to the ECX Portal and submit one request for private peering (2 VCs – primary and secondary) and one request for public peering (2 VCs – primary and secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned.
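
A sketch of the circuit ordering referred to in step 2, using the classic ExpressRoute PowerShell cmdlets; the circuit name, peering location and bandwidth are examples only:

    # Order the dedicated circuit through Equinix and note the Service Key it returns
    New-AzureDedicatedCircuit -CircuitName "KloudECX" -ServiceProviderName "Equinix" -Bandwidth 1000 -Location "Sydney"

    # ServiceProviderProvisioningState stays NotProvisioned until the Equinix VCs are in place
    Get-AzureDedicatedCircuit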

Next we need to configure BGP to exchange routes between our on-premises network and Azure, but we will come back to this after a quick look at using QinQ with Azure ExpressRoute.

Buy-side (A-side) QinQ Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:

QinQ setup

C-TAGs identify private or public peering traffic on Azure, and the primary and secondary virtual circuits are set up across separate ECX chassis identified by unique S-TAGs. The A-side buyer (us) can choose to use either the same or different VLAN IDs to identify the primary and secondary VCs. The same pair of primary and secondary VCs can be used for both private and public peering towards Azure; the inner tags identify whether the session is private or public.

The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:

  1. Submit only one request on the ECX Portal for both private and public peers. The same pair of primary and secondary virtual circuits can be used for both private and public peering in this setup.

Configuring BGP

ExpressRoute uses BGP for routing, and you require four /30 subnets: one each for the primary and secondary routes for both private and public peering. The IP prefixes for BGP cannot overlap with IP prefixes in either your on-premises or cloud environments. Example routing subnets and VLAN IDs:

  • Primary Private: 192.168.1.0/30 (VLAN 100)
  • Secondary Private: 192.168.2.0/30 (VLAN 100)
  • Primary Public: 192.168.1.4/30 (VLAN 101)
  • Secondary Public: 192.168.2.4/30 (VLAN 101)

The first available IP address of each subnet will be assigned to the local router and the second will be automatically assigned to the router on the Azure side.

To configure BGP sessions for both private and public peering on Azure, use the Azure PowerShell cmdlets as shown below.

Private peer:
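
A sketch using the classic New-AzureBGPPeering cmdlet; $serviceKey is the Service Key of the circuit created earlier, the subnets and VLAN ID follow the example values above, and the ASN and shared key are placeholders:

    New-AzureBGPPeering -AccessType Private -ServiceKey $serviceKey `
        -PrimaryPeerSubnet "192.168.1.0/30" -SecondaryPeerSubnet "192.168.2.0/30" `
        -VlanId 100 -PeerAsn 64496 -SharedKey "MySharedKey"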

Public peer:
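
And the public peer, with the same assumptions:

    New-AzureBGPPeering -AccessType Public -ServiceKey $serviceKey `
        -PrimaryPeerSubnet "192.168.1.4/30" -SecondaryPeerSubnet "192.168.2.4/30" `
        -VlanId 101 -PeerAsn 64496 -SharedKey "MySharedKey"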

Once we have configured the above we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.