Running Containers on Azure

Running Containers in public cloud environments brings advantages beyond the realm of “fat” virtual machines: easy deployments through a registry of Images, better use of resources and orchestration are but a few examples.

Azure is embracing containers in a big way (Brendan Burns, one of the primary instigators of Kubernetes while at Google, joined Microsoft last year, which might have contributed to it!).

Running Containers nowadays is almost always synonymous with running an orchestrator which allows for automatic deployments of multi-Container workloads.

Here we will explore the use of Kubernetes and Docker Swarm which are arguably the two most popular orchestration frameworks for Containers at the moment. Please note that Mesos is also supported on Azure but Mesos is an entirely different beast which goes beyond the realm of just Containers.

This post is not about comparing the merits of Docker Swarm and Kubernetes, but rather a practical introduction to running both on Azure, as of August 2017.

VMs vs ACS vs Container Instances

When it comes to running containers on Azure, you have the choice of running them on Virtual Machines you create yourself or via the Azure Container Service which takes care of creating the underlying infrastructure for you.

We will explore both ways of doing things with a Kubernetes example running on ACS and a Docker Swarm example running on VMs created by Docker Machine. Note that at the time of writing, ACS does not support the relatively new Swarm mode of Docker, but things move extremely fast in the Container space… for instance, Azure Container Instances are a brand new service allowing users to run Containers directly, billed on a per-second basis.

In any case, both Docker Swarm and Kubernetes offer a powerful way of managing the lifecycle of Containerised application environments alongside storage and network services.

Kubernetes

Kubernetes is a one-stop solution for running a Container cluster and deploying applications to said cluster.

Although the WordPress example found in the official Kubernetes documentation is not specifically geared at Azure, it will run easily on it thanks to the level of standardisation Kubernetes (aka K8s) has achieved.

This example deploys a MySQL instance Container with a persistent volume to store data and a separate Container which combines WordPress and Apache, sporting its own persistent volume.

The PowerShell script below leverages the "az acs create" Azure CLI 2.0 command to spin up a two-node Kubernetes cluster. Yes, one command is all it takes to create a K8s cluster! If you need ten agent nodes, just change the "--agent-count" value to 10.

It is invoked as follows:

The azureSshKey parameter points to a private SSH key (the corresponding public key must exist as well) and kubeWordPressLocation is the path to the git clone of the Kubernetes WordPress example.
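The full script isn't reproduced here, but its core is a single CLI call along these lines; the resource group, cluster name, DNS prefix and key path below are illustrative placeholders of mine, not the original values:

az group create --name k8s-rg --location eastus
az acs create --orchestrator-type kubernetes `
  --resource-group k8s-rg --name k8s-cluster --dns-prefix k8sdemo `
  --agent-count 2 --admin-username azureuser `
  --ssh-key-value "$env:USERPROFILE\.ssh\id_rsa.pub"   # public key string or file path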

Note that you need to have ssh connectivity to the Kubernetes Master i.e. TCP port 22.

Following the creation of the cluster, the script leverages kubectl (“az acs kubernetes install-cli” to install it) to deploy the aforementioned WordPress example.
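For reference, the deployment part of the script boils down to something like the following sketch; the secret and file names follow the public WordPress example, so adjust them to match your clone of it:

az acs kubernetes install-cli
az acs kubernetes get-credentials --resource-group k8s-rg --name k8s-cluster --ssh-key-file "$env:USERPROFILE\.ssh\id_rsa"
kubectl create secret generic mysql-pass --from-literal=password=YourStrongPassword
kubectl create -f "$kubeWordPressLocation\mysql-deployment.yaml"
kubectl create -f "$kubeWordPressLocation\wordpress-deployment.yaml"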

“The” script:

Note that if you specify an OMS workspace ID and key, the script will install the OMS agent on the underlying VMs to monitor the infrastructure and the Containers (more on this later).

Once the example has been deployed, we can check the status of the cluster using the following command (be patient as it takes nearly 20 minutes for the service to be ready):
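In my case that check is just a couple of kubectl queries; a minimal sketch, assuming your kubectl context points at the new cluster:

kubectl get pods
kubectl get services wordpress
# EXTERNAL-IP shows <pending> until the Azure load balancer has been provisioned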

[Screenshot: kubectl output showing the WordPress service with its EXTERNAL-IP]

Note the value for EXTERNAL-IP (here 13.82.180.10). Entering this value in the location bar of your browser once both containers are running gives you access to the familiar WordPress install screen:

[Screenshot: WordPress install screen]

This simple K8s visualizer, written a while back by Brendan Burns, shows the various containers running on the cluster:

[Screenshot: Kubernetes cluster visualizer showing the running containers]

Docker Swarm

Docker Swarm is the “other” Container orchestrator, delivered by Docker themselves. Since Docker version 1.12, the so-called “Swarm mode” does not require any discovery service like Consul, as discovery is handled by the Swarm itself. As mentioned in the introduction, Azure Container Service does not support Swarm mode yet, so we will run our Swarm on VMs created by Docker Machine.

It is now possible to use Docker Compose against Docker Swarm in Swarm mode in order to deploy complex applications making the trifecta Swarm/Machine/Compose a complete solution competing with Kubernetes.

swarm-diagram

The new-ish Docker Swarm mode handles service discovery itself

As with the Kubernetes example, I have created a script which automates the steps to create a Swarm and then deploys a simple nginx container on it.

As Swarm mode is not supported by Azure ACS yet, I have leveraged Docker Machine, the Docker-sanctioned tool for creating “docker-ready” VMs in all kinds of public or private clouds.

The following PowerShell script (easily translatable to a Linux flavour of shell) leverages the Azure CLI 2.0, Docker Machine and Docker to create a Swarm and deploy an nginx Container with an Azure load balancer in front. Docker Machine and Docker were installed on my Windows 10 desktop using Chocolatey.
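The full script isn't reproduced here, but the key steps it automates look roughly like this; VM names, the resource group and location are placeholders of mine, and the Azure load balancer plumbing is omitted:

# Create two docker-ready VMs in Azure (repeat or loop for more workers)
docker-machine create --driver azure --azure-subscription-id $subscriptionId `
  --azure-resource-group swarm-rg --azure-location australiaeast --azure-open-port 80 swarm-master
docker-machine create --driver azure --azure-subscription-id $subscriptionId `
  --azure-resource-group swarm-rg --azure-location australiaeast swarm-worker1

# Initialise the Swarm on the master and join the worker
$masterIp = docker-machine ip swarm-master
docker-machine ssh swarm-master "docker swarm init --advertise-addr $masterIp"
$token = (docker-machine ssh swarm-master "docker swarm join-token -q worker").Trim()
docker-machine ssh swarm-worker1 "docker swarm join --token $token ${masterIp}:2377"

# Deploy nginx as a service published on port 80
docker-machine ssh swarm-master "docker service create --name nginx --publish 80:80 --replicas 2 nginx"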

Kubernetes vs Swarm

Compared to Kubernetes on ACS, Docker Swarm takes a bit more effort and is not as compact, but it would probably be more portable as it does not leverage specific Azure capabilities, save for the load balancer. Since Brendan Burns joined Microsoft last year, it is understandable that Kubernetes is seeing additional focus.

It is a fairly recent development, but deploying multi-Container applications on a Swarm can be done via docker stack deploy using a Docker Compose file version 3 (compose files for older revisions need some work, as seen here).
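A stack deployment against the Swarm is then a one-liner, assuming a version 3 compose file in the current directory and a stack name of your choosing:

docker stack deploy --compose-file docker-compose.yml mystack
docker stack services mystack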

Azure Registry Service

I will explore this in more detail in a separate post, but a post about Docker Containers would not be complete without some thoughts about where the source Images for your deployments live (namely the registry).

Azure offers a Container Registry service (ACR) to store and retrieve Images; it is an easy option to use and is well integrated with the rest of Azure.
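Creating a registry and pushing an Image to it is only a handful of commands; the names below are placeholders and the SKU options may differ as the service evolves:

az acr create --resource-group acr-rg --name mycontainerregistry --sku Basic --admin-enabled true
az acr credential show --name mycontainerregistry
docker login mycontainerregistry.azurecr.io
docker tag nginx:latest mycontainerregistry.azurecr.io/nginx:v1
docker push mycontainerregistry.azurecr.io/nginx:v1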

It might be obvious to point out, but deploying Containers based on Images downloaded from the internet can be a security hazard, so creating and vetting your own Images is a must-have in enterprise environments, hence the criticality of running your own Registry service.

Monitoring containers with OMS

My last interesting bit for Containers on Azure is around monitoring. If you deploy the Microsoft Operations Management Suite (OMS) agent on the underlying VMs running Containers and have the Docker marketplace extension for Azure as well as an OMS account, it is very easy to get useful monitoring information. It is also possible to deploy OMS as a Container but I found that I wasn’t getting metrics for the underlying VM that way.

[Screenshot: OMS Container monitoring dashboard]

This allows you not only to grab performance metrics from the containers and the underlying infrastructure, but also to grab logs from applications, here the web logs for instance:

[Screenshot: web logs collected from the Containers in OMS]

I hope you found this post interesting and I am looking forward to the next one on this topic.

Setup a Power BI Gateway

Scenario

So, you have explored Power BI (free) and want to start some action in the cloud. Suddenly you realise that your data is stored in an on-premises SQL data source, yet you still want to get insights up in the cloud and share them with your senior business management.

Solution

Microsoft’s on-premises data gateway is a bridge that can securely transfer your data to Power BI service from your on-premises data source.

Assumptions

  • Power BI Pro licenses have been procured already for the required number of users (this is a MUST)
  • Users are already part of Azure AD and can sign in to Power BI service as part of Office 365 offering

Pre-requisites

You can build and set up a machine to act as a gateway between your Azure cloud service and on-premises data sources. The following are the pre-requisites to build that machine:

1) Server Requirements

Minimum Requirements:
  • .NET 4.5 Framework
  • 64-bit version of Windows 7 / Windows Server 2008 R2 (or later)

Recommended:

  • 8 Core CPU
  • 8 GB Memory
  • 64-bit version of Windows 2012 R2 (or later)

Considerations:

  • The gateway is supported only on 64-bit Windows operating systems.
  • You can’t install the gateway on a domain controller
  • Only one gateway can be installed on a single machine
  • Your gateway should not be turned off, disconnected from the Internet, or subject to a power plan that puts the machine to sleep – in all cases, it should be ‘always on’
  • Avoid wireless network connection, always use a LAN connection
  • It is recommended to have the gateway as close to the data source as possible to avoid network latency. The gateway requires connectivity to the data source.
  • It is recommended to have good throughput for your network connection.

Notes:

  • Once a gateway is configured, if you need to change its name you will need to reinstall and configure a new gateway.

2) Service Account

If your company/client is using a proxy server and your gateway does not have a direct connection to the Internet, you may need to configure a Windows service account for authentication purposes and change the default log on credential (NT SERVICE\PBIEgwService) to a service account of your choice with the ‘Log on as a service’ right.
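A hedged, quick way to switch the logon account from an elevated command prompt is with sc.exe; the service name matches the default install, while the account and password are placeholders (grant the account the 'Log on as a service' right first):

sc.exe config "PBIEgwService" obj= "YOURDOMAIN\svc-pbigateway" password= "<service account password>"
sc.exe stop "PBIEgwService"
sc.exe start "PBIEgwService"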

3) Ports

The gateway creates an outbound connection to Azure Service Bus and does not require inbound ports for communication. It is required to whitelist IP addresses listed in Azure Datacentres IP List.

Installation

Once you have met the pre-requisites listed in the previous paragraph, you can proceed to the gateway installation.

  1. Log in to Power BI with your organisation credentials and download the data gateway setup
  2. While installing, you need to select the highlighted option (the shared, non-personal gateway mode) so a single gateway can be shared among multiple users.
  3. As listed in the pre-requisites section, if your network has a proxy requirement you can change the service account for the following Windows service:

  4. You will notice the gateway is installed on your server
  5. Once you open the gateway application, you will see a success message

Configuration

Post installation, you need to configure a gateway to be used within your organisation.

  1. Open the gateway application and sign in with your credentials
  2. You need to set a name and a recovery key for the gateway; the recovery key can be used later to reconfigure/restore the gateway
  3. Once it is configured successfully, you will see a message window confirming that it is ready to use

  4. Switch to the gateway’s Network tab and you will see its status as Connected – great!
  5. You are all set: the gateway is up and running, and you can start building reports that use data from your on-premises server through the gateway you just configured.

 

Brisbane O365 Saturday

On the weekend I had the pleasure of presenting at the O365 Saturday Brisbane event. Link below:

http://o365saturdayaustralia.com/

In my presentation I demonstrated a new feature within Azure AD that allows the automatic assignment of licences to any of your Azure subscriptions using Dynamic Groups. So what’s cool about this feature?

Well, if you have a well-established organisational structure within your on-premises AD and you are synchronising the attributes that you need to identify this structure, then you can have your users automatically assigned licences based on their job type, department or even location. The neat thing about this is that it drastically simplifies the management of your licence allocation, which until now has largely been done through complicated scripting processes, either through an enterprise IAM system or just through the service desk when users are being set up for the first time.
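As a hedged illustration, using the AzureAD Preview PowerShell module (the attribute and rule syntax are just an example), a dynamic group keyed off department looks something like this; you then attach the licence to that group via Azure AD group-based licensing:

# Requires the AzureADPreview module for the dynamic membership parameters
Connect-AzureAD
New-AzureADMSGroup -DisplayName "Licensing-O365-E3-Sales" `
  -Description "Auto-assigns O365 E3 to the Sales department" `
  -MailEnabled $false -MailNickname "LicensingO365E3Sales" -SecurityEnabled $true `
  -GroupTypes "DynamicMembership" `
  -MembershipRule '(user.department -eq "Sales")' `
  -MembershipRuleProcessingState "On"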

You can view or download the presentation by clicking on the following link.

O365 Saturday Automate Azure Licencing

Throughout the Kloud Blog there is an enormous amount of material featuring new and innovative ways to use Azure AD to advance your business, and if you would like a more detailed post on this topic then please leave a comment below and I’ll put something together.

 

Getting started with Azure Cloud Shell

A few weeks back I noticed that I now had the option for the Azure Cloud Shell in the Azure Portal.

What is Azure Cloud Shell?

Essentially, rather than having the Azure CLI installed on your local workstation, you can now initiate it from the Portal, and you automatically get 5 GB of storage assigned to it (initiated as part of the setup). So you can now create, manage and delete Azure resources using a centrally hosted CLI session. Each time you start your shell, your home drive will mount and your profile, scripts and whatever else you’ve stored in it will be available to you. Nice. Let’s do it.

Getting Started

Login to the Azure Portal and click on the Cloud Shell icon.

As this is the first time you’ve accessed it, you will not have any storage associated with your Azure Cloud Shell. You will be prompted for storage information.

Azure Files must reside in the same region as the machine being mounted to. Cloud Shell machines currently (July 2017) exist in the below regions:

  • Americas: East US, South Central US, West US
  • Europe: North Europe, West Europe
  • Asia Pacific: India Central, Southeast Asia

I hit the Advanced Settings to specify creation of a new Resource Group, Storage Account and File Share.

The UI doesn’t check the configuration settings for uniqueness until they are submitted, so you might need a couple of attempts with the naming of your storage account. As you can see below, it isn’t surprising that my attempt to use azcloudshell as a Storage Account Name was already taken.

Providing unique values for these options let the initial creation go through just nicely. I now had a home drive created for my profile and any files I create or store for my sessions.

As for the commands you can use with the Azure CLI, have a look here for the full list you can use to create, manage and delete your Azure resources.

Personally I’m currently doing a lot with Azure Functions. A list of the full range of Azure Functions CLI commands is available here.

The next thing I looked to do was to put my scripts etc. into the clouddrive. I just navigated to the new Storage Account that I created as part of this and uploaded them via the browser.

Below you can see the file I uploaded on the right which appears in the directory in the middle pane.

Using the Azure CLI I changed directories and could see my uploaded file.
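For anyone following along, the sort of commands I ran in the shell looked like this (your resource output will obviously differ):

cd clouddrive
ls
az group list --output table
az vm list --output table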

And that is pretty much it. Continue as you would with the CLI, but just now with it all centrally stored. Sweet.

Cloud Security Research: Cross-Cloud Adversary Analytics

Newly published research from security firm Rapid7 is painting a worrying picture of hackers and malicious actors increasingly looking for new vectors against organizations with resources hosted in public cloud infrastructure environments.

Some highlights of Rapid7’s report:

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate – 86% and 74%, respectively – than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL Injection, PHP webshell injection and credentials attacks against ssh, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).

Findings included nearly a quarter of hosts deployed in IBM’s SoftLayer public cloud having databases publicly accessible over the internet, which should be a privacy and security concern to those organisations and their customers.

Many of Google’s cloud customers are leaving shell access publicly accessible over protocols such as SSH and, much worse still, Telnet, which is worrying to say the least.

Businesses using the public cloud are being increasingly probed by outsiders looking for well-known vulnerabilities such as OpenSSL Heartbleed (CVE-2014-0160), Stagefright (CVE-2015-1538) and POODLE (CVE-2014-3566), to name but a few.

Digging further into their methodologies to see whether these probes were random or targeted, it appears these actors are honing their skills, tailoring their probes and attacks to specific providers and organisations.

Rapid7’s research was conducted by means of honey traps: hosts and services made available solely for the purpose of capturing untoward activity, with a view to studying how these malicious outsiders do their work. What’s more, the company has partnered with Microsoft, Amazon and others under the auspices of projects Heisenberg and Sonar to leverage big data analytics to mine the results of their findings and scan the internet for trends.

Case in point: project Heisenberg saw the deployment of honeypots in every geography in partnership with all major public cloud providers, and scanned for compromised digital certificates in those environments, while project Sonar scanned millions of digital certificates on the internet for signs of the same.

However, while the report provides clear evidence that hackers are tailoring their attacks to different providers and organisations, it reads as more of an indictment of the poor standard of security being deployed by some organisations in the public cloud today than a statement on the security practices of the major providers.

The 2016 national exposure survey.

Read about the Heisenberg cloud project (slides).

Zero to MySQL in less than 10 minutes with Azure Database for MySQL and Azure Web Apps

siliconvalve

I’m a long-time fan of Open Source and have spent chunks of my career knocking out LAMP solutions since before ‘LAMP’ was a thing!

Over the last few years we have seen a revived Microsoft begin to embrace (not ’embrace and extend’) the various Open Source platforms and tools that are out there and to actively contribute and participate with them.

Here’s our challenge today – setup a MySQL environment, including a web-based management UI, with zero local installation on your machine and minimal mouse clicks.

Welcome to Azure Cloud Shell

Our first step is to head on over to the Azure portal at https://portal.azure.com/ and login.

Once you are logged in open up a Cloud Shell instance by clicking on the icon at the top right of the navigation bar.

Cloud Shell

If this is the first time you’ve run it you will be prompted to create a home file share…


Using ADFS on-premises MFA with Azure AD Conditional Access

With the recent announcement of General Availability of the Azure AD Conditional Access policies in the Azure Portal, it is a good time to reassess your current MFA policies particularly if you are utilising ADFS with on-premises MFA; either via a third party provider or with something like Azure MFA Server.

Prior to conditional MFA policies being possible, when utilising on-premises MFA with Office 365 and/or Azure AD, the MFA rules were generally enabled on the ADFS relying party trust itself. The main limitation with this of course is the inability to define different MFA behaviours for the various services behind that relying party trust. That is, within Office 365 (Exchange Online, SharePoint Online, Skype for Business Online etc.) or through different Azure AD Apps that may have been added via the app gallery (e.g. ServiceNow, SalesForce etc.). In some circumstances you may have been able to define some level of granularity utilising custom authorisation claims, such as bypassing MFA for ActiveSync and legacy authentication scenarios, but that method was reliant on special client headers or the authentication endpoints that were being used and hence was quite limited in its use.

Now with Azure AD Conditional Access policies, the definition and logic of when to trigger MFA can, and should, be driven from the Azure AD side given the high level of granularity and varying conditions you can define. This doesn’t mean though that you can’t keep using your on-premises ADFS server to perform the MFA, you’re simply letting Azure AD decide when this should be done.

In this article I’ll show you the method I like to use to ‘migrate’ from on-premises MFA rules to Azure AD Conditional Access.  Note that this is only applicable for the MFA rules for your Azure AD/Office 365 relying party trust.  If you are using ADFS MFA for other SAML apps on your ADFS farm, they will remain as is.

Summary

At a high level, the process is as follows:

  1. Configure Azure AD to pass ‘MFA execution’ to ADFS using the SupportsMFA parameter
  2. Port your existing ADFS MFA rules to an Azure AD Conditional Access (CA) Policy
  3. Configure ADFS to send the relevant claims
  4. “Cutover” the MFA execution by disabling the ADFS MFA rules and enabling the Azure AD CA policy

The ordering here is important: by doing it like this, you avoid accidentally hitting users with a ‘double MFA’ prompt.

Step 1:  Using the SupportsMFA parameter

The crux of this configuration is the use of the SupportsMFA parameter within your MSOLDomainFederationSettings configuration.

Setting this parameter to True will tell Azure AD that your federated domain is running an on-premises MFA capability and that whenever it determines a need to perform MFA, it is to send that request to your STS IDP (i.e. ADFS) to execute, instead of triggering its own ‘Azure Cloud MFA’.

To perform this step is a simple MSOL PowerShell command:

Set-MsolDomainFederationSettings -domain yourFederatedDomain.com -SupportsMFA $true

Pro Tip:  This setting can take up to 15-30 mins to take effect, so make sure you factor this into your change plan. If you don’t wait for this to kick in before cutting over, your users will get ‘double MFA’ prompts.
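You can confirm the setting has taken effect with a quick query:

Get-MsolDomainFederationSettings -DomainName yourFederatedDomain.com | Select-Object SupportsMFA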

Step 2:  Porting your existing MFA Rules to Azure AD Conditional Access Policies

There’s a whole article in itself talking about what Azure AD CA policies can do nowadays, but for our purposes let’s use the two most common examples of MFA rules:

  1. Bypass MFA for users that are a member of a group
  2. Bypass MFA for users on the internal network*

Item 1 is pretty straightforward; just ensure our Azure AD CA policy has the following:

  • Assignment – Users and Groups:
    • Include:  All Users
    • Exclude:  Bypass MFA Security Group  (simply reuse the one used for ADFS if it is synced to Azure AD)

MFABypass1

Item 2 requires the use of the Trusted Locations feature. Note that at the time of writing, this feature is still the ‘old’ MFA Trusted IPs feature hosted in the Azure Classic Portal. Note*: if you are using Windows 10 Azure AD joined machines this feature doesn’t work. Why this is the case will be an article in itself, so I’ll add a link here when I’ve written that up.

So within your Azure AD CA policy do the following:

  • Conditions – Locations:
    • Include:  All Locations
    • Exclude:  All Trusted IPs

MFABypass2.png

Then make sure you click on Configure all trusted locations to be taken to the Azure Classic Portal. From there you must select Skip multi-factor authentication for requests from federated users on my intranet.

MFABypass3.png

This effectively tells Azure AD that a ‘trusted location’ is any authentication request that comes in with an InsideCorporateNetwork claim.

Note:  If you don’t use ADFS or an IDP that can send that claim, you can always use the actual ‘Trusted IP addresses’ method.

Now you can define exactly which Azure AD apps you want MFA to be enabled for, instead of all of them as you had originally.

MFABypass7.png

Pro Tip:  If you are going to enable MFA on All Cloud Apps to start off with, check the end of this article for some extra caveats you should consider, else you’ll start breaking things.

Finally, to make this Azure AD CA policy actually perform MFA, set the access controls:

MFABypass8.png

For now, don’t enable the policy just yet as there is more prep work to be done.

Step 3:  Configure ADFS to send all the relevant claims

So now that Azure AD is ready for us, we have to configure ADFS to actually send the appropriate claims across to ‘inform’ it of what is happening or what it is doing.

The first is to make sure we send the InsideCorporateNetwork claim so Azure AD can apply the ‘bypass for all internal users’ rule. This is well documented everywhere, but the short version is: within your Microsoft Office 365 Identity Platform relying party trust in ADFS, add a new Issuance Transform Rule to pass through the Inside Corporate Network claim:

MFABypass4

Fun fact: the Inside Corporate Network claim is automatically generated by ADFS when it detects that the authentication was performed on the internal ADFS server, rather than through the external ADFS proxy (i.e. WAP). This is why it’s a good idea to always use an ADFS proxy as opposed to simply reverse proxying your ADFS. Without it you can’t easily tell whether it was an ‘internal’ or ‘external’ authentication request (plus it’s more secure).

The other important claim to send through is the authnmethodsreferences claim. Now you may already have this if you were following some online Microsoft TechNet documentation when setting up ADFS MFA. If so, you can skip this step.

This claim is what is generated when ADFS successfully performs MFA.  So think of it as a way for ADFS to tell Azure AD that it has performed MFA for the user.

MFABypass6
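For reference, the two pass-through rules look roughly like this in claim rule language (the rule names and ordering are up to you):

@RuleName = "Pass through InsideCorporateNetwork"
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
 => issue(claim = c);

@RuleName = "Pass through authnmethodsreferences"
c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
 => issue(claim = c);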

Step 4: “Cutover” the MFA execution

So now that everything is prepared, the ‘cutover’ can be performed by doing the following:

  1. Disable the MFA rules on the ADFS Relying Party Trust
    Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $null
  2. Enable the Azure AD CA Policy

Now if it all goes as planned, what should happen is this:

  1. User attempts to sign in to an Azure AD application. Since their domain is federated, they are redirected to ADFS to sign in.
  2. User will perform standard username/password authentication.
    • If internal, this is generally ‘SSO’ with Windows Integrated Auth (WIA). Most importantly, this user will get an ‘InsideCorporateNetwork’ = true claim
    • If external, this is generally a Forms Based credential prompt
  3. Once successfully authenticated, they will be redirected back to Azure AD with a SAML token. Now is actually when Azure AD will assess the CA policy rules and determine whether the user requires MFA or not.
  4. If they do, Azure AD generates a new ADFS sign-in request, this time specifically stating via the wauth parameter to use multipleauthn. This effectively tells ADFS to execute MFA using its configured providers.
  5. Once the user successfully completes MFA, they will go back to Azure AD with a new SAML token that contains a claim telling Azure AD that MFA has now been performed, and it subsequently lets the user through.

This is what the above flow looks like in Fiddler:

MFABypass9.png

This is what your end-state SAML token should look like as well:

MFABypass10

The main takeaway is that step 4 is the new auth flow that is introduced by moving MFA evaluation into Azure AD. Prior to this, step 2 would have simply performed both username/password authentication and MFA in the same instance rather than over two requests.

Extra Considerations when enabling MFA on All Cloud Apps

If you decide to take an ‘exclusion’ approach to MFA enforcement for Cloud Apps, be very careful with this. In fact you’ll even see Microsoft giving you a little extra warning about this.

MFABypass12

The main difference with taking this approach compared to just doing MFA enforcement at the ADFS level is that you are now enforcing MFA on all cloud identities as well!  This may very well unintentionally break some things, particularly if you’re using ‘cloud identity’ service accounts (e.g. for provisioning scripts or the like).  One thing that will definitely break is the AADConnect account that is created for directory synchronisation.

So at a very minimum, make sure you remember to add the On-Premises Directory Synchronization Service Account(s) into the exclusion list for your Azure AD MFA CA policy.

The very last thing to call out is that some Azure AD applications, such as the Intune Company Portal and Azure AD Powershell cmdlets, can cause a ‘double ADFS prompt’ when MFA evaluation is being done in Azure AD.   The reason for this and the fix is covered in my next article Resolving the ‘double auth’ prompt issue with Azure AD Conditional Access MFA and ADFS so make sure you check that out as well.

 

Putting SQL to REST with Azure Data Factory

Microsoft’s integration stack has slowly matured over the past years, and we’re on the verge of finally breaking away from BizTalk Server, or are we? In this article I’m going to explore Azure Data Factory (ADF). Rather than showing the usual out of the box demo I’m going to demonstrate a real-world scenario that I recently encountered at one of Kloud’s customers.

ADF is a very easy to use and cost-effective solution for simple integration scenarios that can be best described as ETL in the ‘old world’. ADF can run at large scale, and has a series of connectors to load data from a data source, apply a simple mapping and load the transformed data into a target destination.

ADF is limited in terms of standard connectors, and (currently) has no functionality to send data to HTTP/RESTful endpoints. Data can be sourced from HTTP endpoints, but in this case we’re going to read data from a SQL Server and write it to an HTTP endpoint.

Unfortunately ADF tooling isn’t available in VS2017 yet, but you can download the Microsoft Azure DataFactory Tools for Visual Studio 2015 here. Next we’ll use the extremely useful 3rd party library ‘Azure.DataFactory.LocalEnvironment’ that can be found on GitHub. This library allows you to debug ADF projects locally, and eases deployment by generating ARM templates. The easiest way to get started is to open the sample solution, and modify accordingly.

You’ll also need to setup an Azure Batch account and storage account according to Microsoft documentation. Azure Batch is running your execution host engine, which effectively runs your custom activities on one or more VMs in a pool of nodes. The storage account will be used to deploy your custom activity, and is also used for ADF logging purposes. We’ll also create a SQL Azure AdventureWorksLT database to read some data from.

Using the VS templates we’ll create the following artefacts:

  • AzureSqlLinkedService (AzureSqlLinkedService1.json)
    This is the linked service that connects the source with the pipeline, and contains the connection string to connect to our AdventureWorksLT database.
  • WebLinkedService (WebLinkedService1.json)
    This is the linked service that connects to the target pipeline. ADF doesn’t support this type as an output service, so we only use it as a reference from our HTTP table so that it passes schema validation.
  • AzureSqlTableLocation (AzureSqlTableLocation1.json)
    This contains the table definition of the Azure SQL source table.
  • HttpTableLocation (HttpTableLocation1.json)
    The tooling doesn’t contain a specific template for Http tables, but we can manually tweak any table template to represent our target (JSON) structure.

AzureSqlLinkedService

AzureSqlTable

Furthermore, we’ll adjust the DataDownloaderSamplePipeline.json to use the input and output tables that are defined above. We’ll also set our schedule and add a custom property to define a column mapping that allows us to map between input columns and output fields.

The grunt work of the solution is performed in the DataDownloaderActivity class, where custom .NET code ‘wires together’ the input and output data sources and performs the actual copying of data. The class uses a SqlDataReader to read records, and copies them in chunks as JSON to our target HTTP service. For demonstration purposes I am using the Request Bin service to verify that the output data made its way to the target destination.

We can deploy our solution via PowerShell, or the Visual Studio 2015 tooling if preferred:
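A hedged sketch of the PowerShell route, using the AzureRM Data Factory (v1) cmdlets and assuming the JSON artefact file names from earlier (the resource group, factory name and paths are placeholders):

Login-AzureRmAccount
New-AzureRmDataFactory -ResourceGroupName "adf-demo-rg" -Name "DataDownloaderADF" -Location "West US"
New-AzureRmDataFactoryLinkedService -ResourceGroupName "adf-demo-rg" -DataFactoryName "DataDownloaderADF" -File ".\AzureSqlLinkedService1.json"
New-AzureRmDataFactoryLinkedService -ResourceGroupName "adf-demo-rg" -DataFactoryName "DataDownloaderADF" -File ".\WebLinkedService1.json"
New-AzureRmDataFactoryDataset -ResourceGroupName "adf-demo-rg" -DataFactoryName "DataDownloaderADF" -File ".\AzureSqlTableLocation1.json"
New-AzureRmDataFactoryDataset -ResourceGroupName "adf-demo-rg" -DataFactoryName "DataDownloaderADF" -File ".\HttpTableLocation1.json"
New-AzureRmDataFactoryPipeline -ResourceGroupName "adf-demo-rg" -DataFactoryName "DataDownloaderADF" -File ".\DataDownloaderSamplePipeline.json"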

NewADF

After deployment we can see the data factory appearing in the portal, and use the monitoring feature to see our copy tasks spinning up according to the defined schedule:

ADF Output

In the Request Bin that I created I can see the output batches appearing one at a time:

RequestBinOutput

As you might notice, it’s not all that straightforward to compose and deploy a custom activity, and having to rely on Azure Batch can incur significant cost unless you adopt the right auto-scaling strategy. Although the solution requires us to write code and implement our connectivity logic ourselves, we are able to leverage some nice platform features such as a reliable execution host, retry logic, scaling, logging and monitoring, all accessible through the Azure portal.

The complete source code can be found here. The below gists show the various ADF artefacts and the custom .NET activity.

The custom activity C# code:

The quickest way to create new VMs in Azure from existing VM snapshots, mostly with PowerShell

Originally posted on Lucian’s blog @clouduccino.com. Follow Lucian on Twitter @LucianFrango.


There are probably multiple ways to do this, both right and wrong, but here’s a process that I’ve been using for a while that I’ve recently tweaked to take advantage of the new Azure Managed Disks.

Sidebar – standard managed disk warning

Before I go on though, I wanted to issue a quick warning about the differences between standard unmanaged and managed disks. Microsoft will be pushing you to use Managed Disks more and more. Yes, it’s a great feature that makes the management of VM disks simpler. The key bit of information though is as follows:

  • If you provision an unmanaged disk that is 1 TB in size but only use 100 GB, you are paying for 100 GB of storage costs. So you’re only paying for what you use. [1. Unmanaged disk cost – Azure Documentation ]
  • If you provision a managed disk that is 1 TB in size but only use 10 MB, you will be paying for the privilege of the whole 1 TB disk [2. Managed disk cost – Azure Documentation ]
  • Additionally, with Premium disks you’re paying for what you provision no matter if it’s managed or unmanaged

That aside, Managed Disks are a pretty good feature that makes disk and storage account management considerably simpler. If you’re frugal with your VM allocation and have the process to manage people and technology correctly, Managed Disks are great.

The Process

tl;dr

  • Create a snapshot in Azure
  • Copy the snapshot from snapshot storage location to Blob storage
  • Create a new VM instance based on the blob.vhd file
    • This blog post outlines the use of managed disks
    • However, mounting direct from Blob can also be done

The actual process

I’ve gone through this recently and updated it so that it’s as streamlined, for me, as possible. Again, this is skewed towards managed disk usage, but it can easily be extended to be used with unmanaged disks as well. Let’s begin:

Step 0 ?

If you’re wanting to do this to create copies of your VM instances, to scale out your workload, remember to generalise or sysprep your VM instance prior to Step 1. In the example I go into below, my use case was to create a copy of a server from a production environment (VNET and subscription) and move it to a different and separate non-production environment (separate VNET and subscription).

Step 1 – Create a snapshot of your VM disk(s)

The first thing we need to do is actually power off your virtual machine instance. I’ve seen that snapshots can happen while the VM instance is running, but I guess you can call me a little bit more old school, a little bit more on the cautious side when it comes to these sorts of things. I’ve been bitten by this particular bug in the past, unpleasant it was; so I’m inclined to err on the side of caution.

Once the VM instance is offline, go to the Azure Portal and search for “Snapshots”. Create a new snapshot.

  • NOTE: snapshots in Azure are done per DISK and not per VM INSTANCE
  • Name the snapshot
  • Select the subscription where the VM instance is located
  • Select the resource group you want to save the snapshot to
    • Or create a new one
  • Select the snapshot location
  • Select the source disk
    • If you earlier selected the same resource group where your VM instance is contained, the disk selection will display the resource group member VM instance disks first in the list
  • Select the storage type- standard or premium for your snapshot
    • I usually just use standard as I’ve not had the need for faster speed premium as yet (that will change one day for sure)
  • Create the snapshot

Once the snapshot is created, complete this quick next step to generate an export access URL (we’ll need this in step 2; a PowerShell alternative is sketched just after this list):

  • Select the snapshot
  • From the top menu, select Export
  • You’ll be presented with a menu item with a time interval (in seconds)
  • The default is 3600, or 1 hour
  • That is fine, but I like to make that 36000 (add another 0) so that I have a whole day to do this again and again if need be
  • Save the generated URL to notepad for later!
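The PowerShell alternative to the portal’s Export button, if you prefer to stay in the console (the resource group and snapshot names are placeholders):

$sas = Grant-AzureRmSnapshotAccess -ResourceGroupName "prod-rg" -SnapshotName "vm01-osdisk-snapshot" `
  -Access Read -DurationInSecond 36000
$sas.AccessSAS   # this is the absolute URI you'll use in Step 2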

Step 2 – Copy the snapshot to Blob

The next part relies on PowerShell. Update the following PowerShell script with your parameters to copy the snapshot to Blob:

$storageAccountName = "<storage account name>"
$storageAccountKey = "<storage account key>"
$absoluteUri = "https://blahblahblah.blob.core.windows.net/blahblahblah/........"
$destContainer = "<container>"
$blobName = "server.vhd"

$destContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
Start-AzureStorageBlobCopy -AbsoluteUri $absoluteUri -DestContainer $destContainer -DestContext $destContext -DestBlob $blobName

Just for your info, here’s a quick explanation of the above:

  • Storage account name = the storage account where you want to store the VHD
  • The storage account key = either the primary or secondary key, which is used for authentication when accessing the storage account
  • The absolute URI = this is the snapshot URI we generated at the end of step 1
  • The destination container = where you want to store the VHD. Usually this is either "vhds", or maybe create one called "snapshots"
  • The blob name = the file name of the VHD itself (remember to only use lowercase)

Step 2.5 – Moving around the blob if need be

Before we actually create a new VM instance based on this snapshot blob, there is an additional option we could take. That is, perhaps it would make sense to move the blob to a different subscription. This is particularly handy when you have a development environment that you want to move to production. Other use cases might be the inverse: making a replica of a production system for development purposes.

The absolute fastest way to do this, as I don’t like being inefficient here, is with the Azure Storage Explorer (ASE) tool. It’s an application that provides a quick GUI for completing storage actions. If you add both storage accounts into ASE, you can do it as easily as this:

  • Select the blob from storage account A (in subscription A)
  • Select copy from the top menu
  • Go to your second storage account (storage account B, perhaps in subscription B)
  • Go to the relevant container
  • Select paste from the top menu
  • Wait for the blob to copy
  • DONE

It can’t get any simpler or faster than that. I’m sure if you’re command-line inclined you have a go-to PowerShell cmdlet for that (a sketch follows below), but for me, I’ve found that to be pretty damn quick. So if it isn’t broken, why fix it.
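For completeness, a hedged PowerShell version of the same copy between two storage accounts (the account names, keys and containers are placeholders):

$srcContext  = New-AzureStorageContext -StorageAccountName "storageaccounta" -StorageAccountKey "<key A>"
$destContext = New-AzureStorageContext -StorageAccountName "storageaccountb" -StorageAccountKey "<key B>"

Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "server.vhd" -Context $srcContext `
  -DestContainer "vhds" -DestBlob "server.vhd" -DestContext $destContext
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "server.vhd" -Context $destContext -WaitForComplete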

Step 3 – Create a new VM with a managed disk based on the snapshot we’ve put into Azure Blob

The final piece of the puzzle, as the cliche would go, is to create a new virtual machine instance. Again, as the wonderfully elusive and vague title of this blog post states, we’ll use PowerShell to do this. Sure, ARM templates would work and likely the Azure Portal can get you pretty far as well. However, again I like to be efficient and I’ve found that the following PowerShell script does this the best.

Additionally, you can change this up to mount the VHD directly from Blob instead of creating a new managed disk. So, for the purpose of creating a new machine, PowerShell is as flexible as it is fast and convenient.

Here’s the script you’ll need to create the new VM instance:

#Prepare the VM parameters 
$rgName = "<resource-group-name>"
$location = "australiaEast"
$vnet = "<virtual-network>"
$subnet = "/subscriptions/xxxxxxxxx/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network>/subnets/<subnet>"
$nicName = "VM01-Nic-01"
$vmName = "VM01"
$osDiskName = "VM01-OSDisk"
$osDiskUri = "https://<storage-account>.blob.core.windows.net/<container>/server.vhd"
$VMSize = "Standard_A1"
$storageAccountType = "StandardLRS"
$IPaddress = "10.10.10.10"

#Create the VM resources
$IPconfig = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig1" -PrivateIpAddressVersion IPv4 -PrivateIpAddress $IPaddress -SubnetId $subnet
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $location -IpConfiguration $IPconfig
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $VMSize
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id

$osDisk = New-AzureRmDisk -DiskName $osDiskName -Disk (New-AzureRmDiskConfig -AccountType $storageAccountType -Location $location -CreateOption Import -SourceUri $osDiskUri) -ResourceGroupName $rgName
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType $storageAccountType -DiskSizeInGB 128 -CreateOption Attach -Windows
$vm = Set-AzureRmVMBootDiagnostics -VM $vm -disable

#Create the new VM
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

Again, let me explain a little the parameters we’ve set that the start of the script:

  • $rgName = the resource group where you want to deploy the VM instance
  • $location = the Azure region
  • $vnet = the virtual network where you want to deploy the VM instance
  • $subnet = the subnet where you want to deploy the VM instance
  • $nicName = the name of the NIC of the server
  • $vmName = the name of the VM instance, the server name
  • $osDiskName = the OS disk name
  • $osDiskUri = the direct URI/URL to the VHD in your storage account
  • $VMSize = the VM size or the service plan for the VM
  • $storageAccountType = what type of storage you would like to have
  • $IPaddress = the static IP address of the server, as I like to do this in Azure rather than use dynamic IPs

And that is pretty much that!

Conclusion

It’s Friday in Sydney. It’s the pre-kend and it’s a gloomy, cold 9th day of Winter 2017. I hope that the above content helps you out of a jam or gives you the insight you need to run through this process quickly and efficiently. That feeling of giving back, helping. That’s the feeling that should warm me up and get me to lunch time! Counting down!

Cheers!

How to build and deploy an Azure NodeJS WebApp using Visual Studio Code

Introduction

This week I had the need to build a small web application with a reasonably simple front end that will later be integrated inside a Portal. The web application isn’t going to be high use and didn’t necessitate deployment of infrastructure (VMs). I’d messed with NodeJS a while back in this post where I configured a UI for Microsoft Identity Manager and Azure-based functions.

In the back of my mind I knew I didn’t want to have to go for a full Visual Studio Project Solution for this, and with the recent updates to Visual Studio Code I figured it must be possible to do it using it. There wasn’t much around on doing it, so I dived in and worked it out for myself. Here I share the end-to-end process to make it easy for you to get started.

Overview

What you will need on your development workstation before you start are the following components. Download and install them on your development machine.

You will also need an Azure Subscription to where you will publish your NodeJS site.

This post details setting up Visual Studio Code to build a shell NodeJS site and deploy it to Azure using a local GIT Repository. Let’s get started.

Visual Studio Code Extensions

A really smart and handy extension for VS Code is Azure Tools for VS Code. Released a few months ago (January 2017), this extension allows you to quickly create a Web App (Resource Group, App Service, App Service Plan etc.) from within VS Code. With VS Code on your development machine from the prerequisites above, click on the Extensions icon (bottom left) in the VS Code menu and type Azure Tools. Click the green Install button.

Azure Tools for VS Code

Creating the NodeJS Site in VS Code

I had a couple of attempts at doing this before I found a quick, neat and repeatable method of getting started. In order to get the Web App deployed and accessible correctly in Azure I found it easiest to use the Sample Azure NodeJS Hello World example from here. Download that sample and extract the contents to a new folder on your workstation. I created a new path on mine named …\NodeJS\nodejssite and dropped the sample in there so it looked like below.

After creating the folder structure and putting the sample in it, whilst in the sub-directory type:

code .

That will startup Visual Studio Code in the newly created folder with the starter sample.

Install Express for NodeJS

To that base sample site we’ll install Express. From the Terminal tab in the lower pane type:

npm install -g express-generator

Express App

With Express now on our machine, let’s add the Express App to our new NodeJS site. Type express in the Terminal window.

express

Accept that the directory is not empty

This will create the folder structure for Express.

Now to get all the files and modules for our site configured for our app run npm install

Now type npm start in the terminal window to start our new site.

The NodeJS site will start. Open a Web Browser and go to http://localhost:3000 and you should see the Express empty site.

Navigate to views => index.jade and update the text like I have below.

Refresh your browser window and you should see the text updated.

In the terminal window press Ctrl + C to stop NodeJS.

Test Deploy to Azure

Now let’s do a test deploy of our shell site as an Azure WebApp.

Press Ctrl + Shift + P or from the View menu select Command Palette.

Type Azure: Login 

This will generate a code and give you a link to open in your browser and login

Paste in the code from the clipboard and select continue

Then login with your account for the Tenant where you want to deploy the WebApp to. You’ll then be authorized.

From the Command Palette type azure sub and choose Azure: List Azure Subscriptions and choose the subscription where you will create and deploy the WebApp

Now from the Command Palette type Azure Create a Web App (Simple).

Give the WebApp a name. This will become the WebApp Name, and the basis for all the associated WebApp components. Use Create a Web App (Advanced) if you want to be more specific about the names of the App Resources etc.

If you watch the bottom VS Code Status bar you will see the Azure Tools extension create the new Resource Group, Web App and Web App Plan.

Login to the Azure Portal, select the new Web App.

Select Deployment Options and then Local Git Repository. Select OK.

Select Deployment credentials and provide a username and password. You’ll need this shortly to publish your site.

Click Overview. Copy the Git clone url.

Back in VS Code, select the GIT icon (under the magnifying glass) and from the top choose Initialize Repository.

Then in the terminal window type git remote add azure <git clone url> obtained from the step above.

Type Initial Commit as the message and click the tick icon in the Source Control menu bar.

Select the … (more actions) menu and select Publish.

Select azure as the remote target we setup earlier.

You’ll be prompted to authenticate. Use the account you created above in Deployment Credentials.
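If you prefer the integrated terminal over the Source Control UI, the same publish boils down to standard Git commands (keep the placeholder for your own clone URL):

git init
git add -A
git commit -m "Initial Commit"
git remote add azure <git clone url>
git push azure master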

Back in the Azure Portal under the Web App under Deployment Options you will see the initial commit.

Click on Overview and you should see that it is running. Click on URL and the site will open in a new tab in your browser.

Updating our WebApp

Now, let’s make a change to our WebApp.

Back in VS Code, click on the files and folder icon in the top left corner, navigate to views => index.jade and update the title. Hit Ctrl + S (or select Save from the File menu). In the Terminal below type npm start to start our NodeJS site locally.

Check out the update locally. In a browser navigate to the local NodeJS site on localhost:3000. You’ll see the changed page.

Select the Git icon on the left menu, give the update some text e.g. ‘updated page text’ and select the tick from the top menu.

Select the … (more actions) menu and choose Push to publish the changes to our Azure WebApp.

Go back to your browser which was on the Azure WebApp URL and reload. Our change has been pushed and is reflected in the WebApp.

Summary

Very quickly and easily, using Visual Studio Code (with NodeJS and Git Desktop installed locally) we have:

  • Created an Azure WebApp
  • Created a base NodeJS site
  • Created a local NodeJS site we can continue to develop
  • Published it to Azure

Now go create something awesome.