Migrating Sitecore 7.0 to Azure IaaS Virtual Machines – Part 1

INTRODUCTION

Recently, I had the opportunity to work on a Sitecore migration project. I was tasked with moving a third-party hosted Sitecore 7.0 instance to Azure IaaS. The task sounds simple enough, but life is rarely that simple: a new requirement was to improve on the existing infrastructure by making the new Sitecore environment highly available, and that is where the fun begins.

To give some context, the CURRENT Sitecore environment is not highly available and has the following server topology:

  • Single Sitecore Content Delivery (CD) Instance
  • Single Sitecore Content Management (CM) Instance
  • Single SQL Server 2008 Instance for Sitecore Content and Configurations
  • Single SQL Server 2008 Instance for Sitecore Analytics

The NEW Sitecore Azure environment is highly available and has the following server topology:

  • Load-balanced Sitecore CD Instances (2 servers)
  • Single Sitecore CM Instance (single server)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Content (2 servers)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Analytics (2 servers)

In this tutorial I will walk you through the processes required to provision a brand new Azure environment and migrate Sitecore.

This tutorial is split into three parts:

  1. Part 1 – Provision the Azure Sitecore Environment
  2. Part 2 – SQL Server 2012 AlwaysOn Availability Group Configuration (coming soon)
  3. Part 3 – Sitecore Configuration and Migration (coming soon)

 

PART 1 – Provision the Azure Sitecore Environment

In Part 1 of the tutorial, we’ll look at building the foundations required for the Sitecore migration.

1. Sitecore Web Servers

  • First we need to create the two Sitecore CD instances. In the Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from the gallery. Find the Windows Server 2012 R2 Datacenter template, then go through the creation wizard and fill out all the required information.


  • When creating a new VM, it must be assigned to a Cloud Service; you will get the opportunity to create a new Cloud Service if you don’t already have one. For load-balanced configurations you also need to create a new Availability Set, so let’s create that too.


  • Repeat the above steps to create the second Sitecore CD instance and assign it to the same Cloud Service and Availability Set.
  • Repeat the above steps to create the Sitecore CM instance and create a new Cloud Service for it (you don’t need an Availability Set for a single instance).
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same Active Directory domain. A scripted equivalent of these portal steps is sketched below.
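
If you prefer to script these steps rather than click through the portal, the classic Azure service management cmdlets can build the same setup. The sketch below is indicative only; the image filter, VM size, names, credentials and region are placeholder assumptions you would replace with your own values.

# Indicative sketch only: create a Sitecore CD VM in a cloud service and availability set
# (image filter, names, credentials and region below are placeholders)
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like "Windows Server 2012 R2 Datacenter*" } |
    Sort-Object PublishedDate -Descending |
    Select-Object -First 1

$vm = New-AzureVMConfig -Name "SC-CD-01" -InstanceSize "Large" `
        -ImageName $image.ImageName -AvailabilitySetName "SC-CD-AS" |
      Add-AzureProvisioningConfig -Windows -AdminUsername "scadmin" -Password "P@ssw0rd!"

# Creating the VM also creates the cloud service if it doesn't exist yet
New-AzureVM -ServiceName "sitecore-cd" -Location "West US" -VMs $vm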

2. Sitecore SQL Servers

  • Now we need to create two SQL Server 2012 clusters – one for Sitecore content and the other for Sitecore analytics.
  • In the Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from the gallery. Find the SQL Server 2012 SP2 Enterprise template (this will also work with SQL Server 2012 Standard and Web editions), go through the creation wizard and fill out all the required information. A quick way to locate this gallery image with PowerShell is sketched at the end of this section.

Please note: by creating a new VM from the SQL Server template, you are automatically assigned a pre-bundled SQL Server licence. If you want to use your own SQL Server licence, you’ll have to manually install SQL Server after spinning up a standard Windows Server VM.


  • During the creation process, create a new Cloud Service and Availability Set, and assign them to this VM.


  • Repeat the above steps to create the second Sitecore SQL Server instance and assign it to the same Cloud Service and Availability Set. These two SQL Servers will form the SQL Server cluster.
  • Repeat the above steps for the second SQL Server cluster for Sitecore Analytics.
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same Active Directory domain.
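
As referenced above, you can confirm the exact SQL Server gallery image to use from PowerShell rather than scrolling through the portal. This is an indicative sketch only and assumes the classic Azure PowerShell module is installed and a subscription is already selected.

# Indicative sketch only: locate the SQL Server 2012 SP2 Enterprise gallery image
Get-AzureVMImage |
    Where-Object { $_.ImageFamily -like "SQL Server 2012 SP2 Enterprise*" } |
    Sort-Object PublishedDate -Descending |
    Select-Object -First 1 ImageFamily, ImageName, PublishedDate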

3. Enable Load-balanced Sitecore Web Servers

In order to make the Sitecore CD instances highly available, we need to configure a load balancer that will handle traffic for those two Sitecore CD instances. In Azure terms, this just means adding a new endpoint and ticking a few check boxes, and you are ready to go. If only everything in life was that easy :)

  • In Azure Management Portal, find your Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner).


  • You need to add two new load-balanced endpoints: one for normal web traffic (port 80) and another for secure web traffic (port 443). In the creation wizard, specify the type of traffic for the endpoint, in this case HTTP port 80. Make sure you check the Create a Load-balanced Set check box.


  • On the next screen, give the load-balanced set a name, leave the rest of the options at their defaults, then confirm and create.


  • Do the same for secure web traffic and create a new endpoint for HTTPS Port 443.
  • Find your second Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner). You’ll also need to add two load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). But this time around, you’ll create the endpoints based on the existing Load-Balanced Sets.


  • On the next screen, give the endpoint a name, confirm and create. Repeat the same steps for the HTTPS endpoint.


  • You should now have load-balance-ready Sitecore CD instances. A scripted equivalent for adding these endpoints is sketched below.
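
If you would rather script the endpoint configuration across both CD instances, the classic service management cmdlets can do it in one pass. This is an indicative sketch only; the service name, VM names and load-balanced set names are placeholder assumptions.

# Indicative sketch only: add load-balanced HTTP/HTTPS endpoints to both Sitecore CD VMs
foreach ($vmName in "SC-CD-01", "SC-CD-02") {
    Get-AzureVM -ServiceName "sitecore-cd" -Name $vmName |
        Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName "sitecore-http" -ProbeProtocol tcp -ProbePort 80 |
        Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 `
            -LBSetName "sitecore-https" -ProbeProtocol tcp -ProbePort 443 |
        Update-AzureVM
}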

 

In the next part of this tutorial, we’ll look at how to install and configure a SQL Server 2012 AlwaysOn Availability Group. Please stay tuned for Part 2.

Moving resources between Azure Resource Groups

The concept of resource groups has been around for a little while, and is adequately supported in the Azure preview portal. Resource groups are logical containers that allow you to group individual resources such as virtual machines, storage accounts, websites and databases so they can be managed together. They give a much clearer picture of which resources belong together, and can also give visibility into consumption and spending in a grouped manner.

However, when resources are created in the classic Azure portal (e.g. virtual machines, storage accounts, etc.) there is no support for resource group management, which results in a new resource group being created for each resource that you create. This can lead to a large number of resource groups that are unclear and tedious to manage. Also, even if you do use resource groups in the Azure preview portal, there is no way to perform housekeeping or management of these resource groups from within the portal.

With the latest Azure PowerShell cmdlets (v0.8.15.1) we now have the ability to move resources between resource groups. You can install the latest version of the PowerShell tools via the Web Platform Installer:


After installation of this particular version we now have the following PowerShell commands available that will assist us in moving resources:

  • New-AzureResourceGroup
  • Move-AzureResource
  • Remove-AzureResourceGroup
  • Get-AzureResource
  • Get-AzureResourceGroup
  • Get-AzureResourceLog
  • Get-AzureResourceGroupLog

Switch-AzureMode AzureResourceManager

After launching a Microsoft Azure PowerShell console we need to switch to Azure Resource Manager mode in order to manage our resource groups:

Switch-AzureMode AzureResourceManager

Get-AzureResourceGroup

Without any parameters this cmdlet gives a complete list of all resource groups that are deployed in your current subscription:

When resources are created in the classic Azure portal they will appear with a new resource group name that corresponds to the name of the object that was created (e.g. virtual machine name, storage account name, website name, etc.).

Note that we have a few default resource groups for storage and SQL, and some specific resource groups corresponding to virtual machines. These were automatically created when I built some virtual machines and an Azure SQL database in the classic Azure portal.

New-AzureResourceGroup

In order to group our existing resources we’re going to create a new resource group. It’s important to note that resource groups reside in a particular region which needs to be specified upon creation:
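
A minimal indicative example, with a placeholder group name and region, looks like this:

# Indicative sketch only: create a new resource group in a chosen region
New-AzureResourceGroup -Name "kloud-demo-rg" -Location "Southeast Asia"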

You’d think that resources can only be moved across resource groups that reside in the same region. However, I’ve successfully moved resources between resource groups that reside in different regions. This doesn’t affect the actual location of the resource so I’m not sure what the exact purpose of specifying a location for a resource group is.

Get-AzureResourceGroup

The Get-AzureResourceGroup cmdlet allows you to view all resources within a group, including their respective types and IDs:

Move-AzureResource

To move resources from the existing resource groups we need to provide the Move-AzureResource cmdlet with a list of resource IDs. The cmdlet accepts the resource ID(s) as pipeline input, so we can use the Get-AzureResource cmdlet to feed it the list of resource IDs. The following script moves a cloud service, virtual machine and storage account (all residing in the same region) to the newly created resource group:
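
The exact script will depend on your resource names; a minimal indicative sketch, using placeholder source group names and the destination group created above, is shown below.

# Indicative sketch only: move resources out of their auto-created 'classic' resource groups
# into the new resource group (group names are placeholders)
"myvm-cloudservice", "mystorageaccount" | ForEach-Object {
    Get-AzureResource -ResourceGroupName $_ |
        Move-AzureResource -DestinationResourceGroupName "kloud-demo-rg"
}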

The Get-AzureResource cmdlet allows you to filter further based on resource type or individual resource name. The Move-AzureResource cmdlet automatically removes the original resource group if no resources remain associated with it after the move.

Unfortunately at the time of writing there was an issue with moving SQL database servers and databases to other resource groups:

Trying to move the SQL server only does not raise any errors, but doesn’t result in the desired target state and leaves the SQL server and database in the original resource group:

The cmdlets Get-AzureResourceLog and Get-AzureResourceGroupLog provide a log of all the operations performed on resources and resource groups, but they couldn’t provide any further information regarding the failure to move resources to the new group.

Now that we have successfully moved our virtual machine and storage account to the new resource group, we can get insight into these resources through the resource group:

Resource Group

Why We Recruit

In a growing consultancy one thing is a constant: the need to identify good, strong talent to join the team to fulfil growing client and customer needs. A consultancy is only as good as the people providing those services to their customers.

Recruiting for an organisation directly, as opposed to recruiting through an agency, has many advantages for the person doing the recruiting (thank goodness says Kloud’s People and Culture team), but it also has advantages for the consultant coming into the business. It means the recruiter who starts the conversation with you really, really knows what it is to work in that business and if you will be right to work there. It results in a better experience both for the organisation you are recruited to and for you, as the consultant.

Having a consultancy that is populated by, what I truly believe to be, the best and the brightest is a tough row to hoe. Carrying a lean bench is a blessing and a curse. When we win new business, we need to have a ready supply of consultants waiting in the wings to join the Kloudie ranks. So if you already have the brightest and best working for you – what is the next step?

The way we like to recruit at Kloud, ideally, is to have an introduction to a new candidate through our existing consultants’ networks. In an ideal world, a consultant will recommend someone who they have worked with and respect, who they know is technically brilliant and who they think will be valued and find value in joining an organisation like Kloud… but it’s good to have a backup plan just in case. As a result, our People and Culture team also reach out through mediums like forums, LinkedIn, blogs, user groups and other networking events to identify new and talented consultants to join our teams.

We recruit to keep fresh ideas in the team, to invite innovators into our ranks, to keep up with the latest technologies and to ensure our customer needs are met.

As the fastest growing Cloud consultancy in Australia – we need to do this a lot.

When we recruit – we take it seriously. We make considered decisions. It isn’t just about being technically brilliant. That’s a given. It’s ensuring a holistic fit for both parties.

How to implement Multi-Factor Authentication in Office 365 via ADFS – Part 3

Originally posted on Lucian’s blog over at clouduccino.com.

In this blog post I’ll go into the configuration and implementation of Active Directory Federation Services v3.0 Multi-Factor Authentication (MFA). This is in line with a recent proof-of-concept project I conducted for a large customer in the FMCG sector. ADFSv3 MFA, coupled with some new functionality that Microsoft is working on in Office 365 and MFA in Office 2013 (which will be covered in part 4 of this series), offers a fantastic solution to organisations wanting to leverage MFA, whether to adhere to company policy or simply to further secure their users accessing Office 365 cloud services.

The good we secure for ourselves is precarious and uncertain until it is secured for all of us and incorporated into our common life

-Jane Addams

Read More

Better Documenting your BizTalk Solutions

The BizTalk Documenter has been available for many years on Codeplex for different BizTalk versions, starting with 2004 all the way to 2013 R2. The Documenter for 2004, 2006, 2006 R2 and 2009 can be found here. Some years later, a version for BizTalk 2010 was created. Last year, MBrimble and Dijkgraaf created a newer version which also supports BizTalk 2013 and 2013 R2. They did a great job; all improvements and fixes are listed here.

As with many things in life, there is always room for further improvement. While using the BizTalk 2013 Documenter, we realised that some changes could be made to better document BizTalk solutions. I downloaded the source code and made some changes of my own, but after sharing with the team what I had done, they invited me to collaborate on the project. I created the BizTalk 2013 Documenter 5.1.7.1 for BizTalk 2013 with some fixes and improvements.

I will share here not only the changes that I made, but also some tips that I believe can help you better document your BizTalk solutions. If you would like to implement them, please make sure you have the latest version of the Documenter.

1. Leverage the BizTalk Documenter.

The first and obvious tip is to leverage the BizTalk Documenter. This tool allows you to create a CHM file describing your BizTalk environment and BizTalk Solutions. The first main section of the generated documentation contains your BizTalk Applications, listing all their artefacts and providing a navigable and very graphical documentation of all artefacts. The second main section describes the platform settings like hosts and adapters. The third main section documents BRE policies and vocabularies. You can expect an output similar to the one shown below.

2. Use embedded documentation as much as possible.

The practice of embedding documentation can be applied to your BizTalk solutions. Using the BizTalk artefact’s description field within the Admin Console allows you to describe the purpose and function of each artefact and keep this always visible to administrators and developers. If you use the BizTalk Deployment Framework you can easily replicate your artefacts’ descriptions across all your environments by exporting the applications’ bindings.

In our projects we wanted to fully use embedded documentation for the BizTalk solutions, but the previous Documenter had some minor bugs: Receive Ports, Schemas and Pipelines did not include the description field as part of the generated documentation. I’ve fixed these by updating some “.xslt” files and a class in the BizTalkOM library, and the output now includes descriptions for all the different artefacts.

3. Include your ESB Itineraries as part of your documentation.

The BizTalk ESB Toolkit provides a lot of functionality that enables and simplifies the implementation of an Enterprise Service Bus, and ESB itineraries are usually a key component of these solutions. When itineraries are part of a solution, they should be included in the documentation so the solution can be fully understood as a whole.

However, itineraries are not documented by the BizTalk Documenter out-of-the-box. A simple and easy way to do it is to create a web page that briefly describes the itinerary and attach it to the documentation. The first step is to create a Word document that includes a picture of the itinerary designer, a description of its purpose and functionality, and the configuration of the resolvers. Then save this document as a Single File Web Page “.mht”.

I’ve introduced a change to the BizTalk Documenter to accept not only “.htm” and “.html” files as additional resources, but “.mht” files as well. The big advantage of this is that documentation which includes images can be created in Word, saved as a “.mht” file and easily added to the BizTalk documentation.

Once the documentation for each itinerary has been created, save the files in a subfolder, which can be called “Itineraries”. I suggest this folder name to keep a clear structure in the generated documentation, but it can be set according to your specific needs. This folder should sit under a “Resources” folder, which will be selected during the creation of the documentation.

The last step is executed when generating the documentation. Under the “Output Options” page, in the “Advanced Documentation Options” section, select the Resources folder that contains the Itineraries folder.

Having done so, the generated documentation will have an “Itineraries” branch under “Additional Notes”, and under this, the list of itineraries. This way, these important components of your BizTalk solutions are now part of your documentation.

4. Document your Maps

We have incorporated the functionality of another Codeplex project, the BizTalk Map Documenter, as part of the BizTalk Documenter. If you want to include documentation of your maps in more detail, the BizTalk mapper “.btm” source files must be available, and the following steps must be executed when generating the BizTalk documentation.

First, copy the BizTalk mapper files of the BizTalk Applications that are to be documented into a folder named “BtmSourceFiles”. Then rename the maps so that they have the full name as it appears in the BizTalk Admin Console, but including the “.btm” extension. Finally, copy the “BtmSourceFiles” folder under your Resources folder to be selected in the Documenter Output Options. The “BtmSourceFiles” folder name and the full names of the maps are required for the Documenter to be able to document the maps in detail.

The following screenshots show the detailed BizTalk map documentation you can expect. It shows direct links between source and target nodes, functoids, and constant values used in the map.

5. Enrich your documentation with other relevant information

In tip #3, I mentioned how you can include your itineraries as part of your documentation. In addition to that, you can enrich your documentation with any Word or Excel document saved as a “.mht” file, or any other HTML file which is relevant to your solution. As an example, you could include the SettingsFileGenerator file of the BizTalk Deployment Framework: just open it in Excel and save it as a “.mht” file. This file must be saved in the corresponding folder under the Resources folder selected when you create the BizTalk documentation. This way, your deployment settings can be included in your documentation.

6. Document only artefacts relevant to your solution.

Previous versions of the BizTalk Documenter allowed you to select the BizTalk applications to be included in the documentation. However, the Platform Settings and Business Rule Engine sections of the generated documentation always included all hosts, adapters, policies and vocabularies. In some projects we needed to document only those hosts, adapters and BRE artefacts relevant to the solutions in scope. To satisfy this need, I added the “Additional Filters” page to the Documenter. On this page you can filter hosts, adapters and BRE artefacts. Filters are applied using a “StartsWith” function, which means that all artefacts whose names start with the filter value will be included. Multiple filters can be defined using a “|” (pipe) delimiter. The following screenshots show the configuration and the output of this new functionality.

7. Put a nice cover on your documentation.

The icing on the cake of good documentation is a nice cover page aligned with your needs. To do this, you need to add a custom “titlePage.htm” file to the root of the Resources folder selected in the Output Options tab. If you are using your own custom images, you need to add them to the same root folder.

The default cover page and a customised one can be seen in the following two images.

The option of customising the cover page has been available since previous versions of the Documenter, but to get the template you had to download the source code. From this link you can view and download just the HTML template, which you can customise according to your needs.

Note that the template makes use of a stylesheet and images which are part of the Documenter. You can use your own by adding them to the same Resources root folder. You can freely customise this HTML according to your preferences and needs. Just make sure you name your file “titlePage.htm”.

I hope you find these tips useful and that the BizTalk Documenter helps you provide comprehensive, quality documentation for your BizTalk solutions. Please feel free to suggest ideas or improvements for the BizTalk Documenter to the team on the Codeplex page.

Hands Free VM Management with Azure Automation and Resource Manager – Part 2

In this two part series, I am looking at how we can leverage Azure Automation and Azure Resource Manager to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we walked through tagging resources using the Azure Resource Manager PowerShell module (recapped briefly in the sketch below)
  • In Part 2 we will setup Azure Automation to schedule a runbook to execute nightly and shutdown tagged resources.
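
As a quick refresher from Part 1, flagging a resource group for automatic shutdown looks roughly like the snippet below. This is an indicative sketch only; the resource group name is a placeholder.

# Indicative sketch only: tag a resource group so its VMs get shut down by the runbook
Switch-AzureMode AzureResourceManager
Set-AzureResourceGroup -Name "my-dev-rg" -Tag @{ Name = "autoShutdown"; Value = $true }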

Azure Automation Runbook

At the time of writing, the tooling support around Azure Automation can be politely described as hybrid. For starters, there is no support for Azure Automation in the preview portal, and the Azure command line tools only support basic automation account and runbook management, leaving the current management portal as the most complete tool for the job.

As I mentioned in Part 1, Azure Automation does not yet support the new Azure Resource Manager PowerShell module out-of-the-box, so we need to import that module ourselves. We will then setup service management credentials that our runbook will use (recall the ARM module doesn’t use certificates anymore, we need to supply user account credentials).

We then create our PowerShell workflow to query for tagged virtual machine resources and ensure they are shut down. Lastly, we set up our schedule and enable the runbook… let’s get cracking!

When we first create an Azure Automation account, the Azure PowerShell module is already imported as an Asset for us (v0.8.11 at the time of writing) as shown below.

Clean Azure Automation Screen.
To import the Azure Resource Manager module we need to zip it up and upload it to the portal using the following process. In Windows Explorer on your PC:

  1. Navigate to the Azure PowerShell modules folder (typically C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager)
  2. Zip the AzureResourceManager sub-folder.

Local folder to zip.

In the Automation pane of the current Azure Portal:

  1. Select an existing Automation account (or create a new one)
  2. Navigate to the Asset tab and click the Import Module button
  3. Browse to the AzureResourceManager.zip file you created above.

ARM Module Import

After the import completes (this usually takes a few minutes) you should see the Azure Resource Manager module imported as an Asset in the portal.

ARM Module Imported

We now need to set up the credentials the runbook will use, and for this we will create a new user in Azure Active Directory (AAD) and add that user as a co-administrator of our subscription (we need to query resource groups and shut down our virtual machines).

In the Azure Active Directory pane:

  1. Add a new user of type New user in your organisation
  2. Enter a meaningful user name to distinguish it as an automation account
  3. Select User as the role
  4. Generate a temporary password, which we’ll need to change later.

Tell us about this user.

Now go to the Settings pane and add the new user as a co-administrator of your subscription:

Add user as co-admin.

Note: Azure generated a temporary password for the new user. Log out and sign in as the new user to get prompted to change the password and confirm the user has service administration permissions on your subscription.

We now need to add our user’s credentials to our Azure Automation account assets.

In the Automation pane:

  1. Select the Automation account we used above
  2. Navigate to the Asset tab and click on the Add Setting button on the bottom toolbar
  3. Select Add Credential
  4. Choose Windows PowerShell Credential from type dropdown
  5. Enter a meaningful name for the asset (e.g. runbook-account)
  6. Enter username and password of the AAD user we created above.

Runbook credentials

With the ARM module imported and credentials setup we can now turn to authoring our runbook. The completed runbook script can be found on Github. Download the script and save it locally.

Open the script in PowerShell ISE and change the Automation settings to match the name you gave to your Credential asset created above and enter your Azure subscription name.

workflow AutoShutdownWorkflow
{
    #$VerbosePreference = "continue"

    # Automation Settings
    $pscreds = Get-AutomationPSCredential -Name "runbook-account"
    $subscriptionName = "[subscription name here]"
    $tagName = "autoShutdown"

    # Authenticate using WAAD credentials
    Add-AzureAccount -Credential $pscreds | Write-Verbose 

    # Set subscription context
    Select-AzureSubscription -SubscriptionName $subscriptionName | Write-Verbose

    Write-Output "Checking for resources with $tagName flag set..."

    # Get virtual machines within tagged resource groups
    $vms = Get-AzureResourceGroup -Tag @{ Name=$tagName; Value=$true } | `
    Get-AzureResource -ResourceType "Microsoft.ClassicCompute/virtualMachines"

    # Shutdown all VMs tagged
    $vms | ForEach-Object {
        Write-Output "Shutting down $($_.Name)..."
        # Gather resource details
        $resource = $_
        # Stop VM
        Get-AzureVM | ? { $_.Name -eq $resource.Name } | Stop-AzureVM -Force
    }

    Write-Output "Completed $tagName check"
}

Walking through the script, the first thing we do is gather the credentials we will use to manage our subscription. We then authenticate using those credentials and select the Azure subscription we want to manage. Next we gather all virtual machine resources in resource groups that have been tagged with autoShutdown.

We then loop through each VM resource and force a shutdown. One thing you may notice about our runbook is that we don’t explicitly “switch” between the Azure module and Azure Resource Management module as we must when running in PowerShell.

This behaviour may change over time as the Automation service is enhanced to support ARM out-of-the-box, but for now the approach appears to work fine… at least on my “cloud” [developer joke].

We should now have our modified runbook script saved locally and ready to be imported into the Azure Automation account we used above. We will use the Azure Service Management cmdlets to create and publish the runbook, create the schedule asset and link it to our runbook.

Copy the following script into a PowerShell ISE session and configure it to match your subscription and location of the workflow you saved above. You may need to refresh your account credentials using Add-AzureAccount if you get an authentication error.

$automationAccountName = "[your account name]"
$runbookName = "autoShutdownWorkflow"
$scriptPath = "c:\temp\AutoShutdownWorkflow.ps1"
$scheduleName = "ShutdownSchedule"

# Create a new runbook
New-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Import the autoShutdown runbook from a file
Set-AzureAutomationRunbookDefinition -AutomationAccountName $automationAccountName -Name $runbookName -Path $scriptPath -Overwrite

# Publish the runbook
Publish-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Create the schedule asset
New-AzureAutomationSchedule -AutomationAccountName $automationAccountName -Name $scheduleName -StartTime $([DateTime]::Today.Date.AddDays(1).AddHours(1)) -DayInterval 1

# Link the schedule to our runbook
Register-AzureAutomationScheduledRunbook -AutomationAccountName $automationAccountName -Name $runbookName -ScheduleName $scheduleName

Switch over to the portal and verify your runbook has been created and published successfully…

Runbook published.

…drilling down into details of the runbook, verify the schedule was linked successfully as well…

Linked Schedule.

To start your runbook (outside of the schedule) navigate to the Author tab and click the Start button on the bottom toolbar. Wait for the runbook to complete and click on the View Job icon to examine the output of the runbook.

Manual Start

Run Output

Note: Create a draft version of your runbook to troubleshoot failing runbooks using the built in testing features. Refer to this link for details on testing your Azure Automation runbooks.

Our schedule will now execute the runbook each night to ensure virtual machine resources tagged with autoShutdown are always shutdown. Navigating to the Dashboard tab of the runbook will display the runbook history.

Runbook Dashboard

Considerations

1. The AzureResourceManager module is not officially supported yet out-of-the-box so a breaking change may come down the release pipeline that will require our workflow to be modified. The switch behaviour will be the most likely candidate. Watch that space!

2. Azure Automation is not available in all Azure regions. At the time of writing it is available in East US, West EU, Japan East and Southeast Asia. However, region affinity isn’t a primary concern as we are merely invoking the service management API where our resources are located. Where we host our automation service is not as important from a performance point of view, but it may factor into organisational security policy constraints.

3. Azure Automation comes in two tiers (Free and Basic). Free provides 500 minutes of job execution per month. The Basic tier charges $0.002 USD a minute for unlimited minutes per month (e.g. 1,000 job execution mins will cost $2). Usage details will be displayed on the Dashboard of your Azure Automation account.

Account Usage

In this two part post we have seen how we can tag resource groups to provide more granular control when managing resource lifecycles and how we can leverage Azure Automation to schedule the shutting down of these tagged resources to operate our infrastructure in Microsoft Azure more efficiently.

Microsoft Azure Pricing Calculator

Originally posted in Lucian’s blog over at www.clouduccino.com.

Whether you’re wanting to deploy a new workload in Microsoft Azure, wanting to extend an existing workload via a hybrid scenario, or, like me, wanting to use Azure outside of work to gain more knowledge and experience, the pay-as-you-go charge model can often intimidate and even deter many from using a cloud service like Azure. From a lab or dev point of view, it is all well and good to dabble in Azure at the various tiers of engagement, but at the end of the day you could be left with a credit card bill a lot larger than expected. Enter the Microsoft Azure Pricing Calculator, where you can estimate your potential usage and cost for any given service.


Read More

Azure’s G Series VMs – Prime Compute Only One Click Away!

I’m going to start this blog post by making one thing clear. My intent in writing this post is light-hearted – I had some spare time on my hands over a lunch break and I wondered what I could do with it. The result was this blog post :).

Ever since Microsoft announced their G Series Virtual Machines for Azure I’ve been looking for a good reason to fire one up and kick the tyres. Today while I was skimming through my Twitter feed I came across a tweet showing the time it took to calculate the trillionth prime number on a 16 vCPU Linux instance running on GCP.

As any good propeller head knows, the first rule of having access to massive raw compute is to put it to use solving mathematical challenges. This may take the form of a pure maths challenge like finding the n-th digit of Pi or an altogether more financially rewarding endeavour such as Bitcoin mining.

I’d tell you the second rule but it’s covered under an NDA.

The 16 vCPU VM used for this run produced the result in 31 minutes and 49 seconds – impressive!

So I thought to myself – I’m interested to see what an Azure G5 Virtual Machine can do with this challenge.

As Microsoft has publicised, this machine is a beast and is currently the biggest VM you can get your hands on in any public cloud. The G5 clocks in at 32 vCPUs and 448GB RAM, with about 6.5TB of SSD-based temporary disk. All for $8.69 (USD / West US) per hour.

Let me type that again: $8.69 per *hour*.

If you’d have suggested even as recently as five years ago that this compute power would be available to anyone with a credit card for that hourly rate, those around you may well have suggested you needed a holiday.

The Setup

Firstly let me just point out: you will need a lot of cores in your Azure subscription if you want to run a batch of these VMs. Check to make sure you have a minimum of 32 cores free. If not, you will need to put in a support request.

Let’s create one of these VMs then. I’m going with a vanilla Ubuntu 14.10 distribution and I am firing it up in West US.

G5 Create VM Dialog
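
For anyone who prefers a script to the portal dialog, roughly the same VM can be created with the classic service management cmdlets. This is an indicative sketch only; the image filter, service name, VM name, credentials and size string are placeholder assumptions.

# Indicative sketch only: create a Standard_G5 Ubuntu VM in West US
$image = Get-AzureVMImage |
    Where-Object { $_.ImageFamily -like "Ubuntu Server 14.10*" } |
    Sort-Object PublishedDate -Descending |
    Select-Object -First 1

New-AzureQuickVM -Linux -ServiceName "g5-prime-test" -Name "g5-prime" `
    -ImageName $image.ImageName -InstanceSize "Standard_G5" `
    -LinuxUser "azureuser" -Password "P@ssw0rd!" -Location "West US"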

The VM provisioning took ~ 15 minutes, after which I was able to SSH into this host and run some stats checks (you know, just to make sure I actually had *all* of that power available!)

A quick visual check in the Portal shows me I have what I expected.

G5 Portal Stats

Then I ran stats on CPU, memory and disk.

less /proc/cpuinfo

Gives me 32 CPUs (last one shown below – we start at base 0 here).

G5 CPU Stats

less /proc/meminfo

Woah Nelly!

G5 Mem Stats

and finally

df -h

Shouldn’t run out of disk here then!

G5 Disk Stats

Now let’s get our prime number generator and start our test. While logged in at a shell prompt we can download the primesieve x64 Linux binaries using wget thus:

wget http://dl.bintray.com/kimwalisch/primesieve/primesieve-5.4-linux-x64-console.tar.gz

and then unpack the tar file:

tar xvf primesieve-5.4-linux-x64-console.tar.gz

which will dump everything into a subfolder called ‘primesieve’ at our current location.

I had to pull down a few extra libraries from Ubuntu to get the generator to work – if you’re serious about trying this out I’m sure you know all about using apt-get install :).

Right. Moment of truth. Copying the command from the tweet I run it. And wait.

As it turns out, not that long.

10 minutes and 22 seconds to be precise.

Prime Sieve Output

Now, this isn’t a direct fruit comparison obviously – I had twice the vCPUs so I would have been surprised to have not been faster. I might have hit Amdahl’s law at some point, but it certainly doesn’t look like it :). The result here is a smidgen under one third the time of the 16 vCPU machine (622 vs 1,909 seconds)!

All this took me less than an hour to execute. If I hadn’t stopped to snapshot some screens I’d probably have shaved a few more minutes off too. As it stands this calculation cost me less than $8 and as I deleted the VM when I was done I have no ongoing cost (and if I need to do it again, I can just create the VM again!)

So there we have it… and I still can’t believe I can create and use a VM this size for less than the cost of three (Sydney) coffees per hour :).

Update: The day after I wrote this post Google launched a 32 vCPU VM in their GCP offering. They re-ran the test and the outcome landed at 17 minutes and 50 seconds. Close, but not quite :).

How to implement Multi-Factor Authentication in Office 365 via ADFS – Part 2

Originally posted on Lucian’s blog: clouduccino.com

Welcome to part 2 of this 4-part series on Multi-Factor Authentication (MFA). In this post I’ll go into some of the different types of MFA available to federated users with Office 365, Azure AD and a hybrid-configuration Active Directory Federation Services (ADFS) v3.0, as well as some use cases for each of these.

Quick recap: multi-factor authentication (MFA) is a means of access control whereby, during the logon process, there is more than one claim required to grant you access to the cloud service, server application or even workstation. As with any information technology infrastructure, there is always more than one way to do something, which is both a positive and a negative. To explain that contradiction: having choice is not always a good thing, in that it can create headaches for infrastructure consultants or system admins, because choosing the correct implementation, in this case MFA, to achieve your end goal can come down to one of two or several options. So let’s compare the options and go through the three main types of MFA, as well as a fourth that’s technically available but not relevant to what this blog series is intending to achieve.

Azure Administrative MFA

The first service offered by the Microsoft Cloud is Azure AD administrative MFA. This MFA service is available to the Azure AD administrator(s), and only the administrators, not general users. It allows your core Azure admin account or accounts to be more secure by enabling MFA at no additional charge to your subscription. This is highly recommended because, as you would imagine, Microsoft have your credit card details, and spinning up tens or hundreds of services could produce a massive bill if the account is left unsecured. Protecting the core admin account(s) at all times is a must and should be part of your Azure administrative strategy, along with things like admin roles and/or RBAC.

Azure AD MFA

The second piece of the Microsoft Cloud MFA puzzle is similar to administrative MFA for Azure. However, where administrative MFA covers the admin accounts and roles in Azure, the Azure AD MFA service is available to all Azure AD users. This is one of the main benefits of upgrading your Azure AD subscription from Basic to Premium. Azure AD Premium is an additional monthly charge but offers additional features and benefits, not just MFA. In addition to the Azure AD Premium upgrade, there is also a fee for the MFA service itself. Offered in two flavours, charged per user or per authentication, securing your cloud resources can incur quite a bit of OPEX if not analysed carefully. Azure AD MFA additionally features an MFA Server (role) that facilitates the MFA requests and can be deployed on an on-premises Windows Server 2012 instance or optionally in Azure IaaS.

Office 365 MFA

As most customers of the Microsoft Cloud utilise Office 365, Microsoft has enabled MFA as an included service in Office 365 SKUs. Similar in functionality to Azure AD MFA, only without the server role, Office 365 MFA will in most cases be an organisation’s first introduction to MFA. It’s enabled with a couple of optional changes on a per-user basis, as it’s globally available on your tenant.

Active Directory Federation Services (ADFS) v3.0 MFA 

Finally, we have an MFA option that most don’t consider when talking about the Microsoft Cloud, but it can be quite powerful, especially in a certain use case (which I’ll get to later). With the introduction of ADFS v3, MFA can be configured per user, per security group, per inbound or outbound connection, and per internally or externally connected user; meaning lots of options and lots of configuration to meet various requirements. This type of MFA is what the remaining two parts of this post series discuss, as from a recent engagement I’ve found that ADFSv3 MFA is a great solution when used in conjunction with some in-preview Microsoft services for Office 365 / Azure AD (again, I’ll explain shortly).
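
To give a flavour of how granular this can be, the sketch below shows one common pattern: an additional authentication rule, set with the AD FS PowerShell cmdlets, that triggers MFA only for requests arriving from outside the corporate network. Treat it as an indicative example rather than the exact policy used in the engagement described here.

# Indicative sketch only: require MFA for externally connected users (run on the ADFS server)
Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules @'
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", Value == "false"]
 => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod",
          Value = "http://schemas.microsoft.com/claims/multipleauthn");
'@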

Use cases: which MFA suits what purpose

Below is a brief outline of some of the possible use cases for the different types of MFA, in a little more detail than the overview above.

Azure Admin MFA

The use case of MFA for your Azure administrator roles is more or less a given. It’s a best practice that should be adopted by anyone that has an Azure subscription. Enabling it simply provides additional security for your highest-level accounts, which as administrators of the subscription have the ability not only to spin services up and down but also to add and remove administrator accounts.

Azure AD MFA

Azure AD MFA extends the security of the Azure subscription to any users added and enabled in Azure AD Premium. When dealing with Azure AD Premium, more often than not the customer is a large enterprise. Security and data integrity at enterprise scale is of the utmost concern, with competitors wanting that information, users accessing services with passwords like “password”, and a whole raft of other vulnerabilities.

Enabling MFA on Azure AD Premium enforces MFA for any service residing on or authenticating to Azure ADP. Office 365’s backend AD is Azure AD, so enabling this feature extends MFA not only to accessing services in Azure, but to Office 365 as well. As you can see this is quite powerful.

Implementation is likely ideal for an organisation wanting to secure all services in the Microsoft Cloud with MFA, with granular administrative policy and governance. As Azure AD MFA utilises a server role, there are many options and settings for astute administrators to ensure high levels of security.

Office 365 MFA

What I’m tipping to be the most commonly implemented form of MFA is simply enabling MFA on your Office 365 tenant and user accounts. Following on from the in-preview, ADAL-enabled Office 2013 clients, Office 365 MFA is one of the quickest to implement and lowest administrative overhead MFA solutions available in the Microsoft Cloud. The use case is a simple one, and by the end of 2015 most if not all Office 365 customers will likely implement it, as the days of passwords keeping out the bad guys are long gone.
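
For reference, enabling MFA for an individual Office 365 user can also be scripted with the MSOnline (Azure AD) PowerShell module. The snippet below is an indicative sketch; the user principal name is a placeholder and a full rollout would loop over users rather than target just one.

# Indicative sketch only: enable Office 365 MFA for a single user via the MSOnline module
Import-Module MSOnline
Connect-MsolService   # prompts for tenant administrator credentials

$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"

Set-MsolUser -UserPrincipalName "jane.citizen@contoso.com" -StrongAuthenticationRequirements @($mfa)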

Active Directory Federation Services (ADFS) v3.0 MFA

Finally, the dark horse, the underdog and likely the most overlooked means of MFA is ADFSv3 MFA. While it is still a new feature in ADFS v3, many administrators have overlooked its potential, as other pieces required to fully utilise it have not been made available by Microsoft.

Come November 2014, when updated authentication enabling MFA was introduced into Office 2013, many an infrastructure consultant, and Kloud (where I work), thought of interesting ways this could come into play: interesting solutions that can leverage the MFA types. One such use case is with ADFSv3 MFA (this is what I was referring to when I mentioned “I’ll explain shortly”).

During a recent engagement for a large customer, their “MFA practice” consisted of forcing users who access any cloud and/or corporate resources to connect to the corporate network via VPN. While this doesn’t align with the core principles of MFA, it does provide a level of additional security, while also providing additional headaches and frustrations for users.

What I’ll discuss in part 3 is how to leverage ADFSv3 MFA, and specifically X.509 certificates issued from an internal enterprise certificate authority, to provide users with MFA capabilities. There are a number of moving parts and requirements, but the end result is a seamless and functional MFA solution that doesn’t impact users as much as, say, Office 365 or Azure AD Premium MFA, where a physical act of authentication is required: a text message, phone call, etc.

MFA implementation Prologue

So part 3 is a deep dive, as much as possible, into how to use ADFSv3 MFA with the Office 2013 ADAL updates. However, part of what I’ll discuss in more detail (the ADAL piece) is restricted by an NDA from Microsoft. I’ll explore in as much detail as possible how to leverage this fantastic solution for MFA that gets little fanfare but overall can provide a streamlined (I won’t say seamless) experience for users completing multi-factor authentication.

Check out the original article at Lucian’s blog here: clouduccino.com

How to implement Multi-Factor Authentication in Office 365 via ADFS – Part 1

Originally posted on Lucian’s blog: clouduccino.com

This is part 1 of a 4-part series exploring Multi-Factor Authentication (MFA). Recently I’ve been working with a client on a project to implement MFA for Office 365 services, as company policy mandates at least two factors of authentication (2FA) for accessing any corporate resources.

In part one I’ll put together my points of view around what MFA is and why it’s an important topic for organizations, especially in 2015. In part two I’ll explain the main MFA types around Office 365 and Azure and their use cases, as each has different features and each can impact different aspects of implementations. In part three I’ll explain how to implement MFA in organizations that utilise ADFS v3.0 integration with Office 365, and finally in part four I’ll provide an in-depth how-to for the latest, currently in-preview feature of Microsoft Office 2013: MFA utilising the Active Directory Authentication Library (ADAL), which was only made available for preview in Q4 of 2014. This is exciting, cutting-edge stuff that will no doubt be a standard configuration item for existing and new Office 365 customers by the end of 2015.

Let’s kick things off with the background of MFA and set the foundations for the posts to come…

What is multi-factor authentication (MFA)?

Multi-factor authentication (MFA) is a means of access control whereby during the logon process, there is more than one claim to grant you access to the cloud service, server application or  even workstation.

What this means in a nutshell is that when you log on to your Office 365 Outlook Web Mail client, along with your username and password combination (something you know), you are also required to enter an additional means of authentication, such as a one-time token (OTT) (something you have).

To expand on that further, the most basic MFA concept can be summed up in three key factors: something you know, something you have and, finally, something you are. This philosophy of authentication makes it virtually impossible to beat the system when all three factors are enabled.

 Why does it matter to my organization?

More and more organizations are moving to the cloud. Key services, from email through to the line-of-business applications used by accounting, are ending up in the cloud. With that comes some risk: of data being compromised and, ever more likely, of user credentials being compromised. Solutions like CloudLink SecureVSA, available in Microsoft Azure, keep data at rest encrypted and secure, meaning not just any user can access raw data and compromise corporate intellectual property. The most common way to compromise applications, cloud services or systems, as I mentioned earlier, is through user credentials. Your username and password have kept your work and your access to systems and applications secure, for the most part, since the invention of computer systems.

Fast forward to modern-day, cloud-centric IT: a world where an increasing proportion of the workforce uses any number of cloud services or cloud applications. For the security conscious, and those in positions to make the decisions that keep the rest of the organisation secure, this was a headache. I say was because, with an ever-growing number of cloud users, cloud providers are coming up with better ways to secure their services, and in some ways Microsoft is leading the charge. Of course other providers like Amazon Web Services (AWS) have MFA services of their own, but the potential of MFA use in both Microsoft’s Office 365 and soon Office 2013 has a much greater impact.

Why is 2015 going to be a big year for MFA?

Through various reading, Googling and the background I have on the subject matter, it seems to me that, coming off some strong development work in 2013 and 2014, 2015 will be the year that MFA goes mainstream: beyond just securing your bank account (through SMS one-time passcodes (OTP)) and into virtually all enterprise and even standard cloud services, as well as thick applications like Microsoft Office.

By far the most popular enterprise and business operating system is Microsoft Windows. Then the most popular workforce application suites are found in the Microsoft Office product stack. Why are these relevant in the cloud world and in the year 2015? Well, put simply, both are from Microsoft and both will feature strong MFA functionality with integration to key cloud services, again from Microsoft, Azure and Office 365.

Microsoft Windows 10

Microsoft announced that Windows 10 will tie in with Azure Active Directory (AAD), allowing organizations to be born in the cloud. This is big news, especially in relation to MFA, where Windows 10 will be the first operating system to feature MFA integration. This point isn’t much of a talking point in this series, but it is definitely one of the reasons why 2015 is going to be a big year for MFA. It’s something I’m definitely going to get stuck into once Windows 10 becomes generally available.

Microsoft Office 2013

Announced by Microsoft almost a year ago but not able to be publicly used or tested until late 2014, the Microsoft Office 2013 suite will feature MFA integration very soon. Currently available in preview through the connect.microsoft.com testing service, MFA in Office 2013 will likely be many organizations’ first step towards widespread MFA adoption.

The MFA integration of the Office 2013 suite centres around some key applications that will talk to either ADFS or Azure AD/Office 365: Microsoft Outlook, OneDrive for Business and of course Lync. The process itself enables the Office 2013 clients to engage in browser-based authentication (also known as passive authentication). Again, put simply, an authentication window will pop up during the logon process requiring the user to enter the additional authentication mechanisms that you as an administrator set.


Outlook 2013 with ADAL MFA enabled and highlighted

Check out the original article at Lucian’s blog here: clouduccino.com