How to make a copy of a virtual machine running Windows in Azure

I was recently called upon to help a customer create copies of some of their Windows virtual machines. The idea was to be able to quickly deploy copies of these hosts at any time, as opposed to using a system image or a point-in-time copy.

The following PowerShell will therefore allow you to make a copy or clone of a Windows virtual machine using a copy of its disks in Azure Resource Manager mode.

Create a new virtual machine from a copy of the disks of another

Having finalized the configuration of the source virtual machine the steps required are as follows.

  1. Stop the source virtual machine, then using Storage Explorer copy its disks to a new location and rename them in line with the target name of the new virtual machine.

  2. Run the following in PowerShell making the required configuration changes.

Login-AzureRmAccount
Get-AzureRmSubscription -SubscriptionName "<subscription-name>" | Select-AzureRmSubscription

$location = (get-azurermlocation | out-gridview -passthru).location
$rgName = "<resource-group>"
$vmName = "<vm-name>"
$nicname = "<nic-name>"
$subnetID = "<subnetID>"
$datadisksize = "<sizeinGB>"
$vmsize = (Get-AzureRmVMSize -Location $location | Out-GridView -PassThru).Name
$osDiskUri = "https://<storage-account>.blob.core.windows.net/vhds/<os-disk-name.vhd>"
$dataDiskUri = "https://<storage-account>.blob.core.windows.net/vhds/<data-disk-name.vhd>"

Notes: The URIs above belong to the disk copies, not the original disks, and $subnetID refers to the subnet's resource ID.

$nic = New-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $rgName -Location $location -SubnetId $subnetID
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $vmsize
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
$osDiskName = $vmName + "os-disk"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption attach -Windows
$dataDiskName = $vmName + "data-disk"
$vm = Add-AzureRmVMDataDisk -VM $vm -Name $dataDiskName -VhdUri $dataDiskUri -Lun 0 -Caching 'none' -DiskSizeInGB $datadisksize -CreateOption attach
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

List virtual machines in a resource group.

$vmList = Get-AzureRmVM -ResourceGroupName $rgName
$vmList.Name

Having run the above, log on to the new host to make any required changes.
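The manual disk copy in step 1 can also be scripted. Below is a hedged sketch using the Azure Storage cmdlets (account names, keys and blob names are placeholders):

```powershell
# Contexts for the source and destination storage accounts
$srcCtx = New-AzureStorageContext -StorageAccountName "<source-account>" -StorageAccountKey "<source-key>"
$dstCtx = New-AzureStorageContext -StorageAccountName "<dest-account>" -StorageAccountKey "<dest-key>"

# Start an asynchronous server-side copy of the OS disk blob
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "<os-disk-name.vhd>" -Context $srcCtx `
    -DestContainer "vhds" -DestBlob "<new-os-disk-name.vhd>" -DestContext $dstCtx

# Wait for the copy to complete before attaching the disk
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "<new-os-disk-name.vhd>" -Context $dstCtx -WaitForComplete
```

Remember to stop the source virtual machine before starting the copy, as described above.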

Automate ADFS Farm Installation and Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Introduction

In this multi-part blog, I will be showing how to automatically install and configure a new ADFS Farm. We will accomplish this using Azure Resource Manager templates, Desired State Configuration scripts and Custom Script Extensions.

Overview

We will use Azure Resource Manager to create a virtual machine that will become our first ADFS Server. We will then use a desired state configuration script to join the virtual machine to our Active Directory domain and to install the ADFS role. Finally, we will use a Custom Script Extension to install our first ADFS Farm.

Install ADFS Role

We will be using the xActiveDirectory and xPendingReboot experimental DSC modules.

Download these from

https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading, unzip the files and place the contents in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have changed your system root folder, this will be C:\Program Files\WindowsPowerShell\Modules).

Open Windows PowerShell ISE and let's create a DSC script that will join our virtual machine to the domain and install the ADFS role.

Copy the following into a new Windows PowerShell ISE file and save it with a filename of your choice (I saved mine as InstallADFS.ps1).
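The script body is not reproduced inline here; a minimal sketch of the opening of such a DSC configuration (the parameter layout and retry defaults are assumptions) could look like:

```powershell
Configuration InstallADFS
{
    param (
        # Hostname of the VM that will become the first ADFS server
        [Parameter(Mandatory)]
        [String]$MachineName,

        # Active Directory domain the VM will be joined to
        [Parameter(Mandatory)]
        [String]$DomainName,

        # Account with permissions to join the VM to the domain
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$AdminCreds
    )

    # Values used when checking whether the domain is available (assumed defaults)
    $RetryCount = 20
    $RetryIntervalSec = 30

    # ... DSC resources follow ...
}
```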

In the above, we declare some mandatory parameters and some variables that will be used within the script.

$MachineName is the hostname of the virtual machine that will become the first ADFS server

$DomainName is the name of the domain where the virtual machine will be joined

$AdminCreds contains the username and password for an account that has permissions to join the virtual machine to the domain

$RetryCount and $RetryIntervalSec hold values that will be used to check whether the domain is available

We need to import the experimental DSC modules that we downloaded. To do this, add the following line to the DSC script

Import-DscResource -ModuleName xActiveDirectory, xPendingReboot

Next, we need to convert the supplied $AdminCreds into a domain\username format. This is accomplished by the following lines (the converted value is held in $DomainCreds).
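The lines themselves are not shown in this excerpt; the standard pattern used in DSC scripts of this kind is, as a sketch:

```powershell
# Convert AdminCreds into domain\username format; the result is held in $DomainCreds
[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential `
    ("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)
```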

Next, we need to tell DSC that the command needs to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

We need to tell the LocalConfigurationManager that it should reboot the server if needed, continue with the configuration after a reboot, and apply the settings only once. (DSC can apply a setting and then constantly monitor it to check that it has not been changed; if it has, DSC can re-apply it. In our case we will not do this, we will apply the settings just once.)
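A minimal sketch of these LCM settings inside the Node block:

```powershell
LocalConfigurationManager
{
    # Reboot if a resource requires it, and resume the configuration afterwards
    RebootNodeIfNeeded = $true
    ActionAfterReboot  = 'ContinueConfiguration'
    # Apply the settings once only; do not monitor for drift
    ConfigurationMode  = 'ApplyOnly'
}
```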

Next, we need to check whether the Active Directory domain is ready. For this, we will use the xWaitForADDomain resource from the xActiveDirectory experimental DSC module.
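As a sketch (the instance name DscForestWait is an assumption):

```powershell
xWaitForADDomain DscForestWait
{
    DomainName           = $DomainName
    DomainUserCredential = $DomainCreds
    RetryCount           = $RetryCount
    RetryIntervalSec     = $RetryIntervalSec
}
```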

Once we know that the Active Directory domain is available, we can go ahead and join the virtual machine to the domain.

The JoinDomain resource depends on xWaitForADDomain; if xWaitForADDomain fails, JoinDomain will not run.
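The domain-join resource itself is not reproduced in this excerpt. One common approach (an assumption here, as the module is not listed above) uses the xComputer resource from the xComputerManagement DSC module:

```powershell
xComputer JoinDomain
{
    Name       = $MachineName
    DomainName = $DomainName
    Credential = $DomainCreds
    # Only run once the domain has been confirmed available
    DependsOn  = "[xWaitForADDomain]DscForestWait"
}
```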

Once the virtual machine has been added to the domain, it needs to be restarted. We will use the xPendingReboot resource from the xPendingReboot experimental DSC module to accomplish this.
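A sketch of the reboot resource (instance and dependency names are assumptions):

```powershell
xPendingReboot RebootAfterDomainJoin
{
    Name      = "RebootAfterDomainJoin"
    # Run after the domain join has completed
    DependsOn = "[xComputer]JoinDomain"
}
```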

Next, we will install the ADFS role on the virtual machine.
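The role installation uses the built-in WindowsFeature DSC resource with the ADFS-Federation feature; as a sketch:

```powershell
WindowsFeature InstallADFSRole
{
    Ensure = "Present"
    Name   = "ADFS-Federation"
}
```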

Our script has now successfully added the virtual machine to the domain and installed the ADFS role on it. Next, create a zip file containing InstallADFS.ps1 and upload it to a location that Azure Resource Manager can access (I would recommend uploading to GitHub). Include the xActiveDirectory and xPendingReboot experimental DSC module directories in the zip file as well. Also add a folder called Certificates inside the zip file and put the ADFS certificate and the encrypted password files (discussed in the next section) inside it.

In the next section, we will configure the ADFS Farm.

The full InstallADFS.ps1 DSC script is pasted below

Create ADFS Farm

Once the ADFS role has been installed, we will use Custom Script Extensions (CSE) to create the ADFS farm.

One of the requirements to configure ADFS is a signed certificate. I used a 90 day trial certificate from Comodo.

There is a trick that I am using to make my certificate available on the virtual machine. If you bootstrap a DSC script to your virtual machine in an Azure Resource Manager template, the script along with all the non out-of-box DSC modules has to be packaged into a zip file and uploaded to a location that ARM can access. ARM then downloads the zip file, unzips it, and places all directories inside it into $env:ProgramFiles\WindowsPowerShell\Modules (C:\Program Files\WindowsPowerShell\Modules), on the assumption that the directories are PowerShell modules.

I am using this feature to sneak my certificate on to the virtual machine. I create a folder called Certificates inside the zip file containing the DSC script and put the certificate inside it. Also, I am not too fond of passing plain passwords from my ARM template to the CSE, so I created two files, one to hold the encrypted password for the domain administrator account and the other to contain the encrypted password of the adfs service account. These two files are named adminpass.key and adfspass.key and will be placed in the same Certificates folder within the zip file.

I used the following to generate the encrypted password files
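The generation commands are not shown in this excerpt; a sketch using ConvertFrom-SecureString (the plain-text passwords below are placeholders):

```powershell
# Key used to encrypt/decrypt the secure strings (defined below in this post)
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

# Encrypt the domain administrator password into adminpass.key
ConvertTo-SecureString "AdminPlainTextPassword" -AsPlainText -Force |
    ConvertFrom-SecureString -Key $Key | Out-File "adminpass.key"

# Encrypt the ADFS service account password into adfspass.key
ConvertTo-SecureString "ADFSPlainTextPassword" -AsPlainText -Force |
    ConvertFrom-SecureString -Key $Key | Out-File "adfspass.key"
```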

AdminPlainTextPassword and ADFSPlainTextPassword are the plain text passwords that will be encrypted.

$Key is used to convert the secure string into an encrypted standard string. Valid key lengths are 16, 24 and 32 bytes.

For this blog, we will use

$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Open Windows PowerShell ISE and paste the following (save the file with a name of your choice. I saved mine as ConfigureADFS.ps1)

param (
 $DomainName,
 $DomainAdminUsername,
 $AdfsSvcUsername
)

These are the parameters that will be passed to the CSE

$DomainName is the name of the Active Directory domain
$DomainAdminUsername is the username of the domain administrator account
$AdfsSvcUsername is the username of the ADFS service account

Next, we will define the value of the key that was used to encrypt the passwords, and the location where the certificate and the encrypted password files will be placed.

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Now, we have to read the encrypted passwords from the adminpass.key and adfspass.key files and then build credentials in a domain\username format.
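A sketch of this step (the credential variable names are assumptions):

```powershell
# Read the encrypted passwords and convert them back into SecureStrings
$adminPass = Get-Content ($localpath + "adminpass.key") | ConvertTo-SecureString -Key $Key
$adfsPass  = Get-Content ($localpath + "adfspass.key")  | ConvertTo-SecureString -Key $Key

# Build credentials in domain\username format
$DomainAdminCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$DomainAdminUsername", $adminPass)
$AdfsSvcCreds     = New-Object System.Management.Automation.PSCredential ("$DomainName\$AdfsSvcUsername", $adfsPass)
```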

Next, we will import the certificate into the local computer certificate store. We will mark the certificate exportable and set the password to the same as the domain administrator password.
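A sketch of the import (the certificate filename and the variable holding the admin password are assumptions):

```powershell
# Import the signed certificate, marked exportable, protected with the admin password
Import-PfxCertificate -FilePath ($localpath + "adfscert.pfx") `
    -CertStoreLocation "Cert:\LocalMachine\My" `
    -Exportable -Password $adminPass

# Hold the thumbprint of the imported certificate
$cert = (Get-ChildItem -Path "Cert:\LocalMachine\My" |
    Where-Object { $_.Subject -like "*fs.adfsfarm.com*" }).Thumbprint
```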

In the above, after the certificate is imported, $cert is used to hold the certificate thumbprint.

Next, we will configure the ADFS Farm

The ADFS Federation Service display name is set to "Active Directory Federation Service" and the Federation Service name is set to fs.adfsfarm.com.
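The farm creation itself uses the Install-AdfsFarm cmdlet; a hedged sketch (the credential and thumbprint variable names are assumptions):

```powershell
# Install the first node of the ADFS farm
Install-AdfsFarm `
    -CertificateThumbprint $cert `
    -FederationServiceName "fs.adfsfarm.com" `
    -FederationServiceDisplayName "Active Directory Federation Service" `
    -ServiceAccountCredential $AdfsSvcCreds `
    -OverwriteConfiguration
```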

Upload the CSE to a location that Azure Resource Manager can access (I uploaded my script to GitHub)

The full ConfigureADFS.ps1 CSE is shown below

Azure Resource Manager Template Bootstrapping

Now that the DSC and CSE scripts have been created, we need to add them to our ARM template so that they run straight after the virtual machine is provisioned.

To add the DSC script, create a DSC extension and link it to the DSC Package that was created to install ADFS. Below is an example of what can be used
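The extension JSON itself is not reproduced in this excerpt; a hedged sketch of what such a DSC extension resource can look like (parameter and variable names, for example ADFS01VMName and InstallADFSPackageURL, are assumptions):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('ADFS01VMName'),'/InstallADFS')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('ADFS01VMName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "ModulesUrl": "[variables('InstallADFSPackageURL')]",
      "ConfigurationFunction": "InstallADFS.ps1\\InstallADFS",
      "Properties": {
        "MachineName": "[parameters('ADFS01VMName')]",
        "DomainName": "[parameters('DomainName')]",
        "AdminCreds": {
          "UserName": "[parameters('adminUsername')]",
          "Password": "PrivateSettingsRef:AdminPassword"
        }
      }
    },
    "protectedSettings": {
      "Items": {
        "AdminPassword": "[parameters('adminPassword')]"
      }
    }
  }
}
```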

The extension will run after the ADFS virtual machine has been successfully created (referred to as ADFS01VMName)

The MachineName, DomainName and domain administrator credentials are passed to the DSC extension.

Below are the variables that have been used in the json file for the DSC extension (I have listed my GitHub repository location)
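For illustration, the variables might look like the following sketch (the zip filename InstallADFS.zip is an assumption; the repository URL is the one shown for the CSE variables later in this post):

```json
"repoLocation": "https://raw.githubusercontent.com/nivleshc/arm/master/",
"InstallADFSPackageURL": "[concat(variables('repoLocation'),'InstallADFS.zip')]",
"InstallADFSConfigurationFunction": "InstallADFS.ps1\\InstallADFS"
```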

Next, we have to create a Custom Script Extension to link to the CSE for configuring ADFS. Below is an example that can be used
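A hedged sketch of such a Custom Script Extension resource (names, versions and parameters are assumptions):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('ADFS01VMName'),'/ConfigureADFS')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('ADFS01VMName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', parameters('ADFS01VMName'), '/extensions/InstallADFS')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.8",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "[variables('ConfigureADFSScriptUrl')]" ],
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ConfigureADFS.ps1 ', parameters('DomainName'), ' ', parameters('adminUsername'), ' ', parameters('adfsSvcUsername'))]"
    }
  }
}
```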

The CSE depends on the ADFS virtual machine having been successfully provisioned and on the DSC extension that installs the ADFS role having completed successfully.

The DomainName, Domain Administrator Username and the ADFS Service Username are passed to the CSE script

The following contains a list of the variables being used by the CSE (the example below shows my GitHub repository location)

"repoLocation": "https://raw.githubusercontent.com/nivleshc/arm/master/",
"ConfigureADFSScriptUrl": "[concat(variables('repoLocation'),'ConfigureADFS.ps1')]",

That’s it Folks! You now have an ARM Template that can be used to automatically install the ADFS role and then configure a new ADFS Farm.

In my next blog, we will explore how to add another node to the ADFS Farm and we will also look at how we can automatically create a Web Application Proxy server for our ADFS Farm.

Azure Deployment Models And How To Migrate From ASM to ARM

This is a post about the two deployment models currently available in Azure, Service Management (ASM) and Resource Manager (ARM), and how to migrate from one to the other if necessary.

About the Azure Service Management deployment model

The ASM model, also known as version 1 or Classic mode, started out as a web interface and a backend API for the PaaS services Azure launched with.

Features

  1. ASM deployments are based on an XML schema.
  2. ASM operations are based at the cloud service level.
  3. Cloud services are the logical containers for IaaS VMs and PaaS services.
  4. ASM is managed through the CLI, the old and new portals (with varying feature coverage) and PowerShell.

In ASM mode the cloud service acts as a container for VMs and PaaS services.

About the Resource Manager deployment model

The ARM model consists of a new web interface and API for resource management in Azure which came out of preview in 2016 and introduced several new features.

Features

  1. ARM deployments are based on a JSON schema.
  2. Templates, which can be imported and exported, define deployments.
  3. RBAC support.
  4. Resources can be tagged for logical access and grouping.
  5. Resource groups are the logical containers for all resources.
  6. ARM is managed through PowerShell (PS), the CLI and new portal only.

In ARM mode the resource group acts as a container for all resources.

Why use Service Management mode?

  1. Support for all features that are not exclusive to ARM mode.

Limitations

  1. No new features will be made available in this mode.
  2. Cannot process operations in parallel (e.g. vm start, vm create, etc.).
  3. ASM needs a VPN or ExpressRoute connection to communicate with ARM.
  4. In Classic mode templates cannot be used to configure resources.

Users should therefore only use Service Management mode if they have legacy environments to manage which include features exclusive to it.

Why use Resource Manager mode?

  1. Support for all features that are not exclusive to ASM mode.
  2. Can process multiple operations in parallel.
  3. JSON templates are a practical way of managing resources.
  4. RBAC, resource groups and tags!

Resource manager mode is the recommended deployment model for all Azure environments going forward.

Means of migration

The following tools and software are available to help with migrating environments.

ASM2ARM custom PowerShell script module.
Platform supported migrations using PowerShell or the Azure CLI.
The MigAz tool.
Azure Site Recovery.

About ASM2ARM

ASM2ARM is a custom PowerShell script module for migrating a single virtual machine from the Azure Service Management stack to Resource Manager. It makes two new cmdlets available.

Cmdlets: Add-AzureSMVmToRM & New-AzureSmToRMDeployment

Code samples:

$vm = Get-AzureVm -ServiceName acloudservice -Name atestvm
Add-AzureSMVmToRM -VM $vm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Using the service name and VM name parameters directly.

Add-AzureSMVmToRM -ServiceName acloudservice -Name atestvm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Features

  1. Copy the VM’s disks to an ARM storage account or create a new one.
  2. Create a destination vNet and subnet for migrated VMs.
  3. Create ARM JSON templates and PS script for deployment of resources.
  4. Create an availability set if one exists at source.
  5. Create a public IP if the VM is open to the internet.
  6. Create network security groups for the source VM's public endpoints.

Limitations

  1. Cannot migrate running VMs.
  2. Cannot migrate multiple VMs.
  3. Cannot migrate a whole ASM network.
  4. Cannot create load balanced VMs.

For more information: https://github.com/fullscale180/asm2arm

About platform supported migrations using PowerShell

Consists of standard PowerShell cmdlets from Microsoft for migrating resources to ARM.

Features

  1. Migration of virtual machines not in a virtual network (disruptive!).
  2. Migration of virtual machines in a virtual network (non-disruptive!).
  3. Storage accounts are cross compatible but can also be migrated.
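As a sketch, a platform supported virtual network migration follows a validate/prepare/commit flow (the virtual network name is a placeholder):

```powershell
# Register the migration resource provider (one-time step)
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

# Validate, prepare, then commit the migration of a virtual network and the VMs in it
Move-AzureVirtualNetwork -Validate -VirtualNetworkName "myVNet"
Move-AzureVirtualNetwork -Prepare  -VirtualNetworkName "myVNet"
Move-AzureVirtualNetwork -Commit   -VirtualNetworkName "myVNet"
```

The prepare step can be rolled back with `Move-AzureVirtualNetwork -Abort` if the prepared state is not what you expect.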

Limitations

  1. More than one availability set in a single cloud service.
  2. One or more availability sets plus VMs that are not in an availability set, in the same cloud service.

About platform supported migrations using the Azure CLI

Consists of standard Azure CLI commands from Microsoft for migrating resources to ARM.

Features & Limitations

See above.

A video on the subject of platform supported migrations using PowerShell or the CLI.

About MigAz

MigAz comes with an executable which outputs reference JSON files, and provides a PowerShell script capable of migrating ASM resources and blob files to ARM mode environments.

Features

  1. MigAz exports JSON templates from REST API calls for migration.
  2. New resources are created in, and disk blobs copied to, their destination; all original resources are left intact.
  3. Exported JSON can (and should) be reviewed and customized before use.
  4. Export creates all new resources in a single resource group.
  5. Supports using any subscription target, same or different.
  6. With JSON being at the core of ARM, templates can be used for DevOps.
  7. Can be used to clone existing environments or create new ones for testing.

A screenshot of the MigAz frontend GUI.

About Azure Site Recovery (ASR)

ASR is a backup, continuity and recovery solution set which can also be used for migrating resources to ARM.

Features

  1. Cold backup and replication of both on-premises and off-premises virtual machines.
  2. Cross compatible between ASM and ARM deployment models.
  3. ASM virtual machines can be restored into ARM environments.


Pros and cons

ASM2ARM: Requires downtime, but can be scripted, which has potential; however, this approach only allows the migration of one VM at a time, which is a sizable limitation.

Azure PowerShell and CLI: This approach is well rounded; it can be scripted and allows for rollbacks. The supported migration scenarios come with some caveats, however, and you cannot migrate a whole vNet into an existing network.

MigAz Tool: Exports JSON of ASM resources for customization and uses a PowerShell script for deployment to ARM. Downtime is required whether going to the same address space or cutting over to new services, but this is easily your best and most comprehensive option at this time.

Site Recovery: Possibly the easiest way of migrating resources and managing the overall process, but it requires a lot of work to set up. Downtime is required in all cases.

Access Azure linked templates from a private repository

I was recently tasked with proposing a way to use linked templates, in particular how to refer to templates stored in a private repository. The Azure Resource Manager (ARM) engine accepts a URI to access and deploy linked templates, so the URI must be accessible to ARM. If you store your templates in a public repository, ARM can access them fine; but what if you use a private repository? This post will show you how.

In this example, I use Bitbucket – a Git-based source control product by Atlassian.  The free version (hosted in the cloud) allows you to have up to 5 private repositories.  I will describe the process for obtaining a Bitbucket OAuth 2.0 token using PowerShell and how we can use the access token in Azure linked templates.

Some Basics

If you store your code in a private repository, you can access the stored code after logging into Bitbucket. Alternatively, you can use an access token instead of user credentials to log in. Access tokens allow apps or programs to access private repositories; they are therefore secret in nature, just like passwords or cash.

Bitbucket access tokens expire after one hour. Once a token expires, you can either request a new access token or renew it using a refresh token. Bitbucket supports all four of the RFC 6749 grant types for obtaining an access/bearer token; in this example, we will use the password grant. Note that this method will not work when two-factor authentication is enabled.

Getting into actions

First things first: you must have a private repository in Bitbucket. To obtain access tokens, we will create a consumer in Bitbucket, which will generate a consumer key and a secret. This key/secret pair is used to grant access tokens. See the Bitbucket documentation for more detailed instructions.

Before I describe the process of obtaining an access token in PowerShell, let's examine how we can get a Bitbucket access token using the curl command.


curl -X POST -u "client_id:secret" https://bitbucket.org/site/oauth2/access_token \
     -d grant_type=password -d username={username} -d password={password}

  • -X POST = specifies a POST method over HTTP, the request method that is passed to the Bitbucket server.
  • -u "client_id:secret" = specifies the username and password used for basic authentication. Note that this does not refer to the Bitbucket user login details but rather to the consumer key/secret pair.
  • -d = specifies the body of the HTTP request; in this case the grant_type to be used (password grant) plus the Bitbucket login details, username and password.

To replicate the same command in PowerShell, we can use the Invoke-RestMethod cmdlet. This cmdlet sends HTTP/HTTPS requests to REST-compliant endpoints and returns structured data.


# Construct the Bitbucket request
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $bbCredentials.bbConsumerKey, $bbCredentials.bbConsumersecret)))
$data = @{
    grant_type = 'password'
    username   = $bbCredentials.bbUsername
    password   = $bbCredentials.bbPassword
}
# Perform the request to the Bitbucket OAuth 2.0 endpoint
$tokens = Invoke-RestMethod -Uri $accessCodeURL -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)} -Method Post -Body $data

The $base64AuthInfo variable holds a base64-encoded string for HTTP basic authentication, and the HTTP request body is encapsulated in the $data variable. Both variables are used to construct the Bitbucket OAuth 2.0 request.

When the request is successfully executed, an access token is produced (an example is below). The access token is valid for one hour by default; you can either renew it with a refresh token or request a new access token.

access_token :g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=
scopes : repository 
expires_in : 3600 
refresh_token : Vj3AYYcebM8TGnle8K 
token_type : bearer
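If the token has expired, the same endpoint can be called with the refresh_token grant. As a sketch, reusing the variables from the snippet above:

```powershell
# Renew the access token using the refresh token returned earlier
$refreshData = @{
    grant_type    = 'refresh_token'
    refresh_token = $tokens.refresh_token
}
$tokens = Invoke-RestMethod -Uri $accessCodeURL -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)} -Method Post -Body $refreshData
```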

Use in Azure Linked Templates

Once we have obtained the access token, we can use it in our Azure linked templates by including it as part of the URL query string.

(For ways how we can implement linked templates, refer to my previous blog post)

{
 "apiVersion": "2015-01-01",
 "name": "dbserverLinked",
 "type": "Microsoft.Resources/deployments",
 "properties": {
    "mode": "Incremental",
     "templateLink": {
         "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/e1db69add5d62f64120b06a3179828a37b7f166c/azuredeploy.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
         "contentVersion": "1.0.0.0"
     },
     "parametersLink": {
        "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/6208359175c99bb892c2097901b0ed7bd723ae56/azuredeploy.parameters.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
        "contentVersion": "1.0.0.0"
     }
 }
}

Summary

We have described a way to obtain an access token to a private Bitbucket repository. With this token, your app can access resources, code, or any other artifacts stored in your private repository.  You can also use the access token in your build server so it can get required code, build/compile it, or perform other things.

 

 

Break down your templates with Linked Templates (Part 2)

Continued from part 1

The 2nd part of the series will describe how we construct Azure Resource Manager linked templates.

Quick Recap

In the first part, we set up the first template, which deploys the storage account, virtual network, and subnets. This will be our "master" template, where we will link all our related templates.

 


  • 1st template: master template – we will modify this template slightly to capture parameters and the linked templates
  • 2nd template: two web servers (IIS) – this is a new template
  • 3rd template: DB server (MySQL) – a new template

We will use the Azure quickstart templates on GitHub as the basis for the second and third templates. The quickstart repository is your go-to site to shop for Azure templates. The point of using templates is re-usability, and most likely you do not need to start from a blank template as you may find existing templates that you can build upon.

The templates will provision two Windows web servers with IIS plus one Linux server with MySQL. The second template is adapted from the Windows on IIS quickstart template; it was modified so that it creates two instances of the IIS server, installed using the DSC extension. There are some interesting aspects of this template that are worth examining, especially the way we deploy multiple resource instances. The third template is based on MySQL on Ubuntu. Like the second template, it was modified to reference our master template.

We will also revisit the master template in order to link all of our templates together. All templates are provided at the end of this blog post including where to find them on Github.

The templates are not particularly short but bear with me as I highlight the interesting bits so we understand their significance.


Important Bits

Second template: two web servers (IIS)

First is the use of the copy element to create multiple resource instances. It allows you to create multiple instances of a given resource, resulting in a less cluttered template as you do not need to copy and paste resources when more than one is needed.

"copy" : {
        "name" : "publicIpCopy",
        "count" : "[parameters('numOfInstances')]"
 }

The above has a number of consequences. We now have a way to copy a resource, which is all well and good, but we need to ensure that every duplicated resource has a unique name, otherwise they will conflict. This is avoided with the concat() and copyIndex() functions, which let us concatenate our base name with a counter index of the resource being duplicated.

"resources" : { 
   "name" : "[concat(parameters('vmName'), copyIndex())]" 
}

Then, what do we do with the dependsOn attribute when we have multiple resources created? Good question! And the answer is the same :) we will use copyIndex() to point Azure Resource Manager to the resource it needs. The same applies when assigning resourceId in the properties attribute.

"dependsOn" : [
   "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex())]"
]

"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'), copyIndex()))]"

There is also a DSC (Desired State Configuration) extension; it is responsible for installing the web server role using the PowerShell DSC provider.

Finally, all references to the creation of the storage account and virtual network are removed, as we use the VNET and storage account from the master template; they are provided as parameters.

Third template: one DB server (MySQL)

The DB template is pretty basic: we deploy a new Linux VM with a MySQL database installed using the Linux custom script extension. Similar to the second template, we also replace the storage and virtual network resource definitions with parameters so that they refer to our master template.

Linking all the templates together

There are two ways we can link templates. The first method is to hard-code the URLs of the linked templates. The second method, a smarter way of linking templates and my preferred approach, uses variables to link templates.

OK, there is a third method :) using parameters, where specific linked templates are deployed based on what is being passed (also known as conditional nested templates). Note: I will explain the first two methods and may save the third for another time, as it is beyond the scope of this post.

Hard-coded

With this method, we simply hard-code the URLs of our linked templates.

{
 "apiVersion": "2015-01-01",
 "name": "dbserverLinked",
 "type": "Microsoft.Resources/deployments",
 "properties": {
   "mode": "Incremental",
   "templateLink": {
     "uri": "https://urltotemplate.com/linkedtemplate.json",
     "contentVersion": "1.0.0.0"
   },
   "parametersLink": {
     "uri": "https://urltotemplate.com/linkedparamtemplate.json",
     "contentVersion": "1.0.0.0"
   }
 }
}

We could, for example, use variables to group all the URL’s together.

"variables": {
  "linkedTemplate" : {
    "templateUrl" : "https://urltotemplate.com/linkedtemplate.json",
    "paramUrl" : "https://urltotemplate.com/linkedtemplate.param.json"
  }
}

...
"templateLink": {
     "uri": "[variables('linkedTemplate').templateUrl]",
     "contentVersion":"1.0.0.0"
},
"parametersLink": {
     "uri": "[variables('linkedTemplate').paramUrl]",
     "contentVersion":"1.0.0.0"
}
...

The downside of the hard-coded approach is that you need to update every single linked template URL whenever you move your templates, e.g. when you fork or migrate templates in your source control.


Alternatively, we can choose to pass the parameters inline in our template, as opposed to using a parameters file denoted by the parametersLink attribute.

"templateLink": {
     "uri": "[variables('linkedTemplate').templateUrl]",
     "contentVersion": "1.0.0.0"
},
"parameters": {
     "attribute1" : "someValue",
     "attribute2" : "someValue"
}

The above method can also be applied to the second method (discussed below).

Variables to construct templates

This is the method we will use to link our templates together. The approach is simple albeit more elegant: we assign a variable containing the URL of the directory that holds all the linked templates. From this variable, linked template variables are built using the concat() function.

This method effectively avoids the need to update every single URL, as we only need to update the shared base URL variable from which the linked templates' URLs are built (by concatenating the two together).

"variables" : {
   "sharedBaseUrl" : "https://raw.githubusercontent.com/vicperdana/azure-quickstart-templates/master/windows-iis-sql-linked-multivm/nested/",
   "secondTemplate" : {
     "templateUrl" : "[concat(variables('sharedBaseUrl'),'azuredeploy-web.json')]",
     "paramUrl" : "[concat(variables('sharedBaseUrl'),'azuredeploy-web.parameters.json')]"
   }
   ...
}

...
"resources" : [
   {
     ...
     "properties" : {
       "templateLink": {
           "uri": "[variables('secondTemplate').templateUrl]",
           "contentVersion":"1.0.0.0"
        },
       "parametersLink": {
           "uri": "[variables('secondTemplate').paramUrl]",
           "contentVersion":"1.0.0.0"
        }
     }
   }
]

As you can see, there are a few permutations we can use; whichever method you prefer, it is quite easy to link templates together.

All templates are now linked


The linked templates on Azure portal once deployed

Extra bits

One thing we need to keep in mind is that each linked template is an entity that can be deployed independently.

When linking templates, ensure that the dependsOn attribute refers only to resources defined in the current template, not to resources in the parent or any other template; otherwise it will throw an error. This makes sense, since the scope of the dependsOn attribute is the template in which it is defined; it has no visibility of other templates.

Finally, Azure Resource Manager performs a two-part validation: before and during deployment. The pre-deployment validation checks whether a template is syntactically correct and would be accepted by Azure Resource Manager.  The deployment-time (runtime) validation then checks resource dependencies within your Azure subscription. A template may look valid, but until it is deployed there is no fool-proof way to guarantee that it works.  Nevertheless, as a good practice when deploying templates, make pre-deployment validation a habit, as it will catch the most obvious errors detected by Azure Resource Manager.
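The pre-deployment validation can be run from PowerShell before attempting the real deployment; a sketch using the AzureRM cmdlets of the era (resource group and file names are illustrative):

```powershell
# Validate the template against the target resource group without deploying it
Test-AzureRmResourceGroupDeployment -ResourceGroupName "LinkedTemplatesDemo" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```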


The next post

Now that we have covered linked templates, we will look at how we can leverage Key Vault to secure our templates and any sensitive information passed into them, such as credentials. Hang tight :).

Below, you will find the updates made to the master template and the new templates; or, if you are looking for the full template including the linked ones, I have made them available on my forked quickstart-templates repository on GitHub. Enjoy!

 

Updated master template

2 x web servers template — second template

The DB template — third template

 

Break down your templates with Linked Templates (Part 1)

Templated deployment is one of the key value propositions of moving from the Azure classic to the Resource Manager (ARM) deployment model.  It is probably the one key feature that made a big stride towards Infrastructure as Code (IaC).  Personally, I had been looking forward to this feature, since it is a prominent feature on the other competing platform.

Now that this feature has been live for a while, one aspect I find interesting is the ability to link templates in Azure Resource Manager.  This post is part of a three-part series highlighting ways you can deploy linked templates.  The first part of the series covers the basics: building a template that creates the base environment.  The second part will demonstrate linked templates.  The third part will delve into a more advanced scenario with Key Vault and how we can secure our linked templates.

Why linked templates?  We could get away with one template to build our environment, but is that a good idea?  If your environment is fairly small, sure, go ahead.  If the template becomes unmanageable, which is usually the case once you are serious about 'templating', then you have come to the right place: linking or nesting templates is a way to decouple your templates into manageable chunks.  This lets you 'branch out' template development, especially when more than one person is involved, resulting in smaller, more manageable templates rather than one big template.  Plus, it makes testing and debugging a little easier, as you have a smaller scope to work with.


OK, so let's start with the scenario that demonstrates linked templates.  We will build a two-tier app consisting of two web servers and a database server.  This includes placing the two web servers in an availability set, with a virtual network (VNET) and a storage account to host the VMs.  To decouple the components we will have three templates: one that defines the base environment (VNET, subnets, and storage account), a second that builds the virtual machines (web servers) in the front-end subnet, and a third that builds the database server.


First template (this blog post)


Second template


Third template


The building process

First, create a resource group; this example relies on a resource group for deployment.  You can create the resource group now, or later during deployment.

Once you have a resource group created, set up your environment: SDK, Visual Studio, etc. You can find more info here.  Visual Studio is optional, but this demo will use it to build and deploy the templates.

Create a new project and select Azure Resource Group.  Select Blank template.

This will create a project with PowerShell deployment script and two JSON files: azuredeploy and azuredeploy.parameters.  The PowerShell script is particularly interesting as it has everything you need to deploy the Azure templates.


We will start by defining the variables in the azuredeploy template.   Note these can be parameterised for better portability.
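The variables section might look something like this (all names and values here are illustrative, not the exact ones from the original gist):

```json
"variables": {
  "storageAccountName": "linkeddemostorage",
  "vnetName": "demo-vnet",
  "addressPrefix": "10.0.0.0/16",
  "feSubnetName": "frontend-subnet",
  "feSubnetPrefix": "10.0.1.0/24",
  "dbSubnetName": "db-subnet",
  "dbSubnetPrefix": "10.0.2.0/24"
}
```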

Then we define the resources – storage account and virtual network (including subnets).
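A condensed sketch of those two resources, assuming illustrative variable names such as storageAccountName and vnetName were defined in the variables section (API versions reflect templates of that era):

```json
"resources": [
  {
    "type": "Microsoft.Storage/storageAccounts",
    "name": "[variables('storageAccountName')]",
    "apiVersion": "2015-06-15",
    "location": "[resourceGroup().location]",
    "properties": { "accountType": "Standard_LRS" }
  },
  {
    "type": "Microsoft.Network/virtualNetworks",
    "name": "[variables('vnetName')]",
    "apiVersion": "2015-06-15",
    "location": "[resourceGroup().location]",
    "properties": {
      "addressSpace": { "addressPrefixes": [ "[variables('addressPrefix')]" ] },
      "subnets": [
        {
          "name": "[variables('feSubnetName')]",
          "properties": { "addressPrefix": "[variables('feSubnetPrefix')]" }
        },
        {
          "name": "[variables('dbSubnetName')]",
          "properties": { "addressPrefix": "[variables('dbSubnetPrefix')]" }
        }
      ]
    }
  }
]
```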

We then deploy it.  In Visual Studio (VS) the process is simply to right-click the project, select New Deployment…, then Deploy.  If you didn't create a resource group earlier, you can also create it here.
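Outside Visual Studio, the same deployment can be run directly from PowerShell; a sketch, with the resource group and file names matching this walkthrough but otherwise illustrative:

```powershell
# Create (or confirm) the resource group, then deploy the base template
New-AzureRmResourceGroup -Name "LinkedTemplatesDemo" -Location "Australia Southeast" -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName "LinkedTemplatesDemo" `
    -TemplateFile ".\azuredeploy-base.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" -Verbose
```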


The template will deploy, and a similar message should be displayed once it's finished.

Successfully deployed template ‘w:\github-repo\linkedtemplatesdemo\linkedtemplatesdemo\templates\azuredeploy-base.json‘ to resource group ‘LinkedTemplatesDemo‘.

On the Azure portal you should see the storage account and virtual network created.


The next part in the series describes how we modify this template to call the web and db servers templates (linked templates).  Stay tuned :).

 

 

 

Azure Classic vs Azure Resource Manager

On a recent customer engagement, one of the questions that came up was "Should we use Classic mode or should we use the new Resource Manager?". The guidance from Microsoft is to deploy all new workloads into Azure ARM; however, after scratching the surface, it's not quite so cut and dried.

Some Background

Azure is a platform that is currently undergoing a significant transformation and as a result, confusingly, there are two deployment models supported by Azure public cloud: Classic and Azure Resource Manager (ARM). The reason I say it’s confusing is because individual Azure resource features or behaviors can be different across the two models, or only exist in one model or the other.

There are two Portals in use today:

  • Classic Azure Portal: If you’ve been using Azure for a while, you’ve used this portal. It is used to create and configure older Azure resources that support the classic deployment model. You cannot use it to create or configure resources that only support Resource Manager.
  • ‘New’ Azure Portal: Recently out of Preview, if you’re using a newer Azure resource, you’ve probably used this portal. It can be used to create and configure some Azure resources. You’ll eventually be able to create and configure all Azure resources with it. For some resources that support both deployment models, this portal can be used to create and configure a resource using either deployment model.

How you create, configure, and manage your Azure resources is different between these two models. For example, load balancing traffic across virtual machines created with the Classic deployment model is implicit because virtual machines are members of an Azure Cloud Service, and load is automatically balanced across virtual machines within a cloud service. Virtual machines created using Resource Manager are not members of a cloud service, and a separate Azure Load Balancer resource must be explicitly created to load balance traffic across multiple virtual machines.

In Classic mode, each resource provisioned in Azure is a single management unit. Although Azure resources are created in a cloud container, when it comes to managing resources in a cloud container, you must manage all of the resources individually. The classic mode does not allow grouping of resources, which makes managing Azure resources difficult.

When you interact with Classic mode resources from a command line such as Azure PowerShell, you are using Azure Service Management API calls (ASM). ASM is a traditional way of accessing Azure resources. Both the Classic portal and Azure PowerShell use ASM API calls to manage Azure resources. It is important to note that if the Classic portal has been used to create or manage Azure resources, you can only work with Classic resources.

On the other hand, ARM plays an important role in managing resources as a single unit as it allows you to deploy Azure resources in groups called Resource Groups.

The new Azure portal allows the ability to work with both Classic and ARM Resources and when you interact with ARM using command-line tools such as Azure PowerShell, you are using ARM API calls.

ARM provides the following benefits:

  • Deploy, manage and monitor Azure resources as a group
  • Deploy resources repeatedly
  • Supports creating templates. Templates can be created to include a set of resources to be deployed as part of a cloud solution
  • Allows you to define dependencies between resources so they are deployed in the correct order
  • Ability to apply RBAC policies to all resources in a Resource Group
  • Ability to specify “tags” to resources for programmatic purposes
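The last two benefits can be sketched with the ARM cmdlets. Note that this is illustrative only: the -Tag parameter shape has varied across AzureRM module versions, so check your installed version.

```powershell
# Create a resource group with tags (hashtable form; some older AzureRM
# versions expected an array of @{Name='...';Value='...'} pairs instead)
New-AzureRmResourceGroup -Name "Dev-RG" -Location "East US" `
    -Tag @{ Environment = "Dev"; Owner = "TeamA" }
```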

So which one should I use?

Since both modes are viable options at this point, it is necessary to pay careful attention to the features each offers and to your specific requirements. ARM cannot simply be assumed to be the best fit, as it may not meet all of your or your customer's needs.

Unfortunately, some functionality still lies in the old portal but Microsoft is rapidly bringing this functionality across. In the past, it was common to have to go to both portals to build up a set of resources and services in Azure, however the “new” portal is being positioned as the single portal to create and manage services in both worlds.

The pace of newly deployed features and updates into ARM is hard to keep up with and it appears that Microsoft have removed the compatibility matrix they used to publish.

A good place to start learning about the benefits of the new portal is the General Availability Announcement along with Azure Resource Manager Overview and Azure Resource Manager vs. Classic Deployment: Understand deployment models and the state of your resources

Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group

Problem

How many times have you wanted to start or stop all virtual machines in an Azure resource group? For me it seems to be quite often, especially for development environment resource groups. It's not that difficult, though. You can just enumerate the VMs, then cycle through them and call 'Start-AzureRmVM' or 'Stop-AzureRmVM'. However, the more VMs you have, the longer that approach takes, since PowerShell runs it serially. Go to the portal and right-click each VM to start or stop it?

There has to be a way of starting or shutting down all VMs in a resource group in parallel via PowerShell, right?

Some searching suggests it is common to use Azure Automation and workflows to accomplish this. But I don't want to run this on a schedule or mess around with Azure Automation for development environments, or have to connect to the portal and kick off the workflow.

What I wanted was a script that was portable. That led me to experimenting with ScriptBlocks and the Start-Job function in PowerShell. Passing variables into locally hosted jobs running against Azure was painful, though. So I found a quick, clean way of doing it, which I detail in this post.

Solution

I'm using the brilliant Invoke-Parallel PowerShell script from Cookie.Monster to, in essence, multi-thread and run the virtual machine 'start' and 'stop' requests in parallel.

In my script at the bottom of this post I haven't included 'invoke-parallel.ps1'; the link for it is in the paragraph above. You'll need to either reference it at the start of your script or include it in your script. If you want to keep it all together in a single script, include it as I have in the screenshot below.

My rudimentary PowerShell script takes two parameters:

  1. Power state: either 'Start' or 'Stop'
  2. Resource group: the name of the Azure resource group containing the virtual machines you want to start/stop, e.g. 'RG01'
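The parameter block for such a script might be sketched as follows (parameter names match the example below; the validation attributes are illustrative):

```powershell
param (
    [Parameter(Mandatory = $true)]
    [ValidateSet('Start', 'Stop')]
    [string]$power,

    [Parameter(Mandatory = $true)]
    [string]$azureResourceGroup
)
```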

Example: .\AzureRGVMPowerGo.ps1 -power 'Start' -azureResourceGroup 'RG01' or PowerShell .\AzureRGVMPowerGo.ps1 -power 'Start' -azureResourceGroup 'RG01'

Note: If you don’t have a session to Azure in your current environment, you’ll be prompted to authenticate.

Your VMs will simultaneously start or stop.

What’s it actually doing ?

It's pretty simple. The script enumerates the VMs in the resource group you've specified, then looks for VMs whose status (Running or Deallocated) is the inverse of the power state you've specified. It will start stopped VMs in the resource group when you run it with 'Start', or stop all started VMs when you run it with 'Stop'. Simples.
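In outline, the core of the script looks something like this; a sketch only, assuming invoke-parallel.ps1 has been dot-sourced as described above and noting that status filtering at the resource-group level varies across AzureRM module versions:

```powershell
# Find VMs whose power state is the inverse of the requested action
$inverse = if ($power -eq 'Start') { 'PowerState/deallocated' } else { 'PowerState/running' }
$vms = Get-AzureRmVM -ResourceGroupName $azureResourceGroup -Status |
    Where-Object { $_.Statuses.Code -contains $inverse }

# Start or stop them in parallel via Invoke-Parallel
$vms | Invoke-Parallel -ImportVariables -ScriptBlock {
    if ($Using:power -eq 'Start') {
        Start-AzureRmVM -ResourceGroupName $Using:azureResourceGroup -Name $_.Name
    } else {
        Stop-AzureRmVM -ResourceGroupName $Using:azureResourceGroup -Name $_.Name -Force
    }
}
```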

This script could also easily be updated for other similar tasks, such as deleting all VMs in a resource group.

Here it is

Enjoy.

Follow Darren Robinson on Twitter

No More Plaintext Passwords: Using Azure Key Vault with Azure Resource Manager

siliconvalve

A big part of where Microsoft Azure is going is being driven by template-defined environments that leverage the Azure Resource Manager (ARM) for deployment orchestration.

If you’ve spent any time working with ARM deployments you will have gotten used to seeing this pattern in your templates when deploying Virtual Machines (VMs):

The adminPassword property accepts a Secure String object which contains an encrypted string that is passed to the VM provisioning engine in Azure and is used to set the login password. You provide the clear text version of the password either as a command-line parameter, or via a parameters file.
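The pattern being referred to typically looks like this in a VM template, trimmed to the relevant properties (the parameter names are the conventional ones, not necessarily those of the original post):

```json
"parameters": {
  "adminPassword": { "type": "securestring" }
},
"resources": [{
  "type": "Microsoft.Compute/virtualMachines",
  "properties": {
    "osProfile": {
      "computerName": "[parameters('vmName')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    }
  }
}]
```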

The obvious problems with this way of doing things are:

  1. Someone needs to type the cleartext password which means:
    1. it needs to be known to anyone who provisions the environment and
    2. how do I feed it into an automated environment deployment?
  2. If I store the password in a parameter…

View original post 781 more words

Resource Manager Cmdlets in Azure PowerShell 1.0

Azure recently launched version 1.0 of its PowerShell cmdlets. The changes are huge, including the new Azure Resource Manager (ARM) cmdlets, which resulted in deprecating Switch-AzureMode for switching between ASM and ARM. In this post, we take a brief look at how the new PowerShell cmdlets for ARM have been introduced, especially for managing resource groups and templates.

Installation

In order to get the newest Azure PowerShell, using MS Web Platform Installer is the quickest and easiest way.

Note: At the time of writing, the release date of Azure PowerShell is Nov 9th, 2015.

Of course, there are other ways to install the latest version of Azure PowerShell, but this is beyond the scope of this post.

New Set of Cmdlets

Now that the new version of Azure PowerShell has been installed, run PowerShell ISE with administrator privileges. As always, run Update-Help to get all the help files up to date. Then try the following command:

If you can't see anything, don't worry. You can restart ISE or even restart your PC to get it working. Alternatively, you can check those cmdlets through ISE like this:

Can you spot any differences compared with the previous version of the cmdlets? All cmdlets are now named in the pattern [Action]-AzureRm[Noun]. For example, to get the list of resource groups, you can run Get-AzureRmResourceGroup. The result will look like:
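For instance, listing resource groups with the new naming pattern (the selected properties are just for readability):

```powershell
# List resource groups using the new ARM cmdlet
Get-AzureRmResourceGroup | Select-Object ResourceGroupName, Location
```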

Now, let's try to set up a simple web application infrastructure. For the web application, at least one website and one database are required. In addition, Application Insights might be needed for telemetry purposes.

Create a Resource Group

For this infrastructure, we need a resource group. Try the following cmdlets in this order:

Can you find out the differences from the old cmdlets?

Old Cmdlets                  New Cmdlets
Get-AzureAccount             Login-AzureRmAccount
Get-AzureSubscription        Get-AzureRmSubscription
Select-AzureSubscription     Select-AzureRmSubscription

As stated above, all cmdlets are now named AzureRm instead of Azure. Once you have chosen your subscription (if you have more than one), it's time to create a resource group for those infrastructure items. It might be worth having a look at the naming guidelines for Azure resources. Let's try it.

Old Cmdlets                  New Cmdlets
New-AzureResourceGroup       New-AzureRmResourceGroup

Therefore, enter the following to create a resource group:

The resource group is now named ase-dev-rg-sample and has been created in the Australia Southeast (Melbourne) region. Let's move on to the next step: setting up all resources using a template.
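The command behind that step would be along these lines (the group name and region match this walkthrough):

```powershell
# Create the resource group in the Australia Southeast region
New-AzureRmResourceGroup -Name "ase-dev-rg-sample" -Location "Australia Southeast"
```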

Setup Resources with Azure Resource Template

Fortunately, there is a template for our purpose on GitHub repository: https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-sql-database.

Old Cmdlets                       New Cmdlets
New-AzureResourceGroupDeployment  New-AzureRmResourceGroupDeployment

Use the new cmdlet and add all resources into the group:

As you can see, we referenced the template file directly from GitHub and omitted the parameters source. Therefore, it will ask you to type the necessary parameters:
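The deployment call looks something like this, pointing -TemplateUri at the raw GitHub file (the deployment name is illustrative, and the URI assumes the standard raw-file path for the quickstart repository):

```powershell
# Deploy the quickstart template straight from GitHub, prompting for parameters
New-AzureRmResourceGroupDeployment -Name "sample-deployment" `
    -ResourceGroupName "ase-dev-rg-sample" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-web-app-sql-database/azuredeploy.json" `
    -Verbose
```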

Once everything is entered, because I passed the -Verbose parameter, all setup details will be displayed along with the result:

Check the Azure portal to see whether all defined resources have been deployed.

Everything has been smoothly deployed.

We have so far had a quick look at ARM and resource group management using the new version of the PowerShell cmdlets. There are more cmdlets in Azure PowerShell to control individual resources more precisely. I'm not going to dive too deep here, but it's well worth trying the other cmdlets for your infrastructure setup. They are even more powerful than before.

Keep enjoying on cloud with Kloud!