Creating Accounts on Azure SQL Database through PowerShell Automation

In the previous post, Azure SQL Pro Tip – Creating Login Account and User, we briefly walked through how to create login accounts on Azure SQL Database through SSMS. Using SSMS is of course very convenient. However, as a DevOps engineer, I want to automate this process through PowerShell. In this post, we’re going to walk through how to achieve this goal.

Step #1: Create Azure SQL Database

First of all, we need an Azure SQL Database. This can easily be done by deploying an ARM template in PowerShell like:
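As a rough sketch – the resource group name, location and template file names below are placeholders of mine, not taken from an actual deployment:

```powershell
# Assumes you have already logged in (Add-AzureRmAccount) and selected a subscription.
# The resource group name, location and template files are placeholders.
New-AzureRmResourceGroup -Name "my-resource-group" -Location "Australia Southeast"

New-AzureRmResourceGroupDeployment `
    -Name "AzureSqlDeployment" `
    -ResourceGroupName "my-resource-group" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```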

We’re not going to dig into it any further, as this is beyond our topic. Now we’ve got an Azure SQL Database.

Step #2: Create SQL Script for Login Account

In the previous post, we used the following SQL script:
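For reference, that script looked something like this – the login name and password here are examples only:

```sql
-- Run against the master database
CREATE LOGIN foo WITH PASSWORD = 'p@ssw0rd!';

-- Run against the user database
CREATE USER foo FROM LOGIN foo;
EXEC sp_addrolemember 'db_owner', 'foo';
```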

Now, we’re going to automate this process by providing the username and password as parameters to an SQL script. The main part of the script above is CREATE LOGIN ..., so we modify it slightly:
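A sketch of that modified script, with @Username and @Password intended to be supplied from PowerShell as parameters:

```sql
-- @Username and @Password are expected to arrive as parameters
CREATE LOGIN @Username WITH PASSWORD = @Password;
```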

Now the SQL script is ready.

Step #3: Create PowerShell Script for Login Account

We need to execute this in PowerShell. Look at the following PowerShell script:
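The script in question follows the classic ADO.NET pattern – the server name, credentials, script path and login details below are examples only:

```powershell
# Server, credentials and file path are examples – replace with your own.
# CREATE LOGIN must run against the master database.
$connectionString = "Server=tcp:my-server.database.windows.net,1433;Database=master;User ID=sqladmin;Password=admin-p@ssw0rd!;Encrypt=True;"

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand
$command.Connection = $connection
$command.CommandText = Get-Content ".\create-login.sql" -Raw

# Pass the username and password in as SQL parameters
$command.Parameters.AddWithValue("@Username", "foo") | Out-Null
$command.Parameters.AddWithValue("@Password", "p@ssw0rd!") | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()
```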

Looks familiar? Yes, indeed. It’s basically the same as using ADO.NET in an ASP.NET application. Let’s run this PowerShell script. Whoops! Something went wrong. We can’t run the SQL script. What’s happening?

Step #4: Update SQL Script for Login Account

CREATE LOGIN won’t take variables. In other words, the SQL script above will never work as written. The only way around this is dynamic SQL which, ugly as it is, we have no choice but to use here. Therefore, let’s update the SQL script:
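The dynamic SQL version might look like this, still taking @Username and @Password as parameters but building the CREATE LOGIN statement as a string and executing it with sp_executesql:

```sql
-- @Username and @Password still arrive as parameters;
-- the CREATE LOGIN statement itself is now built as a string.
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'CREATE LOGIN [' + @Username + N'] WITH PASSWORD = ''' + @Password + N'''';

EXEC sp_executesql @sql;
```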

Then run the PowerShell script again and it will work. Please note that using dynamic SQL here isn’t a big issue, as none of these scripts are exposed to the public anyway.

Step #5: Update SQL Script for User Login

In a similar way, we need to create a user in the Azure SQL Database. This also requires dynamic SQL like:
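For example – run against the user database this time, with @Username supplied as a parameter as before:

```sql
-- Creates the user from the login and adds it to the db_owner role
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'CREATE USER [' + @Username + N'] FROM LOGIN [' + @Username + N']; '
         + N'EXEC sp_addrolemember ''db_owner'', ''' + @Username + N'''';

EXEC sp_executesql @sql;
```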

This is to create a user with a db_owner role. In order for the user to have only limited permissions, use the following dynamic SQL script:
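For instance, swapping db_owner for the built-in db_datareader and db_datawriter roles:

```sql
-- Creates the user and grants read and write access only
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'CREATE USER [' + @Username + N'] FROM LOGIN [' + @Username + N']; '
         + N'EXEC sp_addrolemember ''db_datareader'', ''' + @Username + N'''; '
         + N'EXEC sp_addrolemember ''db_datawriter'', ''' + @Username + N'''';

EXEC sp_executesql @sql;
```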

Step #6: Modify PowerShell Script for User Login

In order to run the SQL script right above, run the following PowerShell script:
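It is essentially the same ADO.NET pattern as before; the one thing to watch, as sketched below, is that the connection string must now target the user database rather than master (server, credentials and paths remain examples only):

```powershell
# Note the connection string now points at the user database, not master
$connectionString = "Server=tcp:my-server.database.windows.net,1433;Database=my-database;User ID=sqladmin;Password=admin-p@ssw0rd!;Encrypt=True;"

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand
$command.Connection = $connection
$command.CommandText = Get-Content ".\create-user.sql" -Raw
$command.Parameters.AddWithValue("@Username", "foo") | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()
```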

So far, we have walked through how we can use a PowerShell script to create login accounts and user logins on Azure SQL Database. With this approach, DevOps engineers can easily create accounts on Azure SQL Database by running a PowerShell script on their build or deployment server.

Programmatically interacting with Yammer via PowerShell – Part 2

In my last post I foolishly said that part 2 would be ‘coming in the next few days’. This of course didn’t happen, but I guess it’s better late than never!

In part 1, which is available here, I described how it was possible to post to a Yammer group via a *.ps1 using a ‘Yammer Verified Admin’ account. While this worked a treat, it soon became apparent that this approach had limited productivity rewards. Instead, I wanted to create groups and add users to those groups, all while providing minimal input.

Firstly, there isn’t a documented ‘create group’ endpoint, but a quick hunt round the tinterweb with Google helped me uncover groups.json. This simply needs a name and whether the group is open or closed (open = $false, closed = $true). So, building on my example from Part 1, the below code should create a new group…

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$Token = "AyD********NB65i2LidQ"
$Group = "Posting to Yammer Group"
$GroupType = $True
$CreateGroupUri = "https://www.yammer.com/api/v1/groups.json?name=$Group&private=$GroupType"

    $Headers = @{
        "Accept" = "*/*"
        "Authorization" = "Bearer "+$Token
        "accept-encoding" = "gzip"
        "content-type" = "application/json"
         }

Invoke-WebRequest -Method POST -Uri $CreateGroupUri -Header $Headers
You’ll notice I’ve moved away from Invoke-RestMethod to Invoke-WebRequest. This is due to finding a bug where the script would hang and eventually time out, which is detailed in this link.

All going well, you should end up with a new group which has your ‘Yammer Verified Admin’ as the sole member ala…


Created Yammer Group

Great, but as I’ve just highlighted, there is only one person in that group, and that’s the admin account we’ve been using. To add other Yammer-registered users to the group we need to impersonate them. This is only possible via a ‘Yammer Verified Admin’ account, for obvious chaos-avoiding reasons. So firstly you need to grab the token of the user…

$GetUsersUri = "https://www.yammer.com/api/v1/users.json"
$YammerUPN = "dave.young@daveswebsite.com"
$YammerUsers = (Invoke-WebRequest -Uri $GetUsersUri -Method Get -Headers $Headers).content | ConvertFrom-Json

foreach ($YammerUser in $YammerUsers)
    {
    if ($YammerUser.email -eq $YammerUPN)
        {
        $YammerUserId = $YammerUser.id
        }
    }

$GetUserTokUri = "https://www.yammer.com/api/v1/oauth/tokens.json?user_id=$YammerUserId&consumer_key=$clientID"
$YammerUserDave = (Invoke-WebRequest -Uri $GetUserTokUri -Method Get -Headers $Headers).content | ConvertFrom-Json

To step you through the code: I’ve changed the URI to users.json, provided the UPN of the user that I want to impersonate, and I’m using the headers from the previously provided code. I grab all the users into the $YammerUsers variable and then do a foreach/if to obtain the id of the user. Now we’ve got that, we can use tokens.json to perform a GET request. This will bring back a lot of information about the user, but most importantly you’ll get the token!

    user_id : 154**24726
    network_id : 20**148
    network_permalink : daveswebsite.com
    network_name : daveswebsite.com
    token : 18Lz3********Nu0JlvXYA
    secret : Wn9ab********kellNnQgvSfbGJjBfRMWZNICW0JTA
    view_members : True
    view_groups : True
    view_messages : True
    view_subscriptions : True
    modify_subscriptions : True
    modify_messages : True
    view_tags : True
    created_at : 2015/06/15 23:59:19 +0000
    authorized_at : 2015/06/15 23:59:19 +0000
    expires_at :

Storing this in the $UserToken variable allows you to append it to the Authorization header so you can impersonate/authenticate on behalf of the user. The code looks like…

$UserToken = $YammerUserDave.token
$YammerGroupId = "61***91"

 $UserHeaders = @{
                "Accept" = "*/*"
                "Authorization" = "Bearer "+$UserToken
                "accept-encoding" = "gzip"
                "content-type" = "application/json"
                }

$PostGroupUri = "https://www.yammer.com/api/v1/group_memberships.json?group_id=$YammerGroupId"
$AddYammerUser = Invoke-WebRequest -Uri $PostGroupUri -Method Post -Headers $UserHeaders

So using the group that we created earlier and the correct variables we then successfully add the user to the group…


Dave in the group

Something to be mindful of: when you pull the groups or the users, the results come back in pages of 50. I found a Do/While worked nicely to build up the variables so they could then be queried, like this…

If ($YammerGroups.Count -eq 50)
    {
    # Page 1 is already in $YammerGroups, so start paging from page 2
    $GroupCycle = 2
    DO
        {
        $GetMoreGroupsUri = "https://www.yammer.com/api/v1/groups.json?page=$GroupCycle"
        $MoreYammerGroups = (Invoke-WebRequest -Uri $GetMoreGroupsUri -Method Get -Headers $AdminHeaders).content | ConvertFrom-Json
        $YammerGroups += $MoreYammerGroups
        $GroupCycle ++
        $GroupCount = $YammerGroups.Count
        }
    While ($MoreYammerGroups.Count -gt 0)
    }

Once you’ve got your head around this, the rest of the endpoints on the REST API are really quite useful. My only gripe right now is that they’re missing a ‘delete group’ endpoint – hopefully it’ll be out soon!

Cheers,

Dave

Programmatically interacting with Yammer via PowerShell – Part 1

For my latest project I was asked to automate some Yammer activity. I’m first to concede that I don’t have much of a dev background, but I instantly fired up PowerShell ISE in tandem with Google, only to find… well, not a lot! After a couple of weeks fighting a steep learning curve, I thought it best to blog my findings – it’s good to share ‘n all that!

    It’s worth mentioning at the outset that if you want to test this out you’ll need an Office 365 E3 trial and a custom domain. It’s possible to trial Yammer, but not with the default *.onmicrosoft.com domain.

First things first: there isn’t a PowerShell module for Yammer. I suspect it’s on the to-do list over in Redmond following the 2012 acquisition. So instead, the REST API is our interaction point. There is some very useful documentation, along with examples of the JSON queries, over at the Yammer developer site, linked here.

The site also covers the basics of how to interact using the REST API. Following the instructions, you’ll want to register your own application. This is covered perfectly in the link here.

When registering you’ll need to provide an Expected Redirect. For this I simply put my Yammer site address again: https://www.yammer.com/daveswebsite.com. For the purposes of my testing, I’ve not had any issues with this setting. This URL is important and you’ll need it later, so make sure to take a note of it. From the registration you should also grab your Client ID & Client secret.

While we’ve now got what appears to be the necessary tools to authenticate, we still need to follow a few steps to retrieve our Admin token.

    It is key to point out that I use the Yammer Verified Admin. This will be more critical to follow for Part 2 of my post, but it’s always good to start as you mean to go on!

So the following script will load Internet Explorer and compile the necessary URL. You will of course simply change the entries in the variables to the ones you created during your app registration. I have obfuscated some of the details in my examples, for obvious reasons 🙂

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$RedirURL = "https://www.yammer.com/daveswebsite.com"

$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
$ie.Navigate2("https://www.yammer.com/dialog/oauth?client_id=$clientID&redirect_uri=$RedirURL")

From the IE Window, you should login with your Yammer Verified Admin and authorise the app. Once logged in, proceed to this additional code…

$UrlTidy = $ie.LocationURL -match 'code=(......................)'; $Authcode = $Matches[1]
$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
$ie.Navigate2("https://www.yammer.com/oauth2/access_token.json?client_id=$clientID&client_secret=$clientsecret&code=$Authcode")

This script simply captures the code returned on the 302 redirect and extracts it into $Authcode, which is required for the token request. It will then launch an additional Internet Explorer session and prompt you to download an access_token.json. In there you will find your Admin token, which does not expire and can be used for all further admin tasks. I found it useful to load this into a variable using the code below…

$Openjson = $(Get-Content 'C:\Tokens\access_token.json' ) -join "`n" | ConvertFrom-Json
$token = $Openjson.access_token.token

OK, so we seem to be getting somewhere, but our Yammer page is still looking rather empty! Now that all the prerequisites are complete, we can make our first post. A good example to start with is posting a message, which is detailed in the link here.

I started with this one mainly because all we need is the group_id of one of the groups in Yammer and the message body in JSON format. I created a group manually and then just grabbed the group_id from the end of the URL in my browser. I have provided an example below…

$uri = "https://www.yammer.com/api/v1/messages.json"

$Payloadjson = '{
"body": "Posting to Yammer!",
"group_id": 59***60
}'

$Headers = @{
"Accept" = "*/*"
"Authorization" = "Bearer "+$token
"accept-encoding" = "gzip"
"content-type"="application/json"
}

Invoke-RestMethod -Method Post -Uri $uri -Header $Headers -Body $Payloadjson

Yammer Post

    It’s at this stage you’ll notice that I’ve only used my second cmdlet, Invoke-RestMethod. Both this and ConvertFrom-Json were introduced in PowerShell 3.0 and are specifically designed for REST web services like this.

A key point to highlight here is the Authorization attribute in $Headers. This is where the $token is passed to Yammer for authentication. Furthermore, this $Headers construct is all you need going forward. It’s simply a case of changing the -Method, the $uri and the $Payloadjson, and you can play around with all the different JSON queries listed on the Yammer site.

While this was useful for me, it soon became apparent that I wanted to perform actions on behalf of other users. This is something I’ll look to cover in Part 2 of this Blog, coming in the next few days!

Automate your Cloud Operations Part 1: AWS CloudFormation

Operations

What is Operations?

In the IT world, Operations refers to a team or department within IT which is responsible for the running of a business’ IT systems and infrastructure.

So what kind of activities does this team perform on a day-to-day basis?

Building, modifying, provisioning and updating systems, software and infrastructure to keep them available, performant and secure, which ensures that users can be as productive as possible.

When moving to public cloud platforms the areas of focus for Operations are:

  • Cost reduction: achieved by designing the platform properly and applying good practices when managing it (scale down / switch off)
  • Smarter operation: use of automation and APIs
  • Agility: faster provisioning of infrastructure and environments by automating everything
  • Better uptime: plan for failover, and design effective DR solutions more cost-effectively.

If Cloud is the new normal then Automation is the new normal.

For this blog post we will focus on automation using AWS CloudFormation. The template I will use in this post is for educational purposes only and may not be suitable for production workloads :).

AWS CloudFormation

AWS CloudFormation provides developers and system administrators with an easy way to create and manage a collection of related AWS resources, including provisioning and updating them in an orderly and predictable fashion. AWS provides various CloudFormation templates, snippets and reference implementations.

Let’s talk about versioning before diving deeper into CloudFormation. It is extremely important to version your AWS infrastructure in the same way as you version your software. Versioning will help you to track changes within your infrastructure by identifying:

  • What changed?
  • Who changed it?
  • When was it changed?
  • Why was it changed?

You can tie this version to service management or project delivery tools if you wish.

You should also put your templates into source control. Personally I am using GitHub to version my infrastructure code, but any system such as Team Foundation Server (TFS) will do.

AWS Infrastructure

The below diagram illustrates the basic AWS infrastructure we will build and automate for this blog post:

CloudFormation1

Initial Stack

Firstly we will create the initial stack. Below are the components for the initial stack:

  • A VPC with a CIDR block of 192.168.0.0/16 (65,536 IPs)
  • Three public subnets across 3 Availability Zones: 192.168.10.0/24, 192.168.11.0/24, 192.168.12.0/24
  • An Internet Gateway attached to the VPC to allow public Internet access. This is a routing construct for the VPC, not an EC2 instance
  • Routes and route tables for the three public subnets so EC2 instances in those subnets can communicate with the Internet
  • Default network ACLs to allow all communication inside the VPC.

Below is the CloudFormation template to build the initial stack.

The template can be downloaded here: https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template

I put together the following video on how to use the template:

Understanding a CloudFormation template

AWS CloudFormation is pretty neat and FREE. You only pay for the AWS resources provisioned by the CloudFormation template.

The next step is understanding the structure of the template. Typically a CloudFormation template has five sections:

  • Headers
  • Parameters
  • Mappings
  • Resources
  • Outputs

Headers: the template format version and a description of what the template does.

Parameters: values supplied at provision time, much like command-line options.

Mappings: conditional, case-statement-style lookups, such as per-region AMI IDs.

Resources: all the resources to be provisioned.

Outputs: values returned once the stack has been created.
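As a rough sketch, a minimal template touching all five sections might look like the following – the resource names, mapping values and AMI ID are illustrative only:

```json
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Demo VPC stack",

  "Parameters" : {
    "KeyName" : {
      "Type" : "String",
      "Description" : "EC2 key pair to use"
    }
  },

  "Mappings" : {
    "RegionMap" : {
      "ap-southeast-2" : { "AMI" : "ami-12345678" }
    }
  },

  "Resources" : {
    "DemoVPC" : {
      "Type" : "AWS::EC2::VPC",
      "Properties" : {
        "CidrBlock" : "192.168.0.0/16"
      }
    }
  },

  "Outputs" : {
    "VpcId" : {
      "Description" : "The ID of the VPC",
      "Value" : { "Ref" : "DemoVPC" }
    }
  }
}
```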

Note: not all AWS resources can be provisioned using AWS CloudFormation, and the service has some limitations.

In Part 2 we will dive deeper into AWS CloudFormation and automate the EC2 instances, including the configuration of the NAT and Bastion host instances.

http://www.wasita.net

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward, and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it’s tedious manual configuration, switching between tools, and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

Migrate-AzureVM

  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage Accounts, Cloud Services, VNets etc. have already been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false

The parameters are:

  • SourceSubscription – name of the source Azure Subscription
  • SourceServiceName – name of the source Cloud Service
  • VMName – name of the VM to migrate
  • DestSubscription – name of the destination Azure Subscription
  • DestStorageAccountName – name of the destination Storage Account
  • DestServiceName – name of the destination Cloud Service
  • DestVNETName – name of the destination VNet (blank if none used)
  • IsReadOnlySecondary – indicates whether we are copying from the source Storage Account’s read-only secondary location
  • Overwrite – indicates whether to overwrite if the VHD already exists in the destination Storage Account
  • RemoveDestAzureDisk – indicates whether to remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shut down the VM and export the configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shutdown we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

# Get list of azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region copy times can vary between 15 mins and a few hours. It all depends on which storage cluster the accounts reside. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Between Regions will always take a longer time (and incur data egress charges don’t forget!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run as you’ll have times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things might go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure data disks that take a lease on our underlying VHDs. So that we don’t get clashes, I use a naming convention based on the Cloud Service name (which must be globally unique), the VM name and a disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you looking at migrating VMs between Regions within the same Geo and have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available on GitHub. The script also supports a number of other migration scenarios (e.g. cross-Subscription, cross-Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.