Ok Google Email me the status of all vms – Part 2

First published at https://nivleshc.wordpress.com

In my last blog, we configured the backend systems necessary to ask Google Home “OK Google Email me the status of all vms” and have it email us the result. If you haven’t finished doing that, please refer back to my last blog and get that done before continuing.

In this blog, we will configure Google Home.

Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.

For our purposes, we will be using the platform If This Then That, or IFTTT for short. IFTTT is a very powerful platform as it lets you create actions based on triggers. This combination of a trigger and an action is called a recipe (IFTTT now calls these applets).

Ok, let’s dig in and create our IFTTT recipe to accomplish our task.

1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)

1.2   Login to IFTTT and click on My Applets menu from the top

IFTTT_MyApplets_Menu

1.3   Next, click on New Applet (top right hand corner)

1.4   A new recipe template will be displayed. Click on the blue +this to choose a service

IFTTT_Reicipe_Step1

1.5   Under Choose a Service type “Google Assistant”

IFTTT_ChooseService

1.6   In the results Google Assistant will be displayed. Click on it

1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, login with the Google account that is associated with your Google Home and then approve IFTTT to access it.

IFTTT_ConnectGA

1.8   The next step is to choose a trigger. Click on Say a simple phrase

IFTTT_ChooseTrigger

1.9   Now we will put in the phrases that Google Home should trigger on.

IFTTT_CompleteTrigger

For

  • What do you want to say? enter “email me the status of all vms”
  • What do you want the Assistant to say in response? enter “no worries, I will send you the email right away”

All the other sections are optional; you can fill them in if you prefer to do so.

Click Create trigger

1.10   You will be returned to the recipe editor. To choose the action service, click on +that

IFTTT_That

1.11  Under Choose action service, type webhooks. From the results, click on Webhooks

IFTTT_ActionService

1.12   Then for Choose action click on Make a web request

IFTTT_Action_Choose

1.13   Next the Complete action fields screen is shown.

For

  • URL – paste the webhook URL of the runbook that you copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields

Click Create action

1.14   In the next screen, click Finish

IFTTT_Review

 

Woo hoo! Everything is now complete. Let’s do some testing.

Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries, I will send you the email right away”.

I have noticed some delays in receiving the email; however, the longest I have had to wait is about 5 minutes. If this is unacceptable, modify the Send-MailMessage command in the runbook script by adding the parameter -Priority High. This sends all emails with high priority, which should speed things up. Also, the runbook currently runs in Azure; better performance might be achieved by using Hybrid Runbook Workers.
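For reference, the modified command in the runbook would look like this (it is the same Send-MailMessage line from the Part 1 script, with the extra parameter added):

Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl -Priority High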

To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.

Automation_JobsLog

There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.

If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.

Ok Google Email me the status of all vms – Part 1

First published at https://nivleshc.wordpress.com

Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!

One of the up-and-coming areas in computer science is Artificial Intelligence. What seemed like science fiction when Isaac Asimov penned I, Robot now seems much closer to reality.

Lately the market has been filling up with virtual assistants from the likes of Apple, Amazon and Google. These are “bots” that use Artificial Intelligence to help us with our daily lives, from telling us about the weather, to reminding us about our shopping lists, to letting us know when our next train will be arriving. I still remember my first virtual assistant, Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.

I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions: it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well, maybe in the next version (Google, here’s a tip for your next version 😉 )

Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.

In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines; however, I would caution against doing this in your production environment, in case you turn off a machine that is running critical workloads.

In this first blog post, we will setup the backend systems to achieve the tasks and in the next blog post, we will connect it to Google Home.

The diagram below shows how we will achieve what we have set out to do.

Google Home Workflow

Below is a list of the tasks that will happen:

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”.
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service.
  3. IFTTT will then trigger its webhooks service to call the webhook URL attached to an Azure Automation runbook.
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run and obtain the status of all VMs.
  6. The output will be emailed to the designated recipient.

Ok, enough talking 😉 let’s get cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides the permissions to access the Azure resources. For our purposes, we will be using a service principal account.

Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to log in to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where

DisplayName is the display name for your AD Application eg “Google Home Automation”

HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)

IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation

Password is the password you will give the service principal account
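Putting the example values above together (the password here is just a placeholder), the command would look something like this:

$adApp = New-AzureRmADApplication -DisplayName "Google Home Automation" -HomePage "http://googlehome" -IdentifierUris "http://googleHomeAutomation" -Password "P@ssw0rd!"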

Now, let’s create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure subscription. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is actually the ApplicationId suffixed by your Azure AD domain name. To get the ApplicationId, run the following, providing the IdentifierUri that was supplied when the application was created above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let’s check that we can log in to Azure using the newly created service principal account and its password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where TenantId is your Azure Tenant’s ID

If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.

2.1   Login to the Azure Portal and then click New

AzureMarketPlace_New

2.2   Then type Automation and click search. From the results click the following.

AzureMarketPlace_ResultsAutomation

2.3   In the next screen, click Create

2.4   Next, fill in the appropriate details and click Create

AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately Azure doesn’t provide relay servers that scripts can use to send email. Instead, you have to use either EOP (Exchange Online Protection) servers or SendGrid. SendGrid is an Email Delivery Service available through Azure, and you need to create an account to use it. For our purposes we will use the free tier, which allows the delivery of 25,000 emails per month, which is plenty for us.

3.1   In the Azure Portal, click New

AzureMarketPlace_New

3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create

AzureMarketPlace_ResultsSendGrid

3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.

SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will create a Runbook containing the PowerShell script that does all the work. The script will use the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside the script.

4.1   Go into the Automation Account that you had created.

4.2   Under Shared Resources, click Credentials

AutomationAccount_Credentials

4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create

Credentials_Details

4.4   Repeat step 4.3 above to add the SendGrid account

4.5   Now that the Credentials have been stored, under Process Automation click Runbooks

Automation_Runbooks

Then click Add a runbook and in the next screen click Create a new runbook

4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create

Runbook_Details

4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'

Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}

$outputFile = $env:TEMP+ "\AzureVmStatus.html"
$vmarray = @()

#Get a list of all vms 
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"

$index = 0
# Add info about VM's to the array
foreach ($vm in $vms){ 
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status
 $vmstatus = Get-AzurermVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
$vmarray | Sort-Object PowerState,OSType -Desc

Write-Output "Converting Output to HTML" 
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"

$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"

Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal account’ and ‘SendGrid account’ are the names of the credentials that were created in the Automation Account (include the quotes around the names, and make sure they match exactly)
  • {tenantId} is your Azure Tenant’s ID, the same value used when testing the service principal login earlier
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address the same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of VMs

4.8   Next, we will create a webhook. A webhook is a special URL that allows us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password, since whoever possesses it can execute the runbook without needing to provide any credentials.

Open the runbook that was just created and from the top menu click on Webhook

Webhook_menu

4.9   In the next screen click Create new webhook

4.10  A security message will be displayed informing that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.

Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.

Once the webhook has expired, you can’t use it to trigger the runbook, however before it expires, you can change the expiry date. For security reasons, it is recommended that you don’t keep the webhook alive for a long period of time.

Webhook_details
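If you want to sanity-check the webhook before wiring it up to IFTTT in the next blog, you can call it from PowerShell. The URL below is a placeholder; use the one you copied:

# Paste in the webhook URL that you copied above
$webhookUrl = "https://s2events.azure-automation.net/webhooks?token=..."
$response = Invoke-WebRequest -Uri $webhookUrl -Method Post
# The response body should contain the Id of the runbook job that was queued
$response.Content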

That’s it folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.

Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉

I will see you in Part 2 of this blog. Ciao 😉

Scripting the generation & creation of Microsoft Identity Manager Sets/Workflows/Sync & Management Policy Rules with the Lithnet Resource Management PowerShell Module

Introduction

Yes, that title is quite a mouthful. And this post is going to be quite long. But worth the read if you are having to create a number of rules in Microsoft/Forefront Identity Manager, or even more so the same rule in multiple environments (eg. Dev, Staging, Production).

My colleague David Minnelli recently introduced using the Lithnet RMA PowerShell Module and the Import-RMConfig cmdlet for bulk creation of MIM Sets and MPRs. David has a lot of the background on Import-RMConfig and getting started with it. Give that a read for a more detailed background.

In this post I detail using Import-RMConfig to create a Set, Workflow, Synchronization Rule and Management Policy Rule to populate a Development AD Domain with Users from a Production AD Domain. This process is designed to run on a combined MIM Service/Sync Server. If your roles are separated (as they likely will be in a Production environment) you will need to run these scripts on the MIM Sync Server (so they can query the Management Agents), and you will need to add a line to connect to the MIM Service (eg. Set-ResourceManagementClient) at the beginning of the script.

In my environment I have two Active Directory Management Agents, each connected to an AD Forest as shown below.

On each of the AD MAs I have a Constant Flow Attribute (named Source) configured to flow in a value representing the source AD Forest. I’m doing this in my environment as I have more than one production forest (hence the need for automation). You could simply use the Domain attribute for the Set criteria instead. That attribute is used in the Set later on; I’m mentioning it up front so it makes sense.

Overview

The Import-RMConfig cmdlet uses XML and XOML files that contain the configuration required to create the Set, Workflow, Sync Rule and MPR in the FIM/MIM Service. The order in which I approach the creation is: Sync Rule, Workflow, Set and finally the MPR.

Each of these objects, as indicated above, leverages an XML and/or XOML input file. I’ve simplified the base templates and included them in the scripts.

The Sync Rule script includes a prompt to choose a folder (you can create one through the GUI presented) to store the XML/XOML files so that Import-RMConfig can use them. Once generated, you can simply reference the files with Import-RMConfig to replicate the creation in another environment, as sketched below.
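A quick sketch of what that replication looks like (the file path here is a placeholder for one of the generated files); you can preview the changes before committing them:

Import-Module LithnetRMA

# Preview what would be created, without committing anything
Import-RMConfig -File .\ProvisionDevAD\SyncRule.xml -Preview -Verbose

# Apply the configuration to the FIM/MIM Service
Import-RMConfig -File .\ProvisionDevAD\SyncRule.xml -Verbose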

Creating the Synchronization Rule

For creation of the Sync Rule we need to define which Management Agent will be the target for our Sync Rule. In my script I’ve automated that too (as I have a number to do). I’m querying the MIM Sync Server for all its Active Directory MA’s and then providing a dialog to allow you to choose the target MA for the Sync Rule. That dialog simply looks like the one below.

Creating the Sync Rule will finally ask you to give the Rule a name. This name will then be used as the base Display Name for the Set, MPR and Workflow (and a truncated version as the Rule IDs).

The script below in the $SyncRuleXML section defines the rules of the Sync Rule. Mine is an Outbound Sync Rule with a base set of attributes, transforming the user’s UPN and DN (for the differing Development AD namespace). Update lines 42 and 45 with the UPN and DN for your namespace.

Creating the Workflow

The Workflow script is pretty self-explanatory: a simple Action-based workflow, shown below.

Creating the Set

The Set is the group of objects that will be synchronized to the target management agent. As my Sync Rule is only for Users, my Set also contains only users. As stated in the Overview, I have an attribute that defines the authoritative source for the objects, and I’m also using that in my Set criteria.
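The embedded script doesn’t show here, but to give an idea of the shape of the input Import-RMConfig expects, a minimal config XML for a criteria-based Set looks roughly like the following. The display name, the Source attribute and its ‘Production’ value are assumptions based on my environment; adjust the filter for yours.

<Lithnet.ResourceManagement.ConfigSync xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Operations>
    <ResourceOperation operation="Add Update" resourceType="Set" id="ProvisionDevADSet">
      <AnchorAttributes>
        <AnchorAttribute>DisplayName</AnchorAttribute>
      </AnchorAttributes>
      <AttributeOperations>
        <AttributeOperation operation="replace" name="DisplayName">!__ProvisionDevAD Set</AttributeOperation>
        <AttributeOperation operation="replace" name="Filter"><![CDATA[<Filter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="filter" Dialect="http://schemas.microsoft.com/2006/11/XPathFilterDialect">/Person[Source = 'Production']</Filter>]]></AttributeOperation>
      </AttributeOperations>
    </ResourceOperation>
  </Operations>
</Lithnet.ResourceManagement.ConfigSync>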

Creating the Management Policy Rule

The MPR ties everything together. Here’s that part of the script.

Tying them all together

Here is the end-to-end automation, and the raw script that you could use as the basis for automating similar rule generation. The Sync Rule could easily be updated for Contacts or Groups. Remember, the attributes and object classes are case sensitive.

  • Through the Browse for Folder dialog I created a new folder named ProvisionDevAD

  • I provided a Display Name for the rules

  • I chose the target Management Agent

  • The SyncRule, Workflow, Set and MPR are created. The whole thing takes seconds.

  • Script Complete

Let’s take a look at the completed objects through the MIM Portal.

Sync Rule

The Sync Rule is present as we named it, including the !__ prefix so it appears at the top of the list.

Outbound Sync Rule based on a MPR, Set and Workflow

The Resources will be created and, if deleted, deprovisioned.

And our base attribute flows.

Set

Our Set was created.

Our naming aligns with what we input

And it is a Criteria-based Set. As per the Overview, I have an attribute populated by a Constant flow rule that I based my Set on. You’ll want to update this for your environment.

Workflow

The Action Workflow was created

All looks great

And it applies our Sync Rule

MPR

And finally our MPR. Created as a Transition In MPR with Action Workflow

Set Transition and naming all aligned

The Transition Set configured for the Set that was created

And the Workflow configured with the Workflow that was just created

Summary

When you have a lot of Sync Rules to create, or you know you will need to re-create them numerous times, potentially in different environments, automation is key. This just scratches the surface of what can be achieved, and it is all made so much easier using the Lithnet PowerShell Modules.

Here is the full script. Note: you’ll need to make a couple of minor changes as indicated earlier, but you should be able to create a Provisioning Rule end to end pretty quickly to validate the process. Then customise accordingly for your environment and requirements. Enjoy.

Creating Accounts on Azure SQL Database through PowerShell Automation

In the previous post, Azure SQL Pro Tip – Creating Login Account and User, we briefly walked through how to create login accounts on Azure SQL Database through SSMS. Using SSMS is of course very convenient; however, as a DevOps engineer, I want to automate this process through PowerShell. In this post, we’re going to walk through how to achieve this goal.

Step #1: Create Azure SQL Database

First of all, we need an Azure SQL Database. This can easily be done by running an ARM template in PowerShell like:
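That embedded snippet hasn’t come across here, but the deployment step is essentially a resource group deployment. A rough sketch, assuming a template file named azuredeploy.json that defines the SQL server and database (all names below are placeholders):

# Log in and create a resource group to deploy into
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "my-sql-rg" -Location "Australia East"

# Deploy the ARM template that defines the Azure SQL server and database
New-AzureRmResourceGroupDeployment -Name "sqldb-deployment" `
                                   -ResourceGroupName "my-sql-rg" `
                                   -TemplateFile .\azuredeploy.json `
                                   -TemplateParameterFile .\azuredeploy.parameters.json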

We’re not going to dig into this further, as it is beyond our topic. Now we’ve got an Azure SQL Database.

Step #2: Create SQL Script for Login Account

In the previous post, we used the following SQL script:
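That script hasn’t survived here, but it was along these lines: a CREATE LOGIN statement run against the master database (the login name and password are placeholders):

-- Run this against the master database
CREATE LOGIN testlogin WITH PASSWORD = 'TestP@ssw0rd!'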

Now, we’re going to automate this process by providing the username and password as parameters to the SQL script. The main part of the script above is CREATE LOGIN ..., so we slightly modify it like:
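A first attempt at parameterising it might simply swap the literals for variables, like this (spoiler: we’ll hit a problem with this in a moment):

CREATE LOGIN @Username WITH PASSWORD = @Password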

Now the SQL script is ready.

Step #3: Create PowerShell Script for Login Account

We need to execute this in PowerShell. Look at the following PowerShell script:
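The script itself isn’t embedded here, but the shape of it is below: a sketch that loads the SQL file and executes it with ADO.NET, passing the username and password as SqlParameter values (server name, credentials and file path are placeholders):

$serverName = "mysqlserver.database.windows.net"
$adminLogin = "sqladmin"
$adminPassword = "P@ssw0rd!"

# Logins live in the master database, so connect there
$connectionString = "Server=tcp:$serverName,1433;Database=master;User ID=$adminLogin;Password=$adminPassword;Encrypt=True;"
$query = Get-Content .\create-login.sql -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($query, $connection)

# Supply the values for the @Username and @Password parameters in the SQL script
$command.Parameters.AddWithValue("@Username", "testlogin") | Out-Null
$command.Parameters.AddWithValue("@Password", "TestP@ssw0rd!") | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()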

Looks familiar? Yes, indeed. It’s basically the same as using ADO.NET in ASP.NET applications. Let’s run this PowerShell script. Whoops! Something went wrong. We can’t run the SQL script. What’s happening?

Step #4: Update SQL Script for Login Account

CREATE LOGIN won’t take variables. In other words, the SQL script above will never work as written. In this case we would rather not use dynamic SQL, which is ugly, but we have no other choice. Therefore, let’s update the SQL script:
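A dynamic SQL version that does work looks something like the following: the statement is built as a string and executed as its own batch:

DECLARE @sql NVARCHAR(MAX)

-- Build the CREATE LOGIN statement as a string, then execute it
SET @sql = N'CREATE LOGIN [' + @Username + N'] WITH PASSWORD = ''' + @Password + N''''
EXEC (@sql)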

Then run the PowerShell script again and it will work. Please note that using dynamic SQL here isn’t a big issue, as these scripts are not exposed to the public anyway.

Step #5: Update SQL Script for User Login

In a similar way, we need to create a user in the Azure SQL Database. This also requires dynamic SQL like:
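Again the embedded script is missing here, but a sketch of it looks like this; note it must be run against the target database rather than master:

DECLARE @sql NVARCHAR(MAX)

-- Create the user for the login we created earlier
SET @sql = N'CREATE USER [' + @Username + N'] FOR LOGIN [' + @Username + N']'
EXEC (@sql)

-- Add the user to the db_owner role
SET @sql = N'ALTER ROLE [db_owner] ADD MEMBER [' + @Username + N']'
EXEC (@sql)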

This creates a user with the db_owner role. In order for the user to have only limited permissions, use the following dynamic SQL script instead:
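One way to do this (a sketch; which roles you pick will depend on your needs) is to use db_datareader and db_datawriter for read and write access only:

DECLARE @sql NVARCHAR(MAX)

SET @sql = N'CREATE USER [' + @Username + N'] FOR LOGIN [' + @Username + N']'
EXEC (@sql)

-- Grant read and write access only, instead of db_owner
SET @sql = N'ALTER ROLE [db_datareader] ADD MEMBER [' + @Username + N']'
EXEC (@sql)

SET @sql = N'ALTER ROLE [db_datawriter] ADD MEMBER [' + @Username + N']'
EXEC (@sql)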

Step #6: Modify PowerShell Script for User Login

In order to run the SQL script right above, run the following PowerShell script:
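It’s the same pattern as the earlier script; the only real differences are that the connection string now points at the target database instead of master, and only the username parameter is needed (names below are placeholders):

$connectionString = "Server=tcp:$serverName,1433;Database=mydatabase;User ID=$adminLogin;Password=$adminPassword;Encrypt=True;"
$query = Get-Content .\create-user.sql -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($query, $connection)
$command.Parameters.AddWithValue("@Username", "testlogin") | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()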

So far, we have walked through how we can use PowerShell scripts to create login accounts and users on Azure SQL Database. With this approach, DevOps engineers will easily be able to create accounts on Azure SQL Database by running a PowerShell script on their build or deployment server.

Programmatically interacting with Yammer via PowerShell – Part 2

In my last post I foolishly said that part 2 would be ‘coming in the next few days’. This of course didn’t happen, but I guess it’s better late than never!

In part 1, which is available here, I wrote about how it is possible to post to a Yammer group via a *.ps1 using a ‘Yammer Verified Admin’ account. While this worked a treat, it soon became apparent that this approach had limited productivity rewards. Instead, I wanted to create groups and add users to those groups, all while providing minimal inputs.

Firstly, there isn’t a documented create-group endpoint, but a quick hunt round the tinterweb with Google helped me uncover groups.json. This simply needs a name and whether the group is open or closed (open = $false, closed = $true). So, building on my example from Part 1, the below code should create a new group…

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$Token = "AyD********NB65i2LidQ"
$Group = "Posting to Yammer Group"
$GroupType = $True
$CreateGroupUri = "https://www.yammer.com/api/v1/groups.json?name=$Group&private=$GroupType"

    $Headers = @{
        "Accept" = "*/*"
        "Authorization" = "Bearer "+$Token
        "accept-encoding" = "gzip"
        "content-type" = "application/json"
         }

Invoke-WebRequest -Method POST -Uri $CreateGroupUri -Header $Headers
    You’ll notice I’ve moved away from Invoke-RestMethod to Invoke-WebRequest. This is due to finding a bug where the script would hang and eventually time out, which is detailed in this link.

All going well, you should end up with a new group which has your ‘Yammer Verified Admin’ as the sole member ala…

YammerGroup

Created Yammer Group

Great, but as I’ve just highlighted, there is only one person in that group, and that’s the admin account we’ve been using. To add other Yammer registered users to the group we need to impersonate them. This is only possible via a ‘Yammer Verified Admin’ account, for obvious chaos-avoiding reasons. So firstly, you need to grab the token of the user…

$GetUsersUri = "https://www.yammer.com/api/v1/users.json"
$YammerUPN = "dave.young@daveswebsite.com"
$YammerUsers = (Invoke-WebRequest -Uri $GetUsersUri -Method Get -Headers $Headers).content | ConvertFrom-Json

foreach ($YammerUser in $YammerUsers)
    {
    if ($YammerUser.email -eq $YammerUPN)
        {
        $YammerUserId = $YammerUser.id
        }
    }

$GetUserTokUri = "https://www.yammer.com/api/v1/oauth/tokens.json?user_id=$YammerUserId&consumer_key=$clientID"
$YammerUserDave = (Invoke-WebRequest -Uri $GetUserTokUri -Method Get -Headers $Headers).content | ConvertFrom-Json

To step you through the code: I’ve changed the URI to users.json, provided the UPN of the user that I want to impersonate, and I’m reusing the headers from the previously provided code. I grab all the users into the $YammerUsers variable and then do a foreach/if to obtain the id of the user. Now that we’ve got that, we can use tokens.json to perform a Get request. This will bring back a lot of information about the user, but most importantly you’ll get the token!

    user_id : 154**24726
    network_id : 20**148
    network_permalink : daveswebsite.com
    network_name : daveswebsite.com
    token : 18Lz3********Nu0JlvXYA
    secret : Wn9ab********kellNnQgvSfbGJjBfRMWZNICW0JTA
    view_members : True
    view_groups : True
    view_messages : True
    view_subscriptions : True
    modify_subscriptions : True
    modify_messages : True
    view_tags : True
    created_at : 2015/06/15 23:59:19 +0000
    authorized_at : 2015/06/15 23:59:19 +0000
    expires_at :

Storing this in the $UserToken variable allows you to append it to the Authorization header, so you can impersonate/authenticate on behalf of the user. The code looks like…

$UserToken = $YammerUserDave.token
$YammerGroupId = "61***91"

 $UserHeaders = @{
                "Accept" = "*/*"
                "Authorization" = "Bearer "+$UserToken
                "accept-encoding" = "gzip"
                "content-type" = "application/json"
                }

$PostGroupUri = "https://www.yammer.com/api/v1/group_memberships.json?group_id=$YammerGroupId"
$AddYammerUser = Invoke-WebRequest -Uri $PostGroupUri -Method Post -Headers $UserHeaders

So using the group that we created earlier and the correct variables we then successfully add the user to the group…

DaveGroup

Dave in the group

Something to be mindful of: when you pull the groups or the users, they come back in pages of 50. I found a Do/While loop worked nicely to build up the variables so they could then be queried, like this…

If ($YammerGroups.Count -eq 50)
    {
    $GroupCycle = 2   # page 1 has already been retrieved above
    DO
        {
        $GetMoreGroupsUri = "https://www.yammer.com/api/v1/groups.json?page=$GroupCycle"
        $MoreYammerGroups = (Invoke-WebRequest -Uri $GetMoreGroupsUri -Method Get -Headers $AdminHeaders).content | ConvertFrom-Json    
        $YammerGroups += $MoreYammerGroups
        $GroupCycle ++
        $GroupCount = $YammerGroups.Count
        }
    While ($MoreYammerGroups.Count -gt 0)
    }

Once you’ve got your head around this, the rest of the endpoints on the REST API are really quite useful. My only gripe right now is that they are missing a delete-group endpoint – hopefully it’ll be out soon!

Cheers,

Dave

Programmatically interacting with Yammer via PowerShell – Part 1

For my latest project I was asked to automate some Yammer activity. I’m the first to concede that I don’t have much of a Dev background, but I instantly fired up PowerShell ISE in tandem with Google, only to find… well, not a lot! After a couple of weeks fighting with a steep learning curve, I thought it best to blog my findings; it’s good to share ‘n all that!

    It’s worth mentioning at the outset, if you want to test this out you’ll need an E3 Office 365 Trial and a custom domain. It’s possible to trial Yammer, but not with the default *.onmicrosoft.com domain.

First things first: there isn’t a PowerShell module for Yammer. I suspect it’s been on the to-do list over in Redmond since their 2012 acquisition. So instead, the REST API is our interaction point. There is some very useful documentation, along with examples of the JSON queries, over at the Yammer developer site, linked here.

The site also covers the basics of how to interact using the REST API. Following the instructions, you’ll want to register your own application. This is covered perfectly in the link here.

When registering you’ll need to provide an Expected Redirect. For this I simply put my Yammer site address, https://www.yammer.com/daveswebsite.com. For the purposes of my testing, I’ve not had any issues with this setting. This URL is important and you’ll need it later, so make sure to take a note of it. From the registration you should also grab your Client ID & Client secret.

While we’ve got what appears to be the necessary tools to authenticate, we actually need to follow some steps to retrieve our Admin token.

    It is key to point out that I use a Yammer Verified Admin account. This will be more critical for Part 2 of my post, but it’s always good to start as you mean to go on!

So, the following script will load Internet Explorer and compile the necessary URL. You will of course simply replace the entries in the variables with the ones you created during your app registration. I have obfuscated some of the details in my examples, for obvious reasons 🙂

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$RedirURL = "https://www.yammer.com/daveswebsite.com"

$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
$ie.Navigate2("https://www.yammer.com/dialog/oauth?client_id=$clientID&redirect_uri=$RedirURL")

From the IE window, you should log in with your Yammer Verified Admin account and authorise the app. Once logged in, proceed to this additional code…

$UrlTidy = $ie.LocationURL -match 'code=(......................)'; $Authcode = $Matches[1]
$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
$ie.Navigate2("https://www.yammer.com/oauth2/access_token.json?client_id=$clientID&client_secret=$clientsecret&code=$Authcode")

This script simply captures the 302 return and extracts the $Authcode, which is required for the token request. It will then launch an additional Internet Explorer session and prompt you to download an access_token.json file. Within here you will find your Admin Token, which does not expire and can be used for all further admin tasks. I found it useful to load this into a variable using the code below…

$Openjson = $(Get-Content 'C:\Tokens\access_token.json' ) -join "`n" | ConvertFrom-Json
$token = $Openjson.access_token.token

Ok, so we seem to be getting somewhere, but our Yammer page is still looking rather empty! Now that all the prerequisites are complete, we can make our first post. A good example JSON to use is posting a message, which is detailed in the link here.

I started with this one mainly because all we need is the Group_ID of one of the groups in Yammer and the message body in json format. I created a group manually and then just grabbed the Group_ID from the end of the URL in my browser. I have provided an example below…

$uri = "https://www.yammer.com/api/v1/messages.json"

$Payloadjson = '{
"body": "Posting to Yammer!",
"group_id": 59***60
}'

$Headers = @{
"Accept" = "*/*"
"Authorization" = "Bearer "+$token
"accept-encoding" = "gzip"
"content-type"="application/json"
}

Invoke-RestMethod -Method Post -Uri $uri -Header $Headers -Body $Payloadjson
Yammer Result

Yammer Post

    It’s at this stage you’ll notice that I’ve only used my second cmdlet, Invoke-RestMethod. Both this and ConvertFrom-Json were introduced in PowerShell 3.0 and specifically designed for REST web services like this.

A key point to highlight here is the Authorization attribute in the $Headers. This is where the $Token is passed to Yammer for authentication. Furthermore, this $Headers construct is all you need going forward. It’s simply a case of changing the -Method, the $uri and the $Payload, and you can play around with all the different JSON queries listed on the Yammer site.

While this was useful for me, it soon became apparent that I wanted to perform actions on behalf of other users. This is something I’ll look to cover in Part 2 of this Blog, coming in the next few days!

Automate your Cloud Operations Part 1: AWS CloudFormation

Operations

What is Operations?

In the IT world, Operations refers to a team or department within IT which is responsible for the running of a business’ IT systems and infrastructure.

So what kind of activities does this team perform on a day-to-day basis?

Building, modifying, provisioning and updating systems, software and infrastructure to keep them available, performing and secure, ensuring that users can be as productive as possible.

When moving to public cloud platforms the areas of focus for Operations are:

  • Cost reduction: design it properly and apply good practices when managing it (scale down / switch off)
  • Smarter operation: use automation and APIs
  • Agility: provision infrastructure or environments faster by automating everything
  • Better uptime: plan for failover, and design effective DR solutions more cost effectively.

If Cloud is the new normal then Automation is the new normal.

For this blog post we will focus on automation using AWS CloudFormation. The template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

AWS CloudFormation

AWS CloudFormation gives developers and system administrators an easy way to create and manage a collection of related AWS resources, including provisioning and updating them in an orderly and predictable fashion. AWS provides various CloudFormation templates, snippets and reference implementations.

Let’s talk about versioning before diving deeper into CloudFormation. It is extremely important to version your AWS infrastructure in the same way you version your software. Versioning will help you track change within your infrastructure by identifying:

  • What changed?
  • Who changed it?
  • When was it changed?
  • Why was it changed?

You can tie this version to a service management or project delivery tool if you wish.

You should also put your templates into source control. Personally I am using GitHub to version my infrastructure code, but any system such as Team Foundation Server (TFS) will do.

AWS Infrastructure

The below diagram illustrates the basic AWS infrastructure we will build and automate for this blog post:

CloudFormation1

Initial Stack

Firstly we will create the initial stack. Below are the components for the initial stack:

  • A VPC with a CIDR block of 192.168.0.0/16 (65,536 IP addresses)
  • Three public subnets across 3 Availability Zones: 192.168.10.0/24, 192.168.11.0/24, 192.168.12.0/24
  • An Internet Gateway attached to the VPC to allow public Internet access. This is a routing construct for the VPC and not an EC2 instance
  • Routes and route tables for the three public subnets so EC2 instances in them can communicate with the Internet
  • Default Network ACLs to allow all communication inside of the VPC.

Below is the CloudFormation template to build the initial stack.

The template can be downloaded here: https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template

I put together the following video on how to use the template:

Understanding a CloudFormation template

AWS CloudFormation is pretty neat and FREE. You only need to pay for the AWS resources provisioned by the CloudFormation template.

The next bit is understanding the structure of the template. Typically a CloudFormation template will have 5 sections:

  • Headers
  • Parameters
  • Mappings
  • Resources
  • Outputs

Headers: the template format version and a description of what the stack does.

Parameters: values supplied at provision time, for example via command-line options.

Mappings: conditional, case-statement-like lookups.

Resources: all the resources to be provisioned.

Outputs: values returned once the stack has been built.
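The embedded examples haven’t survived here, so below is a stripped-down skeleton showing all five sections in one place. The VPC resource and the values are illustrative only (the AMI id is a placeholder):

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Demo VPC stack",

  "Parameters" : {
    "VpcCidr" : {
      "Type" : "String",
      "Default" : "192.168.0.0/16",
      "Description" : "CIDR block for the VPC"
    }
  },

  "Mappings" : {
    "RegionMap" : {
      "ap-southeast-2" : { "AMI" : "ami-xxxxxxxx" }
    }
  },

  "Resources" : {
    "DemoVPC" : {
      "Type" : "AWS::EC2::VPC",
      "Properties" : {
        "CidrBlock" : { "Ref" : "VpcCidr" }
      }
    }
  },

  "Outputs" : {
    "VpcId" : {
      "Description" : "Id of the new VPC",
      "Value" : { "Ref" : "DemoVPC" }
    }
  }
}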

Note: Not all AWS resources can be provisioned using AWS CloudFormation, and the service has some limitations.

In Part 2 we will dive deeper into AWS CloudFormation and automate EC2, including the configuration for NAT and Bastion Host instances.

http://www.wasita.net

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have been launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward, and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools, and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

Migrate-AzureVM

  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (the same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage Accounts, Cloud Services, VNets etc. have already been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false
  • SourceSubscription – Name of the source Azure Subscription
  • SourceServiceName – Name of the source Cloud Service
  • VMName – Name of the VM to migrate
  • DestSubscription – Name of the destination Azure Subscription
  • DestStorageAccountName – Name of the destination Storage Account
  • DestServiceName – Name of the destination Cloud Service
  • DestVNETName – Name of the destination VNet (blank if none used)
  • IsReadOnlySecondary – Indicates if we are copying from the source storage account’s read-only secondary location
  • Overwrite – Indicates if we overwrite the VHD if it already exists in the destination storage account
  • RemoveDestAzureDisk – Indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shut down the VM and export the configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shutdown we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

# Get list of Azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region, copy times can vary between 15 mins and a few hours; it all depends on which storage cluster the accounts reside on. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Between Regions will always take longer (and incur data egress charges, don’t forget!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run as you’ll have times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things might go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. So we don’t get clashes, I use a naming convention based on the Cloud Service name (which must be globally unique), the VM name and a disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you looking at migrating VMs between Regions within the same Geo who have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available on GitHub. The script also supports a number of other migration scenarios (e.g. cross-Subscription, cross-Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.