Office 365 URLs and IP address updates for firewall and proxy configuration, using Flow and Azure Automation

tl;dr

To use Microsoft Office 365, an organisation must allow traffic to (and sometimes from) the respective cloud services over the internet, on specific ports and protocols, to various URLs and/or IP addresses, or, if you meet the requirements, via Azure ExpressRoute. Obvious, right?!
To expand on that: connections to trusted networks (and we assume Office 365 falls into that category) that are also high in volume (since most communication and collaboration infrastructure resides there) should go via a low-latency egress that is as close to the end user as possible.
As more and more customers use the service, and as more and more services and functionality are added, so too will the URLs and IP addresses need to change over time. Firewalls and proxies need to be kept up to date with the destination details of Office 365 services. This is an evergreen solution, let's not forget. So, it's important to put processes in place to correctly optimise connectivity to Office 365. It's also very important to note that if these change management processes are ignored, services will end up blocked or will deliver inconsistent experiences for end users.

Change is afoot

Come October 2nd 2018, Microsoft will change the way customers keep up to date with changes to these URLs and IP addresses. A new web service is coming online that publishes Office 365 endpoints, making it easier for you to evaluate, configure, and stay up to date with changes.

Furthermore, the holistic overview of these URLs and IP addresses is being broken down into three new key categories: OPTIMISE, ALLOW and DEFAULT.

You can get more details on these 3x categories from the following blog post on TechNet: https://blogs.technet.microsoft.com/onthewire/2018/04/06/new-office-365-url-categories-to-help-you-optimize-the-traffic-which-really-matters/
 
It's not all doom and gloom just because the old RSS feed no longer works. The new web service (still in public preview at the time of writing this blog) is rather zippy and allows for some great automation. So, that's the target state: automation.
Microsoft wants to make it nice and easy for firewall, proxy and other edge security appliance vendors or service providers to programmatically interact with the web service and offer dynamic updates for Office 365 URL and IP address information. In practice, change management and governance processes will still need to be followed; in most circumstances, organisations follow whatever ITIL or ITIL-like methodologies they have in place for those sorts of things.
The dream Microsoft has, though, is actually one that is quite compelling.
Before we get to this streamlined utopia where my customers' edge devices update automatically, I've needed to come up with a process for the interim tactical state. The process runs as follows:

  • Check daily for changes in Office 365 URLs and IP addresses
  • Download changes in a user readable format (So, ideally, no XML or JSON. Perhaps CSV for easy data manipulation or even ingestion into production systems)
  • Email intended parties that there has been a change in the global version number of the current Office 365 URLs and IP addresses
  • Allow intended parties to download the output

NOTE – for my use case here, the output is purely IP addresses. That's because the infrastructure run by the teams I'll be sending this information to only allows for that data type. If you tweak the web service request (details further down), you can grab both URLs and IP addresses, or one or the other.
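For reference, here's a rough sketch (separate from the runbooks further down) of how the same web service call can be tweaked to return URLs, IP addresses, or both per category; the urls, ips and category properties are those returned by the endpoints web service, and the CSV file name is just an example:

# Generate a GUID once and re-use it as your ClientRequestId
$clientRequestId = [guid]::NewGuid().Guid
$endpoints = Invoke-RestMethod -Uri "https://endpoints.office.com/endpoints/Worldwide?ClientRequestId=$clientRequestId"

# IP addresses only (the use case in this post)
$ips = $endpoints | Where-Object { $_.ips } | ForEach-Object { $_.ips } | Sort-Object -Unique

# URLs only
$urls = $endpoints | Where-Object { $_.urls } | ForEach-Object { $_.urls } | Sort-Object -Unique

# Or both, flattened per endpoint set into a CSV for easy manipulation
$endpoints | Select-Object id, serviceArea, category,
    @{Label="URLs";Expression={$_.urls -join ";"}},
    @{Label="IPs";Expression={$_.ips -join ";"}} |
    Export-Csv .\o365-endpoints.csv -NoTypeInformation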

 

Leveraging Microsoft Flow and Azure Automation

My first instinct here was to use Azure Automation and run a very long PowerShell script with If's and Then's and so on. However, when going through the script, 1) my PowerShell skills are not at a high enough level to bang this out, and 2) Flow is an amazing tool for running through some of the tricky bits in a more effortless way.
So, leveraging the goodness of Flow, here's a high-level rundown of what the solution looks like:

 
The workflow runs as follows:

  1. Microsoft Flow
  2. On a daily schedule, the flow is triggered at 6am
  3. Runbook #1
    1. Runbook is initiated
    2. Runbook imports CSV from Azure Blob
    3. PowerShell runs a command to query the web service and saves the output to CSV
    4. CSV is copied to Azure Blob
  4. Runbook #2
    1. Runbook is initiated
    2. Runbook imports CSV from Azure Blob
    3. The last cell in the version column is compared to the previous
    4. An Output is saved to Azure Automation if a newer version found, “NEW-VERSION-FOUND”
  5. The Output is taken from the previous Azure Automation Runbook run
  6. A Flow Condition is triggered – YES if Output is found, NO if nothing found

Output = YES

  • 7y1 = Runbook #3 is run
    • Runbook queries web service for all 3 conditions: optimise, allow and default
    • Each query for that day's IP address information is saved into 3 separate CSV files
  • 7y2 = CSV files are copied to Azure Blob
  • 7y3 = Microsoft Flow queries Azure Blob for the three files
  • 7y4 = An email template is used to email the respective interested parties about the change to the IP address information
    • The 3x files are added as attachments

Output = Nothing or NO

  • 7n1 = Send an email to the service account mailbox to say there were no changes to the IP address information for that day

 

The process

Assuming, dear reader, that you have some background with Azure and Flow, here's a detailed outline of the process I went through (and one that you can replicate) to automate checking for updates to the Office 365 URL and IP address data and getting them to the relevant parties.
Let's begin!

Step 1 – Azure AD
  • I created a service account in Azure AD that has been given an Office 365 license for Exchange Online and Flow
  • The user details don’t really matter here as you can follow your own naming convention
  • My example username is as follows: svc-as-aa-01@[mytenant].onmicrosoft.com
    • Naming convention being: “Service account – Australia South East – Azure Automation – Sequence number”
Step 2 – Azure setup – Resource Group
  • I logged onto my Azure environment and created a new Resource Group
  • My solution has a couple of components (Azure Automation account and a Storage account), so I put them all in the same RG. Nice and easy
  • My Resource Group details
    • Name = [ASPRODSVCAA01RG]
    • Region = Australia South East as that’s the local Azure Automation region
    • That’s a basic naming convention of: “Australia South East – Production environment – Purpose, being for the SVC account and Azure Automation – Sequence number – Resource Group”
  • Once the group was created, I added my service account as a Contributor to the group
    • This allows the account downstream permissions to the Azure Automation and Storage accounts I’ll add to the resource group
Step 3 – Azure Setup – Storage account
  • I created a storage account and stored that in my resource group (if you'd rather script this step than click through the portal, there's a rough sketch just after this list)
  • The storage account details are as follows
    • Name = [asprodsvcaa01] = Again, follow your own naming convention
    • Deployment model = Resource manager
    • Storage General Purpose v2
    • Local redundant storage only
    • Standard performance
    • Hot access tier
  • Within the storage account, I’ve used Blob storage
    • There are two containers that I used:
      • Container #1 = “daily”
      • Container #2 = “ipaddresses”
    • This is where the output CSV files will be stored
  • Again, we don’t need to assign any permissions as we assigned Contributor permissions to the resource group
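As flagged above, if you'd rather script the storage setup than click through the portal, a rough PowerShell sketch using my example names (adjust to your own naming convention) would be:

# Assumes you're already logged in with Login-AzureRmAccount
# Create the storage account in the existing resource group
New-AzureRmStorageAccount -ResourceGroupName "ASPRODSVCAA01RG" `
    -Name "asprodsvcaa01" `
    -Location "australiasoutheast" `
    -SkuName "Standard_LRS" `
    -Kind "StorageV2" `
    -AccessTier Hot

# Grab a key and context, then create the two blob containers
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "ASPRODSVCAA01RG" -Name "asprodsvcaa01").Value[0]
$ctx = New-AzureStorageContext -StorageAccountName "asprodsvcaa01" -StorageAccountKey $key
New-AzureStorageContainer -Name "daily" -Context $ctx -Permission Off
New-AzureStorageContainer -Name "ipaddresses" -Context $ctx -Permission Off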
Step 4 – Azure Setup – Azure Automation
  • I created a new Azure Automation account with the following parameters
    • Name = [SVCASPROD01AA] = Again, follow your own naming convention
    • Default parameters, matching my resource group settings
    • Yes, I also had a Run As account created (default option)
  • I created three Runbooks, as per below

 

  • Step1-GetGlobalVersion = Again, follow your own naming convention
  • This is a Powershell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion
#region CONNECT
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion
#region IMPORT CSV FILE FROM BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Get-AzureStorageBlob -Context $storageContext -Container "[name here]" | Get-AzureStorageBlobContent -Destination . -Context $storageContext -Force
#endregion
#region GET CURRENT VERSION
$DATE = $(((Get-Date).ToUniversalTime()).ToString("yyyy-MM-dd"))
Invoke-RestMethod -Uri https://endpoints.office.com/version/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7 | Select-Object @{Label="VERSION";Expression={($_.Latest)}},@{Label="DATE";Expression={($DATE)}} | Export-Csv [daily-export.csv] -NoTypeInformation -Append
# SAVE TO BLOB (re-uses the storage context created above)
Set-AzureStorageBlobContent -File [.\daily-export.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
#endregion
#region OUTPUT
Write-Output "SCRIPT-COMPLETE"
#endregion

 

  • Step2-CheckGlobalVersion = Again, follow your own naming convention
  • This is a Powershell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion
#region CONNECT
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion
#region IMPORT CSV FILE FROM BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Get-AzureStorageBlob -Context $storageContext -Container "[name here]" | Get-AzureStorageBlobContent -Destination . -Context $storageContext -Force
#endregion
#region CHECK IF THERE IS A DIFFERENCE IN THE VERSION
$ExportedCsv = Import-Csv [.\daily-export.csv]
$Last = $ExportedCsv | Select-Object -Last 1 -ExpandProperty Version # Last value in the Version column
$SecondLast = $ExportedCsv | Select-Object -Last 1 -Skip 1 -ExpandProperty Version # Second last value in the Version column
If ($Last -gt $SecondLast) {
    Write-Output '[NEW-VERSION-FOUND]'
}
#endregion

 

  • Step3-GetURLsAndIPAddresses = Again, follow your own naming convention
  • This is a Powershell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion
#region CONNECT (same service principal login as the previous runbooks; needed for the storage cmdlets below)
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion
#region EXECUTE PROCESS TO DOWNLOAD NEW VERSION
$endpoints = Invoke-RestMethod -Uri https://endpoints.office.com/endpoints/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7
$endpoints | Foreach {if ($_.category -in ('Optimize')) {$_.IPs}} | Sort-Object -Unique | Out-File [.\OptimizeFile.csv]
$endpoints | Foreach {if ($_.category -in ('Allow')) {$_.IPs}} | Sort-Object -Unique | Out-File [.\AllowFile.csv]
$endpoints | Foreach {if ($_.category -in ('Default')) {$_.IPs}} | Sort-Object -Unique | Out-File [.\DefaultFile.csv]
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Set-AzureStorageBlobContent -File [.\OptimizeFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
Set-AzureStorageBlobContent -File [.\AllowFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
Set-AzureStorageBlobContent -File [.\DefaultFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
#endregion
#region OUTPUT
Write-Output "SCRIPT COMPLETE"
#endregion
  • Note that we don't need to import the complete set of AzureRM PowerShell modules
  • You'll find that if you do something "lazy" like that, there are a whole lot of dependencies in Azure Automation
    • You'll need to manually add in all the sub-modules, which is very time consuming
Step 5 – Microsoft Flow
  • With my service account having a Flow license, I created my Flow there
  • This means that I can pass this onto Managed Services to run with and maintain going forward
  • I started with a blank Flow
  • I added a schedule
    • The schedule runs at 6am every day

  • Step 1 is to add in an Azure Automation Create Job task
    • This is to execute the Runbook “Step1-GetGlobalVersion”
    • Flow will try and connect to Azure with our Service account
    • Because we added all the relevant permissions earlier in Azure, the Resource Group and downstream resources will come up automatically
    • Enter in the relevant details

  • Step 2 is to add in another Azure Automation Create Job task
    • This is to execute the Runbook “Step2-CheckGlobalVersion”
    • Again, Flow will connect and allow you to select resources that the service account has Contributor permissions to

  • Step 3 is to add in an Azure Automation Get Job Output
    • This is to grab the Output data from the previous Azure Automation runbook
    • The details are pretty simple
    • I selected the “JobID” from the Step 2 Azure Automation runbook job

  • Step 4 is where things get interesting
  • This is a Flow Condition
  • This is where we specify that if a value of "NEW-VERSION-FOUND" is found in the content of the Output from the Step 2 job, we do something; otherwise, we do nothing

  • Step 5 is where I added in all the “IF YES” flow to Do something because we have an output of “NEW-VERSION-FOUND”
  • The first sub-process is another Azure Automation Create Job task
  • This is to execute the Runbook “Step3-GetURLsandIPaddresses”
  • Again, Flow will connect and allow you to select resources that the service account has Contributor permissions to

  • Step 6 is to create 3 x Get Blob Content actions
  • This will allow us to connect to Azure Blob storage and grab the 3x CSV files that the previous step output to Blob
  • We’re doing this so we can embed them in an email we’re going to send to relevant parties in need of this information

  • Step 7 is to create an email template
  • As we added an Exchange Online license to our service account earlier, we'll have the ability to send email from the service account's mailbox
  • The details are pretty straight forward here:
    • Enter in the recipient address
    • The sender
    • The subject
    • The email body
      • This can be a little tricky, but I've found that if you enable HTML (the last option in the Send An Email action), you can use <br> line breaks to space out your email nicely
    • Lastly, we’ll attach the 3x Blobs that we picked up in the previous step
    • We just need to manually set the name of the email attachment file
    • Then select the Content via the Dynamic Content option
      • Note: if you see the error "We can't find any outputs to match this input format. Select to see all outputs from previous actions." – simply hit the "See more" button
      • The See more button will show you the content option in the previous step (step 6 above)

  • Step 8 is to go over to the If No condition
  • This is probably optional because, as the old saying goes, "no news is good news"
  • However, for the purposes of tracking how often changes happen easily, I thought I’d email the service account and store a daily email if no action was taken
    • I’ll probably see myself as well here to keep an eye on Flow to make sure it’s running
    • I can use inbox rules to move the emails out of my inbox and into a folder to streamline it further and keep my inbox clean
  • The details are pretty much the same as the previous Step 7
    • However, there’s no attachments required here
    • This is a simple email notification where I entered the following in the body: “### NO CHANGE IN O365 URLs and IP ADDRESSES TODAY ###”


 

Final words

Having done many Office 365 email migrations, I've come to use PowerShell and CSVs quite a lot to make my life easier when there are thousands of records to work with. This process draws on that experience and that speed of putting a solution together using CSV files. I'm sure there are better ways to streamline that component, for example using Azure Table Storage.
I'm also sure there are better ways of storing credential information; for the time being that isn't a problem while I work out this new process. The overall governance will get ironed out and I'll likely leverage the Azure Automation credential store, or even Azure Key Vault.
If you, dear reader, have found a more streamlined and novel way to achieve this that requires even less effort to setup, please share!
Best,
Lucian
#WorkSmarterNotHarder
 

Creating your own PowerShell modules for Azure Automation – Part 2

In my previous article I explained the basic steps for creating a PowerShell module that you can upload to Azure Automation. Now, being pedantic by nature, I see a lot of scripts of really poor quality and as I write modules quite regularly, I want to share some useful tips to turn a rubbish module into something significantly better.
Previously, I wrote a basic module that looks like this:
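In essence, it was a single New-Compliment function with a simple parameter block; a minimal sketch (the compliment text itself is just a placeholder) looks like this:

function New-Compliment {
    param($name)
    Write-Host "$name, you look amazing today!"
}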

You can run the module by using the command new-compliment -name "someone". However, the power of PowerShell is in its object-oriented nature and pipelining, so we really need to spruce this up so that it becomes a better-quality module. Note, it'll still be an absurdly useless module, but it'll give you an idea of how to write them.

The parameters section

As you can see at the top, we have a parameter block. That should be standard, but we are not providing any additional information about what the parameter itself is, nor do we specify the rules. That's no good. Right now, you cannot use pipelining, which limits the usability. Always write your modules with pipelining in mind. It's simply good practice and makes for much more powerful use later on.
So, decorate the variable $name with a parameter block: [Parameter()]$name
This doesn’t do anything and won’t make a difference yet, but now we can start providing some additional settings for the parameters and that’s where it becomes more useful.
So make the following change: [Parameter(Mandatory=$true)]$name. Now we have made this a mandatory parameter. If it is not provided, an error is shown. Good, but still no pipelining.
So, do the following: [Parameter(Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]$name. The two pipeline options do different things. The first, ValueFromPipeline, essentially states that whatever is sent through the pipeline goes into this parameter. Logically, you can really only do this once. If the pipeline contains something with multiple values (like an object), this won't work because it won't match a simple string parameter. That's what the second option, ValueFromPipelineByPropertyName, is for. If you send an object through the pipeline, and that object has a property called "name", then it will be automatically mapped to this $name parameter. You want to set this option on almost all your parameters. If you do that, you can run your command many times based on the output of an Import-Csv (where you need a column called 'name').

Pipelining arrays

With the above changes you can run the following
$name = "Ward"
$name | new-compliment
However, what happens if you create an array of names and pipeline that:
$names = @("Ward","James","Chris")
$names | new-compliment
As you'll see, only the last name in the array is used. That's not what we want. We want each entry in the array to be processed. So, the best way of doing that is to use a Begin, Process and End block in the function. I rarely see people using this, which is unfortunate, as this is really useful functionality.
So, in the main section of the function, create the following three sections:
Begin { }
Process { }
End { }
Now, move the Write-Host line into the curly braces of the Process section. Test, and suddenly our command runs for each item in the array.
What happened? Quite simple, the Begin section will contain code that you want to run only once. This isn’t always used, but can be useful for checking certain things before proceeding such as whether or not there is connectivity. The process section, will run for each item in the pipeline. As our array contains 3 items, this runs 3 times. The End section is like the Begin section in that it runs only once, after the process section has completed.
Our module is already starting to look a bit more professional.

Adding help

The next overlooked thing is adding help information. It’s easy to skip. Don’t! Six months from now, you will have forgotten how to use your module and this help information is going to be really useful. Adding help is easy. Add the following comment section right before the function in the module file. If you have multiple functions, you’ll add this separately for each function.
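A minimal sketch of what that comment-based help section could look like for our New-Compliment example:

<#
.SYNOPSIS
Gives someone a compliment.

.DESCRIPTION
Writes a compliment for the specified person to the host.

.PARAMETER name
The name of the person to compliment. Accepts pipeline input.

.EXAMPLE
New-Compliment -name "Ward"

.EXAMPLE
Import-Csv .\people.csv | New-Compliment
#>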

Avoiding disasters

What if, instead of a compliment function, we created an insult function? Perhaps we want to make sure that before you run it, you are really sure. So, we want confirmation before proceeding. You'd do this for any command that does something destructive.
Fortunately, PowerShell contains all we need, you just need to enable it. So, add the following line right before your param block:
[CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
Next, in your process block, you want to put an if statement around the actual actions so that they are only carried out in case there is a positive confirmation:
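A sketch of what the Process block then looks like, with the confirmation check wrapped around the action via $PSCmdlet.ShouldProcess:

Process {
    # Only run the action when the user confirms (or -Confirm:$false is passed)
    if ($PSCmdlet.ShouldProcess($name, "Send compliment")) {
        Write-Host "$name, you look amazing today!"
    }
}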

If you run the function now, you'll be asked to confirm. If you include this in a script, you can override the confirmation with -Confirm:$false

Lastly exporting functions

In a larger module, you will have functions that are only used by the module itself. These functions are not used by the end users; they shouldn't even be able to run them. In a module we can specify which functions are available through the Export-ModuleMember command. Add the following to the end of the module file:
Export-ModuleMember New-Compliment
The complete module now looks like this:
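A sketch of the complete module, pulling all of the above together (help section trimmed for brevity); treat it as a starting template rather than the exact original:

<#
.SYNOPSIS
Gives someone a compliment.
.PARAMETER name
The name of the person to compliment. Accepts pipeline input.
.EXAMPLE
New-Compliment -name "Ward"
#>
function New-Compliment {
    [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
        $name
    )
    Begin {
        # Runs once, before any pipeline input is processed
    }
    Process {
        # Runs once for every item coming through the pipeline
        if ($PSCmdlet.ShouldProcess($name, "Send compliment")) {
            Write-Host "$name, you look amazing today!"
        }
    }
    End {
        # Runs once, after all pipeline input has been processed
    }
}

Export-ModuleMember New-Compliment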

You can use this as a template for a PowerShell module, as there's only one line that actually does something; change that instruction and change the parameters. I know that some of this stuff isn't needed in Azure, but it's better to build good habits.

Creating your own PowerShell modules for Azure Automation – Part 1

Creating a PowerShell module is an easy way to create scripts you can use over and over again. If you Google  this you’ll find that to create a module is as simple as creating a PowerShell Script with the psm1 extension. However, that won’t work for Azure. Azure loads modules automatically, so you need to write your module to load automatically as well. To ensure a module loads correctly, you’ll need to create a module manifest file.
Here’s how you do it.

Step 1. Decide on a name.

You can name your modules whatever you want. However, you cannot have the same module loaded twice. Similarly, should you want to list your module in the module gallery, you'll need to ensure the name is unique. What convention you use is up to you, but I typically use the company domain name backwards and add the purpose of the module. So, here I am creating a module called au.com.kloud.demo.

Step 2. Create the necessary files

Following my example, I create a folder called au.com.kloud.demo, and inside I need two files: the module and the manifest: au.com.kloud.demo.psm1, and au.com.kloud.demo.psd1.
The easiest way to create the manifest is to run the following command in powershell: new-modulemanifest -path .\au.com.kloud.demo.psd1

Step 3. Update the manifest

The new-modulemanifest command always creates the same file with a unique GUID so you’ll need to edit it before you can use it.
The first thing you need to do with the manifest is adjust the content for your module. Specifically do the following:

  • uncomment RootModule and change it to RootModule = ‘au.com.kloud.demo.psm1’
  • Change FunctionsToExport to FunctionsToExport = ‘*’
  • update the version number

All three steps are important, without the first, no module will actually be loaded, the second step makes sure the actual functions from the module are available and the last is to ensure that you actually track when the module has been updated.
Here is an example of a module manifest.
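A trimmed-down sketch of the relevant parts of the manifest after those edits (the GUID is whatever New-ModuleManifest generated for you, and the author is a placeholder):

@{
    # Script module file associated with this manifest
    RootModule = 'au.com.kloud.demo.psm1'

    # Version number of this module; increment it on every update
    ModuleVersion = '1.0.1'

    # ID used to uniquely identify this module (generated by New-ModuleManifest)
    GUID = '00000000-0000-0000-0000-000000000000'

    # Author of this module
    Author = 'Your Name'

    # Functions to export from this module
    FunctionsToExport = '*'

    # Cmdlets to export (used for binary modules written in C#)
    CmdletsToExport = @()

    # Assemblies that must be loaded prior to importing this module
    # RequiredAssemblies = @('.\some.dependency.dll')
}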

When creating a module in PowerShell you specify the functions to include using the FunctionsToExport statement. The CmdletsToExport statement is what you would use for binary modules created in C#. As you can see, there is a section for ‘RequiredAssemblies’. You can use that to load dlls that you use in your script and is one way of getting around some of the limitations using assemblies in Azure Automation.

Step 4. Create the module

Now, a module is a basic PowerShell script written as a function. Here is an absurdly useless (and badly written) module:
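A minimal sketch of that module, matching the New-Compliment example used in Part 2 above (the compliment text is a placeholder):

function New-Compliment {
    param($name)
    Write-Host "$name, you look amazing today!"
}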

Step 5. Test

The following tests must be successful, or your module is not going to work when uploaded to Azure:

  • from PowerShell run test-modulemanifest au.com.kloud.demo\au.com.kloud.demo.psd1  (the command in the module should be listed.)
  • copy the au.com.kloud.demo folder with the 2 files into the modules folder of PowerShell. There are several locations, such as C:\Windows\System32\WindowsPowerShell\v1.0\Modules. Now close all PowerShell shells; when you re-open PowerShell, the module should be loaded automatically, which you can confirm by trying to run new-compliment -name "something".

If the above doesn’t work, then the module won’t work in Azure either.

Step 6. Upload

  1. increment the version number!
  2. select the au.com.kloud.demo folder, and compress it to au.com.kloud.demo.zip.
  3. In the automation account, go to modules, click add new module and point to the zip file.
  4. click OK etc, and after a couple of minutes the module should be available and if you click it, you should see the command you’ve made available.

The module itself is pretty rubbish. We all love cheap flattery, but writing a good PowerShell module requires more than that. See Part 2 on how to make a better module.

Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement: integrating some Office 365 mailbox data into the Splunk reporting platform.
I initially thought about using a .csv export methodology; however, through trial and error (more error than trial, if I'm being honest), and realising that this method still required some manual interaction, I decided to embark on finding a fully automated solution.
The final solution comprises the below components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure Automation account)

I’m not going to run through the creation of the automation account, or required credentials as these had already been created, however there is a great guide to configuring the solution I have used for this customer at  https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html
What the PowerShell script we are using will achieve is the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear Existing PS Sessions
Get-PSSession | Remove-PSSession | Out-Null
#Create Split Function for CSV file
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.count) { $end = $inArray.count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return ,$outArray
}
function Connect-ExchangeOnline {
    param(
        $Creds
    )
    #Connect to Exchange Online and import only the cmdlets we need
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}
#Create Variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from HTTP Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'
#Connect to Azure
Add-AzureRMAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials
#Invoke Script
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy
#Get Current Date & Time
$time = Get-Date -Format s
#Convert Timezone to Australia/Brisbane
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')
#Adding Time Column to Output
$mailboxes = $mailboxes | Select-Object @{expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy
#Create Split Array for Mailboxes Spreadsheet
$recipients = Split-Array -inArray $mailboxes -parts 5
#Create JSON objects and HTTP POST to Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Create SSL Validation Bypass for Self-Signed Certificate in Testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Get JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #Post to Splunk HTTP Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Header $header
    }
}
Get-PSSession | Remove-PSSession | Out-Null

 
The final output that can be seen in Splunk looks like the following:

11/13/17 12:28:22.000 PM
{
   AddressBookPolicy:
   DisplayName: Shane Fisher
   ForwardingSmtpAddress:
   GrantSendOnBehalfTo:
   IsMailboxEnabled: True
   PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
   ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
   Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.
Cheers,
Shane.
 
 
 

Ok Google Email me the status of all vms – Part 2

First published at https://nivleshc.wordpress.com
In my last blog, we configured the backend systems necessary for accomplishing the task of asking Google Home “OK Google Email me the status of all vms” and it sending us an email to that effect. If you haven’t finished doing that, please refer back to my last blog and get that done before continuing.
In this blog, we will configure Google Home.
Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.
For our purposes, we will be using the platform IF This Then That or IFTTT for short. IFTTT is a very powerful platform as it lets you create actions based on triggers. This combination of triggers and actions is called a recipe.
Ok, let's dig in and create our IFTTT recipe to accomplish our task.
1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)
1.2   Login to IFTTT and click on My Applets menu from the top
IFTTT_MyApplets_Menu
1.3   Next, click on New Applet (top right hand corner)
1.4   A new recipe template will be displayed. Click on the blue +this to choose a service
IFTTT_Reicipe_Step1
1.5   Under Choose a Service type “Google Assistant”
IFTTT_ChooseService
1.6   In the results Google Assistant will be displayed. Click on it
1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, login with the Google account that is associated with your Google Home and then approve IFTTT to access it.
IFTTT_ConnectGA
1.8   The next step is to choose a trigger. Click on Say a simple phrase
IFTTT_ChooseTrigger
1.9   Now we will put in the phrases that Google Home should trigger on.
IFTTT_CompleteTrigger
For

  • What do you want to say? enter "email me the status of all vms"
  • What do you want the Assistant to say in response? enter "no worries, I will send you the email right away"

All the other sections are optional, however you can fill them if you prefer to do so
Click Create trigger
1.10   You will be returned to the recipe editor. To choose the action service, click on +that
IFTTT_That
1.11  Under Choose action service, type webhooks. From the results, click on Webhooks
IFTTT_ActionService
1.12   Then for Choose action click on Make a web request
IFTTT_Action_Choose
1.13   Next the Complete action fields screen is shown.
For

  • URL – paste the webhook url of the runbook that you had copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields
Click Create action
1.14   In the next screen, click Finish
IFTTT_Review
 
Woo hoo. Everything is now complete. Let's do some testing.
Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries. I will send you the email right away”.
I have noticed some delays in receiving the email, however the most I have had to wait for is 5 minutes. If this is unacceptable, in the runbook script, modify the Send-MailMessage command by adding the parameter -Priority High. This sends all emails with high priority, which should make things faster. Also, the runbook is currently running in Azure. Better performance might be achieved by using Hybrid Runbook Workers
To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.
Automation_JobsLog
There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.
If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.

Ok Google Email me the status of all vms – Part 1

First published at https://nivleshc.wordpress.com
Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!
One of the upcoming areas in Computing Science is Artificial Intelligence. What seemed science fiction in the days of Isaac Asimov, when he penned I, Robot seems closer to reality now.
Lately the market is popping up with virtual assistants from the likes of Apple, Amazon and Google. These are “bots” that use Artificial Intelligence to help us with our daily lives, from telling us about the weather, to reminding us about our shopping lists or letting us know when our next train will be arriving. I still remember my first virtual assistant Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.
I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions, it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well maybe in the next version (Google here’s a tip for your next version 😉 )
Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.
In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines; however, I would caution against doing this in your production environment, in case you turn off a machine that is running critical workloads.
In this first blog post, we will setup the backend systems to achieve the tasks and in the next blog post, we will connect it to Google Home.
The diagram below shows how we will achieve what we have set out to do.
Google Home Workflow
Below is a list of tasks that will happen

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service
  3. IFTTT will then trigger the webhooks service to call a webhook url attached to an Azure Automation Runbook
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run, and obtain a status of all vms.
  6. The output will be emailed to the designated recipient

Ok, enough talking 😉 let's start cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides permissions to access the Azure resources. For our purposes, we will be using a service principal account.
Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to login to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where
DisplayName is the display name for your AD Application eg “Google Home Automation”
HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)
IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation
Password is the password you will give the service principal account
Now, let's create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure tenant. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is actually the ApplicationId suffixed by your Azure AD domain name. To get the Application ID run the following by providing the identifierUri that was supplied when creating it above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let's check to ensure we can log in to Azure using the newly created service principal account and the password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where Tenantid is your Azure Tenant’s ID
If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.
2.1   Login to the Azure Portal and then click New
AzureMarketPlace_New
2.2   Then type Automation and click search. From the results click the following.
AzureMarketPlace_ResultsAutomation
2.3   In the next screen, click Create
2.4   Next, fill in the appropriate details and click Create
AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately Azure doesn’t provide relay servers that can be used by scripts to email out. Instead you have to either use EOP (Exchange Online Protection) servers or SendGrid to achieve this. SendGrid is an Email Delivery Service that Azure provides, and you need to create an account to use it. For our purposes, we will use the free tier, which allows the delivery of 2500 emails per month, which is plenty for us.
3.1   In the Azure Portal, click New
AzureMarketPlace_New
3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create
AzureMarketPlace_ResultsSendGrid
3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.
SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will be creating a Runbook that will contain our PowerShell script that will do all the work. The script will be using the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside our PowerShell script.
4.1   Go into the Automation Account that you had created.
4.2   Under Shared Resource click Credentials
AutomationAccount_Credentials
4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create
Credentials_Details
4.4   Repeat step 4.3 above to add the SendGrid account
4.5   Now that the Credentials have been stored, under Process Automation click Runbooks
Automation_Runbooks
Then click Add a runbook and in the next screen click Create a new runbook
4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create
Runbook_Details
4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}
$outputFile = $env:TEMP+ "\AzureVmStatus.html"
$vmarray = @()
#Get a list of all vms
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"
$index = 0
# Add info about VM's to the array
foreach ($vm in $vms){
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status
 $vmstatus = Get-AzurermVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status
# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
$vmarray | Sort-Object PowerState,OSType -Desc
Write-Output "Converting Output to HTML"
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"
$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"
Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal Account’ and ‘SendGrid Account’ are the names of the credentials that were created in the Automation Account (include the ‘ ‘ around the name)
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of vms

4.8   Next, we will create a Webhook. A webhook is a special URL that will allow us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password since whoever possesses the webhook can execute the runbook without needing to provide any credentials.
Open the runbook that was just created and from the top menu click on Webhook
Webhook_menu
4.9   In the next screen click Create new webhook
4.10  A security message will be displayed informing that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.
Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.
Once the webhook has expired, you can't use it to trigger the runbook; however, before it expires, you can change the expiry date. For security reasons, it is recommended that you don't keep the webhook alive for a long period of time.
Webhook_details
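As a quick sanity check before wiring it up to IFTTT in Part 2, you can trigger the runbook from PowerShell by POSTing to the webhook; a rough sketch (the URL below is a made-up placeholder, substitute the one you copied):

# Hypothetical webhook URL - use the one you copied when the webhook was created
$webhookUrl = "https://s1events.azure-automation.net/webhooks?token=xxxxxxxxxxxxxxxx"
$response = Invoke-RestMethod -Method Post -Uri $webhookUrl
# The response contains the ID of the automation job that was queued
$response.JobIds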
That's it, folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.
Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉
I will see you in Part 2 of this blog. Ciao 😉

How to create and auto update route tables in Azure for your local Azure datacentre with Azure Automation, bypassing firewall appliances

When deploying an “edge” or “perimeter” network in Azure, by way of a peered edge VNET or an edge subnet, you’ll likely want to deploy virtual firewall appliances of some kind to manage and control that ingress and egress traffic. This comes at a cost though. That cost being that Azure services are generally accessed via public IP addresses or hosts, even within Azure. The most common of those and one that has come up recently is Azure Blob storage.

If you have ExpressRoute, you can get around this by implementing Public Peering. This essentially sends all traffic destined for Azure services to your ER gateway. A bottleneck? Perhaps.

The problem in detail

Recently I ran into a roadblock on a customer's site around the use of Blob storage. I designed an edge network that met certain traffic monitoring requirements. Azure NSGs were not able to meet all the requirements, so something considerably more complex and time-consuming was implemented. It's IT; isn't that what always happens, you may ask?

Here are some reference blog posts:

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

Lessons learned from deploying Cisco Firepower Threat Defence firewall virtual appliances in Azure, a brain dump

We deployed Cisco Firepower Threat Defence virtual appliance firewalls in an edge VNET. Our subnets had route tables with a default route of 0.0.0.0/0 directed to the "VirtualAppliance" next hop type, so all traffic to a host or network not known by Azure is directed to the firewall(s). How that can be achieved is another blog post.

When implementing this solution, Azure services that are accessed via an external or public range IP address or host, most commonly Blob storage, which is accessed via blah.blob.core.windows.net, additionally get directed to the Cisco FTDs. Not a big problem: create some rules to allow the traffic flow and we're cooking with gas.

Not exactly the case, as the FTDv's have a NIC with a throughput of 2 GiB per second. That's plenty fast, but when you have a lot of workloads, a lot of user traffic and a lot of writes to Blob storage, bottlenecks can occur.

The solution

As I mentioned earlier, this can be tackled quickly through a number of methods. The methods I discarded in this situation are as follows:

  • Implement ExpressRoute
    • Through ExpressRoute enable public peering
    • All traffic to Azure infrastructure is directed to the gateway
    • This is a “single device” that I have heard whispers is a virtual Cisco appliance similar to a common enterprise router
    • Quick to implement and for most cases, the throughout bottleneck isn’t reached and you’re fine
  • Implement firewall rules
    • Allow traffic to Microsoft IP ranges
    • Manually enter those IP ranges into the firewall
      • These are subject to change, so a maintenance or managed services runbook should be implemented to update them on a regular basis
    • Or enable URL filtering and basically allow traffic to certain URIs or URLs

Like I said, both above will work.

The solution in this blog post is a little bit more technical, but does away with the above. Rather than any manual work, let's automate this process through Azure Automation. Thankfully, this isn't something new, but it isn't something that is used often. Through the use of pre-configured Azure Automation modules and PowerShell scripts, a scheduled weekly or monthly (or whatever you like) runbook downloads the publicly available Microsoft .xml file that lists all subnets and IP addresses used in Azure. It then uses that file to update a route table the script creates, with routes to Azure subnets and IPs in a specified region.

This process does away with any manual intervention and works to the ethos “work smarter, not harder”. I like that very much, and no, that is not being lazy. It’s being smart.

The high five

I'm certainly not trying to take the credit for this, except for the minor tweak to the runbook code, so cheers to James Bannan (@JamesBannan) who wrote this great blog post (available here) on the solution. He's put together the PowerShell script that expands on a PowerShell module written by Kieran Jacobson (@kjacobsen). Check out their Twitters, their blogs and all-round awesome content. Thank you and high five to both!

The process

I'm going to speed through this as it's really straightforward and nothing too complicated. The only tricky part is the order of the steps. Stick to the order and you're guaranteed to succeed:

  • Create a new automation user account
  • Create a new runbook
    • Quick create a new Powershell runbook
  • Go back to the automation account
  • Update some config:
    • Update the modules in the automation account – do this FIRST as there are dependencies on up-to-date modules (specifically, AzureRM.Network depends on an up-to-date AzureRM.Profile module)
    • By default you have these runbook modules:

    • Go to Browse Gallery
    • Select the following two modules, one at a time, and add to the automation user account
      • AzureRM.Network
      • AzurePublicIPAddress
        • This is the module created by Kieran Jacobson 
    • Once all are in, for the purposes of being borderline OCD, select Update Azure Modules
      • This should update all modules to the latest version, in case some are lagging a little behind
    • Let's create some variables
      • Select Variables from the menu blade in the automation user account
      • The script will need the following variables for YOUR ENVIRONMENT
        • azureDatacenterRegions
        • VirtualNetworkName
        • VirtualNetworkRGLocation
        • VirtualNetworkRGName
      • For my sample script, I have resources in the Australia East (AustraliaEast) region
      • Enter in the variables that apply to you here (your RGs, VNET etc)
  • Let's add the PowerShell to the runbook
    • Select the runbook
    • Select EDIT from the properties of the runbook (top horizontal menu)
    • Enter in the following Powershell:
      • this is my slightly modified version
$VerbosePreference = 'Continue'

### Authenticate with Azure Automation account
$cred = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $cred
    "Logging in to Azure..."
    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $cred not found."
        throw $ErrorMessage
    }
    else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

### Populate script variables from Azure Automation assets
$resourceGroupName = Get-AutomationVariable -Name 'virtualNetworkRGName'
$resourceLocation  = Get-AutomationVariable -Name 'virtualNetworkRGLocation'
$vNetName          = Get-AutomationVariable -Name 'virtualNetworkName'
$azureRegion       = Get-AutomationVariable -Name 'azureDatacenterRegions'
$azureRegionSearch = '*' + $azureRegion + '*'

[array]$locations = Get-AzureRmLocation | Where-Object {$_.Location -like $azureRegionSearch}

### Retrieve the nominated virtual network and subnets (excluding the gateway subnet)
$vNet = Get-AzureRmVirtualNetwork `
    -ResourceGroupName $resourceGroupName `
    -Name $vNetName

[array]$subnets = $vNet.Subnets | Where-Object {$_.Name -ne 'GatewaySubnet'} | Select-Object Name

### Create and populate a new array with the IP ranges of each datacenter in the specified location
$ipRanges = @()
foreach($location in $locations){
    $ipRanges += Get-MicrosoftAzureDatacenterIPRange -AzureRegion $location.DisplayName
}
$ipRanges = $ipRanges | Sort-Object

### Iterate through each subnet in the virtual network
foreach($subnet in $subnets){
    $RouteTableName = $subnet.Name + '-RouteTable'
    $vNet = Get-AzureRmVirtualNetwork `
        -ResourceGroupName $resourceGroupName `
        -Name $vNetName

    ### Create a new route table if one does not already exist
    if ((Get-AzureRmRouteTable -Name $RouteTableName -ResourceGroupName $resourceGroupName) -eq $null){
        $RouteTable = New-AzureRmRouteTable `
            -Name $RouteTableName `
            -ResourceGroupName $resourceGroupName `
            -Location $resourceLocation
    }
    ### If the route table exists, save as a variable and remove all routing configurations
    else {
        $RouteTable = Get-AzureRmRouteTable `
            -Name $RouteTableName `
            -ResourceGroupName $resourceGroupName
        $routeConfigs = Get-AzureRmRouteConfig -RouteTable $RouteTable
        foreach($config in $routeConfigs){
            Remove-AzureRmRouteConfig -RouteTable $RouteTable -Name $config.Name | Out-Null
        }
    }

    ### Create a routing configuration for each IP range and give each a descriptive name
    foreach($ipRange in $ipRanges){
        $routeName = ($ipRange.Region.Replace(' ','').ToLower()) + '-' + $ipRange.Subnet.Replace('/','-')
        Add-AzureRmRouteConfig `
            -Name $routeName `
            -AddressPrefix $ipRange.Subnet `
            -NextHopType Internet `
            -RouteTable $RouteTable | Out-Null
    }

    ### Add default route for Edge Firewalls
    Add-AzureRmRouteConfig `
        -Name 'DefaultRoute' `
        -AddressPrefix 0.0.0.0/0 `
        -NextHopType VirtualAppliance `
        -NextHopIpAddress 10.10.10.10 `
        -RouteTable $RouteTable

    ### Include a routing configuration to give direct access to Microsoft's KMS servers for Windows activation
    Add-AzureRmRouteConfig `
        -Name 'AzureKMS' `
        -AddressPrefix 23.102.135.246/32 `
        -NextHopType Internet `
        -RouteTable $RouteTable

    ### Apply the route table to the subnet
    Set-AzureRmRouteTable -RouteTable $RouteTable
    $forcedTunnelVNet = $vNet.Subnets | Where-Object Name -eq $subnet.Name
    $forcedTunnelVNet.RouteTable = $RouteTable

    ### Update the virtual network with the new subnet configuration
    Set-AzureRmVirtualNetwork -VirtualNetwork $vNet -Verbose
}

How is this different from James’s?

I’ve made two changes to the original script. These changes are as follows:

I changed the authentication to use the Azure Automation Run As account (service principal). This streamlined the deployment process so I could reuse the script across a number of subscriptions. This change was the following:

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred
"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

Secondly, I added an additional static route: the default route (0.0.0.0/0), which forwards traffic to our edge firewalls. This change was the following:

### Add default route for Edge Firewalls
Add-AzureRmRouteConfig `
    -Name 'DefaultRoute' `
    -AddressPrefix 0.0.0.0/0 `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress 10.10.10.10 `
    -RouteTable $RouteTable

You can re-use this section to add further custom static routes.
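
For example, a minimal sketch of one more static route, using a purely hypothetical on-premises range (10.20.0.0/16) and the same firewall appliance as the next hop, could look like this:

### Example only - not part of the original script; the address range is hypothetical
Add-AzureRmRouteConfig `
    -Name 'OnPremisesRange' `
    -AddressPrefix 10.20.0.0/16 `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress 10.10.10.10 `
    -RouteTable $RouteTable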

Tying it all together

  • Hit the SAVE button and job done
    • Well, almost…
  • Next you should test the script
    • Select the TEST PANE from the top horizontal menu
    • A word of warning: this will go off and create the route tables and associate them with the subnets in your selected VNET!
    • Without testing though, you can’t confirm it works correctly
  • Should the test work out nicely, hit the PUBLISH button in the top menu
    • This gets the runbook ready to be used
  • Now go off and create a schedule
    • Azure public IP addresses can update often
    • It’s best to be ahead of the game and keep your route tables up to date
    • A regular schedule is recommended – I run it once a week, as the script only takes about 10-15 minutes to run
    • From the runbook top horizontal menu, select schedule
    • Create the schedule as desired (if you prefer PowerShell, see the sketch after this list)
    • Save
  • JOB DONE! No really, that's it!
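
If you would rather create and link the schedule from PowerShell instead of the portal, a rough sketch using the AzureRM.Automation cmdlets might look like the following. The account, resource group, runbook and schedule names are placeholders for your own values:

# Create a weekly schedule and link it to the runbook (all names below are placeholders)
$params = @{
    ResourceGroupName     = 'MyAutomationRG'
    AutomationAccountName = 'MyAutomationAccount'
}
New-AzureRmAutomationSchedule @params `
    -Name 'WeeklyRouteTableUpdate' `
    -StartTime (Get-Date).AddDays(1) `
    -WeekInterval 1

Register-AzureRmAutomationScheduledRunbook @params `
    -RunbookName 'Update-AzureRouteTables' `
    -ScheduleName 'WeeklyRouteTableUpdate'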

Enjoy!

Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Introduction

Azure Web Apps or App Services are quite flexible regarding deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source controls like VSTS, GitHub, or BitBucket, your on-premises Git, multiple IDEs including Visual Studio, Eclipse and Xcode, and using MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be some scenarios where you just need to update some reference files and don’t need to build or update the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario where we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication. Unfortunately, we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy features which support Azure App Services source code management and deployment operations, among others. One of these is the Virtual File System (VFS) API. This API is based on the VFS HTTP Adapter, which wraps a VFS instance as an HTTP RESTful interface. The Kudu VFS API allows us to upload and download files, get a list of files in a directory, create directories, and delete files from the virtual file system of an Azure App Service; and we can use PowerShell to call it.

In this post I will show how to interact with the Azure App Service Virtual File System (VFS) API via PowerShell.

Authenticating to the Kudu API

To call any of the Kudu APIs, we need to authenticate by adding the corresponding Authorization header. To create the header value, we need to use the Kudu API credentials, as detailed here. Because we will be interacting with an API related to an App Service, we will be using site-level credentials (a.k.a. publishing profile credentials).

Getting the Publishing Profile Credentials from the Azure Portal

You can get the publishing profile credentials, by downloading the publishing profile from the portal, as shown in the figure below. Once downloaded, the XML document will contain the site-level credentials.

Getting the Publishing Profile Credentials via PowerShell

We can also get the site-level credentials via PowerShell. I’ve created a PowerShell function which returns the publishing credentials of an Azure App Service or a Deployment Slot, as shown below.

Bear in mind that you need to be logged in to Azure in your PowerShell session before calling these cmdlets.
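
The function itself was embedded in the original post and isn't reproduced here, so the following is only a minimal sketch of the same idea: it invokes the list action on the publishingcredentials resource using the AzureRM module. The function name, parameters and API version are my own assumptions rather than the original code.

[code language="PowerShell"]
function Get-AzureRmWebAppPublishingCredentials {
    param(
        [Parameter(Mandatory = $true)][string]$ResourceGroupName,
        [Parameter(Mandatory = $true)][string]$WebAppName,
        [string]$SlotName
    )
    # Target either the App Service itself or one of its deployment slots
    if ([string]::IsNullOrWhiteSpace($SlotName)) {
        $resourceType = 'Microsoft.Web/sites/config'
        $resourceName = "$WebAppName/publishingcredentials"
    }
    else {
        $resourceType = 'Microsoft.Web/sites/slots/config'
        $resourceName = "$WebAppName/$SlotName/publishingcredentials"
    }
    # The 'list' action returns the site-level (publishing profile) credentials
    Invoke-AzureRmResourceAction `
        -ResourceGroupName $ResourceGroupName `
        -ResourceType $resourceType `
        -ResourceName $resourceName `
        -Action list `
        -ApiVersion 2016-08-01 `
        -Force
}

# Example usage (hypothetical names); the credentials sit under the Properties object
# $creds = Get-AzureRmWebAppPublishingCredentials -ResourceGroupName 'MyRG' -WebAppName 'MyWebApp'
# $creds.Properties.PublishingUserName
# $creds.Properties.PublishingPassword
[/code]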

Getting the Kudu REST API Authorisation header via PowerShell

Once we have the credentials, we are able to get the Authorization header value. The instructions to construct the header are described here. I’ve created another PowerShell function, which relies on the previous one, to get the header value, as follows.
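
Again, the original function was embedded externally; here is a minimal sketch of the same idea, with a function name of my own choosing. Per the Kudu documentation, the header is standard Basic authentication: base64 of "username:password".

[code language="PowerShell"]
function Get-KuduAuthorisationHeaderValue {
    param(
        [Parameter(Mandatory = $true)][string]$UserName,
        [Parameter(Mandatory = $true)][string]$Password
    )
    # Basic authentication: base64-encode "username:password"
    $pair = '{0}:{1}' -f $UserName, $Password
    return 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
}
[/code]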

Calling the App Service VFS API

Once we have the Authorization header, we are ready to call the VFS API. As shown in the documentation, the VFS API has the following operations:

  • GET /api/vfs/{path}    (Gets a file at path)
  • GET /api/vfs/{path}/    (Lists files at directory specified by path)
  • PUT /api/vfs/{path}    (Puts a file at path)
  • PUT /api/vfs/{path}/    (Creates a directory at path)
  • DELETE /api/vfs/{path}    (Deletes the file at path)

So the URI to call the API would be something like:

  • GET https://{webAppName}.scm.azurewebsites.net/api/vfs/

To invoke the REST API operation via PowerShell we will use the Invoke-RestMethod cmdlet.
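
As an illustration (assuming a $webAppName variable and an $authorisationHeader value built with the function above, both placeholder names), listing the contents of wwwroot could look like this:

[code language="PowerShell"]
# List the files under wwwroot (the trailing slash requests a directory listing)
$apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/"
Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authorisationHeader } -Method Get
[/code]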

We have to bear in mind that when trying to overwrite or delete a file, the web server implements ETag behaviour to identify specific versions of files.

Uploading a File to an App Service

I have created the PowerShell function shown below which uploads a local file to a path in the virtual file system. To call this function you need to provide the App Service name, the Kudu credentials (username and password), the local path of your file and the Kudu path. The function assumes that you want to upload the file under the wwwroot folder, but you can change it if needed.
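
That function was also embedded in the original post; a sketch along the same lines, with my own function and parameter names, could be:

[code language="PowerShell"]
function Send-FileToWebApp {
    param(
        [Parameter(Mandatory = $true)][string]$WebAppName,
        [Parameter(Mandatory = $true)][string]$UserName,
        [Parameter(Mandatory = $true)][string]$Password,
        [Parameter(Mandatory = $true)][string]$LocalPath,
        [Parameter(Mandatory = $true)][string]$KuduPath   # relative to wwwroot, e.g. 'certs/client.cer'
    )
    $authHeader = Get-KuduAuthorisationHeaderValue -UserName $UserName -Password $Password
    # 'If-Match' = '*' disables the ETag version check so an existing file can be overwritten
    $headers = @{
        Authorization = $authHeader
        'If-Match'    = '*'
    }
    $apiUrl = "https://$WebAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$KuduPath"
    Invoke-RestMethod -Uri $apiUrl -Headers $headers -Method Put -InFile $LocalPath -ContentType 'application/octet-stream'
}
[/code]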

As you can see in the script, we are adding the “If-Match”=”*” header to disable ETag version check on the server side.

Downloading a File from an App Service

Similarly, I have created a function to download a file from an App Service to the local file system via PowerShell.
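
As before, a rough sketch of such a function (the names are my own) might be:

[code language="PowerShell"]
function Get-FileFromWebApp {
    param(
        [Parameter(Mandatory = $true)][string]$WebAppName,
        [Parameter(Mandatory = $true)][string]$UserName,
        [Parameter(Mandatory = $true)][string]$Password,
        [Parameter(Mandatory = $true)][string]$KuduPath,   # relative to wwwroot
        [Parameter(Mandatory = $true)][string]$LocalPath
    )
    $authHeader = Get-KuduAuthorisationHeaderValue -UserName $UserName -Password $Password
    $apiUrl = "https://$WebAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$KuduPath"
    # Write the response body straight to the local file system
    Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } -Method Get -OutFile $LocalPath
}
[/code]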

Using the ZIP API

In addition to using the VFS API, we can also use the Kudu ZIP API, which allows us to upload zip files and have them expanded into folders, and to compress server folders as zip files and download them.

  • GET /api/zip/{path}    (Zip up and download the specified folder)
  • PUT /api/zip/{path}    (Upload a zip file which gets expanded into the specified folder)

You could create your own PowerShell functions to interact with the ZIP API based on what we have previously shown.
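
As a starting point, and assuming the same $webAppName and $authorisationHeader placeholders as before, the two operations could be called like this:

[code language="PowerShell"]
$zipApiUrl = "https://$webAppName.scm.azurewebsites.net/api/zip/site/wwwroot/"

# Zip up the wwwroot folder on the App Service and download it locally
Invoke-RestMethod -Uri $zipApiUrl -Headers @{ Authorization = $authorisationHeader } -Method Get -OutFile 'C:\Temp\wwwroot.zip'

# Upload a local zip file and have Kudu expand it into wwwroot
Invoke-RestMethod -Uri $zipApiUrl -Headers @{ Authorization = $authorisationHeader } -Method Put -InFile 'C:\Temp\content.zip' -ContentType 'application/zip'
[/code]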

Conclusion

As we have seen, in addition to the multiple deployment options we have for Azure App Services, we can also use the Kudu VFS API to interact with the App Service Virtual File System via HTTP. I have shared some functions for some of the provided operations. You could customise these functions or create your own based on your needs.

I hope this has been of help and feel free to add your comments or queries below. 🙂

Hands Free VM Management with Azure Automation and Resource Manager – Part 2

In this two part series, I am looking at how we can leverage Azure Automation and Azure Resource Manager to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we walked through tagging resources using the Azure Resource Manager PowerShell module
  • In Part 2 we will set up Azure Automation to schedule a runbook to execute nightly and shut down tagged resources.

Azure Automation Runbook

At the time of writing, the tooling support around Azure Automation can politely be described as a hybrid affair. For starters, there is no support for Azure Automation in the preview portal. The Azure command line tools only support basic automation account and runbook management, leaving the current management portal as the most complete tool for the job.

As I mentioned in Part 1, Azure Automation does not yet support the new Azure Resource Manager PowerShell module out-of-the-box, so we need to import that module ourselves. We will then set up the service management credentials that our runbook will use (recall the ARM module doesn’t use certificates anymore; we need to supply user account credentials).

We then create our PowerShell workflow to query for tagged virtual machine resources and ensure they are shut down. Lastly, we set up our schedule and enable the runbook… let's get cracking!

When we first create an Azure Automation account, the Azure PowerShell module is already imported as an Asset for us (v0.8.11 at the time of writing) as shown below.

Clean Azure Automation Screen.
To import the Azure Resource Manager module we need to zip it up and upload it to the portal using the following process. In Windows Explorer on your PC:

  1. Navigate to the Azure PowerShell modules folder (typically C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager)
  2. Zip the AzureResourceManager sub-folder (or use Compress-Archive, as shown below).

Local folder to zip.
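
If you'd rather stay in PowerShell than use Explorer, Compress-Archive (available from PowerShell 5.0) can produce the same zip. The paths below assume the default module location and a writable temp folder:

[code language="PowerShell"]
# Zip the AzureResourceManager module folder ready for upload to Azure Automation
Compress-Archive `
    -Path 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager' `
    -DestinationPath "$env:TEMP\AzureResourceManager.zip"
[/code]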

In the Automation pane of the current Azure Portal:

  1. Select an existing Automation account (or create a new one)
  2. Navigate to the Asset tab and click the Import Module button
  3. Browse to the AzureResourceManager.zip file you created above.

ARM Module Import

After the import completes (this usually takes a few minutes) you should see the Azure Resource Manager module imported as an Asset in the portal.

ARM Module Imported

We now need to set up the credentials the runbook will use. For this we will create a new user in Azure Active Directory (AAD) and add that user as a co-administrator of our subscription (we need to query resource groups and shut down our virtual machines).

In the Azure Active Directory pane:

  1. Add a new user of type New user in your organisation
  2. Enter a meaningful user name to distinguish it as an automation account
  3. Select User as the role
  4. Generate a temporary password, which we’ll need to change later.

Tell us about this user.

Now go to the Settings pane and add the new user as a co-administrator of your subscription:

Add user as co-admin.

Note: Azure generated a temporary password for the new user. Log out and sign in as the new user to get prompted to change the password and confirm the user has service administration permissions on your subscription.

We now need to add our user’s credentials to our Azure Automation account assets.

In the Automation pane:

  1. Select the Automation account we used above
  2. Navigate to the Asset tab and click on the Add Setting button on the bottom toolbar
  3. Select Add Credential
  4. Choose Windows PowerShell Credential from the type dropdown
  5. Enter a meaningful name for the asset (e.g. runbook-account)
  6. Enter the username and password of the AAD user we created above.

Runbook credentials

With the ARM module imported and credentials setup we can now turn to authoring our runbook. The completed runbook script can be found on Github. Download the script and save it locally.

Open the script in PowerShell ISE and change the Automation settings to match the name you gave to your Credential asset created above and enter your Azure subscription name.

[code language="PowerShell"]
workflow AutoShutdownWorkflow
{
#$VerbosePreference = "continue"

# Automation Settings
$pscreds = Get-AutomationPSCredential -Name "runbook-account"
$subscriptionName = "[subscription name here]"
$tagName = "autoShutdown"

# Authenticate using WAAD credentials
Add-AzureAccount -Credential $pscreds | Write-Verbose

# Set subscription context
Select-AzureSubscription -SubscriptionName $subscriptionName | Write-Verbose

Write-Output "Checking for resources with $tagName flag set…"

# Get virtual machines within tagged resource groups
$vms = Get-AzureResourceGroup -Tag @{ Name=$tagName; Value=$true } | `
Get-AzureResource -ResourceType "Microsoft.ClassicCompute/virtualMachines"

# Shutdown all VMs tagged
$vms | ForEach-Object {
Write-Output "Shutting down $($_.Name)…"
# Gather resource details
$resource = $_
# Stop VM
Get-AzureVM | ? { $_.Name -eq $resource.Name } | Stop-AzureVM -Force
}

Write-Output "Completed $tagName check"
}
[/code]

Walking through the script, the first thing we do is gather the credentials we will use to manage our subscription. We then authenticate using those credentials and select the Azure subscription we want to manage. Next we gather all virtual machine resources in resource groups that have been tagged with autoShutdown.

We then loop through each VM resource and force a shutdown. One thing you may notice about our runbook is that we don’t explicitly “switch” between the Azure module and the Azure Resource Manager module as we must when running in a regular PowerShell session.

This behaviour may change over time as the Automation service is enhanced to support ARM out-of-the-box, but for now the approach appears to work fine… at least on my “cloud” [developer joke].

We should now have our modified runbook script saved locally and ready to be imported into the Azure Automation account we used above. We will use the Azure Service Management cmdlets to create and publish the runbook, create the schedule asset and link it to our runbook.

Copy the following script into a PowerShell ISE session and configure it to match your subscription and location of the workflow you saved above. You may need to refresh your account credentials using Add-AzureAccount if you get an authentication error.

[code language="PowerShell"]
$automationAccountName = "[your account name]"
$runbookName = "autoShutdownWorkflow"
$scriptPath = "c:\temp\AutoShutdownWorkflow.ps1"
$scheduleName = "ShutdownSchedule"

# Create a new runbook
New-AzureAutomationRunbook –AutomationAccountName $automationAccountName –Name $runbookName

# Import the autoShutdown runbook from a file
Set-AzureAutomationRunbookDefinition –AutomationAccountName $automationAccountName –Name $runbookName –Path $scriptPath -Overwrite

# Publish the runbook
Publish-AzureAutomationRunbook –AutomationAccountName $automationAccountName –Name $runbookName

# Create the schedule asset
New-AzureAutomationSchedule –AutomationAccountName $automationAccountName –Name $scheduleName –StartTime $([DateTime]::Today.Date.AddDays(1).AddHours(1)) –DayInterval 1

# Link the schedule to our runbook
Register-AzureAutomationScheduledRunbook –AutomationAccountName $automationAccountName –Name $runbookName –ScheduleName $scheduleName
[/code]

Switch over to the portal and verify your runbook has been created and published successfully…

Runbook published.

…drilling down into details of the runbook, verify the schedule was linked successfully as well…

Linked Schedule.

To start your runbook (outside of the schedule) navigate to the Author tab and click the Start button on the bottom toolbar. Wait for the runbook to complete and click on the View Job icon to examine the output of the runbook.

Manual Start

Run Output

Note: Create a draft version of your runbook to troubleshoot failing runbooks using the built in testing features. Refer to this link for details on testing your Azure Automation runbooks.

Our schedule will now execute the runbook each night to ensure virtual machine resources tagged with autoShutdown are always shutdown. Navigating to the Dashboard tab of the runbook will display the runbook history.

Runbook Dashboard

Considerations

1. The AzureResourceManager module is not yet officially supported out-of-the-box, so a breaking change may come down the release pipeline that requires our workflow to be modified. The switch behaviour will be the most likely candidate. Watch that space!

2. Azure Automation is not available in all Azure Regions. At the time of writing it is available in East US, West EU, Japan East and Southeast Asia. However, region affinity isn’t a primary concern as we are merely invoking the service management API where our resources are located. Where we host our automation service is not as important from a performance point of view, but it may factor into organisational security policy constraints.

3. Azure Automation comes in two tiers (Free and Basic). Free provides 500 minutes of job execution per month. The Basic tier charges $0.002 USD a minute for unlimited minutes per month (e.g. 1,000 job execution mins will cost $2). Usage details will be displayed on the Dashboard of your Azure Automation account.

Account Usage

In this two part post we have seen how we can tag resource groups to provide more granular control when managing resource lifecycles and how we can leverage Azure Automation to schedule the shutting down of these tagged resources to operate our infrastructure in Microsoft Azure more efficiently.
