Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Introduction

Azure Web Sites, or App Services, are quite flexible when it comes to deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source control services like VSTS, GitHub, or BitBucket, your on-premises Git repository, multiple IDEs including Visual Studio, Eclipse and Xcode, and using MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be some scenarios where you just need to update some reference files and don’t need to build or update the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario: we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication, but we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy features which support Azure App Service source code management and deployment operations, among others. One of these is the Virtual File System (VFS) API. This API is based on the VFS HTTP Adapter which wraps a VFS instance as an HTTP RESTful interface. The Kudu VFS API allows us to upload and download files, get a list of files in a directory, create directories, and delete files from the virtual file system of an Azure App Service; and we can use PowerShell to call it.

In this post I will show how to interact with the Azure App Service Virtual File System (VFS) API via PowerShell.

Authenticating to the Kudu API

To call any of the Kudu APIs, we need to authenticate by adding the corresponding Authorization header. To create the header value, we need to use the Kudu API credentials, as detailed here. Because we will be interacting with an API related to an App Service, we will be using site-level credentials (a.k.a. publishing profile credentials).

Getting the Publishing Profile Credentials from the Azure Portal

You can get the publishing profile credentials by downloading the publishing profile from the portal, as shown in the figure below. Once downloaded, the XML document will contain the site-level credentials.

Getting the Publishing Profile Credentials via PowerShell

We can also get the site-level credentials via PowerShell. I’ve created a PowerShell function which returns the publishing credentials of an Azure App Service or a Deployment Slot, as shown below.

Bear in mind that you need to be logged in to Azure in your PowerShell session before calling these cmdlets.
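A sketch of that function, using the AzureRM module’s Invoke-AzureRmResourceAction against the publishingcredentials resource (the function name and parameter names are illustrative, not necessarily the exact ones I used), could look like this:

function Get-AzureRmWebAppPublishingCredentials($resourceGroupName, $webAppName, $slotName = $null)
{
    # Target either the App Service itself or one of its Deployment Slots
    if ([string]::IsNullOrWhiteSpace($slotName)) {
        $resourceType = "Microsoft.Web/sites/config"
        $resourceName = "$webAppName/publishingcredentials"
    }
    else {
        $resourceType = "Microsoft.Web/sites/slots/config"
        $resourceName = "$webAppName/$slotName/publishingcredentials"
    }

    # The 'list' action returns the site-level (publishing profile) credentials
    Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName `
        -ResourceType $resourceType -ResourceName $resourceName `
        -Action list -ApiVersion 2015-08-01 -Force
}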

Getting the Kudu REST API Authorisation header via PowerShell

Once we have the credentials, we are able to get the Authorization header value. The instructions to construct the header are described here. I’ve created another PowerShell function, which relies on the previous one, to get the header value, as follows.
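A sketch of that second function, which calls the first one and builds a Basic authentication header value from the publishing user name and password, could look like this:

function Get-KuduApiAuthorisationHeaderValue($resourceGroupName, $webAppName, $slotName = $null)
{
    # Re-use the previous function to obtain the site-level credentials
    $publishingCredentials = Get-AzureRmWebAppPublishingCredentials $resourceGroupName $webAppName $slotName
    $userName = $publishingCredentials.Properties.PublishingUserName
    $password = $publishingCredentials.Properties.PublishingPassword

    # Basic authentication: 'Basic ' + base64("username:password")
    return ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password))))
}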

Calling the App Service VFS API

Once we have the Authorization header, we are ready to call the VFS API. As shown in the documentation, the VFS API has the following operations:

  • GET /api/vfs/{path}    (Gets a file at path)
  • GET /api/vfs/{path}/    (Lists files at directory specified by path)
  • PUT /api/vfs/{path}    (Puts a file at path)
  • PUT /api/vfs/{path}/    (Creates a directory at path)
  • DELETE /api/vfs/{path}    (Deletes the file at path)

So the URI to call the API would be something like:

  • GET https://{webAppName}.scm.azurewebsites.net/api/vfs/

To invoke the REST API operation via PowerShell we will use the Invoke-RestMethod cmdlet.
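For example, listing the files under wwwroot could look like the following hypothetical snippet, reusing the header function above:

# $resourceGroupName and $webAppName are assumed to be set already
$accessToken = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName
$kuduApiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/"

# Lists the files in the wwwroot directory of the App Service
Invoke-RestMethod -Uri $kuduApiUrl `
                  -Headers @{ Authorization = $accessToken } `
                  -Method GET `
                  -ContentType "application/json"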

We have to bear in mind that when trying to overwrite or delete a file, the web server implements ETag behaviour to identify specific versions of files.

Uploading a File to an App Service

I have created the PowerShell function shown below which uploads a local file to a path in the virtual file system. To call this function you need to provide the App Service name, the Kudu credentials (username and password), the local path of your file and the Kudu path. The function assumes that you want to upload the file under the wwwroot folder, but you can change it if needed.
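A minimal sketch of such a function (parameter names are illustrative) would look like this:

function Upload-FileToWebApp($webAppName, $kuduUserName, $kuduPassword, $localPath, $kuduPath)
{
    # Build the Basic authentication header from the site-level (Kudu) credentials
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $kuduUserName, $kuduPassword)))

    # Assumes the target file lives under wwwroot; change the base path if needed
    $kuduApiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"

    # "If-Match" = "*" disables the ETag version check so an existing file can be overwritten
    Invoke-RestMethod -Uri $kuduApiUrl `
                      -Headers @{ Authorization = "Basic $base64AuthInfo"; "If-Match" = "*" } `
                      -Method PUT `
                      -InFile $localPath `
                      -ContentType "multipart/form-data"
}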

As you can see in the script, we are adding the “If-Match”=”*” header to disable ETag version check on the server side.

Downloading a File from an App Service

Similarly, I have created a function to download a file on an App Service to the local file system via PowerShell.
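A corresponding sketch for the download direction:

function Download-FileFromWebApp($webAppName, $kuduUserName, $kuduPassword, $kuduPath, $localPath)
{
    $base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $kuduUserName, $kuduPassword)))

    # Assumes the source file lives under wwwroot; change the base path if needed
    $kuduApiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"

    # Writes the downloaded file to the local path
    Invoke-RestMethod -Uri $kuduApiUrl `
                      -Headers @{ Authorization = "Basic $base64AuthInfo" } `
                      -Method GET `
                      -OutFile $localPath
}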

Using the ZIP API

In addition to using the VFS API, we can also use the Kudu ZIP API, which allows us to upload zip files and have them expanded into folders, and to compress server folders as zip files and download them.

  • GET /api/zip/{path}    (Zip up and download the specified folder)
  • PUT /api/zip/{path}    (Upload a zip file which gets expanded into the specified folder)

You could create your own PowerShell functions to interact with the ZIP API based on what we have previously shown.
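For example, a couple of hypothetical one-liners using the same Basic authentication header as before:

# $base64AuthInfo and $webAppName are assumed to be set as in the earlier functions
$headers = @{ Authorization = "Basic $base64AuthInfo" }
$zipApiUrl = "https://$webAppName.scm.azurewebsites.net/api/zip/site/wwwroot/"

# GET /api/zip/{path} - zips up and downloads the wwwroot folder
Invoke-RestMethod -Uri $zipApiUrl -Headers $headers -Method GET -OutFile "C:\Temp\wwwroot.zip"

# PUT /api/zip/{path} - uploads a local zip file which gets expanded into wwwroot
Invoke-RestMethod -Uri $zipApiUrl -Headers $headers -Method PUT -InFile "C:\Temp\deployment.zip" -ContentType "multipart/form-data"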

Conclusion

As we have seen, in addition to the multiple deployment options we have for Azure App Services, we can also use the Kudu VFS API to interact with the App Service Virtual File System via HTTP. I have shared some functions for some of the provided operations. You could customise these functions or create your own based on your needs.

I hope this has been of help and feel free to add your comments or queries below.🙂

Viewing AWS CloudFormation and bootstrap logs in CloudWatch

Mature cloud platforms such as AWS and Azure have simplified infrastructure provisioning with toolsets such as CloudFormation and Azure Resource Manager (ARM) to provide an easy way to create and manage a collection of related infrastructure resources. Both toolsets allow developers and system administrators to use JavaScript Object Notation (JSON) to specify resources to provision, as well as providing the means to bootstrap systems, effectively allowing for single-click, fully configured environment deployments.

While these toolsets are an excellent means to prevent RSI from performing repetitive monotonous tasks, the initial writing and testing of templates and scripts can be incredibly time-consuming. Troubleshooting and debugging bootstrap scripts usually involves logging into hosts and checking log files. These hosts are often behind firewalls, which means using jump hosts that may be MFA integrated, all of which reduces the appetite for infrastructure as code.

One of my favourite things about Azure is the ability to watch the ARM provisioning and host bootstrapping process through the console. Unless there’s a need to rerun a script on a host and watch it in real time, troubleshooting the deployment failure can be performed by viewing the ARM deployment history or viewing the relevant Virtual Machine Extension. Examples can be seen below:

This screenshot shows the ARM resources have been deployed successfully.

This screenshot shows the DSC extension status, with more deployment details on the right pane.

While this seems simple enough in Azure, I found it a little less straightforward in AWS. Like Azure, bootstrap logs for the instance reside on the host itself; however, the logs aren’t shown in the console by default. Although there’s a blog post on AWS about viewing CloudFormation logs in CloudWatch, it was tailored to Linux instances. Keen for a similar experience to Azure, I decided to put together the following instructions to have bootstrap logs appear in CloudWatch.

To enable CloudWatch for instances dynamically, the first step is to create an IAM role that can be attached to EC2 instances when they’re launched, providing them with access to CloudWatch. The following JSON code shows a sample policy I’ve used to define my IAM role.
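The original JSON isn’t reproduced here, but the policy boils down to the standard CloudWatch Logs permissions. As a rough sketch using the AWS Tools for PowerShell (the role and policy names below are hypothetical), attaching it as an inline policy looks something like this:

# Sketch only: the actions below are the standard set needed to create log
# groups/streams and put log events; role and policy names are hypothetical.
$policyDocument = @"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
"@

# Attach the inline policy to the EC2 instance role
Write-IAMRolePolicy -RoleName "EC2-CloudWatch-Role" -PolicyName "CloudWatchLogsAccess" -PolicyDocument $policyDocument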

The next task is to create a script that can be used at the start of the bootstrap process to dynamically enable the CloudWatch plugin on the EC2 instance. The plugin is disabled by default and when enabled, requires a restart of the EC2Config service. I used the following script:
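The script itself isn’t reproduced here, but a minimal sketch of the approach, assuming the default EC2Config settings path, plugin name and process name, looks like this:

# enable-cloudwatch.ps1 (sketch) - flips the CloudWatch plugin to Enabled in the
# EC2Config settings file, then kills the EC2Config process so it restarts and
# picks up the change (paths/element names assumed to be the EC2Config defaults)
$configPath = "C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml"
[xml]$config = Get-Content $configPath

$plugin = $config.Ec2ConfigurationSettings.Plugins.Plugin | Where-Object { $_.Name -eq "AWS.EC2.Windows.CloudWatch.PlugIn" }
$plugin.State = "Enabled"
$config.Save($configPath)

# The service recovery settings start EC2Config again automatically
Stop-Process -Name "Ec2Config" -Force -ErrorAction SilentlyContinue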

It’s worth noting that the EC2Config service is set to recover by default and therefore starts by itself after the process is killed.

Now that we’ve got a script to enable the CloudWatch plugin, we need to change the default CloudWatch config file on the host prior to enabling the CloudWatch plugin. The default CloudWatch config file is AWS.EC2.Windows.CloudWatch.json and contains details of all the logs that should be monitored as well as defining CloudWatch log groups and log streams. Because there’s a considerable number of changes made to the default file to achieve the desired result, I prefer to create and store a customised version of the file in S3. As part of the bootstrap process, I download it to the host and place it in the default location. My customised CloudWatch config file looks like the following:

Let’s take a closer look at what’s happening here. The first three components are Windows event logs I’m choosing to monitor:

You’ll notice I’ve included the Desired State Configuration (DSC) event logs, as DSC is my configuration management tool of choice when it comes to Windows. When defining a Windows event log, a level needs to be specified, indicating the verbosity of the output. The values are as follows:

1 – Only error messages uploaded.
2 – Only warning messages uploaded.
4 – Only information messages uploaded.

You can add values together to include more than one type of message. For example, 3 means that error messages (1) and warning messages (2) get uploaded. A value of 7 means that error messages (1), warning messages (2), and information messages (4) get uploaded. For those familiar with Linux permissions, this probably looks very familiar!🙂

To monitor other Windows event logs, you can create additional components in the JSON template. The value of “LogName” can be found by viewing the properties of the event log file, as shown below:

[Screenshot: event log properties showing the LogName value]

The next two components monitor the two logs that are relevant to the bootstrap process:

Once again, a lot of this is self-explanatory. The “LogDirectoryPath” specifies the absolute directory path to the relevant log file, and the filter specifies the log filename to be monitored. The tricky thing here was getting the “TimeStampFormat” parameter correct. I used this article on MSDN plus trial and error to work this out. Additionally, it’s worth noting that cfn-init.log’s timestamp is in the local time of the EC2 instance, while EC2ConfigLog.txt uses UTC. Getting this right ensures you have the correct timestamps in CloudWatch.

Next, we need to define the log groups in CloudWatch that will hold the log streams. I’ve got three separate Log Groups defined:

You’ll also notice that the Log Streams are named after the instance ID. Each instance that is launched will create a separate log stream in each log group that can be identified by its instance ID.

Finally, the flows are defined:

This section specifies which logs are assigned to which Log Group. I’ve put all the WindowsEventLogs in a single Log Group, as it’s easy to search based on the event log name. It’s not as easy to differentiate between cfn-init.log and EC2ConfigLog.txt entries, so I’ve split those out into their own Log Groups.

So how do we get this customised CloudWatch config file into place? My preferred method is to upload the file, along with the set-cloudwatch.ps1 script, to a bucket in S3, then pull them down and run the PowerShell script as part of the bootstrap process. I’ve included a subset of my standard CloudFormation template below, showing the monitoring config key that’s part of the ConfigSet.
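The CloudFormation snippet itself isn’t shown here, but the commands that the monitoring config key ends up running on the instance boil down to something like the following (the bucket name and paths are hypothetical; Read-S3Object is part of the AWS Tools for PowerShell that ship on the Amazon Windows AMIs):

# Pull the customised CloudWatch config file and the enable script down from S3,
# then run the script to enable the plugin
Read-S3Object -BucketName "my-bootstrap-bucket" -Key "AWS.EC2.Windows.CloudWatch.json" `
              -File "C:\Program Files\Amazon\Ec2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json"
Read-S3Object -BucketName "my-bootstrap-bucket" -Key "set-cloudwatch.ps1" -File "C:\cfn\scripts\set-cloudwatch.ps1"

& "C:\cfn\scripts\set-cloudwatch.ps1"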

What does this look like in the end? Here we can see the log groups specified have been created:

[Screenshot: the CloudWatch log groups created by the config file]

If we drill further down into the cfninit-Log-Group, we can see the instance ID of the recently provisioned host. Finally, if we click on the Instance ID, we can see the cfn-init.log file in all its glory. Yippie!

[Screenshot: cfn-init.log entries in CloudWatch showing the provisioning error]

Hummm, looks like my provisioning failed because a file is non-existent. Bootstrap monitoring has clearly served its purpose! Now all that’s left to do is to teardown the infrastructure, remediate the issue, and reprovision!

The next step to reducing the amount of repetitive tasks in the infrastructure as code development process is a custom pipeline to orchestrate the provisioning and teardown workflow… More on that in another blog!


Office365 License Reporting in PowerBI from Microsoft Identity Manager

 

Overview

A common request I’m hearing from my customers is visibility of Office365 licensing. Typically this comes more from the management staff than the technical team, as they don’t have the know-how to get the info themselves. From a management perspective it is also about making sure they get full use of their licensing entitlements, and about knowing when they are running close to their licensing limit so the conversations about procuring additional licenses can be had.

In this post I detail how I’m using PowerShell, the Granfeldt PowerShell Management Agent, the Lithnet MIIS PowerShell module, Microsoft Identity Manager and PowerBI to build a simple reporting dashboard showing user Office365 license assignment as well as tenant Office365 licensing status.

Prerequisites

Logic Overview

The high-level process goes like this:

  1. Office 365 Tenant and User License info synchronises into the MIM Metaverse via the Office365 PowerShell MA. See how to here.
  2. Using the Lithnet MIIS PowerShell Module query the Metaverse to get User and Tenant Licensing information
  3. Using the PowerBIPS Module we create the Dataset and Tables in PowerBI
  4. Using the PowerBIPS Module we take the information from the query in step 2 and populate the tables in PowerBI
  5. In PowerBI we create our licensing Report and Dashboard, and share it with whoever needs the information

Office 365 Reporting in PowerBI from Microsoft Identity Manager

Here are Steps 2-5 from the Logic Overview.

Using the Lithnet MIIS PowerShell Module we query the MV to get User and Tenant Licensing information

As indicated in the other posts, I have an ObjectType in the Metaverse for LicensePlans. On each user I also have the assigned and provisioned licenses.

The image below shows my LicensePlans MV ObjectType.

The image below shows my Person MV ObjectType with assigned and provisioned licenses.

Below is the PowerShell to query my Metaverse to get the licensing information.
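The exact script isn’t reproduced here, but the shape of it, assuming the Lithnet MIIS Automation module’s New-MVQuery and Get-MVObject cmdlets (the attribute names are the illustrative ones from my schema, and the operator/parameter names should be checked against the module documentation), is roughly:

Import-Module LithnetMiisAutomation

# Tenant licensing: all of my LicensePlans objects in the Metaverse
# (attribute name and IsPresent-style operator are assumptions - check the module docs)
$planQuery = New-MVQuery -Attribute "AccountSkuId" -Operator IsPresent
$licensePlans = Get-MVObject -ObjectType LicensePlans -Queries @($planQuery)

# User licensing: person objects that have licenses assigned
$userQuery = New-MVQuery -Attribute "o365AssignedLicenses" -Operator IsPresent
$licensedUsers = Get-MVObject -ObjectType person -Queries @($userQuery)

"{0} license plans and {1} licensed users retrieved from the Metaverse" -f $licensePlans.Count, $licensedUsers.Count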

Using the PowerBIPS Module we create the Dataset and Tables in PowerBI

Using the PowerBIPS PowerShell module we create the Dataset, Tables and Schema to hold the licensing data.
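Something along these lines, assuming the PowerBIPS cmdlets Get-PBIAuthToken and New-PBIDataSet (the dataset/table names and columns are illustrative, and my real dataset has four tables rather than the two shown here):

Import-Module PowerBIPS

# Authenticate to the Power BI REST API (the clientId comes from your own Azure AD app registration)
$authToken = Get-PBIAuthToken -clientId "<your-clientId>"

# Define the dataset schema: dataset name, tables and columns
$dataSetSchema = @{
    name   = "Office365Licensing"
    tables = @(
        @{ name    = "TenantLicenses"
           columns = @(
               @{ name = "AccountSkuId";  dataType = "String" },
               @{ name = "ActiveUnits";   dataType = "Int64"  },
               @{ name = "ConsumedUnits"; dataType = "Int64"  }
           ) },
        @{ name    = "UserLicenses"
           columns = @(
               @{ name = "UserPrincipalName"; dataType = "String" },
               @{ name = "AssignedLicense";   dataType = "String" }
           ) }
    )
}

$dataSet = New-PBIDataSet -authToken $authToken -dataSet $dataSetSchema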

 

Using the PowerBIPS Module we take the information from the query in step 2 and populate the tables in PowerBI

The following commands use the defaults for the PowerBIPS module and will export your entire dataset to PowerBI.
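Roughly like this (the rows below are illustrative; in practice they come from the Metaverse query in step 2):

# Push the licensing rows into the corresponding PowerBI tables
$tenantRows = @( @{ AccountSkuId = "tenant:ENTERPRISEPACK"; ActiveUnits = 500; ConsumedUnits = 423 } )
$userRows   = @( @{ UserPrincipalName = "jane.doe@contoso.com"; AssignedLicense = "ENTERPRISEPACK" } )

$tenantRows | Add-PBITableRows -authToken $authToken -dataSetId $dataSet.id -tableName "TenantLicenses"
$userRows   | Add-PBITableRows -authToken $authToken -dataSetId $dataSet.id -tableName "UserLicenses"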

 

In PowerBI we create our licensing Report and Dashboard, and share it with whoever needs the information

Now in PowerBI you will see your Dataset with the four tables containing the data you exported from the Metaverse.

It’s time to create a report by selecting columns from the tables. Pin your report(s) to an existing or new Dashboard. Customise as required and share with whoever needs the info.

Summary

Using PowerShell I’ve shown:

  • Importing Office365 User and Tenant Licensing information to Microsoft Identity Manager via the Granfeldt PowerShell Management Agent
  • Using the Lithnet MIIS PowerShell Module I’ve extracted Office365 User and Tenant Licensing information from the Microsoft Identity Manager Metaverse
  • Using the PowerBI PowerShell Module I’ve created a Dataset in PowerBI and exported the Office365 User and Tenant Licensing information

All the data is present for any report you may want around O365 licensing. Using the PowerBI PowerShell Module you can flush the tables and refresh the data as required.

Enjoy.

Follow Darren on Twitter @darrenjrobinson

 

 

Implementing the Gradient Descent Algorithm in Hadoop for large-scale data

In this post I will be exploring how we can use MapReduce to implement the Gradient Descent algorithm in Hadoop for large-scale data. As we know, Hadoop is capable of handling data at petabyte scale.

Before starting, we first need to understand what Gradient Descent is and where we can use it. Below is an excerpt from Wikipedia:

Gradient descent is a first-order iterative optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
[Source: https://en.wikipedia.org/wiki/Gradient_descent ]

If you look at the algorithm, it is an iterative optimisation algorithm. So if we are talking about millions of observations, then we need to iterate over those millions of observations and adjust our parameter (theta).
Mathematical notations:

[Equation image: the gradient descent update rule]

where

[Equation image: the hypothesis function f(x)]

and

[Equation image: the feature vector x]

Where p is the number of features.
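The equation images above aren’t reproduced here; for reference, the standard batch gradient descent update for a linear hypothesis (the form used in the rest of this post) can be written in LaTeX as:

\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( f(x^{(i)}) - y^{(i)} \right) x_j^{(i)}, \qquad j = 0, 1, \dots, p

f(x) = \theta^{T} x = \theta_0 x_0 + \theta_1 x_1 + \dots + \theta_p x_p, \qquad
x = \begin{bmatrix} x_0 & x_1 & \dots & x_p \end{bmatrix}^{T}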

Now, the question is: how can we leverage Hadoop to distribute the workload to minimize the cost function and find the theta parameters?

The MapReduce programming model comprises two phases: 1. Map and 2. Reduce, shown in the picture below. Hadoop lets the programmer focus only on the map and reduce phases, while the rest of the workload is taken care of by Hadoop; programmers do not need to think about how the data is going to be split, etc. Please visit https://en.wikipedia.org/wiki/MapReduce to learn more about the MapReduce framework.

[Figure: Hadoop MapReduce framework – multiple mappers with a single reducer]

When a user uploads data to HDFS, the data is split and saved across various data nodes. We know Hadoop will provide a subset of the data to each Mapper, so we can program our mapper to emit a PartialGradientDecent serializable object. For instance, if one split has 50 observations, then that mapper will return 50 partial gradient descent objects. Andrew Ng explains this well at https://www.coursera.org/learn/machine-learning/lecture/10sqI/map-reduce-and-data-parallelism

One more thing: there is only ONE reducer in this example, so the reducer will get a whole lot of partial gradients. It would be better to introduce a combiner so that the reducer receives a smaller number of PartialGradientDecent objects, or you can apply the in-mapper combining design pattern for MapReduce, which I will cover in the next post.

Now let’s get into the Java MapReduce program. I would recommend doing some reading about Writable in Hadoop first.

Here is the mapper code that emits the PartialGradientDecent object:

The mapper does the following:

  1. Parses the received data into a data point and validates it.
  2. If the data point is not valid, increments a counter (this counter is used to debug how many invalid records were received by the mapper).
  3. Calculates the partial gradient and emits it.

[Equation image: the partial gradient emitted by the mapper]
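That is, assuming the linear hypothesis and squared-error cost from the Coursera lecture referenced above, each mapper emits the per-record contribution to the gradient:

\left( f(x^{(i)}) - y^{(i)} \right) x_j^{(i)}, \qquad j = 0, 1, \dots, p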

Let’s have a look at the reducer code:

The reducer does the following:

  1. Sums all the partial gradients emitted by the mappers.
  2. Updates the theta parameters.

The reducer receives all the partial gradients; if we are talking about millions of observations, it will have to iterate over all of them to compute the sum. To overcome this issue, we can introduce a combiner that does partial sums of the partial gradients and emits those to the reducer. In that case the reducer will receive only a few partial gradients. The other approach is to implement the in-mapper combining pattern.

The last piece of the puzzle is the Driver program, which triggers the Hadoop job for the number of iterations you need. The Driver program is also responsible for supplying the initial theta and alpha.

You can find out more about PartialGradientWritable at http://blog.mmasood.com/2016/08/implementing-gradient-decent-algorithm.html

That’s it for now. Stay tuned.

Office365 Licensing Management Agent for Microsoft Identity Manager

Licensing for Office365 has always been a moving target for enterprise customers. Over the years I’ve implemented a plethora of solutions to keep licensing consistent with entitlement logic. For some customers this is as simple as everyone getting, say, an E3 license. For other institutions there is often a mix of ‘E’ and ‘K’ licenses depending on EmployeeType.

Using the Granfeldt PowerShell Management Agent to import Office365 Licensing info

In this blog post I detail how I’m using Søren Granfeldt’s extremely versatile PowerShell Management Agent yet again, this time to import Office365 licensing information into Microsoft Identity Manager.

I’m bringing in the licenses associated with users as attributes on the user account. I’m also bringing in the licenses from the tenant as their own ObjectType into the Metaverse. This includes the information about each license such as how many licenses have been purchased, how many licenses have been issued etc.

Overview

I’m not showing assigning licenses. In the schema I have included the LicensesToAdd and LicensesToRemove attributes. Check out my Adding/Removing User Office365 Licences using PowerShell and the Azure AD Graph RestAPI post to see how to assign and remove licenses using PowerShell. From that you can work out your logic to implement an Export flow to manage Office365 licenses.

Getting Started with the Granfeldt PowerShell Management Agent

If you don’t already have it, what are you waiting for? Go get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different.

Three items I had to work out that I’ll save you the pain of are:

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA configuration requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present.
  • The credentials you give the MA to run as are the credentials for the account that has permissions to the Office365 Tenant. Just a normal account is enough to enumerate it, but you’ll need additional permissions to assign/remove licenses.
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use dir /x to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~2\2010\SYNCHR~1\EXTENS~2\O365Li~1

Schema.ps1

My Schema is based around the core Office365 Licenses function. You’ll need to create a number of corresponding attributes in the Metaverse Schema on the Person ObjectType to flow the attributes into. You will also need to create a new ObjectType in the Metaverse for the O365 Licenses. I named mine LicensePlans. Use the Schema info below for the attributes that will be imported and the attribute object types to make sure what you create in the Metaverse aligns, so you can import the values. Note the attributes that are multi-valued.
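My full schema script isn’t reproduced here, but a cut-down sketch of the Granfeldt PS MA schema format (attribute names are illustrative, and the multi-valued attributes are omitted; check Søren’s documentation for the multi-valued syntax) looks like this:

# Schema.ps1 (sketch) - one PSCustomObject per ObjectType, using the
# Granfeldt "AttributeName|Type" naming convention and the "Anchor-" prefix

# Person (user) object type
$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-UserPrincipalName|String" -Value "user@tenant.onmicrosoft.com"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "user"
$obj | Add-Member -Type NoteProperty -Name "DisplayName|String" -Value "Jane Doe"
$obj | Add-Member -Type NoteProperty -Name "IsLicensed|Boolean" -Value $true
$obj

# LicensePlans (tenant license) object type
$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-AccountSkuId|String" -Value "tenant:ENTERPRISEPACK"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "LicensePlans"
$obj | Add-Member -Type NoteProperty -Name "ActiveUnits|String" -Value "500"
$obj | Add-Member -Type NoteProperty -Name "ConsumedUnits|String" -Value "423"
$obj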

Import.ps1

I’m not going to document the logic that the Import.ps1 implements here, as this post goes into all the details: Enumerating all Users/Groups/Contacts in an Azure tenant using PowerShell and the Azure Graph API ‘odata.nextLink’ paging function.

Password Script (password.ps1)

Empty as not implemented

Export.ps1

Empty as not implemented

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using an Office 365 account to connect to Office365 and import the user and license data.

As per the Schema script earlier in this post I’m bringing user licensing metadata as well as the Office365 Tenant Licenses info.

The attributes to bring through are aligned with what is specified in the Schema file.

Flow the attributes through to the attributes I created in the Metaverse on the Person ObjectType, and to the new ObjectType LicensePlans.

Wiring it up

To finish it up you’ll need to do the usual tasks of creating run profiles, staging the connector space from Office365 and importing into the Metaverse.

Enjoy.

Follow Darren on Twitter @darrenjrobinson

How to quickly recover from a FAILED AzureRM Virtual Machine using PowerShell

Problem

I have a development sandpit in Azure which I use quite a lot to test and mess with different ideas and concepts. This week, when shutting it down, things didn’t go that smoothly. All but one virtual machine stopped and deallocated cleanly; the last one just didn’t make it. I tried resizing the VM. I tried changing its configuration, and obviously tried starting it up many times via the portal and PowerShell, all without any success.

I realised I was at the point where I just needed to build a new VM, but I wanted to attach the OS VHD from the failed VM to it so I could recover my work and not have to re-install the applications and tools on the OS drive.

Solution

I created a little script that does the basics of what I needed to recover a simple single-HDD VM and keep most of the core settings. If your VM has multiple disks you can probably attach the other HDDs manually after you have a working VM again.

Via the AzureRM Portal select your busted VM and get the values highlighted in the screenshot below for VM Name, VM Resource Group and VM VNet Subnet.

The Script then:

  • Takes the values for the information you got above as variables for:
    • Broken VM Name
    • Broken VM Resource Group
    • Broken VM VNet Subnet
  • Queries the failed VM and obtains the following:
    • VM Name
    • VM Location
    • VM Sizing
    • VM HDD information (location etc)
    • VM Networking
  • Copies the VHD from the failed virtual machine to a new VHD file so you don’t have to bite the bullet and kill the failed VM off straight away to release the VHD from the VM.
    • Names the copied VHD <BustedVMName-TodaysDate(YYYYMMDD)>
  • Creates a new Virtual Machine with:
    • Original Name suffixed by ‘2’
    • Same VM Sizing as the failed VM
    • Same VM Location as the failed VM
    • Assigns the copy of the failed VM OS Disk to the new VM
    • Creates a new Public IP for the VM named <NEWVMName-IP2>
    • Creates a new NIC for the VM named <NEWVMName-NIC2>
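The original script isn’t reproduced here, but a condensed sketch of those steps, assuming an unmanaged (VHD-based) VM, the AzureRM cmdlets, and that the VNet and storage account sit in the same resource group as the VM, looks like this:

# Inputs - change these for your environment (names are illustrative)
$brokenVMName          = "BROKENVM"
$brokenVMResourceGroup = "BROKENVM-RG"
$brokenVMSubnetName    = "Subnet1"

# Query the failed VM for its location, size, OS disk and networking details
$vm        = Get-AzureRmVM -ResourceGroupName $brokenVMResourceGroup -Name $brokenVMName
$location  = $vm.Location
$vmSize    = $vm.HardwareProfile.VmSize
$osDiskUri = [uri]$vm.StorageProfile.OsDisk.Vhd.Uri

# Copy the OS VHD so the failed VM doesn't need to be deleted to release its disk
$storageAccountName = $osDiskUri.Host.Split('.')[0]
$storageAccount     = Get-AzureRmStorageAccount -ResourceGroupName $brokenVMResourceGroup -Name $storageAccountName
$srcContainer       = $osDiskUri.Segments[1].TrimEnd('/')
$srcBlob            = $osDiskUri.Segments[-1]
$newVhdName         = "$brokenVMName-$(Get-Date -Format 'yyyyMMdd').vhd"

Start-AzureStorageBlobCopy -Context $storageAccount.Context -SrcContainer $srcContainer -SrcBlob $srcBlob `
    -DestContext $storageAccount.Context -DestContainer $srcContainer -DestBlob $newVhdName
Get-AzureStorageBlobCopyState -Context $storageAccount.Context -Container $srcContainer -Blob $newVhdName -WaitForComplete

# New networking for the replacement VM (original name suffixed with '2')
$newVMName = "$($brokenVMName)2"
$vnet      = Get-AzureRmVirtualNetwork -ResourceGroupName $brokenVMResourceGroup | Select-Object -First 1
$subnetId  = ($vnet.Subnets | Where-Object { $_.Name -eq $brokenVMSubnetName }).Id
$pip       = New-AzureRmPublicIpAddress -Name "$newVMName-IP2" -ResourceGroupName $brokenVMResourceGroup -Location $location -AllocationMethod Dynamic
$nic       = New-AzureRmNetworkInterface -Name "$newVMName-NIC2" -ResourceGroupName $brokenVMResourceGroup -Location $location -SubnetId $subnetId -PublicIpAddressId $pip.Id

# Build the replacement VM and attach the copied OS disk
$newVM = New-AzureRmVMConfig -VMName $newVMName -VMSize $vmSize
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id
$copiedVhdUri = "https://$storageAccountName.blob.core.windows.net/$srcContainer/$newVhdName"
$newVM = Set-AzureRmVMOSDisk -VM $newVM -Name $newVhdName -VhdUri $copiedVhdUri -CreateOption Attach -Windows
New-AzureRmVM -ResourceGroupName $brokenVMResourceGroup -Location $location -VM $newVM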

Summary

With only three inputs, this script lets you quickly recover your busted and broken failed AzureRM virtual machine and be back up and running. Once you’ve verified your VM is all good, delete the failed one.

Take this script, change lines 2, 4 and 6 for the failed VM (or the VM you want to clone) to reflect your Resource Group, Virtual Machine Name and Virtual Machine Subnet, step through it, and let it loose.

Follow Darren on Twitter @darrenjrobinson

Schedule Office 365 PowerShell Tasks Using Azure Automation

Anyone who has used Office 365 knows that just creating your users or syncing them via Azure AD Connect really isn’t enough; instead we almost always have to run scheduled PowerShell scripts to manage tasks such as adding licenses or enabling features, like litigation hold.

Usually I would run these scripts on a management server, or an Azure AD Connect server, but what do you do if you have no on-premises environment? Or no Windows VMs in Azure? With Azure Automation you have the option to run basic PowerShell scripts without the need to run a full Windows OS, saving on licensing and compute costs.

Recently I needed to do just this for a cloud only based company. The below steps will show you how I configured Office 365 PowerShell using Azure Automation.

Prerequisites

  • An active Azure subscription. You can create a free test account here https://azure.microsoft.com/en-us/free/
  • A PowerShell script. I have attached an example script which sends an email showing used licenses in Office 365

Step 1 – Create an Azure Automation Account

Log into the new Azure portal https://portal.azure.com and browse to More Services / Automation Accounts. Click Add to create a new Automation account or use an existing account.

Name the Automation account and select your active Azure subscription. I have chosen to create a new resource group as this account does not require access to other Azure resources. An Azure Run As account is also not required.

Step 2 – Add the MSOnline Module

If your script connects to Office 365 using Connect-MsolService you will need to add the MSOnline PowerShell module. If you are only using a remote PowerShell session to Exchange Online this would not be required.

After selecting your Azure Automation account, select Assets, then Modules.

Select Browse gallery and search for the MSOnline PowerShell module.

Click on the MSOnline module and click Import to import the module as an asset.

Step 3 – Create a Credential Asset (Optional)

This step is optional, but I prefer not to have usernames and passwords inside PowerShell scripts for all to see. On the Assets page select Credentials.

Select Add a Credential and type the Office 365 username and password that has permissions to run your script.

Step 4 – Create the Automation Runbook

Next we will create the Runbook which will execute the PowerShell script. Click your Azure Automation account and select Runbooks.

Click Add a runbook.

Click Quick Create, Create a new runbook.

Enter a name for the runbook and select PowerShell as the runbook type.

Copy your PowerShell script into the runbook or use the below script as an example.
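If you don’t have a script handy, here’s a minimal sketch along the lines of the licence report example mentioned in the prerequisites (the credential asset name and email addresses are illustrative):

# Retrieve the credential asset created in Step 3 and connect to Office 365
$credential = Get-AutomationPSCredential -Name "Office365"
Connect-MsolService -Credential $credential

# Build a simple HTML table of SKUs with purchased and consumed units
$licenceSummary = Get-MsolAccountSku |
    Select-Object AccountSkuId, ActiveUnits, ConsumedUnits |
    ConvertTo-Html -Fragment | Out-String

# Email the report via Office 365 SMTP using the same credential
Send-MailMessage -To "it.reports@contoso.com" -From $credential.UserName `
    -Subject "Office 365 licence usage" -Body $licenceSummary -BodyAsHtml `
    -SmtpServer "smtp.office365.com" -Port 587 -UseSsl -Credential $credential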

If you are using your own PowerShell script and created a credential asset, you will now need to modify the script to import the credential. Expand the assets tree in the left menu; we can now see the name of the credential asset we created. Use the Get-AutomationPSCredential command to import the credential into your script. For further information on the available cmdlets, see https://msdn.microsoft.com/en-us/library/dn690262.aspx

[Screenshot: runbook editor with the credential asset shown under Assets]

Click Save to save the runbook.

To test the runbook, select Test pane, then Start. The runbook should execute and anything that is written to the host will be shown in this window.

Step 5 – Publish and Schedule the Runbook

Once you are happy that the runbook is running correctly, select Publish which will make the runbook available to schedule.

To create a schedule, select Schedules, then Add a Schedule.

Select Link a schedule to your runbook and Create a new schedule. In the below example I have a schedule that runs daily at 7:00 AM.

[Screenshot: new schedule configured to run daily at 7:00 AM]

Once the schedule is created, highlight it and click OK. It should now be linked to your runbook.

That’s it. Hopefully this will help you easily move your PowerShell scripts to Azure Automation.

Understanding Outlook Auto-Mapping

Auto-mapping is an Exchange & Exchange Online feature, which automatically opens mailboxes with Full Access permissions in a delegate’s Outlook client. The setting is configurable by an Administrator when Full Access permissions are assigned for a user. Once enabled, the periodic Autodiscover requests from the Outlook client will determine which mailboxes should be mapped for a user. Any auto-mapped mailboxes will be opened by the Outlook client in a persistent state and cannot be closed by the user. If users want to remove the auto-mapped mailbox from their Outlook client, Administrative intervention is required to remove the Full Access permission, or clear the auto-mapping flag.

There are two scenarios where you may want to use this configuration:

  • You would like mailboxes to automatically open & persist in Outlook, for users who are assigned full access permissions
  • When users require access to an Online Archive associated with the delegating mailbox – the Online Archive isn’t available if you add the mailbox via the “Open these additional mailboxes” console in a mailbox profile. Note that the Online Archive is also available via “Open Another User’s Mailbox” in Outlook Web App and, in some cases, when the delegating mailbox is added as an additional mailbox in the Outlook profile (this has proven to be unreliable depending on identity & permission assignment).

The auto-mapping feature may be undesirable for users who have access to a large number of mailboxes (taking up valuable Outlook screen real estate), or for users who want to control which mailboxes are open in their Outlook client.

If we take a look at the Autodiscover response for my Office 365 mailbox, we can see the AlternativeMailbox options that advertise a Shared Mailbox (including its Online Archive) for which I have Full Access permissions & auto-mapping enabled.

<?xml version="1.0" encoding="utf-8"?>
<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006">
<Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a">
...
<AlternativeMailbox>
<Type>Delegate</Type>
<DisplayName>! Shared Mailbox</DisplayName>
<SmtpAddress>sharedmailbox@kloud.com.au</SmtpAddress>
<OwnerSmtpAddress>sharedmailbox@kloud.com.au</OwnerSmtpAddress>
</AlternativeMailbox>
<AlternativeMailbox>
<Type>Archive</Type>
<DisplayName>In-Place Archive - ! Shared Mailbox</DisplayName>
<SmtpAddress>ExchangeGuid+b44be8a9-61b3-4cb4-a1f5-0b845b1fb419@kloud.mail.onmicrosoft.com</SmtpAddress>
<OwnerSmtpAddress>sharedmailbox@kloud.com.au</OwnerSmtpAddress>
</AlternativeMailbox>
...
</Account>
</Response>
</Autodiscover>

Auto-Mapping Principles

  • By default, auto-mapping is enabled whenever Full Access permissions are granted to the mailbox via Add-MailboxPermission. It must be explicitly set to false if you want it disabled for a user, i.e. -Automapping:$false
  • At this time, it is not possible to view the state of the auto-mapping setting for a mailbox (whether it has been enabled/disabled)
  • If you are using Mail-Enabled Security Groups to grant Full Access permissions, auto-mapping will not work for the group members. The auto-mapping feature must be assigned by granting Full Access to individual user objects

Managing the Setting

It is enabled per-user, for new or existing Full Access delegates:

Add-MailboxPermission sharedmailbox@kloud.com.au -AccessRights FullAccess -User david.ross@kloud.com.au -Automapping:$true

You can disable auto-mapping for a single user by removing the user’s Full Access permission and then reinstating the permission with the -automapping:$false parameter defined:

Remove-MailboxPermission sharedmailbox@kloud.com.au -AccessRights FullAccess -User david.ross@kloud.com.au
Add-MailboxPermission sharedmailbox@kloud.com.au -AccessRights FullAccess -User david.ross@kloud.com.au -Automapping:$false

In Exchange Online, auto-mapping can be removed for all existing Full Access delegates on a mailbox by running the command:

Remove-MailboxPermission sharedmailbox@kloud.com.au -ClearAutoMapping

 

That’s it, simple! Hopefully this helps to explain the nuances of Outlook auto-mapping.

Migrating resources from AWS to Microsoft Azure

Kloud receives a lot of communications in relation to the work we do and the content we publish on our blog. My colleague Hugh Badini recently published a blog about Azure deployment models, from which we received the following legitimate follow-up question…

So, Murali, thanks for letting us know you’d like to know more about this… consider this blog a starting point🙂.

Firstly though…

this topic (inter-cloud migrations), as you might guess, isn’t easily captured in a single blog post, nor, realistically, in a series, so what I’m going to do here is provide some basics to consider. I may not answer your specific scenario but hopefully I’ll provide some guidance on approach.

Every cloud has a silver lining

The good news is that if you’re already operating in a cloud environment then you have likely had to deal with many of the fundamental differences between traditional application hosting and architecture and that of cloud platforms.

You will have dealt with how you ensure availability of your application(s) across outages; dealing with spikes in traffic via use of elastic compute resources; and will have come to recognise that in many ways, Infrastructure-as-a-Service (IaaS) in the cloud has many similarities to the way you’ve always done things on-prem (such as backups).

Clearly you have less of a challenge in approaching a move to another cloud provider.

Where to start

When we talk about moving from AWS to Azure we need to consider a range of things – let’s take a look at some key ones.

Understand what’s the same and what’s different

Both platforms have very similar offerings, and Microsoft provides many great resources to help those utilising AWS to build an understanding of which services in AWS map to which services in Azure. As you can see, the majority of AWS’ services have an equivalent in Azure.

Microsoft’s Channel 9 is also a good place to start to learn about the similarities, with there being no better place than the Microsoft Azure for Amazon AWS Professional video series.

So, at a platform level, we are pretty well covered, but…

the one item to be wary of in planning any move of an existing application is how it has been developed. If we are moving components from, say, an EC2 VM environment to an Azure VM environment then we will probably have less work to do as we can build our Azure VM as we like (yes, as we know, even Linux!) and install whatever languages, frameworks or runtimes we need.

If, however, we are considering moving an application from a more Platform-as-a-Service capability such as AWS Lambda, we need to look at the programming model required to move to its equivalent in Azure – Azure Functions. While AWS Lambda and Azure Functions are functionally the same (no pun intended) we cannot simply take our Lambda code and drop it into an Azure Function and have it work. It may not even make sense to utilise Azure Functions depending on what you are shifting.

It’s also important to consider the differences in the availability models in use today in AWS and Azure. AWS uses Availability Zones to help you manage the uptime of your application and its components. In Azure we manage availability at two levels – locally via Availability Sets and then geographically through use of Regions. As these models differ, it’s an important area to consider for any migration.

Tools are good, but are no magic wand

Microsoft provides a way to migrate AWS EC2 instances to Azure using Azure Site Recovery (ASR) and while there are many tools for on-prem to cloud migrations and for multi-cloud management, they mostly steer away from actual migration between cloud providers.

Kloud specialises in assessing application readiness for cloud migrations (and then helping with the migration), and we’ve found inter-cloud migration is no different – understanding the integration points an application has and the SLAs it must meet are a big part of planning what your target cloud architecture will look like. Taking into consideration underlying platform services in use is also key as we can see from the previous section.

If you’re re-platforming an application you’ve built or maintain in-house, make sure to review your existing deployment processes to leverage features available to you for modern Continuous Deployment (CD) scenarios which are certainly a strength of Azure.

Data has a gravitational pull

The modern application world is entirely a data-driven one. One advantage to cloud platforms is the logically bottomless pit of storage you have at your disposal. This presents a challenge, though, when moving providers where you may have spent years building data stores containing Terabytes or Petabytes of data. How do you handle this when moving? There are a few strategies to consider:

  • Leave it where it is: you may decide that you don’t need all the data you have to be immediately available. Clearly this option requires you to continue to manage multiple clouds but may make economic sense.
  • Migrate via physical shipping: AWS provides Snowball as a way to extract data out of AWS without needing to pull it over a network connection. If your solution allows it you could ship your data out of AWS to a physical location, extract that data, and then prepare it for import into Azure, either over a network connection using ExpressRoute or through the Azure Import/Export service.
  • Migrate via logical transfer: you may have access to a service such as Equinix’s Cloud Exchange that allows you to provision inter-connects between cloud and other network providers. If so, you may consider using this as your migration enabler. Ensure you consider how much data you will transfer and what, if any, impact the data transfer might have on existing network services.

Outside of the above strategies for transferring data, perhaps you can consider a staged migration where you only bring across chunks of data as required and potentially let older data expire over time. The type and use of the data obviously impact which approach to take.

Clear as…

Hopefully this post has provided a bit more clarity around what you need to consider when migrating resources from AWS to Azure. What’s been your experience? Feel free to leave comments if you have feedback or recommendations based on the paths you’ve followed.

Happy dragon slaying!

Exchange in Azure: NIC disabled/in error state

I recently had the need to build my own Exchange server within Azure and connect it to my Office 365 tenant.
I loosely followed the steps in this Microsoft article: https://technet.microsoft.com/library/mt733070(v=exchg.160).aspx to get my Azure (ARM) VMs and infrastructure deployed.

I initially decided to utilise an A1 Azure VM for my Exchange server to reduce my costs; however, upon successfully installing Exchange it was extremely slow, and basic things like the EAC and creating mailboxes would not function correctly due to the lack of resources. I found that resizing my VM to an A3 Azure VM resolved my issues and Exchange then functioned correctly.

It was after I powered down the Azure VM to a stopped (deallocated) state that I encountered issues.

I found that after I powered up the VM I could no longer connect to it, and once I enabled boot diagnostics I discovered that the NIC was disabled/in an error state.

After going down multiple troubleshooting paths, such as redeploying the VM, resizing the VM, changing subnets, etc., I discovered that patience was the key, and after about 20 minutes the NIC re-enabled itself and all was well.

I have run multiple tests with an A3 Azure VM and found that in some cases it could take anywhere from 20 to 40 minutes to successfully boot up, with 10 minutes being the quickest boot-up time.

Hopefully this assists someone out there banging their head against the wall trying to get this to work!