Feb 2016 Azure AD Connect Upgrade Fails – IndexOutOfRangeException resolution

I’ve recently been doing some work for a client who decided to upgrade their Azure AD Connect appliance to the latest February release. This was a prerequisite task for future work to follow. As an aside, it’s always nice to run the current version of the sync client, as Microsoft regularly update it with new features and improvements. A key driver for this client in particular was the fact that the new client (1.1.105.0 – released 16/2/2016) allows you to synchronise every 30 minutes, which is a welcome change from the previous 3-hour sync cycles.

The detailed Azure AD Connect version release history can be found – here

So it was decided… we would upgrade.

All sounds fairly painless really, yeah?

We completed the usual preparation tasks, such as forcing a sync, exporting the connector space, taking backups and snapshots etc. All of this completed as expected. We were ready to kick off the installation process of the new client (shown below.)

Shortly after the client upgrade process completed, I was greeted with this wonderful window.

The log file tells me this:

[11:36:13.912] [ 1] [INFO ] Found existing persisted state context.
[11:36:13.912] [ 1] [ERROR] Caught an exception while creating the initial page set on the root page.
Exception Data (Raw): System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at Microsoft.Online.Deployment.Types.Utility.AdDomainInfoEncoder.ConvertBack(Object value, Type targetType, Object parameter, CultureInfo culture)
   at Microsoft.Online.Deployment.Types.PersistedState.PersistedStateElement.ToContext(PersistedStateContainer state, PropertyInfo propertyInfo, PersistedElementAttribute attr, Object& value)
   at Microsoft.Online.Deployment.Types.Context.LocalContextBase.LoadFromState(PersistedStateContainer state, IPowerShell powerShell)
   at Microsoft.Online.Deployment.OneADWizard.UI.WizardPages.RootPageViewModel.GetInitialPagesCore()
   at Microsoft.Online.Deployment.OneADWizard.UI.WizardPages.RootPageViewModel.GetInitialPages()
[11:36:41.346] [ 1] [INFO ] Opened log file at path C:\Users\xxxxxx\AppData\Local\AADConnect\trace-20160218-113613.log

To resolve this issue, we need to action the following:

  1. Make sure the AAD Connect wizard is not running.

  2. Open this file: %ProgramData%\AADConnect\PersistedState.xml

  3. Take a backup (copy and rename PersistedState.xml to PersistedState.old)

  4. Proceed to edit the XML file using your favourite text editor.

  5. Find a section that looks like this:

<PersistedStateElement>
<Key>IAdfsContext.TargetForestDomainName</Key>
<Value>domain.com</Value>
</PersistedStateElement>

We need to update the <Value> element to look like this:

<PersistedStateElement>
<Key>IAdfsContext.TargetForestDomainName</Key>
<Value>domain.com,domain.com</Value>
</PersistedStateElement>

The domain.com value must be populated twice, separated by a comma.

You can now save the file and restart the AAD Connect wizard. It will now launch as expected.
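If you would rather script the fix, below is a minimal PowerShell sketch of steps 1 to 5. It assumes the default PersistedState.xml path and a single affected TargetForestDomainName element, so treat it as a starting point rather than a supported tool.

$path = "$env:ProgramData\AADConnect\PersistedState.xml"
Copy-Item $path "$path.old"                  # step 3 - take a backup first
[xml]$state = Get-Content $path
$node = $state.SelectSingleNode("//PersistedStateElement[Key='IAdfsContext.TargetForestDomainName']/Value")
$node.InnerText = "$($node.InnerText),$($node.InnerText)"   # duplicate the domain, comma separated
$state.Save($path)                           # save, then relaunch the wizard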

Following on from this, I was able to complete the upgrade as normal.

I can also now see that the sync cycle has been reduced to 30 minutes post upgrade (shown below).
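If you want to confirm this from PowerShell, the 1.1 client includes a built-in scheduler which you can query on the AAD Connect server (a quick sanity check, assuming the ADSync module is loaded):

Get-ADSyncScheduler
# AllowedSyncCycleInterval and CurrentlyEffectiveSyncCycleInterval should now report 00:30:00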

It appears that Microsoft have logged an official bug fix for this; however, the workaround above should get you out of trouble in the short term.

Happy upgrading!

Windows 10 – First Look: Scaling on the Surface Pro 3

As a fellow Surface user, I love my device.

The Surface is a great device that packs plenty of performance for heavy-duty workloads such as running guest virtual machines or 3D rendering. It’s also extremely light, which is great for work meetings and note taking on the go. You could say the Surface is great for any task that you can throw at it, almost…

Remember the first time you plugged your brand spanking new Surface into an external display to enable a little more desktop real estate in the office?

You froze and stared in amazement… Almost immediately you fired up your browser to search for a resolution (pun intended, eh?). What you saw before you was… Lego: giant icons and menu items that resembled cartoon-esque animated objects. Enter the dreaded scaling issue (and disappointment).

You quickly discovered that you weren’t alone; there were thousands of others who were equally frustrated. (Evidence – here, here and also here)

Your search results were swiftly followed by tweaking of display settings in an attempt to find the perfect happy medium. It almost became a game, a quest for the perfect balance which would allow you to still read your email on the inbuilt display, whilst not letting the lady across the street getting coffee read the document you were authoring on your external display. I for one spent at least an hour going backwards and forwards (something like this):

  • Tweak the display scaling setting
  • Sign-out to apply the settings
  • Sign back in to test new display settings
  • Decide that the aforementioned settings are not quite right and that you can do better.
  • Tweak settings further in an attempt to find the perfect balance (repeat the process over and over.)
  • Eventually give up and accept that you would adjust to the way the scaling works, plus… since the Surface (Pro 3 in my case) was great at everything else, this made the pain of the scaling a little more bearable (nothing’s perfect, right?)

You either decided on… this

Or this…


Whilst this all might sound a little dramatic, it’s fair to say that the scaling and display configuration on Windows 8 didn’t quite hit the mark. Its performance was inconsistent at best, sometimes appearing grainy or blurry. Windows 8 just wasn’t up to independent display scaling.

Where are you going with this dude?

I personally have been running the Windows 10 Insider Preview (Build 10130) on my Surface Pro 3 for the last few days. I was pleasantly surprised to discover that in the most recent build, Microsoft have finally added support for independent display scaling. Yes, you read that right (see below.)

Figure one – Inbuilt Surface Pro 3 display set at 150% (default) scaling.


Figure two – Attached external 1080p display set at 100% scaling.

Whilst this may sound trivial, you can finally use your external display in all of its intended glory. No more Lego, or cartoon style icons. Everything appears sharp and silky smooth (it actually took some adjusting to revert back to the look that the hardware vendor intended for us – a welcome change.)

This setting did not apply immediately for me. Because I log on to my SP3 with my Microsoft Account, my display settings were synchronised down from my previous Windows 8.1 install (125% scaling across the board).

I noticed the option to adjust individual display scaling; however, I could not get this to apply under my Microsoft Account. As a side note, I have previously had to troubleshoot my Microsoft account syncing random settings to multiple devices, so I suspected this was profile related.

To validate my suspicion, I created a separate local account. Once I proceeded through the logon fluff (Hi… we’re getting things ready…) I was greeted with native resolutions on both monitors. Viiiiiiiiiictory!

Since I am a fairly flexible user, it wasn’t too much pain for me to recreate my desktop profile to take advantage of the new scaling options. With cloud services such as Exchange Online, OneNote and OneDrive, this really is a trivial task nowadays, providing you can spare the bandwidth.

Following on from this, I simply reassociated my Microsoft Account with the new, improved profile. First impressions – fantastic!


I made the executive decision to test the Windows 10 Preview on physical hardware after running it in a VM for the past couple of weeks. My initial virtual machine testing was flawless and relatively pain free, so I decided to take the plunge. Given that Microsoft have announced a global release date of July 29, 2015, I mitigated my concerns by assuring myself that the new OS must be mostly complete…

I also took a backup of my previous Windows 8.1 install before beginning :)


For those of you interested in upgrading to the official release of Windows 10, you should review the following post – here

For those of you who are a little more adventurous and want to test the preview like I did. Sign up – here

I must stress that you should carry out proper testing and make your own decision about running pre-release software. This blog post comes with no assurances or guarantees (even though my experience has been relatively painless.) Microsoft have not yet officially confirmed whether there will be an upgrade path from the Insider Program once Windows 10 hits RTM, so consider this also. You may be up for a complete re-install if you do decide to take the plunge.

I for one am looking forward to the July 29 release date; initial impressions indicate that this is a welcome change for Surface users.

I’ll leave you all with a teaser whilst I wrap up this blog post….

I am off to test Azure AD join functionality, this is big news for Cloud Services and users alike!

I will aim to cover this off in an upcoming blog post around July 29 :)

Happy reading!


Kerberos Web Application Configuration and Federation.

I’ve spent a lot of time at a client site recently working on a large complex application migration project. In my scenario, the client is migrating applications from another domain, to their own. There are no domain trusts in place, so you could consider it as an acquisition/merger type scenario.

One of the common challenges often encountered in this type of work is troubleshooting the Kerberos authentication process for web apps. Once the concepts of Kerberos authentication are understood, the process is relatively straightforward. However, understanding this seems to be a common issue in many IT environments, especially when the line between a traditional infrastructure resource (who may be responsible for configuring service accounts and SPNs) and a developer (who may be responsible for configuring the IIS platform and deploying applications) becomes somewhat grey.

I decided to write a blog about my experience… Hopefully this will help you troubleshoot the Kerberos implementation process, whilst also explaining how to share or “federate” your applications with disparate parties.

How do I know if this process is suitable for my environment?

  • You have legacy line of business applications which aren’t claims aware.
  • You need to share your line of business applications between forests which do not have domain trusts or federation services in place.
  • You have an identity available in each domain – given that there are no trusts or federation in place, this is key.
  • You just want to know more about Kerberos Authentication and Delegation.

Back to basics – single domain Kerberos authentication process.

Below is a brief explanation of how the Kerberos authentication protocol works in a single domain environment. For more information, read the official Microsoft documentation.

single domain Kerberos authentication process

  1. The client contacts the KDC, asserts its identity, and requests a ticket-granting ticket (TGT).
  2. The KDC issues the TGT to the client, encrypting the reply with the user’s password hash so that only the legitimate user can decrypt it.
  3. The client requests a service ticket from the KDC’s ticket-granting service (TGS), presenting its TGT as proof of identity.
  4. Based on the TGT presented, the TGS issues a service ticket to the client.
  5. The client presents its service ticket to the application server for authentication.
  6. Upon successful authentication, a client/server session is established and the normal request/response process continues.
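If you would like to watch these steps happen on a domain-joined client, the built-in klist tool will show the tickets as they are issued. The SPN below is a hypothetical example:

klist tgt                                  # steps 1-2: display the cached ticket-granting ticket
klist get HTTP/prodweb01.domainb.local     # steps 3-4: request a service ticket for a given SPN
klist                                      # list all cached tickets, including the new service ticket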

The Scenario

In the scenario presented below, we have two companies, each with their own instance of AD DS (DOMAINA and DOMAINB).

  • Applications have been migrated from DOMAINA to DOMAINB.
  • DOMAINA users are currently being migrated to DOMAINB, therefore they have a user account object in each domain.
  • Some users will access applications internally (from DOMAINB). Some users will access applications externally (from DOMAINA).

dual AD DS scenario

Now you may be thinking… How will the remote users access applications externally via the internet using the Kerberos authentication protocol?

Good question 🙂

As we all know, Kerberos authentication (generally speaking) does not allow an internet-connected client to authenticate directly. This is because the Kerberos Key Distribution Centre (KDC) role is usually deployed to a domain controller, and it is understandable that we do not want this role accessible from the internet.

Enter Kerberos Protocol Transition (KPT)!!

KPT enables clients that are unable to get Kerberos tickets from the domain controller to pass through a service that “transitions” the client’s authentication into a true Kerberos authentication request.

In the scenario presented above, the KPT role is played by the load balancer or application delivery controller. This is a common feature provided by many vendors in the Application Delivery Controller (ADC) space. For the remainder of this article, this will be referred to as the “Kerberos SSO engine.”

To ensure a pleasant user experience, external users will browse to an application portal, hosted by the ADC in DOMAINB.local. They authenticate ONCE to the ADC and from that point onwards they can access multiple web applications provided by the ADC (which will effectively proxy their credentials to each web application server using KPT).

Internal users will continue to access applications directly.

Putting it all together – Required Components

Now that you understand the scenario, let’s cover off the required components to make the solution work.

Kerberos SSO engine – APPGW.DOMAINB.local

The Kerberos SSO Engine role is played by the ADC. Upon successful authentication to the web portal, it will proxy users’ credentials to multiple web applications, ensuring a single sign-on experience.

  • The Kerberos SSO Engine requires a service account which allows the ADC to retrieve Kerberos tickets on behalf of the user authenticating to the application portal (once SPNs and delegation have been configured).

Web Farm – WEBSRV1 and WEBSRV2.DOMAINB.local

The Web Farm is responsible for hosting web applications.

  • Each application requires its own IIS Application Pool
  • Each unique IIS Application Pool requires its own domain service account
  • Each domain service account will require Service Principal Names (SPNs) configured.

Service Principal Names (SPNs)

SPNs will need to be configured for each service account which will run the…

  • Kerberos SSO engine
  • IIS Application Pools.

SPNs should always be in the format SERVICE/FQDN and SERVICE/NetBIOS name. This is a simple concept which often causes a lot of pain when troubleshooting Kerberos authentication processes. For example, if you had a website with the host header “prodweb01”, you would configure the following SPNs “HTTP/prodweb01” and “HTTP/prodweb01.domainb.local” on the service account which is responsible for running the application pool in IIS.
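To make this concrete, here is what the registration for the hypothetical “prodweb01” example would look like, assuming the application pool runs as a service account named DOMAINB\svc_prodweb (an invented name). The -S switch checks for duplicates before adding:

setspn -S HTTP/prodweb01 DOMAINB\svc_prodweb
setspn -S HTTP/prodweb01.domainb.local DOMAINB\svc_prodweb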

Application Delivery Controller

Whilst it is out of scope for the purpose of this post to document the configuration process of an ADC, it is worthwhile noting that the roles listed below are common features provided by many vendors in the ADC space today.

In my scenario, I used a free trial of an F5 BIG-IP device hosted on Amazon Web Services (AWS) which comes with a deployment guide.

It is also worth noting that AWS offers a free trial of the Citrix NetScaler, which is a competitive alternative to the F5.

The ADC has three primary roles in the scenario presented above:

  • To load balance the web farm which will host our web-based applications
  • To provide an application portal style page for external users (DOMAINA) to access their web applications – On an F5 device, these are called Webtops. On a Citrix NetScaler, these are called Gateways.
  • To act as the Kerberos SSO engine, which will carry out the Kerberos Protocol Transition (KPT) process on behalf of each user.

I will leave it up to you to evaluate which device is the best fit for your organisation; if you already have one of these devices available to you, then the decision may be that simple :).

Web Farm and Service Account Configuration

In the scenario I presented above, I used two Windows Server 2012 R2 VMs with IIS installed to host my web farm (you guessed it – the VMs were also located in AWS). The web servers were then placed into a server pool on my ADC and presented by a single VIP for load balancing purposes. Finally, I created a dummy website to use as a test page.

Now we are ready to get into the “nuts and bolts” of the Kerberos web application configuration.

Configuration Guide

The following steps assume that you have created a test webpage to perform the configuration on (shown below).

test web page

  1. Launch IIS Manager and select your Website > Authentication.

    IIS Manager Authentication

    As you can see above, by default only Anonymous Authentication is allowed.

  2. Now we need to enable Windows Authentication and disable Anonymous Authentication. This is a common stumbling block I have encountered in the field: if Anonymous Authentication is enabled, IIS will always try to authenticate with it first, even if other methods are enabled. You can read more about IIS authentication precedence on MSDN. Your configuration should now look like the image shown below.

    IIS Authentication Settings

  3. As you’re probably aware, the Windows Authentication providers consist of Negotiate and NTLM. Negotiate is actually a container which tries Kerberos as the first authentication method and falls back to NTLM when Kerberos fails. We now need to modify our providers to ensure Negotiate takes precedence. Select Windows Authentication > Providers and make sure that Negotiate is at the top of the list, as shown below. (A scripted equivalent of steps 2 and 3 follows this list.)

    Providers Window showing Negotiate at top
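For repeatable builds, steps 2 and 3 can also be scripted. The sketch below uses the WebAdministration module and assumes the site is named “TestSite”; adjust the names to suit your environment.

Import-Module WebAdministration

# Step 2: disable Anonymous and enable Windows Authentication for the site
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'TestSite' -Filter 'system.webServer/security/authentication/anonymousAuthentication' -Name 'enabled' -Value $false
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'TestSite' -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name 'enabled' -Value $true

# Step 3: verify the provider order - Negotiate should be listed first
Get-WebConfigurationProperty -PSPath 'IIS:\' -Location 'TestSite' -Filter 'system.webServer/security/authentication/windowsAuthentication/providers' -Name 'Collection' | Select-Object value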

Service Account and SPN Configuration

We are now ready to configure our service account which will run the Application Pool for our test website.

Since we would like to access our website using a custom host header, we need to register a Service Principal Name (SPN). Furthermore, given that our website is operating in a web farm, we are best placed to register the SPN to a domain service account and use the aforementioned service account to run the test website’s Application Pool (on both members of our web farm).

Registering an SPN to a computer account will not work in this scenario given that we have multiple web farm members. Kerberos gets very unhappy when we have duplicate SPNs registered to multiple accounts, and because of this I would STRONGLY advise you to use domain service accounts. One of the things I have taught myself to check first when troubleshooting Kerberos issues is to validate that there are no duplicate SPNs configured (you can list an account’s SPNs with SETSPN -L, or search for duplicates with SETSPN -X).

For the purpose of this example I have created a domain user account called “IIS_Service”. As you can see below, there are currently no SPNs configured on this account.

Note: if you aren’t clear on the exact purpose of an SPN, please do some reading before proceeding.

IIS Service Properties window

Now that you are clear on the purpose of an SPN, let’s review the configuration…

Website bindings (host header): http://testsite and http://testsite.domainb.local
IIS service account: DOMAINB\IIS_Service
SPN commands:
setspn -S HTTP/testsite IIS_Service
setspn -S HTTP/testsite.domainb.local IIS_Service
  1. We are now ready to register the SPNs to the IIS Service account. SPNs should always be in the format SERVICE/FQDN and SERVICE/NetBIOS name. You can do this by running the “SPN commands” listed above from an administrative command prompt (with domain admin rights); you should see an output similar to the following…

    setspn output console window

  2. It is good practice to verify that the SPNs you configured have been entered correctly. You can do this by running “setspn -L <domain>\<account>” as shown below.
    Checking SPNs have been setup correctly

Now that we have verified that our SPNs have been configured correctly, we can return to the Website and Application Pool to finalise the configuration.

In the next section we will define the service account (IIS_Service) used to run the website’s Application Pool. IIS will use this service account to decrypt the Kerberos ticket (presented by the client) and authenticate the session.

Application Pool Configuration

  1. Navigate to the website’s Application Pool:
    Application Pool
  2. Select Advanced Settings > Browse to Identity and change the service account to IIS_Service

    Changing App Pool Identity

  3. Validate that the service account was entered correctly.
    Checking App Pool Identity
  4. Navigate to IIS > Sites > Test Site > Configuration Editor.
  5. From the drop down menu, browse to system.webServer > security > authentication > windowsAuthentication:
    Configuration Editor
  6. Change useAppPoolCredentials to True.

    Note: by setting the useAppPoolCredentials value to True, you have configured IIS to use the domain service account to decrypt the client’s Kerberos ticket, which is presented to the web server to authenticate the session.

  7. You will need to run an IISRESET to apply the configuration change. Unfortunately, recycling the Application Pool and website is not sufficient. (A scripted equivalent of steps 4 to 7 follows this list.)
  8. Test the web application – your browser should have http://*.domainb.local in the local intranet zone to ensure seamless single sign on.
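For reference, here is a scripted sketch of steps 4 to 7, again assuming a site named “TestSite”:

Import-Module WebAdministration

# Steps 4-6: set useAppPoolCredentials to true for the site
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'TestSite' -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name 'useAppPoolCredentials' -Value $true

# Step 7: a full IIS reset is required - recycling the pool is not sufficient
iisreset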

You can validate that Kerberos authentication is working correctly by inspecting the traffic with Fiddler:

Fiddler used to show Kerberos ticket

This completes the web farm and account configuration.

Kerberos SSO Engine and Delegation Explained

Now that we have successfully configured our web site to use Kerberos authentication we need to configure delegation to allow the ADC to perform KPT (like we discussed earlier in the post).

As you have probably guessed, the ADC also requires a service account to perform KPT. This will allow it to act on behalf of (i.e. proxy for) users to complete the Kerberos ticket request process. This is especially useful when our users are external to the network, accessing applications via a secure portal, as per the opening scenario. (Yes folks, we have almost gone full circle!)

To handle this process, I have created another service account called “SSO_Service.” I have also registered the SPN “HTTP/apps.domainb.local” – as this is the URL for my application portal page (shown on the scenario diagram above).

We are now ready to configure Kerberos Constrained Delegation, but before we go any further I thought I should provide a brief explanation.

In its simplest form, delegation is the act of providing a service account (SSO_Service in my example) with the authority to delegate the logged in credentials of any user to a backend service. That’s it!

In my scenario, the front end service is the web application portal provided by our application delivery controller. The backend service is the web application (Test Site) we have configured. Therefore, upon successful authentication, credentials will be delegated from the web application portal to our backend web applications, providing a seamless single sign-on experience for our users. This is best represented by the conceptual diagram shown below.

Conceptual diagram

Kerberos Constrained Delegation – Configuration

  1. Locate the service account you would like to configure delegation access for (SSO_Service in my example) and select the Delegation tab.

    Delegation tab for service account

    TIP: you must have an SPN registered to the service account for the Delegation tab to be visible.

  2. Select Trust this user for delegation to specified services only > Use any authentication protocol. Because our external users authenticate to the ADC with a non-Kerberos method (the portal logon), the service account must be permitted to perform protocol transition; the Use Kerberos only option does not allow this.
  3. Select Add and browse to the service account you would like to delegate to and select the SPN we registered previously.
    Service Account Delegation

We have now authorised the SSO_Service account to delegate the logged in credentials of any user to the IIS_Service service account. It is important to remember that the IIS service account only has SPNs configured to access the test website :).
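If you prefer to configure delegation from PowerShell, the sketch below (using the ActiveDirectory module, run with domain admin rights) sets the same attributes that the Delegation tab writes:

Import-Module ActiveDirectory

# Allow SSO_Service to delegate only to the SPNs held by IIS_Service
Set-ADUser SSO_Service -Add @{ 'msDS-AllowedToDelegateTo' = @('HTTP/testsite', 'HTTP/testsite.domainb.local') }

# Enable protocol transition ("Use any authentication protocol")
Set-ADAccountControl SSO_Service -TrustedToAuthForDelegation $true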

Hopefully this helps you to better understand Kerberos Authentication whilst providing insight as to how you can share secure access to Kerberos web applications externally.

How to create custom images for use in Microsoft Azure

In this post I will discuss how we can create custom virtual machine images and deploy them to the Microsoft Azure platform. To complete this process you will need an Azure Subscription, the Azure PowerShell module installed and a pre-prepared VHD which you would like to use (VHDX is not supported at present.)

You can sign up for a free trial of Microsoft Azure here if you don’t currently hold a subscription.

Completing this process will allow you to take advantage of platforms which aren’t offered “out of the box” on Microsoft Azure, e.g. Server 2003 and Server 2008 for testing and development. Currently, Microsoft offers Server 2008 R2 as the minimum level from the Azure Image Gallery.

What do I need to do to prepare my image?

To complete this process, I built a volume license copy of Windows Server 2008 Standard inside a generation one Hyper-V guest virtual machine. Once the installation of the operating system had completed, I installed Adobe Acrobat Reader. I then ran sysprep.exe to generalise the image. This is important: if you don’t generalise your images, they will fail to deploy on the Azure platform.

I will detail the steps carried out after the operating system install below.

  1. Log into the newly created virtual machine
  2. Install the Hyper-V virtual machine additions (if your guest doesn’t already have it installed)
  3. Install any software that is required in your image (I installed Acrobat Reader)
  4. From an Administrative command prompt, navigate to %windir%\system32\sysprep and then execute the command “sysprep.exe”

  5. Once the Sysprep window has opened, select Enter System Out-of-Box Experience (OOBE) and tick the Generalize check box. The shutdown action should be set to Shutdown; this will shut down the machine gracefully once the Sysprep process has completed.
  6. Once you are ready, select OK and wait for the process to complete.
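If you prefer to script the generalisation, the same options can be passed to sysprep on the command line (equivalent to steps 5 and 6 above):

%windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown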

I built my machine inside a dynamically expanding VHD, mainly to avoid having to upload a file larger than necessary. Because of this, I chose to compact the VHD before moving on to the next step, using the disk wizard inside the Hyper-V management console. To complete this process, follow the steps below.

  1. From the Hyper-V Host pane, select Edit Disk
  2. Browse to the path of the VHD we were working on, in my case “C:\VHDs\Server2008.vhd”, and select Next
  3. Select Compact and Finish.
  4. Wait for the process to complete. Your VHD file is now ready to upload.
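As an alternative to the wizard, hosts running Windows 8/Server 2012 or later can compact the disk with the Hyper-V PowerShell module (the VHD must not be attached to a running VM):

Optimize-VHD -Path 'C:\VHDs\Server2008.vhd' -Mode Full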

What’s next?

We are now ready to upload the virtual machine image. To complete this process you will need access to the Azure PowerShell cmdlets and a storage account for the source VHD. If you do not already have a storage account created, you can follow the documentation provided by Microsoft here.

IMPORTANT: Once you have a storage account in Azure, ensure that you have a container called vhds. If you don’t have one, you can create it by selecting Add from the bottom toolbar; name it vhds and ensure the access is set to Private (container shown below).


We are now ready to connect to the Azure account to kick off the upload process. To do so, launch an administrative Azure PowerShell console and follow the steps below.

  1. Run the cmdlet Add-AzureAccount; this will present a window which will allow you to authenticate to Azure.

  2. On the next screen, enter your password. The PowerShell session is now connected.
  3. To verify that the session connected successfully, run Get-AzureAccount; you should see your account listed (shown below).

We are now ready to commence the upload process. You will need your storage blob URL. You can find this on the container page we visited previously to create the vhds container.

The complete command is as follows.

Add-AzureVhd -Destination "<StorageBlobURL>/vhds/Server2008.vhd" -LocalFilePath "C:\VHDs\Server2008.vhd"

Once you have executed the command, two things happen:

  1. The VHD file is indexed by calculating the MD5 hash

  2. Once the indexing process is completed, the upload starts.


This is very neat, as the demo gods often fail us… (my upload actually failed part way through.) Thankfully I was able to re-execute the command, which resumed the upload process where the first pass left off (see below.)

  3. Wait for the upload process to complete.

Creating the Image in the Azure console.

Now that our upload has completed, we are ready to create an image in the Azure console. This will allow us to easily spawn virtual machines based on the image we uploaded earlier. To complete this process you will need access to the Azure console and your freshly uploaded image.

  1. Select Virtual Machines from the management portal.
  2. Select Images from the virtual machines portal.
  3. Select Create an Image

  4. A new window titled Create an image from a VHD will pop up. Enter the following details (as shown below):
  • Name
  • Description
  • VHD URL (from your storage blob)
  • Operating System Family


Ensure you have ticked I have run Sysprep on the virtual machine or you will not be able to proceed.

  5. The Image will now appear under MY IMAGES in the image gallery.
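If you would rather stay in PowerShell, the same image registration can be performed from the session we opened earlier using Add-AzureVMImage (substitute your own storage blob URL and preferred image name):

Add-AzureVMImage -ImageName 'Server2008' -MediaLocation '<StorageBlobURL>/vhds/Server2008.vhd' -OS Windows -Label 'Custom Server 2008 image'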

Deploying the image!

All the work we have completed so far won’t be much use if the deployment phase fails. In this part of the process we will deploy the image to ensure it will work as expected.

  1. Select Virtual Machines from the management portal.
  2. Select New > Compute > Virtual Machine > From Gallery
  3. From the Choose an Image screen, select MY IMAGES. You should see the image that we just created in the gallery (shown below.)

  4. Select the Image and click Next
  5. Complete the virtual machine configuration with your desired settings.
  6. Wait for the virtual machine to complete provisioning.
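For a scripted deployment, the classic New-AzureQuickVM cmdlet can provision a VM from the custom image in one command. The service name, VM name and credentials below are invented placeholders:

New-AzureQuickVM -Windows -ServiceName 'myserver2008svc' -Name 'SERVER2008-01' -ImageName 'Server2008' -AdminUsername 'azureadmin' -Password '<StrongPassword>' -Location 'Southeast Asia'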

Connecting to the virtual machine.

The hard work is done! We are now ready to connect to our newly deployed machine to ensure it is functioning as expected.

  1. Select Virtual Machines from the management portal.
  2. Select the Virtual Machine and then click Connect from the toolbar at the bottom. This will kick off a download of the RDP file which will allow you to connect to the virtual machine.
  3. Launch the RDP file; you will be asked to authenticate. Enter the credentials you specified during the deployment phase and click OK


  4. You will now be presented with your remote desktop session, connected to your custom image deployed on Microsoft Azure.

I went ahead and activated my virtual machine. To prove there is no funny business involved, I have provided one final screenshot showing the machine activation status (which details the Windows version) and a snip showing the results of the ipconfig command. This lists the internal.cloudapp.net addresses, showing that the machine is running on Microsoft Azure.

Enjoy!