Installing WordPress in a Sub-Folder on Azure Websites

This blog post shows you how to install a WordPress website in a sub-folder on your Azure website. You might ask why you would need to do that, and that is a good question, so let me start with the reasons.

Why do it this way?

Assume that you have a website and you want to add a blog section. This is a very common practice and most companies nowadays have a blog section on their website (which replaces the old “news” page). To do that, we would need to either develop a blogging section of our website ourselves, or use a standard blogging engine such as WordPress. Let’s say we agreed to use WordPress, as it is quick and easy and has become the de facto engine for blogging. So how do we install it?
Well, we could use a sub-domain. Say my website is hasaltaiar.com.au; I could create a sub-domain, call it blog.hasaltaiar.com.au, and point this sub-domain to a WordPress website. This would work and it is good. However, it is not the best option. Ask me why. Did you ask? Never mind, I will answer :). Google and other search engines split domain authority when requests come in via sub-domains. This means that to maintain a better ranking and higher domain authority, it is advisable to have your blog as a sub-module of your site rather than a sub-domain. And this is why we are talking about installing the WordPress blog in a sub-folder.

The Database

To install a WordPress website, we need a MySQL database. Azure gives you ONE free MySQL database, which you can create from the Azure store (Marketplace); there is a good tutorial on how to do that here. If you have exhausted your quota and have already created a MySQL database before, you can either pay for a new database through the Azure Marketplace, or get a free one outside of Azure. I had this issue when I was creating this blog, as we had already used the ONE free MySQL database for another website, so I went to the ClearDB website and created a new free account with a free MySQL database. ClearDB is the same provider behind the MySQL databases in the Azure Marketplace, so you get a similar service, for free. One way or another, we will assume that you have a MySQL database for this website. Have the connection details for this database handy, as we will need them later for the installation.

Changes to Azure Website Configuration

In order to be able to install a WordPress website, you need to make two small changes to your Azure website:
1. Enable IIS to run PHP. Azure websites support multiple languages out-of-the-box; you just need to enable the ones you need. By default a site is configured to run .NET 4.5, and you can enable any other languages you need, such as Java, PHP or Python. We need to enable PHP 5.4 or 5.5, as in the screenshot below.

Azure Website Supported Runtime configuration

2. We also need to ensure that our Azure website has a list (or at least one entry) of default documents. This is also part of the configuration of your Azure website, and it tells IIS what document to look for when a user navigates to a path/folder. You can have anything in this list (as long as the entries are valid documents) and in any order you want. The important entry for us in this case is index.php, which is the default page for WordPress websites. Both changes can also be scripted, as shown in the sketch after the screenshot.

Azure website default document list
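
If you prefer scripting these two configuration changes rather than clicking through the portal, the classic Azure PowerShell module can do it as well. This is a minimal sketch only, assuming the Set-AzureWebsite cmdlet and a hypothetical site name; check the parameter names and values against your module version before relying on it:

# Enable PHP 5.5 on the site (hypothetical site name "my-website")
Set-AzureWebsite -Name "my-website" -PhpVersion "5.5"

# Make sure index.php is among the default documents; order matters to IIS
Set-AzureWebsite -Name "my-website" -DefaultDocuments @("default.htm", "default.html", "index.htm", "index.html", "index.php", "hostingstart.html")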

WordPress Install

You need to download the latest version of WordPress from wordpress.org. At the time of writing, the latest version is 4.0.1. Once you have downloaded the files, you can rename the folder to blog and upload it directly to your website, so that you end up with /your-website/blog under your Azure website. The files can sit under your wwwroot folder as in the screenshot below.

Azure Website Files hierarchy

When the file upload completes, navigate to your-website-root/blog/wp-admin. This starts the WordPress site; the WordPress engine detects that this is the first time it has run and prompts you with the installation wizard. The wizard is very simple, with only a few steps to set the site title, URL, language and so on. The main step is adding the database details, which come from the database you created earlier. Copy the database connection details in and you should be set. After adding the connection details, you will see a page saying the installation is complete, and it will ask you to log in (with your newly created credentials) to customise your blog.

That’s it, simple and easy. The method shown above saves you from having to maintain two different websites and gives you the flexibility of having your own, self-hosted WordPress site. I hope you find this useful and I would love to hear your thoughts and feedback.

MIM and Privileged Access Management

Recently Microsoft released Microsoft Identity Manager 2015 (MIM) Community Technology Preview (CTP). Those expecting a major revision of the FIM product should brace themselves for disappointment: the MIM CTP is more like a service release of FIM. MIM CTP V4.3.1484.0 maintains the existing architecture of the FIM Portal (still integrated with SharePoint), FIM Service, and the FIM Synchronisation Service. Also maintained are the separate FIM Service and FIM Sync databases. Installation of the CTP is almost identical to FIM 2010 R2 SP1, including the same woes with SharePoint 2013 configuration. The MIM CTP package available from Microsoft Connect contains an excellent step-by-step guide to install and configure a lab to test out PAM, so I won’t repeat that here.

In brief, the CTP adds the following features to FIM.

1. Privileged Access Management

This feature integrates with new functionality in Windows Server 10 Technical Preview to apply expiration to membership in Active Directory groups.

2. Multi Factor Authentication

FIM Self-Service Password Reset can now use Azure Multi-Factor Authentication as an authentication gate.

3. Improvements to Certificate Management

Incorporation of a Modern UI App, integration with ADFS, and support for multi forest deployment.

After installation, the first thing that is evident is how much legacy FIM branding is maintained throughout the CTP product – yes, this is MIM, please ignore the F word!:

Privileged Access Management

For this blog I’ll focus on Privileged Access Management (PAM), which looks to be the biggest addition to the FIM product in this CTP. PAM is actually a combination of new functionality within Windows Server 10 Technical Preview Active Directory Domain Services (ADDS) and MIM. MIM provides the interface for PAM role management (including PowerShell cmdlets), whilst Windows Server 10 ADDS adds the capability to apply a timeout to group membership changes made by MIM.

PAM requests are available from the MIM Portal – however for the CTP lab, PowerShell cmdlets are used for PAM requests. These cmdlets are executed under the context of the user wishing to elevate their rights. For the CTP lab provided, there is no approval applied to PAM requests, so users could continually elevate their rights via repeated PAM requests – however this would be trivial to address via an approval workflow on PAM request creation within the MIM Portal. At this stage there appears to be no way for an administrator to register a user for a PAM role – that is, requests for PAM roles are made by the end user. Cmdlet usage is covered in detail by the PAM evaluation guide.
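
To give a flavour of what a request looks like, here is a minimal sketch run in the context of the elevating user. It assumes the PAM PowerShell module and the Get-PAMRole / New-PAMRequest cmdlets as described in the evaluation guide, plus a hypothetical role name, so treat it as illustrative rather than the exact CTP syntax:

# Run as the user requesting elevation
Import-Module MIMPAM

# Find the PAM role to request (hypothetical role display name)
$role = Get-PAMRole | Where-Object { $_.DisplayName -eq "Prod Share Admins" }

# Submit the elevation request; MIM grants group membership until the role's TTL expires
New-PAMRequest -Role $role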

The PAM solution outlined for the CTP solution consists of a separate PAM domain in its own forest, containing the MIM infrastructure. Also present in this domain are duplicates from the production domain of privileged groups and user accounts for staff requiring PAM rights elevation.

The not so secret sauce of PAM is the use of SID history between the duplicate PAM domain groups and privileged production groups. SID history enables user accounts in the PAM domain residing in the duplicate PAM domain groups to access resources within the production domain. For example, a share in production secured by group “ShareAdmins” can be administered by a PAM domain account with membership in the duplicate PAM domain group “PROD.ShareAdmins” containing the same SID (in the SIDHistory attribute) as the production “ShareAdmins” group. When the PAM user account authenticates and attempts access to the production resource, it presents membership in both the duplicate PAM domain group and the production group.

PAM controls access to production domain resources via controlling membership in the duplicate PAM domain groups. To elevate rights, users authenticate with their duplicate PAM domain account which MIM has granted temporary membership in the duplicated privileged groups present within the PAM domain, and then access production domain resources.

What is new to PAM and Windows Server 10 is the ability to configure a timeout for group memberships. When PAM adds users to groups, a Time-to-Live (TTL) is also applied. When this TTL expires, the membership is removed. In the PAM solution, MIM controls the addition of users to PAM groups and the application of the TTL value, while Windows Server 10 ADDS performs the removal independently of MIM; MIM does not have to be functional for removal to occur.

TTL capability is enabled in Windows Server 10 ADDS via the following command line; for the Technical Preview a schema modification is also required – refer to the PAM evaluation guide in the MIM CTP distribution package:

Enable-ADOptionalFeature "Expiring Links Feature" -Scope ForestOrConfigurationSet -Target <PAM Domain FQDN>
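
Once the optional feature is enabled, a time-bound membership can also be granted directly with the ADDS cmdlets, independently of MIM. This is a hedged sketch using the -MemberTimeToLive parameter as it appears in the released Windows Server 2016 Active Directory module; the Technical Preview syntax may differ, and the user name is hypothetical:

# Grant membership in the duplicate PAM group for one hour; ADDS removes it when the TTL expires
Add-ADGroupMember -Identity "PROD.ShareAdmins" -Members "jsmith" -MemberTimeToLive (New-TimeSpan -Hours 1)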

During my testing of the PAM CTP, all worked as expected: MIM added users to PAM-managed groups with a TTL, and Windows Server 10 ADDS duly removed said users from the PAM groups when the TTL expired. However the fundamental issue with group membership retention in user access tokens remains, i.e. a user must re-authenticate for group changes to apply to their access token. So any elevated sessions a user has open essentially retain their elevated rights long after PAM has removed those rights. PAM does, however, assist with segregation of duties and auditing, and addresses the proliferation of accounts with high levels of access.

All in all, the MIM CTP is a bit of a mixed bag. I am surprised to see the changes to Certificate Management prioritised above native integration of the Azure Active Directory (AAD) Management Agent and implementation of AAD password synchronisation functionality. The PAM implementation is quite heavy architecturally, e.g. an additional forest, a two-way forest trust, and disabling SID History filtering. It will be interesting to see how the product develops in future CTPs, however with a MIM product release scheduled for the first half of 2015, I don’t anticipate more deviation from the classic FIM architecture.

Getting Started with Office 365 Video

On Tuesday November 18, Microsoft started rolling out Office 365 Video to customers who have opted in to the First Release programme (if you haven’t, you will need to wait a little longer!).

Kloud has built video solutions on Office 365 in the past so it’s great to see Microsoft deliver this as a native feature of SharePoint Online – and one that leverages the underlying power of Azure Media Services capabilities for video cross-encoding and dynamic packaging.

In this blog post we’ll take a quick tour of the new offering and show a simple usage scenario.

Basic Restrictions

In order to have access to Office 365 Video the following must be true for your Office 365 tenant:

  • SharePoint Online must be part of your subscription and users must have been granted access to it.
  • Users must have E1, E2, E3, E4, A2, A3 or A4 licenses.
  • There is no external sharing capability – you aren’t able to serve video to users who are not licensed as per the above.

There may be some change in the licenses required in future, but at launch these are the only ones supported.

Note that you don’t need to have an Azure subscription to make use of this Office 365 feature.

Getting Started

When Video is made available in your tenant it will show in either the App Launcher or Office 365 Ribbon.

Video on App Launcher

Video on Office 365 Ribbon

Like any well-managed intranet, it’s important to get the structure of your Channels right. At this stage there is no functionality to create sub-channels, so how you create your Channels will depend primarily on who the target audience will be, as a Channel is a logical container that can be access controlled like any standard SharePoint item.

There are two default Channels out-of-the-box but let’s go ahead and create a new one for our own use.

Options when creating a Channel

Once completed we will be dropped at the Channel landing page, with the ability to upload content or manage settings. I’m going to modify the Channel I just created and restrict who can manage the content by adding one of my Kloud colleagues to the Editors group (shown below).

Setting Permissions

Now we have our Channel configured, let’s add some content.

I click on the Upload option on the Channel home page, select an appropriate video (I’ve chosen an MP4 created on my trusty Lumia 920) and drag and drop it onto the upload form. The file size limits match the standard SharePoint Online ones (hint: your files can be pretty large!).

When you see the page below, make sure you scroll down and set the video title and description (note: these are really important as they’ll be used by SharePoint Search and Delve to index the video).

Upload Process

Then you need to wait… the time to complete the cross-encoding depends on the length of the video you’ve uploaded.

Once it’s completed you can play the video back via the embedded player and, if you want, cross-post it to Yammer using the Yammer sidebar (assuming you have Yammer and an active session). You also get a preview in search results and can play the video right from the preview (see below).

Video Preview

These are very early days for Office 365 Video – expect to see much richer functionality over time based on end user feedback.

The Office 365 Video team is listening to feedback and you can provide yours via their Uservoice site.

IoT – Solar & Azure

Ever since we got our solar system installed about two years ago, I’ve been keeping track of the total power generated by the system. Every month I would write down the totals and add it to my Excel spreadsheet. Although it’s not much work, it’s still manual work… yes all 2 minutes every month.

So when the whole “Internet of Things” discussion started at our office (see Matt’s blog “Azure Mobile Services and the Internet of Things“) I thought it would be a good opportunity to look at doing this using Azure – even if it was only to prove the IoT concept. The potential solution should:

  1. Use a device which connects to the solar inverter to read its data via RS232.
  2. This device needs to be powered by a battery as no power outlet is close to the inverter.
  3. Upload data to Azure without having to rely on a computer running 24/7 to do this.
  4. Use Azure to store and present this data.

Hardware

The device I built is based on the Arduino Uno and consists of the following components:

Arduino UNO R3
With a little bit of programming these devices are perfectly capable of retrieving data from various data sources; they are small in size, expandable with various libraries, add-on shields and break-out boards, and can be battery powered. Having the inverter on a side of the house with no power outlet close by made this a main requirement.
MAX3232 RS232 Serial to TTL Converter module
As the Arduino Uno doesn’t come with any serial connectors this module adds a DB9 connector to the board. Now the Arduino can be connected to the inverter using a null modem cable.
Adafruit CC3000 WiFi Shield with Onboard Ceramic Antenna
Some of the existing solutions that send inverter data to a website (e.g. PVOutput) or log it to a computer all rely on a machine running 24/7, which is one of the things I definitely wanted to avoid. I ended up getting this WiFi shield which, after soldering it on top of the Arduino board, turns the Arduino into a WiFi-enabled device that can send data to the internet directly. After adding the required libraries and credentials to my script, all it needs is access to a wireless router. Even though it sits quite a bit away from the router, connectivity is no issue.
The Arduino Uno unit connected to the inverter

Azure

To store and / or display any of the info the Arduino is collecting, an Azure subscription is required. For this project I signed up for a free trial. Once the subscription is sorted, the following Azure services have to be set up:

  • Cloud Service: runs the worker roles.
  • Storage Account: hosts the table storage.
  • Service Bus: message queue for the Arduino.
  • Website: displays the data in (near) real time.
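
Provisioning these can be done in the portal or scripted. Here is a minimal PowerShell sketch using the classic (Service Management) Azure cmdlets; all names and the location are hypothetical placeholders, and parameters may vary slightly between module versions:

# Cloud Service to host the worker roles
New-AzureService -ServiceName "solarduino-workers" -Location "West US"

# Storage Account for the table storage
New-AzureStorageAccount -StorageAccountName "solarduinostorage" -Location "West US"

# Service Bus namespace that will hold the message queue
New-AzureSBNamespace -Name "solarduino-bus" -Location "West US"

# Website for displaying the data in (near) real time
New-AzureWebsite -Name "solarduino-web" -Location "West US"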

Putting it all together

So how do all these different components fit together?

The Arduino connects to the inverter via a null modem cable. Reading data from it is achieved by adding a MODBUS library to the Arduino script, which gives the Arduino the ability to read (and write) data from MODBUS (an industrial comms standard) enabled devices.
The script is set to run every 30 minutes, and only after a successful connection (the inverter shuts down if there is not enough sunlight) will it set up a wireless internet connection and send the data to the TCP listener worker role in Azure.

In Azure, a Service Bus message queue was created to hold all incoming data packets sent from the Arduino. A storage table was also created to permanently store the data received from the Arduino. The great thing about table storage is that there is no need to create a table schema before using it; just creating the “placeholder” is enough!

Using Visual Studio, two worker roles were created:

  • A TCP listener which “listens” for any device sending information to the specified endpoints. If a message from the Arduino is received it will write it onto the message queue.

Using Service Bus Explorer you can see the individual messages arriving in the message queue.

  • A data writer which checks the message queue for new messages. If a new message has arrived, the message will be read, its content stored in the storage table and the message deleted.

Finally, a simple ASP.NET MVC website is used to display the data from the storage table in near real-time. The website shows how many kWh have been generated during the current day and how that day compares to previous days.

Energy Today

Stats for current day.

Website display.

Conclusion

This IoT project was a good opportunity to have a play with various Azure components, using multiple worker roles, message queues and the like. It probably sounds like overkill for a single device sending one message every 30 minutes, but a similar setup can be used in larger environments, such as factories where multiple devices send dozens of messages per minute.

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have been launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools and long waits while tens or hundreds of GB are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

Migrate-AzureVM

  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage Accounts, Cloud Services, VNets, etc. have already been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false

  • SourceSubscription: name of the source Azure Subscription
  • SourceServiceName: name of the source Cloud Service
  • VMName: name of the VM to migrate
  • DestSubscription: name of the destination Azure Subscription
  • DestStorageAccountName: name of the destination Storage Account
  • DestServiceName: name of the destination Cloud Service
  • DestVNETName: name of the destination VNet (blank if none used)
  • IsReadOnlySecondary: indicates if we are copying from the source storage account’s read-only secondary location
  • Overwrite: indicates if we overwrite the VHD if it already exists in the destination storage account
  • RemoveDestAzureDisk: indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shut down the VM and export its configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shut down, we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

# Get list of Azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region, copy times can vary between 15 minutes and a few hours; it all depends on which storage cluster the accounts reside on. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Copying between Regions will always take longer (and incur data egress charges, don’t forget!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run as you’ll have times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things might go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. So that we don’t get clashes, I use a naming convention based on Cloud Service name (which must be globally unique), VM name and disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you looking at migrating VMs between Regions within the same Geo who have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.
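
Switching the source account’s replication type can itself be scripted. This is a minimal sketch, assuming the classic Set-AzureStorageAccount cmdlet and a hypothetical account name; double-check the -Type values against your module version:

# Switch the source storage account to read-access geo-redundant storage (RA-GRS)
Set-AzureStorageAccount -StorageAccountName "mysourcestorage" -Type "Standard_RAGRS"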

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available on GitHub. The script also supports a number of other migration scenarios (e.g. cross-Subscription, cross-Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.

Publish to a New Azure Website from behind a Proxy

One of the great things about Azure is the ease with which you can spin up a new cloud-based website using PowerShell. From there you can quickly publish any web-based solution from Visual Studio to the Azure-hosted site.

To show how simple this is: after configuring PowerShell to use an Azure Subscription, I’ve created a new Azure-hosted website in the new Melbourne (Australia Southeast) region:

Creating a new website in PowerShell
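
For reference, the command behind that screenshot looks something like the following. This is a sketch using the classic New-AzureWebsite cmdlet; the site name is the one that appears in the error message later in this post, and the location string should be checked against Get-AzureLocation:

# Create a new Azure Website in the Australia Southeast (Melbourne) region
New-AzureWebsite -Name "mykloudtestapp" -Location "Australia Southeast"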

That was extremely easy. What next? Publish your existing ASP.NET MVC application from Visual Studio to the website. For this test, I’ve used Microsoft Visual Studio Ultimate 2013 Update 3 (VS2013). VS2013 offers a simple way, via the built-in Publish Web dialogue, to select your newly created (or existing) websites.

Web Publish to Azure Websites

This requires that you have already signed in with your Microsoft account linked to a subscription, or that you have imported your subscription certificate into Visual Studio (you can use the same certificate generated for PowerShell). Once your subscription is configured you can select the previously created website:

Select Existing Azure Website

The Publish Web dialogue appears, but at this point you may experience a failure when you attempt to validate the connection or publish the website. If you are behind a proxy, the error will show as destination not reachable.

Unable to publish to an Azure Website from behind a proxy

Could not connect to the remote computer ("mykloudtestapp.scm.azurewebsites.net"). On the remote computer, make sure that Web Deploy is installed and that the required process ("Web Management Service") is started. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_DESTINATION_NOT_REACHABLE. Unable to connect to the remote server

The version of Web Deploy included with VS2013 is not able to publish via a Proxy. Even if you configure the msbuild.exe.config to have the correct proxy settings as documented by Microsoft, it will still fail.

Luckily, in August 2014 Web Deploy v3.6 BETA3 was released, which fixes this issue. To resolve the error, download the Web Deploy beta and patch your VS2013 installation. After patching Visual Studio, you can modify the proxy settings used by msbuild.exe (msbuild.exe.config) to use the system proxy:

<system.net>
	<defaultProxy useDefaultCredentials="true" />
</system.net>

You should now be able to publish to your Azure WebSite from behind a proxy with VS2013 Web Deploy.

Australia’s leading wholesale distribution company transforms IT with Office 365

Customer Overview

Metcash is one of Australia’s leading wholesale distribution and marketing companies, specialising in grocery, fresh produce, liquor, hardware and automotive parts and accessories.

Business Situation

Metcash required the transition of a number of on-premises workloads to a cloud based service to alleviate infrastructure, support and performance issues experienced by the organisation.

Metcash evaluated several alternative SaaS options and requested a roll-out of the Microsoft Office 365 suite of products in the form of a technology pilot. The pilot focused primarily on the Exchange elements of the Office 365 solution and would continue with other relevant features throughout the course of the pilot phase.

Metcash has a mix of corporate “knowledge” workers and “deskless” users, so a mix of enterprise and kiosk licences was tested for suitability during the pilot, as was integration with their Citrix-based virtual desktop infrastructure environment.

With operations being national, a number of considerations were given to network and general performance when consuming Office 365 services. This meant that the pilot would need not only to assess performance but also to provide benchmarking and certainty around quality of service and usability of the Office 365 solution features.

Approximately 35 users were identified for inclusion in the pilot, and the service was provisioned and enabled for them quite rapidly. Because Metcash runs and maintains core workplace IT services in-house, the Office 365 pilot would need to provide the right level of knowledge transfer and operational handover to relevant Metcash Group staff.

Solution

The Office 365 pilot solution for Metcash was constructed in two distinct phases that allowed the organisation the opportunity to make go/no-go decisions part way through the project.  A decision to move forward meant Kloud could commence the planning and integration activities required for a full production roll-out, while still supporting the pilot users with their new technology. This approach ensured a seamless transition to a full production service occurred, without the need to re-create any component of the pilot.

The project started with the use of cloud based identities, primarily focused on the Exchange Online and Lync Online Workloads. The pilot group (35 Users) were created in the Office 365 tenant and assigned the appropriate licenses for the pilot program.

A comprehensive testing program was carried out with Metcash internal users and Kloud consultants which proved to be successful. On the back of this testing, the pilot program was extended to take in the SharePoint Online, OneDrive and Yammer workloads.

The Office 365 pilot was widely regarded as successful, and Metcash has since approached Kloud to extend the pilot program to an Exchange 2010 hybrid implementation, with Active Directory Federation Services (AD FS 3.0) identity services for Single Sign-On. Approximately 200 production mailboxes have now been migrated to Exchange Online, with the remainder scheduled for migration.

Benefits

  • Highly available “Evergreen” cloud based solution
  • Reduced administrative overhead for IT teams
  • Better collaboration amongst users and teams
  • Efficiency gains by use of Lync Online
  • Larger user mailboxes and personal archive mailboxes in Exchange Online
  • Lower TCO gained by subsequent programs of work to migrate all users to Office 365

“Kloud has become an integral partner in providing a platform for our Office 365 implementation. This will enable capabilities within our organisation to collaborate more cohesively and utilise the benefits of Office 365. With the implementation of Lync and the ability to have access to our email anywhere, anytime, allows our broad use base to respond to customers and suppliers more efficiently than ever before” – Gerhard Niess, Senior Manager – Tools & Techniques, Metcash Limited

Get Azure Virtual Networks with PowerShell

I needed to make my life easier the other day as a colleague and I worked through setting up an Azure IaaS network topology to connect to an enterprise production network. One of our client’s requirements meant that as we created the network sites, subnets and segments, we needed to report on what we had created to verify it was correct. This simple task of viewing network names and associated subnets is currently missing from the Azure cmdlets, so we pieced together this quick bit of re-usable code. There isn’t currently a nice way to do a ‘Get-AzureVirtualNetwork’; instead you only have the portal(s) or a downloadable XML. That isn’t great if you would like to do some validation checks or even a basic CSV dump to report on the current configuration to a network administrator.

I’ve only been able to test this on a few VNET configurations but it seems to do the job for me so far. I’m sure you can add your own trickery to it if you like, but by the time I invest in doing so, Microsoft will probably have released an updated module that includes this functionality. So here it is in all its glory.
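
The original function was published as an embedded gist and isn’t reproduced here. As a rough sketch of the approach, assuming the classic Get-AzureVNetConfig cmdlet and illustrative property names, it might look something like this:

function Get-AzureVirtualNetwork
{
    # Pull the current virtual network configuration XML for the subscription
    [xml]$netcfg = (Get-AzureVNetConfig).XMLConfiguration

    foreach ($site in $netcfg.NetworkConfiguration.VirtualNetworkConfiguration.VirtualNetworkSites.VirtualNetworkSite)
    {
        foreach ($prefix in $site.AddressSpace.AddressPrefix)
        {
            foreach ($subnet in $site.Subnets.Subnet)
            {
                # Emit one row per site / address prefix / subnet combination
                [PSCustomObject]@{
                    SiteName      = $site.name
                    AddressPrefix = $prefix
                    SubnetName    = $subnet.name
                    SubnetPrefix  = $subnet.AddressPrefix
                }
            }
        }
    }
}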

The output will give you a nice table that you can export, as per below:

Get-AzureVirtualNetwork | FT

Filter to find reference to matching CIDR blocks:

Get-AzureVirtualNetwork | ? {$_.AddressPrefix -like "10.72.*"} | FT

Store the networks and re-use with other Azure cmdlets:

$APSites = (Get-AzureVirtualNetwork | ? {$_.AddressPrefix -like "10.72.*"})

Export the current configuration for a report:

Get-AzureVirtualNetwork | Export-CSV -Path "c:\Temp\AzureVirtualNetworkConfiguration.csv"

If you look at the code snippet, you’ll notice I threw in another handy export for all local routes:

Get-AzureVirtualNetworkRoutes

This will be useful to see if you are missing a local route when the list starts getting large.

Kloud turns 4!

Born in the cloud in 2010, Kloud celebrated our 4th birthday over the weekend. As far as birthdays go this was certainly one to celebrate as we took time to reflect on the past 12 months and what we’ve achieved as a company.

Kicking off in January (and in just over 3 years of business), we had deployed in excess of 1 million seats in Office 365, cementing our position as one of the top Microsoft partners in Australia.

In June, our Victorian and New South Wales State Managers, Carl Lowenborg and Damian Coyne, were recognised as finalists for the Young Entrepreneur of the Year Award which recognises an entrepreneur under 40 who has built a successful business in a remarkable way.

In July we were recognised at the Microsoft Worldwide Partner Conference as one of three finalists in the Identity and Access (Security) category.

During September Kloud’s directors and sales and marketing teams arrived in sunny Queensland for the Microsoft Australia Partner Conference. As one of only three two-time winners, we took home Microsoft awards in the categories of Cloud Solutions Partner of the Year and Collaboration and Content Partner of the Year. Our first Microsoft award wins since being founded in 2010!

Our Managing Director, Nicki Bowers, featured on the CRN panel “Charting New Territory – the roadmap that got us here might not get us where we need to go”, which you can watch here:

In October we received one of our greatest accolades to date, being recognised as Australia’s fastest growing company. Kloud enjoyed a strong presence across a range of media outlets (The Age, ARN, BRW, Mashable) and our Managing Director, Nicki Bowers, was interviewed by Ross Greenwood on 2GB.

November has seen us included in another two prestigious Fast 50 lists, Deloitte Technology Fast 50 and CRN’s Fast 50, with both award nights to be held later this month.

We have all of our hard-working Kloudies to thank as we celebrate our four years together as a company. A huge thank you to all our valued customers and wonderful partners for your ongoing support. We look forward to an even bigger and better year in 2015!

The Business Case for Microsoft Office 365

Businesses of all sizes and industries are considering adopting Microsoft Office 365. Some customers have a clear vision of the desired capabilities. Others have weighed up the options and identified that there are savings to be leveraged. Others still have heard the hype, but don’t know where to start.

Office 365, like any technology, offers a variety of solutions and risks to any business. Kloud has extensive skills and experience in Microsoft Office 365 and has assisted many commercial and public sector enterprise customers to successfully navigate the journey onto Office 365.

The first step is to understand the business strategy, and then the current-state environment: the difficulties faced and the usage patterns. This provides a baseline against which the Office 365 opportunities can be measured.

Next, the broader set of Office 365 capabilities is explored, to discover any unidentified transformational use cases that will help deliver on the business strategy or otherwise benefit the business. Analysis of comparable organisations within and across industries will also provide some insight into potential opportunity areas. The key Office 365 capabilities explored are:

  • Consistent and Integrated Platform
  • Effective Collaboration and Enterprise Social
  • Unified Communications
  • Mobility
  • Service Scalability, Backup and High Availability.

Business challenges and other considerations then need to be expanded, to determine where components of Office 365 may not be suitable or require special consideration to fit the business. Key challenges and considerations covered are:

  • Security and Compliance
  • Performance
  • Other technology dependencies.

An Office 365 platform analysis can be performed in conjunction with the business to ascertain the tangible benefits and costs of adoption. This cost/benefit analysis gives the business the ability to make an informed decision, and ultimately empowers the business to schedule adoption.

Kloud consultants are working with CIOs and their teams to build the business case for change. Over the course of a few days to a few weeks, Kloud works closely with organisations to provide clarity on their requirements and a realistic expectation of what Office 365 can deliver to meet the business’s unique operating environment and strategy.