
Leveraging the PowerBI Beta API for creating PowerBI Tables with Relationships via PowerShell

If anyone actually reads my posts you will have noticed that I’ve been on a bit of a deep dive into PowerBI and how I can use it to provide visualisation of data from Microsoft Identity Manager (here via CSV, and here via API). One limitation I noticed when going direct to PowerBI via the API (v1.0), though, was that it is not possible to define relationships (joins) between tables within a dataset (you can via PowerBI Desktop). After a lot of digging and searching I’ve worked out how to define the relationships between tables in a dataset via the API and PowerShell. This post is about that.

Overview

In order to define relationships between tables in a dataset (via the API), the key point to note is:

  • You’ll need to modify the PowerBIPS PowerShell Module to leverage the Beta API
    • see the screenshot in the “How to” section

Prerequisites

To use PowerBI via PowerShell and the PowerBI API you’ll need to get:

How to leverage the PowerBI Beta API

In addition to the prerequisites above, in order to leverage the PowerBI PowerShell module against the PowerBI Beta API I simply changed the beta flag to $True in the PowerBI PowerShell (PowerBIPS.psm1) module so that all calls are made against the Beta API. See the screenshot below. The module will probably be located somewhere like ‘C:\Program Files\WindowsPowerShell\Modules\PowerBIPS’.
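
If you want to find the module file to edit, something like the following will locate it. This is only a sketch: the exact variable or switch you need to change inside PowerBIPS.psm1 differs between module versions, so use the screenshot as your guide.

    # Locate the installed PowerBIPS module and open the psm1 for editing
    $pbiModule = Get-Module -ListAvailable PowerBIPS | Select-Object -First 1
    notepad (Join-Path $pbiModule.ModuleBase 'PowerBIPS.psm1')

    # Inside PowerBIPS.psm1, flip the beta flag/default so the module builds its REST calls
    # against https://api.powerbi.com/beta/... rather than .../v1.0/...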

Example Dataset Creation

The sample script below will create a new dataset in PowerBI with two tables. Each table holds “Internet of Things” environmental data: one for data from a Seeed WioLink IoT device located outside, and one for a device inside. Both tables contain a DateTime column; the timestamp is captured before the sensors are read, and each set of readings is then added to its respective table referencing that DateTime.

That is where the ‘relationships’ part of the schema comes in. The options for the direction of the relationship filter are OneDirection (default), BothDirections and Automatic. For this simple example I’m using OneDirection on DateTime.
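
As an illustration of what that schema can look like, here is a minimal sketch of creating the dataset directly against the beta endpoint with Invoke-RestMethod. The table and column names are my examples, and $accessToken is assumed to be an OAuth bearer token for the Power BI API (for instance one obtained via the PowerBIPS module).

    $dataset = @{
        name   = "IoT Environment"
        tables = @(
            @{ name = "OutsideReadings"; columns = @(
                @{ name = "DateTime"; dataType = "DateTime" },
                @{ name = "Temperature"; dataType = "Double" },
                @{ name = "Humidity"; dataType = "Double" }) },
            @{ name = "InsideReadings"; columns = @(
                @{ name = "DateTime"; dataType = "DateTime" },
                @{ name = "Temperature"; dataType = "Double" },
                @{ name = "Humidity"; dataType = "Double" }) }
        )
        # the relationship between the two tables, filtered one direction on DateTime
        relationships = @(
            @{ name = "OutsideToInside"
               fromTable = "OutsideReadings"; fromColumn = "DateTime"
               toTable   = "InsideReadings";  toColumn   = "DateTime"
               crossFilteringBehavior = "OneDirection" }
        )
    } | ConvertTo-Json -Depth 6

    $newDataset = Invoke-RestMethod -Method Post -Uri "https://api.powerbi.com/beta/myorg/datasets" `
        -Headers @{ Authorization = "Bearer $accessToken" } -Body $dataset -ContentType "application/json"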

To put some data into the tables, here is my simple script. I only have a single IoT unit hooked up, so I’m fudging the more constant indoor readings; the outside readings are coming from a Seeed WioLink unit.
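
A rough sketch of the data push (not my full script): each run grabs a timestamp, reads the sensors, and posts a row to each table’s rows endpoint. The indoor values here are the fudged constants mentioned above; $accessToken and $newDataset come from the dataset creation sketch earlier.

    $timestamp = (Get-Date).ToString("o")

    # outside readings would come from the WioLink sensors; indoor readings are fudged constants
    $outsideRows = @{ rows = @(@{ DateTime = $timestamp; Temperature = 17.2; Humidity = 64.0 }) } | ConvertTo-Json -Depth 3
    $insideRows  = @{ rows = @(@{ DateTime = $timestamp; Temperature = 22.5; Humidity = 45.0 }) } | ConvertTo-Json -Depth 3

    $baseUri = "https://api.powerbi.com/beta/myorg/datasets/$($newDataset.id)/tables"
    $headers = @{ Authorization = "Bearer $accessToken" }
    Invoke-RestMethod -Method Post -Uri "$baseUri/OutsideReadings/rows" -Headers $headers -Body $outsideRows -ContentType "application/json"
    Invoke-RestMethod -Method Post -Uri "$baseUri/InsideReadings/rows"  -Headers $headers -Body $insideRows  -ContentType "application/json"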

Visualisation

So there we have it. Something I thought would have been part of the minimum viable product (MVP) release of the API is available in the Beta, and using PowerShell we can define relationships between tables in a dataset and use them in our visualisation.

 

Follow Darren on Twitter @darrenjrobinson

Enumerating all Users/Groups/Contacts in an Azure tenant using PowerShell and the Azure Graph API ‘odata.nextLink’ paging function

Recently I posted about using PowerShell and the Azure Active Directory Authentication Library to connect to Azure AD here. Whilst that post detailed performing simple tasks like updating an attribute on a user, in this post I’ll use the same method to connect to Azure AD via PowerShell but cover:

  • enumerate users, contacts or groups
  • where the number of objects is greater than the maximum results per page, get all remaining pages of results
  • limit results based on filters

The premise of my script was that it could just be executed without prompts. As such the script contains the ‘username’ and ‘password’ that are used to perform the query. No special access is required for this script; any standard user account will have ‘read’ permissions to Azure AD and will return results.

Here is the base script to return all objects of a given type from a tenant (a rough sketch of the approach is shown after the list below). For your environment:

  • change line 7 for your tenant name
  • change line 11 for your account in your tenant
  • change line 12 for the password associated with the account specified in line 11
  • change line 18 for the object type (eg. Users, Groups, Contacts)
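
A rough sketch of the approach (not the full script). The ADAL DLL path, the well-known PowerShell client ID and the synchronous AcquireToken overload shown here are assumptions to adjust for your environment; newer ADAL builds only expose AcquireTokenAsync.

    # Load ADAL from the Azure AD Preview module install (adjust the path/version for your machine)
    Add-Type -Path 'C:\Program Files\WindowsPowerShell\Modules\AzureADPreview\1.1.143.0\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'

    $tenant     = "customer.onmicrosoft.com"        # your tenant name (line 7 of the script)
    $username   = "user@customer.onmicrosoft.com"   # account in your tenant (line 11)
    $password   = "password"                        # password for that account (line 12)
    $objectType = "users"                           # users / groups / contacts (line 18)

    $clientId    = "1950a258-227b-4e31-a9cf-717495945fc2"   # well-known PowerShell client id (assumption)
    $authContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext("https://login.microsoftonline.com/$tenant")
    $credential  = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential($username, $password)
    $token       = $authContext.AcquireToken("https://graph.windows.net", $clientId, $credential)

    $headers = @{ Authorization = $token.CreateAuthorizationHeader() }
    $uri     = "https://graph.windows.net/$tenant/${objectType}?api-version=1.6&`$top=999"
    $query   = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
    $results = $query.value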

I’ve hardcoded the number of results to return per page in both lines 39 and 64 to the maximum of 999. The default is 100. I wanted to return all objects as quickly as possible.

The first query, along with returning the first 999 results, also returns a value for $query.’odata.nextLink’ if there are more than 999 results. We then use the .nextLink value in subsequent API calls to return the remaining pages until we have returned all objects.
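
A sketch of the paging loop, continuing from the variables above:

    # Azure AD Graph returns odata.nextLink (a relative URL containing a $skiptoken) while more pages remain
    while ($query.'odata.nextLink') {
        $nextUri  = "https://graph.windows.net/$tenant/$($query.'odata.nextLink')&api-version=1.6&`$top=999"
        $query    = Invoke-RestMethod -Method Get -Uri $nextUri -Headers $headers
        $results += $query.value
    }
    Write-Output "Returned $($results.Count) $objectType"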

Brilliant. So we can now simply change line 18 for different object types (Users, Groups, Contacts) if required. But what if we want to filter on other criteria such as attribute values?

Here is a slightly modified version (of the URI) to include a query filter. Lines 19-24 have a couple of examples of query filters.
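
For illustration, a few filter variations of the URI might look like this (the attribute values are made up; $tenant and $headers come from the earlier sketch):

    # users whose displayName starts with a given string
    $uri = "https://graph.windows.net/$tenant/users?`$filter=startswith(displayName,'Darren')&api-version=1.6&`$top=999"
    # only enabled user accounts
    $uri = "https://graph.windows.net/$tenant/users?`$filter=accountEnabled eq true&api-version=1.6&`$top=999"
    # groups with a specific display name
    $uri = "https://graph.windows.net/$tenant/groups?`$filter=displayName eq 'All Staff'&api-version=1.6"

    $query   = Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
    $results = $query.value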

So there you have the basics of getting started returning large numbers of objects from Azure AD via the Azure Graph API from PowerShell. Hopefully the time I spent working out the syntax for the URIs helps someone else out, as there weren’t any examples I could find whilst working this out.

Follow Darren on Twitter @darrenjrobinson

 

 

 

Adding/Removing User Office365 Licences using PowerShell and the Azure AD Graph RestAPI

In a recent blog post here I posted about the Azure AD v2.0 Preview Powershell cmdlets that are currently in preview. These update the functionality the current MSOL cmdlets provide whilst also supporting features they don’t (such as managing users with MFA).

The Azure AD v2.0 cmdlets interface with the Azure AD Graph API, and this week I tried using the Set-AzureADUserLicense cmdlet to add/remove licenses from users in a test tenant. With no sample documentation for the syntax I didn’t kick any goals, so I figured I’d go straight to the Azure AD Graph API to get the job done direct from PowerShell instead.

In this post I’m going to show you how to add/remove Office365 licenses from users using PowerShell and the Azure AD Graph API.

As per my other post linked above, if you’ve installed the Azure AD Preview PowerShell module you’ll have the Microsoft.IdentityModel.Clients.ActiveDirectory.dll, which we can leverage from PowerShell to connect to the Azure AD Graph API. Chances are you’ll also have Microsoft.IdentityModel.Clients.ActiveDirectory.dll if you have the AzureRM modules or TFS installed. Just search your Program Files sub-directories.

# the default path to where the Azure AD Preview PS Module puts the Libs
'C:\Program Files\WindowsPowerShell\Modules\AzureADPreview\1.1.143.0\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'
# TFS Path
'C:\Program Files\Common Files\microsoft shared\Team Foundation Server\14.0\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'
# Azure RM Cmdlets
'C:\Program Files\WindowsPowerShell\Modules\AzureRM.ApiManagement\1.1.2\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'

Below you’ll find what you need to script your connection to Azure AD via the Graph API using the DLL discussed above. Change $tenantID, $username and $password to reflect your tenant and credentials.

Now that we’ve authenticated, let’s enumerate our licenses. Basically we make a REST API call to “https://graph.windows.net/{0}/subscribedSkus?api-version=1.6” as below. The particular license I want to add in my tenant is ExchangeStandard_Student. A key difference from the MSOL cmdlets, though, is that when adding or removing a license via the Graph API we reference the license skuId rather than the skuPartNumber. The last line gets the skuId for my license.
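
A sketch of that enumeration, assuming you already hold a bearer token in a $headers hashtable and your tenant name in $tenant from the connection script:

    $skusUri = "https://graph.windows.net/$tenant/subscribedSkus?api-version=1.6"
    $skus    = (Invoke-RestMethod -Method Get -Uri $skusUri -Headers $headers).value

    $skus | Select-Object skuPartNumber, skuId, consumedUnits      # review what is available in the tenant
    $licenseSkuId = ($skus | Where-Object { $_.skuPartNumber -eq 'ExchangeStandard_Student' }).skuId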

Now to find the users that need to have the license assigned. Below in line 4 I search for users that are account enabled using this URI: “https://graph.windows.net/{0}/users?$filter=accountEnabled eq true&api-version=1.6”. You can modify the filter for your criteria.

I then iterate through the users and find those that aren’t assigned the license identified above. I also exclude the AADConnect account for this tenant.

The Graph API expects the body with the licenses to be added or removed as JSON, so we create a hashtable and convert it to JSON, which is the format the Azure AD Graph API requires. Then for each of the unlicensed users we call the Azure AD Graph API URI “https://graph.windows.net/myorganization/users/$usertolicense`/assignLicense?api-version=1.6” and assign the license. In my environment it processed users at just over 1 user/sec.
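
A sketch of that call, using the $licenseSkuId found above ($usersToLicense is assumed to be the list of user principal names gathered in the previous step):

    # hashtable -> JSON body in the shape the Graph API expects
    $licenseBody = @{
        addLicenses    = @(@{ disabledPlans = @(); skuId = $licenseSkuId })
        removeLicenses = @()
    } | ConvertTo-Json -Depth 3

    foreach ($usertolicense in $usersToLicense) {
        $assignUri = "https://graph.windows.net/myorganization/users/$usertolicense/assignLicense?api-version=1.6"
        Invoke-RestMethod -Method Post -Uri $assignUri -Headers $headers -Body $licenseBody -ContentType 'application/json'
    }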

A similar approach is used to remove licenses from users. A subtle difference is that you only need to specify the skuId for removal, as shown below. Note: you can add and remove licenses in the same call (if, say, you’re switching users over from one license plan to another).
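
For removal the body is simply inverted (again a sketch):

    $removeBody = @{
        addLicenses    = @()
        removeLicenses = @($licenseSkuId)    # just the skuId(s) to strip from the user
    } | ConvertTo-Json -Depth 3

    Invoke-RestMethod -Method Post -Headers $headers -Body $removeBody -ContentType 'application/json' `
        -Uri "https://graph.windows.net/myorganization/users/$usertolicense/assignLicense?api-version=1.6"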

All together for quick copy and paste. Update it for your tenant and creds, and comment out the Add or Remove section depending on what you’re looking to do. If you’re looking to add/remove multiple licenses, just add them to the hashtable/array.

Follow Darren on Twitter @darrenjrobinson

 


Using Microsoft Azure Table Service REST API to collect data samples

Sometimes we need a simple solution for collecting data from multiple sources. The sources of data can be IoT devices or systems working on different platforms and in different places. Traditionally, integrators start thinking about implementing a custom centralised REST API with some database repository. That solution can take days to implement and test, it is very expensive, and it requires hosting, maintenance and support. However, in many cases it is not needed at all. This post introduces the idea that the out-of-the-box Azure Tables REST API is good enough to start your data collection, research and analysis in no time. Moreover, the suggested solution offers a very convenient REST API that supports JSON objects and a very flexible NoSQL format. Furthermore, you do not need to write lots of code or hire programmers. Anybody who understands how to work with a REST API, create headers and put JSON in the web request body can immediately start working on a project and sending data to very cheap Azure Tables storage. Additional benefits of using Azure Tables include native support in Microsoft Azure Machine Learning, and other statistical packages also allow you to download data from Azure Tables.

Microsoft provides Azure Tables SDKs for various languages and platforms. By all means you should use these SDKs; your life will be much easier. However, some systems don’t have this luxury and require developers to work with the Azure Tables REST API directly. The only requirement for your system is that it should be able to execute web requests, and you should be able to work with the headers of those requests. Most systems satisfy this requirement. In this post, I explain how to form web requests and work with the latest Azure Table REST API from your code. I’ve also created reference code to support my findings. It is written in C# but the technique can be replicated in other languages and platforms.

The full source code is hosted on GitHub here:

https://github.com/dimkdimk/AzureTablesRestDemo

Prerequisites

You will need an Azure subscription. Create a new storage account there and create a test table named “mytable” in it. Below is the table that I created in Visual Studio 2015.

Picture 1

I’ve created a helper class that has two main methods: RequestResource and InsertEntity.

The full source code of this class is here:

Testing the class is easy. I’ve created a console application, prepared a few data samples and called our Azure Tables helper methods. The source code of this program is below.

The hardest part of calling the Azure Tables web API is creating the signed (HMAC) string that forms the Authorization header. It can be a bit tricky and not very clear for beginners. Please take a look at the detailed documentation that describes how to sign a string for the various Azure Storage services: https://msdn.microsoft.com/en-au/library/dd179428.aspx

To help you with the authorisation code, I’ve written an example of how it can be done in the class AzuretableHelper. Please take a look at the code that creates the strAuthorization variable. First, you will need to form a string that contains your canonical resource name and the current time in a specific format, with newline characters in between. Then, this string has to be signed with the HMAC SHA-256 algorithm. The key for this signature is the SharedKey that you can obtain from your Azure Storage account as shown here:

Picture 2
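
My reference code is C#, but as a rough illustration of the same idea, here is a PowerShell sketch of an insert using the simpler SharedKeyLite scheme for the Table service. The account name, key and table name are placeholders.

    $account   = "mystorageaccount"                    # placeholder storage account name
    $accessKey = "<storage account key>"               # the SharedKey from the portal / Visual Studio
    $table     = "mytable"

    $date              = [DateTime]::UtcNow.ToString("R")     # RFC1123 date, e.g. "Mon, 01 Aug 2016 10:00:00 GMT"
    $canonicalResource = "/$account/$table"
    $stringToSign      = "$date`n$canonicalResource"          # SharedKeyLite string-to-sign for the Table service

    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key  = [Convert]::FromBase64String($accessKey)
    $signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

    $headers = @{
        "x-ms-date"     = $date
        "x-ms-version"  = "2015-12-11"
        "Accept"        = "application/json;odata=nometadata"
        "Authorization" = "SharedKeyLite $($account):$signature"
    }

    $entity = @{ PartitionKey = "sensors"; RowKey = [guid]::NewGuid().ToString(); Temperature = 23.4 } | ConvertTo-Json

    Invoke-RestMethod -Method Post -Uri "https://$account.table.core.windows.net/$table" `
        -Headers $headers -Body $entity -ContentType "application/json"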

The signed Authorization string has to be re-created for each request you execute. Otherwise, the Azure API will reject your requests with an Unauthorised error message in the response.

The meaning of the other request headers is straightforward. You specify the version, format, and other attributes. To see the full list of methods you can perform on Azure Tables and to read about all attributes and headers, refer to the MSDN documentation:

https://msdn.microsoft.com/library/dd179423.aspx

Once you’ve mastered this “geeky” method of using the Azure Tables web API, you can send your data straight to your Azure Tables without using any intermediate Web API facilities. You can also read Azure Tables data to receive configuration parameters or input commands from other applications. A similar approach can be applied to the other Azure Storage APIs, for example Azure Blob storage or Azure Queue storage.

 

 

 

 


Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post suggests the idea of automatically copying SharePoint Online (SPO) workflow task metadata into an external data warehouse. In this scenario, workflow tasks become the subject of another workflow that automatically copies the task’s data into the external database, using a custom Web API endpoint as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to create a meaningful workflow reporting system with large numbers of workflow tasks. The easiest approach to solve the problem is to use a Nintex workflow to “listen” to changes in the workflow tasks, then request the task data via the SPO REST API and, finally, send the data to the external data warehouse Web API endpoint.

Some SPO solutions require the creation of a reporting system that includes workflow task metadata. For example, it could be a report about documents with the statuses of the workflows linked to those documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data is unfeasible as SPO throttles requests for workflow data. In fact, the throttling is so tight that generating reports with more than a hundred records is unrealistic. In addition, many companies would like to create Business Intelligence (BI) systems analysing workflow task data. Having a data warehouse with all the workflow task metadata can assist in this job very well.

To be able to implement the solution a few prerequisites must be met. You must know the basics of Nintex workflow creation and be able to create a backend solution with the database of your choice and a custom Web API endpoint that allows you to write the data model to that database. In this post we have used Visual Studio 2015 and created an ordinary REST Web API 2.0 project with an Azure SQL Database.

The solution will involve following steps:

  1. Get sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing data model to the database.
  3. Expose one POST endpoint method of the Web REST API that accepts JSON model of the workflow task metadata.
  4. Create Nintex workflow in the SPO list storing your workflow tasks.
  5. Design Nintex workflow: call SPO REST API to get JSON metadata and pass this JSON object to your Web API hook.

Below is a detailed description of each step.

We are looking here to export the metadata of a workflow task. We need to find the SPO list that holds all your workflow tasks and navigate there. You will need the name of the list to be able to start calling the SPO REST API. It is better to use a REST tool to perform the Web API requests; many people use Fiddler or Postman (a Chrome extension) for this job. Request the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:

Picture 1
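
Written out, the same request looks roughly like this. The site URL and list name below are placeholders, and authentication to SPO is handled for you when you use Fiddler or Postman with a signed-in browser session.

    $site     = "https://yourtenant.sharepoint.com/sites/yoursite"     # placeholder site URL
    $listName = "Workflow Tasks"                                       # the SPO list holding your workflow tasks

    $uri     = "$site/_api/web/lists/getbytitle('$listName')/items(1)" # item 1 = a sample task
    $headers = @{ "Accept" = "application/json" }                      # ask SPO for JSON rather than XML
    # Invoke-RestMethod -Uri $uri -Headers $headers                    # supply SPO credentials/cookies before running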

The key element in this request is getbytitle("list name"), where "list name" is the SPO list name of your workflow tasks. Please remember to add the header "Accept" with the value "application/json"; it tells SPO to return JSON instead of XML. As a result, you will get one JSON object that contains the JSON metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse, so we create a data model containing only the fields of our choice. For example, it can look like this one in C#, with all properties based on the model returned earlier:

The next step is to create a Web API that exposes a single method that accepts our model as a parameter from the body of the request. You can choose any REST Web API design. We created a simple Web API 2.0 in Visual Studio 2015 using the general wizard for an MVC, Web API 2.0 project. Then we added an empty controller and filled it with code that uses Entity Framework to write the data model to the database. We also created a code-first EF database context that works with just the one entity described above.

The code of the controller:

The code of the database context for Entity Framework

Once you have created the Web API, you should be able to call the Web API method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Don’t forget to include the proper headers for your authentication and the header "Accept" with "application/json", and set the request type to POST. Once you’ve tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

Picture 4
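
If you prefer, a quick test can also be fired from PowerShell. Here is a sketch: the URL is the example endpoint above, and the property names in the body are hypothetical, so use the fields from your own data model.

    $task = @{
        TaskId     = 1
        Title      = "Employee file review"
        Status     = "Completed"
        AssignedTo = "i:0#.f|membership|user@yourtenant.onmicrosoft.com"
        Modified   = (Get-Date).ToString("o")
    } | ConvertTo-Json

    Invoke-RestMethod -Method Post -Uri "https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook" `
        -Body $task -ContentType "application/json" -Headers @{ Accept = "application/json" }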

Next, we will create a new Nintex Workflow in the SPO list with our workflow tasks. It is all straightforward. Click Nintex Workflows, then create a new workflow and start designing it.

Picture 5

Picture 6

Once you’ve created a new workflow, click on the Workflow Settings button. In the displayed form, set the parameters as shown on the screenshot below. We set “Start when items are created” and “Start when items are modified”. In this scenario, any modification of our workflow task will start this workflow automatically. It also covers cases where the workflow task has been modified by other workflows.

Picture 7.1

Create 5 steps in this workflow as shown on the following screenshots, labelled 1 to 5. Please keep in mind that blocks 3 and 5 are there to assist with debugging only and are not required in production use.

Picture 7

Step 1. Create a Dictionary variable that contains the SPO REST API request headers. You can add any required headers, including authentication headers. It is essential here to include the Accept header with “application/json” in it to tell SPO that we want JSON in the responses. We set the output variable to SPListRequestHeaders so we can use it later.

Picture 8

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your Workflow Tasks list, as we discussed before. The list of fields that we want returned is defined in the “$select=…” parameter of the OData request; we need only the fields that are included in our data model. The other settings are straightforward: we supply our request headers created in Step 1 and create two more variables for the response. SPListResponseContent will receive the resulting JSON object that we are going to need at Step 4.

Picture 9

Step 3 is optional. We’ve added it to debug our workflow. It will send an email with the contents of our JSON response from the previous step. It will show us what was returned by SPO REST API.

Picture 10

Step 4. Here we are calling our custom API endpoint, passing the JSON object model that we got from the SPO REST API. We supply the full URL of our web hook, set the method to POST, and in the request body we inject SPListResponseContent from Step 2. We also capture the response code to display later in the workflow history.

Picture 11

Step 5 is also optional. It writes a log message with the response code that we have received from our API Endpoint.

Picture 12

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of workflow tasks. Click on any task, modify any of the task’s properties and save the task. This will initiate our workflow automatically. You can monitor the workflow execution in the workflow history. Once the workflow has completed, you should be able to see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

Picture 13

To make sure that everything went well, open your database and check the records updated by your Web API. After every Workflow Task modification you will see corresponding changes in the database. For example:

Picture 14

 

In this post we have shown that automatic copying of workflow task metadata into your data warehouse can be done with a simple Nintex workflow setup and only two REST Web API requests. The solution is quite flexible, as you can select the required properties from the SPO list and export them into the data warehouse. We can easily add more tables if there is more than one workflow tasks list. This solution enables the creation of a powerful reporting system using the data warehouse and also allows you to employ the BI data analytics tool of your choice.

Programmatically read email from an Exchange Server Mailbox

I can’t recall how many times I have come across a requirement to programmatically read emails from an Exchange Server mailbox and take some action based on the presence of new messages. The component reading the emails can read the mail content, parse it and transmit the data to other downstream systems. In this blog I’m going to take a look at one way we can do this.

Objective

In my scenario there was a requirement to develop a program to retrieve mails from an Exchange mailbox and, based on specific criteria, send the email to multiple users based on a distribution list identifier. Listed below are the steps I intended to follow:

  1. Use an efficient calling mechanism to poll for new email based on a given criteria
  2. Read any new email messages
  3. Transform the received email to an SMTP-friendly message to be sent to a list.

Solution

I developed my solution in .Net, using C# as the language of preference, hooking into the Exchange Web Services (EWS) Managed API as the polling endpoint and using simple SMTP endpoints to send mails. Let’s look at the solution in a little more detail.

Prerequisites / Preparation

  1. Open Visual Studio and create a new Console Application
  2. Install Exchange Web Services (EWS) Managed API from NuGet Package Manager
  3. Add reference for System.Configuration (to support application configuration).

Create a connection to a given mailbox

When creating connections to an Exchange mailbox we need to know the version of the source Exchange environment, as the code to connect will differ based on the target environment. At the time of writing, the EWS Managed API supports the following Exchange Server versions:

  1. Exchange 2007 SP1
  2. Exchange 2010 (inc. SP1 & SP2)
  3. Exchange 2013 (inc. SP1)

Use the latest (Exchange 2013 SP1) if you are using this to connect with Exchange Online in Office 365. Below is the code to connect and get a service instance attached to an Office 365 mailbox.

Remember to supply an AutodiscoverRedirectionUrlValidationCallback function that only follows HTTPS (SSL) redirection URLs.
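
The samples in this post are C#; purely as an illustration, a rough PowerShell equivalent of the same EWS Managed API connection looks like this. The DLL path below is the default install location for the EWS Managed API 2.2 and is an assumption to adjust for your machine.

    # Load the EWS Managed API (adjust the path to wherever the API is installed)
    Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll"

    $service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2013_SP1)
    $service.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials("user@tenant.onmicrosoft.com", "password")

    # Autodiscover redirection validation callback - only follow HTTPS redirection URLs
    $service.AutodiscoverUrl("user@tenant.onmicrosoft.com", { param($url) $url.ToLower().StartsWith("https://") })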

The sample below shows how to connect to a local Exchange 2007 environment.

Read email using the created service instance

Once the connection is up and running this is relatively straightforward. The process can be summarised as follows:

  1. Specify the mailbox folder to read messages from (i.e. inbox)
  2. Specify the source mailbox email identifier
  3. Specify any search filters
  4. Specify the aggregation criteria (OR / AND)
  5. Specify the maximum number of messages to be retrieved
  6. Retrieve message identifiers using the service instance.

… and some sample code …
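
Purely as an illustration, a rough PowerShell equivalent of those steps (unread messages in the Inbox matching a made-up subject keyword, AND aggregation, capped at 50 results) might look like this:

    $view = New-Object Microsoft.Exchange.WebServices.Data.ItemView(50)    # maximum number of messages to retrieve

    # aggregation criteria (AND) across the individual search filters
    $filters = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection([Microsoft.Exchange.WebServices.Data.LogicalOperator]::And)
    $filters.Add((New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.EmailMessageSchema]::IsRead, $false)))
    $filters.Add((New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+ContainsSubstring([Microsoft.Exchange.WebServices.Data.ItemSchema]::Subject, "Distribution")))

    # returns message identifiers and first-class properties only
    $findResults = $service.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox, $filters, $view)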

Now we have the message identifiers, but the message body and some other properties are missing. A second call will need to be made to the server to retrieve the extended properties for each message. Here is the code for that process:
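
And the equivalent of that second call, again as a PowerShell illustration:

    # load the message body (and any other extended properties) for each identifier found above
    $propertySet = New-Object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::FirstClassProperties)
    $propertySet.RequestedBodyType = [Microsoft.Exchange.WebServices.Data.BodyType]::Text

    foreach ($item in $findResults.Items) {
        $item.Load($propertySet)                                  # one extra round trip per message
        Write-Output ("{0}: {1}" -f $item.Subject, $item.Body.Text)
    }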

Prepare the outbound mail for sending

I used a temporary custom object to transform the email from the EWS calls prior to sending. Below is the code used to create the class and the mapping logic.

The code below transforms the temporary mail object into a valid SMTP mail, validates the email identifiers the mail needs to be broadcast to, filters out any invalid mail IDs (if found) and sends the mails in multiple allowed batches.

last but not least.. Send the mail..

 

      mail.SendMail();
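
If you just want to see the shape of a batched SMTP send from PowerShell, here is a hedged sketch. The server, sender, $mail object and $validRecipients list are placeholders standing in for the temporary mail object and the validated address list described above.

    $batchSize = 50
    for ($i = 0; $i -lt $validRecipients.Count; $i += $batchSize) {
        $batch = $validRecipients[$i..([Math]::Min($i + $batchSize, $validRecipients.Count) - 1)]
        Send-MailMessage -SmtpServer "smtp.yourcompany.com" -From "noreply@yourcompany.com" `
            -To $batch -Subject $mail.Subject -Body $mail.Body -BodyAsHtml
    }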

Happy Mailing!!!

How to create custom images for use in Microsoft Azure

In this post I will discuss how we can create custom virtual machine images and deploy them to the Microsoft Azure platform. To complete this process you will need an Azure subscription, the Azure PowerShell module installed and a pre-prepared VHD which you would like to use (VHDX is not supported at present).

You can sign up for a free trial of Microsoft Azure here if you don’t currently hold a subscription.

Completing this process will allow you to take advantage of platforms which aren’t offered “out of the box” on Microsoft Azure, e.g. Server 2003 and Server 2008, for testing and development. Currently Microsoft offers Server 2008 R2 as the minimum level from the Azure Image Gallery.

What do I need to do to prepare my image?

To complete this process, I built a volume license copy of Windows Server 2008 Standard inside a generation one Hyper-V guest virtual machine. Once the installation of the operating system completed I installed Adobe Acrobat Reader. I then ran sysprep.exe to generalise the image. This is important: if you don’t generalise your images, they will fail to deploy on the Azure platform.

I will detail the steps carried out after the operating system install below.

  1. Log into the newly created virtual machine
  2. Install the Hyper-V virtual machine additions (if your guest doesn’t already have it installed)
  3. Install any software that is required in your image (I installed Acrobat Reader)
  4. From an Administrative command prompt, navigate to %windir%\system32\sysprep and then execute the command “sysprep.exe”

  5. Once the SysPrep window has opened, select Enter System Out of Box Experience (OOBE) and tick the Generalize check box. The shutdown action should be set to Shutdown; this will shut down the machine gracefully once the sysprep process has completed.
  6. Once you are ready, select OK and wait for the process to complete.

I built my machine inside a dynamically expanding VHD; the main reason for doing so was to avoid having to upload a larger file than necessary. As a result, I chose to compact the VHD before moving on to the next step by using the disk wizard inside the Hyper-V management console. To complete this process, follow the steps below.

  1. From the Hyper-V Host pane, select Edit Disk
  2. Browse to the path of VHD we were working on, in my case it is “C:\VHDs\Server2008.vhd” and select Next
  3. Select Compact and Finish.
  4. Wait for the process to complete. Your VHD file is now ready to upload.

What’s next?

We are now ready to upload the virtual machine image. To complete this process you will need access to the Azure PowerShell cmdlets and a storage account for the source VHD. If you do not already have a storage account created, you can follow the documentation provided by Microsoft here.

IMPORTANT: Once you have a storage account in Azure, ensure that you have a container called vhds. If you don’t have one, you can create one by selecting Add from the bottom toolbar, name it vhds and ensure the access is set to Private (container shown below).
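
If you’d rather create the container from PowerShell than from the portal, something like this should also work (a sketch using the classic service management cmdlets and a placeholder account name):

    $storageKey = (Get-AzureStorageKey -StorageAccountName "mystorageaccount").Primary
    $context    = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $storageKey
    New-AzureStorageContainer -Name "vhds" -Permission Off -Context $context    # Off = private access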


We are now ready to connect to the Azure account to kick off the upload process. To do so, launch an Administrative Azure PowerShell console and follow the steps below.

  1. Run the cmdlet Add-AzureAccount; this will present a window which will allow you to authenticate to Azure.

  2. On the next screen, enter your password. The PowerShell session is now connected.
  3. To verify that the session connected successfully, run the cmdlet Get-AzureAccount; you should see your account listed below.

We are now ready to commence the upload process. You will need your storage blob URL. You can find this on the container page we visited previously to create the vhds container.

The complete command is as follows.

Add-AzureVhd -Destination "<StorageBlobURL>/vhds/Server2008.vhd" -LocalFilePath "C:\VHDs\Server2008.vhd"

Once you have executed the command, two things happen:

  1. The VHD file is indexed by calculating the MD5 hash

  2. Once the indexing process is completed, the upload starts.


This is very neat, as the demo gods often fail us… (my upload actually failed part way through.) Thankfully I was able to re-execute the command, which resumed the upload process where the first pass left off (see below.)

  3. Wait for the upload process to complete.

Creating the Image in the Azure console.

Now that our upload has completed, we are ready to create an image in the Azure console. This will allow us to easily spawn virtual machines based on the image we uploaded earlier. To complete this process you will need access to the Azure console and your freshly uploaded image.

  1. Select Virtual Machines from the management portal.
  2. Select Images from the virtual machines portal.
  3. Select Create an Image

  4. A new window titled Create an image from a VHD will pop up. Enter the following details (as shown below):
  • Name
  • Description
  • VHD URL (from your storage blob)
  • Operating System Family


Ensure you have ticked I have run Sysprep on the virtual machine or you will not be able to proceed.

  5. The Image will now appear under MY IMAGES in the image gallery.
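
For completeness, the image registration can also be scripted (a sketch using the classic Azure module; substitute your storage blob URL as used with Add-AzureVhd earlier):

    Add-AzureVMImage -ImageName "Server2008-Custom" -Label "Server 2008 custom image" `
        -MediaLocation "<StorageBlobURL>/vhds/Server2008.vhd" -OS Windows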

Deploying the image!

All the work we have completed so far won’t be much use if the deployment phase fails. In this part of the process we will deploy the image to ensure it will work as expected.

  1. Select Virtual Machines from the management portal.
  2. Select New > Compute > Virtual Machine > From Gallery
  3. From the Choose an Image screen, select MY IMAGES. You should see the image that we just created in the gallery (shown below.)

  4. Select the Image and click Next
  5. Complete the Virtual Machine Configuration with your desired settings.
  6. Wait for the virtual machine to complete deployment provisioning.

Connecting to the virtual machine.

The hard work is done! We are now ready to connect to our newly deployed machine to ensure it is functioning as expected.

  1. Select Virtual Machines from the management portal.
  2. Select the Virtual Machine and then click Connect from the toolbar down the bottom. This will kick-off a download for the RDP file which will allow you to connect to the virtual machine.
  3. Launch the RDP file, you will be asked to authenticate. Enter the credentials you specified during the deployment phase and click OK


  4. You will now be presented with your remote desktop session, connected to your custom image deployed on Microsoft Azure.

I went ahead and activated my virtual machine. To prove there is no funny business involved, I have provided one final screenshot showing the machine activation status (which details the Windows version) and a snip showing the results of the ipconfig command. This lists the internal.cloudapp.net addresses, showing that the machine is running on Microsoft Azure.

Enjoy!

How to fix 403 errors when managing Azure SQL Database from Visual Studio

I was recently trying to manage Azure SQL Databases via Visual Studio in a new Azure subscription and was unable to open the SQL Databases node at all and received the following error message.

Screenshot of Visual Studio error dialog.

The text reads:

Error 0: Failed to retrieve all server data for subscription ‘GUID’ due to error ‘Error code: 403 Message: The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription.’.

and my Server Explorer window looked like this:

How Server Explorer Looked

I must admit that I don’t often manage my Azure assets via Visual Studio so it had been a while since I’d used this tooling. I tried a few ways to get this to work and double checked that I had the right details for the subscription registered on my local machine. Storage worked fine, Virtual Machines worked fine… everything looked good except SQL Databases!

(At this point I’d say… hint: I should have read the error message more closely!)

After some additional troubleshooting it turns out that unlike many other Azure offerings, Azure SQL Database does not support OAuth-based connections and instead uses certificates (you know, like the error message says…).

Unfortunately, it turns out that if you have an expired or otherwise invalid certificate for any Azure subscription registered then Visual Studio will fail to enumerate SQL Database instances in the subscription you are currently using even if its certificate is fine.

The use of a subscription GUID isn’t that helpful when troubleshooting because I completely missed that the problematic certificate wasn’t even from the subscription I was currently trying to use!

You can fix this issue by managing your registered Azure subscriptions from within Visual Studio as follows.

Manage Certificates in Visual Studio

  • Right-click on the top Azure node in Server Explorer.
  • Select Manage Subscriptions… from the menu.
  • Click on the “Certificates” tab and find the Subscription with the GUID matching the error.
  • Click “Remove…” and then close the dialog.

You should now be able to open the Azure SQL Database node in Server Explorer and manage your databases as you expect!

HTH.

Purchasing Additional SharePoint Online Storage for Office 365

There are a number of different options for customers to purchase Office 365.  In the U.S.A. and the majority of markets, customers can purchase Office 365 directly from Microsoft via MOSP (Microsoft Online Subscription Program).  This is the most common way for small businesses to purchase Office 365.  Customers can purchase licenses using a credit card.  There is no minimum license quantity for MOSP.  Customers pay for Office 365 via an automatic monthly subscription.

In Australia, Telstra has a syndication agreement with Microsoft.  This means that customers who want to purchase Office 365 in Australia transact the purchase with Telstra.  This service is known as T-Suite.  Billing for T-Suite can be via a monthly credit card payment or the customer’s existing Telstra account.  After purchasing the licenses from Telstra, customers are provided with an Office 365 Org ID and password to access the new tenant.

Another option for customers to purchase Office 365 is via a volume license (VL) agreement.  For large enterprises that require 250 licenses and above, customers can purchase via an Enterprise Agreement (EA) or Enterprise Subscription Agreement (EAS).  Smaller customers that require between 5 – 249 licenses can purchase Office 365 via an Open Agreement.  VL agreements require a commitment of 1 – 3 years, depending on the agreement.  VL agreements are billed annually.  Customers who are based in Australia and wish to buy Office 365 directly from Microsoft can do so with a VL agreement.

There are many differences between Office 365 purchases via MOSP vs. VL.  The differences include:

1) The prices of the licenses

2) The frequency of the payments

3) The length of commitment

4) The types of SKUs which are available

It is important to consider all of these factors before making a decision on the best way to purchase Office 365 for your organization.

This blog will focus on one of the major differences between the Office 365 SKUs offered via MOSP vs. an Open agreement.

When customers purchase Office 365 and SharePoint Online, they are provided with 10 GB of storage by default.  This storage can be used to provision a number of different SharePoint Online websites, including public and internal websites.  For each Office 365 and SharePoint Online user license purchased, the tenant is provided with an additional 500 MB of storage.  For example, a customer who purchases 10 E3 licenses will receive 10 GB + (10 users) * (500 MB) = 10 GB + 5 GB = 15 GB.  Please note that this pool of SharePoint Online storage is separate from the storage used by OneDrive for Business; each user who uses OneDrive for Business is now given 1 TB of storage for personal files.

In some instances, customers may want to increase the amount of storage available for SharePoint Online.  Kloud Solutions works with many customers who would like to move their corporate file shares from an on-premises server to SharePoint Online.  The storage required for your file shares may exceed the default storage allocation in SharePoint Online.  Therefore, Microsoft has introduced the option for customers to purchase additional SharePoint storage on a per GB basis.

There are many different types of Office 365 plans that can be purchased.  You will first need to determine if your existing Office 365 subscription is eligible for additional storage.  SharePoint Online storage is available for the following subscriptions:

  • Office 365 Enterprise E1
  • Office 365 Enterprise E2
  • Office 365 Enterprise E3
  • Office 365 Enterprise E3 for Symphony
  • Office 365 Enterprise E4
  • Office 365 Midsize Business
  • Office Online with SharePoint Plan 1
  • Office Online with SharePoint Plan 2
  • SharePoint Online (Plan 1)
  • SharePoint Online (Plan 2)

SharePoint Online Storage for Small Business is available for the following subscriptions:

  • Office 365 (Plan P1)
  • Office 365 Small Business Premium
  • Office 365 Small Business

If your subscription is one of the above eligible plans, you can purchase Office 365 via MOSP or the T-Suite portal for customers in Australia.

One of the key limitations to consider is that Microsoft does NOT offer the option to purchase additional SharePoint Online storage via an Open Agreement for small and medium businesses.  For instance, you can purchase 10 E3 licenses via an Open Agreement. This would provide 15 GB of SharePoint Online storage using the example above.  However, you would NOT be able to purchase additional GB of storage as the SKU is not available on the Open price list.

You can mix Open and MOSP licensing in the same Office 365 tenant.  For example, you could buy 10 E3 license via an Open agreement and then apply them to a tenant using an Office 365 product key.  If you wanted to buy an additional 3 GB of storage, you could do so via a credit card in the same tenant.  However, SharePoint Online storage must be tied to another license.  It cannot be purchased by itself.  So you would have to buy at least 1 additional E3 license via MOSP in order to add the additional 3 GB of storage.  This is something to consider when you are pricing an Office 365 solution.

For reasons of both simplicity and flexibility, Kloud Solutions recommends purchasing Office 365 via MOSP or T-Suite if you need additional SharePoint Online storage today, or if you think you may need it in the future.  Purchasing via MOSP or T-Suite allows you to keep your options open and plan for future storage growth.  Buying Office 365 via Open means that you are locked in to a certain storage allocation as determined by Microsoft.   There is no guarantee that Microsoft’s default storage allocation will meet your requirements.

It is very likely that Microsoft will increase the default storage allocation for SharePoint Online in the future, as the cost of storage is always declining.  For example, Microsoft recently increased the amount of storage available in OneDrive from 25 GB to 1 TB.  Here is a blog post which references this change:

https://blog.kloud.com.au/2014/05/04/sharepoint-online-storage-improvements-in-office-365/

However, there have been no announcements from Microsoft to date indicating that they plan to increase the default storage for SharePoint Online beyond 10 GB per tenant or 500 MB per user.  There will be future posts to this blog about this topic if there are any relevant updates in the future.

If you have any questions about the different options for purchasing Office 365 from Microsoft or Telstra, please contact Kloud  Solutions using the following URL:

http://www.kloud.com.au/

How I reduced the Worker Role time from above 5 hrs to less than 1 hour

This post talks about my experience of reducing the execution time of a Worker Role from above 5 hours to under 1 hour. This Worker Role is set up to call some external APIs to get a list of items with their promotions and store them locally: a typical batch update process that you would see in many apps. Our client was only interested in quick fixes that would help them reduce the time it was taking the Worker Role to run. We were not allowed to change the architecture or make a big change as they had a release deadline in a few weeks. So here is what I have done.

Profiling

Before making any changes, I started by profiling the application to see where the bottleneck was. Quite often you find people doing “optimisations and bug fixes” without pre-defined metrics. This is a big mistake, as you cannot measure what you cannot see. You need to quantify the issue first, then start applying your changes.

Database

As we started to see the statistics of how slowly the worker role was running, we began to understand that there was a problem in the way we were interacting with the database. The worker role deletes all items and then inserts them again. Do not ask me why delete and insert again; that’s a question for the solution architect/developer of the worker role to answer. Data insertion is happening in huge volumes (millions of records). To reduce the time these db transactions were taking, I made the following changes:

1. Disabling Entity Framework change tracking
Entity Framework keeps track of all changes to any entity object in memory to facilitate inserting/updating/deleting records from the database. While this is a good thing when you are using one or a few objects, it is a killer when you are dealing with millions of records simultaneously. To avoid it, you just need to configure your EF context to disable change tracking:

	dbContext.Configuration.AutoDetectChangesEnabled = false;

2. Disabling Entity Framework validation
Similar to the first change, we do not need to add extra overhead just to validate data we are certain of, so we switched off EF validation:

	dbContext.Configuration.ValidateOnSaveEnabled = false;

3. Individual inserts vs bulk insert
One of the things I found in the worker role is that it was inserting the records one by one in a foreach statement. This could work fine for a few items and you would not notice the difference, but when it comes to huge volumes, it kills the performance. So I thought of building an extension for the EF context to insert data in bulk, but fortunately I found that somebody has already done that. EF.BulkInsert is an extension for EF that allows you to insert in bulk; it is basically a group of extension methods for your EF context. It is very lightweight and it works great. The authors show on the project home page that bulk insert is more than 20 times faster than individual inserts. When using such an extension, make sure to configure the settings properly: things like BatchSize, Timeout, DataStreaming, etc.

4. Transactions
I see this quite often: developers surround their db code with a transaction. It might be a good thing on paper, but you need to understand the implications of that transaction. Such transactions slow down the whole process, put a huge load on the db server and the app server, and make rolling back or committing harder. Moreover, EF 6 and above already adds a transaction scope for your changes when committing them, so your changes will either be committed or rolled back. There was therefore no need for such a transaction scope, and I got rid of it.

Computing

Another bottleneck that I found was in the way we were generating tags. This is just metadata about items and their grouping. It was taking a huge amount of time to loop through all the items and create these tags, categories, groups, etc. This was all happening in memory. The change I made here was very simple but substantial: I just made it run in parallel, like this:

	Parallel.ForEach(products, (product) =>
	{
		// code for generating tags, categories, etc
	});

If you are trying this, make sure that your code is thread-safe. I had to fix a few issues here and there as in my case the code was not thread-safe, but this was a small change. Also, if you are using a shared list among threads, then you might want to consider using a thread-safe collection like ConcurrentDictionary or ConcurrentBag.

External Resources

The list of items and promotions was accessed from an external API. The worker role was accessing this list from 7 main API endpoints (7 different stores) before starting to process the data. This was very slow. To speed this up, I had to fire multiple requests in parallel, similar to what I had done with generating tags, as below:

	var endPoints = new[] {1, 2, 3, 4, 5, 6, 7};
	Parallel.ForEach(endPoints, (apiEndpoint) =>
	{
		// code for calling external API
	});

Also, we started caching some of the info that we were accessing from the API locally; this saved us a lot of time too.

Making these small changes took me less than a day but made a huge impact on the way the worker role runs. Having an app running for such a long time in the cloud could cost you lots of money, which my client did not mind. What they really hated was the fact that it was failing so many times. The fact that you need to wait 5 hours means that it is more error-prone: connections could drop, the database could time out, etc. Plus, whenever developers made any changes, they had to wait a long time when testing the worker role locally or on the server.

In conclusion, making small changes has benefited my client significantly and they were very satisfied. I hope you find these tips useful, and I would love to hear your thoughts. If you are struggling with a long-running process, then get in touch and we will happily help you out.