
Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post describes how to automatically copy SharePoint Online (SPO) workflow task metadata into an external data warehouse. In this scenario, workflow tasks become the subject of another workflow that copies each task's data into the external database through a custom Web API endpoint acting as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to build a meaningful workflow reporting system once the number of workflow tasks grows large. The easiest way to solve the problem is to use a Nintex workflow to "listen" for changes to the workflow tasks, request the task data via the SPO REST API and, finally, send the data to the external data warehouse's Web API endpoint.

Some SPO solutions require a reporting system that includes workflow task metadata, for example a report on documents together with the statuses of the workflows linked to those documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data is unfeasible because SPO throttles requests for workflow data. In fact, the throttling is so tight that generating reports with more than a hundred records is unrealistic. In addition, many companies would like to build Business Intelligence (BI) systems that analyse workflow task data. A data warehouse holding all the workflow task metadata serves both of these needs very well.

A few prerequisites must be met to implement the solution. You must know the basics of Nintex workflow creation, and you must be able to build a backend solution with the database of your choice and a custom Web API endpoint that writes the data model to that database. In this post we used Visual Studio 2015 and created an ordinary REST Web API 2.0 project backed by an Azure SQL Database.

The solution involves the following steps:

  1. Get a sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing the data model to the database.
  3. Expose one POST method of the REST Web API that accepts a JSON model of the workflow task metadata.
  4. Create a Nintex workflow in the SPO list that stores your workflow tasks.
  5. Design the Nintex workflow: call the SPO REST API to get the JSON metadata and pass this JSON object to your Web API hook.

Below is a detailed description of each step.

We start by exporting the metadata of a single workflow task. Find the SPO list that holds all your workflow tasks and navigate there; you will need the name of that list to start calling the SPO REST API. It is easier to use a REST tool to perform the Web API requests, and many people use Fiddler or Postman (a Chrome extension) for this job. Query the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:

Picture 1
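The same request can also be made from code. Below is a minimal C# sketch using HttpClient; the site URL, list title and item ID are placeholders, and authentication (cookies or a bearer token for your tenant) still has to be added:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hypothetical helper that fetches the JSON metadata of a single workflow task.
static async Task<string> GetTaskMetadataAsync(int itemId)
{
    using (var client = new HttpClient())
    {
        // Ask SPO for JSON instead of the default Atom/XML response.
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        // getbytitle('...') takes the display name of the workflow tasks list.
        var url = "https://yourtenant.sharepoint.com/sites/yoursite" +
                  $"/_api/web/lists/getbytitle('Workflow Tasks')/items({itemId})";

        // Add your authentication headers here before sending the request.
        return await client.GetStringAsync(url);
    }
}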

The key element in this request is getbytitle("list name"), where "list name" is the SPO list name of your workflow tasks. Remember to add the header "Accept" with the value "application/json"; it tells SPO to return JSON instead of the default XML. As a result, you will get a JSON object containing the metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse, so we create a data model containing only the fields of our choice. For example, it can look like the following C# class, with all properties based on the model returned earlier:
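Here is a minimal sketch of such a model. The class name is chosen to match the web hook endpoint used later in this post, and the properties are illustrative; pick the fields that actually exist in your task list:

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class EmployeeFileReviewTask
{
    // Use the SPO list item ID as the primary key so repeated updates map to the same row.
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    public string Title { get; set; }        // task title
    public string Status { get; set; }       // workflow task status
    public string AssignedTo { get; set; }   // assignee login or display name
    public DateTime? DueDate { get; set; }
    public DateTime Modified { get; set; }
}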

The next step is to create a Web API that exposes a single method accepting our model as a parameter in the body of the request. You can choose any REST Web API design. We created a simple Web API 2.0 project in Visual Studio 2015 using the standard MVC / Web API 2.0 wizard. Then we added an empty controller and filled it with code that uses Entity Framework to write the data model to the database. We also created a code-first EF database context that works with just the one entity described above.

The code of the controller:
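Below is a minimal sketch of such a controller, assuming the model and database context classes sketched in this post. The upsert-by-item-ID behaviour is an illustrative choice rather than the exact original implementation:

using System.Threading.Tasks;
using System.Web.Http;

public class EmployeeFileReviewTasksWebHookController : ApiController
{
    // POST api/EmployeeFileReviewTasksWebHook
    public async Task<IHttpActionResult> Post([FromBody] EmployeeFileReviewTask task)
    {
        if (task == null)
        {
            return BadRequest("The request body did not contain a task model.");
        }

        using (var db = new WorkflowTasksContext())
        {
            // Insert or update by SPO item ID so repeated workflow runs update the same record.
            var existing = await db.Tasks.FindAsync(task.Id);
            if (existing == null)
            {
                db.Tasks.Add(task);
            }
            else
            {
                db.Entry(existing).CurrentValues.SetValues(task);
            }

            await db.SaveChangesAsync();
        }

        return Ok();
    }
}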

The code of the database context for Entity Framework:
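Again, a minimal sketch, assuming a connection string named "WorkflowTasksDb" in Web.config that points at the Azure SQL Database:

using System.Data.Entity;

public class WorkflowTasksContext : DbContext
{
    public WorkflowTasksContext() : base("WorkflowTasksDb") { }

    // The single entity described above.
    public DbSet<EmployeeFileReviewTask> Tasks { get; set; }
}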

Once you have created the Web API, you should be able to call its method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Don't forget to include the proper authentication headers, add the "Accept" header with "application/json" and set the request type to POST. Once you've tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

Picture 4
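If you prefer to script the test rather than use a REST client, a rough C# equivalent might look like this; the URL matches the endpoint above, while the JSON payload fields and any authentication headers are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Hypothetical test harness for the web hook.
static async Task TestWebHookAsync()
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        // Sample payload matching the data model; adjust the fields to your own model.
        var json = "{ \"Id\": 1, \"Title\": \"Task 1\", \"Status\": \"Completed\" }";
        var content = new StringContent(json, Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            "https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook",
            content);

        Console.WriteLine(response.StatusCode); // expect 200 OK
    }
}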

Next, we will create a new Nintex Workflow in the SPO list with our workflow tasks. It is all straightforward. Click Nintex Workflows, then create a new workflow and start designing it.

Picture 5

Picture 6

Once you've created a new workflow, click on the Workflow Settings button. In the displayed form, set the parameters as shown on the screenshot below. We set "Start when items are created" and "Start when items are modified". With these settings, any modification of a workflow task will start this workflow automatically, including cases where the workflow task has been modified by another workflow.

Picture 7.1

Create five steps in this workflow as shown on the following screenshot, labelled 1 to 5. Keep in mind that blocks 3 and 5 are there only to assist in debugging and are not required for production use.

Picture 7

Step 1. Create a Dictionary variable that contains the SPO REST API request headers. You can add any required headers, including authentication headers. It is essential to include the Accept header with the value "application/json" to tell SPO that we want JSON responses. We set the output variable to SPListRequestHeaders so we can use it later.

Picture 8

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your workflow tasks list, as discussed before. The list of fields that we want returned is defined in the "$select=…" parameter of the OData request; we only need the fields that are included in our data model. The other settings are straightforward: we supply the request headers created in Step 1 and create two more variables for the response. SPListResponseContent will receive the resulting JSON object that we are going to need in Step 4.
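For illustration, such a request URL might look like the following, where the site URL and list title are placeholders and the field list matches the sample model shown earlier:

https://yourtenant.sharepoint.com/sites/yoursite/_api/web/lists/getbytitle('Workflow Tasks')/items(<current item ID>)?$select=Id,Title,Status,AssignedTo,DueDate,Modified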

Picture 9

Step 3 is optional; we added it to debug our workflow. It sends an email with the contents of the JSON response from the previous step, showing us what was returned by the SPO REST API.

Picture 10

Step 4. Here we call our custom API endpoint, passing the JSON object that we got from the SPO REST API. We supply the full URL of our web hook, set the method to POST and inject SPListResponseContent from Step 2 into the request body. We also capture the response code so we can display it later in the workflow history.

Picture 11

Step 5 is also optional. It writes a log message with the response code that we have received from our API Endpoint.

Picture 12

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of workflow tasks, click on any task, modify any of its properties and save the task. This will start our workflow automatically. You can monitor workflow execution in the workflow history. Once the workflow has completed, you should see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

Picture 13

To make sure that everything went well, open your database and check the records updated by your Web API. After every workflow task modification you will see the corresponding changes in the database. For example:

Picture 14

 

In this post we have shown that automatically copying workflow task metadata into your data warehouse can be done with a simple Nintex workflow that performs only two REST Web API requests. The solution is quite flexible, as you can select the required properties from the SPO list and export them into the data warehouse, and more tables can easily be added if there is more than one workflow tasks list. This enables the creation of a powerful reporting system on top of the data warehouse and also lets you employ the BI data analytics tool of your choice.

Why you should use Git over TFS


Git || TFS (Source: VisualStudio.com)

I have been an advocate of git for a long time now and I might be a little biased, but take a moment to read this and judge for yourself whether git is the way to go or not.

If you are starting a new greenfield project, then you should consider putting your code in a git repository instead of TFS. There are many reasons why git is better suited, but the two main ones from my perspective are:

Cross-Platform Support
Git tools are available for all platforms and there are many great (and FREE) GUI tools like GitExtensions or SourceTree. In today's development world, there are guaranteed to be multiple sets of technologies, languages, frameworks, and platform support in the same solution/project. Think of using NodeJS with a Windows Phone app, or building apps for Windows, Android, and iOS. These are very common solutions today and developers should be given the freedom of choosing the development platform of their choice. Do not limit your developers to using Visual Studio on Windows. One might argue that TFS Everywhere (an add-on to Eclipse) is available for other platforms, but you should try it and see how buggy it is and how slow it is at finding pending changes. Having used TFS Everywhere, I would not really recommend it to anyone unless it is your last resort.

Developers should be able to do their work any time and anywhere

Work Offline
Developers should be able to do their work any time and anywhere. Having TFS rely on an internet connection to commit, shelve, or pull is just not good enough. I have even had many instances where TFS was having availability problems, which meant that I was not able to push or pull changes for an hour. This is not good. Git is great when it comes to working offline, thanks to the inherent advantage of being a fully distributed source control system. Git gives you the ability to:

  • Have the full history of the repo locally. This enables you to review historical changes, review commits, and merge with other branches, all locally.
  • Work and commit changes to your branch locally.
  • Stash changes locally.
  • Create local tags.
  • Change repo history locally before pushing.

And many other benefits that come out of the box.

Having TFS rely on an internet connection to commit, shelve, or pull changes is just not good enough

Hosting Freedom
With TFS, you are pretty much stuck with the Microsoft TFS offering. Git, however, is widely available and many providers offer free hosting for it, including VSTS, GitHub, and Bitbucket. With Visual Studio Online itself offering to host your code in git repositories, there is really no reason not to take full advantage of git.

With VSO itself offering to host your code in git repositories, there is really no reason not to take full advantage of git

I have introduced git in many development teams so far, and I must say that I did get resentment and reluctance from some people at the start, but when people start using it and enjoying the benefits, everything settles in place.

Some developers are afraid of command-line tools or the complexity of push, pull, and fetch, and they want to stay simple with TFS, but this does not fit today's development environment style. Last month, I was working at a client site where they were using Visual Studio to develop, debug, and deploy to iOS devices, and it was ridiculously slow. As a final resort, I opted to use Eclipse with the TFS Everywhere plugin alongside my Xamarin Studio, and it was a lot better. I still had to suffer the pain of Eclipse-TFS not seeing my changes every now and then, but compared to the time I was saving by choosing my own development IDE (Xamarin Studio), I was happy with that.

If you are starting a new project, or if you’re looking for a way to improve your team practices, then do yourself a favour and move to Git

So to summarise: if you are starting a new project, or if you are looking for a way to improve your team practices, then do yourself a favour and move to git. You will be in good company, especially if you enjoy contributing to open source projects :). Teams that use VSTS as their ALM platform can still use VSTS, but host their code in git repositories on VSTS to take advantage of git and TFS together.

Finally, if you have any questions or thoughts, I would love to hear from you.

Publish to a New Azure Website from behind a Proxy

One of the great things about Azure is the ease with which you can spin up a new cloud-based website using PowerShell. From there you can quickly publish any web-based solution from Visual Studio to the Azure hosted site.

To show how simple this is: after configuring PowerShell to use an Azure subscription, I've created a new Azure hosted website in the new Melbourne (Australia Southeast) region:


Creating a new website in PowerShell
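The cmdlet used in the screenshot is along these lines, a sketch using the classic Azure PowerShell module (the site name here matches the one that appears later in this post; substitute your own):

New-AzureWebsite -Name "mykloudtestapp" -Location "Australia Southeast"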

That was extremely easy. What next? Publish your existing ASP.NET MVC application from Visual Studio to the web site. For this test, I’ve used Microsoft Visual Studio Ultimate 2013 Update 3 (VS2013). VS2013 offers a simple way from the built-in Publish Web dialogue to select your newly created (or existing) websites.


Web Publish to Azure Websites

This will require that you have already signed in with your Microsoft account linked with a subscription, or you have already imported your subscription certificate to Visual Studio (you can use the same certificate generated for PowerShell). Once your subscription is configured you can select the previously created WebSite:


Select Existing Azure Website

The Publish Web dialogue appears, but at this point you may experience a failure when you attempt to validate the connection or publish the website. If you are behind a proxy, the error will show as destination not reachable.


Unable to publish to an Azure Website from behind a proxy

Could not connect to the remote computer ("mykloudtestapp.scm.azurewebsites.net"). On the remote computer, make sure that Web Deploy is installed and that the required process ("Web Management Service") is started. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_DESTINATION_NOT_REACHABLE. Unable to connect to the remote server

The version of Web Deploy included with VS2013 is not able to publish via a Proxy. Even if you configure the msbuild.exe.config to have the correct proxy settings as documented by Microsoft, it will still fail.

Luckily, Web Deploy v3.6 BETA3, released in August 2014, fixes this issue. To resolve the error, download the Web Deploy beta and patch your VS2013 installation. After patching Visual Studio, you can modify the proxy settings used by msbuild.exe (msbuild.exe.config) to use the system proxy:

<system.net>
	<defaultProxy useDefaultCredentials="true" />
</system.net>

You should now be able to publish to your Azure WebSite from behind a proxy with VS2013 Web Deploy.

Reducing the size of an Azure Web Role deployment package

If you’ve been working with Azure Web Roles and deployed them to an Azure subscription, you likely have noticed the substantial size of a simple web role deployment package. Even with the vanilla ASP.NET sample website the deployment package seems to be quite bloated. This is not such a problem if you have decent upload bandwidth, but in Australia bandwidth is scarce like water in the desert so let’s see if we can compress this deployment package a little bit. We’ll also look at the consequences of this large package within the actual Web Role instances, and how we can reduce the footprint of a Web Role application.

To demonstrate the package size I have created a new Azure cloud service project with a standard ASP.NET web role:

1

Packaging up this Azure Cloud Service project results in a ‘CSPKG’ file and service configuration file:

2

As you can see the package size for a standard ASPX web role is around 14MB. The CSPKG is created in the ZIP format, and if we have a look inside this package we can have a closer look at what’s actually deployed to our Azure web role:

3

The ApplicationWebRole_….. file is a ZIP file itself and contains the following:

4

The approot and sitesroot folders are of significant size, and if we have a closer look they both contain the complete WebRole application including all content and DLL files! These contents are being copied to the actual local storage disk within the web role instances. When you’re dealing with large web applications this could potentially lead to issues due to the limitation of the local disk space within web role instances, which is around the 1.45 GB mark.

So why do we have these duplicate folders? The approot is used during role start-up by the Windows Azure Host Bootstrapper and could contain a class derived from RoleEntryPoint. In this web role you can also include a start-up script to perform any customisations within the web role environment, for example registering assemblies in the GAC.

The sitesroot contains the actual content that is served by IIS from within the web role instances. If you have defined multiple virtual directories or virtual applications these will also be contained in the sitesroot folder.

So is there any need for all the website content to be packaged up in the approot folder? No, absolutely not. The only reason we have this duplicate content is that the Azure SDK packages the web role into both the approot and the sitesroot folders, due to the behaviour of the Azure Web Role Bootstrapper.

The solution to this is to tailor the deployment package a little bit and get rid of the redundant web role content. Let’s create a new solution with a brand new web role:

5

This web role will just hold the RoleEntryPoint derived class (WebRole.cs), so we can safely remove all other content, NuGet packages and unnecessary referenced assemblies. The web role will not contain any of the web application bits that we want to host in Azure. This results in the StartupWebRole looking like this:

6

Now we can add the web application that we want to publish to an Azure Web Role to the Visual Studio solution. The key point is to not include it as a role in the Azure Cloud Service project, but to add it as a 'plain web application' to the solution. The only web role we're publishing to Azure is the 'StartupWebRole', and we're going to package up the actual web application in a slightly different way:

7

The 'MyWebApplication' project does not need to contain a RoleEntryPoint derived class, since this is already present in the StartupWebRole. Next, we open up the ServiceDefinition.csdef in the Cloud Service project and make some modifications in order to publish our web application alongside the StartupWebRole:
8

There are a few changes that need to be made (a sketch of the resulting Sites section follows the list):

  1. The name attribute of the Site element is set to the name of the web role containing the actual web application, which is ‘MyWebApplication’ in this instance.
  2. The physicalDirectory attribute is added and refers to the location where the ‘MyWebApplication’ will be published prior to creating the Azure package.
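For example, the modified WebRole element in ServiceDefinition.csdef might look roughly like this; the binding, endpoint and relative publish path are illustrative rather than taken from the original project:

<WebRole name="StartupWebRole" vmsize="Small">
  <Sites>
    <Site name="MyWebApplication" physicalDirectory="..\publish\MyWebApplication">
      <Bindings>
        <Binding name="Endpoint1" endpointName="Endpoint1" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  </Endpoints>
</WebRole>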

Although this introduces the additional step of publishing the web application to a separate physical directory first, we immediately notice the reduced size of the deployment package:

9

When you’re dealing with larger web applications that contain numerous referenced assemblies the savings in size can add up quickly.

How to fix 403 errors when managing Azure SQL Database from Visual Studio

I was recently trying to manage Azure SQL Databases via Visual Studio in a new Azure subscription and was unable to open the SQL Databases node at all, receiving the following error message.

Screenshot of Visual Studio error dialog.

The text reads:

Error 0: Failed to retrieve all server data for subscription ‘GUID’ due to error ‘Error code: 403 Message: The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription.’.

and my Server Explorer window looked like this:

How Server Explorer Looked

I must admit that I don’t often manage my Azure assets via Visual Studio so it had been a while since I’d used this tooling. I tried a few ways to get this to work and double checked that I had the right details for the subscription registered on my local machine. Storage worked fine, Virtual Machines worked fine… everything looked good except SQL Databases!

(At this point I’d say… hint: I should have read the error message more closely!)

After some additional troubleshooting it turns out that unlike many other Azure offerings, Azure SQL Database does not support OAuth-based connections and instead uses certificates (you know, like the error message says…).

Unfortunately, it turns out that if you have an expired or otherwise invalid certificate for any Azure subscription registered then Visual Studio will fail to enumerate SQL Database instances in the subscription you are currently using even if its certificate is fine.

The use of a subscription GUID isn’t that helpful when troubleshooting because I completely missed that the problematic certificate wasn’t even from the subscription I was currently trying to use!

You can fix this issue by managing your registered Azure subscriptions from within Visual Studio as follows.

Manage Certificates in Visual Studio

  • Right-click on the top Azure node in Server Explorer.
  • Select Manage Subscriptions… from the menu.
  • Click on the “Certificates” tab and find the Subscription with the GUID matching the error.
  • Click “Remove…” and then close the dialog.

You should now be able to open the Azure SQL Database node in Server Explorer and manage your databases as you expect!

HTH.

Claims-Based Federation Service using Microsoft Azure

In this post I will discuss how you can set up Microsoft Azure to provide federation services with claims authentication in the same way that an Active Directory Federation Services (ADFS) farm would on-premises. This can be achieved with an Azure subscription, Access Control Services (ACS) and an Azure Active Directory (AAD) instance. The key benefit of using Azure SaaS is that Microsoft have taken care of all the high availability and load scaling configuration, so you have no need to manage multiple ADFS servers to gain the same functionality.

If you don't have an Azure subscription, then sign up for a free trial.

To have this process work in Azure we are going to need two functions –

  1. A service that supports claims-based protocols and acts as our token issuer – ACS.
  2. A synchronized directory with connectivity to our claims issuer service – DirSync/AAD.

Azure Access Control Services (ACS) is a cloud-based federated identity provider which currently supports tokens issued from social identities such as Windows Live ID, Facebook, Google and Yahoo, as well as the often overlooked option of enterprise identities like Active Directory. ACS can do some great things with transitions between protocols and transformation of claims as it issues secure tokens from the identity provider to the relying party applications.

Azure Active Directory (AAD) will house the synchronized identities from the on-premises Active Directory and provide claims-based authentication as it sends secure tokens with embedded claims to ACS.

The Solution

This scenario will apply to the majority of organizations that want to map identity attributes from a source Active Directory (LDAP) to outgoing claim types for a single sign-on (SSO) experience.

Step 1 – Get the Identity in the Cloud

We need to use either a new directory in Azure or an existing Office 365 directory if you already have a tenant syncing (in which case skip to Step 2) –

  • Turn on Synchronization by selecting Directory Integration > ACTIVATE
  • Create a user account to authenticate from your directory synchronization tool to AAD
  • Download the directory sync tool
  • Install and configure the directory sync tool on a server that is joined to your local Active Directory domain, and then run an initial sync. For more information, go here.
  • Remember to enable password copy in the configuration wizard

Step 2 – Create an ACS Namespace

  • Select > New > App Services > Access Control > Quick Create
  • Give it a useful name prefix, for example 'kloudfed'
  • Once finished creating, Select > Manage
  • Select > Application Integration and see the Endpoint References

Next we need to create the Azure Active Directory as the Identity Provider

Step 3 – Create AAD Endpoint Mapping

Currently ACS has no way of connecting to the information in AAD. To fix this we create an application endpoint –

  • Select Active Directory > Federated Identity Instance > Applications > ADD AN APPLICATION
  • Select Add an application my organization is developing
  • Give it a name, for example 'Access Control Services'

  • In APP Properties:
    • Sign-ON URL = < ACS Management Portal >
    • APP ID URI = < ACS Management Portal >

  • In Azure Management Portal > Open your newly created APP
  • Select > View Endpoints

  • Copy Federation Metadata document URL to add to ACS

Step 4 – Add AAD as an Identity Provider in ACS

With the Federation Metadata Endpoint configured this can be the Identity Provider defined in ACS –

  • In ACS Portal Under Trust Relationships Select > Identity Providers
  • Select Add
  • In the Add Identity Provider Page Select > WS-Federation identity provider (e.g. Microsoft AD FS 2.0) > Select Next
  • Give it a name and paste the Federation Metadata URL from the previous step

  • Click Save

Now we are ready to add a claims-aware application in ACS which is requiring federated identity authentication.

Step 5 – Create a Relying Party Application

I’m not a developer, but this is the quickest way I know to make a claims-aware application using my copy of Visual Studio Express –

  • Select File > New Project > Web > ASP .NET Web Application

  • Click OK
  • Click > Change Authentication

  • Select Organization Account > Select On-Premises
  • Enter the ACS WS-Federation Metadata URL and make up an Application Unique ID

Step 6 – Add Relying Party Application information to ACS

The ACS namespace won't issue tokens until it trusts the application. In the ACS Management Portal –

  • Select Relying Party Applications > New
  • Important – Realm is your App ID URI entered in the above steps

  • Generate a default claim transformation rule

Step 7 – Run a claims aware application

Here is my web application, which redirects from the default URL (localhost) because it requires authentication from Azure Active Directory –

  • The Redirect takes me to Azure Active Directory login

  • Enter Username & Password
  • Then I'm taken to the trusted application redirect URL after successfully authenticating, and we can already see claims information highlighted in yellow. Success!

     

Fiddler2 Capture

Let’s look at the web requests in a Fiddler2 capture to see what’s happening behind the scenes of my browser. Here are some condensed capture snippets –

#6

302 Redirect – localhost to kloudfed.accesscontrol.windows.net:


#17

302 Redirect – kloudfed.accesscontrol.windows.net to the login.windows.net service:


#21

302 Redirect – login.windows.net to login.microsoftonline.com:

#40

200 Result – Token response with claims returned from kloudfed.accesscontrol.windows.net

Filtering through the decrypted capture of #40 above, we find the claims information. This is where we can validate whether the web application is receiving the expected information in the token (I've removed the double-encoded values from the capture for readability) –

We can see the token is SAML2.0:

TokenType=http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0

One example of a claim attribute:

<Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"><AttributeValue>Peterson</AttributeValue></Attribute>

Conclusion

If you’re after a claims-based federation service for SSO and installing a bunch of new servers in your existing infrastructure is something you’re not keen on undertaking, then maybe Azure gets a look. In an industry where everything must be called by its acronym to be thought of as a serious entity, I hereby label Microsoft Azure Federation Services “MAFS”.

Through reading Kloud blog posts you have solutions for creating a claims-based federation service in the cloud (MAFS) or an on-premises ADFS farm with Server 2012 R2 (both of which should only take you about 10 minutes to install!).

How to Link Existing Visual Studio Online with Windows Azure

I was trying to link my Visual Studio Online (formerly Team Foundation Service or TFS Online) tenant to my Windows Azure subscription and stumbled through some items that are not well documented. The main problem I ran into was that Visual Studio Online only uses Microsoft Accounts, and in my case my Windows Azure subscriptions are set up using Office 365 accounts, not Microsoft Accounts. The next problem was that the account owner set on my Visual Studio Online tenant wasn't the account I thought it was, so I needed to find a way to update the account owner before I could proceed.

Microsoft Requirements to Link Visual Studio Online with Windows Azure

  • The Windows Azure subscription can't be linked to an MSDN benefit. If this is your case, you need to create a new Pay-As-You-Go subscription.
  • The Microsoft Account used for Visual Studio Online needs to be the Service Administrator or a Co-Administrator on the subscription.

Part 1 – Add / Adjust Microsoft Account users in Visual Studio Online

  1. Log into your Visual Studio Online tenant https://XXXXX.visualstudio.com/ using a Microsoft Account with admin access.
  2. Click on gear icon to Administer Account

  3. On the Administer your account screen, click on “Manage collection security and group membership”.

  4. On the Security screen, confirm you are on “Project Collection Administrators” and click “Members”.

  5. On the Members screen, confirm the Microsoft Account you want to set as owner is listed. If not click on “Add” to add the Microsoft Account.

Part 2 – Adjust Visual Studio Online owner (if current owner is incorrect)

  1. Log into your Visual Studio Online tenant https://XXXXX.visualstudio.com/ using your Microsoft Account that is the owner. If you are not sure which account is the owner you can follow these same steps but you will not be able to update until you are logged in as the account owner.
  2. Click on gear icon to Administer Account.

  3. On the Administer your account screen, click on “Settings”.

  4. On the Settings screen, click on the "Account owner" drop down and select the correct account. Side note: another problem I ran into is that I use several Microsoft Accounts, and the display name for a few of these turns out to be the same as my name. This drop down only shows the display name, so I had to try a few times to get the correct one.

  5. Click the save icon.

Part 3 – Add / Confirm Microsoft Account is a Co-Administrator on the Windows Azure subscription

  1. Log into your Windows Azure subscription https://manage.windowsazure.com as the service administrator. In my case this is an Office 365 account.
  2. On the left hand side navigation, click on “Settings”.

  3. On the Settings screen, click on “Administrators”.

  4. On the Administrators screen, if the Microsoft Account is already listed as an Account Administrator or Co-Administrator then you are done with this part. Skip to Part 4.
  5. If you want to add a new Co-Administrator, click on “Add”.

  6. On the Add a Co-Administrator screen, enter the valid email address that is the same as the Visual Studio Online tenant owner, select which subscriptions this account can co-administer, then click on check mark icon to save.

Part 4 – Link Visual Studio Online to Windows Azure subscription

  1. Log into your Windows Azure subscription https://manage.windowsazure.com as the same Microsoft Account that is the owner of Visual Studio Online tenant.
  2. On the left hand side navigation, click on “Visual Studio Online”.

  3. On the Visual Studio Online screen, click on “Create or Link a Visual Studio Online Account”

  4. This will open the “New” menu.

  5. Click on “Link to Existing”
  6. On the Link to Existing options, confirm the correct Visual Studio Online tenant is displayed, then pick which subscription you want to bill Visual Studio Online services to. You can later unlink your Visual Studio Online account and relink to a different subscription but any licensing purchased will not be refunded.

  7. Click on “Link Account”

  8. Windows Azure will show that it is linking the Visual Studio Online tenant.

Part 5 – Confirm and Manage the Visual Studio Online tenant account related settings.

  1. Log into your Windows Azure subscription https://manage.windowsazure.com as any Service Account or Co-Administrator account that has access to the same subscription as setup with Visual Studio Online.
  2. On the left hand side navigation, click on “Visual Studio Online”.

  3. On the Visual Studio Online screen, you should see the linked Visual Studio Online tenant.

  4. Click on the name of the Visual Studio Online tenant.
  5. Main page looks like this.

  6. Dashboard page looks like this.

  7. Scale page looks like this.

Using SkyDrive (or OneDrive) for Source Code

I don’t keep anything on my laptop and haven’t done for some time. Of course there is the very simple problem of losing a device or a hard disk, but now, with the advent of multiple virtual machines and physical devices, I find myself working on a large number of different machines that may or may not be on my physical laptop. That means I need a home for source code that is available from many machines and backed up somewhere safe.

Visual Studio certainly has taken steps toward supporting storage off-box with the advent of the online TFS offering www.visualstudio.com. That’s fine if you have a version of Visual Studio that supports TFS and if you’re happy to check in and check out your code. But wouldn’t it be nice if you just hit save and let the source code be synced to the cloud in the background?

The two new products, SkyDrive Pro (for SharePoint to local machine sync) and the SkyDrive application (for SkyDrive to machine sync), work in the background to keep your machine and the master repository in sync whenever you're online, while still allowing file access when offline. These two products make it simple to offload file storage to a safe, shared platform.

So that’s great for documents but what about source code? What we need is to create projects locally and have them synced to the cloud automatically.

First step: install SkyDrive, sync it to your local machine and add a folder called "Source" (or whatever).

Now we need to change the default behaviour of Visual Studio so that it creates projects in that folder by default.

Now it's simple enough to create a new project and have it synced directly to SkyDrive; however, you'll notice the little SkyDrive sync icon changes every time you build.

What TFS does really well (and SkyDrive doesn't) is carefully filter between source code and binary output files when syncing the source tree to the repository. Visual Studio is remarkably stubborn about needing to have output files under the source tree, which causes excessive syncing traffic. But this can be changed with some effort.

Create a new user environment variable (in the Windows 8 Start menu just type env and click Settings in the side charm). The variable is called OBJDIR and points to somewhere on your machine that will hold all the binary output and debug files.
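For example, from a command prompt you could set it with something like the following (the path is just an example):

setx OBJDIR "C:\Builds\obj"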

Now restart Visual Studio and open the new project you created, then right-click on the project and select "Unload Project" from the Visual Studio solution.

Then edit the project file in the XML editor

Under the top default <PropertyGroup> element, put the following elements:

<PropertyGroup>
  <OutputPath>$(OBJDIR)\$(SolutionName)\bin\$(Configuration)\</OutputPath>
  <BaseIntermediateOutputPath>$(OBJDIR)\$(SolutionName)\obj\$(Configuration)\</BaseIntermediateOutputPath>
  <IntermediateOutputPath>$(BaseIntermediateOutputPath)\</IntermediateOutputPath>
</PropertyGroup>

One more step is to ensure that you take out any other <OutputPath> directives from the project file, as we have defined our own at the top level. For example, you should remove <OutputPath>bin\Debug\</OutputPath> and <OutputPath>bin\Release\</OutputPath>.

Now when you build in Visual Studio, the source files remain in the folder synced by SkyDrive, and an "obj" folder appears under your OBJDIR\SolutionName folder with the raw binary assemblies on the local machine. Most importantly, the little sync icon no longer comes up on every build (but gladly it does when you save the source code).

I still use TFS for saving away source code and managing multiple concurrent developers but now I’ve changed the default behaviour of Visual Studio so all my neat little tools and blog post source code are safely tucked away in my SkyDrive and available on any machine (virtual or physical or cloud) that I choose to work on.