Amazon Web Services vs Microsoft Azure service comparison cheat sheet

Originally posted on Lucian’s blog at clouduccino.com.

I’m a big fan of both Microsoft Azure and Amazon Web Services. The two clouds are redefining how the web, apps and everything else on the internet is made accessible, from the enterprise to the average user. Both for my own benefit and for yours, here’s a detailed side-by-side comparison of the services and features available in each cloud:

| Cloud Service | Microsoft Azure | Amazon Web Services |
| --- | --- | --- |
| Locations | Azure Regions | Global Infrastructure |
| | NA | Availability Zones |
| Management | Azure Portal | Management Console |
| | Azure Preview Portal | NA |
| | PowerShell + Desired State Configuration | Command Line Interface |
| Compute Services | Cloud Services | Elastic Beanstalk |
| | Virtual Machines | Elastic Compute Cloud (EC2) |
| | Batch | Auto Scaling |
| | RemoteApp | WorkSpaces |
| Web and Mobile | Web Apps | NA |
| | Mobile Services | Mobile SDK |
| | API Management | CloudTrail |
| | NA | Cognito |
| | NA | Mobile Analytics |
| Storage | SQL Databases | Relational Database Service (RDS) |
| | DocumentDB | DynamoDB |
| | Redis Cache | Redshift |
| | Blob Storage | Simple Storage Service (S3) |
| | Table Storage | Elastic Block Store (EBS) |
| | Queues | Simple Queue Service (SQS) |
| | File Storage | Elastic File System (EFS) |
| | StorSimple | Storage Gateway |
| Analytics + Big Data | HDInsight (Hadoop) | Elastic MapReduce (EMR) |
| | Stream Analytics | Kinesis |
| | Machine Learning | Machine Learning |
| Data Orchestration | Data Factory | Data Pipeline |
| Media Services | Media Services | Elastic Transcoder |
| | Visual Studio Online | NA |
| | BizTalk Services | Simple Email Service (SES) |
| | Backup (Recovery Services) | Glacier |
| | CDN | CloudFront |
| Automation | Automation | OpsWorks |
| | Scheduler | CodeDeploy + CodePipeline |
| | Service Bus | Simple Workflow (SWF) |
| | Search | CloudSearch |
| Networking | Virtual Network | Virtual Private Cloud (VPC) |
| | ExpressRoute | Direct Connect |
| | Traffic Manager | Elastic Load Balancing |
| | NA | Route 53 (DNS) |
| Management Services | Resource Manager | CloudFormation |
| | NA | Trusted Advisor |
| Identity and Access Management | Active Directory | Directory Service |
| | NA | Identity and Access Management (IAM) |
| Marketplace | Marketplace | Marketplace |
| Container Support | Docker VM Extensions | EC2 Container Service |
| Compliance | Trust Centre | CloudHSM |
| Multi-factor Authentication | Multi-Factor Authentication | Multi-Factor Authentication |
| Monitoring Services | Operational Insights | Config |
| | Application Insights | CloudWatch |
| | Event Hubs | NA |
| | Notification Hubs | Simple Notification Service (SNS) |
| | Key Vault | Key Management Service (KMS) |
| Government | Government | GovCloud |
| Other Services | Web Jobs | Lambda |
| | NA | Service Catalog |
| Office 365 | Exchange Online | WorkMail |
| | SharePoint Online | WorkDocs |

For me this comparison is an exercise in quickly referencing the major services and features on each cloud platform. I hope you can use it whenever you need to quickly find the equivalent service on the other platform.


Thank you,

Lucian

Connecting Salesforce and SharePoint Online with Azure App Services

Back in November I wrote a post that demonstrated how to integrate Salesforce and SharePoint Online using the MuleSoft platform and the MuleSoft .NET Connector. In this post I hope to achieve the same thing using the Azure App Services offering, recently released into preview.

Azure App Services

Azure App Services rebrands a number of familiar service types (Azure Websites, Mobile Services, and BizTalk Services) and adds a few new ones to the platform.

azure_app_services

  • Web Apps – Essentially a rebranding of Azure Websites.
  • Mobile Apps – Built on the existing Azure Mobile Services, with some additional features such as better deployment and scalability options.
  • Logic Apps – A new service that allows you to visually compose process flows using a suite of API Apps, both from the Marketplace and custom-built.
  • API Apps – A special type of Web App that allows you to host and manage APIs to connect SaaS applications, connect on-premises applications or implement custom business logic. The Azure Marketplace provides a number of ready-built API Apps that you can deploy as APIs in your solution.

Microsoft have also published a number of Apps to the Azure Marketplace to provide ready-to-use functionality within each of these service types. A new Azure SDK has also been released that we can use to build and deploy our own custom App Services. Further details on the Azure App Service can be found on the Azure Documentation site here.

Scenario Walkthrough

In this post we will see how we can create a Logic App that composes a collection of API Apps to implement the same SaaS integration solution as we did in the earlier post. To recap, we had the following integration scenario:

  • Customers (Accounts) are entered into Salesforce.com by the Sales team.
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface needs to poll Salesforce for changes and create a new document library folder in SharePoint for this customer according to some business rules.
  • The business logic required to determine the target document library is based on the Salesforce Account type (Customer or Partner).

Azure Marketplace

As a first step, we should search the Azure Marketplace for available connectors that suit our requirements. A quick search yields some promising candidates…

Salesforce Connector – Published by Microsoft; supports Account entities and executing custom queries, and can be used as an action within Logic Apps. Looking good.

salesforce_connector

SharePoint Online Connector – Published by Microsoft and usable as an action or trigger in Logic Apps. Promising, but upon further inspection we find that it doesn’t support creating folders within a document library. Looks like we’ll need to create our own custom API App to perform this.

sharepoint_online_connector

Business Rules API – Again published by Microsoft and based on the BizTalk Business Rules Engine. It can be used as an action in Logic Apps, but it only supports XML-based facts, which (as we’ll see) doesn’t play well with the default messaging format used in Logic Apps (JSON). Looks like we’ll either need to introduce additional Apps to perform the conversion (JSON → XML and XML → JSON) or create a custom API App to perform our business rules as well.

biztalk_rules_api_app

So it appears we can only utilize one of the out-of-the-box connectors. We will need to roll up our sleeves and create at least two custom API Apps to implement our integration flow. As the offering matures and community contributions to the Marketplace are supported, hopefully we will spend less time developing services and more time composing them. But for now, let’s move on and set up the Azure App Services we will need.

Azure API Apps

As this is our first Azure App Service, we need to create a Resource Group and an Azure App Service Plan. Service plans allow us to apply and manage resource tiers for each of our apps; we can later modify the plan to scale resources up or down consistently across all the apps that share it.

We start by adding a new Logic App and creating a new Resource Group and Service Plan as follows:

create_logic_app

Navigate to the newly created Resource Group. You should see two new resources in your group: your Logic App and an API Gateway that was automatically created for the resource group.

new_resource_group

Tip: Pin the Resource Group to your Home screen (start board) for easy access as we switch back and forth between blades.

Next, add the Salesforce Connector API App from the Marketplace …

create_salesforce_connector

… and add it to our Resource Group using the same Service Plan as our Logic App. Ensure the Account entity is configured in the package settings; this is the entity in Salesforce we want to query.

sf_connector_package_config

Now we need to provision two API App Services to host our custom APIs. Let’s add an API App Service for our custom BusinessRulesService API first, ensuring we select our existing Resource Group and Service Plan.

create_rules_api

Then repeat for our custom SharePointOnlineConnector API App Service, again selecting our Resource Group and Service Plan. We should now see three API Apps added to our resource group.

resource_group_summary

Developing Custom API Apps

Currently, only the Salesforce Connector API has been deployed (as we created it from the Marketplace). We now need to develop our custom APIs and deploy them to the API App services we provisioned above.

You will need Visual Studio 2013 and the latest Azure SDK for .NET (2.5.1 or above) installed.

Business Rules Service

In Visual Studio, create a new ASP.NET Web Application for the custom BusinessRulesService and choose the Azure API App (Preview) project template.

vs_-_create_azure_api_app

Add a model to represent the SharePoint document library details we need our business rules to spit out:

    public class DocumentLibraryFolder
    {
        public string DocumentLibrary { get; set; }
        public string FolderName { get; set; }
    }

Add an ApiController that implements our business rules and returns an instance of our DocumentLibraryFolder class:

    public class BusinessRulesController : ApiController
    {

        [HttpGet]
        public DocumentLibraryFolder Get(string accountType, string accountName)
        {
            System.Diagnostics.Trace.TraceInformation("Enter: Get");

            DocumentLibraryFolder docLib = new DocumentLibraryFolder();

            try
            {
                // Check for customer accounts
                if (accountType.Contains("Customer"))
                    docLib.DocumentLibrary = "Customers";

                // Check for partner accounts
                if (accountType.Contains("Partner"))
                    docLib.DocumentLibrary = "Partners";

                // Set folder name
                docLib.FolderName = accountName;
            }
            catch (Exception ex)
            {
                System.Diagnostics.Trace.TraceError(ex.ToString());
            }

            return docLib;
        }
    }

With the implementation done, we should test that it works locally (how else can we claim “it works on my machine”, right?). The easiest way to test an API App is to enable the swagger UI and use its built-in test harness. Navigate to App_Start\SwaggerConfig.cs and uncomment the lines shown below.

enable_swagger_ui

Run your API App and navigate to /swagger.

business_rules_-_test_with_swagger_locally

Once we have confirmed it works, we need to deploy the API to the Azure API App service we provisioned above. Right-click the BusinessRulesService project in Solution Explorer and select Publish. Sign in using your Azure Service Administration credentials and select the target API App service from the drop-down list.

vs_api_publish

Click Publish to deploy the BusinessRulesService to Azure.

Tip: Once deployed, it is good practice to test your API works in Azure. You could enable public access and test using the swagger UI test harness as we did locally, or you could generate a test client app in Visual Studio. Using the swagger UI is quicker, as long as you remember to revoke access once testing has completed; we don’t want to grant access to this API outside our resource group.

Grant public (anonymous) access in the Application Settings section of our API App and test the deployed version using the URL found on the Summary blade of the API App.

api_access_level

business_rules_-_test_with_swagger_in_the_cloud
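If you prefer a command-line check over the swagger UI, a quick smoke test from PowerShell works just as well. This is only a sketch: the host name below is a placeholder for the URL shown on your API App’s Summary blade.

    # Placeholder host name - substitute the URL from your API App's Summary blade.
    $apiHost = "https://your-businessrules-apiapp.azurewebsites.net"

    # Call the BusinessRules_Get action with a sample Salesforce account type and name.
    $result = Invoke-RestMethod -Method Get `
        -Uri "$apiHost/api/BusinessRules?accountType=Customer%20-%20Direct&accountName=Contoso"

    # We expect DocumentLibrary = "Customers" and FolderName = "Contoso".
    $result | Format-List DocumentLibrary, FolderName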

Custom SharePoint Online Connector

Since the out-of-the-box connector in the Marketplace didn’t support creating folders in document libraries, we need to create our own custom API App to implement this functionality. Using the same steps as above, create a new ASP.NET Web Application named SharePointOnlineConnector and choose the Azure API App (Preview) project template.

Add the same DocumentLibraryFolder model we used in our BusinessRulesService, and an ApiController that connects to SharePoint and creates the folder in the specified document library:

    public class DocumentLibraryController : ApiController
    {
        #region Connection Details
        string url = "url to your sharepoint site";
        string username = "username";
        string password = "password";
        #endregion

        [HttpPost]
        public void Post([FromBody] DocumentLibraryFolder folder)
        {
            using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
            {
                try
                {
                    // Provide client credentials
                    System.Security.SecureString securePassword = new System.Security.SecureString();
                    foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
                    context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

                    // Get library
                    var web = context.Web;
                    var list = web.Lists.GetByTitle(folder.DocumentLibrary);
                    var root = list.RootFolder;
                    context.Load(root);
                    context.ExecuteQuery();

                    // Create folder
                    root.Folders.Add(folder.FolderName);
                    //root.Folders.Add(HttpUtility.HtmlEncode(folder.FolderName));
                    context.ExecuteQuery();

                }
                catch (Exception ex)
                {
                    System.Diagnostics.Debug.WriteLine(ex.ToString());
                }
            }
        }

    }

Deploy to our Resource Group selecting our SharePointOnlineConnector API App Service.

vs_api_publish

Grant public access and test the API is working in Azure using swagger UI once again.

spo_connector_-_test_with_swagger_in_the_cloud
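As with the BusinessRulesService, a hedged PowerShell sketch can stand in for the swagger UI test harness; again, the host name is a placeholder for your SharePointOnlineConnector API App URL.

    # Placeholder host name - substitute your SharePointOnlineConnector API App URL.
    $apiHost = "https://your-spoconnector-apiapp.azurewebsites.net"

    # Post a DocumentLibraryFolder payload matching the model the controller binds to.
    $folder = @{ DocumentLibrary = "Customers"; FolderName = "Contoso" } | ConvertTo-Json

    Invoke-RestMethod -Method Post -Uri "$apiHost/api/DocumentLibrary" `
        -ContentType "application/json" -Body $folder

    # Then check the Customers document library in SharePoint for a "Contoso" folder.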

Note: I did have some issues with the Microsoft.SharePoint.Client libraries. Be sure to use v16.0.0.0 of these libraries to avoid the System.IO.FileNotFoundException: msoidcliL.dll issue (thanks to Alexey Shcherbak for the fix).

Azure Logic App

With all our App Services deployed, let’s now focus on composing them into our logic flow within an Azure Logic App. Open our Logic App and navigate to the Triggers and Actions blade. From the toolbox on the right, drag the following Apps onto the designer:

  • Recurrence Trigger
  • Salesforce Connector
  • BusinessRulesService
  • SharePointOnlineConnector

logic_app_config

Note: Only API Apps and Connectors in your Resource Group (plus the Recurrence Trigger and HTTP Connector) will show up in the toolbox on the right-hand side.

Configure Recurrence trigger

  • Frequency: Minutes
  • Interval: 1

recurrence

Configure Salesforce Connector API

First we must authorise our Logic App to access our SFDC service domain. Click Authorize and sign in using your SFDC developer credentials. Configure the Execute Query action to perform a select using the following SOQL statement:

SELECT Id, Name, Type, LastModifiedDate FROM Account WHERE LastModifiedDate > YESTERDAY LIMIT 10

salesforce

The output of the Salesforce Connector API will be JSON, the default messaging format used in Logic Apps. The structure of the data will look something like this:

{
	"totalSize": 10,
	"done": true,
	"records": [{
		"attributes": {
			"type": "Account",
			"url": "/services/data/v32.0/sobjects/Account/00128000002l9m6AAA"
		},
		"Id": "00128000002l9m6AAA",
		"Name": "GenePoint",
		"Type": "Customer - Channel",
		"LastModifiedDate": "2015-03-20T22:45:13+00:00"
	},
	{
		"attributes": {
			"type": "Account",
			"url": "/services/data/v32.0/sobjects/Account/00128000002l9m7AAA"
		},
		"Id": "00128000002l9m7AAA",
		"Name": "United Oil & Gas, UK",
		"Type": "Customer - Direct",
		"LastModifiedDate": "2015-03-20T22:45:13+00:00"
	},
	... repeats ...
    ]
}

Notice the repeating “records” section. We’ll need to make downstream APIs aware of these repeating items so they can be invoked once for every item.

Configure Business Rules API

Setup a repeating item so that our Business Rules API gets called once for every account the Salesforce Connector outputs in the response body.

  • Click on the Settings icon and select Repeat over a list
  • Set Repeat to @body('salesforceconnector').result.records

Note: Here @body('salesforceconnector') references the body of the response (or output) of the API call. "result.records" references the elements within the JSON response structure, where "records" is the repeating collection we want to pass to the next API in the flow.

Configure the call to the BusinessRules_Get action, passing the Type and Name fields of the repeated item:

  • Set accountType to @repeatItem().Type
  • Set accountName to @repeatItem().Name

business_rules

The output of the BusinessRulesService will be a repeating collection containing both inputs and outputs (discovered after much trial and error; exception details are pretty thin, as with most preview releases):

{
	"repeatItems": [{
		"inputs": {
			"host": {
				"gateway": "https://blogdemoresgroupxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.azurewebsites.net/",
				"id": "/subscriptions/72608e17-c89f-4822-8726-d15540e3b89c/resourcegroups/blogdemoresgroup/providers/Microsoft.AppService/apiapps/businessrulesservice"
			},
			"operation": "BusinessRules_Get",
			"parameters": {
				"accountType": "Customer - Channel",
				"accountName": "GenePoint"
			},
			"apiVersion": "2015-01-14",
			"authentication": {
				"scheme": "Zumo",
				"type": "Raw"
			}
		},
		"outputs": {
			"headers": {
				"pragma": "no-cache,no-cache",
				"x-ms-proxy-outgoing-newurl": "https://microsoft-apiapp7816bc6c4ee7452687f2fa9f58cce316.azurewebsites.net/api/BusinessRules?accountType=Customer+-+Channel&accountName=GenePoint",
				"cache-Control": "no-cache",
				"set-Cookie": "ARRAffinity=451155c6c25a46b4af4ca2b73a70e702860aefb1d0efa48497d93db09e8a6ca1;Path=/;Domain=blogdemoresgroupxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.azurewebsites.net,ARRAffinity=451155c6c25a46b4af4ca2b73a70e702860aefb1d0efa48497d93db09e8a6ca1;Path=/;Domain=blogdemoresgroupxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.azurewebsites.net",
				"server": "Microsoft-IIS/8.0",
				"x-AspNet-Version": "4.0.30319",
				"x-Powered-By": "ASP.NET,ASP.NET",
				"date": "Sun, 19 Apr 2015 12:42:18 GMT"
			},
			"body": {
				"DocumentLibrary": "Customers",
				"FolderName": "GenePoint"
			}
		},
		"startTime": "2015-04-19T12:42:18.9797299Z",
		"endTime": "2015-04-19T12:42:20.0306243Z",
		"trackingId": "9c767bc2-150d-463a-9bae-26990c48835a",
		"code": "OK",
		"status": "Succeeded"
	}]
}

We will again need to define the appropriate repeating collection to present to the next API. In this case it will be the “outputs.body” element of the repeatItems collection.

Configure SharePointOnline Connector API

Setup a repeating item so that our SharePointOnline API gets called once for every item in the repeatItems collection.

  • Click on the Settings icon and select Repeat over a list
  • Set Repeat to @actions('businessrulesservice').outputs.repeatItems

Configure the call to the DocumentLibrary_POST action, setting the following parameters:

  • Set DocumentLibrary to @repeatItem().outputs.body.DocumentLibrary
  • Set FolderName to @repeatItem().outputs.body.FolderName

spo_connector

Save the Logic App and verify no errors are displayed. Close the Triggers and Actions blade so we return to our Logic App Summary blade.

Testing Our Solution

Ensure our Logic App is enabled and verify it is being invoked every minute by the Recurrence trigger.

logic_app_operations

Open a browser and navigate to your Salesforce Developer Account. Modify a number of Accounts ensuring we have a mix of Customer and Partner Account types.

sfdc_accounts

Open a browser and navigate to your SharePoint Online Developer Account. Verify that folders for those modified accounts appear in the correct document libraries.

spo_doclibs_updated

Conclusion

In this post we have seen how to compose logic flows from a suite of API Apps, pulled together from a mix of Azure Marketplace connectors and custom APIs, into a single integrated solution connecting disparate SaaS applications.

However, it is early days for Azure App Services, and I struggled with its v1.0 limitations and the poor IDE experience within the Azure Preview Portal. I would like to see a Logic App designer in Visual Studio, the addition of flow control, and expansion of the expression language to support more complex data types (perhaps even custom .NET classes). I’m sure as the offering matures and community contributions to the Marketplace are enabled, we will spend less time developing services and more time composing them, hopefully with a much better user experience.

SCCM 2012 R2 membership rules for mobile devices associated through Intune

Originally posted on Lucian’s blog at clouduccino.com

Microsoft System Centre Configuration Manager (SCCM) 2012 R2, when extended with Microsoft Intune, is a powerful EMS/MDM platform. I’ve recently implemented an integration between System Centre 2012 R2 on-prem and the Intune cloud service to deliver a hybrid EMS solution for a client.

To allow for a seamless user registration and provisioning process through the Company Portal app, devices should auto-enrol into the appropriate device collection, which then automatically applies policies to the mobile or external device. To achieve this in SCCM 2012 R2, you need to set up membership rules.

Below are a series of membership rules for various devices and platforms:

Collection of Windows Phone 8 devices

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_OSINFORMATION on SMS_G_System_DEVICE_OSINFORMATION.ResourceID = SMS_R_System.ResourceId where SMS_G_System_DEVICE_OSINFORMATION.Platform like "Windows Phone" and SMS_G_System_DEVICE_OSINFORMATION.Version like "8.0%"

Collection of Windows Phone 8.1 devices

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_OSINFORMATION on SMS_G_System_DEVICE_OSINFORMATION.ResourceID = SMS_R_System.ResourceId where SMS_G_System_DEVICE_OSINFORMATION.Platform like "Windows Phone" and SMS_G_System_DEVICE_OSINFORMATION.Version like "8.1%"

Collection of all Windows Phone devices

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_OSINFORMATION on SMS_G_System_DEVICE_OSINFORMATION.ResourceID = SMS_R_System.ResourceId where SMS_G_System_DEVICE_OSINFORMATION.Platform like "Windows Phone"

Collection of Windows RT devices

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_COMPUTER_SYSTEM on SMS_G_System_COMPUTER_SYSTEM.ResourceId = SMS_R_System.ResourceId where SMS_G_System_COMPUTER_SYSTEM.Model like "Surface%"

Collection of iPhones only

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_COMPUTERSYSTEM on SMS_G_System_DEVICE_COMPUTERSYSTEM.ResourceId = SMS_R_System.ResourceId where SMS_G_System_DEVICE_COMPUTERSYSTEM.DeviceModel like "%iphone%"

Collection of iPads only

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_COMPUTERSYSTEM on SMS_G_System_DEVICE_COMPUTERSYSTEM.ResourceId = SMS_R_System.ResourceId where SMS_G_System_DEVICE_COMPUTERSYSTEM.DeviceModel like "%ipad%"

Collection of all iOS devices

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_OSINFORMATION on SMS_G_System_DEVICE_OSINFORMATION.ResourceId = SMS_R_System.ResourceId where SMS_G_System_DEVICE_OSINFORMATION.Platform like "iOS"

Collection of all Android devices

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System inner join SMS_G_System_DEVICE_OSINFORMATION on SMS_G_System_DEVICE_OSINFORMATION.ResourceID = SMS_R_System.ResourceId where SMS_G_System_DEVICE_OSINFORMATION.Platform like "Android%"
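If you’d rather script the collections than click through the console, the ConfigMgr PowerShell module can create them with these queries as membership rules. A minimal sketch, assuming a Configuration Manager console PowerShell prompt connected to your site drive (the collection, limiting collection and rule names are examples only):

    # Run from a Configuration Manager console PowerShell prompt (site drive loaded).
    # The WQL below is the all-iOS query from above, used with an example collection name.
    $wql = 'select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, ' +
           'SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client ' +
           'from SMS_R_System inner join SMS_G_System_DEVICE_OSINFORMATION on ' +
           'SMS_G_System_DEVICE_OSINFORMATION.ResourceId = SMS_R_System.ResourceId ' +
           'where SMS_G_System_DEVICE_OSINFORMATION.Platform like "iOS"'

    # Create the device collection, then attach the query as a membership rule.
    New-CMDeviceCollection -Name "All iOS Devices" -LimitingCollectionName "All Mobile Devices"
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName "All iOS Devices" `
        -RuleName "iOS platform match" -QueryExpression $wql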

Thanks, Lucian

Originally posted on Lucian’s blog at clouduccino.com

The SharePoint Governance Puzzle

If you have been using or managing SharePoint, you will know how quickly content and customisations, if not properly governed, can grow out of control. Today most organisations deploying SharePoint have realised this and have SharePoint Governance Plans or Policies defined for their deployments. A Governance Plan can be a brief document focusing on what users can and cannot do, or a detailed framework governing every part of SharePoint from custom development to usage and maintenance.

However, a large number of organisations are still struggling with effective governance despite having a well-defined plan. Often an organisation’s focus is on defining a detailed and comprehensive governance plan while overlooking the process of actually implementing it – it is assumed that it is up to IT support to ensure the processes and policies defined in the plan are enforced and implemented. A good number of organisations take a step further and create a SharePoint Governance Group, yet the group itself often struggles to enforce the standards and policies, especially in the area of day-to-day operations.

So let’s first take a look at the key implementation challenges and then what can be done to address them.

Implementation Challenges

A. The implementation deserves respect, treat it just like another project!

One of the reasons the implementation doesn’t get due attention is the absence of tangible outcomes. Implementing a plan and its associated policies is not like delivering a system, and it is not straightforward to measure the outcomes and see the associated benefits immediately. As a result, interest in rolling out the governance plan wanes soon after its creation, and the governance documents are filed and forgotten somewhere on the same SharePoint site that is supposed to be being governed!

B. Yes, it does require (a little bit of) effort and resources!

Another challenge is the availability and allocation of the people and time required to enforce the plan with consistency. This is further complicated when we recall the fundamental principle behind SharePoint’s existence: user empowerment. Uncontrolled SharePoint growth can arise where the balance between end-user power and IT control is not right for the environment. In other words, governing SharePoint is a lot more demanding than managing, say, your Active Directory, and it is difficult for IT to enforce the policies without hindering user productivity (and annoying users).

C. If they don’t know it then they won’t do it!

Finally, enforcing governance does not merely require flicking some switches. The first step in implementing any set of governance policies is communicating them to users and support staff. They often do not have the time or interest to read lengthy governance documents, so you can’t just send the documents out and expect them to be consumed, understood and followed! Being human (and SharePoint users), they do not like tightly controlled environments either, and need to be convinced that the controls you are putting in place will eventually work for them. An unhappy, ill-informed user may instead see the governance policies as a roadblock to being productive.

The secret behind a successful governance plan implementation

Well, there is no secret here really. In summary, it requires IT management taking the implementation seriously (just like any other project) and seeing the benefits; effective user training and communication; and investment in automation to assist IT teams in enforcing the governance policies.

User Communication and Training

We know very well that no one loves reading lengthy documents. Furthermore, reading text is one thing and actually understanding its meaning is another. So post-creation, the next challenge is to communicate the plan to users (end users and support staff) while ensuring it is not too difficult for them to understand and remember all the key points.

One technique that can help here is creating summarised posters or cheat sheets for specific groups of users, such as SharePoint site members, owners and administrators. You can combine the information with any visualisation techniques you feel may be effective, like charts, tables and even comics. See a couple of examples below.

SharePoint Site Owners Poster

It is also quite helpful (I would rather say necessary) to integrate the governance plan ‘knowhow’ into HR and IT processes like employee induction, role change and termination procedures. This will ensure things happen when they are required to.

The next big thing is … Automation

To help enforce governance policies without creating a lot of overhead for IT support and end users, you can also get assistance from SharePoint governance tools. There are quite a few third-party tools available for this purpose. These tools can provide:

  • finer control in managing the SharePoint environment;
  • powerful growth forecasting and management;
  • comprehensive and centralized security control and permission monitoring;
  • provisioning workflows;
  • critical alerts and notifications etc.

We can divide the tools into two categories – strategic and tactical – based on product feature set, associated costs and the resources required to implement them.

Tactical Tools

Tactical tools are low-cost options that enable support teams to analyse and understand their SharePoint implementation and assess its health with reference to the governance plan and policies.

1. SharePoint Documentation Kit (SPDocKit)

A Windows application that can run standalone or be installed on the server. SPDocKit:

  • generates detailed farm documentation on the target SharePoint environment
  • compares SharePoint farm configuration against best practices
  • can take a snapshot of the environment and allows support staff to compare it with other SharePoint farms, or with the same farm at a different point in time

2. The SharePoint Diagram Tool

This is a quite handy tool that lets you check whether your SharePoint sites are experiencing uncontrolled mushroom growth or flourishing like a beautiful, well-maintained garden. It reverse-engineers your site structure and lets you visualise it as a tree. The tool can generate its output in multiple formats, and you can then use Excel, a browser or Visual Studio to render the site structure. Regular generation and analysis of site structure diagrams can be added to IT support processes to detect major violations of site-structure-related governance policies.

The following screenshot shows a real-world example: a complex site tree containing 600+ sites.

SharePoint Site Tree

Strategic Tools

1. ControlPoint by Metalogix:

Metalogix offers a suite of tools for migrating and managing SharePoint content, including ControlPoint. ControlPoint is a well-refined product with an extensive feature set, covering key components such as provisioning workflows, content growth monitoring, user action reporting and site stats, and governance policy and permission management.

2. AvePoint Docave Governance Automation:

A solid product for SharePoint content migration and automating governance workflows. It can be seen as a strategic investment, i.e. having AvePoint as the provider of some key SharePoint add-ons; they offer a range of SharePoint products for both SharePoint Online and on-premises. However, the governance module mainly focuses on provisioning and deletion workflows for SharePoint sites and site objects; you will need to buy additional modules for end-to-end monitoring and reporting purposes.

3. Sharegate – SharePoint migration and Management Tool:

An easy-to-use product offering both content migration and governance tools in one package. It allows SharePoint admins to manage security settings, monitor environment growth, get rid of unused and obsolete content, and ensure that SharePoint meets organisational standards. However, if you do not have SharePoint content migration needs, you cannot buy the governance module as a standalone product.

4. Acceleratio Governance Kit for Office 365:

This is a low-cost, cloud-based Office 365 governance tool (there is no version for SharePoint on-premises). Acceleratio also makes SharePoint Documentation Kit (described in the ‘Tactical Tools’ section above), which can help with reporting and site structure analysis. With the governance kit, administrators can set up and configure SharePoint Online rules; the rules can then be applied to a specific site, list or library, and detailed reports can be generated.

Conclusion

I conclude by emphasising that, first and foremost, the implementation of a SharePoint governance plan should be treated like a project. This approach ensures that SharePoint governance policies do not just stay within the boundaries of a document but come into action and reward everyone. Finally, a picture is worth a thousand words: the following diagram concludes this article.

The SharePoint Puzzle

This article was originally published on my own blog at The SharePoint Governance Puzzle

EnableSkypeUI Where Art Thou?

Are you missing the EnableSkypeUI attribute from the Lync Management Shell in Lync Server 2013?

If you have deployed a Skype for Business (S4B) client into your Lync Server 2013 environment you may see the following error upon login:

Skype warning on Start

There are plenty of articles about how to switch the client via Office 365 PowerShell, Lync Management Shell or the registry, but if you’re scratching your head over how to get the new EnableSkypeUI parameter on CsClientPolicy, here are some steps that may have been missed.

The new client software, which supports both the Lync 2013 and S4B user interface (UI) modes, does a quick check of the server version you are running and then remediates the UI to match, unless the registrar server responds with the following attribute configured in the user’s CsClientPolicy:

EnableSkypeUI – true/false

  • True – Enables S4B UI
  • False – Enables Lync UI

If you don’t find the EnableSkypeUI attribute in Get-CsClientPolicy, you will need to patch your servers with the latest cumulative update. The trick is that if you’re automatically applying CU updates to Lync Server, you must make sure you also manually patch the database. It is possible you have forgotten that the database needs an update, and it is always a good idea to revisit this. We must do this to ensure that not only the Front End Servers are up to date, but also that the back-end SQL database schemas have been updated to include the new setting(s).

Get the latest patches for the servers:

https://support.microsoft.com/en-au/kb/2809243

Run the installer on each Front End:

LyncServerUpdateInstaller.exe

Apply all the patches.

Apply the back-end database updates (SE Pool shown below):

Install-CsDatabase -ConfiguredDatabases -SqlServerFqdn <FQDN> -Verbose

Run Lync Management Shell:

Get-CsClientPolicy | select Enable* 

EnableSkypeUI should now be visible, but will be set to $null.
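From here you can flip the UI via policy. A hedged example (the policy name and SIP address are mine; trial it on a pilot user before touching the Global policy):

    # Create a client policy that enables the Skype for Business UI.
    New-CsClientPolicy -Identity "EnableSkypeUI" -EnableSkypeUI $true

    # Grant it to a pilot user first rather than changing the Global policy outright.
    Grant-CsClientPolicy -Identity "sip:pilot.user@yourdomain.com" -PolicyName "EnableSkypeUI"

    # Once happy, enable it globally:
    # Set-CsClientPolicy -Identity Global -EnableSkypeUI $true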

The behaviour of the change when set to $true is relatively smooth; the Lync client will show a message similar to the screen grab at the start of this post:

Restart Lync to see new Skype UI

The toolbar icon will also represent the current client mode:

Old Lync Icon on Windows Taskbar

The client will completely exit and restart in Skype for Business mode:

Skype for Business Splash Screen

The relevant taskbar icon will also be updated for continuity:

Skype Icon on Windows Taskbar

How to implement Multi-Factor Authentication in Office 365 via ADFS, Part 5, the finale!

Originally posted on Lucian’s blog at clouduccino.com.

I know what you’re thinking: does Lucian really have to create another part in this long MFA series? In short, probably not, but I’ll have saved your index finger the thousands of years of scrolling you would have done to read the entire brain dump in a one-page post.

So, to explain this ‘epilogue’ on MFA, if you will: using X.509 (SSL) certificates for your second factor of authentication is a powerful means to automate and manage the process for your mobile and external users. This blog post will explain how to leverage an on-prem Microsoft System Centre Configuration Manager (SCCM) 2012 R2 deployment linked to Microsoft Intune to deliver certificates to mobile and external devices for use in MFA.

Read More

How to implement Multi-Factor Authentication in Office 365 via ADFS – Part 4

Originally posted on Lucian’s blog at clouduccino.com.

The final installment in a long series that’s taken me a lot longer to get around to writing than I had initially thought. However, I hope it’s worth the wait and that the proven solution works well for you. Before I get into the technical aspects of the final piece of this MFA implementation puzzle, I’d like to give a quick shout out to all the awesome consultants at Kloud Solutions who helped both with the technical implementation and with the initial design work required to see this solution through – a big thank you!

In the previous blog post I went through what an internal configuration of MFA looks like, with everything ready for the ADAL component, which was previously under NDA and available only in preview but is now generally available for testing. So let me quickly delve into that ADAL in Office 2013 and Office 365 component before an in-depth guide on how to utilise Microsoft Intune and System Centre Configuration Manager as a means to deliver SSL certificates to users and use those certificates as your second factor of authentication! It’s exciting, as it’s been a long build-up to get to this point, with several moments where I questioned whether this would work in the real world. Let’s start.

Read More

Are you a Kloudie?

Every time I finish an interview, the existing Kloudie team ask me – “So, are they our next Kloudie?”

It got me thinking – who is a Kloudie? I am sure many of you who read this blog regularly already know that we have a technically skilled team working on some pretty awesome bleeding-edge projects, so I have been asking myself recently what makes these people ‘Kloudies’ rather than just more consultants.

Kloudie – a definition.
n. A technical guru who is a member of the most awesome cloud solutions consulting team operating in Australia.

Kloudies come in all different shapes and sizes, but there are constants that cross technical niches, state lines and business units. All Kloudies are technically brilliant, have a truly collaborative approach and the ability to consistently devise new and innovative solutions to ensure our customers make their move to the cloud successfully. Our innovation ensures that we are providing our customers with the best service possible.

Technology aside, Kloudies are collaborative athletes. One of the reasons our people are so successful in the field is their ability to ask questions of the greater team and get real-time responses from their peers and colleagues. Collaboration is not just an ‘ask’; it’s a given at Kloud and a natural part of being a Kloudie.

Culture is a huge part of being a Kloudie. Even as we grow and develop, Kloudies offer support like a family. Kloudies are not just a line on a spreadsheet. We know that most of our consultants are driven and motivated, and that they have just as many drivers externally. Our Kloudies excel outside of the technical arena as biathletes, triathletes, charity crusaders, trivia masters, exhibiting artists, salsa dance aficionados, cyclists and Xbox champions. We know how important it is to have a balance, and encourage our guys to get involved and support each other.

Passion for what we do and the desire to be the best born-in-the-cloud organisation is our mission statement, but it isn’t possible without the passion and drive of our consultants.

For me – I can honestly say, that each of our consultants carries this passion, drive and motivation. They love working for the best cloud services provider in Australia and we love supporting them in their roles so they can achieve what they set out to do.

So, do you want to be a Kloudie?

Connection Options When Building An Azure Hybrid Cloud Solution

If your business is migrating workloads to Azure, the chances are that at some point you will want to create a form of private interconnect with Azure. There is more than one way to achieve this, so in this post I’ll take a look at the options you have and the most appropriate scenarios for each.

We’ll work through the connection types from simplest (and quickest to provision) to more complex (where you’ll need IP networking expertise and hardware).

Hybrid Connection

This is your baseline interconnect option and is tied to the BizTalk Services offering within Azure. At the time of writing, the only Azure-based services that can leverage Hybrid Connections are Web Apps (formerly Websites) and Mobile Apps (formerly Mobile Services).

Hybrid Connections are a great way to quickly get access to on-premises resources without the complexity involved in firewall or VPN setups. If you look at the official documentation you’ll see there is no mention of firewall rules or VPN setup!

Your on-premises resources must be running on Windows Server 2008 R2 or above in order to leverage this service offering, which at its most restricted can work over standard HTTP(S) ports and nothing more.

  • Benefits:
    • Quick to setup (requires no changes on-prem)
    • Typically “just works” with most corporate network edge configurations
    • Doesn’t require a Virtual Network to be configured in Azure
    • Can be shared between multiple Web and Mobile Apps
    • Good for exposing single on-prem resources to consumers in Azure (i.e. DB on-prem / web in Azure).
  • Drawbacks:
    • Your security team may be unhappy with you :)
    • Performance may not meet your needs beyond simple use cases
    • TCP services requiring dynamic ports aren’t supported (think FTP)
    • Tied to BizTalk Services and utilises a range of other Azure services such as ACS and Azure SQL Database
    • In Preview (no SLA) and not available in all Azure Regions
    • Limited use cases in Azure (Web and Mobile Apps).

Point-to-Site VPN

The next step up from Hybrid Connections is the Point-to-Site (P2S) VPN connection. These connections use a downloadable client to provide an SSTP VPN between a single on-premises (or Azure-based) resource and a Virtual Network (VNet) in Azure.

This VPN setup is a good way to test out simple-to-medium complexity hybrid scenarios or proof-of-concepts without the need for dedicated VPN hardware in your corporate environment.

When setting up a P2S VPN there are a few items you need in order to succeed:

  • the IP address range that will be used for clients when they connect
  • a subnet defined on your VNet for the Azure Gateway that will host VPN connections
  • a running Gateway instance that will allow your VPN clients to connect
  • a valid x509 certificate that will be used by the VPN client.

As you can see, there are quite a few extra steps involved beyond the Hybrid Connection! You can run up to 128 on-prem clients connected to an Azure VNet if needed.

  • Benefits:
    • Does not require dedicated on-premises networking hardware
    • SSTP can usually connect OK with most corporate network edge configurations
    • Can co-exist with Site-to-Site connections
    • Allows you expose services on a single on-prem resource to an entire Azure VNet.
  • Drawbacks:
    • You’ll need to understand IP networking to setup a VNet in Azure
    • Performance is still relatively limited due to the nature of SSTP
    • Isn’t an ‘always on’ proposition as it requires an interactive user session on-prem to run the client
    • Only supports connection from a single on-prem resource running on Windows
    • You’ll need an x509 certificate for use with the VPN client.

Site-to-Site VPN

Now we start to get serious!

If you want to run a Site-to-Site (S2S) connection you will need dedicated VPN hardware, or Windows Server 2012 (or above) running RRAS on-prem, and some private IP address space for your Azure environment that doesn’t overlap with the on-premises network you’ll be connecting to.

This option is the first to offer a true hybrid environment where two networks connect via the VPN. This is often the first step we see enterprises take when adopting Azure, as it is relatively quick to stand up and most customers typically already have the necessary devices (or ones that meet Azure’s VPN requirements).

When you set up your Gateway in Azure, the platform will even handily provide you with a configuration script/template for whichever on-prem device you’ve selected.
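If you prefer scripting the Azure side over the portal, the classic Azure PowerShell module can drive the setup too. A rough sketch, assuming you have already prepared a network configuration file defining the VNet and your on-prem local network site (the names below are examples only):

    # Apply a network configuration defining the VNet and the on-prem local network site.
    Set-AzureVNetConfig -ConfigurationPath "C:\config\NetworkConfig.netcfg"

    # Create a dynamic routing gateway for the VNet (provisioning takes a while).
    New-AzureVNetGateway -VNetName "HybridVNet" -GatewayType DynamicRouting

    # Retrieve the IPsec pre-shared key to configure on the on-prem VPN device.
    Get-AzureVNetGatewayKey -VNetName "HybridVNet" -LocalNetworkSiteName "HeadOffice"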

  • Benefits:
    • Provides full network-to-network connectivity
    • Supports a growing number of standard VPN appliances
    • Foundation of support for multi-site connectivity
    • Can use Windows Server 2012 RRAS if you don’t have an appliance.
  • Drawbacks:
    • Maximum throughput of 100 Mbps
    • Doesn’t support redundant single site to single VNet connections.

Be aware: Forced Tunnelling

Before we move on to the next Azure connection type we need to talk about Forced Tunnelling. The current generation Azure VNet has a default route for all public Internet traffic, which goes out over Azure’s managed Internet infrastructure (it’s just there; you can’t manage it or turn it off). On some other public cloud platforms you can disable public Internet traffic by not adding an Internet Gateway – on Azure that option isn’t currently available.

In order to mitigate some challenges around controlling the path public traffic takes from an Azure VNet, Microsoft introduced Forced Tunnelling, which can be used to force traffic bound for the Internet back over your VPN and into your on-prem environment.

You must plan your subnets appropriately and only apply Forced Tunnelling to those where required. This is especially important if you will consume any of Azure’s PaaS offerings other than Web or Worker Roles, which can be added to an Azure VNet.

Almost all of Azure’s PaaS services (even Blob Storage) are exposed as secured public Internet endpoints, which means any call to them from a VNet configured for Forced Tunnelling will head back to your on-prem network and out your own Internet Gateway. Performance will take a hit, and you will pay data egress charges on those calls, which will appear to originate from your on-prem Internet Gateway.
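For reference, at the time of writing Forced Tunnelling is configured through user-defined routes in PowerShell, roughly along these lines (the route table, VNet, subnet and site names are examples, and the cmdlet surface may change while the feature matures):

    # Create a route table and add a default route sending 0.0.0.0/0 to the VPN gateway.
    New-AzureRouteTable -Name "ForcedTunnelRT" -Location "Australia East" `
        -Label "Forces Internet-bound traffic back on-premises"
    Get-AzureRouteTable -Name "ForcedTunnelRT" |
        Set-AzureRoute -RouteName "DefaultRoute" -AddressPrefix "0.0.0.0/0" -NextHopType VPNGateway

    # Apply the route table only to the subnets that need it, then nominate the default site.
    Set-AzureSubnetRouteTable -VirtualNetworkName "HybridVNet" -SubnetName "Midtier" -RouteTableName "ForcedTunnelRT"
    Set-AzureVNetGatewayDefaultSite -VNetName "HybridVNet" -DefaultSite "HeadOffice"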

ExpressRoute

The grandpappy of them all – and the one that requires the most planning and commitment. If you find yourself starting your hybrid journey here, then either you have an existing investment in an MPLS WAN or you’re already co-located in an exchange that provides ExpressRoute services.

There are two connection options for ExpressRoute:

  • Network Service Provider (NSP): utilises a new or existing MPLS WAN cross-connect into one or more Azure Regions. Speeds of 10 Mbps to 1 Gbps are supported.
  • Exchange Provider (IXP): uses a new paired cross-connect in a data centre location where the IXP’s and Microsoft’s routers are co-located. Speeds of 200 Mbps to 10 Gbps are supported.

The officially supported list of NSPs and IXPs is pretty small, but you can quite often work with your existing provider to get a connection into an IXP, or leverage offerings such as Equinix’s Cloud Exchange as a shortcut (for example, in Sydney 130+ network service providers provide services into Equinix).

Once you’re operating at this level you will definitely need your organisation’s networking team involved, as you’ll be doing heavy lifting that requires solid knowledge of IP networking and specifically BGP.

  • Benefits:
    • A single ExpressRoute circuit can connect to multiple Azure VNets (up to 10) and across multiple Azure Subscriptions (also up to 10)
    • Redundant connection by default (a pair is provided when you connect)
    • Two peers provided: one for Azure public services and one for your private services. You can choose to not use either peer if you wish
    • Can support bursting to higher bandwidth levels (provider depending)
    • Offers an SLA for availability.
  • Drawbacks:
    • Requires that you have a relationship with an NSP or IXP in addition to Azure.
    • NSP bandwidth maximum is 1 Gbps
    • Maximum 4,000 route prefixes for BGP on all peers on a connection.

If you’re unsure how to get started here, but you have an existing WAN or co-location facility it may be worth talking to them about how to get a connection into Azure.
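For what it’s worth, once the provider relationship exists, the circuit itself can be provisioned from the Azure ExpressRoute PowerShell module. A hedged sketch (the provider, bandwidth and names are examples only):

    # Requires the Azure ExpressRoute PowerShell module.
    # Create the circuit with your chosen provider and note the ServiceKey it returns.
    New-AzureDedicatedCircuit -CircuitName "MyCircuit" -ServiceProviderName "Equinix" `
        -Bandwidth 200 -Location "Sydney"

    # Hand the ServiceKey to the provider to complete provisioning, then link your VNet(s).
    New-AzureDedicatedCircuitLink -ServiceKey "<service key>" -VNetName "HybridVNet"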

Be Aware: Default Routes and Public Peering

This topic falls under the same category as the earlier section on Forced Tunnelling for S2S VPNs.

When using ExpressRoute you can use BGP to advertise the default route for all your Azure VNets back over your ExpressRoute connection to your on-prem environment. Unlike the VPN scenario, though, where all Azure PaaS traffic routes back over your on-prem Internet gateway, with ExpressRoute’s peering you can use the public peer as the shortcut back to Azure.

While this is a better option than you get with a VPN, it still means you are pushing Azure calls back to your ExpressRoute gateway, so you will potentially see a performance hit, and the traffic will be included in your usage if you are on an IXP connection.

Conclusion

So there we have it – a quick rundown of the techniques at your disposal when looking to create a private hybrid network environment that connects your existing locations with Azure.

HTH.

Hybrid Exchange Connectivity with Azure Traffic Manager

Does your Exchange hybrid architecture need redundancy? How about an active/passive solution using Azure Traffic Manager, eliminating the need for a hardware load balancer (HLB) in your DMZ?

Currently there are a few topologies for configuring Hybrid Exchange with Office 365:

  1. Single Hybrid Server
  2. 2+ Hybrid Servers behind a load balancer
  3. 2+ Hybrid Servers with DNS round robin

A simple way to build a redundant Hybrid Exchange design without using an HLB is to leverage Azure Traffic Manager to monitor and serve the DNS namespace configured in the on-premises Exchange and Office 365 configuration.

Traffic Manager offers:

  • Operation at the DNS level, routing traffic between one or more hybrid public endpoints that sit behind a common DNS service name
  • Traffic management policy profiles such as failover and performance metrics, rather than plain DNS round robin and TTL.

In my scenario we will make the primary Hybrid Server the responding IP until the event of a failure. We do this by pro-actively probing a health check page on a web services virtual directory on each Hybrid Server, where NTLM challenge authentication is not required for the anonymous probe. If we think about all the services exposed across the Exchange virtual directories, Outlook Web Access with forms-based authentication is the standout: OWA has a health check page that load balancers can use to check for service outages and degradation.

“/owa/healthcheck.htm”

If this page becomes unresponsive, Traffic Manager will start responding with the secondary server’s endpoint IP until service resumes. We can take comfort in knowing that if the OWA page is unresponsive, the Exchange Web Services (EWS) virtual directory probably is as well.

New-AzureTrafficManagerProfile -Name "HybridExchange" -DomainName "KloudExHybrid.trafficmanager.net" `
    -LoadBalancingMethod "Failover" -Ttl 30 -MonitorProtocol "Https" -MonitorPort 443 `
    -MonitorRelativePath "/owa/healthcheck.htm" |
    Add-AzureTrafficManagerEndpoint -DomainName "hybrid1.kloud.com.au" -Status "Enabled" -Type "Any" |
    Add-AzureTrafficManagerEndpoint -DomainName "hybrid2.kloud.com.au" -Status "Enabled" -Type "Any" |
    Set-AzureTrafficManagerProfile

Create a CNAME pointing the hybrid namespace configured with Office 365 to KloudExHybrid.trafficmanager.net. For the example outlined in this blog, I would create a CNAME in my public DNS zone for kloud.com.au mapping ‘exhybrid.kloud.com.au’ to ‘kloudexhybrid.trafficmanager.net’.

Exchange_Hybrid_TrafficManager
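You can confirm which Hybrid Server the profile is currently handing out with a quick DNS query, for example:

    # Resolve the hybrid namespace. Traffic Manager should answer with the primary
    # endpoint (hybrid1) while it is healthy, and the secondary (hybrid2) after a failover.
    Resolve-DnsName -Name "exhybrid.kloud.com.au" -Type CNAME

    # Keep the profile TTL (30 seconds above) in mind when testing failover timings.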

 

In this solution you know that the primary Exchange Hybrid Server will always respond to Office 365 web services requests until the event of a failure. This includes free/busy lookups and also mailbox migrations. The benefit of using an active/passive hybrid namespace rather than an HA pair is that the throughput of mailbox migration is much higher, as EWS requests maintain a persistent connection with the originating Hybrid Server and no load balancer is intercepting or redirecting the traffic. We also have control over the failover/failback timings, versus true DNS load balancing with round robin, which could take longer to resolve.

This is a simple solution that uses a cloud service to eliminate the need for any fancy layer 7 load balancing while improving on the simplicity of DNS round robin.

Hot Tip: If you have a Database Availability Group (DAG) on-premises with mailboxes being migrated to Exchange Online, it is beneficial for your active Hybrid Server to be closer to the mailbox server holding the active copy, to reduce mailbox-move traffic on your local network links during synchronisation.