ADFS Metadata Conversion for Shibboleth

I recently blogged about the issues of integrating Shibboleth Service Providers with ADFS. As an update to that blog, one of Kloud's super smart developers (Alexey Shcherbak) has re-written the FEMMA ADFS2Fed.py Python script in PowerShell, removing the need for Python and the LXML library! The script converts ADFS metadata for consumption by a Shibboleth SP. Below is the output of Alexey's labour. Awesome work, Alexey!

$idpUrl = "https://federation.contoso.com";
$scope = "contoso.com";
$filename = ((Split-Path -parent $PSCommandPath) +"\federationmetadata.xml");

cls;
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Xml.Linq");
$xel = [System.Xml.Linq.XElement]::Load($filename);

# Declare the Shibboleth metadata XML namespace on the root element
$shibNS = New-Object System.Xml.Linq.XAttribute @(([System.Xml.Linq.XNamespace]::Xmlns + "shibmd"), "urn:mace:shibboleth:metadata:1.0");
$xel.Add($shibNS);

# Add an Extensions element containing the Shibboleth Scope (regexp="false")
$scopeContent = New-Object System.Xml.Linq.XElement @("{urn:mace:shibboleth:metadata:1.0}Scope", (New-Object System.Xml.Linq.XAttribute @("regexp","false")),$scope);
$scope = New-Object System.Xml.Linq.XElement @("{urn:oasis:names:tc:SAML:2.0:metadata}Extensions",$scopeContent);
$xel.AddFirst($scope);

# Insert a SingleSignOnService endpoint for the Shibboleth AuthnRequest profile ahead of the existing SSO endpoints
$authN = New-Object System.Xml.Linq.XElement @("{urn:oasis:names:tc:SAML:2.0:metadata}SingleSignOnService", (New-Object System.Xml.Linq.XAttribute @("Binding","urn:mace:shibboleth:1.0:profiles:AuthnRequest")), (New-Object System.Xml.Linq.XAttribute @("Location", ($idpUrl+"/adfs/ls/"))) );
$firstSSO = [System.Linq.Enumerable]::First( $xel.Descendants("{urn:oasis:names:tc:SAML:2.0:metadata}SingleSignOnService"));
$firstSSO.AddBeforeSelf($authN);

# Strip the signature, the WS-Trust/WS-Federation RoleDescriptors and the SPSSODescriptor.
# The RoleDescriptor removal is run twice: Remove() detaches the first match and ends the lazy
# enumeration, so a second pass is needed for the remaining descriptor.
$xel.Elements("{http://www.w3.org/2000/09/xmldsig#}Signature")|%{ $_.Remove()};
$xel.Elements("{urn:oasis:names:tc:SAML:2.0:metadata}RoleDescriptor") | %{$_.Remove()};
$xel.Elements("{urn:oasis:names:tc:SAML:2.0:metadata}RoleDescriptor") | %{$_.Remove()};
$xel.Elements("{urn:oasis:names:tc:SAML:2.0:metadata}SPSSODescriptor")| %{$_.Remove()};

# Save the converted metadata alongside the source file
$xel.Save(($filename+"ForShibboleth.xml"), [System.Xml.Linq.SaveOptions]::None)

#$xel.ToString()| Out-File ($filename+"ForShibboleth.xml") -Force
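
To use the converter, save it next to a copy of the ADFS federation metadata and run it. Something along these lines works (the script file name here is an assumption):

# Download the ADFS metadata into the script folder, then run the conversion
Invoke-WebRequest -Uri "https://federation.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml" -OutFile .\federationmetadata.xml
.\ADFS2Fed.ps1

# The converted metadata is written alongside the source as federationmetadata.xmlForShibboleth.xml,
# ready to be referenced by the Shibboleth SP as a local metadata file.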

Start-up like a pro or fast track cloud in your enterprise. . .

As part of my job I regularly interact with IT and business leaders from companies across a diverse range of industries. A similarity I see across most businesses is that they contain a bunch of knowledge workers who all need to interact both internally and externally with common parties (internal departments / branches, customers, suppliers, vendors and government / regulatory bodies).

So how do knowledge workers in today's highly connected world collaborate and communicate? Aside from telephone and face-to-face communication, email is still the primary tool of communication. Why? Because it's universally accepted and everyone in business has it... Is this good or bad? It certainly comes at a cost: the productivity of your knowledge workers...

McKinsey & Company state that 'the average knowledge worker spends 28% of their day in their email client managing email and searching for information' (McKinsey).

Now 28% is a significant proportion of one’s day, clawing back some of this time to focus on your core business is what competitive advantage is made of.

Interestingly, from my observations many established companies are slow to embrace technologies beyond email that could help to free up time. If this sounds like your business, surely improving productivity and increasing the focus on whatever it is that your business specialises in is of significant importance and priority?

Technology is an important part of everyone’s business and unfortunately it’s often viewed as a cost centre, rather than being viewed as a tool for competitive advantage.

If provisioning and managing IT services isn't your core business, it will naturally be a deterring factor when considering the assimilation of a new technology within your business processes. This is where cloud technologies provide real agility. There is tangible value to be leveraged, particularly from higher-level cloud services. Software as a Service (SaaS) and Platform as a Service (PaaS) offerings allow you to think differently and treat IT more like a utility (electricity or water), consuming it, ready-made and working, as a service. Why not let someone else that specialises in technology run it for you? Best of all, SaaS services are super quick to provision (often minutes) and can easily be trialled with low levels of risk or expense.

Competitive advantage is gained by making staff more focused and productive. Strive for solutions that provide your organisation with:

  • Knowledge retention
    – make it easy to seek answers to questions and store your intellectual property
  • Effortless collaboration
    – make it easy for your staff to collaborate and communicate with everyone they need to, be it inside or outside of your corporate / geographical boundary
  • Faster access to information
    – don't make the mistake of making it a 15 step process to access corporate information or documents
  • Security and governance
    – choose solutions that have built-in security and governance

So here are my tips for developing agility. . .

If you're a start-up, be born in the cloud... If you're a well-established corporation or enterprise, perhaps it's time to think more like a start-up. Be agile, try new things and stay hungry for improvement. To a degree, adopt a simple mindset to make things easier.

  • Don’t get stuck in the past
    – doing things the same way as you did five or ten years ago is a recipe for commercial irrelevance
  • Don’t collect Servers
    - if you're not in the business of IT infrastructure, don't go on a mission to amass a collection of servers on-premises.
  • Establish a cloud presence
    – establish a presence with more than one vendor. Why use only one? Pick the best bits from multiple vendors
  • Think services not servers
    – strive for the selection of SaaS and PaaS over IaaS wherever possible.
  • Have a strategy for Identity Management
    – avoid identity overload and retain centralised control of identity and access
  • Be ready to switch
    – be open to new solutions and service offerings and agile enough to switch
  • Review your existing Infrastructure landscape
    - identify candidates for transition to a cloud service, preferably SaaS / PaaS
  • Pilot and review some new technologies
    – identify processes ripe for disruption, try them out and seek feedback from your staff
  • Keep the governance
    – just because it’s in the cloud doesn’t mean you need to abandon your security and governance principles

By dedicating some time and resources, you can establish a platform that facilitates quick trials of new services. Adopt a hungry mindset, and explore cost savings and opportunities to improve productivity.

In Conclusion. . .

Wikipedia defines competitive advantage as occurring 'when an organisation acquires or develops an attribute or combination of attributes that allows it to outperform its competitors' (Wikipedia). McKinsey & Company's example of the average knowledge worker's time spent inside corporate email illustrates the opportunity that exists for improvement on a single front. Many more like this may exist within your business.

Transitioning to anything new can seem daunting. Start by creating a road map for the implementation and adoption of new technology within your business. Discuss, explore and seek answers to questions and concerns you have around cloud services. Adopt a platform that ensures you correctly select, implement and leverage your investment and yield competitive advantage.

If you’re not sure where to start, leverage the skills of others who have been through it many times before. Consider engaging a knowledgeable Kloud consultant to help your organisation with the formulation of a tailored cloud strategy.


Azure VM Security using Azure VM Security Extensions, ConfigMgr and SCM Part 1

This post wraps up my session at TechEd Sydney 2014: DCI315 Azure VM Security and Compliance Management with Configuration Manager and SCM.

In this blog post series we will dispel some of the myths and dive into Azure VM security.

With the Azure AU Geo launched at TechEd Sydney 2014, Azure now has 19 regions. More and more enterprises are starting to migrate their workloads into Azure, and most of our clients have the same question: how do we manage security and compliance for Azure VMs?

Security for our Azure VMs is a shared responsibility between Microsoft and us. The next question is: who's responsible for what?

The diagram below is the Shared Responsibility Model, which I have borrowed from Lori Woehler.

sharedresponsibility diagram

We will focus on the IaaS column of the diagram above. Clearly, we have responsibility for the O/S layer and above. In summary, our responsibilities as an IaaS customer are as follows:

  • Application Security
  • Access Control and Data Protection
  • Vulnerability Scanning, Penetration Testing
  • Logging, Monitoring, Incident Response
  • Protection, Patching and Hardening

There is no silver bullet to protect our Azure VMs; a pro-active approach has to be taken to secure our Azure environment. This blog post will focus on Protection, Patching and Hardening of our Azure VMs. Let's jump to our first focus.

Protect Azure VM

In this post we will use two different techniques:

  • Using the Azure VM Security Extensions (an out-of-the-box solution)
  • Using System Center Endpoint Protection, our in-house AV solution

Azure VM Security Extensions

Details of the Azure VM Security Extensions can be found here. We will use Microsoft Antimalware for this post, which recently reached general availability (GA). Microsoft Antimalware is built on the same anti-malware platform as Microsoft Security Essentials (MSE), Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint Protection, Windows Intune and Windows Defender.

We can deploy Microsoft Antimalware using the Portal, Azure PowerShell or Visual Studio.

microsoft antimalware


We will use the PowerShell deployment technique for this post. The script below deploys the Microsoft Antimalware security extension to an existing Azure VM.
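
A minimal sketch using the classic Azure (service management) PowerShell cmdlets follows; the cloud service name, VM name, extension version and configuration below are placeholder assumptions to adjust for your environment, not values from the original session.

# Placeholder names - substitute your own cloud service and VM
$serviceName = "MyCloudService";
$vmName = "MyAzureVM";

# A basic Microsoft Antimalware public configuration: enable the engine and real-time protection
$publicConfig = '{ "AntimalwareEnabled": true, "RealtimeProtectionEnabled": true }';

# Check which versions of the extension are available (publisher Microsoft.Azure.Security)
Get-AzureVMAvailableExtension -Publisher "Microsoft.Azure.Security" | Select-Object ExtensionName, Version;

# Add the extension to the existing VM and push the change to Azure
Get-AzureVM -ServiceName $serviceName -Name $vmName |
    Set-AzureVMExtension -ExtensionName "IaaSAntimalware" -Publisher "Microsoft.Azure.Security" -Version "1.*" -PublicConfiguration $publicConfig |
    Update-AzureVM;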

The script below checks whether Microsoft Antimalware has been deployed to an Azure VM.
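
Again, a sketch rather than the original script; it lists the extensions applied to the VM and filters for the antimalware one (names as per the previous snippet):

# List the extensions applied to the VM and pick out Microsoft Antimalware
Get-AzureVM -ServiceName $serviceName -Name $vmName |
    Get-AzureVMExtension |
    Where-Object { $_.ExtensionName -eq "IaaSAntimalware" };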

System Center Endpoint Protection

System Center Endpoint Protection is one of the security features of System Center Configuration Manager, known as SCCM or ConfigMgr. We will use ConfigMgr 2012 R2 in this post. ConfigMgr 2012 R2 is a powerful enterprise-grade tool for maintaining configuration, compliance and data protection across users' computers, notebooks, servers and mobile devices, whether they are corporate-connected or cloud-based.

We will focus on the Endpoint Protection solution for our Azure VMs. Four things need to be noted:

  • The Endpoint Protection site system role needs to be configured as an endpoint protection point
    endpoint protection
  • Create Antimalware Policy and configure it
    antimalwarepolicy
  • Configure Client Device Settings and select Endpoint Protection
    clientdevice
  • Deploy ConfigMgr Agent with Endpoint Protection Agent and Deploy the Client Device Settings
    deployclientdevice

And we have just deployed anti-malware to our Azure VM.

ep

Now, what are the major benefits of using ConfigMgr Endpoint Protection instead of the Microsoft Antimalware VM security extension?

  • Centralized Management
    ep console
  • Reporting Services
    epreport

In the next post we will focus on Patch and Compliance Management using ConfigMgr and SCM.


Decoding Behaviour Driven Development

I have been thinking for a while about writing a piece on BDD (aka Behaviour Driven Development), not that it was absolutely essential (I consider myself lazy) but more because there is very little write-up available on the relevance of BDD and its use in a .NET environment. BDD is also known to many in the community as ATDD (Acceptance Test Driven Development), certainly for some similarities that cannot go unnoticed.

So really, why bother with BDD?

It starts with the notion 'Did you give me what you promised?'

I am consciously not going into a detailed definition of what BDD is, hoping that you have some understanding of it already. For absolute newbies, it is just another way of interpreting your solution by its behaviour rather than by objective features (and expressing your development and testing in business language that is 'executable').

In traditional software testing, test cases are derived from the requirements (and there is nothing wrong with that!). The fact is that customers, in the end, care about the solution, less about the 'requirements' and what has been tested. The delivered solution or product will always be matched against the customer's expectations (and sometimes even unrealistic expectations).

And that’s where the problem creeps in !

The bi-directional traceability matrix used to establish the links between stated requirements and the delivered product will be of some assistance, but often that's lost in translation. With the adoption of an Agile delivery approach, things have become a bit simpler, but the problem doesn't go away.

Make a living document: whose document is it anyway?

BDD fills that gap: that very difference where features promised to be delivered never meet the customer's acceptance because of the communication gap between what is expected and what is delivered.

When you think a bit deeper, there are two direct benefits delivered by BDD –

  1. You can organise your requirements (feature file) and test scenarios in the same place, following a structured format. These test scenarios will often be the 'acceptance criteria' for the feature and can easily be monitored (and managed) by the business (PO, Business Analyst).
  2. Most importantly, these test scenarios, written in natural language, are 'executable'. Yes, you can literally click the run button and the program takes over from there. This means they can be integrated with the build (and follow continuous integration).

It almost becomes the single source of truth in delivering the software; it can be integrated with your solution and the whole suite can run as a single entity. Remember, all BDD gives you is a platform to structure your tests in a natural language. That means you can write any of your tests this way, including unit, integration and UI-level automated tests.

I understand that – Tell me how to apply it ?

Organisations using the BDD approach are often not applying it right, leading to non-optimal usage and a less effective implementation. We have to keep in mind that our objective here is to bridge the gap between business, developers and testers, and BDD gives us the interface to do that. To write an effective feature file, these three have to come together and agree before a feature can be implemented. An understanding and agreeable environment is the key here, supplemented by input from all three to make the BDD feature file more comprehensive, correct and complete.

And always keep in mind that we develop or test what is important for the end user, not what we want. End-user behaviour is the key in adopting a BDD-style framework. By elevating your focus from object to feature, you are able to look through the lens of a customer, enabling an app with a great user experience.

Enough words, let’s see some action

Here is a simple demonstration of the way you can take a BDD path and implement it using SpecFlow/Gherkin in an ASP.NET MVC app.

Project set-up:

If you are using Visual Studio 2010 or 2012, install the library from the Visual Studio Gallery along with the NuGet packages.

https://visualstudiogallery.msdn.microsoft.com/9915524d-7fb0-43c3-bb3c-a8a14fbd40ee

Check the Integration section of the Specflow website for details –

http://www.specflow.org/documentation/Visual-Studio-2012-Integration/

The Specflow website has all other relevant details on similar implementation using other Visual Studio versions.

Feature file and Executable tests

Below is an example of a typical feature file, where you will write all your tests for a feature. It is always good practice to organise your tests by feature, and you can logically group them under a feature file. Once you have outlined the tests, you can generate a definition file which will contain your underlying code.

clip_image001

The definition file gives you a structure for your code aligned with the scenarios outlined in the feature file. The generated bindings are then replaced by the actual code that will allow you to run your tests, be they unit, integration or UI.

After you write your own piece of code in the bindings, it will look something like this. (You will continue to write the Page Object Model to simulate the user behaviour in a similar way.)

clip_image003

And finally, one last thing before you run: check that you have the right test runner in App.config.

clip_image004

And you are all set to go !

clip_image005

There is no denying that BDD instils good discipline. It is not something very new, nor something radically different from what we have been doing to date (you probably do an 'Arrange-Act-Assert' anyway), but it is a great way to organise your thoughts and put them in a living document that is 'not just another document lying somewhere' but something you can execute and get instant feedback from.

Unable to Activate Office 365 ProPlus

I recently came across an issue with Office ProPlus on Windows 8.1 while working on a Click-to-Run deployment, where it wasn't possible to activate the Office product with the Office 365 service. Instead of being prompted for credentials, a dialog window opened stating "This feature has been disabled by your administrator."

All well and good, apart from the fact that there were no policies in place to actually enforce this, and as a result Office will operate with reduced functionality until it can be activated.

Upon further investigation, TechNet mentions a registry key that can be modified to change this behaviour.

HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Common\SignIn\SignInOptions allows you to tweak the following settings:

Value  Description
0      Default – displays both the user's Microsoft account ID and the organisation ID.
1      Displays only the user's Microsoft account ID.
2      Displays only the user's organisation ID.
3      Displays neither ID type. The user will be unable to log on. If you set SignInOptions to 3 and a user triggers the logon page, no ID types will be offered to the user; instead, the message "Sign in has been disabled" is displayed.
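
For reference, the value can also be created and set from PowerShell; a quick sketch (run in the affected user's context):

# SignInOptions lives under the per-user Office 15.0 SignIn key; create the key if it is missing
$signInKey = "HKCU:\Software\Microsoft\Office\15.0\Common\SignIn"
if (-not (Test-Path $signInKey)) { New-Item -Path $signInKey -Force | Out-Null }

# 0 = default behaviour (offer both the Microsoft account ID and the organisation ID)
New-ItemProperty -Path $signInKey -Name "SignInOptions" -PropertyType DWord -Value 0 -Force | Out-Null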


From this information it appears as though SignInOptions 3 is being enforced; however, there was no key at this location. Creating the key and setting the option to '2' resulted in a blank sign-in page:

Resetting the registry value to the default of '0' did not change the behaviour. The ability to customise this behaviour also exists in the Group Policy Administrative Templates for Office 2013, so I enforced the default behaviour of '0' there.

Using ProcMon I could see that SignInOptions '0' was now being applied by Group Policy, but from a different registry location:

WINWORD.EXE 5648 RegQueryValue  HKU\[SID]\Software\Policies\Microsoft\office\15.0\common\signin\SignInOptions SUCCESS Type: REG_DWORD, Length: 4, Data: 0
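
The same check can be made without ProcMon by reading both registry locations from PowerShell (a sketch; the paths are the two mentioned above):

# Per-user preference value documented on TechNet
Get-ItemProperty -Path "HKCU:\Software\Microsoft\Office\15.0\Common\SignIn" -Name SignInOptions -ErrorAction SilentlyContinue

# Policy value written by the Office 2013 Administrative Templates
Get-ItemProperty -Path "HKCU:\Software\Policies\Microsoft\Office\15.0\Common\SignIn" -Name SignInOptions -ErrorAction SilentlyContinue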

A call to Microsoft confirmed that this is a JavaScript bug within the Office product. The workaround described above is a valid way to get around it; however, if you are experiencing this issue, it is recommended that you contact Microsoft, as there is a hotfix available upon request.

Shibboleth Service Provider Integration with ADFS

If you’ve ever attempted to integrate a Shibboleth Service Provider (Relying Party) application with ADFS, you’d have quickly realised that Shibboleth and ADFS are quite different beasts. This blog covers off some of the key issues involved and provides details on how to get ADFS to play nice with a Shibby Service Provider (SP). This blog does not cover configuring ADFS to participate as a member in a Shibboleth Federation like InCommon or the Australian Access Federation (AAF). That type of integration presents a different set of challenges, contact us to discuss your needs.

Before we get to the details, note that some terminology varies between the two federation services; the table below lists the key differences. Shibboleth terminology will be used throughout this blog.

AD FS Name        Shibboleth Name
Security token    Assertion
Claims provider   Identity provider (IdP)
Relying party     Service provider (SP)
Claims            Assertion attributes

Below are the key issues involved with integration of Shibboleth SPs with ADFS. Each will be expanded upon later in this blog.

1. Metadata incompatibility.

2. Incorrect SAML Name Format in assertions.

3. Missing Assertion Attributes.

1. Metadata Incompatibility

ADFS generates and publishes its metadata at https://<FederationServiceName>/FederationMetadata/2007-06/FederationMetadata.xml. There is no functionality to modify what is published. When a Shibboleth SP consumes ADFS metadata, the following issues can arise:

1. ADFS metadata contains information pertaining to WS-Trust and WS-Federation. This is not consumed by Shibboleth SPs.

2. Shibboleth Scope information is not generated by ADFS.

3. Shibboleth XML Name Space information is missing.

The easiest method to address these issues is to pre-process the metadata for consumption by the Shibboleth SP. For this, the Federation Metadata Manager (FEMMA) toolset is useful. The toolset was developed to assist in configuring ADFS to participate as a member of a Shibboleth federation (e.g. InCommon or the Australian Access Federation). Whilst this is overkill for integrating a Shibby SP with ADFS, the toolset also includes the ADFS2Fed.py Python script, which reads ADFS metadata and corrects the above issues. Run it, configure the Shibboleth SP to retrieve IdP metadata from a local file, job done!
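
A simple way to pull the metadata down for pre-processing (the federation service name below is a placeholder):

# Save the ADFS federation metadata to a local file for the ADFS2Fed.py script to process
Invoke-WebRequest -Uri "https://federation.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml" -OutFile .\federationmetadata.xml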

2. Incorrect SAML Name Format

When ADFS issues assertions configured using the standard ADFS Claims Rules interface it uses the name format urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified. Shibboleth expects urn:oasis:names:tc:SAML:2.0:attrname-format:uri. This issue unfortunately means that assertions will need to be issued by custom Claim Rules.

To apply the correct SAML Name Format to an assertion attribute from an ADFS attribute store, a two stage process is needed:

1. Retrieve the assertion attribute from the attribute store and store as an incoming assertion. For example, the custom ADFS Claims Rule below queries Active Directory for the authenticating user’s User Principal Name and stores the value into the incoming assertion with type of http://kloud/internal/userprincipalname:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"] => add(store = "Active Directory", types = ("http://kloud/internal/userprincipalname"), query = ";userPrincipalName;{0}", param = c.Value);

2. Issue a new assertion of the required type using the incoming assertion value. The following custom ADFS claims rule retrieves the incoming assertion of type http://kloud/internal/userprincipalname, re-issues it as type urn:oid:1.3.6.1.4.1.5923.1.1.1.6, and assigns a name format of urn:oasis:names:tc:SAML:2.0:attrname-format:uri.

c:[Type == "http://kloud/internal/userprincipalname"] => issue(Type = "urn:oid:1.3.6.1.4.1.5923.1.1.1.6", Value = c.Value, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:uri");
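
The custom rules above can be pasted into the claim rule editor, or applied with the ADFS PowerShell module. A minimal sketch (the relying party name and rules file path are placeholders):

# Save the custom rules, one after another, into a text file and load them onto the relying party trust
Set-AdfsRelyingPartyTrust -TargetName "Shibboleth SP" -IssuanceTransformRulesFile "C:\Temp\ShibbolethClaimRules.txt"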

3. Missing Assertion Attributes

By default, a Shibboleth SP expects assertions from the eduPerson class. Some of these have specific requirements; below are the troublesome ones, with sample ADFS custom claim rules to get you going. Note: scoped attributes must have a scope matching the scope provided in the IdP metadata, or by default the Shibboleth SP will drop them. If you are using the FEMMA ADFS2Fed.py script, the Shibboleth scope is entered as a parameter.

1. eduPersonTargetedID (urn:oid:1.3.6.1.4.1.5923.1.1.1.10)

This assertion attribute is derived from the SAML NameID. It is required to identify the identity provider and the service provider, and to provide a consistent, obfuscated identifier for the user. Obfuscation of the user identifier ensures that whilst the user can be tracked across services, they cannot be identified directly to a named account.

To construct this, we first grab an immutable identifier for the user. The user's Active Directory Security Identifier (SID) is ideal, as it is constant for the life of the account, unlike the Windows Account Name (sAMAccountName), which can change. We then use the ppid function to obfuscate the SID, using the federation service name of ADFS as a seed. Finally, we store this value as an incoming assertion of type http://kloud/internal/persistentId.

 c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"] => add(store = "_OpaqueIdStore", types = ("http://kloud/internal/persistentId"), query = "{0};{1};{2}", param = "ppid", param = c.Value, param = c.OriginalIssuer);

Next we provide the IdP and SP name qualifiers (entity IDs) as static strings and issue the assertion as the required type, urn:oid:1.3.6.1.4.1.5923.1.1.1.10:

 c:[Type == "http://kloud/internal/persistentId"] => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oid:1.3.6.1.4.1.5923.1.1.1.10", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/spnamequalifier"] = "<SP Entity ID>", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/namequalifier"] = "<IdP Entity ID>");

2. eduPersonUniqueId (urn:oid:1.3.6.1.4.1.5923.1.1.1.13)

Similarly to eduPersonTargetedID, this assertion attribute is another obfuscated user identifier. It is scoped, therefore it is comprised of an unchanging unique identifier for the user concatenated with their domain, e.g. <identifier>@kloud.com.au. Examples given in the eduPerson schema reference show a GUID as the user identifier. The Active Directory GUID fits the requirements; however, performing a query against Active Directory for the GUID value as shown will not result in a correctly formatted GUID. This is due to the conversion of the GUID binary value from Active Directory.

The recommendation is either to implement a string processing store, or to populate an attribute store with the GUID converted to a correctly formatted string. Once the GUID value is converted and stored in the incoming assertion pipeline, it can be concatenated with the scope value and assigned the correct name format, as shown below:

c:[Type == "http://kloud/internal/objectguid"] => issue(Type = "urn:oid:1.3.6.1.4.1.5923.1.1.1.13", Value = c.Value + "@kloud.com.au", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:uri");

3. eduPersonPrincipalName (urn:oid:1.3.6.1.4.1.5923.1.1.1.6)

This assertion attribute provides a non-obfuscated, scoped user identifier. Active Directory provides a User Principal Name value which is suitable for this purpose. It can be queried and stored into the incoming assertion pipeline like so:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"] => add(store = "Active Directory", types = ("http://kloud/internal/userprincipalname"), query = ";userPrincipalName;{0}", param = c.Value);

And issued with the correct type and name format like this:

c:[Type == "http://kloud/internal/userprincipalname"] => issue(Type = "urn:oid:1.3.6.1.4.1.5923.1.1.1.6", Value = c.Value, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:uri");

4. eduPersonAffiliation (urn:oid:1.3.6.1.4.1.5923.1.1.1.1)

This attribute is used to categorise user accounts for the purpose of assigning access privileges. Refer to the eduPerson schema reference for a definition of affiliations. Methods for assigning affiliation values vary between organisations; however, in regards to issuing an assertion, the recommended approach is to populate an attribute either in Active Directory or in another attribute store. Below is an example where the Active Directory attribute Employee Type (employeeType) stores the affiliation:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"] => add(store = "Active Directory", types = ("http://kloud/internal/affiliation"), query = ";employeeType;{0}", param = c.Value);

And issued with the correct type and name format like this:

c:[Type == "http://kloud/internal/affiliation"] => issue(Type = "urn:oid:1.3.6.1.4.1.5923.1.1.1.1", Value = c.Value, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:uri");

To issue eduPersonScopedAffiliation (urn:oid:1.3.6.1.4.1.5923.1.1.1.9) issue the incoming assertion concatenated with the scope as below:

c:[Type == "http://kloud/internal/affiliation"] => issue(Type = "urn:oid:1.3.6.1.4.1.5923.1.1.1.9", Value = c.Value + "@kloud.com.au", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:uri");


5. eduPersonAssurance (urn:oid:1.3.6.1.4.1.5923.1.1.1.11)

This assertion provides information on the levels of assurance involved in the authentication of a user, both in the proof required for account creation and in the strength of the authentication mechanism(s) used. The eduPersonAssurance assertion is a text string, e.g. "urn:mace:aaf.edu.au:iap:id:1" indicates an Identity Assurance Level of 1, whilst "urn:mace:aaf.edu.au:iap:authn:1" indicates an Authentication Assurance Level of 1. In most instances this assertion can be issued as a simple value assertion, as below:

=> issue(Type = "urn:oid:1.3.6.1.4.1.5923.1.1.1.11", Value = "urn:mace:aaf.edu.au:iap:id:1", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/attributename"] = "urn:oasis:names:tc:SAML:2.0:attrname-format:uri");


Hopefully this blog helps you with your Shibboleth integration issues. In addition, federation services should be implemented in alignment with a larger identity and access management strategy. If you need assistance, call us and we'll sort it out for you!

Reducing the size of an Azure Web Role deployment package

If you’ve been working with Azure Web Roles and deployed them to an Azure subscription, you likely have noticed the substantial size of a simple web role deployment package. Even with the vanilla ASP.NET sample website the deployment package seems to be quite bloated. This is not such a problem if you have decent upload bandwidth, but in Australia bandwidth is scarce like water in the desert so let’s see if we can compress this deployment package a little bit. We’ll also look at the consequences of this large package within the actual Web Role instances, and how we can reduce the footprint of a Web Role application.

To demonstrate the package size I have created a new Azure cloud service project with a standard ASP.NET web role:

1

Packaging up this Azure Cloud Service project results in a ‘CSPKG’ file and service configuration file:

2

As you can see the package size for a standard ASPX web role is around 14MB. The CSPKG is created in the ZIP format, and if we have a look inside this package we can have a closer look at what’s actually deployed to our Azure web role:

3

The ApplicationWebRole_….. file is a ZIP file itself and contains the following:

4

The approot and sitesroot folders are of significant size, and if we have a closer look they both contain the complete WebRole application including all content and DLL files! These contents are being copied to the actual local storage disk within the web role instances. When you’re dealing with large web applications this could potentially lead to issues due to the limitation of the local disk space within web role instances, which is around the 1.45 GB mark.

So why do we have these duplicate folders? The approot is used during role start-up by the Windows Azure Host Bootstrapper and could contain a class derived from RoleEntryPoint. In this web role you can also include a start-up script, which you can use to perform customisations within the web role environment, for example registering assemblies in the GAC.

The sitesroot contains the actual content that is served by IIS from within the web role instances. If you have defined multiple virtual directories or virtual applications these will also be contained in the sitesroot folder.

So is there any need for all the website content to be packaged up in the approot folder? No, absolutely not. The only reason we have this duplicate content is that the Azure SDK packages up both the approot and the sitesroot folders, due to the behaviour of the Azure Web Role Bootstrapper.

The solution to this is to tailor the deployment package a little bit and get rid of the redundant web role content. Let’s create a new solution with a brand new web role:

5

This web role will hold just the RoleEntryPoint derived class (WebRole.cs), so we can safely remove all other content, NuGet packages and unnecessary referenced assemblies. The web role will not contain any of the web application bits that we want to host in Azure. This will result in the StartupWebRole looking like this:

6

Now we can add the web application that we want to publish to an Azure Web Role into the Visual Studio solution. The key point is to not include this as a role in the Azure Cloud Service project, but to add it as a 'plain web application' to the solution. The only web role we're publishing to Azure is the 'StartupWebRole', and we're going to package up the actual web application in a slightly different way:

7

The 'MyWebApplication' project does not need to contain a RoleEntryPoint derived class, since this is already present in the StartupWebRole. Next, we open up the ServiceDefinition.csdef in the Cloud Service project and make some modifications in order to publish our web application alongside the StartupWebRole:
8

There are a few changes that need to be made:

  1. The name attribute of the Site element is set to the name of the web role containing the actual web application, which is ‘MyWebApplication’ in this instance.
  2. The physicalDirectory attribute is added and refers to the location where 'MyWebApplication' will be published prior to creating the Azure package (this publish step can be scripted, as sketched below).
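
That publish step can be scripted so it runs before packaging. One way is MSBuild's file system publish; this is a sketch only, and the paths and property values are assumptions to adapt to your own solution:

# Publish the web application to the physicalDirectory referenced in ServiceDefinition.csdef
& msbuild .\MyWebApplication\MyWebApplication.csproj `
    /p:Configuration=Release `
    /p:DeployOnBuild=true `
    /p:WebPublishMethod=FileSystem `
    /p:DeleteExistingFiles=true `
    /p:publishUrl=..\publish\MyWebApplication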

Although this introduces the additional step of publishing the web application to a separate physical directory, we immediately notice the reduced size of the deployment package:

9

When you’re dealing with larger web applications that contain numerous referenced assemblies the savings in size can add up quickly.

Lync 2013 Basic Client – the forgotten client

I've had conversations with customers lately who are looking to use Lync Server 2013 but currently don't want to move their desktop SOE to the Office 2013 suite with the Lync 2013 client. This can be a project in itself, and one that IT admins aren't always prepared to take on, whether because of the analysis needed to roll out the suite or because they are still on an agreement that only licenses them for Office 2010. Standing up a Lync Server 2013 environment and then rolling out the Lync 2010 client just makes me feel sad for all the end users, as they look at their 'new' client, which to many others around the world would look like the 'old' client that we were happy to uninstall a few years ago.

Some IT Admins may have the approach

“I’m not an enterprise voice user and I don’t know the adoption of Lync Meetings yet. So I’m just expecting my users to use IM/P and potentially ‘some‘ video. Who cares what client I use”

Well, in some context this statement is right, but with the wrong approach. The new Lync client isn't just white in colour with a minimalistic design; it also adds a lot of additional functionality that is key to the user experience, with the 2013 back-end servers pulling the strings. I think the general misconception is that the server should provide all the grunt and the client should just be the shell that displays the media stream. That isn't entirely true: the Lync Server 2013 AVMCU does handle rate matching and upscaling of video streams in a much better way, but without updated client endpoints, a 2010 client with a non-optimised video codec will never get the optimal stream it deserves, like a 720p/1080p@30fps resolution for peer-to-peer or Lync Meeting video, even if the client and network permit it. A Lync 2010 client alongside a Lync 2013 client with the same hardware and OS would present a VGA@15fps media stream to the user, while the Lync 2013 client could do the full 1080p@30fps. The conference/meeting experience will also be hampered for the individual, with only a smaller selection of meeting modalities on offer at the lesser resolution. So in short, it will do less, with only a single participant stream and in lesser quality, on the same hardware and OS (ouch).

“So how do I take advantage of these features without worrying about my Office Suite licensing? I don’t want to go back to my license rep and fight with them over how I get to 2013 from my current arrangement just to get Lync rolled out to my end users”.

The answer is the Lync 2013 Basic client, which offers a lot of the feature set minus some enterprise voice goodies. Oh, and by the way, it's FREE! I repeat: FREE. Free in the sense that it's a public Microsoft download: "Drive away, no more to pay".

Microsoft Lync Basic 2013 (32 Bit)

Microsoft Lync Basic 2013 (64 Bit)

Let’s look at the Basic Client in a line up against my current Office 2013 O365 Pro Plus client with my E3 subscription which entitles me to run the Office 2013 suite.

Full Vs Basic 2013 Clients

I’m seeing double….or close enough.

As I was halfway through writing this blog, Microsoft released an update adding some nice additional features to the Basic client that put it on par with the full client for the conferencing/meeting experience. Now meetings on the Basic client can support a full range of experiences, with the big inclusion of the multi-party video gallery.

http://support2.microsoft.com/kb/2998659

  • Users can record a conversation from the … menu in a conversation window.
  • Users can use the gallery view to see all users’ video instead of only the active speaker’s in a video conversation that has more than two people.
  • Users can switch mode between gallery and speaker.

These were the missing pieces of the puzzle that previously left the Basic client a clear step behind its bigger brother for feature parity.

If you're looking at rolling out a Lync 2013 PoC to a user base to see if it is going to be adopted positively in your organisation, I recommend using the Lync 2013 Basic client to meet your needs over the Lync 2010 client every day of the week. It will far exceed the 2010 client for standard client access, and user adoption will be positive. If you knew you could buy the car with the 'options pack' at no extra cost, you'd take it!

So what would your clients look like if you only had purchased Lync Server 2013 backend with no Office 2013 suite?

  • Desktop – Lync 2013 Basic Client (Free)
  • Tablet/Win8 App – Lync App from Store (Free)
  • Mobile – iOS, Android, Windows Phone from Stores (Free)

Comparison between 2013 clients here

Happy Lync’ing.

Direct Access on Azure, Why? Can? How?

Direct Access on Azure?

A customer recently requested Kloud to assist them in implementing a Windows Server 2012 R2 based Direct Access (DA) service, as their workforce had recently moved to a Windows 8 client platform. What did surprise me was that they requested it be one of the first solutions to be hosted on their Microsoft Azure service.

Direct Access, for those unfamiliar with the technology, is essentially an 'always on' VPN-style connection that provides a user with access to a corporate network from any basic Internet connection, without any user interaction. The connection is established before a user even logs into their laptop or tablet, so services such as Group Policy mapped drives and login scripts will execute just as they would for a user logging into an internal corporate network. This technology was introduced with Windows 7 Enterprise edition and continues with Windows 8 Enterprise edition. Windows 10 appears to have this code as well (at least judging by the current technical preview and TechNet forum responses).

One of the first items to note is that Direct Access isn't supported by Microsoft on the Azure platform at this stage of its life. After implementing it successfully for this customer's requirements, however, I thought I would share some learnings about why this solution worked for this customer and how we implemented it, but also some items to note about why it may not be suitable for your environment.

This article can provide guidance if you’re considering a single DA server solution, however you might need to look for other advice if you’re looking at a multi-site, or multi-DA server solution requiring load balancing or high availability.  Primarily the guidance around IPv6 and static IP addressing that I address here may change if you look at these architectures.



Why?

My customer had the following business and technical requirements:

  • They own a large fleet of Windows 8 laptops that ‘live’ outside the corporate network and primarily never connect to the corporate network and therefore never communicate with internal systems such as Active Directory for Group Policy updates or internal anti-virus systems.  The customer wanted to ensure their laptop fleet could still be managed by these systems to ensure compliance and consistency in user interface ‘lockdown’ (using Group Policy) for support teams in aiding troubleshooting and security updates.
  • My customer wanted to remove their existing third-party SSL VPN solution to access internal services and recoup the licensing cost with this removal.  The Direct Access solution had already been ‘paid for’ in effect as the customer already had a Microsoft Enterprise Agreement.
  • The existing SSL VPN solution forced all Internet access ('forced tunnelling') during the session through the corporate network, costing the customer in ISP Internet download fees, especially for users working from home.
  • My customer did not have the budget to publish all existing internally accessible services to the Internet using publishing technologies such as Microsoft Web Application Proxy (WAP), for example.  This would require designing and implementing a WAP architecture, and then testing each service individually over that publishing platform.

Can’t or shouldn’t?

Microsoft Azure can host a Direct Access service, and for the most part it works quite well, but here are the underlying technologies that in my testing refuse to work with the Azure platform:

  • 'Manage out' – this term refers to the ability of servers or clients on the corporate network to establish a connection (that is, to create the network packet) directly to a Direct Access client that is connected only to the Internet. There is no official guidance from Microsoft about why there is a limitation; however, in my testing I found that it is related to IPv6 and the lack of 'IPv6 broadcast' address ability. I didn't get time to run Wireshark across it (plus my version of Wireshark wasn't IPv6 aware!), so if anyone has found a workaround to get this working in Azure, shoot me an email (michael.pearn@kloud.com.au).
  • Teredo – there are two types of Direct Access architecture on Windows Server 2012 R2: IP-HTTPS (an HTTPS tunnel is established between the client and the Direct Access server first, and all IPv6 communication then occurs across this encrypted tunnel) and 'Teredo', which is an IPv6-over-IPv4 encapsulation technology. Microsoft explains the various architectures best here: http://technet.microsoft.com/en-us/library/gg315307.aspx (although this article is from 2010, in the context of the now-retired UAG product). Teredo, however, requires TWO network cards, and since Azure virtual machines only support one network card per server, Teredo cannot be used as an option on Azure. In all of my testing on Azure I used IP-HTTPS.

The following is a good reason not (at least for now) to put Direct Access on Azure:

  • There is no native way, using Direct Access configuration, to deny a Direct Access client from reaching any server or FQDN on the internal network (i.e. a 'black list') if that connection can be established from the Direct Access server. For example, if a DA client attempts to download large files from servers reachable from the Direct Access server (such as large image or CAD drawings), then unless the server is hosted in Azure as well, all downloads will occur over the Azure VPN site-to-site connection. This can prove very costly in terms of Azure fees. My customer used their VPN hardware (which had a firewall service) to establish a 'black list' of IP-addressed sites that were still on-premise, to prevent Direct Access clients reaching these services across the Azure VPN (although communicating to your end clients why they can't reach these services can be difficult, as they'll just get a 401 web error or a mapped drive connection failure).

How?

The key to getting a working Direct Access solution on the Azure platform is primarily configuring the environment with the following items:

  • Ensure all Direct Access servers use a static private IP address. The new Azure Portal can easily assign a static IP address, but only if the virtual machine is built into a custom network first. If a virtual server is built using the default Azure internal networking (10.x.x.x), the server can be rebuilt into a custom network instead; however, the server object itself has to be deleted and rebuilt. If the virtual disk is kept during the deletion process, then the new server can just use the existing VHD during the install of the new object. Browse to the new Azure Portal (https://portal.azure.com) and use the web interface to first configure a private IP address that matches the network in use (see the example picture below). The 'old' Azure portal (https://manage.windowsazure.com) cannot set a static IP address directly through the web interface, and PowerShell has to be used instead (see the sketch after this list). A connection to the Azure service (via the Internet) is needed to set a static IP address, and more instructions can be found here: http://msdn.microsoft.com/en-us/library/azure/dn630228.aspx. To use the Azure PowerShell commands, the Azure PowerShell cmdlets have to be installed first, and an Internet connection has to be present for PowerShell to connect to the Azure service to allocate a static IP address. More instructions on getting Azure PowerShell can be found here: http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/
    AzureStaticIPaddress
  • Be sure to install all of the latest hotfixes for Windows 8/8.1 clients and Server 2012 (with or without R2). This article gives an excellent list of the required updates: http://support.microsoft.com/kb/2883952
  • Install at least one Domain Controller in Azure (this is a good article to follow: http://msdn.microsoft.com/en-us/library/azure/jj156090.aspx). The Direct Access server has to be domain joined, and all Direct Access configuration (for the DA clients and the DA server) is stored as Group Policy objects. Also, the Direct Access server itself performs all DNS queries for Direct Access clients on their behalf. If a Domain Controller is local, then all DNS queries will be contained within the Azure network and not forced to go over the Azure VPN site-to-site connection to the corporate network. Do not forget to configure your AD Sites and Services to ensure the Domain Controller in Azure is contained within its own AD site, so that AD replication does not occur too often across the VPN connection (plus you don't want your on-premise clients using the Azure DC for authentication).
  • When configuring Direct Access, at least for a single Direct Access server solution, do not modify the default DNS settings assigned by the configuration wizard. It isn't well documented or explained, but the Direct Access server runs a local DNS64 service which essentially becomes the DNS server for all Direct Access clients (for all internal sites, not Internet sites). The DA configuration wizard assigns the IP address of the DA server itself and provides the IPv6 address of the server to the DA Client GPO for DNS queries. The DA server will serve all DNS requests for addresses ending in the 'DNS Suffix' pool of FQDNs specified in the DA settings. If you have a 'split-brain' DNS architecture, for example 'customer.com' addresses on Internet DNS servers that you overrule with an internal primary zone (or stub records) for 'customer.com' for certain sites, then if you include 'customer.com' in the Direct Access DNS Suffix settings, the client will only use internal DNS servers (at least, the DNS servers that the Direct Access server can reach) for resolution of these names. As with all things, test all of your corporate websites during the testing phase to ensure there are no conflicts. One of the services that could cause conflicts is a solution such as ADFS, which is generally architected with exactly the same internal and external FQDN (e.g. adfs.customer.com): the internal FQDN generally points clients to integrated authentication, while the external FQDN generally points to forms-based authentication. For my customer, re-directing all 'customer.com' requests to the internal corporate network, including ADFS, worked without issue.
  • If you're using Hyper-V Windows 8 clients in your testing, be aware that in my experience the testing experience is 'patchy' at best. I did not have time to get a Windows 8 client VHD imported into Azure (there are no native Windows 8 templates to use in Azure), so I used a local Hyper-V Windows 8 client in my testing along with the offline domain join plus Group Policy option (there was no point-to-point network connection between my Azure DA server and my Hyper-V test client). My experience was that the Hyper-V DA client would connect to the DA FQDN, but the DNS service provided by the DA server was often very intermittent. This article is excellent to follow to get the Direct Access Client Settings Group Policy onto the offline Hyper-V client for testing: http://technet.microsoft.com/en-us/library/jj574150.aspx. My experience is that a physical client should be used (if possible) for all Direct Access testing.
  • CNAME records can be used instead of the native Azure-published 'service.cloudapp.net' FQDN for the DA service itself. My client successfully used a vanity CNAME (e.g. directaccess.customer.com) pointing to their 'service.cloudapp.net' name, with a matching wildcard certificate that used the 'vanity' name in the subject field (e.g. *.customer.com).
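
For reference, here is a minimal sketch of the static IP assignment with the Azure PowerShell cmdlets (the virtual network, cloud service, VM name and address are placeholders):

# Confirm the address is available in the virtual network
Test-AzureStaticVNetIP -VNetName "MyVNet" -IPAddress "10.0.1.10"

# Pin the address to the Direct Access server and push the change to Azure
Get-AzureVM -ServiceName "MyCloudService" -Name "MyDAServer" |
    Set-AzureStaticVNetIP -IPAddress "10.0.1.10" |
    Update-AzureVM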

Tips?

The following are some general points to follow to help get a DA service running on Azure:

  • If you're using Windows 8.1 for testing, and you're finding the Direct Access Client Settings GPO is not reaching your test clients, then check the WMI filter to ensure they are not being excluded from targeting. In my customer's environment, the filter only allowed '6.2%' version clients (i.e. Windows 8, not Windows 8.1). Be sure the WMI filter includes 'like 6.3%', otherwise Windows 8.1 clients will not receive the Direct Access policy correctly:


Select * from Win32_OperatingSystem WHERE (ProductType = 3) OR ((Version LIKE '6.2%' OR Version LIKE '6.3%') AND (OperatingSystemSKU = 4 OR OperatingSystemSKU = 27 OR OperatingSystemSKU = 72 OR OperatingSystemSKU = 84)) OR (Version LIKE '6.1%' AND (OperatingSystemSKU = 4 OR OperatingSystemSKU = 27 OR OperatingSystemSKU = 70 OR OperatingSystemSKU = 1 OR OperatingSystemSKU = 28 OR OperatingSystemSKU = 71))
  • Once you've configured the server and distributed the Direct Access Group Policy objects to your target clients, use the Direct Access Troubleshooter utility (http://www.microsoft.com/en-au/download/details.aspx?id=41938); see the picture below. This tool is essential for rapidly determining where failures might be occurring in your configuration. It is limited, however, in that it does not report on certificate failures (e.g. FQDN-to-subject mismatches, expired certificates etc.) and it is limited in reporting on errors related to DNS name lookups (on which I provide a bit more guidance below).

DirectAccessTroubleshooter

  • Verify that the client can establish an IP-HTTPS tunnel to your Direct Access server. If it cannot, then the problem is most likely the certificate you've used in the client section of your DA configuration, or access to the Direct Access FQDN under which you have published the DA service. If the tunnel has established correctly, then you should see connection objects appear in the Windows Firewall location (in both Main Mode and Quick Mode); see the picture below, and the client-side commands sketched after it. If you cannot see any objects, then test the Direct Access FQDN using a web browser: standard testing behaviour is to see no SSL certificate errors and an HTTP '404' error.

SAssociations
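
Two quick client-side checks of the IP-HTTPS tunnel can be run on the DA client itself (a sketch; both commands ship with Windows 8/8.1 and should be run from an elevated prompt):

# Show the IP-HTTPS interface state - an active interface with a last error code of 0x0 indicates the tunnel is up
netsh interface httpstunnel show interfaces

# Ask the client for its overall DirectAccess connection status
Get-DAConnectionStatus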

  • It may be easier to get the solution working with the 'self-signed' certificate option provided by Microsoft first (the certificate is actually copied to the client using the Direct Access Client GPO), then move the solution to a third-party, customer-owned certificate. This will rule out certificate problems first.
  • If the IP-HTTPS tunnel is established, and you see positive connection numbers for clients in the Direct Access console, but the clients still cannot reach internal services over their respective FQDNs, then the problem is most likely the DNS configuration of your solution. The key to testing this is to use the following commands:
    • Open a command prompt (cmd.exe) with administrator credentials
    • Type 'nslookup'
    • Type 'server', then a space, then the IPv6 address of the Direct Access server, for example 'server <ipv6 address>'. This forces the session to use the server you have specified for all subsequent queries
    • Type the FQDN of your local Active Directory, e.g. 'latte.internal'. You should see a successful response from nslookup with an IPv6 address of a domain controller. This IPv6 address is the translated IPv4 address that the DNS64 service is providing to the DA client (and not an IPv6 address bound to the network card of the domain controller, in case you're confused).
  • If you cannot get any positive response for the DNS service on the Direct Access server, check:
    • The IPv6 address of the Direct Access server (see picture below) should match the exact IPv6 address that is in the NRPT policy of the Direct Access client.

DAIPv6

  • To verify the IPv6 address is configured correctly on the Direct Access client in the NRPT policy, the following two methods can be used:
    • 1. On the Direct Access Client, open Powershell.exe with Administrator privileges and type ‘Get-DnsClientNrptPolicy‘:


NRPTpolicy1

  • 2. On the Direct Access Client, open the Registry (Regedit.exe) and browse to the following registry location:

NRPTpolicy2

  • Using either method above, the IPv6 address listed in the NRPT policy HAS TO match the IPv6 address of your Direct Access server. If these addresses do not match, then verify the Direct Access Client GPO settings have reached your clients successfully. The major reason for assigning a static IP address to the DA server in Azure is so that the IPv6 address allocated by the Azure service remains the same, even if the DA server is shut down (and the IP address de-allocated from the VM) from the Azure portal. Alternatively, modify the NRPT registry value above directly to match the IPv6 address used on the DA server (but only do this during the test phase). It is best practice to ensure clients update their DA GPO settings regularly; all changes to DA settings should be delivered by GPOs where possible.
  • Finally, ensure that the Firewall service is enabled on your Direct Access clients for the 'private profile' (at least). The firewall is used to establish the Direct Access connection; if it is disabled on the private profile, the client will not establish a connection to the Direct Access service. The firewall can be disabled without issue on the domain or public profiles (however, that is not generally recommended).

privatefirewall

I hope this article has proved useful to you and your organisation in reviewing or implementing Direct Access on Azure.

How to find out if your Azure Subscription can use the Australian Regions

Today’s a great day to be looking to move services to the public cloud in Australia with Microsoft announcing the availability of their local Microsoft Azure Australian Geography.

You can find out if your existing Azure Subscription has access to the two Regions (Australia East and Australia Southeast) by running the following PowerShell snippet.


# we're assuming you've already setup your subscription for Powershell
# using Import-AzurePublishSettingsFile

Get-AzureLocation | Where-Object {$_.DisplayName.Contains("Australia")}

If you get a result back that looks similar to the below then you’re good to go!

PowerShell Results Window

If you don’t get a result back then stay tuned as there will be more information coming soon from Microsoft on how to get access to these Regions.