Where’s the source!

In this post I will talk about data (aka the source, or the 'sauce')! In IAM there's really one simple concept that is often misunderstood or ignored: the data coming out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality, the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree that the best place to source the majority of your user data is the HR system. Why? Simply put, it's where all the important information about an individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR system would be updated to reflect the change to your job title, as well as any direct report changes that may come as a result.
I also said that data can, and normally will, come from multiple sources. A typical example: generally speaking, temporary and contract staff will not be managed within the central HR system; simply put, the HR team don't care about contractors. So where do they come from, and how are they managed? For smaller organisations this is usually done manually in AD with no real governance in place. For larger organisations this is less than ideal: it can be a nightmare for the IT team to manage and can create quite a large security risk for the business, so a primary data source for contractors becomes necessary. What this is is entirely up to the business and what works for them. I have seen a standard SQL web application being used to populate a database, I've seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email and phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try and keep it simple for the purposes of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository, or "Identity Vault". This keeps an accurate record of the information coming from multiple sources to compile the user's complete identity profile. From this we can then start to manage what information flows to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it is updated in the user's profile in any attached system managed through the enterprise IAM services.
In my next post I will go into the finer details of the central repository, or the "Identity Vault".

So in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. As mentioned above, my next post will dig into the central repository, or the Identity Vault, in detail: how we can set precedence on data from specific systems, to ensure that if there is a difference in the data coming from the different sources, only the highest-precedence value is applied. We will also discuss how we augment the data sets to ensure we are only collecting the information necessary for the management of each user and the applications they use within your business.
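To give a small taste of how precedence works ahead of that post, here is a toy PowerShell sketch (purely illustrative, not MIM configuration) that merges attributes from two sources, with the HR system taking precedence over AD:

# Two data sources contributing to one identity profile.
# Lower precedence number = higher priority (HR wins on conflicts).
$sources = @(
    @{ Name = "HR"; Precedence = 1; Attributes = @{ jobTitle = "Senior Engineer"; mail = $null } }
    @{ Name = "AD"; Precedence = 2; Attributes = @{ jobTitle = "Engineer"; mail = "jo.bloggs@contoso.com" } }
)

# Apply lower-priority sources first so higher-priority values overwrite them.
$identityProfile = @{}
foreach ($source in ($sources | Sort-Object { $_.Precedence } -Descending)) {
    foreach ($attribute in $source.Attributes.GetEnumerator()) {
        if ($null -ne $attribute.Value) {
            $identityProfile[$attribute.Key] = $attribute.Value
        }
    }
}

# Result: jobTitle comes from HR (higher precedence), mail comes from AD (HR had none).
$identityProfile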

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

Windows 10 Domain Join + AAD and MFA Trusted IPs

Background

Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature. For those who are new to this, the short version is that this capability is designed to make the end user experience a little easier by allowing you to define a set of 'trusted locations' (e.g. your corporate network) in which MFA is not required.

This capability works via two methods:

  • Defining a set of 'trusted' IP addresses.  These IP addresses will be the public-facing IP addresses of your web proxies and/or network gateways and firewalls
  • Utilising issued claims from federated users.   This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that the user is coming from a 'trusted location'.  Enabling this capability is discussed in this article.

The Problem

Now, the latter of these is what needs further consideration when you are looking to move to the 'modern world' of Windows 10 and Azure AD (AAD).  Unfortunately, due to some changes in the way that Windows 10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a 'trusted location' for MFA no longer works.

To understand why this is the case, I highly encourage you to first read Jairo Cadena's truly excellent blog series, which discusses in detail how Win10 AAD SSO and its associated services work.  The key takeaways from those posts are that Win10 now has the concept of a Primary Refresh Token (PRT), and with this approach to authentication you now have the following changes:

  • The PRT is what is used to obtain access tokens to AAD applications
  • The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
  • The use of the PRT is built into the Windows 10 credential provider.  Both IE and Edge know to utilise the PRT when communicating with AAD
  • It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
    • That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
    • Instead, the auth flow is a lot more streamlined:  Browser –> Login and provide PRT to AAD –> AAD grants access

Hopefully from this auth flow change you can see why Microsoft have done this.  Because the old way relied on IWA to perform 'seamless' SSO, it only worked when the device was domain joined and had line of sight to a DC to perform Kerberos authentication.  So when connecting externally, you would always see the prompt from the ADFS forms-based authentication.  In the new way, whenever an auth prompt comes in from AAD, the credential provider can see this and immediately provide the cached PRT, providing SSO regardless of your network location.  It also means that you no longer need a domain joined machine to achieve 'seamless' SSO!

The side effect though is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility of context-based claims like insidecorporatenetwork, which subsequently means that specific Trusted IPs feature no longer works.   While this is most commonly used for MFA scenarios, be aware that this will also apply to any Azure AD Conditional Access rules you define that use the Trusted IPs criteria (e.g. block access to an app when external).

Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a Fiddler capture, and compare the sign-in experience between IE or Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware).

The Solution/Workaround?

So, you might ask, how do you fix this apparent gap in capability?   Does this mean you're going backwards now?   For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable in order to drive MFA (or conditional access) behaviours between internal and external users.   The federated user claim method was a simple, low-admin way of solving that problem.

To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve.  If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.

When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their 'trustworthiness' level through a separate means.  This might be through a security access card that lets them into an office location, or utilising a corporate laptop that can perform a VPN connection.  Both of these ultimately let them connect to the internal network, and that is what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.

So with that in mind, what you could then do is expand that criteria to include domain joined machines.  That is, if a user is utilising a corporate issued device that has been domain joined (and registered to AAD), this can now act as the "something you have" aspect of the MFA mantra to prove trustworthiness, and so you no longer need to differentiate whether they are actually internal or external.

To achieve this, you'll need to use Azure AD Conditional Access policies and modify your access grant rules to look something like the below:

Win10PRT1

You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.

Side Note:  If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.

Side Note 2: You can also include Intune-managed mobile devices in your 'bypass criteria' if you include the Require device to be marked as compliant criterion as well.

Fun Fact: You'll note that in my image the (preview) reference for 'require one of the selected controls' is still there.  This is because until recently (approx. May/June 2017), the MFA or domain joined device criteria didn't actually work because of the behaviour/order of how the evaluations were being done.  When AAD was evaluating the domain joined criteria, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an 'or' scenario.   This has now been fixed and I expect the (preview) tag to be removed soon.

Summary

The move to the modern 'anywhere, any device' approach to end user computing means there is a need to start reassessing how you approach security.  Old-world views of security being defined via network boundaries will eventually disappear, and instead you'll need to consider user- and device-based contexts to define when to initiate security controls.

With Windows 10's approach to authentication with AAD, internal versus external access is no longer relevant and should not be used as your criteria for driving MFA or conditional access. Instead, use device-based conditions such as 'device compliance' or 'domain join' as one of your deciding factors.

Enabling and Scripting Azure Virtual Machine Just-In-Time Access

Last week (19 July 2017) one of Microsoft’s Azure Security Center’s latest features went from Private Preview to Public Preview. The feature is Azure Just in time Virtual Machine Access.

What is Just in time Virtual Machine access?

Essentially, JIT VM Access is a wrapper that automates an Azure Network Security Group rule set to grant access to an Azure VM (or VMs) for a temporary period, on a set of network ports, restricted to a source IP/network.

Personally I’d done something a little similar earlier in the year by automating the update of an NSG inbound rule to allow RDP only for my current public IP Address. Details on that are here. But that is essentially now redundant.

Enabling Just in time VM Access

In the Azure Portal, select the Security Center icon.

In the central pane you will find an option to Enable Just in time VM Access. Select that link.

In the right hand pane you will then see a link for Try Just in time VM Access. Select that.

If you have not previously enabled Security Center you will need to select a Pricing Tier. The Free Tier does not include JIT VM Access, but you should get the option of a 60-day trial of the Standard Tier, which does.

With everything enabled you can select Recommended to see a list of VMs that JIT VM Access can be enabled for.

I’ve selected one of mine from the list and then selected Enable JIT on 1 VM.

In the Enable JIT VM Config you can add and remove ports as required, and set the maximum timeframe for the access. The Per-Request option for the source IP will enable the rule for the requester and their current IP.  Select Ok.

With the rule configured you can now Request access.

When requesting access we can tailor it based on what is in the rule: select the ports we want from within the policy, choose IP Range or Current IP, and reduce the timeframe if required. Then select Open Ports.

For the VM we can now see that JIT VM Access has been requested and is currently active.

Looking at the Network Security Group associated with the VM, we can see the rules that JIT VM Access has put in place. We can also see that the rules are scoped to my current IP address.

Automating JIT VM Access Requests via PowerShell

Now that we have Just-in-time VM Access configured for our VM, the reality is I just want to invoke the access request via PowerShell, start up my VM (as they would normally be stopped unless in use) and utilise the resource.

The script below is a simplified version of my previous script to automate NSG rules, detailed here. It assumes you have enabled JIT VM Access as per the manual process above, that your VM would normally be in an off state, and that you're about to enable access, start it up and connect.

You will need the AzureRM and the new Azure-Security-Center PowerShell modules. If you are running PowerShell 5.1 or later you can install them by un-remarking lines 3 and 5.

Update lines 13, 15 and 19 for your Resource Group name, Virtual Machine name and the link to your RDP file. Update line 21 for the number of hours to request access (in line with your policy).

Line 28 uses the new Invoke-ASCJITAccess cmdlet from the Azure-Security-Center PowerShell module to request access.
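The full script isn't reproduced here, but a minimal sketch of the flow it describes would look something like the below. Note this is a sketch only: the Invoke-ASCJITAccess parameter names are assumptions based on the description above (check Get-Help Invoke-ASCJITAccess for the actual signature), and the line numbers referenced above relate to the original script rather than this sketch.

# Install the required modules (PowerShell 5.1 or later). Un-remark on first run.
# Install-Module -Name AzureRM
# Install-Module -Name Azure-Security-Center

Import-Module AzureRM
Import-Module Azure-Security-Center

# Update these for your environment.
$resourceGroupName = "MyResourceGroup"
$vmName = "MyJITVM"
$rdpFile = "C:\RDP\MyJITVM.rdp"
$accessHours = 2   # keep within the maximum window defined in your JIT policy

# Sign in to Azure.
Login-AzureRmAccount

# Request JIT access to the VM (cmdlet from the Azure-Security-Center module;
# parameter names here are illustrative assumptions).
Invoke-ASCJITAccess -ResourceGroupName $resourceGroupName -VM $vmName -Hours $accessHours

# Start the VM (normally off when not in use) and connect via RDP.
Start-AzureRmVM -ResourceGroupName $resourceGroupName -Name $vmName
Start-Process $rdpFile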

Summary

This simplifies the management of NSG rules for access to VMs and reduces the exposure of VMs to brute force attacks. It also simplifies access to the bunch of VMs I only have running on an ad-hoc basis.

Looking into the Azure-Security-Center PowerShell module, there are also cmdlets to manage the JIT policies themselves.

Social Engineering Is A Threat To Your Organisation

social_engineering
Of the many attacks, hacks and exploits perpetrated against organisations, one of the most common vulnerabilities businesses face and need to guard against is the result of the general goodness (or weakness, depending on how you choose to look at it) of our human nature, exploited through means of social engineering.

Social engineering is a very common problem in cyber security. It consists of the simple act of getting an individual to unwittingly perform an unsanctioned or undesirable action under false pretenses, whether that is granting access to a system, clicking a poisoned link, revealing sensitive information or any other improperly authorised action. The act relies on the trusting nature of human beings and their drive to help and work with one another, all of which makes social engineering hard to defend against and detect.

Some of the better known forms of social engineering include:

Phishing

Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business—a bank, or credit card company—requesting “verification” of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate—with company logos and content—and has a form requesting everything from a home address to an ATM card’s PIN or a credit card number. [Wikipedia]

Tailgating

An attacker, seeking entry to a restricted area secured by unattended, electronic access control, e.g. by RFID card, simply walks in behind a person who has legitimate access. Following common courtesy, the legitimate person will usually hold the door open for the attacker or the attackers themselves may ask the employee to hold it open for them. The legitimate person may fail to ask for identification for any of several reasons, or may accept an assertion that the attacker has forgotten or lost the appropriate identity token. The attacker may also fake the action of presenting an identity token. [Wikipedia]

Baiting

Baiting is like the real-world Trojan horse that uses physical media and relies on the curiosity or greed of the victim. In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and wait for victims. For example, an attacker may create a disk featuring a corporate logo, available from the target’s website, and label it “Executive Salary Summary Q2 2012”. The attacker then leaves the disk on the floor of an elevator or somewhere in the lobby of the target company. An unknowing employee may find it and insert the disk into a computer to satisfy his or her curiosity, or a good Samaritan may find it and return it to the company. In any case, just inserting the disk into a computer installs malware, giving attackers access to the victim’s PC and, perhaps, the target company’s internal computer network. [Wikipedia]

Water holing

Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe to do things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website he or she often visits. So, the attacker prepares a trap for the unwary prey at a favored watering hole. This strategy has been successfully used to gain access to some (supposedly) very secure systems. [Wikipedia]

Quid pro quo

Quid pro quo means something for something. An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually this person will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker will “help” solve the problem and, in the process, have the user type commands that give the attacker access or launch malware. [Wikipedia]

Now do something about it!

As threats to an organisation's cyber security go, social engineering is a significant and prevalent threat, and not to be underestimated.

However, the following are some of the more effective means of guarding against it.

  1. Be vigilant…
  2. Be vigilant over the phone, through email and online.
  3. Be healthily skeptical and aware of your surroundings.
  4. Always validate the requestor’s identity before considering their request.
  5. Validate the request with another member of staff if necessary.

Means of mitigating social engineering attacks:

  1. Use different logins for all resources.
  2. Use multi-factor authentication for all sensitive resources.
  3. Monitor account usage.

Means of improving your staff’s ability to detect social engineering attacks:

  1. Educate your staff.
  2. Run social engineering simulation exercises across your organisation.

Ultimately, of course, the desired outcome of trying to bolster your organisation's ability to detect a social engineering attack is a situation where the targeted user isn't fooled by the attempt against their trust and acts accordingly, such as knowing not to click the link in an email purporting to help them retrieve their lost banking details.

Some additional tips:

  1. Approach all unsolicited communications with skepticism, no matter who the originator claims to be.
  2. Pay close attention to the target URL of all links by hovering your cursor over them to hopefully reveal their true destination.
  3. Look to the HTTPS digital certificate of all sensitive websites you visit for identity information.
  4. Use spam filtering, antivirus software and anti-phishing software.

Cloud Security Research: Cross-Cloud Adversary Analytics

Newly published research from security firm Rapid7 is painting a worrying picture of hackers and malicious actors increasingly looking for new vectors against organizations with resources hosted in public cloud infrastructure environments.

Some highlights of Rapid7’s report:

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate – 86% and 74%, respectively – than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL Injection, PHP webshell injection and credentials attacks against ssh, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).

Findings included nearly a quarter of hosts deployed in IBM's SoftLayer public cloud having databases publicly accessible over the internet, which should be a privacy and security concern for those organisations and their customers.

Many of Google's cloud customers leave shell access publicly accessible over protocols such as SSH and, much worse still, telnet, which is worrying to say the least.

Businesses using the public cloud are being increasingly probed by outsiders looking for well-known vulnerabilities such as OpenSSL Heartbleed (CVE-2014-0160), Stagefright (CVE-2015-1538) and POODLE (CVE-2014-3566), to name but a few.

Digging further into the methodologies, to see whether these attacks were random or targeted, it appears these actors are honing their skills in tailoring their probes and attacks to specific providers and organisations.

Rapid7's research was conducted by means of honey traps: hosts and services made available solely for the purpose of capturing untoward activity, with a view to studying how these malicious outsiders do their work. What's more, the company has partnered with Microsoft, Amazon and others under the auspices of projects Heisenberg and Sonar to leverage big data analytics to mine the results of their findings and to scan the internet for trends.

Case in point: project Heisenberg saw the deployment of honeypots in every geography, in partnership with all the major public cloud providers, and scanned for compromised digital certificates in those environments, while project Sonar scanned millions of digital certificates on the internet for signs of the same.

However, while the report provides clear evidence that hackers are tailoring their attacks to different providers and organisations, it reads as more of an indictment of the poor standard of security being deployed by some organisations in the public cloud today than a statement on the security practices of the major providers.

The 2016 national exposure survey.

Read about the Heisenberg cloud project (slides).

Security Vulnerability Revealed in Azure Active Directory Connect

Microsoft ADFS

The existence of a new and potentially serious privilege escalation and password reset vulnerability in Azure Active Directory Connect (AADC) was recently made public by Microsoft.

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnectsync-whatis

Fixing the problem can be achieved by upgrading to the latest available release of AADC, 1.1.553.0.

https://www.microsoft.com/en-us/download/details.aspx?id=47594

The Microsoft security advisory qualifies the issue as important; it was published on TechNet under reference number 4033453:

https://technet.microsoft.com/library/security/4033453.aspx#ID0EN

Azure Active Directory Connect, as we know, takes care of all operations related to the synchronisation of identity information between on-premises environments and Azure Active Directory in the cloud. The tool is also the recommended successor to Azure AD Sync and DirSync.

Microsoft were quoted as saying…

The update addresses a vulnerability that could allow elevation of privilege if Azure AD Connect Password writeback is mis-configured during enablement. An attacker who successfully exploited this vulnerability could reset passwords and gain unauthorized access to arbitrary on-premises AD privileged user accounts.

When setting up the permission, an on-premises AD Administrator may have inadvertently granted Azure AD Connect with Reset Password permission over on-premises AD privileged accounts (including Enterprise and Domain Administrator accounts)

In this case, as stated by Microsoft, the risk consists of a situation where a malicious administrator resets the password of an Active Directory user using "password writeback", allowing the administrator in question to gain privileged access to a customer's on-premises Active Directory environment.

Password writeback allows Azure Active Directory to write passwords back to an on-premises Active Directory environment. It helps simplify the process of setting up and managing complicated on-premises self-service password reset solutions, and also provides a rather convenient cloud-based means for users to reset their on-premises passwords.

Users may look for confirmation of their exposure to this vulnerability by checking whether the feature in question (password writeback) is enabled, and whether AADC has been granted Reset Password permission over on-premises AD privileged accounts.
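One way to check the latter is to dump the ACL on a privileged account and look for the Reset Password right granted to the AADC AD DS connector account, for example with dsacls (the distinguished name below is hypothetical; substitute an account from your own domain):

dsacls "CN=Administrator,CN=Users,DC=contoso,DC=com" | findstr /i /c:"Reset Password"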

A further statement from Microsoft on this issue read…

If the AD DS account is a member of one or more on-premises AD privileged groups, consider removing the AD DS account from the groups.

CVE reference number CVE-2017-8613 was attributed to the vulnerability.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8613

Resolving the ‘Double Auth’ prompt issue in ADFS with Azure AD Conditional Access MFA

As mentioned in my previous post, Using ADFS on-premises MFA with Azure AD Conditional Access, if you have implemented Azure AD Conditional Access to enforce MFA for all your Cloud Apps, and you are using the SupportsMFA=true parameter to direct MFA execution to your on-premises ADFS MFA server, you may have encountered what I call the 'Double Auth' prompt issue.

While this doesn't happen across all Cloud Apps, you will see it on the odd occasion (in particular with the Intune Company Portal and the Azure AD PowerShell cmdlets), and it has the following symptoms:

  1. User signs into an Azure AD app (e.g. Azure AD PowerShell with Modern Auth support)
  2. User sees an auth prompt and enters their username, which redirects to ADFS
  3. User enters credentials and clicks enter
  4. It looks like the sign-in succeeds, but then ADFS reappears and the user is prompted to enter credentials again
  5. After the second successful attempt, the user is then prompted for MFA as expected

 

DoubleAuth

Understanding why this happens relies on two things:

  1. The background I provided in the blog post referenced above; specifically, that when SupportsMFA is being used, Azure AD sends two requests to ADFS instead of one as part of the authentication process when MFA is involved.
  2. The configuration and behaviour of the prompt=login parameter in Azure AD, which is discussed in this Microsoft Docs article.

So to delve into this, let’s crack out our trusty Fiddler tool to look at what’s happening:

DoubleAuth3a.png

Highlighted in the image above is the culprit.  You'll see in the request strings that the user is being sent to ADFS with two key parameters: wauth=… and wfresh=0.  What is happening here is that this particular Azure AD application has decided that, as part of sign-in, it wants to ensure 'fresh credentials' are provided (say, to ensure the correct user creds are used).  It does this by telling Azure AD to generate a request with prompt=login; however, as noted in the article referenced, because some legacy ADFS systems don't understand this 'modern' parameter, the default behaviour is for Azure AD to pre-emptively translate the request into two 'WS-Fed' parameters that they can understand.   In particular, wfresh=0 as per the WS-Fed spec means:

…  If specified as “0” it indicates a request for the IP/STS to re-prompt the user for authentication before issuing the token….

The problem of course is that ADFS sees the wfresh=0 parameter in both requests and will abide by that behaviour by prompting the user for credentials each time!

So, the fix for this is fairly simple and is in fact (very vaguely) called out in the article I've referenced above: ensure that Azure AD uses the NativeSupport configuration, so that it sends the parameter as-is for ADFS to interpret instead of pre-emptively translating it.

The specific command to run is:

Set-MsolDomainFederationSettings -DomainName yourFederatedDomain.com -PromptLoginBehavior NativeSupport
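To confirm the change has applied (these DomainFederationSettings changes can take a while to propagate, as noted below), you can read the setting back; PromptLoginBehavior should come back as NativeSupport:

Get-MsolDomainFederationSettings -DomainName yourFederatedDomain.com | Select-Object PromptLoginBehavior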

The prerequisite to this fix is to ensure that you are running either:

  • ADFS 2016
  • ADFS 2012 R2 with the July 2016 update rollup

Once this update is applied (remember that these DomainFederationSettings changes can take 15-30 mins), you'll be able to see the difference via Fiddler: ADFS is sent a prompt=login parameter instead, and only for the first request, so the overall experience is that the credential prompt only occurs once.

DoubleAuth4.png

Hope that saves a few hairs for anyone out there who’s come across this issue!

Using ADFS on-premises MFA with Azure AD Conditional Access

With the recent announcement of General Availability of the Azure AD Conditional Access policies in the Azure Portal, it is a good time to reassess your current MFA policies, particularly if you are utilising ADFS with on-premises MFA, either via a third-party provider or with something like Azure MFA Server.

Prior to conditional MFA policies being possible, when utilising on-premises MFA with Office 365 and/or Azure AD, the MFA rules were generally enabled on the ADFS relying party trust itself.  The main limitation with this of course is the inability to define different MFA behaviours for the various services behind that relying party trust.  That is, within Office 365 (Exchange Online, SharePoint Online, Skype for Business Online etc.) or through different Azure AD apps that may have been added via the app gallery (e.g. ServiceNow, SalesForce etc.).  In some circumstances you may have been able to define some level of granularity utilising custom authorisation claims, such as bypassing MFA for ActiveSync and legacy authentication scenarios, but that method was reliant on special client headers or the authentication endpoints being used, and hence was quite limited in its use.

Now with Azure AD Conditional Access policies, the definition and logic of when to trigger MFA can, and should, be driven from the Azure AD side given the high level of granularity and varying conditions you can define. This doesn’t mean though that you can’t keep using your on-premises ADFS server to perform the MFA, you’re simply letting Azure AD decide when this should be done.

In this article I’ll show you the method I like to use to ‘migrate’ from on-premises MFA rules to Azure AD Conditional Access.  Note that this is only applicable for the MFA rules for your Azure AD/Office 365 relying party trust.  If you are using ADFS MFA for other SAML apps on your ADFS farm, they will remain as is.

Summary

At a high level, the process is as follows:

  1. Configure Azure AD to pass ‘MFA execution’ to ADFS using the SupportsMFA parameter
  2. Port your existing ADFS MFA rules to an Azure AD Conditional Access (CA) Policy
  3. Configure ADFS to send the relevant claims
  4. “Cutover” the MFA execution by disabling the ADFS MFA rules and enabling the Azure AD CA policy

The ordering here is important, as by doing it like this, you can avoid accidentally forcing users with a ‘double MFA’ prompt.

Step 1:  Using the SupportsMFA parameter

The crux of this configuration is the use of the SupportsMFA parameter within your MSOLDomainFederationSettings configuration.

Setting this parameter to True will tell Azure AD that your federated domain is running an on-premises MFA capability and that whenever it determines a need to perform MFA, it is to send that request to your STS IDP (i.e. ADFS) to execute, instead of triggering its own ‘Azure Cloud MFA’.

Performing this step is a simple MSOL PowerShell command:

Set-MsolDomainFederationSettings -DomainName yourFederatedDomain.com -SupportsMFA $true

Pro Tip:  This setting can take 15-30 mins to take effect, so make sure you factor this into your change plan.  If you don't wait for it to kick in before cutting over, your users will get 'double MFA' prompts.
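You can read the setting back to confirm it has propagated before scheduling the cutover (the property should return True):

(Get-MsolDomainFederationSettings -DomainName yourFederatedDomain.com).SupportsMfa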

Step 2:  Porting your existing MFA Rules to Azure AD Conditional Access Policies

There's a whole article to be written about what Azure AD CA policies can do nowadays, but for our purposes let's use the two most common examples of MFA rules:

  1. Bypass MFA for users that are a member of a group
  2. Bypass MFA for users on the internal network*

Item 1 is pretty straightforward; just ensure your Azure AD CA policy has the following:

  • Assignment – Users and Groups:
    • Include:  All Users
    • Exclude:  Bypass MFA Security Group  (simply reuse the one used for ADFS if it is synced to Azure AD)

MFABypass1

Item 2 requires the use of the Trusted Locations feature.  Note that at the time of writing, this is still the 'old' MFA Trusted IPs feature hosted in the Azure Classic Portal.   Note*:  If you are using Windows 10 Azure AD Joined machines this feature doesn't work; why this is the case will be an article in itself, so I'll add a link here when I've written that up.

So within your Azure AD CA policy do the following:

  • Conditions – Locations:
    • Include:  All Locations
    • Exclude:  All Trusted IPs

MFABypass2.png

Then make sure you click on Configure all trusted locations, which takes you to the Azure Classic Portal.  From there you must set Skip multi-factor authentication for requests from federated users on my intranet.

MFABypass3.png

This effectively tells Azure AD that a 'trusted location' is any authentication request that comes in with an InsideCorporateNetwork claim.

Note:  If you don’t use ADFS or an IDP that can send that claim, you can always use the actual ‘Trusted IP addresses’ method.

Now you can define exactly which Azure AD apps you want MFA to be enabled for, instead of all of them as you had originally.

MFABypass7.png

Pro Tip:  If you are going to enable MFA on All Cloud Apps to start off with, check the end of this article for some extra caveats you should account for, else you'll start breaking things.

Finally, to make this Azure AD CA policy actually perform MFA, set the access controls:

MFABypass8.png

For now, don’t enable the policy just yet as there is more prep work to be done.

Step 3:  Configure ADFS to send all the relevant claims

So now that Azure AD is ready for us, we have to configure ADFS to actually send the appropriate claims across to ‘inform’ it of what is happening or what it is doing.

The first is to make sure we send the InsideCorporateNetwork claim so Azure AD can apply the 'bypass for all internal users' rule.  This is well documented everywhere, but the short version is: within your Microsoft Office 365 Identity Platform relying party trust in ADFS, add a new Issuance Transform Rule to pass through the Inside Corporate Network claim:

MFABypass4
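If you prefer a custom rule over the pass-through wizard, a rule along these lines does the job (a sketch only, using the standard claim type URI ADFS generates for this claim):

@RuleName = "Pass through InsideCorporateNetwork"
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
 => issue(claim = c);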

Fun fact:   The Inside Corporate Network claim is automatically generated by ADFS when it detects that the authentication was performed on the internal ADFS server, rather than through the external ADFS proxy (i.e. WAP).  This is why it's a good idea to always use an ADFS proxy as opposed to simply reverse proxying your ADFS; without it you can't easily tell whether it was an 'internal' or 'external' authentication request (plus it's more secure).

The other important claim to send through is the authnmethodsreferences claim.  You may already have this in place if you were following the Microsoft TechNet documentation when setting up ADFS MFA; if so, you can skip this step.

This claim is generated when ADFS successfully performs MFA, so think of it as a way for ADFS to tell Azure AD that it has performed MFA for the user.

MFABypass6
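The equivalent pass-through rule in claim rule language looks like this (again a sketch, using the standard claim type URI):

@RuleName = "Pass through authnmethodsreferences"
c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
 => issue(claim = c);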

Step 4: “Cutover” the MFA execution

So now that everything is prepared, the ‘cutover’ can be performed by doing the following:

  1. Disable the MFA rules on the ADFS Relying Party Trust
    Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $null
  2. Enable the Azure AD CA Policy
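If you want to double-check the first step took effect before flipping the CA policy on, the relying party trust should now show no additional authentication rules (the property should come back empty):

(Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform").AdditionalAuthenticationRules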

Now if it all goes as planned, what should happen is this:

  1. User attempts to sign into an Azure AD application.  Since their domain is federated, they are redirected to ADFS to sign in.
  2. User performs standard username/password authentication.
    • If internal, this is generally 'SSO' with Windows Integrated Auth (WIA).  Most importantly, this user will get an 'InsideCorporateNetwork' = true claim
    • If external, this is generally a forms-based credential prompt
  3. Once successfully authenticated, they are redirected back to Azure AD with a SAML token.  This is actually when Azure AD assesses the CA policy rules and determines whether the user requires MFA or not.
  4. If they do, Azure AD generates a new ADFS sign-in request, this time specifically stating via the wauth parameter to use multipleauthn.  This effectively tells ADFS to execute MFA using its configured providers
  5. Once the user successfully completes MFA, they go back to Azure AD with a new SAML token containing a claim that tells Azure AD MFA has now been performed, and it subsequently lets the user through

This is what the above flow looks like in Fiddler:

MFABypass9.png

This is what your end-state SAML token should look like as well:

MFABypass10

The main takeaway is that step 4 is the new auth flow introduced by moving MFA evaluation into Azure AD.  Prior to this, step 2 would simply have performed both username/password authentication and MFA in the same instance, rather than over two requests.

Extra Considerations when enabling MFA on All Cloud Apps

If you decide to take an 'exclusion' approach to MFA enforcement for Cloud Apps, be very careful.  In fact, you'll even see Microsoft giving you a little extra warning about this.

MFABypass12

The main difference between taking this approach and doing MFA enforcement at the ADFS level is that you are now enforcing MFA on all cloud identities as well!  This may very well unintentionally break some things, particularly if you're using 'cloud identity' service accounts (e.g. for provisioning scripts or the like).  One thing that will definitely break is the AADConnect account that is created for directory synchronisation.

So at a very minimum, make sure you remember to add the On-Premises Directory Synchronization Service Account(s) into the exclusion list for your Azure AD MFA CA policy.

The very last thing to call out is that some Azure AD applications, such as the Intune Company Portal and the Azure AD PowerShell cmdlets, can cause a 'double ADFS prompt' when MFA evaluation is being done in Azure AD.   The reason for this, and the fix, is covered in my next article, Resolving the 'Double Auth' prompt issue in ADFS with Azure AD Conditional Access MFA, so make sure you check that out as well.

 

An Identity Consultants Summary of the recent Cloud Identity Summit 2017

I've just returned from Chicago and the Cloud Identity Summit that was held at the Sheraton Grand Chicago. It was my first CIS conference and it reminded me a lot of the now defunct Quest Experts Conference and The Burton Group Conference, both in terms of the content and the scale. It definitely had a more intimate feel than the massive Microsoft Ignite category of event, which attracts 25k+ attendees. The 1400 attendees at CIS was a record for this event, but it still meant you got 1:1 time with vendors and speakers, which is fantastic.

Just like Quest's "The Experts Conference" (TEC) and The Burton Group Conference, if you pick your sessions based on the synopsis and the speaker, the sessions can be highly technical (400+ level) and worthy of the 30 hr journey to get to the conference. I focused on my particular subject of identity, so this summary is biased towards that track.

Here is a summary of my takeaways, which I'll briefly detail in this post:

  • ID Pro
  • Strong Authentication / Goodbye Passwords
  • PAM and IGA
  • SCIM 2.0
  • FIDO 2.0

And before I forget: CIS is dead; long live CIS, now known as Identiverse, which will be held in Boston in June (24-27) 2018. Ping Identity have renamed the conference moving forward.

 

ID Pro

Ian Glazer, in his keynote on Tuesday, announced what has been missing from the IDAM community: a professional organisation to represent it. Named ID Pro, with all the details available here, it is a professional organisation for IDAM practitioners. Join now here for US$150. Supported by the Kantara Initiative, this organisation already has the support and backing of the industry.

 

Strong Authentication / Goodbye Passwords

There were numerous sessions around this topic, and it was fantastic to see that the ecosystem to support the holy-grail future of no passwords, yet strong authentication, is now present. Alex Simons summed it up nicely in his keynote on Wednesday by setting the goal of 1 billion logins (without passwords) by 2018, launching the hashtag to go with it: #1Billionby2018. Check out the FIDO 2.0 summary further below.

 

Privileged Account Management and Identity Governance & Access

Privileged Account Management and Identity Governance & Access are better together. We knew this anyway, and I've been approaching it this way with my solutions. It was therefore refreshing to be entertained by Kelly Grizzle in his session on when PAM meets IGA through their mutual friend SCIM. In essence, SailPoint have been working heavily on their IGA offering, but with the help of SCIM, now at 2.0, they've been working with PAM vendors such as CyberArk to provide the integration and visibility the two need. Kelly's entertaining and informative presentation can be found here.

 

SCIM 2.0

Mentioned above in the PAM and IGA summary, SCIM 2.0 is now ready for prime time. Whilst SCIM has been around for some time, it hasn't seen widespread adoption in my circles, but that's about to change. Microsoft have been using it as a primary integration method with Azure AD, with the likes of Facebook for Work. Microsoft also have a SCIM MA for Microsoft Identity Manager. I'll be experimenting with it in the near future.

 

FIDO 2.0

FIDO first came onto my radar about 4 years ago. It is in a lot of the workflows we do every day (if you have a modern operating system – Windows 8+ – and biometrics on your device, or a FIDO compliant token). With FIDO 2.0, and U2F v1.1 and UAF v1.1 now complete, the foundations and enabling services for strong authentication are ready to go.

 

Summary of the Summary

I’ve tried hard to not make this too wordy, but the takeaway is this. Identity is the foundation of who you are and what you do. With all the other disruption in the IT industry around cloud and mobility, identity is always the enabler. Get it right and you can make life easier for your users, more visible for your information security officers and auditable for your compliance requirements. Just keep up, as it’s moving very fast.

Another global ransomware attack is fast spreading!!!

With the events that have escalated since last night, here is a quick summary and initial response from Kloud on how organisations can take proactive steps to mitigate the current situation. Please feel free to distribute internally as you see fit.

Background

A new wave of powerful cyber-attacks (Petya, as referred to in the comments below) hit Europe on Tuesday in a possible reprise of a widespread ransomware assault in May that affected 150 countries.  As reported, this ransomware's demands are targeting government and key infrastructure systems all over the world.

Those behind the attack appear to have exploited the same type of hacking tool used in the WannaCry ransomware attack that infected hundreds of thousands of computers in May 2017, before a British researcher created a 'kill switch'. This is a different variant to the old WannaCry ransomware; however, this nasty piece of ransomware works very differently from any other known variants, due to the following:

  • Petya uses the NSA Eternalblue exploit but also spreads in internal networks with WMIC and PSEXEC. In some cases fully patched systems can also get hit with this exploit;
  • Unlike other traditional ransomware, Petya does not encrypt files on a targeted system one by one;
  • Instead, Petya reboots victims' computers and encrypts the hard drive's master file table (MFT), rendering the master boot record (MBR) inoperable and restricting access to the full system by seizing information about file names, sizes, and locations on the physical disk; and
  • Petya ransomware replaces the computer’s MBR with its own malicious code that displays the ransom note and leaves computers unable to boot.

The priority is to apply emergency patches and ensure you are up to date.

Make sure users across the business do not click on links and attachments from people or email addresses they do not recognise.

How is it spreading?

Like most ransomware attacks, 'Petya' is exploiting the SMBv1 EternalBlue exploit, just like WannaCry, and taking advantage of unpatched Windows machines.

Petya ransomware is successful in spreading because it combines both a client-side attack (CVE-2017-0199) and a network-based threat (MS17-010).

EternalBlue is a Windows SMB exploit leaked by the infamous hacking group Shadow Brokers in its April 2017 data dump, who claimed to have stolen it from the US intelligence agency NSA, along with other Windows exploits.

What assets are at risk?

All unpatched computer systems and all users who do not practice safe online behaviour.

What is the impact?

The nature of the attack itself is not new; ransomware spread by emails with malicious links or attachments has been increasing in recent years.

This ransomware attack follows a relatively typical formula:

  • Locks all the data on a computer system;
  • Provides instructions on what to do next, including a ransom demand (typically US$300 in bitcoins);
  • Demand includes paying ransom in a defined period of time otherwise the demand increases, leading to complete destruction of data; and
  • Uses RSA-2048 bit encryption, which makes decryption of data extremely difficult. (next to impossible in most cases)

Typically, these types of attacks do not involve the theft of information, but rather focus on generating cash by preventing critical business operations until the ransom is paid, or the system is rebuilt from unaffected backups.

This attack involving ransomware known as ‘NotPetya’, also referred to as ‘Petya’ or ‘Petwrap’ spreads rapidly, exploiting a weakness in unpatched versions of Microsoft Windows (Windows SMB1 vulnerability).

Immediate Steps if compromised…

Petya ransomware encrypts systems after rebooting the computer, so if your system is infected with Petya and it tries to restart, do not power it back on.

“If machine reboots and you see a message to restart, power off immediately! This is the encryption process. If you do not power on, files are fine.” <Use a LiveCD or external machine to recover files>

Steps to mitigate the risk…

We need to act fast. Organisations need to ensure staff are made aware of the risk, reiterating additional precautionary measures, whilst simultaneously ensuring that IT systems are protected including:

  • Prioritise patching: address systems performing critical business functions or holding critical data first;
  • Immediately patch Windows machines in the environment (post proper testing). The patch for the weakness identified was released in March 2017 as part of MS17-010 / CVE-2017-01;
  • Disable the unsecured, 30-year-old SMBv1 file-sharing protocol on your Windows systems and servers (a quick PowerShell sketch for this follows this list);
  • Since Petya ransomware is also taking advantage of the WMIC and PSEXEC tools to infect fully-patched Windows computers, you are also advised to disable WMIC (Windows Management Instrumentation Command-line);
  • Companies should forward a cyber security alert and communications to their employees (education is the key), requesting them to be vigilant at this time of heightened risk and reminding them to:
    • Not open emails from unknown sources.
    • Be wary of unsolicited emails that demand immediate action.
    • Not click on links or download email attachments sent from unknown users or which seem suspicious.
    • Follow the clearly defined actions for reporting incidents.
  • Antivirus vendors have been working to release signatures in order to protect systems. Organisations should ensure that all systems are running current AV DAT signatures; focus should be on maintaining currency;
  • Maintain up-to-date backups of files and regularly verify that the backups can be restored. Priority should be on ensuring systems performing critical business functions or holding critical data are verified first;
  • Monitor your network, system, media and logs for any malicious software, possible ex-filtration of data, abnormal behaviour or unauthorised network connections;
  • Practice safe online behaviour; and
  • Report all incidents to your (IT) helpdesk or Security Operations team immediately.
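For the SMBv1 item above, a minimal sketch on Windows 8.1/Server 2012 R2 and later uses the SmbServerConfiguration cmdlets (test before rolling out broadly, as legacy clients and devices may still rely on SMBv1):

# Check whether the SMBv1 server-side protocol is currently enabled
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable it
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force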

These attacks happen quickly and unexpectedly. You also need to act swiftly to close any vulnerabilities in your systems.


What to do if you believe you have been attacked by "Petya" and need assistance?

Under the new mandatory breach reporting laws, organisations may have obligations to report this breach to the Privacy Commissioner (section 26WA).

If you or anyone in the business would like to discuss the impact of this attack or other security related matters, we are here to help.

Please do not hesitate to contact us for any assistance.