A Way to Keep Logs Safe on Disposable Servers

Automatic replacement of failed cloud infrastructure is a life-saver. Having instances recover themselves with no ops-team intervention is not just a life-saver but a sleep-saver. Relieved of the responsibility of restoring service, often the only outstanding task is to explain what happened.

What if the thing that failed was an EC2 application server running Red Hat, and the logs were on the server’s now-replaced volumes? The contents of /var/log are gone, and while we might be capturing them in a log aggregator like Splunk or a syslog system of some sort, those aren’t always simple to compile into a report or send to an application vendor for a post mortem. I had this problem recently, and the solution was to move logs off the instance using systemd.path (path unit) configuration. This takes the place of the older inotify-tools approach, which isn’t available in the repositories for EC2 Red Hat instances.

Using path unit configuration we configure three things:

  1. A path watcher that sets out what directory we’re going to watch for changes
  2. A service file that describes a script to run when that path has any changes
  3. A script that conditionally copies or moves contents off to a file share or somewhere safe in case our instance is retired, whether that was deliberate or a nasty surprise

The path watcher

Create a file in /etc/systemd/system called something descriptive like logsaver.path, with the following contents.

[Unit]
Description=Triggers a service that keeps historical log files out of the blast radius of an instance replacement.
Documentation=man:systemd.path

[Path]
PathModified=/var/log/fragileapp

[Install]
WantedBy=multi-user.target

This has some metadata about what we’re trying to achieve, the path we’re watching for changes, and the target that pulls the unit in at boot.

The service

The next step is to create a file in /etc/systemd/system/logsaver.service (alongside the path unit, so the watcher can find it), with the following contents:

[Unit]
Description=Starts our log rescue script
Documentation=man:systemd.service

[Service]
Type=oneshot
ExecStart=/home/kloud/rescueRotatedLogs.sh

This is again some metadata saying what we’re doing, plus a oneshot service that runs the script specified in ExecStart. Because the path unit and the service share the name logsaver, the path unit triggers this service automatically whenever the watched path changes.

Now that we’ve got a path to watch and the location of a script, let’s write the actual script.

The script

In our rescueRotatedLogs.sh script, we can do the following:

#!/bin/bash
# Move files with a datestamp like 2015-04-03 in them somewhere else
for file in $(ls /var/log/fragileapp | grep -E "20[0-9]{2}\-(0[1-9]|1[0-2])\-([0-2][0-9]|3[0-1])")
do
  mv /var/log/fragileapp/"$file" /mnt/aMountpointToAShareOrDFSNS
done

I know what you’re thinking: regex! But it doesn’t need to be complicated. Linux application logs are usually straightforward, with an application.log and then a bunch of files like 2015-03-22-application.log.gz, or some other format that means the logs have been rotated out while the app writes to the main log. The rotated files are the ones the vendor is going to need for diagnosis, or that the post-incident review committee is going to want, and the regex just finds anything with an even vaguely date-like string in the name (it will match invalid dates too, like year 2099, but hopefully the instance didn’t crash because the app writes weird dates).

Once we have this all set up, we kick it off by starting our path watcher (and, if you want it to survive a reboot, enable it too with systemctl enable logsaver.path):

systemctl start logsaver.path

Now, if you did this in a test system, you’ll notice that all the rotated logs with an ISO 8601 ‘yyyy-mm-dd’ formatted date in them just got sent to the share. You could replace:

mv /var/log/fragileapp/"$file" /mnt/aMountpointToAShareOrDFSNS

with

aws s3 cp /var/log/fragileapp/"$file" s3://AnS3Bucket --sse AES256

If you have the AWS CLI tools installed on your instances and you want to be fully cloud (and who doesn’t).

Or maybe you’re multi-cloud:

az storage blob upload --container-name "$aContainerYouMadeBefore" --file /var/log/fragileapp/"$file" --name "$file"

You could also replace the regex grep with a simple match on zipped files (extensions .zip or .tar.gz), or on anything with an extension of .1 through .12 to handle logrotate-rotated files. Implementing this, we always have the historical logs of our disposable instance in a non-disposable place, and we’re better positioned to understand what happened to our servers without being overly attached to them.


How to access Microsoft Identity Manager Hybrid Report data using PowerShell, Graph API and oAuth2

Hybrid Reporting is a great little feature of Microsoft Identity Manager. A small agent installed on the MIM Sync Server will send reporting data to Azure for MIM SSPR and MIM Group activities. See how to install and configure it here.

But what if you want to get the reporting data without going to the Azure Portal and looking at the Audit Reports? Enter the Azure AD Reports and Events REST API, which is currently in preview. It took me a couple of cracks to get this working, because the documentation is a little vague, especially around accessing it via PowerShell and oAuth2. So I’ve written it up and hope it helps anyone else looking to go down this route.

Gotchas

Accessing the Reports via the API has a couple of caveats that I had to work through:

  • Having the correct permissions to access the report data. Pretty much everything you read tells you that you need to be a Global Admin. Once I had my oAuth tokens I messed around a little and was able to get the following back from the API when purposely using an identity that didn’t have the right permissions. The key piece is “Api request is not from global admin or security admin or security reader role”. I authorized the WebApp using an account that is in the Security Reader role, and can successfully access the report data.
  • Knowing which reports contain the MIM Hybrid Reporting data. They are:

"Name":  "mimSsgmGroupActivityEvents",
"Name":  "mimSsprActivityEvents",
"Name":  "mimSsprRegistrationActivityEvents",

Here is the full list of Reports available as of 24 May 2017.

{
    "Name":  "b2cAuthenticationCountSummary",
    "LicenseRequired":  "False"
}
{
    "Name":  "b2cMfaRequestCount",
    "LicenseRequired":  "False"
}
{
    "Name":  "b2cMfaRequestEvent",
    "LicenseRequired":  "False"
}
{
    "Name":  "b2cAuthenticationEvent",
    "LicenseRequired":  "False"
}
{
    "Name":  "b2cAuthenticationCount",
    "LicenseRequired":  "False"
}
{
    "Name":  "b2cMfaRequestCountSummary",
    "LicenseRequired":  "False"
}
{
    "Name":  "tenantUserCount",
    "LicenseRequired":  "False"
}
{
    "Name":  "applicationUsageDetailEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "applicationUsageSummaryEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "b2cUserJourneySummaryEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "b2cUserJourneyEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "cloudAppDiscoveryEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "mimSsgmGroupActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "ssgmGroupActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "mimSsprActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "ssprActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "mimSsprRegistrationActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "ssprRegistrationActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "threatenedCredentials",
    "LicenseRequired":  "False"
}
{
    "Name":  "compromisedCredentials",
    "LicenseRequired":  "False"
}
{
    "Name":  "auditEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "accountProvisioningEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "signInsFromUnknownSourcesEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "signInsFromIPAddressesWithSuspiciousActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "signInsFromMultipleGeographiesEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "signInsFromPossiblyInfectedDevicesEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "irregularSignInActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "allUsersWithAnomalousSignInActivityEvents",
    "LicenseRequired":  "True"
}
{
    "Name":  "signInsAfterMultipleFailuresEvents",
    "LicenseRequired":  "False"
}
{
    "Name":  "applicationUsageSummary",
    "LicenseRequired":  "True"
}
{
    "Name":  "userActivitySummary",
    "LicenseRequired":  "False"
}
{
    "Name":  "groupActivitySummary",
    "LicenseRequired":  "True"
}

How to Access the Reporting API using PowerShell

What you need to do is:

  • Register a WebApp
  • Get an oAuth2 Authentication Code using an account that is either Global Admin or in the Security Admin or Security Reader Azure Roles
  • Use your Bearer and Refresh tokens to query for the reports you’re interested in

Register your WebApp

In the Azure Portal create a new Web app/API app and assign it https://localhost as the Reply URL. Record the Application ID for use in the PowerShell script.

Assign the Read Directory data permission as shown below

Obtain a key from the Keys option on your new Web App.  Record it for use in the PowerShell script.

Generate an Authentication Code, get a Bearer and Refresh Token

Update the following script, changing Lines 5 & 6 for the ApplicationID/ClientId and Client Secret for the WebApp you created above.

Run the script and you will be prompted to authenticate. Use an account in the tenant where you created the Web App that is a Global Admin or in the Security Admin or Security Reader Azure Roles. You will need to change the location where you want the refresh.token stored (line 18).
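As a rough guide to what that script does (this is a minimal sketch rather than the full embedded script; the endpoints are the Azure AD v1 oAuth2 endpoints, and the placeholder values and variable names are mine), the flow looks something like this:

Add-Type -AssemblyName System.Web
$ClientId     = "<the Application ID you recorded>"
$ClientSecret = "<the key you generated for the WebApp>"
$RedirectUri  = "https://localhost"
$Resource     = "https://graph.windows.net"

# Step 1: browse to the authorize endpoint, sign in with a suitably privileged account,
# then copy the ?code= value from the https://localhost URL you are redirected to.
$authorizeUrl = "https://login.microsoftonline.com/common/oauth2/authorize?response_type=code" +
    "&client_id=$ClientId" +
    "&redirect_uri=$([System.Web.HttpUtility]::UrlEncode($RedirectUri))" +
    "&resource=$([System.Web.HttpUtility]::UrlEncode($Resource))"
Start-Process $authorizeUrl
$AuthCode = Read-Host "Paste the 'code' value from the redirect URL"

# Step 2: exchange the authorization code for an access token and a refresh token.
$tokenBody = @{
    grant_type    = "authorization_code"
    client_id     = $ClientId
    client_secret = $ClientSecret
    code          = $AuthCode
    redirect_uri  = $RedirectUri
    resource      = $Resource
}
$Tokens        = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/common/oauth2/token" -Body $tokenBody
$Authorization = "Bearer $($Tokens.access_token)"
$RefreshToken  = $Tokens.refresh_token   # persist this if you want a Get-NewTokens style refresh later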

If you’ve done everything correctly you will have authenticated and obtained an AuthCode, which is then used to get your Authorization tokens. The $Authorization variable should now contain a Bearer access token.

Now you can use the Refresh token to generate new Authorization Tokens when they time out, simply by calling the Get-NewTokens function included in the script above.

Querying the Reporting API

Now that you have the necessary prerequisites sorted you can query the Reporting API.

Here are a couple of simple queries to return some data to get you started. Update the script for the tenant name of your AzureAD. With the $Authorization values from the script above you can get data for the MIM Hybrid Reports.
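Assuming the $Authorization value from the authentication step above, a query of the preview Reports and Events endpoint looks something like the following sketch (the tenant name is a placeholder; the report names are those from the list earlier in this post):

$TenantDomain = "yourtenant.onmicrosoft.com"
$Headers      = @{ Authorization = $Authorization }

# List every report your account can see (the same list shown earlier in this post)
$reports = Invoke-RestMethod -Method Get -Headers $Headers -Uri "https://graph.windows.net/$TenantDomain/reports?api-version=beta"
$reports.value | Select-Object Name, LicenseRequired

# Pull the MIM Hybrid Reporting data, e.g. the SSPR activity events
$ssprEvents = Invoke-RestMethod -Method Get -Headers $Headers -Uri "https://graph.windows.net/$TenantDomain/reports/mimSsprActivityEvents?api-version=beta"
$ssprEvents.value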

Check Patch Status of ‘WannaCrypt’ / ‘WannaCry’ using PowerShell

A short but sweet blog today, mindful that most Australians will be coming back to work after the ‘WannaCrypt’ attack that was reported in the media on Friday.

I would like to just point out the work of Kieran Walsh – he’s done the ‘hard yards’ of extracting all of the Knowledge Base (KB) article numbers that you need to be searching for, to determine your patching status of Microsoft Security Bulletin MS17-010  (https://technet.microsoft.com/en-us/library/security/ms17-010.aspx).  Microsoft’s detailed blog about the ‘WannaCrypt ransomware’ can be found here: https://blogs.technet.microsoft.com/mmpc/2017/05/12/wannacrypt-ransomware-worm-targets-out-of-date-systems/

If you don’t have an enterprise patch deployment tool such as SCCM or WSUS (there are many, many others), Kieran’s script executes a simple ‘Get-HotFix’ PowerShell command remotely against each Windows server or workstation, using all the computer objects in Active Directory as a reference (a rough sketch of that approach follows below). I personally haven’t run this yet, so please test it first against a test AD if you have one. The ‘Get-HotFix’ command is relatively benign, so the risk is low.
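For illustration only, the AD-wide approach looks roughly like this; Kieran’s script does it more robustly (filtering, CSV output and so on), and the KB list here is trimmed, so use the full list from the local script further down:

Import-Module ActiveDirectory
$hotfixes  = @('KB4012598', 'KB4012212', 'KB4012215')   # trimmed for brevity
$computers = Get-ADComputer -Filter * -Properties OperatingSystem

foreach ($computer in $computers) {
    try {
        $found = Get-HotFix -ComputerName $computer.Name -ErrorAction Stop |
                 Where-Object { $hotfixes -contains $_.HotFixID }
        if ($found) { Write-Host "$($computer.Name): patched ($($found.HotFixID -join ', '))" }
        else        { Write-Host "$($computer.Name): MS17-010 patch not found" }
    }
    catch {
        Write-Warning "$($computer.Name): could not be queried ($_)"
    }
}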

Alternatively, if you’re looking to run this on your local workstation, I’ve modified his script and made a simple ‘local’ check. Copy and paste this into a PowerShell window with ‘administrator’ permissions:

#--- Script start
# List of all HotFixes containing the MS17-010 patch
$hotfixes = @('KB4012598', 'KB4012212', 'KB4012215', 'KB4015549', 'KB4019264', 'KB4012213', 'KB4012216', 'KB4015550', 'KB4019215', 'KB4012214', 'KB4012217', 'KB4015551', 'KB4019216', 'KB4012606', 'KB4015221', 'KB4016637', 'KB4019474', 'KB4013198', 'KB4015219', 'KB4016636', 'KB4019473', 'KB4013429', 'KB4015217', 'KB4015438', 'KB4016635', 'KB4019472', 'KB4018466')
# Search the locally installed hotfixes for any of the KBs above
$hotfix = Get-HotFix | Where-Object { $hotfixes -contains $_.HotFixID } | Select-Object -Property HotFixID
# Report whether one was found
if ($hotfix) {
    Write-Host "Found hotfix" ($hotfix.HotFixID -join ', ')
} else {
    Write-Host "Didn't find hotfix"
}
#--- Script end
Please follow all official Microsoft advice in applying the correct patch as per the security bulletin link above. Alternatively, look to disable ‘SMBv1’ services on your workstations until you can get them patched. Good luck.
** Update @ 4:30pm (15/05/2017). In my testing, I’ve found the Windows 10 patches listed in the Security Bulletin have been superseded by newer KB numbers. I’ve added three KBs for the 64-bit version of Windows 10, version 1511. I’d suggest looking at the ‘Package Details’ tab of the Microsoft Update Catalog site (e.g. http://www.catalog.update.microsoft.com/Search.aspx?q=KB4013198) for the latest KB numbers. I’ll try to add all KBs for Windows 10 by tomorrow AEST (the 16th). Alternatively, keep an eye on updates to Kieran’s script as he gets updates from the community.

** Update @ 5pm – The MS blog about the ransomware attack itself specifically states Windows 10 machines are not impacted even though there are patches for the security bulletin that apply to Windows 10.  Ignore Windows 10 devices in your report unless there’s updated information from Microsoft.

** Update @ 8pm: Kieran has updated his script to exclude Windows 10 computer objects from the AD query.

** Update @ 9:30 am 16/05:  Updated list of KBs from Kieran’s script (who has been sourcing the latest KB list from the community)

** Updated @ 2pm 17/05: Updated list of KBs (including Windows 10 updates) from the comments area of Kieran’s script (user: d83194). For future updates, I’d suggest reviewing the comments on Kieran’s script for the latest KB articles. I’ll let you make the decision about whether to keep the Windows 10 filter (-notlike ‘Windows 10’) in Kieran’s script. Maybe produce two reports (with and without Windows 10).

Understanding Password Sync and Write-back

For anyone who has worked with Office 365/Azure AD and AADConnect, you will of course be aware that we can now sync passwords both ways between our on-premises AD and Azure AD. This is obviously a very handy thing to do for myriad reasons, and an obvious suggestion for a business intending to utilise Office 365. The conversation with the security bod, however, might be a different kettle of fish. In this post, I aim to explain how the password sync and write-back features work, and hopefully arm you with enough information to have that chat with the security guys.

Password Sync to AAD

If you’ve been half-listening to any talks around password sync, the phrase ‘it’s not the password, it’s a hash of a hash’ is probably the line you walked away with, so let’s break down what that actually means. First up, a quick explanation of what it means to hash a value. Hashing applies a cryptographic algorithm to a string to irreversibly convert it into another string of a fixed length. This differs from encryption in that with encryption we intend to be able to retrieve the data we have stored, so we must hold a decryption key; with hashing, we are unable to derive the original string from the hashed string, and nor do we want to. If it makes no sense to you that we wouldn’t want to reverse a hashed value, it will by the end of this post.

So, in Active Directory when a user sets their password, the value stored is not actually the password itself, it’s an MD4 hash of the password once it’s been converted to Unicode Little Endian format. Using this process, my password “nicedog” ends up in Active Directory as 2993E081C67D79B9D8D3D620A7CD058E. So, this is the first part of the “hash of a hash”.

The second part of the “hash of a hash” is going to go buck wild with the hashing. Once the password has been securely transmitted to AADConnect (I won’t detail that process, because we end up with our original MD4 hash anyway), the MD4 hash is then:

  1. Converted to a 32-byte hexadecimal string
  2. Converted into binary with UTF-16 encoding
  3. Salted with 10 additional bytes
  4. Resultant string hashed and re-hashed 1000 times with HMAC-SHA256 via PBKDF2

So, as you can see, we’re hashing the hash enough to make Snoop Dogg blush. Now we finally have the hash which is securely sent to Azure AD via an SSL channel. At this point, the hashed value we send is so far removed from the original hash that even the most pony-tailed, black-tee-shirt-and-sandal-wearing security guy would have to agree it’s probably pretty secure. Most importantly, if Azure AD were ever compromised and an attacker did obtain your users’ password hashes, those hashes couldn’t be used to attack any on-premises infrastructure with a “pass the hash” attack, which would be possible if AD’s ntds.dit were compromised and password hashes extracted.
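For the curious, here is a minimal, illustrative PowerShell sketch of that salt-and-PBKDF2 step. It is not AADConnect’s actual code (that lives inside the sync engine), the variable names are mine, and it assumes .NET Framework 4.7.2 or later for the Rfc2898DeriveBytes overload used:

# Start from the per-user MD4 hash, expressed as a 32-character hex string (example value from above)
$md4HexString  = "2993E081C67D79B9D8D3D620A7CD058E"
$passwordBytes = [System.Text.Encoding]::Unicode.GetBytes($md4HexString)   # UTF-16 bytes of that string

# A 10-byte per-user salt
$salt = New-Object byte[] 10
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($salt)

# 1000 iterations of HMAC-SHA256 via PBKDF2
$pbkdf2    = [System.Security.Cryptography.Rfc2898DeriveBytes]::new($passwordBytes, $salt, 1000, [System.Security.Cryptography.HashAlgorithmName]::SHA256)
$finalHash = [System.BitConverter]::ToString($pbkdf2.GetBytes(32)) -replace '-', ''

# $finalHash (together with the salt and iteration count) is the kind of value that ends up in Azure AD,
# not the password and not the original MD4 hash.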

So now Azure AD has these hashes but doesn’t know your passwords, so how are users’ passwords validated? It’s just a matter of doing the whole thing again with the user-provided password when that user logs on. The password is converted, salted, hashed and re-hashed 1000 times in exactly the same manner, meaning the final hash is exactly the same as the one stored in Azure AD; the two hashes are compared, and if we have a match, we’re in. Easy. Now let’s go the other way.

Password Write-back to AD

So how does it work going back the other way? Differently. We can’t write a hash ourselves to Active Directory, so we need to get the actual password back from AAD to AD and essentially perform a password reset on-prem, allowing AD to then hash and store the password. A couple of things happen when we install AADConnect and enable Password Write-back to allow us to do this:

  1. Symmetric keys are created by AAD and shared with AADConnect
  2. A tenant specific Service Bus is created, and a password for it shared with AADConnect

The scenario in which we write a password back from AAD to AD is obviously a password reset event. This is the only applicable scenario, because a password is only ever set in AAD for users who are “cloud only”, and they of course have no on-premises directory to write back to. Only synced users need password write-back, and only upon password reset. So AAD gets the password back on-premises by doing the following:

  1. The user’s submitted password is encrypted with the 2048-bit RSA key generated when you set up write-back
  2. Some metadata is added to the package, and it is re-encrypted with AES-GCM
  3. The message is sent to the Service Bus via an SSL/TLS channel
  4. The on-premises AADConnect agent wakes up and authenticates to the Service Bus
  5. The on-premises agent decrypts the package
  6. AADConnect traces the cloudAnchor attribute back to the connected AD account via the AADConnect sync engine
  7. A password reset is performed for that user against a Primary Domain Controller

So, that’s pretty much it. That’s how we’re able to securely write passwords back and forth between our on-premises environment and AAD without Microsoft ever needing to store a user’s password. It’s worth noting, for that extra-paranoid security guy, that if he’s worried about any of the encryption keys or shared passwords being compromised, you can regenerate the lot simply by disabling and re-enabling password write-back.

Hopefully this post has gone some way towards helping you have this particular security conversation, no matter what year the security guy’s Metallica tour tee-shirt is from.

Secure your VSTS Release Management Azure VM deployments with NSGs and PowerShell

siliconvalve

One of the neat features of VSTS’ Release Management capability is the ability to deploy to Virtual Machines hosted in Azure (amongst other environments), which I previously walked through setting up.

One thing that you need to configure when you use this deployment approach is an open TCP port to the Virtual Machines to allow remote access to PowerShell and WinRM on the target machines from VSTS.

In Azure this means we need to define a Network Security Group (NSG) inbound rule to allow the traffic (sample shown below). As we are unable to limit the source address (i.e. where VSTS Release Management will call from) we are stuck creating a rule with a Source of “Any” which is less than ideal, even with the connection being TLS-secured. This would probably give security teams a few palpitations when they look at it too!

Network Security Group

We might be able to determine a…

View original post 637 more words

It’s time to get your head out of the clouds!


For those of you who know me, you are probably thinking, “Why on earth would we want to get our heads out of the ‘Cloud’ when all you’ve been telling me for years is the need to adopt cloud!”

This is true for the most part, but my point here is that many businesses are being flooded by service providers in every direction to adopt or subscribe to their “cloud” based offerings; furthermore, ICT budgets are being squeezed, forcing organisations into SaaS applications. The question that is often forgotten is the how. What do you need to access any of these services? And the answer is: an identity!

It is for this reason I make the title statement. For many organisations, preparing to utilise a cloud offering can require months if not years of planning and implementing organisational changes to better enable the ICT teams to offer such services to the business. Historically, “enterprises” have used local datacentres and infrastructure to provide all business services, where everything is centrally managed and controlled locally; controlling access to applications is as easy as adding a user to a group, for example. But to utilise any cloud service you need to take a step back, get your head back down on ground zero, and look at whether your environment is suitably ready. Do you have an automated, simple method of providing seamless access to these services? Do you need to invest in getting your business “Cloud Ready”?

This is where Identity comes front and centre, and it’s really the first time in a long time that Identity Management has been a centrepiece of the way a business can operate. This identity provides your business with the “how” factor in a more secure model.

So, if you have an existing identity management platform, great; but that’s not to say it’s ready to consume any of these offerings, it just says you have the potential means. For those who don’t have something in place, welcome! Now, I’m not going to tell you which products you should or shouldn’t use; that is dictated by your requirements. There is no “one size fits all” platform out there, and in fact some businesses may need a multitude of applications to deliver on their requirements, but one thing every business needs is a strategy: a roadmap on how to get to that elusive finish line!

A strategy is not just about the technical changes that need to be made, but also about the organisational changes, processes, and policies that will inevitably change as a consequence of adopting a cloud model. One example is the service management model for consumption-based services: the way you manage services provided on a consumption basis is not the same as a typical managed service model.

And these strategies start from the ground up; they work with the business to determine its requirements and use cases. Those two aspects alone can make or break an IAM service. If you do not have a thorough and complete understanding of the requirements your business has, then whatever solution is built will never succeed.

So now you may be asking: what next? What do we as an organisation need to do to get ourselves ready to adopt cloud? All in all, there are basically five principles you need to work through to set the foundation of being “Cloud Ready”:

  1. Understand your requirements
  2. Know your source(s) of truth (there could be more than one)
  3. Determine your central repository (Metaverse/Vault)
  4. Know your targets
  5. Know your universal authentication model

Now, I’m not going to cover all of these in this one post, otherwise you’d be reading a novel; I will touch on each of them separately. I would point out that you have probably already read about many of these; they are common discussion points online, and the top two are discussed extensively by my peers. The point of difference I will be making is around the cloud: determining the best approach to these discussions so that when you do have your head in the clouds, you’re not going to get any unexpected surprises.

Inviting Microsoft Account users to your Azure AD-secured VSTS tenant

siliconvalve

I’ve done a lot of external invite management for VSTS over the last few years, and generally without fail we’ll have issues getting everyone on-boarded easily. This blog post is a reference for me (and I guess you too) to understand the invite process and document the experience the invited user has.

There are two sections to this blog post:

1. Admin instructions to invite users.

2. Invited user instructions.

Select whichever one applies to you.

The starting point for this post is that the external user hasn’t yet been invited to your Azure AD tenant. The user doing the inviting is also not an Azure AD Global Admin, but does have rights in an Azure tenant.

The Invite to Azure AD

Log into an Azure subscription using your Azure AD account and select Subscriptions. Ideally this shouldn’t be a production tenant!

Select Subscription

I am going to start by…

View original post 721 more words

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 3 – Azure MFA Integration

In Part 1 and Part 2 of this series we covered the migration from ADFS v3 to ADFS 2016. In this part we continue by configuring Azure MFA in ADFS 2016.

Azure MFA – What is it about?

It is a bit confusing when we say that we need to enable Azure MFA on ADFS. Technically, this method is actually integrating Azure MFA with ADFS. MFA itself authenticates against Azure AD; ADFS prompts you to enter an MFA code, which is verified with Azure AD to sign you in.

In theory, this by itself is not multi-factor authentication. When users choose to log in with multi-factor authentication on ADFS, they’re not prompted to enter a password; they simply log in with the six-digit code they receive on their mobile devices.

Is this secure enough? Arguably. Of course, users had to have previously set up MFA to be able to log in with this method, but if someone has control or possession of your device they could simply log in with the six-digit code. Assuming the device is not locked, or MFA is set up to receive calls or messages (on some phones message notifications appear on the main display), almost anyone could log in.

Below is how Azure MFA will look once integrated with the ADFS server. I will outline the process below, and show you how we got this far.


Once you select Azure Multi-Factor Authentication you will be redirected to another page


And when you click on “Sign In” you will simply sign in to the Office or Azure Portal, without any other prompt.

The whole idea here is not so much about security as it is about simplicity.

Integrating Azure MFA on ADFS 2016

Important note before you begin: users should already have set up MFA using the Microsoft Authenticator mobile app, or they can do it while first signing in, after being redirected to the ADFS page. The aim of this post is to use the six-digit code generated by the mobile app.

If users have MFA set up to receive calls or texts, the configuration in this blog (making Azure MFA primary) will not support that. To continue using SMS or a call whilst using Azure MFA, “Azure MFA” needs to be configured as a secondary authentication method under “Multi-Factor”, and “Azure MFA” under “Primary” should be disabled.

Integrating Azure MFA on ADFS 2016 couldn’t be any easier. All that is required is running a few PowerShell cmdlets and enabling the authentication method.

Before we do so however, let’s have a look at our current authentication methods.


As you may have noticed, we couldn’t enable Azure MFA without first configuring the Azure AD tenant.
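If you prefer PowerShell to the console for this check, the following (run on one of the ADFS servers) lists the currently configured providers:

# Show which primary and additional authentication providers are currently enabled
Get-AdfsGlobalAuthenticationPolicy | Select-Object PrimaryIntranetAuthenticationProvider, PrimaryExtranetAuthenticationProvider, AdditionalAuthenticationProvider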

The steps below are intended to be performed on all AD FS servers in the farm.

Step 1: Open PowerShell and connect to your tenant by running the following:

Connect-MsolService

Step 2: Once connected, run the following cmdlet to configure the AAD tenant:

$cert = New-AdfsAzureMfaTenantCertificate -TenantID swayit.onmicrosoft.com

When successful, open the Certificates snap-in and check that a certificate with the name of your tenant has been added to the Personal store.


Step 3: In order to enable the AD FS servers to communicate with the Azure Multi-Factor Auth Client, you need to add the credentials to the SPN for the Azure Multi-Factor Auth Client. The certificate that we generated in the previous step will serve as these credentials.

To do so run the following cmdlet:

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $cert


Note that the GUID 981f26a1-7f43-403b-a875-f8b09b8cd720 is not made up, and it is the GUID for the Azure Multi Factor Authentication client. So you basically can copy/paste the cmdlet as is.

Step 4: When you have completed the previous steps, you can now configure the ADFS Farm by running the following cmdlet:

Set-AdfsAzureMfaTenant -TenantId swayit.onmicrosoft.com -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

Note how we used the same GUID from the previous step.


When that is complete, restart the ADFS service on all your ADFS farm servers.

net stop adfssrv

net start adfssrv

Head back to your ADFS Management Console and open the authentication methods, and you will notice that Azure MFA has been enabled and the message prompt has disappeared.


If the Azure MFA authentication methods were not enabled, then enable them manually (a PowerShell alternative is sketched below) and restart the services again on all the ADFS servers in the farm.
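If you’d rather not click through the console, something along these lines should achieve the same from PowerShell; I’m assuming here that the Azure MFA adapter registers itself under the name AzureMfaAuthentication, so check Get-AdfsGlobalAuthenticationPolicy first:

# Add Azure MFA to the existing additional (multi-factor) providers, then restart ADFS
$policy = Get-AdfsGlobalAuthenticationPolicy
Set-AdfsGlobalAuthenticationPolicy -AdditionalAuthenticationProvider ($policy.AdditionalAuthenticationProvider + 'AzureMfaAuthentication')
Restart-Service adfssrv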

Now that you have completed all the steps, when you try and access Office 365 or the Azure Portal you will be redirected to the pages posted above.

Choose Azure Multi-Factor Authentication


Enter the six digit code you have received.


And then you’re signed in.


By now you have completed migrating from ADFS v3 to ADFS 2016, and in addition have integrated Azure MFA authentication with your ADFS farm.

The last part in this series will be about the WAP 2012 R2 upgrade to WAP 2016, so please make sure to come back tomorrow and check the upgrade process in detail.

I hope you’ve enjoyed the posts so far. For any feedback or questions, please leave a comment below.


ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 1

Introduction

With the release of Windows Server 2016, Microsoft has introduced new and improved features. One of those features is ADFS 4.0, better known as ADFS 2016.

Organisations have already started leveraging ADFS 2016 as it covers most of their requirements, especially in terms of security.

In this series of blog posts, I will demonstrate how you can upgrade from ADFS v3.0 (running Windows Server 2012 R2) to ADFS 2016 (running Windows Server 2016 Datacenter). In the posts to come I will also cover Web Application Proxy (WAP) migration from Windows Server 2012 R2 to Windows Server 2016, as well as integration of Azure MFA with the new ADFS 2016.

The posts in this series assume you have knowledge of Windows Server, AD, ADFS, WAP, and MFA. They will not go into detailed step-by-step installation of roles and features, and they assume you have a running environment of AD, ADFS/WAP (2012 R2), and AAD Connect already configured.

What’s New in ADFS 2016?

ADFS 2016 offers new and improved features, including:

  • Eliminate Passwords from the Extranet
  • Sign in with Azure Multi-factor Authentication
  • Password-less Access from Compliant Devices
  • Sign in with Microsoft Passport
  • Secure Access to Applications
  • Better Sign in experience
  • Manageability and Operational Enhancements

For detailed description on the aforementioned points, please refer to this link.

Current Environment

  • 2x ADFS v3 Servers (behind an internal load balancer)
  • 2x WAP 2012 R2 Server (behind an external load balancer)
  • 2x AD 2012 R2 Servers
  • 1x AAD Connect server

At a high level, the ADFS servers sit behind the internal load balancer, with the WAP servers in front of them behind the external load balancer.

Future environment:

  • 2x ADFS 2016 Servers (behind the same internal load balancer)
  • 2x WAP 2016 Servers (behind the same external load balancer)
  • 2x AD 2012 R2 Servers
  • 1x AAD Connect Server

Planning for your ADFS and WAP Migration

First, you need to make sure that your applications support ADFS 2016; some legacy applications may not be supported.

The steps to implement SSO are as follows:

  1. Active Directory schema update using ‘ADPrep’ with the Windows Server 2016 additions
  2. Build Windows Server 2016 servers with ADFS and install into the existing farm and add the servers to the Azure load balancer
  3. Promote one of the ADFS 2016 servers as “primary” of the farm, and point all other secondary servers to the new “primary”
  4. Build Windows Server 2016 servers with WAP and add the servers to the Azure load balancer
  5. Remove the WAP 2012 servers from the Azure load balancer
  6. Remove the ADFSv3 servers from the Azure load balancer
  7. Raise the Farm Behavior Level (FBL) to ‘2016’ (see the PowerShell sketch after this list)
  8. Remove the WAP servers from the cluster
  9. Upgrade the WebApplicationProxyConfiguration version to ‘2016’
  10. Configure ADFS 2016 to support Azure MFA and complete remaining configuration
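For steps 7 and 9, the PowerShell involved is roughly the following. Treat it as a sketch and run it only once all the 2012 R2 nodes have been removed; the cmdlets are the ADFS 2016 and WAP 2016 ones:

Get-AdfsFarmInformation                                              # confirm only 2016 nodes remain in the farm
Invoke-AdfsFarmBehaviorLevelRaise                                    # step 7: raise the Farm Behavior Level to 2016
Set-WebApplicationProxyConfiguration -UpgradeConfigurationVersion    # step 9: run on a WAP 2016 server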

The steps for the AD schema upgrade are as follows:

  1. Prior to starting, Active Directory needs to be in a healthy state; in particular, replication needs to be performing without error.
  2. Active Directory needs to be backed up. Best to back up (at a minimum) a few Active Directory Domain Controllers, including the ‘system state’
  3. Identify which Active Directory Domain Controller maintains the Schema Master role
  4. Perform the update using an administrative account, by temporarily adding the account to the Schema Admins group
  5. Download and have handy the Windows Server 2016 installation media
  6. When ready to update the schema, perform the following:
  7. Open an elevated command prompt and navigate to the support\adprep directory in the Windows Server 2016 installation media. Run the following: adprep /forestprep
  8. Once that completes, run the following: adprep /domainprep

Upgrading the Active Directory schema will not impact your current environment, nor will it raise the domain/forest level.

Part 2 of this series will be published early next week, so please make sure to come back and check the migration process in detail.


Configuring AWS Web Application Firewall

In a previous blog we discussed Site Delivery with AWS CloudFront CDN; one aspect not covered there was WAF (Web Application Firewall).

What is Web Application Firewall?

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.

When to create your WAF?

Since this blog is related to CloudFront, and the fact that WAF is tightly integrated with CloudFront, our focus will be on that. Your WAF rules will run and be applied in all Edge Locations you specify during your CloudFront configuration.

It is recommended that you create your WAF before deploying your CloudFront distribution: CloudFront takes a while to deploy, and any change applied afterwards, including attaching the WAF to the distribution (done in “Choose Resource” during WAF configuration, shown below), takes just as long to propagate.

Although there is no general rule here, it is up to the organisation or administrator to apply WAF rules before or after the deployment of CloudFront distribution.

Main WAF Features

In terms of security, WAF protects your applications from the following attacks:

  • SQL Injection
  • DDoS attacks
  • Cross Site Scripting

In terms of visibility, WAF gives you the ability to monitor requests and attacks through CloudWatch integration (excluded from this blog). It gives you raw data on location, IP Addresses and so on.

How to setup WAF?

Setting up WAF can be done in a few ways: you can either use a CloudFormation template or configure the settings on the WAF page.

Since each organisation is different, and requirements change based on applications and websites, the configuration presented in this blog is general practice and recommendation. You will still need to tailor the WAF rules to your own needs.

WAF Conditions

For the rules to function, you need to set up filter conditions for your application or website ACL.

I already have WAF set up in my AWS account, and here’s a sample of how conditions will look.


If you have no Conditions already setup, you will see something like “There is no IP Match conditions, please create one”.

To Create a condition, have a look at the following images:

Here, we’re creating a filter that checks whether the HTTP method contains a threat after HTML decoding.


Once you’ve selected your filters, click on “Add Filter”. The filter will be added to the list of filters, and once you’re done adding all your filters, create your condition.


You need to follow the same procedure to create your conditions for SQL injection, for example (a CLI sketch of the same steps follows below).
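If you’d rather script it than click through the console, the classic WAF CLI can create the same objects. A rough sketch (the condition name is a placeholder, and every mutating call needs a fresh change token) looks like this:

# Each mutating call requires a change token
$token = aws waf get-change-token --query ChangeToken --output text

# Create an (empty) SQL injection match condition, then add filters to it with update-sql-injection-match-set
aws waf create-sql-injection-match-set --name "MySqliCondition" --change-token $token

# From there: create-rule to reference the condition, and create-web-acl / update-web-acl
# to attach the rule to an ACL and associate it with your CloudFront distribution.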

WAF Rules

When you are done with configuring conditions, you can create a rule and attach it to your web ACL. You can attach multiple rules to an ACL.

Creating a rule – Here’s where you specify the conditions you have created in a previous step.


From the list of rules, select the rule you have created from the drop down menu, and attach it to the ACL.


In the next steps you will have the option to choose your AWS Resource, in this case one of my CloudFront Distributions. Review and create your Web ACL.


Once you click on create, go to your CloudFront distribution and check its status; it should show “In progress”.

WAF Sample

Since there isn’t one single way to create WAF rules, if you’re not sure where to begin AWS gives you a good starting point: a CloudFormation template that creates sample WAF rules for you.

This WAF sample includes the following, as found here:

  • A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
  • An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function, such as is shown in How to Import IP Address Reputation Lists to Automatically Update AWS WAF IP Blacklists and How to Use AWS WAF to Block IP Addresses That Generate Bad Requests.
  • A SQL injection rule and condition to match SQL injection-like patterns in URI, query string, and body.
  • A cross-site scripting rule and condition to match XSS-like patterns in URI and query string.
  • A size-constraint rule and condition to match requests with URI or query string >= 8192 bytes which may assist in mitigating against buffer overflow type attacks.
  • ByteHeader rules and conditions (split into two sets) to match user agents that include spiders for non–English-speaking countries that are commonly blocked in a robots.txt file, such as sogou, baidu, and etaospider, and tools that you might choose to monitor use of, such as wget and cURL. Note that the WordPress user agent is included because it is used commonly by compromised systems in reflective attacks against non–WordPress sites.
  • ByteUri rules and conditions (split into two sets) to match request strings containing install, update.php, wp-config.php, and internal functions including $password, $user_id, and $session.
  • A whitelist IP condition (empty) is included and added as an exception to the ByteURIRule2 rule as an example of how to block unwanted user agents, unless they match a list of known good IP addresses

Follow this link to create a stack in the Sydney region.

I recommend that you review the filters, conditions, and rules created with this Web ACL sample. If anything, you could easily update and edit the conditions as you desire according to your applications and websites.

Conclusion

In conclusion, there are certain aspects of WAF that need to be considered, like choosing an appropriate WAF solution and managing its availability; you also have to be sure that your WAF solution can keep up with your applications.

One of the best features of AWS WAF is that, because it is integrated with CloudFront, it can be used to protect websites even if they are not hosted in AWS.

I hope you found this blog informative. Please feel free to add your comments below.

Thanks for reading.