AD FS 2016 and InvalidNameIDPolicy using SAML Authentication to SailPoint IdentityNow

Context

I recently had a seemingly simple task for a customer: set up an AD FS 2016 relying party trust for their SailPoint IdentityNow deployment. Sounds easy, right?

In this scenario AD FS 2016 was to be the Identity Provider (IdP) and IdentityNow the Service Provider (SP). The end goal of the solution was to allow the customer’s users to authenticate via SAML into IdentityNow using their corporate AD DS email address and password. A great outcome from a user experience perspective, and for corporate governance too!

Configuration Setup and Problem Encountered

Following SailPoint’s guide here, I set up IdentityNow as a Service Provider using the Email attribute as the SAML NameID.

I then moved on to creating a new AD FS 2016 relying party trust using the sp-metadata.xml file downloaded directly from the customer’s IdentityNow portal. After some quick research of the claims required, I created the following two AD FS Issuance Transform Rules within my new RPT:

  • Rule #1: Send LDAP Attribute (E-Mail-Addresses) as an Outgoing Claim (E-Mail Address)

  • Rule #2: Transform an Incoming Claim (E-Mail Address) to an Outgoing Claim (Name ID) with the Outgoing name ID format (Email)

Unfortunately, during my testing I was continually returned a vague error page from the customer’s IdentityNow portal.

This was occurring after the initial AD FS authentication had succeeded and a token had been issued.

Whilst the web page error is vague in its description, I knew that, because the initial AD FS authentication had succeeded, I was dealing with a claims issue between the IdP and the SP.

InvalidNameIDPolicy SAML Response

Diving into the SAML response using Fiddler and a SAML decoder I could see a SAML status code of “InvalidNameIDPolicy”. Problem discovered! The most useful and easily accessible diagnostic information was actually straight out of the AD FS server’s local event viewer logs under Applications and Services Logs > ADFS > Admin (in hindsight I should have looked here first!).

Events #364 and #321 also verified that the NameIDPolicy required from IdentityNow was not being met by the AD FS token issued.

Event ID #364
Encountered error during federation passive request. 
The SAML request contained a NameIDPolicy that was not satisfied by the issued token.

Event ID #321
The SAML authentication request had a NameID Policy that could not be satisfied. 

Tip: If you encounter the same problem I had, have a look at the detail of these two events and compare the Requested NameIDPolicy versus the Actual NameIDPolicy to discover what exactly is missing from the AD FS token.

Sending SPNameQualifier as a Claim

The resolution to this problem for me was to ensure that an SPNameQualifier value was sent as a claim property from AD FS to IdentityNow.

As far as I know, this is an undocumented requirement to have SAML authentication tokens from AD FS 2016 accepted by SailPoint IdentityNow.

The SPNameQualifier value needed to match the Entity ID specified in our IdentityNow portal under Admin > Global > Security Settings > Service Provider.

Because I couldn’t find an SPNameQualifier property in any of the claim rule templates, I used a Custom Rule, which you can create as shown below.

The following Claim rule combines my original Rule #2 (described at the beginning of this post) with the new claim property for SPNameQualifier.

Note: If using the below claim code remember to replace “insertValueHere” with your Entity ID specified in IdentityNow.
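The author’s original rule isn’t reproduced in this extract, so the following is a minimal sketch of that combined rule, applied via PowerShell. The rule text is the standard email-to-NameID transform plus the SPNameQualifier claim property; the relying party trust name (“IdentityNow”) and rule name are assumptions, and “insertValueHere” is your IdentityNow Entity ID.

[code language="powershell"]
# Sketch only: Set-AdfsRelyingPartyTrust replaces the whole rule set, so include
# your other rules (e.g. the LDAP email rule) alongside this one in practice.
$nameIdRule = @'
@RuleName = "Email to Name ID with SPNameQualifier"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier",
    Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType,
    Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
    Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/spnamequalifier"] = "insertValueHere");
'@

# Hypothetical relying party trust name - replace with your IdentityNow RPT display name.
Set-AdfsRelyingPartyTrust -TargetName "IdentityNow" -IssuanceTransformRules $nameIdRule
[/code]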

After updating my claim rule with the above change a quick test of authenticating to IdentityNow via AD FS SAML was successful and I could also finally see SAML authentication events from the IdentityNow Activity Tab.

Happy days! 🙂

Summary

In conclusion, when configuring SAML authentication via AD FS 2016 (IdP) to IdentityNow (SP) you may need to insert an SPNameQualifier value as an outgoing claim property from AD FS. The SPNameQualifier value should match the Entity ID value specified in your IdentityNow portal.

Cheers, Jesse

Plugging the Gaps in Azure Policy – Part Two

Introduction

Welcome to the second and final part of my blogs on how to plug some gaps in Azure Policy. If you missed part one, this second part isn’t going to be a lot of use without the context from that, so maybe head on back and read part one before you continue.

In part one, I gave an overview of Azure Policy, a basic idea of how it works, what the gap in the product is in terms of resource evaluation, and a high-level view of how we plug that gap. In this second part, I’m going to show you how I built that idea out and provide you some scripts and a policy so you can spin up the same idea in your own Azure environments.

Just a quick note, that this is not a “next, next, finish” tutorial – if you do need something like that, there are plenty to be found online for each component I describe. My assumption is that you have a general familiarity with Azure as a whole, and the detail provided here should be enough to muddle your way through.

I’ll use the same image I used in part one to show you which bits we’re building, and where that bit fits in to the grand scheme of things.

We’re going to create a singular Azure Automation account, and we’re going to have two PowerShell scripts under it. One of those scripts will be triggered by a webhook, which will receive a POST from Event Grid, and the other will be fired by a good old-fashioned scheduler. I’m not going to include all the evaluations performed in my production version of this (hey, gotta hold back some IP right?) but I will include enough for you to build on for your own environment.

The Automation Account

When creating the automation account, you don’t need to put a lot of thought into it. By default, creating an automation account will automatically create an Azure Run As account on your behalf. If you’re doing this in your own lab, or an environment you have full control over, you’ll be able to do this step without issue, but typically in an Azure environment you may have access to build resources within a subscription yet not be able to create Azure AD objects – if that level of control applies to your environment, you will likely need to get someone to manually create an Azure AD service principal on your behalf. For this example, we’ll just let Azure Automation create the Run As account, which, by default, will have Contributor access on the subscription you are creating the account under (which is plenty for what we are doing). You will also notice a “Classic” Run As account is created – we’re not going to be using that, so you can scrap it. Good consultants like you will of course figure out the least permissions required for the production account and implement that accordingly rather than relying on these defaults.

The Event-Based Runbook

The event-based Runbook grabs parameters from the POSTed JSON which we get from Event Grid. The JSON we get contains enough information about an individual resource which has been created or modified that we are able to perform an evaluation on that resource alone. In the next section, I will give you a sample of what that JSON looks like.

When we create this event-based Runbook, obviously we need somewhere to receive the POSTed JSON, so we need to create a Webhook. If you’ve never done this before, it’s a fairly straightforward exercise, but you need to be aware of the following things:

  • When creating the Webhook, you are shown the tokenised URL only at the point of creation. Take note of it; you won’t be seeing it again, and you’ll have to re-create the Webhook if you didn’t save it.
  • This URL is open to the big bad internet. Although the damage you can cause in this instance is limited, you need to be aware that anyone with the right URL can hit that Webhook and start poking.
  • The security of the Webhook is contained solely in that tokenised URL (you can do some trickery around this, but it’s out of scope for this conversation) so in case the previous two points weren’t illustrative enough, the point is that you should be careful with Webhook security.

Below is the script we will use for the event-driven Runbook.
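The script itself was embedded in the original post and isn’t reproduced here, so what follows is a minimal sketch of the same idea. The Run As authentication pattern, the tag name (“MultiNicCheck”) and the single multi-NIC evaluation are assumptions used for illustration; the production version performs more evaluations than this.

[code language="powershell"]
param (
    [Parameter(Mandatory = $false)]
    [object]$WebhookData
)

# Authenticate with the Automation Run As account (assumes the default connection name).
$connection = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint | Out-Null

# Pull the useful values out of the Event Grid payload (delivered as a small JSON array).
$InputJSON   = $WebhookData.RequestBody | ConvertFrom-Json
$resourceUri = $InputJSON.data.resourceUri
$status      = $InputJSON.data.status
$subject     = $InputJSON.subject

# Tagging function: applies the compliance tag when a resource fails the evaluation,
# and removes it when the resource passes.
function Set-ComplianceTag {
    param ([string]$ResourceId, [string]$TagName, [bool]$Apply)
    $resource = Get-AzureRmResource -ResourceId $ResourceId
    $tags = $resource.Tags
    if ($null -eq $tags) { $tags = @{} }
    if ($Apply) { $tags[$TagName] = "NonCompliant" }
    elseif ($tags.ContainsKey($TagName)) { $tags.Remove($TagName) }
    Set-AzureRmResource -ResourceId $ResourceId -Tag $tags -Force | Out-Null
}

# Only evaluate successful operations on resource types we care about.
if ($status -eq "Succeeded" -and $subject -match "Microsoft.Compute/virtualMachines") {
    $vm = Get-AzureRmResource -ResourceId $resourceUri
    $nicCount = @($vm.Properties.networkProfile.networkInterfaces).Count
    # More than one NIC? Tag the resource; otherwise remove the tag.
    Set-ComplianceTag -ResourceId $resourceUri -TagName "MultiNicCheck" -Apply ($nicCount -gt 1)
}
[/code]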

So, what are the interesting bits in there we need to know about? Well, firstly, the webhook data. You can see we ingest the data initially into the $WebhookData variable, then store it in a more useful format in the $InputJSON variable, and then break it up into a bunch of other more useful variables: $resourceUri, $status and $subject. The purpose of each of those variables is described below.

 

  • $resourceUri – the resource URI of the resource we want to evaluate.
  • $status – the status of the Azure operation we received from Event Grid. If the operation failed to make a change, for example, we don’t need to re-evaluate it.
  • $subject – the subject contains the resource type; this helps us to narrow down the scope of our evaluation.

Aside from dealing with inputs at the top, the script essentially has two parts to it: the tagging function and the evaluation. The evaluation scopes down the input to make sure we only ever bother evaluating a resource if it’s one we care about. The evaluation itself is really just saying “hey, does this resource have more than one NIC? If so, tag the resource using the tagging function. If it doesn’t, remove the tag using the tagging function”. Easy.

The Schedule-Based Runbook

The evaluations (and the function) we have in the schedule-based Runbook are essentially the same as what we have in the event-based one. Why do we even have the schedule-based Runbook then? Well, imagine for a second that Azure Automation has fallen over for a few minutes, or someone publishes dud code, or one of many other things happens which means the automation account is temporarily unavailable – the fleeting event which may occur one time only as a resource is being created is essentially lost to the ether. Having the schedule-based runbook means we can come back every 24 hours (or whatever your organisation decides) and pick up things which may have been missed.

The schedule-based runbook obviously does not have the ability to target individual resources, so instead it must perform an evaluation on all resources. The larger your Azure environment, the longer the processing time, and potentially the higher the cost. Be wary of this and make sensible decisions.

The schedule-based runbook PowerShell is pasted below.
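Again, the original script isn’t reproduced here; a cut-down sketch of the schedule-based variant (using the same Run As authentication, tag name and Set-ComplianceTag function assumed in the event-based sketch above) simply loops over every virtual machine in the subscription:

[code language="powershell"]
# Authenticate with the Run As account exactly as in the event-based runbook, then:
$vms = Get-AzureRmResource -ResourceType "Microsoft.Compute/virtualMachines"
foreach ($vm in $vms) {
    # Re-read each resource individually so its full properties are populated.
    $resource = Get-AzureRmResource -ResourceId $vm.ResourceId
    $nicCount = @($resource.Properties.networkProfile.networkInterfaces).Count
    Set-ComplianceTag -ResourceId $vm.ResourceId -TagName "MultiNicCheck" -Apply ($nicCount -gt 1)
}
[/code]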

Event Grid

Event Grid is the bit which is going to take events from our Azure subscription and POST them to our Azure Automation Webhook so that we can perform our evaluation. Create your Event Grid Subscription with the “Event Grid Schema”, the “Subscription” topic type (using your target subscription) and listening only for “success” event types. The final field we care about on the Event Subscription create form is the Webhook – this is the one we created earlier in our Azure Automation Runbook, and now is the time to paste that value in.

Below is an example of the JSON we end up getting POSTed to our Webhook.
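The author’s captured payload isn’t included in this extract; a representative resource-write event (Event Grid actually delivers these wrapped in a JSON array, and all IDs and names below are placeholders) looks roughly like this:

[code language="javascript"]
{
  "subject": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/myVM",
  "eventType": "Microsoft.Resources.ResourceWriteSuccess",
  "eventTime": "2019-02-18T03:30:59Z",
  "id": "00000000-0000-0000-0000-000000000000",
  "data": {
    "correlationId": "00000000-0000-0000-0000-000000000000",
    "resourceProvider": "Microsoft.Compute",
    "resourceUri": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/myVM",
    "operationName": "Microsoft.Compute/virtualMachines/write",
    "status": "Succeeded",
    "subscriptionId": "00000000-0000-0000-0000-000000000000",
    "tenantId": "00000000-0000-0000-0000-000000000000"
  },
  "dataVersion": "2",
  "metadataVersion": "1",
  "topic": "/subscriptions/00000000-0000-0000-0000-000000000000"
}
[/code]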

Azure Policy

And finally, we arrive at Azure Policy itself. So once again to remind you, all we are doing at this point is performing a compliance evaluation on a resource based solely on the tag applied to it, and accordingly, the policy itself is very simple. Because this is a policy based only on the tag, it means the only effect we can really use is “Audit” – we cannot deny creation of resources based on these evaluations.

The JSON for this policy is pasted below.
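The original policy isn’t reproduced in this extract; a minimal policy rule along the same lines (the tag name and value are hypothetical and should match whatever your runbooks apply) simply audits any resource carrying the non-compliance tag:

[code language="javascript"]
{
  "if": {
    "field": "tags.MultiNicCheck",
    "equals": "NonCompliant"
  },
  "then": {
    "effect": "audit"
  }
}
[/code]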

And that’s it, folks – I hope these last two blog posts have given you enough ideas or artifacts to start building out this idea in your own environments, or building out something much bigger and better using Azure Functions in place of our Azure Automation examples!

If you want to have a chat about how Azure Policy might be useful for your organisation, by all means, please do reach out, as a business we’ve done a bunch of this stuff now, and I’m sure we can help you to plug whatever gaps you might have.

 

Plugging the Gaps in Azure Policy – Part One

Introduction

Welcome to the first part of a two-part blog on Azure Policy. Multi-part blogs are not my usual style, but the nature of blogging whilst also being a full-time consultant is that you slip some words in when you find time, and I was starting to feel that if I wrote this in a single part, it would just never see the light of day. Part one of this blog deals with the high-level overview of what the problem is and how we solved it at a high level; part two will include the icky sticky granular detail, including some scripts which you can shamelessly plagiarise.

Azure Policy is a feature-complete solution which performs granular analysis on all your Azure resources, allowing your IT department to take swift and decisive action on resources which attempt to skirt the infrastructure policies you define. Right, the sales guys now have their quotable line; let’s get stuck into how you’re going to deliver on that.

Azure Policy Overview

First, a quick overview of what Azure Policy actually is. Azure Policy is a service which allows you to create rules (policies) which allow you to take an action on an attempt to create or modify an Azure resource. For example, I might have a policy which says “only allow VM SKUs of Standard_D2s_v3” with the effect of denying the creation of said VM if it’s anything other than that SKU. Now, if a user attempts to create a VM other than the sizing I specify, they get denied – same story if they attempt to modify an existing VM to use that SKU. Deny is just one example of an “effect” we can take via Azure Policy; we can also use Audit, Append, AuditIfNotExists, DeployIfNotExists, and Disabled.
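As a rough illustration of the shape of these rules (the field used here is an Azure Policy alias, which the next paragraph explains, and the SKU value is just the example above), a deny policy for that scenario might look something like this:

[code language="javascript"]
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      {
        "not": {
          "field": "Microsoft.Compute/virtualMachines/sku.name",
          "equals": "Standard_D2s_v3"
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
[/code]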

Taking the actions described above obviously requires that you evaluate the resource to take the action. We do this using some chunks of JSON with fairly basic operators to determine what action we take. The properties you plug into a policy you create via Azure Policy, are not actually direct properties of the resource you are attempting to evaluate, rather we have “Aliases”, which map to those properties. So, for example, the alias for the image SKU we used as an example is “Microsoft.Compute/virtualMachines/imageSku”, which maps to the path “properties.storageProfile.imageReference.sku” on the actual resource. This leads me to….

The Gap

If your organisation has decided Azure Policy is the way forward (because of the snazzy dashboard you get for resource compliance, or because you’re going down the path of using baked-in Azure stuff, or whatever), you’re going to find fairly quickly that there is currently not a one-to-one mapping between the aliases on offer and the properties on a resource. Using a virtual machine as an example, we can use Azure Policy to take an effect on a resource depending on its SKU (lovely!) but up until very recently, we didn’t have the ability to say that if you do spin up a VM with that SKU, it should only ever have a single NIC attached. The existing officially supported path to getting such aliases added to Policy is via the Azure Policy GitHub (oh, by the way, if you’re working with policy and not frequenting that GitHub, you’re doing it wrong). The example I used about the multiple NICs, you can see, was requested as an alias by my colleague Ken on October 22nd 2018, and marked as “deployed” into the product on February 14th 2019. Perhaps that turnaround from request to implementation is not bad in general terms, but it is not quick enough when you’re working on a project which relies on that alias for a delivery deadline which arrives months before February 14th 2019. A quick review of both the open and closed issues on the Azure Policy GitHub gives you a feel for the sporadic nature of issues being addressed, and in some cases, due to complexity or security, the inability to address the issues at all. That’s OK, we can fix this.

Plugging the Gap

Something we can use across all Azure resources in Policy is fields. One of the fields we can use is the tag on a resource. So, what we can do here is report compliance status to the Azure Policy dashboard not based on the actual compliance status of the resource, but based on whether or not it has a certain tag applied to it – that is to say, a resource can be deemed compliant or non-compliant based on whether or not it has a tag of a certain value – then, we can use something out of band to evaluate the resource’s compliance and apply the compliance tag. Pretty cunning huh?

So I’m going to show you how we built out this idea for a customer. In this first part, you’re going to get the high-level view of how it hangs together, and in the second part I will share with you the actual scripts, policies, and other delicious little nuggets so you can build out a demo yourself should it be something you want to have a play with. Bear in mind the following things when using all this danger I am placing in your hands:

  • This was not built to scale, more as a POC, however;
    • This idea would be fine for handling a mid-sized Azure environment
  • This concept is now being built out using Azure Functions (as it should be)
  • Roll-your-own error handling and logging, the examples I will provide will contain none
  • Don’t rely on 100% event-based compliance evaluation (I’ll explain why in part 2)
  • I’m giving you just enough IP to be dangerous, be a good Consultant

Here’s a breakdown of how the solution hangs together. The example below will more or less translate to the future Functions-based version; we’ll just scribble out a couple of bits and add a couple of bits in.

So, from the diagram above, here’s the high-level view what’s going on:

  1. Event Grid forwards events to a webhook hanging off a PowerShell Runbook.
  2. The PowerShell Runbook executes a script which evaluates the resource forwarded in the webhook data, and applies, removes, or modifies a tag accordingly. Separately, a similar PowerShell runbook fires on a schedule. The schedule-based script contains the same evaluations as the event-driven one, but rather than evaluate an individual resource, it will evaluate all of them.
  3. Azure Policy evaluates resources for compliance, and reports on it. In our case, compliance is simply the presence of a tag of a particular value.

Now, that might already be enough for many of you guys to build out something like this on your own, which is great! If you are that person, you’re probably going to come up with a bunch of extra bits I wouldn’t have thought about, because you’re working from a (more-or-less) blank idea. For others, you’re going to want some gnarly config and scripts so you can plug that stuff into your own environment, tweak it up, and customise it to fit your own lab – for you guys, see you soon for part two!

Kloud has been building out a bunch of stuff recently in Azure Policy, using both complex native policies, and ideas such as the one I’ve detailed here. If your organisation is looking at Azure Policy and you think it might be a good fit for your business, by all means, reach out for a chat. We love talking about this stuff.

Azure AD/Active Directory User Security Evaluation Reporter

Between December 2018 and February 2019, Microsoft ran an online Microsoft Graph Security Hackathon on Devpost.

The criterion for the hackathon was:

  • Build or update a functioning Microsoft Graph-powered solution that leverages the Microsoft Graph Security API

Following the announcement of the hackathon I was encouraged by Kloud management to enter. During the busy month of December I started to formulate a concept for an entry, taking learnings from the hackathon I entered in 2018. Over the Xmas holiday period I started working on my entry, which continued into January and February at nights and on weekends.

Problem

A Security Administrator within an Organisation enables security related configuration options on an Azure Tenant to implement security controls that align an organisation with Microsoft recommendations and best practice.

The Azure Security Score provides an evaluation of an organisation’s alignment with best practice; however, to some extent it still requires end users to have the right configuration for security-related elements of their profile. But as a Service Desk Operator or Cyber Security Officer there isn’t a single view of a user’s security posture that can give you an individual user security score summary. My solution…

Microsoft User Security Evaluation Reporter (USER)

Microsoft User Security Evaluation Reporter (USER) is an Azure AD and Active Directory tool for use by the Service Desk and Cyber Security Officers to get instant visibility of an organisation’s Azure Security Score, allowing them to evaluate current risks within the organisation right down to individual users.

When Microsoft USER loads, the current Azure Security Score is retrieved, evaluated and displayed for alignment with Microsoft recommendations. Also on load, the last five active Security Risk Events are displayed.

[Screenshot: Microsoft USER recent risk events and Azure Secure Score]

The Service Desk Operator or Cyber Security Officer can select one of the recent Security Events or search for a user and drill down into the associated identity. They will quickly be able to understand the user’s individual security posture aligned with best practice.

What are the recent Security Risk Events for that user? Does that user:

  • Have MFA enabled? Is MFA enabled with an Authenticator App as the primary method?
  • Is the user’s Active Directory password in the Pwned Passwords v4 list from Have I Been Pwned?
  • Has the user recently been attempting Azure Password Reset functions?
  • What are the last 10 logins for that user?
  • What is the base user information for that user and what devices are registered to that user? Are they Azure AD Joined?

[Screenshot: User Secure Score summary]

The clip below gives a walk through with more detail of my Microsoft USER tool.

How I built it

The solution is built using:

  • NodeJS and Javascript
  • leveraging Azure Functions to interface with Azure AD, Microsoft Graph, Azure Table Service
  • Lithnet Password Protection for Active Directory that in turn leverages the Have I Been Pwned v4 dataset
  • All secrets are stored in Azure Key Vault.
  • The WebApp is Application Insights enabled.
  • The WebApp is deployed using a Docker Container into Azure App Service

The architecture is shown below.

MS User Security Evaluation Reporter Architecture

The Code

A repo with the code can be found here. Keep in mind I’m not a developer; this is my first WebApp, put together late at night and over weekends, and only tested in Chrome and Edge. The Readme also contains, hopefully, everything you should need to deploy it.

CyberArk PAM- Eliminate Hard Coded Credentials using Java REST API Calls

In many organisations, hard-coded credentials are still stored in application config files for making application-to-application connections, and in scripts (e.g. scheduled tasks). Generally these are highly privileged service accounts, and their passwords are set to never change.
Keeping hard-coded credentials around is always a risk to an organisation’s security posture. CyberArk provides a solution called Application Identity Manager, with which the passwords of privileged service accounts can be stored centrally in the Password Vault, logged, rotated and retrieved in many different ways.
CyberArk supports two approaches to eliminating hard-coded credentials:

  1. Credential Provider (CP), which requires an agent to be installed on each server where the application or script runs.
  2. Central Credential Provider (CCP).

In this post I’ll give more detail on how to retrieve credentials via the CCP using a Java REST API call. Applications that require credentials to access a remote device or to execute another application remotely can request the relevant credentials from the CCP via REST or SOAP calls.

Pre-requisites: The CCP installation consists of two parts:
1) The Credential Provider for Windows (2012 R2, 2016)
2) The Central Credential Provider web service (IIS 6, 7.5, or 10)
Client requirements:
The Central Credential Provider works with applications on any operating system, platform or framework that can invoke REST or SOAP web service requests.
CCP-supported client authentication:
1) Client certificates
2) The address of the machine where the application is running
3) Windows domain Operating System user
Overview
In this example I’ve used Java to make a REST API call to the CCP Web Service using certificate and client IP authentication. This approach will work with other programming languages too, such as .NET, Python, PowerShell, etc.
1. On-board the required application into CyberArk via the Password Vault Web Access (PVWA) web portal.
2. Create the required Platform and Safe.
3. On-board the required privileged service account into CyberArk via PVWA.
4. Add the application we created in step 1 as a member of this Safe with the Retrieve permission enabled.
5. Add the Provider users, created as part of the initial CCP installation, as members of the Safe.
6. Add the certificate to the CCP’s IIS server; the same certificate will be used for client authentication.
7. Add the certificate to the Java key store using the Java keytool command.
8. Run the Java code.
Implementation:
On-board the application: log in to PVWA => go to the Applications tab => Add Application: provide the application name, owner and other details. Go to the Allowed Machines tab => enter the IP address of the machine where the Java code is running. I’ve also added a time restriction, which ensures credentials will be released only within that time window after successful authentication.

Create Platform and Safe: log in to PVWA => go to Administration => Platform Management => select Windows Domain Accounts => duplicate it and modify it according to the password requirements.

On-board the account and add members: log in to PVWA => go to Accounts => Account View => Add Account => enter the actual privileged account details, selecting the Safe and Platform we created in the previous steps. In this example I’ve chosen an AD account as my target, but you can use accounts from any platform as needed, since CyberArk supports almost all platforms.

Add the application as a Safe member: log in to PVWA => go to Policies => Access Control => select the Safe => Edit Members => add the application as a member with Retrieve permissions, and add the Provider user with Retrieve and List permissions.

Add the certificate to the IIS server: we can use either a self-signed or a CA-signed client certificate. I’ve added a signed AD domain certificate for the PVWA SSL connection, so I’m going to use the same certificate in my client Java code. To add the certificate to the Java key store: I have Java installed on my client machine (192.168.2.41), where my Java code will run to make the REST API calls. The below keytool command must be executed via CMD.
keytool -importcert -storepass changeit -keystore "C:\Program Files\Java\jdk1.8.0_181\jre\lib\security\cacerts" -alias compsrv01 -file certnew.cer
Java Code:
Note: if you look at the Java code there are no hard-coded credentials or tokens used to authenticate to the CCP; it simply uses the certificate for authentication.

Java Output: 

The outcome of the above REST API call is a JSON response from which we can get the user name and password (the Content field). These credentials can then be retrieved dynamically by the target scripts and applications, which use them to perform their tasks.
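The Java listing itself isn’t reproduced in this extract, but, as noted earlier, the same CCP call works from other languages as well. A rough PowerShell equivalent is sketched below; the host name, AppID, Safe and Object values are placeholders, as is the certificate subject filter.

[code language="powershell"]
# Placeholders - replace with your CCP host and the AppID/Safe/Object configured in PVWA.
$uri = "https://compsrv01.mydomain.local/AIMWebService/api/Accounts" +
       "?AppID=MyJavaApp&Safe=MyAppSafe&Object=MyServiceAccount"

# Pick up the client certificate used for authentication (subject filter is a placeholder).
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*compsrv01*" }

# No credentials or tokens here - authentication is the certificate plus the allowed client IP.
$response = Invoke-RestMethod -Uri $uri -Method Get -Certificate $cert
$response.UserName   # the account name
$response.Content    # the retrieved password
[/code]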

Summary:
With simple REST API calls, the credentials of CyberArk-vaulted accounts can be retrieved using a combination of certificate and client IP authentication. This helps eradicate embedded hard-coded credentials in application config files, scheduled jobs, scripts and so on.

Automatic Key Rotation for Azure Services

Securely managing keys for services that we use is an important, and sometimes difficult, part of building and running a cloud-based application. In general I prefer not to handle keys at all, and instead rely on approaches like managed service identities with role-based access control, which allow for applications to authenticate and authorise themselves without any keys being explicitly exchanged. However, there are a number of situations where we do need to use and manage keys, such as when we use services that don’t support role-based access control. One best practice that we should adopt when handling keys is to rotate (change) them regularly.

Key rotation is important to cover situations where your keys may have been compromised. Common attack vectors include keys having been committed to a public GitHub repository, a log file having a key accidentally written to it, or a disgruntled ex-employee retaining a key that had previously been issued. Changing the keys means that the scope of the damage is limited, and if keys aren’t changed regularly then these types of vulnerability can be severe.

In many applications, keys are used in complex ways and require manual intervention to rotate. But in other applications, it’s possible to completely automate the rotation of keys. In this post I’ll explain one such approach, which rotates keys every time the application and its infrastructure components are redeployed. Assuming the application is deployed regularly, for example using a continuous deployment process, we will end up rotating keys very frequently.

Approach

The key rotation process I describe here relies on the fact that the services we’ll be dealing with – Azure Storage, Cosmos DB, and Service Bus – have both a primary and a secondary key. Both keys are valid for any requests, and they can be changed independently of each other. During each release we will pick one of these keys to use, and we’ll make sure that we only use that one. We’ll deploy our application components, which will include referencing that key and making sure our application uses it. Then we’ll rotate the other key.

The flow of the script is as follows:

  1. Decide whether to use the primary key or the secondary key for this deployment. There are several approaches to do this, which I describe below.
  2. Deploy the ARM template. In our example, the ARM template is the main thing that reads the keys. The template copies the keys into an Azure Function application’s configuration settings, as well as into a Key Vault. You could, of course, output the keys and have your deployment script put them elsewhere if you want to.
  3. Run the other deployment logic. For our simple application we don’t need to do anything more than run the ARM template deployment, but for many deployments  you might copy your application files to a server, swap the deployment slots, or perform a variety of other actions that you need to run as part of your release.
  4. Test the application is working. The Azure Function in our example will perform some checks to ensure the keys are working correctly. You might also run other ‘smoke tests’ after completing your deployment logic.
  5. Record the key we used. We need to keep track of the keys we’ve used in this deployment so that the next deployment can use the other one.
  6. Rotate the other key. Now we can rotate the key that we are not using. The way that we rotate keys is a little different for each service.
  7. Test the application again. Finally, we run one more check to ensure that our application works. This is mostly a last check to ensure that we haven’t accidentally referenced any other keys, which would break our application now that they’ve been rotated.

We don’t rotate any keys until after we’ve already switched the application to using the other set of keys, so we should never end up in a situation where we’ve referenced the wrong keys from the Azure Functions application. However, if we wanted to have a true zero-downtime deployment then we could use something like deployment slots to allow for warming up our application before we switch it into production.

A Word of Warning

If you’re going to apply the principle in this post or the code below to your own applications, it’s important to be aware of a significant limitation. The particular approach described here only works if your deployments are completely self-contained, with the keys only used inside the deployment process itself. If you provide keys for your components to any other systems or third parties, rotating keys in this manner will likely cause their systems to break.

Importantly, any shared access signatures and tokens you issue will likely be broken by this process too. For example, if you provide third parties with a SAS token to access a storage account or blob, then rotating the account keys will cause the SAS token to be invalidated. There are some ways to avoid this, including generating SAS tokens from your deployment process and sending them out from there, or by using stored access policies; these approaches are beyond the scope of this post.

The next sections provide some detail on the important steps in the list above.

Step 1: Choosing a Key

The first step we need to perform is to decide whether we should use the primary or secondary keys for this deployment. Ideally each deployment would switch between them – so deployment 1 would use the primary keys, deployment 2 the secondary, deployment 3 the primary, deployment 4 the secondary, etc. This requires that we store some state about the deployments somewhere. Don’t forget, though, that the very first time we deploy the application we won’t have this state set. We need to allow for this scenario too.

The option that I’ve chosen to use in the sample is to use a resource group tag. Azure lets us use tags to attach custom metadata to most resource types, as well as to resource groups. I’ve used a custom tag named CurrentKeys to indicate whether the resources in that group currently use the primary or secondary keys.
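As a rough sketch of that approach (the CurrentKeys tag name comes from the post, while the exact read/write pattern and the $resourceGroupName variable are just one way of wiring it up):

[code language="powershell"]
# Read the CurrentKeys tag from the resource group and flip to the other key set.
$rg = Get-AzureRmResourceGroup -Name $resourceGroupName
$current = if ($rg.Tags) { $rg.Tags["CurrentKeys"] } else { $null }

# On the very first deployment the tag won't exist yet, so default to the primary keys.
$keysToUse = if ($current -eq "Primary") { "Secondary" } else { "Primary" }

# Record the choice (step 5 in the flow above) so the next deployment switches again.
$tags = if ($rg.Tags) { $rg.Tags } else { @{} }
$tags["CurrentKeys"] = $keysToUse
Set-AzureRmResourceGroup -Name $resourceGroupName -Tag $tags | Out-Null
[/code]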

There are other places you could store this state too – some sort of external configuration system, or within your release management tool. You could even have your deployment scripts look at the keys currently used by the application code, compare them to the keys on the actual target resources, and then infer which key set is being used that way.

A simpler alternative to maintaining state is to randomly choose to use the primary or secondary keys on every deployment. This may sometimes mean that you end up reusing the same keys repeatedly for several deployments in a row, but in many cases this might not be a problem, and may be worth the simplicity of not maintaining state.

Step 2: Deploy the ARM Template

Our ARM template includes the resource definitions for all of the components we want to create – a storage account, a Cosmos DB account, a Service Bus namespace, and an Azure Function app to use for testing. You can see the full ARM template here.

Note that we are deploying the Azure Function application code using the ARM template deployment method.

Additionally, we copy the keys for our services into the Azure Function app’s settings, and into a Key Vault, so that we can access them from our application.

Step 4: Testing the Keys

Once we’ve finished deploying the ARM template and completing any other deployment steps, we should test to make sure that the keys we’re trying to use are valid. Many deployments include some sort of smoke test – a quick test of core functionality of the application. In this case, I wrote an Azure Function that will check that it can connect to the Azure resources in question.

Testing Azure Storage Keys

To test connectivity to Azure Storage, we run a query against the storage API to check if a blob container exists. We don’t actually care if the container exists or not; we just check to see if we can successfully make the request:

Testing Cosmos DB Keys

To test connectivity to Cosmos DB, we use the Cosmos DB SDK to try to retrieve some metadata about the database account. Once again we’re not interested in the results, just in the success of the API call:

Testing Service Bus Keys

And finally, to test connectivity to Service Bus, we try to get a list of queues within the Service Bus namespace. As long as we get something back, we consider the test to have passed:

You can view the full Azure Function here.

Step 6: Rotating the Keys

One of the last steps we perform is to actually rotate the keys for the services. The way in which we request key rotations is different depending on the services we’re talking to.

Rotating Azure Storage Keys

Azure Storage provides an API that can be used to regenerate an account key. From PowerShell we can use the New-AzureRmStorageAccountKey cmdlet to access this API:
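For example (the resource group and storage account names are placeholders, and $otherKeyName is whichever of key1 or key2 the application is not currently using):

[code language="powershell"]
# Regenerate whichever storage account key is NOT currently referenced by the application.
New-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -KeyName $otherKeyName   # "key1" or "key2"
[/code]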

Rotating Cosmos DB Keys

For Cosmos DB, there is a similar API to regenerate an account key. There are no first-party PowerShell cmdlets for Cosmos DB, so we can instead use a generic Azure Resource Manager cmdlet to invoke the API:
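A sketch of that call (the account name, key kind variable and API version here are assumptions) looks like this:

[code language="powershell"]
# Regenerate the Cosmos DB key that is not currently in use ("primary" or "secondary").
Invoke-AzureRmResourceAction -Action regenerateKey `
    -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
    -ResourceGroupName $resourceGroupName `
    -ResourceName $cosmosDbAccountName `
    -Parameters @{ keyKind = $otherKeyKind } `
    -ApiVersion "2015-04-08" `
    -Force
[/code]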

Rotating Service Bus Keys

Service Bus provides an API to regenerate the keys for a specified authorization rule. For this example we’re using the default RootManageSharedAccessKey authorization rule, which is created automatically when the Service Bus namespace is provisioned. The PowerShell cmdlet New-AzureRmServiceBusKey can be used to access this API:
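For example (the namespace name is a placeholder, and $otherKeyName is PrimaryKey or SecondaryKey, whichever is not currently in use):

[code language="powershell"]
# Regenerate the Service Bus key that is not currently in use.
New-AzureRmServiceBusKey -ResourceGroupName $resourceGroupName `
    -Namespace $serviceBusNamespace `
    -Name "RootManageSharedAccessKey" `
    -RegenerateKey $otherKeyName   # "PrimaryKey" or "SecondaryKey"
[/code]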

You can see the full script here.

Conclusion

Key management and rotation is often a painful process, but if your application deployments are completely self-contained then the process described here is one way to ensure that you continuously keep your keys changing and up-to-date.

You can download the full set of scripts and code for this example from GitHub.

SharePoint Integration for Health Care eLearning – Moving LMS to the Cloud

Health care systems often face the challenge of being unkept and unmaintained, or managed by too many hands without consistency in content, while harbouring outdated resources. A lot of these legacy training and development systems also wear the pain of constant record churn without a supportable records management system. With the accrual of these records over time forming a ‘Big Data’ concern, modernising these eLearning platforms may be the right call to action for medical professionals and researchers. Gone should be the days of manually updating Web Vista on a regular basis.
Cloud solutions for health care and research are well on their way, but better utilisation of these new technologies will play a key factor in how much confidence health professionals invest in IT as a means for departmental education and career development moving forward.
Why SharePoint Makes Sense (versus further developing Legacy Systems)
Every day, each document, slide image and scan matters when the paying customer’s dollar is placed on your proficiency to solve pressing health care challenges. Compliance and availability of resources aren’t enough – streamlined and collaborative processes, from quality control to customer relationship management, module testing and internal review are all minimum requirements for building a modern, online eLearning centre i.e. a ‘Learning Management System’.
ELearningIndustry.com has broken down ten key components that a Learning Management System (LMS) requires in order to be effective. From previous cases, working in developing an LMS, or OLC (Online Learning Centre) Site using SharePoint, these ten components can indeed be designed within the online platform:

  1. Strong Analytics and Report Generation – for the purposes of eLearning (e.g. dashboards which contain progress reports, exam scores and other learner data), SharePoint workflows allow for progress tracking of training and of users’ engagement with content and course materials, while versioning ensures that learning managers, content builders (subject matter experts) and the learners themselves are on the same page (literally).
  2. Course Authoring Capability – SharePoint access and user permissions are directly linked to your Active Directory. Access to content can be managed, either from a hierarchical standpoint or role-based if we’re talking content authors. Furthermore, learners can have access to specific ‘modules’ allocated to them based on department, vocation, etc.
  3. Scalable Content Hosting – flexibility of content, workflows or plug-ins (using ‘app parts’) to adapt functionality to welcome new learners where learning requirements may shift to align with organisational needs.
  4. Certifications – due to the availability and popularity of SharePoint Online in large/global enterprises, certifications for anywhere from smart to super users are available from Microsoft-affiliated authorities or verified third parties.
  5. Integrations (with other SaaS software, communication tools, etc.) – allow for exchange of information through APIs for content feeds and record management, e.g. with virtual classrooms, HR systems, Google Analytics.
  6. Community and Collaboration – the added benefit of integrated and packaged Microsoft apps, to create channels for live group study or learner feedback, for instance (Skype for Business, Yammer, Microsoft Teams).
  7. White Labelling vs. Branding – UI friendly, fully customisable appearance. The modern layout is design flexible to allow for the institute’s branding to be proliferated throughout the tenant’s SharePoint sites.
  8. Mobile Capability – SharePoint has both a mobile app and can be designed to be responsive to multiple mobile device types.
  9. Customer Support and Success – as it is a common enterprise tool, support by local IT should be feasible, with any critical product support inquiries routed to Microsoft.
  10. Support of the Institute’s Mission and Culture – in health care services, where the churn of information and data pushes for an innovative, rapid response, SharePoint can be designed to meet these needs where, as an LMS, it can adapt to continuously represent the expertise and knowledge of health professionals.

Outside of the above, the major advantage for health services to make the transition to the cloud is the improved information security experience. There are still plenty of cases today where patients are at risk of medical and financial identity fraud due to inadequate information security and manual (very implicitly hands-on) records management processes. Single platform databasing, as well as the from-anywhere accessibility of SharePoint as a Cloud platform meets the challenge of maintaining networks, PCs, servers and databases, which can be fairly extensive due to many health care institutions existing beyond hospitals, branching off into neighbourhood clinics, home health providers and off-site services.

Psychodynamics Revisited: Data Privacy

How many of you, between waking up and your first cup of hot, caffeinated beverage, told the world something about yourselves online? Whether it be a social media status update, an Instagram photo or story post or even a tweak to your personal profile on LinkedIn. Maybe, yes, maybe no, although I would hedge my bets that you’ve at least checked your notifications, emails or had a scroll through the newsfeed.
Another way to view this question would be: how many of you interacted with the internet in some way since waking up? The likeliness is probably fairly high.
In my first blog looking into the topic of Psychodynamics, I solely focused on our inseparable connection to modern technologies – smartphones, tablets, etc. – and the access that these facilitate for us to interact with the society and the world. For most of us, this is an undeniable truth of daily life. A major nuance of this relationship between people and technology and one that I think we are probably somewhat in denial about is the security and privacy of our personal information online.
To the Technology Community at large, it’s no secret that our personal data is proliferated by governments and billion-dollar corporations on a constant basis. Whatever information – and more importantly, information on that information – is desired goes to the highest bidder, or for the best market rate. Facebook, for instance, doesn’t sell your information outright. That would be completely unethical and would devalue their brand trust. What it does is sell access to you, to the advertisers and large corporations connected through it, which in turn gives them valuable consumer data to advertise, target and sell back to you based on your online habits.
My reasoning for raising this topic in regard to psychodynamics and technological-behavioral patterns is for consultants and tech professionals to consider what data privacy means to our/your valued clients.
I was fortunate to participate this past week in a seminar hosted by the University of New South Wales’ Grand Challenges Program, established to promote research in technology and human development. The seminar featured guest speaker Professor Joe Cannataci, the UN’s Special Rapporteur on the right to privacy, who’s in town to discuss recent privacy issues with our Federal Government, specifically amid concerns about the security of the Government’s My Health Record system (see the full discussion here on ABC’s RN Breakfast Show). Two key points raised during the seminar, and from Professor Cannataci’s general insights, were:

  1. Data analytics targeting individuals/groups are focused largely on the metadata, not the content data of what an individual or group of individuals is producing. What this means is that businesses are more likely to not look at content as scalable unless there are metrics and positive/viral trends in viewership/content consumption patterns.
  2. Technology, its socialisation and personal information privacy issues are no longer specific to a generation — “boomers”, “millennials” — or context (though countries like China and Russia prohibit and filter certain URLs and web services). That is to say, in the daily working routine of an individual, their engagement with technology and the push to submit data to get a task done may, in some instances, formulate an unconscious processing pattern over time where we get used to sharing our personal information, adopting the mindset “well, I have nothing to hide”. I believe we’ve likely all been guilty of it before. Jane might not think about how sharing her client’s files with her colleague Judy to assist with advising on a matter may put their employer in breach of a binding confidentiality agreement.

 
My recent projects saw heavy amounts of content extraction and planning that didn’t immediately consider metadata trends or what business departments’ likely needs were for this content, focusing on documented business processes over data usage patterns. Particularly when working with cloud technologies that were new to the given clients, there was only a very basic understanding of what this entailed in regards to data privacy and the legalities around it (client sharing, data visibility, GDPR, to name a few). Perhaps a consideration here is to investigate further how these trends play into, and possibly deviate, business processes, rather than looking at them as separate factors in information processing.
Work is work, but so is our duty to due diligence, best practices and understanding how we, as technology professionals, can absolve some of these ethical issues in today’s technology landscape.
For more on Professor Joe Cannataci, please see his profile on the United Nations page.
Credit to UNSW Grand Challenges Program. For more info, please see their website or follow their Facebook page (irony intended)

Securing your Web front-end with Azure Application Gateway Part 2

In part one of this post we looked at configuring an Azure Application Gateway to secure your web application front-end; it is available here.
In part two we will look at some additional post-configuration tasks and at how to investigate whether the WAF is blocking any of our application traffic and how to check for this.
First up we will look at configuring some NSG (Network Security Group) inbound and outbound rules for the subnet that the Application Gateway is deployed within.
The inbound rules that you will require are below.

  • Allow HTTPS Port 443 from any source.
  • Allow HTTP Port 80 from any source (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 from your Web Server subnet.
  • Allow Application Gateway Health API ports 65503-65534. These are required for correct operation of your Application Gateway.
  • Deny all other traffic; set this rule with a high priority value, i.e. 4000-4096.

The outbound rules that you will require are below; a short PowerShell sketch showing how a couple of these rules can be created follows the list.

  • Allow HTTPS Port 443 from your Application Gateway to any destination.
  • Allow HTTP Port 80 from your Application Gateway to any destination (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 to your Web Server subnet.
  • Allow Internet traffic using the Internet traffic tag.
  • Deny all other traffic; set this rule with a high priority value, i.e. 4000-4096.
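As an illustration only (the rule and NSG names, priorities, location and resource group variable are all placeholders), a couple of the inbound rules above could be created with PowerShell like this:

[code language="powershell"]
# Allow HTTPS from anywhere to the Application Gateway subnet.
$httpsRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTPS-Inbound" `
    -Direction Inbound -Priority 100 -Access Allow -Protocol Tcp `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "443"

# Allow the Application Gateway health API ports.
$healthRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-AppGw-Health" `
    -Direction Inbound -Priority 110 -Access Allow -Protocol "*" `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "65503-65534"

# Create the NSG with those rules attached (add the remaining rules in the same way).
New-AzureRmNetworkSecurityGroup -Name "PRD-APPGW-NSG" -ResourceGroupName $resourceGroup `
    -Location "australiaeast" -SecurityRules $httpsRule, $healthRule
[/code]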

Now we need to configure the Application Gateway to write Diagnostic Logs to a Storage Account. To do this open the Application Gateway from within the Azure Portal, find the Monitoring section and click on Diagnostic Logs.

Click on Add diagnostic settings and browse to the Storage Account you wish to write logs to, select all log types and save the changes.
Now that the Application Gateway is configured to store diagnostic logs (we need the ApplicationFirewallLog) you can start testing your web front-end. To do this, firstly you should set the WAF to “Detection” mode, which will log any traffic that would have been blocked. This setting is only recommended for testing purposes and should not be a permanent state.
To change this setting open your Application Gateway from within the Azure Portal and click Web Application Firewall under settings.

Change the Firewall mode to “Detection” for testing purposes. Save the changes.

Now you can start your web front-end testing. Any traffic that would be blocked will now be allowed, however it will still create a log entry showing you the details for the traffic that would be blocked.
Once testing is completed open your Storage Account from within the Azure Portal and browse to the insights-logs-applicationgatewayfirewalllog container, continue opening the folder structure and find the date and time of the testing. The log file is named PT1H.json, download it to your local computer.
Open the PT1H.json file. Any entries for traffic that would be blocked will look similar to the below.
[code language="javascript"]
{
  "resourceId": "/SUBSCRIPTIONS/....",
  "operationName": "ApplicationGatewayFirewall",
  "time": "2018-07-03T03:30:59Z",
  "category": "ApplicationGatewayFirewallLog",
  "properties": {
    "instanceId": "ApplicationGatewayRole_IN_1",
    "clientIp": "202.141.210.52",
    "clientPort": "0",
    "requestUri": "/uri",
    "ruleSetType": "OWASP",
    "ruleSetVersion": "3.0",
    "ruleId": "942430",
    "message": "Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)",
    "action": "Detected",
    "site": "Global",
    "details": {
      "message": "Warning. Pattern match \",
      "file": "rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf",
      "line": "1002"
    },
    "hostname": "website.com.au"
  }
}
[/code]
This will give you useful information to either fix your application or disable a rule that is blocking traffic; the “ruleId” section of the log will show you the rule that requires action. Rules should only be disabled temporarily while you remediate your application. They can be disabled/enabled from within the Web Application Firewall tab within Application Gateway; just make sure the “Advanced Rule Configuration” box is ticked so that you can see them.
This process of testing and fixing code/disabling rules should continue until you can complete a test without any errors showing in the logs. Once no errors occur you can change the Web Application Firewall mode back to “Prevention” mode which will make the WAF actively block traffic that does not pass the rule sets.
Something important to note is the below log entry type with a ruleId of “0”. This error would need to be resolved by remediating the code, as the default rules cannot be changed within the WAF. Microsoft are working on changing this; however, at the time of writing it cannot be done, as the default data length cannot be changed. Sometimes this will occur with a part of the application that cannot be resolved; if this is the case you would need to look at another WAF product, such as a Barracuda appliance.
[code language="javascript"]
{
  "resourceId": "/SUBSCRIPTIONS/...",
  "operationName": "ApplicationGatewayFirewall",
  "time": "2018-07-03T01:21:44Z",
  "category": "ApplicationGatewayFirewallLog",
  "properties": {
    "instanceId": "ApplicationGatewayRole_IN_0",
    "clientIp": "1.136.111.168",
    "clientPort": "0",
    "requestUri": "/..../api/document/adddocument",
    "ruleSetType": "OWASP",
    "ruleSetVersion": "3.0",
    "ruleId": "0",
    "message": "",
    "action": "Blocked",
    "site": "Global",
    "details": {
      "message": "Request body no files data length is larger than the configured limit (131072).. Deny with code (413)",
      "data": "",
      "file": "",
      "line": ""
    },
    "hostname": "website.com.au"
  }
}
[/code]
 
In this post we looked at some post-configuration tasks for Application Gateway, such as configuring NSG rules to further protect the network, configuring diagnostic logging, and checking the Web Application Firewall logs for application traffic that would be blocked by the WAF. The Application Gateway can be a good alternative to dedicated appliances as it is easier to configure and manage. However, in some cases where more control of WAF rule sets is required, a dedicated WAF appliance may be required.
Hopefully this two part series helps you with your decision making when it comes to securing your web front-end applications.

Securing your Web front-end with Azure Application Gateway Part 1

I have just completed a project with a customer who was using Azure Application Gateway to secure their web front-end, and thought it would be good to post some findings.
This is part one in a two part post looking at how to secure a web front-end using Azure Application Gateway with the WAF component enabled. In this post I will explain the process for configuring the Application Gateway once deployed. You can deploy the Application Gateway from an ARM Template, Azure PowerShell or the portal. To be able to enable the WAF component you must use a Medium or Large instance size for the Application Gateway.
Using Application Gateway allows you to remove the need for your web front-end to have a public endpoint assigned to it, for instance if it is a Virtual Machine then you no longer need a Public IP address assigned to it. You can deploy Application Gateway in front of Virtual Machines (IaaS) or Web Apps (PaaS).
An overview of how this will look is shown below. The Application Gateway requires its own subnet which no other resources can be deployed to. The web server (Virtual Machine) can be assigned to a separate subnet, if using a web app no subnet is required.
The benefits we will receive from using Application Gateway are:

  • Remove the need for a public endpoint from our web server.
  • End-to-end SSL encryption.
  • Automatic HTTP to HTTPS redirection.
  • Multi-site hosting, though in this example we will configure a single site.
  • In-built WAF solution utilising OWASP core rule sets 3.0 or 2.2.9.

To follow along you will require Azure PowerShell module version 3.6 or later. You can install or upgrade by following this link.
Before starting you need to make sure that an Application Gateway with an instance size of Medium or Large has been deployed with the WAF component enabled and that the web server or web app has been deployed and configured.
Now open PowerShell ISE and login to your Azure account using the below command.
[code language="powershell"]
Login-AzureRmAccount
[/code]
Now we need to set our variables to work with. These variables are your Application Gateway name, the resource group where your Application Gateway is deployed, your Backend Pool name and IP, your HTTP and HTTPS Listener names, your host name (website name), the HTTP and HTTPS rule names, your front-end (private) and back-end (public) SSL certificate names, along with your private certificate password.
NOTE: The Private certificate needs to be in PFX format and your Public certificate in CER format.
Change these to suit your environment and copy both your pfx and cer certificate files to C:\Temp\Certs on your computer.
[code language=”powershell”]
# Application Gateway name.
[string]$ProdAppGw = "PRD-APPGW-WAF"
# The resource group where the Application Gateway is deployed.
[string]$resourceGroup = "PRD-RG"
# The name of your Backend Pool.
[string]$BEPoolName = "BackEndPool"
# The IP address of your web server or URL of web app.
[string]$BEPoolIP = "10.0.1.10"
# The name of the HTTP Listener.
[string]$HttpListener = "HTTPListener"
# The name of the HTTPS Listener.
[string]$HttpsListener = "HTTPSListener"
# Your website hostname/URL.
[string]$HostName = "website.com.au"
# The HTTP Rule name.
[string]$HTTPRuleName = "HTTPRule"
# The HTTPS Rule name.
[string]$HTTPSRuleName = "HTTPSRule"
# SSL certificate name for your front-end (Private cert pfx).
[string]$FrontEndSSLName = "Private_SSL"
# SSL certificate name for your back-end (Public cert cer).
[string]$BackEndSSLName = "Public_SSL"
# Password for front-end SSL (Private cert pfx).
[string]$sslPassword = "<Enter your Private Certificate pfx password here.>"
[/code]
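Before moving on, and assuming you are already logged in with the variables above set, the following optional check (not one of the original steps) confirms the deployed gateway is using a WAF SKU and that the firewall component is enabled.
[code language="powershell"]
# Optional sanity check of the existing deployment.
$AppGwCheck = Get-AzureRmApplicationGateway -Name $ProdAppGw -ResourceGroupName $resourceGroup

# The SKU tier should be "WAF" with a name of "WAF_Medium" or "WAF_Large".
$AppGwCheck.Sku

# Shows whether the WAF is enabled and which mode (Detection/Prevention) it is running in.
Get-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $AppGwCheck
[/code]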

Our first step is to configure the Front and Back end HTTPS settings on the Application Gateway.

Save the Application Gateway as a variable.

[code language=”powershell”]
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
-ResourceGroupName $resourceGroup
[/code]

Add the Front-end (Private) SSL certificate. If you have any issues with this step, you can upload the certificate from within the Azure Portal by creating a new Listener.

[code language=”powershell”]
Add-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGw `
-Name $FrontEndSSLName -CertificateFile "C:\Temp\Certs\PrivateCert.pfx" `
-Password $sslPassword
[/code]

Save the certificate as a variable.

[code language=”powershell”]
$AGFECert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGW `
-Name $FrontEndSSLName
[/code]

Configure the front-end port for SSL.

[code language=”powershell”]
Add-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443" `
-Port 443
[/code]

Add the back-end (Public) SSL certificate.

[code language=”powershell”]
Add-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName `
-CertificateFile "C:\Temp\Certs\PublicCert.cer"
[/code]

Save the back-end (Public) SSL as a variable.

[code language=”powershell”]
$AGBECert = Get-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName
[/code]

Configure back-end HTTPS settings.

[code language=”powershell”]
Add-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings" `
-Port 443 `
-Protocol Https `
-CookieBasedAffinity Enabled `
-AuthenticationCertificates $AGBECert
[/code]

Apply the settings to the Application Gateway.

[code language=”powershell”]
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
[/code]

The next stage is to configure the back-end pool to connect to your Virtual Machine or Web App. This example uses the IP address of the NIC attached to the web server VM. If you are using a web app, you can configure it to accept traffic only from the Application Gateway by setting an IP restriction on the web app to the Application Gateway public IP address; a rough sketch of this is shown below.
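As an aside, the snippet below is only an illustrative sketch of that web app IP restriction and is not part of the gateway configuration. The web app name, resource group and gateway public IP are hypothetical placeholders, and it uses the resource provider API via Set-AzureRmResource; adjust the values and API version to suit your environment.
[code language="powershell"]
# Hypothetical values - replace with your own web app, resource group and gateway public IP.
$webAppName    = "prd-webapp"
$webAppRG      = "PRD-RG"
$appGwPublicIp = "52.0.0.10"

# Allow inbound traffic to the web app from the Application Gateway's public IP only.
$props = @{
    ipSecurityRestrictions = @(
        @{ ipAddress = $appGwPublicIp; subnetMask = "255.255.255.255" }
    )
}

# Read the web app's site configuration resource and patch the restriction onto it.
$config = Get-AzureRmResource -ResourceGroupName $webAppRG `
    -ResourceType "Microsoft.Web/sites/config" `
    -ResourceName "$webAppName/web" `
    -ApiVersion 2016-08-01

Set-AzureRmResource -ResourceId $config.ResourceId `
    -Properties $props `
    -ApiVersion 2016-08-01 `
    -UsePatchSemantics -Force
[/code]
With that aside covered, continue with the back-end pool configuration.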

Save the Application Gateway as a variable.

[code language=”powershell”]
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
-ResourceGroupName $resourceGroup
[/code]

Add the Backend Pool Virtual Machine or Web App. This can be an IP address (used here with -BackendIPAddresses) or an FQDN (use the -BackendFqdns parameter instead).

[code language=”powershell”]
Add-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGw `
-Name $BEPoolName `
-BackendIPAddresses $BEPoolIP
[/code]

Apply the settings to the Application Gateway.

[code language=”powershell”]
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
[/code]

The next steps are to configure the HTTP and HTTPS Listeners.

Save the Application Gateway as a variable.

[code language=”powershell”]
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
-ResourceGroupName $resourceGroup
[/code]

Save the front-end port as a variable – port 80.

[code language=”powershell”]
$AGFEPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort"
[/code]

Save the front-end IP configuration as a variable.

[code language=”powershell”]
$AGFEIPConfig = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $AppGw `
-Name "appGatewayFrontendIP"
[/code]

Add the HTTP Listener for your website.

[code language=”powershell”]
Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener `
-Protocol Http `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFEPort `
-HostName $HostName
[/code]

Save the HTTP Listener for your website as a variable.

[code language=”powershell”]
$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPListener
[/code]

Save the front-end SSL port as a variable – port 443.

[code language=”powershell”]
$AGFESSLPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443"
[/code]

Add the HTTPS Listener for your website.

[code language=”powershell”]
Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPSListener `
-Protocol Https `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFESSLPort `
-HostName $HostName `
-RequireServerNameIndication true `
-SslCertificate $AGFECert
[/code]

Apply the settings to the Application Gateway.

[code language=”powershell”]
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
[/code]

The final part of the configuration is to configure the HTTP and HTTPS rules and the HTTP to HTTPS redirection.

First configure the HTTPS rule.

Save the Application Gateway as a variable.

[code language=”powershell”]
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
-ResourceGroupName $resourceGroup
[/code]

Save the Backend Pool as a variable.

[code language=”powershell”]
$BEP = Get-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGW `
-Name $BEPoolName
[/code]

Save the HTTPS Listener as a variable.

[code language=”powershell”]
$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpsListener
[/code]

Save the back-end HTTPS settings as a variable.

[code language=”powershell”]
$AGHTTPS = Get-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings"
[/code]

Add the HTTPS rule.

[code language=”powershell”]
Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPSRuleName `
-RuleType Basic `
-BackendHttpSettings $AGHTTPS `
-HttpListener $AGSSLListener `
-BackendAddressPool $BEP
[/code]

Apply the settings to the Application Gateway.

[code language=”powershell”]
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
[/code]

Now configure the HTTP to HTTPS redirection and the HTTP rule with the redirection applied.

Save the Application Gateway as a variable.

[code language=”powershell”]
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
-ResourceGroupName $resourceGroup
[/code]

Save the HTTPS Listener as a variable.

[code language=”powershell”]
$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpsListener
[/code]

Add the HTTP to HTTPS redirection.

[code language=”powershell”]
Add-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-RedirectType Permanent `
-TargetListener $AGSSLListener `
-IncludePath $true `
-IncludeQueryString $true `
-ApplicationGateway $AppGw
[/code]

Apply the settings to the Application Gateway.

[code language=”powershell”]
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
[/code]

Save the Application Gateway as a variable.

[code language=”powershell”]
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
-ResourceGroupName $resourceGroup
[/code]

Save the redirect as a variable.

[code language=”powershell”]
$Redirect = Get-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-ApplicationGateway $AppGw
[/code]

Save the HTTP Listener as a variable.

[code language=”powershell”]
$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener
[/code]

Add the HTTP rule with redirection to HTTPS.

[code language=”powershell”]
Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPRuleName `
-RuleType Basic `
-HttpListener $AGListener `
-RedirectConfiguration $Redirect
[/code]

Apply the settings to the Application Gateway.

[code language=”powershell”]
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
[/code]
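As an optional final check (not part of the original steps), you can list the listeners and routing rules on the gateway to confirm that both the HTTP and HTTPS paths are now in place.
[code language="powershell"]
# Optional verification - both listeners and both routing rules should be listed.
$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw -ResourceGroupName $resourceGroup

Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGw |
    Select-Object Name, Protocol

Get-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw |
    Select-Object Name, RuleType
[/code]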
 
In this post we covered how to configure Azure Application Gateway to secure a web front-end, whether it is running on Virtual Machines or Web Apps. We configured the gateway for end-to-end SSL encryption and automatic HTTP to HTTPS redirection, removing this overhead from the web server.

In part two we will look at some additional post-configuration tasks and how to make sure your web application works with the WAF component.
