CyberArk PAM- Eliminate Hard Coded Credentials using Java REST API Calls

In many organisations, hard-coded credentials are still stored in application config files (for application-to-application connections) and in scripts such as scheduled tasks. These are generally highly privileged service accounts whose passwords are set to never change.
Hard-coded credentials are an ongoing risk to an organisation's security posture. CyberArk provides a solution called Application Identity Manager, which allows the passwords of privileged service accounts to be stored centrally in the Password Vault, logged, rotated and retrieved in a number of different ways.
CyberArk supports two approaches to eliminating hard-coded credentials:

  1. Credential Provider (CP). This requires an agent to be installed on each server where the application or script runs.
  2. Central Credential Provider (CCP).

In this post I'll go into more detail on how to retrieve credentials via the CCP using a Java REST API call. Applications that require credentials to access a remote device or to execute another application remotely can request the relevant credentials from the CCP via REST or SOAP calls.

Pre-requisites: The CCP installation consists of two parts:
1) The Credential Provider for Windows (2012 R2, 2016)
2) The Central Credential Provider web service (IIS 6, 7.5, or 10)
Client Requirements:
The Central Credential Provider works with applications on any operating system, platform or framework that can invoke REST or SOAP web service requests.
CCP Supported Client Authentication:
1) Client certificates
2) The address of the machine where the application is running
3) Windows domain Operating System user
Overview
In this example I've used Java to make a REST API call to the CCP web service, using certificate and client IP authentication. The same approach works with any programming language, such as .NET, Python or PowerShell.
1. On-board the required application into CyberArk via the Password Vault Web Access (PVWA) web portal.
2. Create the required platform and Safe.
3. On-board the required privileged service account into CyberArk via PVWA.
4. Add the application created in step 1 as a member of the Safe with the Retrieve permission enabled.
5. Add the Provider users, which were created as part of the initial CCP installation, as members of the Safe.
6. Add the certificate to the CCP's IIS server; the same certificate will be used for client authentication.
7. Add the certificate to the Java key store using the Java keytool command.
8. Run the Java code.
Implementations:
On-board the application => Log in to PVWA => go to the Applications tab => Add Application: provide the application name, owner and other details. Go to the Allowed Machines tab => enter the IP address of the machine where the Java code will run. I've also added a time restriction, which ensures credentials are released only within that window after successful authentication.

Create the Platform and Safe: Log in to PVWA => go to Administration => Platform Management => select Windows Domain Accounts => duplicate it and modify it according to your password requirements.

On-board the account and add members: Log in to PVWA => go to Accounts => Account View => Add Account => enter the actual privileged account details, selecting the Safe and platform created in the previous steps. In this example I've chosen an AD account as my target, but accounts from any platform can be used, since CyberArk supports almost all platforms.

Add the application as a Safe member: Log in to PVWA => go to Policies => Access Control => select the Safe => Edit Members => add the application as a member with Retrieve permission, and add the Provider user with Retrieve and List permissions.

Add the certificate to the IIS server: we can use either a self-signed or a CA-signed client certificate. I've added a signed AD domain certificate for the PVWA SSL connection, so I'm going to use the same certificate in my client Java code. To add the certificate to the Java key store: Java is installed on my client machine (192.168.2.41), where the Java code will run to make the REST API calls. The keytool command below must be executed from CMD.
keytool -importcert -storepass changeit -keystore "C:\Program Files\Java\jdk1.8.0_181\jre\lib\security\cacerts" -alias compsrv01 -file certnew.cer
Java Code:
Note: if you look at the Java code, there are no hard-coded credentials or tokens used to authenticate to the CCP; it simply uses the certificate for authentication.
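The original Java snippet was published as an image in the post, so it isn't reproduced here. As a rough illustration of the same CCP call – which, as noted above, works from any language that can issue REST requests – below is a minimal PowerShell sketch. The URL, AppID, Safe and Object values are placeholders for the items on-boarded earlier, and the certificate lookup assumes the client certificate has been imported into the current user's store.

# Placeholder values - substitute the Application ID, Safe and account object on-boarded earlier.
$ccpUrl = "https://compsrv01.yourdomain.com/AIMWebService/api/Accounts" +
          "?AppID=JavaRestApp&Safe=ServiceAccountSafe&Object=ADSvcAccount01"

# The same client certificate that was added to the CCP's IIS server, read from the local store.
$cert = Get-ChildItem Cert:\CurrentUser\My |
        Where-Object { $_.Subject -like "*compsrv01*" } | Select-Object -First 1

# No credentials or tokens are sent - the certificate (plus the allowed machine IP) authenticates the call.
$response = Invoke-RestMethod -Uri $ccpUrl -Method Get -Certificate $cert

# The JSON response contains the account details; Content holds the password.
$response.UserName
$response.Content

From here a script or application can reference $response.Content wherever it previously used a hard-coded password.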

Java Output: 

The outcome of the above REST API call is JSON, from which we can get the user name and password (the Content field). These credentials can then be referenced dynamically by the target scripts and applications, which use them to perform their tasks.

Summary:
With simple REST API calls, the credentials of CyberArk-vaulted accounts can be retrieved using a combination of certificate and client IP authentication. This is helpful for eradicating hard-coded credentials embedded in application config files, scheduled jobs, scripts and so on.

Automatic Key Rotation for Azure Services

Securely managing keys for services that we use is an important, and sometimes difficult, part of building and running a cloud-based application. In general I prefer not to handle keys at all, and instead rely on approaches like managed service identities with role-based access control, which allow applications to authenticate and authorise themselves without any keys being explicitly exchanged. However, there are a number of situations where we do need to use and manage keys, such as when we use services that don't support role-based access control. One best practice we should adopt when handling keys is to rotate (change) them regularly.

Key rotation is important to cover situations where your keys may have been compromised. Common attack vectors include keys having been committed to a public GitHub repository, a key accidentally written to a log file, or a disgruntled ex-employee retaining a key that had previously been issued. Changing the keys limits the scope of the damage; if keys aren't changed regularly, these types of vulnerability can be severe.

In many applications, keys are used in complex ways and require manual intervention to rotate. But in other applications, it’s possible to completely automate the rotation of keys. In this post I’ll explain one such approach, which rotates keys every time the application and its infrastructure components are redeployed. Assuming the application is deployed regularly, for example using a continuous deployment process, we will end up rotating keys very frequently.

Approach

The key rotation process I describe here relies on the fact that the services we’ll be dealing with – Azure Storage, Cosmos DB, and Service Bus – have both a primary and a secondary key. Both keys are valid for any requests, and they can be changed independently of each other. During each release we will pick one of these keys to use, and we’ll make sure that we only use that one. We’ll deploy our application components, which will include referencing that key and making sure our application uses it. Then we’ll rotate the other key.

The flow of the script is as follows:

  1. Decide whether to use the primary key or the secondary key for this deployment. There are several approaches to do this, which I describe below.
  2. Deploy the ARM template. In our example, the ARM template is the main thing that reads the keys. The template copies the keys into an Azure Function application’s configuration settings, as well as into a Key Vault. You could, of course, output the keys and have your deployment script put them elsewhere if you want to.
  3. Run the other deployment logic. For our simple application we don't need to do anything more than run the ARM template deployment, but for many deployments you might copy your application files to a server, swap the deployment slots, or perform a variety of other actions that you need to run as part of your release.
  4. Test the application is working. The Azure Function in our example will perform some checks to ensure the keys are working correctly. You might also run other ‘smoke tests’ after completing your deployment logic.
  5. Record the key we used. We need to keep track of the keys we’ve used in this deployment so that the next deployment can use the other one.
  6. Rotate the other key. Now we can rotate the key that we are not using. The way that we rotate keys is a little different for each service.
  7. Test the application again. Finally, we run one more check to ensure that our application works. This is mostly a last check to ensure that we haven’t accidentally referenced any other keys, which would break our application now that they’ve been rotated.

We don’t rotate any keys until after we’ve already switched the application to using the other set of keys, so we should never end up in a situation where we’ve referenced the wrong keys from the Azure Functions application. However, if we wanted to have a true zero-downtime deployment then we could use something like deployment slots to allow for warming up our application before we switch it into production.

A Word of Warning

If you're going to apply the principle in this post or the code below to your own applications, it's important to be aware of a significant limitation. The approach described here only works if your deployments are completely self-contained, with the keys only used inside the deployment process itself. If you provide keys for your components to any other systems or third parties, rotating keys in this manner will likely cause their systems to break.

Importantly, any shared access signatures and tokens you issue will likely be broken by this process too. For example, if you provide third parties with a SAS token to access a storage account or blob, then rotating the account keys will cause the SAS token to be invalidated. There are some ways to avoid this, including generating SAS tokens from your deployment process and sending them out from there, or by using stored access policies; these approaches are beyond the scope of this post.

The next sections provide some detail on the important steps in the list above.

Step 1: Choosing a Key

The first step we need to perform is to decide whether we should use the primary or secondary keys for this deployment. Ideally each deployment would switch between them – so deployment 1 would use the primary keys, deployment 2 the secondary, deployment 3 the primary, deployment 4 the secondary, etc. This requires that we store some state about the deployments somewhere. Don’t forget, though, that the very first time we deploy the application we won’t have this state set. We need to allow for this scenario too.

The option that I’ve chosen to use in the sample is to use a resource group tag. Azure lets us use tags to attach custom metadata to most resource types, as well as to resource groups. I’ve used a custom tag named CurrentKeys to indicate whether the resources in that group currently use the primary or secondary keys.

There are other places you could store this state too – some sort of external configuration system, or within your release management tool. You could even have your deployment scripts look at the keys currently used by the application code, compare them to the keys on the actual target resources, and then infer which key set is being used that way.

A simpler alternative to maintaining state is to randomly choose to use the primary or secondary keys on every deployment. This may sometimes mean that you end up reusing the same keys repeatedly for several deployments in a row, but in many cases this might not be a problem, and may be worth the simplicity of not maintaining state.
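As a rough sketch of how the CurrentKeys tag approach could look (the resource group name is a placeholder, and for brevity this also records the choice straight away, which the flow above does in step 5):

# Read the CurrentKeys tag from the resource group; default to the primary keys on first deployment.
$resourceGroupName = "my-app-rg"   # placeholder
$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName
$tags = $resourceGroup.Tags
if ($null -eq $tags) { $tags = @{} }

$lastUsed = if ($tags.ContainsKey('CurrentKeys')) { $tags['CurrentKeys'] } else { 'Secondary' }

# Alternate: use whichever key set was not used by the previous deployment.
$keysToUse = if ($lastUsed -eq 'Primary') { 'Secondary' } else { 'Primary' }

# Record the choice back on the resource group so the next deployment can read it.
$tags['CurrentKeys'] = $keysToUse
Set-AzureRmResourceGroup -Name $resourceGroupName -Tag $tags | Out-Null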

Step 2: Deploy the ARM Template

Our ARM template includes the resource definitions for all of the components we want to create – a storage account, a Cosmos DB account, a Service Bus namespace, and an Azure Function app to use for testing. You can see the full ARM template here.

Note that we are deploying the Azure Function application code using the ARM template deployment method.

Additionally, we copy the keys for our services into the Azure Function app’s settings, and into a Key Vault, so that we can access them from our application.

Step 4: Testing the Keys

Once we’ve finished deploying the ARM template and completing any other deployment steps, we should test to make sure that the keys we’re trying to use are valid. Many deployments include some sort of smoke test – a quick test of core functionality of the application. In this case, I wrote an Azure Function that will check that it can connect to the Azure resources in question.

Testing Azure Storage Keys

To test connectivity to Azure Storage, we run a query against the storage API to check if a blob container exists. We don’t actually care if the container exists or not; we just check to see if we can successfully make the request:
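The post's actual check runs inside the Azure Function linked below; as a rough stand-alone illustration of the same idea in PowerShell (the account name is a placeholder, and listing containers is used here simply because any authorised request proves the key works):

# Attempt a simple authenticated request against the storage account using the key under test.
$ctx = New-AzureStorageContext -StorageAccountName "myappstorage" -StorageAccountKey $storageKey
try {
    Get-AzureStorageContainer -Context $ctx -ErrorAction Stop | Out-Null
    Write-Output "Storage key check passed."
}
catch {
    throw "Storage key check failed: $($_.Exception.Message)"
}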

Testing Cosmos DB Keys

To test connectivity to Cosmos DB, we use the Cosmos DB SDK to try to retrieve some metadata about the database account. Once again we’re not interested in the results, just in the success of the API call:

Testing Service Bus Keys

And finally, to test connectivity to Service Bus, we try to get a list of queues within the Service Bus namespace. As long as we get something back, we consider the test to have passed:

You can view the full Azure Function here.

Step 6: Rotating the Keys

One of the last steps we perform is to actually rotate the keys for the services. The way in which we request key rotations is different depending on the services we’re talking to.

Rotating Azure Storage Keys

Azure Storage provides an API that can be used to regenerate an account key. From PowerShell we can use the New-AzureRmStorageAccountKey cmdlet to access this API:
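For example (resource names are placeholders, and $keyToRotate would be 'key1' or 'key2' depending on the set chosen in step 1):

# Regenerate the storage account key that is NOT currently in use.
New-AzureRmStorageAccountKey -ResourceGroupName "my-app-rg" `
    -Name "myappstorage" `
    -KeyName $keyToRotate | Out-Null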

Rotating Cosmos DB Keys

For Cosmos DB, there is a similar API to regenerate an account key. There are no first-party PowerShell cmdlets for Cosmos DB, so we can instead use a generic Azure Resource Manager cmdlet to invoke the API:
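A sketch of that call is below (the account and resource group names and the API version are assumptions; $cosmosKeyKind would be 'primary' or 'secondary' depending on the key not currently in use):

# Invoke the regenerateKey action on the Cosmos DB account via the generic resource cmdlet.
Invoke-AzureRmResourceAction -Action regenerateKey `
    -ResourceType "Microsoft.DocumentDb/databaseAccounts" `
    -ApiVersion "2015-04-08" `
    -ResourceGroupName "my-app-rg" `
    -ResourceName "my-cosmos-account" `
    -Parameters @{ keyKind = $cosmosKeyKind } `
    -Force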

Rotating Service Bus Keys

Service Bus provides an API to regenerate the keys for a specified authorization rule. For this example we’re using the default RootManageSharedAccessKey authorization rule, which is created automatically when the Service Bus namespace is provisioned. The PowerShell cmdlet New-AzureRmServiceBusKey can be used to access this API:
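For example (the namespace and resource group names are placeholders, parameter names can vary slightly between AzureRM.ServiceBus versions, and $sbKeyToRotate would be 'PrimaryKey' or 'SecondaryKey'):

# Regenerate the unused key on the RootManageSharedAccessKey authorization rule.
New-AzureRmServiceBusKey -ResourceGroupName "my-app-rg" `
    -Namespace "my-servicebus-ns" `
    -Name "RootManageSharedAccessKey" `
    -RegenerateKey $sbKeyToRotate | Out-Null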

You can see the full script here.

Conclusion

Key management and rotation is often a painful process, but if your application deployments are completely self-contained then the process described here is one way to ensure that your keys are rotated continuously and kept up to date.

You can download the full set of scripts and code for this example from GitHub.

SharePoint Integration for Health Care eLearning – Moving LMS to the Cloud

Health care systems often face challenges: they are poorly maintained, managed by too many people without consistency in content, and harbour outdated resources. Many of these legacy training and development systems also bear the pain of constant record churn without a supportable records management system. With the accrual of these records over time becoming a 'big data' concern, modernising these eLearning platforms may be the right call to action for medical professionals and researchers. Gone should be the days of manually updating Web Vista on a regular basis.
Cloud solutions for health care and research should be well on their way, but better utilisation of these new technologies will be a key factor in how much confidence health professionals invest in IT as a means for departmental education and career development moving forward.
Why SharePoint Makes Sense (versus further developing Legacy Systems)
Every day, each document, slide image and scan matters when the paying customer's dollar is placed on your proficiency to solve pressing health care challenges. Compliance and availability of resources aren't enough – streamlined and collaborative processes, from quality control to customer relationship management, module testing and internal review, are all minimum requirements for building a modern, online eLearning centre, i.e. a 'Learning Management System'.
eLearningIndustry.com has broken down ten key components that a Learning Management System (LMS) requires in order to be effective. From previous experience developing an LMS, or OLC (Online Learning Centre) site, using SharePoint, these ten components can indeed be designed within the platform:

  1. Strong Analytics and Report Generation – for the purposes of eLearning, e.g. dashboards containing progress reports, exam scores and other learner data. SharePoint workflows allow tracking of training progress and users' engagement with content and course materials, while versioning ensures that learning managers, content builders (subject matter experts) and the learners themselves are on the same page (literally).
  2. Course Authoring Capability – SharePoint access and user permissions are directly linked to your Active Directory. Access to content can be managed both hierarchically and on a role basis for content authors. Furthermore, learners can have access to specific 'modules' allocated to them based on department, vocation, etc.
  3. Scalable Content Hosting – flexibility of content, workflows or plug-ins (using 'app parts') to adapt functionality and welcome new learners as learning requirements shift to align with organisational needs.
  4. Certifications – due to the availability and popularity of SharePoint Online in large/global enterprises, certifications for everyone from smart users to super users are available from Microsoft-affiliated authorities or verified third parties.
  5. Integrations (with other SaaS software, communication tools, etc.) – allow for the exchange of information through APIs for content feeds and record management, e.g. with virtual classrooms, HR systems or Google Analytics.
  6. Community and Collaboration – the added benefit of integrated and packaged Microsoft apps to create channels for live group study or learner feedback, for instance (Skype for Business, Yammer, Microsoft Teams).
  7. White Labelling vs. Branding – a UI-friendly, fully customisable appearance. The modern layout is design-flexible, allowing the institute's branding to be carried throughout the tenant's SharePoint sites.
  8. Mobile Capability – SharePoint has a mobile app and can be designed to be responsive across multiple mobile device types.
  9. Customer Support and Success – as it is a common enterprise tool, support by local IT should be feasible, with any critical product support enquiries routed to Microsoft.
  10. Support of the Institute's Mission and Culture – in health care services, where the churn of information and data pushes for an innovative, rapid response, SharePoint can be designed to meet these needs; as an LMS, it can adapt to continuously represent the expertise and knowledge of health professionals.

Outside of the above, the major advantage for health services in making the transition to the cloud is the improved information security experience. There are still plenty of cases today where patients are at risk of medical and financial identity fraud due to inadequate information security and manual (very hands-on) records management processes. A single-platform database, as well as the from-anywhere accessibility of SharePoint as a cloud platform, eases the challenge of maintaining networks, PCs, servers and databases, which can be fairly extensive given that many health care institutions extend beyond hospitals, branching off into neighbourhood clinics, home health providers and off-site services.

Psychodynamics Revisited: Data Privacy

How many of you, between waking up and your first cup of hot, caffeinated beverage, told the world something about yourselves online? Whether it be a social media status update, an Instagram photo or story post, or even a tweak to your personal profile on LinkedIn. Maybe yes, maybe no, although I would hedge my bets that you've at least checked your notifications, emails or had a scroll through the newsfeed.
Another way to put this question would be: how many of you have interacted with the internet in some way since waking up? The likelihood is probably fairly high.
In my first blog looking into the topic of Psychodynamics, I focused solely on our inseparable connection to modern technologies – smartphones, tablets, etc. – and the access they give us to interact with society and the world. For most of us, this is an undeniable truth of daily life. A major nuance of this relationship between people and technology, and one I think we are probably somewhat in denial about, is the security and privacy of our personal information online.
To the technology community at large, it's no secret that our personal data is traded by governments and billion-dollar corporations on a constant basis. Whatever information – and, more importantly, information about that information – is desired goes to the highest bidder, or for the best market rate. Facebook, for instance, doesn't sell your information outright. That would be completely unethical and would devalue trust in its brand. What it does is sell access to you to the advertisers and large corporations connected through it, which in turn gives them valuable consumer data to advertise, target and sell back to you based on your online habits.
My reasoning for raising this topic in regard to psychodynamics and technological-behavioral patterns is for consultants and tech professionals to consider what data privacy means to our/your valued clients.
I was fortunate to participate this past week in a seminar hosted by the University of New South Wales' Grand Challenges Program, established to promote research in technology and human development. The seminar featured guest speaker Professor Joe Cannataci, the UN's Special Rapporteur on the right to privacy, who was in town to discuss recent privacy issues with our Federal Government, specifically amid concerns about the security of the Government's My Health Record system (see the full discussion here on ABC's RN Breakfast show). Two key points raised during the seminar, and from Professor Cannataci's general insights, were:

  1. Data analytics targeting individuals/groups are focused largely on the metadata, not the content data of what an individual or group of individuals is producing. What this means is that businesses are more likely to not look at content as scalable unless there are metrics and positive/viral trends in viewership/content consumption patterns.
  2. Technology, its socialisation and personal information privacy issues are no longer specific to a generation ("boomers", "millennials") or context (though countries like China and Russia prohibit and filter certain URLs and web services). That is to say, in the daily working routine of an individual, their engagement with technology and the push to submit data to get a task done may, in some instances, form an unconscious processing pattern over time in which we get used to sharing our personal information, adopting the mindset "well, I have nothing to hide". I believe we've likely all been guilty of it before. Jane might not think about how sharing her client's files with her colleague Judy, to assist with advising on a matter, may put their employer in breach of a binding confidentiality agreement.

 
My recent projects involved heavy amounts of content extraction and planning without immediately considering the metadata trends and what the likely needs of business departments were for this content, focusing on documented business processes over data usage patterns. Particularly when working with cloud technologies that were new to the given clients, there was only a very basic understanding of what this entailed in regard to data privacy and the legalities around it (client sharing, data visibility, GDPR, to name a few). Perhaps a consideration here is investigating further how these trends play into, and possibly divert, business processes, rather than looking at them as separate factors in information processing.
Work is work, but so is our duty to due diligence, best practice and understanding how we, as technology professionals, can resolve some of these ethical issues in today's technology landscape.
For more on Professor Joe Cannataci, please see his profile on the United Nations page.
Credit to UNSW Grand Challenges Program. For more info, please see their website or follow their Facebook page (irony intended)

Securing your Web front-end with Azure Application Gateway Part 2

In part one of this post we looked at configuring an Azure Application Gateway to secure your web application front-end; it is available here.
In part two we will look at some additional post-configuration tasks, how to start investigating whether the WAF is blocking any of our application traffic, and how to check for this.
First up we will look at configuring some NSG (Network Security Group) inbound and outbound rules for the subnet that the Application Gateway is deployed within.
The inbound rules that you will require are below.

  • Allow HTTPS Port 443 from any source.
  • Allow HTTP Port 80 from any source (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 from your Web Server subnet.
  • Allow Application Gateway Health API ports 65503-65534. These are required for correct operation of your Application Gateway.
  • Deny all other traffic; set this rule with a high priority value, e.g. 4000–4096.

The outbound rules that you will require are below.

  • Allow HTTPS Port 443 from your Application Gateway to any destination.
  • Allow HTTP Port 80 from your Application Gateway to any destination (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 to your Web Server subnet.
  • Allow Internet traffic using the Internet traffic tag.
  • Deny all other traffic; set this rule with a high priority value, e.g. 4000–4096.
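As an illustration, a couple of the inbound rules above could be created with the AzureRM networking cmdlets as follows. The rule names, priorities, NSG name, resource group and location here are assumptions, so adjust them for your environment and add the remaining rules in the same way.

# Allow HTTPS 443 from any source to the Application Gateway subnet.
$httpsIn = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTPS-In" -Direction Inbound `
    -Access Allow -Protocol Tcp -Priority 100 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443

# Allow the Application Gateway health API port range.
$healthIn = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-AppGw-Health-In" -Direction Inbound `
    -Access Allow -Protocol Tcp -Priority 110 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange "65503-65534"

# Create the NSG with the rules; associate it with the Application Gateway subnet afterwards.
New-AzureRmNetworkSecurityGroup -Name "PRD-APPGW-NSG" -ResourceGroupName "PRD-RG" `
    -Location "australiaeast" -SecurityRules $httpsIn, $healthIn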

Now we need to configure the Application Gateway to write Diagnostic Logs to a Storage Account. To do this open the Application Gateway from within the Azure Portal, find the Monitoring section and click on Diagnostic Logs.

Click on Add diagnostic settings and browse to the Storage Account you wish to write logs to, select all log types and save the changes.
Now that the Application Gateway is configured to store diagnostic logs (we need the ApplicationGatewayFirewallLog) you can start testing your web front-end. To do this, first set the WAF to "Detection" mode, which will log any traffic that would have been blocked. This setting is only recommended for testing purposes and should not be a permanent state.
To change this setting open your Application Gateway from within the Azure Portal and click Web Application Firewall under settings.

Change the Firewall mode to “Detection” for testing purposes. Save the changes.

Now you can start your web front-end testing. Any traffic that would be blocked will now be allowed; however, it will still create a log entry showing you the details of the traffic that would have been blocked.
Once testing is completed open your Storage Account from within the Azure Portal and browse to the insights-logs-applicationgatewayfirewalllog container, continue opening the folder structure and find the date and time of the testing. The log file is named PT1H.json, download it to your local computer.
Open the PT1H.json file. Any entries for traffic that would be blocked will look similar to the below.

{
"resourceId": "/SUBSCRIPTIONS/....",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T03:30:59Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_1",
  "clientIp": "202.141.210.52",
  "clientPort": "0",
  "requestUri": "/uri",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "942430",
  "message": "Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)",
  "action": "Detected",
  "site": "Global",
  "details": {
    "message": "Warning. Pattern match \",
    "file": "rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf",
    "line": "1002"
  },
  "hostname": "website.com.au"
  }
}

This will give you useful information to either fix your application or disable a rule that is blocking traffic; the "ruleId" field of the log will show you the rule that requires action. Rules should only be disabled temporarily while you remediate your application. They can be disabled/enabled from the Web Application Firewall tab within the Application Gateway; just make sure the "Advanced Rule Configuration" box is ticked so that you can see them.
This process of testing and fixing code/disabling rules should continue until you can complete a test without any errors showing in the logs. Once no errors occur you can change the Web Application Firewall mode back to “Prevention” mode which will make the WAF actively block traffic that does not pass the rule sets.
Something important to note is the log entry type below, with a ruleId of "0". This error needs to be resolved by remediating the code, as the default rules cannot be changed within the WAF. Microsoft are working on changing this; however, at the time of writing it cannot be done, as the default data length limit cannot be changed. Sometimes this will occur with a part of the application that cannot be remediated; if this is the case, you would need to look at another WAF product, such as a Barracuda appliance.

{
"resourceId": "/SUBSCRIPTIONS/...",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T01:21:44Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_0",
  "clientIp": "1.136.111.168",
  "clientPort": "0",
  "requestUri": "/..../api/document/adddocument",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "0",
  "message": "",
  "action": "Blocked",
  "site": "Global",
  "details": {
    "message": "Request body no files data length is larger than the configured limit (131072).. Deny with code (413)",
    "data": "",
    "file": "",
    "line": ""
  },
  "hostname": "website.com.au"
  }
}

 
In this post we looked at some post-configuration tasks for Application Gateway, such as configuring NSG rules to further protect the network, configuring diagnostic logging, and checking the Web Application Firewall logs for application traffic that would be blocked by the WAF. The Application Gateway can be a good alternative to dedicated appliances as it is easier to configure and manage. However, in some cases where more control of the WAF rule sets is required, a dedicated WAF appliance may be needed.
Hopefully this two part series helps you with your decision making when it comes to securing your web front-end applications.

Securing your Web front-end with Azure Application Gateway Part 1

I have just completed a project with a customer who was using Azure Application Gateway to secure their web front-end, and thought it would be good to post some findings.
This is part one of a two-part post looking at how to secure a web front-end using Azure Application Gateway with the WAF component enabled. In this post I will explain the process for configuring the Application Gateway once it has been deployed. You can deploy the Application Gateway from an ARM template, Azure PowerShell or the portal. To be able to enable the WAF component you must use a Medium or Large instance size for the Application Gateway.
Using Application Gateway allows you to remove the need for your web front-end to have a public endpoint assigned to it, for instance if it is a Virtual Machine then you no longer need a Public IP address assigned to it. You can deploy Application Gateway in front of Virtual Machines (IaaS) or Web Apps (PaaS).
An overview of how this will look is shown below. The Application Gateway requires its own subnet which no other resources can be deployed to. The web server (Virtual Machine) can be assigned to a separate subnet, if using a web app no subnet is required.
 
The benefits we will receive from using Application Gateway are:

  • Remove the need for a public endpoint from our web server.
  • End-to-end SSL encryption.
  • Automatic HTTP to HTTPS redirection.
  • Multi-site hosting, though in this example we will configure a single site.
  • In-built WAF solution utilising OWASP core rule sets 3.0 or 2.2.9.

To follow along you will require Azure PowerShell module version 3.6 or later. You can install or upgrade by following this link.
Before starting you need to make sure that an Application Gateway with an instance size of Medium or Large has been deployed with the WAF component enabled and that the web server or web app has been deployed and configured.
Now open PowerShell ISE and login to your Azure account using the below command.

Login-AzureRmAccount

Now we need to set our variables to work with. These variables are your Application Gateway name, the resource group where your Application Gateway is deployed, your Backend Pool name and IP, your HTTP and HTTPS Listener names, your host name (website name), the HTTP and HTTPS rule names, your front-end (Private) and back-end (Public) SSL names, along with your Private certificate password.
NOTE: The Private certificate needs to be in PFX format and your Public certificate in CER format.
Change these to suit your environment and copy both your pfx and cer certificate files to C:\Temp\Certs on your computer.

# Application Gateway name.
[string]$ProdAppGw = "PRD-APPGW-WAF"
# The resource group where the Application Gateway is deployed.
[string]$resourceGroup = "PRD-RG"
# The name of your Backend Pool.
[string]$BEPoolName = "BackEndPool"
# The IP address of your web server or URL of web app.
[string]$BEPoolIP = "10.0.1.10"
# The name of the HTTP Listener.
[string]$HttpListener = "HTTPListener"
# The name of the HTTPS Listener.
[string]$HttpsListener = "HTTPSListener"
# Your website hostname/URL.
[string]$HostName = "website.com.au"
# The HTTP Rule name.
[string]$HTTPRuleName = "HTTPRule"
# The HTTPS Rule name.
[string]$HTTPSRuleName = "HTTPSRule"
# SSL certificate name for your front-end (Private cert pfx).
[string]$FrontEndSSLName = "Private_SSL"
# SSL certificate name for your back-end (Public cert cer).
[string]$BackEndSSLName = "Public_SSL"
# Password for front-end SSL (Private cert pfx).
[string]$sslPassword = "<Enter your Private Certificate pfx password here.>"

Our first step is to configure the Front and Back end HTTPS settings on the Application Gateway.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Front-end (Private) SSL certificate. If you have any issues with this step you can upload the certificate from within the Azure Portal by creating a new Listener.

Add-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGw `
-Name $FrontEndSSLName -CertificateFile "C:\Temp\Certs\PrivateCert.pfx" `
-Password $sslPassword

Save the certificate as a variable.

$AGFECert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGW `
            -Name $FrontEndSSLName

Configure the front-end port for SSL.

Add-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443" `
-Port 443

Add the back-end (Public) SSL certificate.

Add-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName `
-CertificateFile "C:\Temp\Certs\PublicCert.cer"

Save the back-end (Public) SSL as a variable.

$AGBECert = Get-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
            -Name $BackEndSSLName

Configure back-end HTTPS settings.

Add-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings" `
-Port 443 `
-Protocol Https `
-CookieBasedAffinity Enabled `
-AuthenticationCertificates $AGBECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next stage is to configure the back-end pool to connect to your Virtual Machine or Web App. This example is using the IP address of the NIC attached to the web server VM. If using a web app as your front-end you can configure it to accept traffic only from the Application Gateway by setting an IP restriction on the web app to the Application Gateway IP address.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Backend Pool Virtual Machine or Web App. This can be a URL or an IP address.

Add-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGw `
-Name $BEPoolName `
-BackendIPAddresses $BEPoolIP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next steps are to configure the HTTP and HTTPS Listeners.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the front-end port as a variable – port 80.

$AGFEPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
            -Name "appGatewayFrontendPort"

Save the front-end IP configuration as a variable.

$AGFEIPConfig = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $AppGw `
                -Name "appGatewayFrontendIP"

Add the HTTP Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener `
-Protocol Http `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFEPort `
-HostName $HostName

Save the HTTP Listener for your website as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HTTPListener

Save the front-end SSL port as a variable – port 443.

$AGFESSLPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
               -Name "appGatewayFrontendPort443"

Add the HTTPS Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPSListener `
-Protocol Https `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFESSLPort `
-HostName $HostName `
-RequireServerNameIndication true `
-SslCertificate $AGFECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The final part of the configuration is to configure the HTTP and HTTPS rules and the HTTP to HTTPS redirection.

First configure the HTTPS rule.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the Backend Pool as a variable.

$BEP = Get-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGW `
       -Name $BEPoolName

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Save the back-end HTTPS settings as a variable.

$AGHTTPS = Get-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
           -Name "appGatewayBackendHttpsSettings"

Add the HTTPS rule.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPSRuleName `
-RuleType Basic `
-BackendHttpSettings $AGHTTPS `
-HttpListener $AGSSLListener `
-BackendAddressPool $BEP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Now configure the HTTP to HTTPS redirection and the HTTP rule with the redirection applied.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Add the HTTP to HTTPS redirection.

Add-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-RedirectType Permanent `
-TargetListener $AGSSLListener `
-IncludePath $true `
-IncludeQueryString $true `
-ApplicationGateway $AppGw

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the redirect as a variable.

$Redirect = Get-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
            -ApplicationGateway $AppGw

Save the HTTP Listener as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HttpListener

Add the HTTP rule with redirection to HTTPS.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPRuleName `
-RuleType Basic `
-HttpListener $AGListener `
-RedirectConfiguration $Redirect

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

 
In this post we covered how to configure Azure Application Gateway to secure a web front-end whether running on Virtual Machines or Web Apps. We have configured the Gateway for end-to-end SSL encryption and automatic HTTP to HTTPS redirection removing this overhead from the web server.

In part two we will look at some additional post configuration tasks and how to make sure your web application works with the WAF component.

EU GDPR – is it relevant to Australian companies?

The new General Data Protection Regulation (GDPR) from the European Union (EU) imposes new rules on organisations that offer goods and services to people in the EU, or that collect and analyse data tied to EU residents, no matter where the organisation or the data processing is located. GDPR comes into force in May 2018.
If your customers reside in the EU, whether you have a presence in the EU or not, then GDPR applies to you. The internet lets you interact with customers wherever they are, and GDPR applies to anyone that deals with EU people, wherever those people are.
And the term personal data covers everything from IP addresses, to cookie data, to submitted forms, to CCTV and even to a photo of a landscape that can be tied to an identity. Then there is sensitive personal data, such as ethnicity, sexual orientation and genetic data, which has enhanced protections.
And for the first time there are very strong penalties for non-compliance – the maximum fine for a GDPR breach is €20M or 4% of worldwide annual turnover, whichever is greater. The maximum fine can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core Privacy by Design concepts.
Essentially GDPR states that organisations must:

  • provide clear notice of data collection
  • outline the purpose the data is being used for
  • collect the data needed for that purpose
  • ensure that the data is kept only as long as required for processing
  • disclose whether the data will be shared within or outside of the EU
  • protect personal data using appropriate security
  • give individuals the right to access, correct and erase their personal data, and to stop an organisation processing their data
  • notify authorities of personal data breaches.

Specific criteria for companies required to comply are:

  • A presence in an EU country
  • No presence in the EU, but it processes personal data of European residents
  • More than 250 employees
  • Fewer than 250 employees, but the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data. That effectively means almost all companies.

What does this mean in real terms for some well-known large companies? Well…

  • Apple turned over about USD$230B in 2017, so the maximum fine applicable to Apple would be USD$9.2B
  • CBA turned over AUD$26B in 2017 and so their maximum fine would “only” be AUD$1B
  • Telstra turned over AUD$28.2B in 2017, the maximum fine would be AUD$1.1B.

Ouch.
The GDPR legislation won't impact Australian businesses, will it? What if an EU resident gets a Telstra phone or a CBA credit/travel card whilst on holiday in Australia, or your organisation has local regulatory data retention requirements that appear, on the surface at least, to be at odds with GDPR obligations…
I would get legal advice if the organisation provides services that may be used by EU nationals.
In a recent PwC survey, "Pulse Survey: US Companies ramping up General Data Protection Regulation (GDPR) budgets", 92% of respondents stated that GDPR is one of several top priorities.
Technology alone cannot make an organisation GDPR compliant. There must be policy, process and people changes to support GDPR. But technology can greatly assist organisations that need to comply.
Microsoft has invested in providing assistance to organisations impacted by GDPR.
Office 365 Advanced Data Governance enables you to intelligently manage your organisation's data with classifications. The classifications can be applied automatically; for example, if German PII data covered by GDPR is present in a document, the document can be marked as confidential when saved. With the document marked, the data can be protected, whether that is encrypting the file, assigning permissions based on user IDs, or adding watermarks indicating sensitivity.
An organisation can choose to encrypt its data at rest in Office 365, Dynamics 365 or Azure with its own encryption keys. Alternatively, a Microsoft-generated key can be used. It sounds like a no-brainer that all customers would use customer keys; however, the customer must have an HSM (Hardware Security Module) and a proven key management capability.
Azure Information Protection enables an organisation to track and control marked data. Distribution of data can be monitored, and access and access attempts logged. This information can allow an organisation to revoke access from an employee or partner if data is being shared without authorisation.
Azure Active Directory (AD) can provide risk-based conditional access controls – can the user's credentials be found in public data breaches, is it an unmanaged device, are they trying to access a sensitive app, are they a privileged user, or have they just completed an impossible trip (logged in five minutes ago from Australia, with the current attempt coming from somewhere that is a 12-hour flight away)? Based on the risk of the user and the risk of the session, access can be granted, multi-factor authentication (MFA) can be requested, or access can be limited or denied.
Microsoft Enterprise Mobility + Security (EMS) can protect your cloud and on-premises resources. Advanced behavioural analytics are the basis for identifying threats before data is compromised. Advanced Threat Analytics (ATA) detects abnormal behaviour and provides advanced threat detection for on-premises resources. Azure AD provides protection from identity-based attacks and cloud-based threat detection, and Cloud App Security detects anomalies for cloud apps. Cloud App Security can detect which cloud apps are being used, as well as control access, and can support compliance efforts with regulatory mandates such as the Payment Card Industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley (SOX), the General Data Protection Regulation (GDPR) and others. Cloud App Security can apply policies to apps from Microsoft or other vendors, such as Box, Dropbox, Salesforce, and more.
Microsoft provides a set of compliance and security tools to help organisations meet their regulatory obligations. To reiterate: policy, process and people changes are required to support GDPR.
Please discuss your legal obligations with a legal professional to clarify any obligations that the EU GDPR may place on your organisation. Remember, May 2018 is only a few months away.

Validating a Yubico YubiKey's One Time Password (OTP) using Single Factor Authentication and PowerShell

Multi-factor authentication comes in many different formats. Physical tokens have historically been very common and, moving forward with FIDO v2 standards, will likely continue to be so for many security scenarios where soft tokens (think authenticator apps on mobile devices) aren't possible.
Yubico YubiKeys are physical tokens that have a number of properties that make them desirable. They don't use a battery (so aren't limited by battery life), they come in many different formats (NFC, USB-3, USB-C), they can hold multiple sets of credentials, and they support open standards for multi-factor authentication. You can check out Yubico's range of tokens here.
YubiKeys ship with a configuration that allows them to be validated against YubiCloud. Before we configure them for a user, I wanted a quick way to validate that a YubiKey was valid. You can do this using Yubico's demo webpage here, but for other reasons I needed to write my own. There weren't any PowerShell examples anywhere, so now that I've worked it out, I'm posting it here.

Prerequisites

You will need a YubiKey. You will also need to register and obtain a Yubico API key (using your YubiKey) from here.

Validation Script

Update the ClientID in the script to the one you received after registering with the Yubico API above.
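The full validation script was embedded in the original post; below is a rough sketch of the same single-factor check. It submits an OTP to the YubiCloud verification API using just a ClientID and a nonce, and response signature verification with the API secret is omitted here for brevity.

# Line 2: your Yubico API ClientID obtained from the registration link above.
$clientId = "12345"
$otp = Read-Host "Touch your YubiKey to generate an OTP"

# A random nonce (16-40 characters) is required by the YubiCloud verify API.
$nonce = -join ((48..57) + (97..122) | Get-Random -Count 32 | ForEach-Object { [char]$_ })

$uri = "https://api.yubico.com/wsapi/2.0/verify?id=$clientId&otp=$otp&nonce=$nonce"
$response = Invoke-RestMethod -Uri $uri -Method Get

# The response is plain text key=value pairs; status=OK means the OTP is valid,
# while status=REPLAYED_REQUEST (or REPLAYED_OTP) means the OTP has already been used.
$status = (($response -split "`r?`n") | Where-Object { $_ -like "status=*" }) -replace "status=", ""
Write-Output "YubiKey OTP validation result: $status"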

Running the script validates that the key is valid.
Re-running the submission with the same key (i.e. without generating a new OTP) gets the expected response that the request is replayed.

Summary

Using PowerShell we can negate the need to leverage any Yubico client libraries and validate a YubiKey against YubiCloud.
 

Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager v2, k-Anonymity and Have I Been Pwned

Background

In August 2017 Troy Hunt released a sizeable list of Pwned Passwords – 320 million, in fact.
I subsequently wrote this post on Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager, which called the API and set a boolean attribute in the MIM Service that could be used with business logic to force users whose accounts have compromised passwords to change their password on next logon.
Whilst that was a proof of concept/discussion point of sorts, and I had a disclaimer about sending passwords across the internet to a third-party service, there was a lot of momentum around the HIBP API, so I developed a solution and wrote this update to check the passwords locally.
Today Troy has released v2 of that list and updated the API with new features and functionality. If you’re playing catch-up I encourage you to read Troy’s post from August last year, and my two posts about checking Active Directory passwords against that list.

Leveraging V2 (with k-Anonymity) of the Have I Been Pwned API

With v2 of the HIBP password list and API, the number of leaked credentials in the list has grown to half a billion – 501,636,842 Pwned Passwords, to be exact.
With the v2 list, and in conjunction with Junade Ali from Cloudflare, the API has been updated so it can be used with a level of anonymity. Instead of sending a full SHA-1 hash of the password you're checking, you can now send just a truncated prefix of that SHA-1 hash, and the HIBP v2 API returns the set of hash suffixes that share the prefix, which you then compare locally. This is done using a concept called k-anonymity, detailed brilliantly here by Junade Ali.
v2 of the API also returns a count for each password in the list – basically, how many times the password has previously been seen in leaked credential lists. Brilliant.

Updated Pwned PowerShell Management Agent for Pwned Password Lookup

Below is an updated Password.ps1 script for my Pwned Password Management Agent for Microsoft Identity Manager, replacing the script written for the previous API version (a trimmed sketch of the core lookup follows the list below). It functions by:

  • taking the new password received from PCNS
  • hashing the password to SHA-1 format
  • looking up the v2 HIBP API using part of the SHA-1 hash
  • updating the MIM Service with the Pwned Password status
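As a trimmed sketch of the core lookup only (the PCNS integration and the MIM Service update are omitted here; the endpoint assumed is the public HIBP v2 range API):

function Test-PwnedPassword {
    param([string]$Password)

    # SHA-1 hash the password and convert it to an uppercase hex string.
    $sha1  = [System.Security.Cryptography.SHA1]::Create()
    $bytes = $sha1.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Password))
    $hash  = ($bytes | ForEach-Object { $_.ToString("X2") }) -join ""

    # Send only the first five characters of the hash to the HIBP v2 range API.
    $prefix = $hash.Substring(0, 5)
    $suffix = $hash.Substring(5)
    $response = Invoke-RestMethod -Uri "https://api.pwnedpasswords.com/range/$prefix"

    # The response is a list of HASHSUFFIX:COUNT lines; a matching suffix means the password is pwned.
    $match = ($response -split "`r?`n") | Where-Object { $_ -like "$suffix*" }
    if ($match) {
        return [pscustomobject]@{ Pwned = $true; TimesSeen = [int](($match -split ":")[1]) }
    }
    return [pscustomobject]@{ Pwned = $false; TimesSeen = 0 }
}

In the full Management Agent script, the result of a check like this is what gets written back to the boolean attribute in the MIM Service.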

Check out the original post, with all the rest of the details, here.

Summary

Of course you can also download the Pwned Password dataset (via Torrent is recommended). Keep in mind that the compressed dataset is 8.75 GB, and uncompressed it is 29.4 GB. Convert that into on-premises SQL table(s), as I did in the post linked at the beginning of this post, and you'll be well in excess of that.
Awesome work from Troy and Junade.
 
