7 tips for making UX work in Agile teams

Agile is here to stay. Corporates love it, start-ups embrace it and developers live by it. There is no denying that we have to work with it. For a number of years I’ve tried to align User Experience practices with Agile methods, and I haven’t always met with success.

Nevertheless, I’ve learnt a lot of lessons along the way, and I’m going to share 7 tips that have always worked for me.

agile-and-ux

Create a shared vision early on

Get all the decision makers (Dev leads, Project managers and Project sponsors) in one room. Get a whiteboard and discuss: why are we developing this product? What problems are we trying to solve? Once you have an overall theme, ask more specific questions, such as: how many app downloads are we targeting in the first week?

This workshop will give you a snapshot of a shared vision and common goals of the organisation. During every checkpoint of this project, this shared vision will serve as a guide, helping teams prioritise user stories and make the right trade-offs along the way.

Engage stakeholders wherever possible

Regardless of how many people in your team are in agreement, most of the time the decision makers are the Project Sponsors or Division Managers. You do not want them to appear randomly during sprint 3 planning and poop on it.

I highly recommend cultivating strong relationships with these stakeholders early on in the project. Invite them to all UX workshops, and if they can’t/don’t attend, find a way to communicate the summary of the meeting in an engaging way (not an email with a PDF attachment). I used to put together a Keynote slide and have it ready on my iPad for a quick 5-minute summary.

Work at least one sprint ahead of the Dev team

Getting everything – research, wireframes, designs and development – done for a single card in one sprint is implausible.

You’ll struggle to get everything going at the same time. When you are designing, the developers are counting sheep because they are waiting on you to give them something to work with. You don’t want to be the reason the burn-down chart stalls. Always be at least one sprint (if not two) ahead of the development team. Sometimes it takes longer to research and validate design decisions, but if you are a sprint ahead you are not holding up the developers, and you have ample time to respond to design challenges.

Foster a collaborative culture

Needless to say – collaborate as much as you can. Try to get the team involved (just the people sitting around you is fine) for even small things such as changing a button’s colour. It makes them feel important, makes them feel good and fosters a culture of collaboration.

If you don’t collaborate with the team on small (or big) things, don’t expect them to tell you everything either. Your opinion might not be very valuable in most of the Dev discussions, such as whether to use ReactJS or Angular, but knowing that the Devs are going to use a certain JS library will definitely help you, one way or another, in planning future sprints.

Follow an Iterative Design Process

DO NOT design mock-ups to begin with. I know all the customers want to see something real that they can sell to their bosses. But the pretty-design approach falls on its face every time. I want my customers to detach themselves from aesthetics and focus on structure and interaction first. Once we have worked out the hardware, then we can look at building the software.

Try Iterative Design Process. Sketch on the whiteboard, get the stakeholders to put a vision on paper and come up with a structure first. Then iterate. Here is my design process:

  1. Paper sketches
  2. Low fidelity wireframes (on white board / PC)
  3. Interactive wireframes – B/W (on PC)
  4. Draft Designs – in Colour
  5. Final Designs
  6. Pass onto the build team.

Do a round of user testing with at least 5 people

User testing is not expensive, it does not take days or weeks, and you don’t have to talk to 25 people.

There is a lot of research showing that testing with only 5 users is highly effective and valuable to product development. Pick users from different demographics, put an interactive wireframe together and run it past them for about 30–45 minutes. After 3 users, you’ll start noticing common themes appearing. And after 5, you’ll have enough pointers to take back to the team for another round of iteration. Repeat this process every two to four sprints.

Hold a brief stand-up meeting every day

Hold a stand-up meeting first thing in the morning. The aim is to keep everyone updated on progress, recognise blockers and pick up new cards. This ensures all the team members are on the same page and working towards a common goal.

However, be mindful of the time since some discussions are lengthier and may need to be taken offline. We generally time-box stand-ups for 15 minutes.

ADFS v 2.0 Migration to ADFS 2016

Introduction

Some organisations may still have ADFS v2 or ADFS v2.1 running in their environment, and haven’t yet moved to ADFS v3. In this blog, we will discuss how you can move away from ADFS v2 or ADFS v2.1 and migrate or upgrade to ADFS 2016.

In previous posts, Part 1 and Part 2 we have covered the migration of ADFS v3.0 to ADFS 2016. I have received some messages on LinkedIn to cover the migration process from ADFS v2 to ADFS 2016 as there currently isn’t much information about this.

As you may have noticed from the previous posts, upgrading to ADFS 2016 couldn’t be easier. Moving from ADFS v2 or ADFS v2.1 is just as simple, although the process differs from upgrading ADFS v2/v2.1 to ADFS v3.

Migration Process

Before we begin however, this post assumes you already have a running ADFS v2 or ADFS v2.1 environment. This blog post will not go into a step-by-step installation of ADFSv2/ADFSv2.1, and will only cover the migration or upgrade to ADFS 2016.

This blog post assumes you already have a running Windows Server 2016 with the ADFS 2016 role installed, if not, please follow the procedures outlined in part 2.

Prerequisites

Prior to commencing the upgrade, there are a few tasks you need to perform (a PowerShell sketch for capturing these details follows the list).

  1. Take note of your ADFS server display name
  2. Take note of your Federation Service name
  3. Take note of your Federation Service identifier
  4. Export the service communication certificate (if you don’t already have a copy of it)
  5. Install/import the service communication certificate onto the ADFS 2016 server
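If you prefer to capture these details from PowerShell rather than the GUI, a minimal sketch is below. It assumes the ADFS 2.x snap-in is available on the old server and that Export-PfxCertificate exists (Windows Server 2012 and later); on Windows Server 2008 R2 you would export the certificate through the Certificates MMC instead.

# On the existing ADFS v2/v2.1 server - load the ADFS snap-in (not needed on 2.1/2012, where a module is used)
Add-PSSnapin Microsoft.Adfs.PowerShell -ErrorAction SilentlyContinue

# Items 1-3: display name, federation service name and federation service identifier
Get-AdfsProperties | Select-Object DisplayName, HostName, Identifier

# Item 4: identify the service communication certificate, then export it with its private key
$commCert = Get-AdfsCertificate -CertificateType Service-Communications | Select-Object -First 1
$pfxPassword = Read-Host -Prompt "PFX password" -AsSecureString
Export-PfxCertificate -Cert "Cert:\LocalMachine\My\$($commCert.Thumbprint)" `
    -FilePath "C:\Temp\adfs-service-communication.pfx" -Password $pfxPassword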

Notes:

  • There is no need to make your ADFS 2016 server primary, since this should be a new installation – you can’t add it to an ADFS v2/ADFS v2.1 farm anyway.
  • There is no need to raise the farm behavior level, since the new server is not joining an existing farm as it did when migrating from ADFS v3 to ADFS 2016.
  • However, you will still need to upgrade the schema as outlined in part 2.

Before you begin, you will need to download the PowerShell scripts found here.

The scripts can also be found in the support\adfs folder on the Windows Server 2016 installation media. They are provided by Microsoft, not Kloud.

The two main scripts handle the export and import of the federation configuration.

Let’s begin

Firstly, we will need to export the federation configuration by running the “export-federationconfiguration.ps1”.

Here are the current relying party trusts I have in ADFS v2.1.

The “Claims Provider Trust” is Active Directory; this too will be exported and imported.

1

1- Navigate to the folder you have just downloaded:

Then, type

.\export-federationconfiguration.ps1 -path "c:\users\username\desktop\exported adfs configuration"

3

Once successful, you will see the same results as in the picture above.

Open your folder, and you should see the extracted configuration in .xml files.

4

2- Head over to your ADFS 2016 server and copy across both the folder containing the exported federation configuration and the downloaded folder with the scripts.

Then open PowerShell and run:

.\import-federationconfiguration.ps1 -path "c:\users\username\desktop\exported adfs configuration"

6

When successful, you will see a similar message as above.

And now when you open the ADFS management you should see the same relying party trust as you had in ADFS v2/ADFS v2.1.

7

Basically, by now you have completed the move from ADFSv2/ADFSv2.1 to ADFS 2016.

Notice how the token-signing and token-decrypting certificates are the same. For the purpose of this blog, the screenshots below show only the token-signing certificates.

ADFS v2.1:

8

ADFS 2016:

9

You can also check the certificates through PowerShell:

Get-AdfsCertificate
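To compare just the token-signing thumbprints on both farms side by side, a quick filter like the one below works on ADFS 2016 and, with the ADFS snap-in loaded, on ADFS v2/v2.1 as well (a minimal sketch, not specific to this lab):

# Run on both the old and the new farm and compare the thumbprints
Get-AdfsCertificate -CertificateType Token-Signing |
    Select-Object CertificateType, IsPrimary, Thumbprint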

Lastly, make sure that the service account that was running Active Directory Federation Services in ADFS v2/ADFS v2.1 is the same one running in ADFS 2016.

Notice the message in the exported results:

Ensure that the destination farm has the farm name ‘adfs.iknowtech.com.au’ and uses service account ‘IKNOWTECH\svc-adfs’.

If this is not set up, head over to Services and configure the ADFS service to run under the account you were originally using.

In addition, make sure that the service account has read-only access to the certificate’s private key.

10
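To quickly confirm which account the ADFS service is running under on both the old and the new server, a minimal sketch is below; it assumes the default service name adfssrv (on older servers without Get-CimInstance, Get-WmiObject Win32_Service works the same way).

# Show the logon account and state of the ADFS service
Get-CimInstance -ClassName Win32_Service -Filter "Name='adfssrv'" |
    Select-Object Name, StartName, State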

Conclusion

This is a very straightforward process; all you need to be sure of is that the right components are in place – service certificate installed, service account set up, and so on.

After you have completed these steps, follow the steps here to configure your Web Application Proxy. Although that post covers a migration, it also helps with configuring a new deployment.

I hope you’ve found this informative. Should you have any questions or feedback, please leave a comment below.

WAP (2012 R2) Migration to WAP (2016)

In Part 1 and Part 2 of this series we covered the migration from ADFS v3 to ADFS 2016. In Part 3 we discussed the integration of Azure MFA with ADFS 2016, and in this post (technically Part 4) we will cover the migration, or better yet the upgrade, from WAP 2012 R2 to WAP 2016.

Again, this blog assumes you have already installed the Web Application Proxy feature while adding the Remote Access role, and have prepared the WAP server to establish a trust with the Federation Service.

In addition, a friendly reminder once again that the domain name and federation service name have changed from swayit.com.au to iknowtech.com.au. The certificate of swayit.com.au expired before completing the lab, hence the change.

Before we begin however, the current WAP servers (WAP01, WAP02) are the only connected servers that are part of the cluster:

1

To install the WebApplicationProxy, run the following cmdlet in PowerShell:

Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools

Once installed, follow these steps:

Step 1: Specify the federation service name, and provide the local Administrator account for your ADFS server.

2

Step 2: Select your certificate

3

Step 3: Confirm the configuration

4

Do this for both the WAP servers you’re adding to the cluster.

Alternatively, you could do so via PowerShell:


$credential = Get-Credential
Install-WebApplicationProxy -FederationServiceName "sts.swayit.com.au" -FederationServiceTrustCredential $credential -CertificateThumbprint "071E6FD450A9D10FEB42C77F75AC3FD16F4ADD5F" 

Once complete, the WAP servers will be added to the cluster.

Run the following cmdlet to get the WebApplication Configuration:

Get-WebApplicationProxyConfiguration

You will notice that we now have all four WAP servers in ConnectedServersName as part of the cluster.

You will also notice that the ConfigurationVersion is Windows Server 2012 R2, which we will need to upgrade to Windows Server 2016.

5
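If you only want the two properties that matter for the upgrade, a quick filter works as well (a minimal sketch using the property names shown in the output above):

# Show only the connected servers and the configuration version
Get-WebApplicationProxyConfiguration |
    Select-Object ConnectedServersName, ConfigurationVersion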

Head back to one of the previous WAP servers running Windows Server 2012 R2, and run the following cmdlet to remove the old servers from the cluster, keeping only the WAP 2016 servers:

 Set-WebApplicationProxyConfiguration -ConnectedServersName WAP03, WAP04 

Once complete, check the WebApplicationProxyConfiguration by running the Get-WebApplicationProxyConfiguration cmdlet.

Notice the change in ConnectedServersName (this information is obtained from the WAP 2012 R2 server).

6

If you run Get-WebApplicationProxyConfiguration from WAP 2016, you will get more detailed information.

6-1

The last remaining step before publishing a Web Application (in my case) was to upgrade the ConfigurationVersion, as it was still in Windows Server 2012 R2 mode.

If you have already published Web Applications, you can do this at any time.

Set-WebApplicationProxyConfiguration -UpgradeConfigurationVersion

When successful, check your WebApplicationProxyConfiguration again by running

Get-WebApplicationProxyConfiguration

Notice the ConfigurationVersion:

8

You have now completed the upgrade and migration of your Web Application Proxy servers.

If this is a new deployment, of course you don’t need to go through this whole process. Your WAP 2016 servers would already be at the Windows Server 2016 ConfigurationVersion.

Assuming this was a new deployment or if you simply need to publish a new Web Application, continue reading and follow the steps below.

Publishing a Web Application

There’s nothing new or different about publishing a Web Application in WAP 2016; it’s pretty similar to WAP 2012 R2. The only addition from Microsoft is a redirection from HTTP to HTTPS.

Step 1: Publish a new Web Application

9

Step 2: Once complete, it will appear in the published Web Applications. Also notice that WAP03 and WAP04 are now the only WAP servers in the cluster, as we previously removed WAP01 and WAP02 running Windows Server 2012 R2.

7

There you have it, you have now upgraded your WAP servers that were previously running WAP 2012 R2 to WAP 2016.

By now, you have completed migrating from ADFS v3 to ADFS 2016, integrated Azure MFA with ADFS 2016, and upgraded WAP 2012 R2 to WAP 2016. No additional configuration is required, we have reached the end of our series, and this concludes the migration and upgrade of your SSO environment.

I hope you’ve enjoyed those posts and found them helpful. For any feedback or questions, please leave a comment below.

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 3 – Azure MFA Integration

In Part 1 and Part 2 of this series we covered the migration from ADFS v3 to ADFS 2016. In this post we continue by configuring Azure MFA in ADFS 2016.

Azure MFA – What is it about?

It is a bit confusing when we say that we need to enable Azure MFA on ADFS. Technically, this method is integrating Azure MFA with ADFS: the MFA itself authenticates against Azure AD, while ADFS prompts you to enter an MFA code, which is then verified with Azure AD to sign you in.

Strictly speaking, this on its own is not multi-factor authentication. When users choose to log in with multi-factor authentication on ADFS, they’re not prompted to enter a password; they simply log in with the six-digit code they receive on their mobile devices.

Is this secure enough? Arguably. Of course, users must have previously set up MFA to be able to log in this way, but if someone has control or possession of your device they could simply log in with the six-digit code. Assuming the device is not locked, or MFA is set up to receive calls or messages (on some phones message notifications appear on the main display), almost anyone could log in.

This is how Azure MFA will look once integrated with the ADFS server. I will outline the process below and show you how we got this far.

7

Once you select Azure Multi-Factor Authentication, you will be redirected to another page.

8

And when you click on “Sign In” you will simply sign in to the Office or Azure Portal, without any other prompt.

The whole idea here is not so much about security as it is about simplicity.

Integrating Azure MFA on ADFS 2016

Important note before you begin: before integrating Azure MFA with ADFS 2016, please be aware that users should already have set up MFA using the Microsoft Authenticator mobile app, or they can do so when first signing in, after being redirected to the ADFS page. The aim of this post is to use the six-digit code generated by the mobile app.

If users have MFA set up to receive calls or texts, the configuration in this blog (making Azure MFA primary) will not support that. To continue using SMS or a call alongside Azure MFA, “Azure MFA” needs to be configured as an additional authentication method under “Multi-Factor”, and “Azure MFA” under “Primary” should be disabled (a rough PowerShell sketch of both options follows below).
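As an illustration of the two options, here is a minimal PowerShell sketch using Set-AdfsGlobalAuthenticationPolicy. The provider name AzureMfaAuthentication is the one ADFS 2016 registers for Azure MFA, but verify the exact provider names in your farm with Get-AdfsGlobalAuthenticationPolicy before applying anything; the primary provider lists below are assumptions, not taken from this lab.

# Option A (what this post configures): Azure MFA available as a primary method
Set-AdfsGlobalAuthenticationPolicy `
    -PrimaryExtranetAuthenticationProvider @('FormsAuthentication', 'AzureMfaAuthentication') `
    -PrimaryIntranetAuthenticationProvider @('WindowsAuthentication', 'AzureMfaAuthentication')

# Option B (keeps SMS/phone-call verification working): Azure MFA as an additional method only
Set-AdfsGlobalAuthenticationPolicy `
    -PrimaryExtranetAuthenticationProvider @('FormsAuthentication') `
    -AdditionalAuthenticationProvider 'AzureMfaAuthentication'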

Integrating Azure MFA with ADFS 2016 couldn’t be easier. All that is required is running a few PowerShell cmdlets and enabling the authentication method.

Before we do so however, let’s have a look at our current authentication methods.

0

As you may have noticed, we can’t enable Azure MFA without first configuring the Azure AD tenant.

The steps below are intended to be performed on all AD FS servers in the farm.

Step 1: Open PowerShell and connect to your tenant by running the following:

Connect-MsolService

Step 2: Once connected, you need to run the following cmdlet to configure the AAD tenant:

$cert = New-AdfsAzureMfaTenantCertificate -TenantID swayit.onmicrosoft.com

When successful, head to the Certificates snap-in and check that a certificate with the name of your tenant has been added to the Personal store.

2a22

Step 3: In order to enable the AD FS servers to communicate with the Azure Multi-Factor Auth Client, you need to add the credentials to the SPN for the Azure Multi-Factor Auth Client. The certificate that we generated in the previous step will serve as these credentials.

To do so run the following cmdlet:

New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $cert

3

Note that the GUID 981f26a1-7f43-403b-a875-f8b09b8cd720 is not made up; it is the GUID for the Azure Multi-Factor Authentication client, so you can copy and paste the cmdlet as is.

Step 4: When you have completed the previous steps, you can now configure the ADFS Farm by running the following cmdlet:

Set-AdfsAzureMfaTenant -TenantId swayit.onmicrosoft.com -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720

Note how we used the same GUID from the previous step.

4

When that is complete, restart the ADFS service on all your ADFS farm servers.

net stop adfssrv

net start adfssrv
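If you have several servers in the farm, a small sketch to restart the service on all of them in one go is below (the server names ADFS03 and ADFS04 are from this lab, and PowerShell remoting must be enabled):

# Restart the ADFS service on every ADFS 2016 farm server
Invoke-Command -ComputerName ADFS03, ADFS04 -ScriptBlock {
    Restart-Service -Name adfssrv
}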

Head back to your ADFS Management Console, open the authentication methods, and you will notice that Azure MFA has been enabled and the message prompt has disappeared.

5

6

If the Azure MFA Authentication methods were not enabled, then enable them manually and restart the services again (on all your ADFS servers in the farm).

Now that you have completed all the steps, when you try and access Office 365 or the Azure Portal you will be redirected to the pages posted above.

Choose Azure Multi-Factor Authentication

7

Enter the six digit code you have received.

8

And then you’re signed in.

10

By now you have completed migrating from ADFS v3 to ADFS 2016, and in addition have integrated Azure MFA authentication with your ADFS farm.

The last part in this series will be about the WAP 2012 R2 upgrade to WAP 2016, so please make sure to come back tomorrow and check the upgrade process in detail.

I hope you’ve enjoyed the posts so far. For any feedback or questions, please leave a comment below.

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 2

In Part 1 of this series we have been getting ready for our ADFS v3.0 migration to ADFS v4.0 (ADFS 2016).

In part 2 we will cover the migration process, step-by-step. However, a friendly reminder that this series does not cover installation of ADFS and federation from scratch. This post assumes you already have a federated domain and Single Sign On (SSO) for your applications.

You may notice that the domain and federation service name change from swayit.com.au to iknowtech.com.au. This doesn’t impact our migration; the certificate for swayit.com.au expired before the lab was completed. : )

Migration Process – ADFS – Phase 1:

This assumes you have already installed the Active Directory Federation Services role on your new ADFS 2016 servers; if not, you can do so through PowerShell:

Install-windowsfeature adfs-federation -IncludeManagementTools

Once complete, follow these steps:

Step 1: Add the new ADFS 2016 server to the existing farm

1-add-farm

Step 2: Connect to AD

2

Step 3: Specify the primary Federation server (or federation service).

3

Step 4: Select your certificate

4

Step 5: Select your service account. For the sake of this lab, I created a user and gave it permission to run the ADFS service. It is advisable however, to use a group managed service account (gMSA).

5

Step 6: Complete.

The warnings below are irrelevant to the ADFS 2016 server being added to the farm.

6

Alternatively, you could do so through PowerShell:

If you’re using Windows Internal Database:

 Import-Module ADFS

#Get the credential used for the federation service account

$serviceAccountCredential = Get-Credential -Message "Enter the credential for the Federation Service Account."

Add-AdfsFarmNode `
-CertificateThumbprint:"071E6FD450A9D10FEB42C77F75AC3FD16F4ADD5F" `
-PrimaryComputerName:"sts.swayit.com.au" `
-ServiceAccountCredential:$serviceAccountCredential 

or:

Import-Module ADFS

$credentials = Get-Credential

Install-AdfsFarm `
-CertificateThumbprint:"071E6FD450A9D10FEB42C77F75AC3FD16F4ADD5F" `
-FederationServiceDisplayName:"SwayIT" `
-FederationServiceName:"sts.swayit.com.au" `
-GroupServiceAccountIdentifier:"SWAYIT\ADFSgMSA`$"

Once the machine has restarted, open the ADFS Management Console, and you’ll notice it’s not the primary federation server in the farm. You now need to make the newly added ADFS 2016 server the primary. Follow the steps below.

On the newly added ADFS 2016 server, run the following cmdlet:

Step 1:

Set-AdfsSyncProperties -Role PrimaryComputer

8

Open the ADFS Management Console and you’ll notice that ADFS03 (our ADFS2016 server) is now primary:

10

Step 2: Run the following cmdlet on all other federation servers in the farm

Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName ADFS03.swayit.com.au 

Step 3: Run the following cmdlet on the secondary server to confirm that the ADFS Properties are correct.

Get-AdfsSyncProperties

9-5

Migration Process – ADPREP – Phase 2

Now that we’ve made our new ADFS 2016 server the primary, it is time to upgrade the schema.

This assumes you have already downloaded the Windows Server 2016 ISO file; if not, you can obtain a copy from the TechNet Evaluation Centre.

I performed these steps on the ADFS2016 server:

  1. Open a command prompt and navigate to the support\adprep directory on the Windows Server 2016 installation media.
  2. Type in: adprep /forestprep

1

Once the first step is complete, you will get “The command has completed successfully.”

Next, run: adprep /domainprep

2

Migration Process – ADFS – Phase 3:

At this stage we had already completed:

  • Adding ADFS 2016 to the existing farm
  • Promoting one of the new ADFS 2016 servers to primary
  • Pointing all secondary servers to the primary server
  • Upgrading the schema

Next phase is to remove the existing ADFS v3 (ADFS 2012 R2) from the Azure Load Balancer (or any load balancer you have in place).

After you have removed ADFS v3 from the load balancer, and possibly from the farm (or simply turned those servers off), you will need to raise the Farm Behavior Level (FBL).

When raising the FBL, any ADFS v3 servers will be removed from the farm automatically, so you don’t have to remove them yourself.

When the ADFS v3 servers are no longer part of the farm, I recommend keeping them turned off. Should anything go wrong, you can simply turn the ADFS v3 servers back on, make one of them primary, and in this way avoid impacting the business.

If you find yourself in this situation, just make sure everything else is pointing back to ADFS v3.

When you’re ready again, just start from the beginning by adding ADFS 2016 back to the farm.

Here are the steps:

  1. On the Primary ADFS 2016 server open an elevated PowerShell and run the following cmdlet:
Invoke-AdfsFarmBehaviorLevelRaise

12

As you may have noticed, it automatically detected which ADFS servers the operation will be performed on. Both ADFS03 and ADFS04 are ADFS 2016 versions.

During the process, you will see the usual PowerShell execution message:

12-1

Once complete, you will see a successful message:

13

If the service account failed to be added to the Enterprise Key Admins group, add it manually (a sketch follows below).
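A minimal sketch for adding the account manually is below, assuming the RSAT Active Directory module is available and using this lab’s gMSA name ADFSgMSA; substitute your own service account.

# Manually add the ADFS service account (gMSA) to the Enterprise Key Admins group
Import-Module ActiveDirectory
Add-ADGroupMember -Identity "Enterprise Key Admins" `
    -Members (Get-ADServiceAccount -Identity "ADFSgMSA")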

In order to confirm the Farm Behavior Level, run the following cmdlet:

Get-AdfsFarmInformation

14

If you go to https://portal.office.com, and enter the email address of a federated domain, you should be redirected to your ADFS login page:

15

And this is it. You have successfully migrated from ADFS v3.0 to ADFS 2016.

The next post in our series is on Azure MFA integration with ADFS 2016, so please make sure to come back tomorrow to check the configuration process in detail.

I hope you’ve enjoyed this post. For any feedback or questions, please leave a comment below.

ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 1

Introduction

With the release of Windows Server 2016, Microsoft has introduced new and improved features. One of those features is ADFS 4.0, better known as ADFS 2016.

Organisations have already started leveraging ADFS 2016 as it covers most of their requirements, especially in terms of security.

In this series of blog posts, I will demonstrate how you can upgrade from ADFS v 3.0 (Running Windows Server 2012 R2) to ADFS 2016 (Running Windows Server 2016 Datacenter). In the series to come I will also cover Web Application Proxy (WAP) migration from Windows Server 2012 R2 to Windows Server 2016. Moreover, I will cover integration of Azure MFA with the new ADFS 2016.

The posts in this series assume you have knowledge in Windows Servers, AD, ADFS, WAP, and MFA. This blog post will not go into detailed step-by-step installation of roles and features. This blog post also assumes you have a running environment of AD, ADFS/WAP (2012 R2), AAD Connect already configured.

What’s New in ADFS 2016?

ADFS 2016 offers new and improved features, including:

  • Eliminate Passwords from the Extranet
  • Sign in with Azure Multi-factor Authentication
  • Password-less Access from Compliant Devices
  • Sign in with Microsoft Passport
  • Secure Access to Applications
  • Better Sign in experience
  • Manageability and Operational Enhancements

For detailed description on the aforementioned points, please refer to this link.

Current Environment

  • 2x ADFS v3 Servers (behind an internal load balancer)
  • 2x WAP 2012 R2 Server (behind an external load balancer)
  • 2x AD 2012 R2 Servers
  • 1x AAD Connect server

At a high level, this is how the ADFS/WAP environment looks:

sso

Future environment:

  • 2x ADFS 2016 Servers (behind the same internal load balancer)
  • 2x WAP 2016 Servers (behind the same external load balancer)
  • 2x AD 2012 R2 Servers
  • 1x AAD Connect Server

Planning for your ADFS and WAP Migration

First, you need to make sure that your applications support ADFS 2016; some legacy applications may not be supported.

The steps to implement SSO are as follows:

  1. Active Directory schema update using ‘ADPrep’ with the Windows Server 2016 additions
  2. Build Windows Server 2016 servers with ADFS and install into the existing farm and add the servers to the Azure load balancer
  3. Promote one of the ADFS 2016 servers as “primary” of the farm, and point all other secondary servers to the new “primary”
  4. Build Windows Server 2016 servers with WAP and add the servers to the Azure load balancer
  5. Remove the WAP 2012 servers from the Azure load balancer
  6. Remove the ADFSv3 servers from the Azure load balancer
  7. Raise the Farm Behavior Level feature (FBL) to ‘2016’
  8. Remove the WAP servers from the cluster
  9. Upgrade the WebApplicationProxyConfiguration version to ‘2016’
  10. Configure ADFS 2016 to support Azure MFA and complete remaining configuration

The steps for the AD schema upgrade are as follows:

  1. Prior to starting, Active Directory needs to be in a healthy state; in particular, replication needs to be performing without error.
  2. The Active Directory needs to be backed-up. Best to backup (at a minimum) a few Active Directory Domain Controllers including the ‘system state’
  3. Identify which Active Directory Domain Controller maintains the Schema Master role
  4. Perform the update using an administrative account by temporarily adding the account to the Schema Admin group
  5. Download and have handy the Windows Server 2016 installation media
  6. When ready to update the schema, perform the following:
  7. Open an elevated command prompt and navigate to the support\adprep directory in the Windows Server 2016 installation media. Run the following: adprep /forestprep
  8. Once that completes, run the following: adprep /domainprep

Upgrading the Active Directory schema will not impact your current environment, nor will it raise the domain/forest level.

Part 2 of this series will be published early next week, so please make sure to come back and check the migration process in detail.

UX is money

User Experience (UX) is money – it’s as simple as that. Be in it, or you will lose out – one way or another.

In the current ‘Age of the Customer‘, UX can have an impact on virtually every part of your business – and if you don’t adapt, you risk getting left behind – and perhaps worse – not even satisfying your customers.

UX, done correctly, should impact all of the following money-related aspects of your business:

  • Customer Experience (CX) (of your company and its services/products)
  • Customer satisfaction
  • Business strategy
  • Brand loyalty
  • Identifying innovation and new business opportunities
  • Product and service differentiation
  • Product and service design
  • Product development (as UX helps you identify the best options to be developed for your budget)
  • User interfaces – UX improves usability, usefulness and visual design, which in turn increases user satisfaction and loyalty.

Whilst not exhaustive, even this basic list provides a good idea of how important UX is to business and to business profitability.

And if you’re in the business of providing UX services to clients (like we are):

  • UX is good business for both service provider and client, generating combined wealth and a mutually beneficial partnership
  • UX increases client satisfaction by helping deliver great products and services for clients – a win-win situation for both customer and service provider.

In a nutshell

UX thus:

  • Brings in more money
    • via increased customer satisfaction
    • via a more tailored experience
    • delivering increased sales
  • Saves money
    • by helping drive more effective product development
    • by identifying and helping deliver what you know that your customers actually want
    • and thus delighting customers, increasing word of mouth and business reputation
    • and therefore achieving free advertising for your company

Times have changed

The traditional view of the customer (where businesses decide the products/services they want to create or deliver, and then figuratively “force them down customers’ throats”, via marketing and sales) – is a sure way to get left behind in a fast moving world. No longer is there a single point of exchange where value is ‘extracted’ from the customer (the “transaction”).

Today, the bar has been raised by companies that have embedded UX and design culture into their organisation. In order to have a competitive advantage, businesses must thoroughly understand customers, build products to meet the customers’ needs, and then delight and excite these customers throughout the user’s whole experience with the company.

joseph-pine-the-progression-of-economic-value

Joseph Pine TED Talk: The progression of Economic Value

And by understanding your customers/users, your organisation will be able to think differently and push the boundaries, be different from the masses, and gain a unique competitive advantage.

“There’s no longer any real distinction between business strategy and the design of the user experience”
Bridget van Kralingen, Senior VP of IBM Global Business Services

How have times changed?

To quote a few UX experts:

“User Experience encompasses all aspects of the end-user’s interaction with the company, its services, and its products” – Nielsen Norman Group

“UX is a means to drive product innovation and differentiation, as well as to enrich corporate cultures. UX successfully drives a number of mission-critical business key performance indicators including customer engagement, retention, and loyalty” – Nancy Dickenson

Companies at the forefront are now competing to provide the best customer experience – and conversely, companies that are not providing a great customer or user experience are falling behind.

User Experience:

  • provides a means to affect the overall customer experience with a company and its brand, by reaching the customer at every digital touchpoint they have with that company
  • is a philosophy accompanied by a set of processes that allow you to gain a thorough understanding of your users so that you can not only satisfy your customers but delight them.
  • does not exist without *users*.
  • is a competence, not a function.

Speak to us if you would like help with tailoring a User Experience and realising the value that UX will bring to your business.

Configuring AWS Web Application Firewall

In a previous blog, we discussed Site Delivery with AWS CloudFront CDN; one aspect not covered there was WAF (Web Application Firewall).

What is Web Application Firewall?

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.

When to create your WAF?

Since this blog is related to CloudFront, and WAF is tightly integrated with CloudFront, our focus will be on that. Your WAF rules will run and be applied at all the Edge Locations you specify during your CloudFront configuration.

It is recommended that you create your WAF before deploying your CloudFront distribution. CloudFront takes a while to deploy, and any changes applied after deployment will take just as long to propagate – even after you attach the WAF to the CloudFront distribution (this is done in “Choose Resource” during WAF configuration, shown below).

That said, there is no general rule here; it is up to the organisation or administrator to apply WAF rules before or after the deployment of the CloudFront distribution.

Main WAF Features

In terms of security, WAF protects your applications from the following attacks:

  • SQL Injection
  • DDoS attacks
  • Cross Site Scripting

In terms of visibility, WAF gives you the ability to monitor requests and attacks through CloudWatch integration (excluded from this blog). It gives you raw data on location, IP Addresses and so on.

How to setup WAF?

Setting up WAF can be done in a few ways: you can either use a CloudFormation template, or configure the settings on the WAF page.

Since each organisation is different, and requirements change based on applications and websites, the configuration presented in this blog is general practice and a recommendation. You will still need to tailor the WAF rules to your own needs.

WAF Conditions

For the rules to function, you need to set up filter conditions for your application or website ACL.

I already have WAF set up in my AWS account; here’s a sample of how conditions look.

1-waf

If you have no Conditions already setup, you will see something like “There is no IP Match conditions, please create one”.

To Create a condition, have a look at the following images:

Here, we’re creating a filter that checks whether an HTTP method contains a threat after HTML decoding.

2-waf

Once you’ve selected your filters, click on “Add Filter”. The filter will be added to the list of filters, and once you’re done adding all your filters, create your condition.

3-waf

You follow the same procedure to create your conditions for SQL injection, for example.

WAF Rules

When you are done with configuring conditions, you can create a rule and attach it to your web ACL. You can attach multiple rules to an ACL.

Creating a rule – Here’s where you specify the conditions you have created in a previous step.

4-waf

From the list of rules, select the rule you have created from the drop down menu, and attach it to the ACL.

5-waf

In the next steps you will have the option to choose your AWS Resource, in this case one of my CloudFront Distributions. Review and create your Web ACL.

6-waf

7-waf

Once you click on create, go to your CloudFront distribution and check its status, it should show “In progress”.

WAF Sample

Since there isn’t a single way to create a WAF rule, if you’re not sure where to begin AWS gives you a good starting point: a CloudFormation template that creates sample WAF rules for you.

This WAF sample includes the following, found here:

  • A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
  • An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function, such as is shown in How to Import IP Address Reputation Lists to Automatically Update AWS WAF IP Blacklists and How to Use AWS WAF to Block IP Addresses That Generate Bad Requests.
  • A SQL injection rule and condition to match SQL injection-like patterns in URI, query string, and body.
  • A cross-site scripting rule and condition to match Xss-like patterns in URI and query string.
  • A size-constraint rule and condition to match requests with URI or query string >= 8192 bytes which may assist in mitigating against buffer overflow type attacks.
  • ByteHeader rules and conditions (split into two sets) to match user agents that include spiders for non–English-speaking countries that are commonly blocked in a robots.txt file, such as sogou, baidu, and etaospider, and tools that you might choose to monitor use of, such as wget and cURL. Note that the WordPress user agent is included because it is used commonly by compromised systems in reflective attacks against non–WordPress sites.
  • ByteUri rules and conditions (split into two sets) to match request strings containing install, update.php, wp-config.php, and internal functions including $password, $user_id, and $session.
  • A whitelist IP condition (empty) is included and added as an exception to the ByteURIRule2 rule as an example of how to block unwanted user agents, unless they match a list of known good IP addresses

Follow this link to create a stack in the Sydney region.

I recommend that you review the filters, conditions, and rules created with this Web ACL sample. If anything, you could easily update and edit the conditions as you desire according to your applications and websites.

Conclusion

In conclusion, there are certain aspects of WAF that need to be considered, like choosing an appropriate WAF solution and managing its availability, and you have to be sure that your WAF solution can keep up with your applications.

One of the best features of WAF is that, because it is integrated with CloudFront, it can be used to protect websites even if they’re not hosted in AWS.

I hope you found this blog informative. Please feel free to add your comments below.

Thanks for reading.

Site Delivery with AWS CloudFront CDN

Nowadays, most companies are using some sort of a Content Delivery Network (CDN) to improve the performance and high availability of their sites, those include Azure CDN, CloudFlare, CloudFront, Varnish, and so on.

In this blog however, I will demonstrate how you can deliver your entire website through AWS’s CloudFront. This blog will not go through other CDN services. This blog also assumes you have knowledge of AWS services, DNS, and CDN.

What is CloudFront?

Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets. It integrates with other Amazon Web Services products to give developers and businesses an easy way to accelerate content to end users with no minimum usage commitments.

CloudFront delivers the contents of your websites through global datacentres known as “Edge Locations”.

Assuming the web server is located in New York and you’re accessing the website from Melbourne, your latency will be greater than that of someone accessing the website from London.

CloudFront’s Edge Locations will serve the content of a website depending on location. That is, if you’re trying to access a New York based website from Melbourne, you will be directed to the closest Edge Location available for users from Australia. There are two Edge Locations in Australia, one in Melbourne and one in Sydney.

How does CloudFront deliver content?

Please note that content isn’t served from the cache on the first request (whether you use CloudFront or any other CDN solution). That is, for the first user who accesses a page from Melbourne (first request), the contents of the page have not yet been cached, so they are fetched from the web server. The second user who accesses the website (second request) will get the contents from the Edge Location.

Here’s how:

drawing1

The main features of CloudFront are:

  • Edge Location: This is the location where content will be cached. This is separate from an AWS Region or Availability Zone.
  • Origin: This is the origin of all the files that the CDN will distribute. This can be an S3 bucket, an EC2 instance, an ELB or Route53.
  • Distribution: The name given to the CDN, which consists of a collection of Edge Locations.
  • Web Distribution: Typically used for websites.
  • RTMP: Typically used for media streaming (not covered in this blog).

In this blog, we will be covering “Web Distribution”.

There are multiple ways to “define” your origin. You could either upload your contents to an S3 bucket, or let CloudFront cache objects from your webserver.

I advise you to keep the same naming conventions you have previously used for your website.

There’s no real difference between choosing an S3 bucket, or your webserver to deliver contents, or even an Elastic Load Balancer.

What matters, however, is that should you choose for CloudFront to cache objects from your origin, you may need to change your hostname. Alternatively, if your DNS registrar allows it, you can make an apex DNS change.

Before we dive deep into setting up and configuring CloudFront, know that it is a fairly simple process, and we will be using the CloudFront GUI to achieve this.

Alternatively you can use other third party tools like S3 Browser, Cloudberry Explorer, or even CloudFormation if you have several websites you’d like to enable CloudFront for. These tools are excluded from this blog.

Setup CloudFront with S3

Although I do not recommend this approach (S3 is designed as a storage service, not a content delivery service, and under load it will not provide optimum performance), here are the steps:

  1. Create your bucket
  2. Upload your files
  3. Make your content public (this is achieved through Permissions – simply choose grantee “Everyone”); see the sketch after this list for a scripted version
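For completeness, a scripted version of these three steps using the AWS Tools for PowerShell is sketched below. The bucket name and file path are examples only, and the public-read canned ACL mirrors the “Everyone” grant mentioned above.

# Assumes the AWSPowerShell module is installed and credentials are already configured
Import-Module AWSPowerShell

# 1. Create the bucket (example name and region)
New-S3Bucket -BucketName "cdn-example-bucket" -Region ap-southeast-2

# 2 and 3. Upload a file and make it publicly readable
Write-S3Object -BucketName "cdn-example-bucket" `
    -File "C:\site\nasa_blue_marble.jpg" `
    -Key "nasa_blue_marble.jpg" `
    -CannedACLName public-read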

Configuring CloudFront with S3

As aforementioned, configuring CloudFront is very straightforward. Here are the steps for doing so.

I will explain the different settings at the end of each image.

Choose your origin (domain name, S3 bucket, ELB…). If you have an S3 bucket or an ELB already configured, they will show in the drop down menu.

You could simply follow the selected options in the image for optimal performance and configuration of your CloudFront distribution.

5-cdn

  • Origin path: This is optional and usually not needed to be specified. This is basically a directory in your bucket in which you’re telling CloudFront to request the content from.
  • Origin ID: This is automatically populated, but you can change it. Its only function is for you to distinguish origins if you have multiple origins in the same distributions.
  • Restrict Bucket Access: This is for users to access your CloudFront URL e.g. 123456.cloudfront.net rather than the S3 URL.
  • Origin Access Identity: This is required if you want your users to always access your Amazon S3 content using CloudFront URLs. You can use the same Access Identity for all your distributions. In fact, it is recommended you do so to make life simpler.
  • Grant Read Permissions on Bucket: This applies on the “Origin Access Identity” so CloudFront can access objects in your Amazon S3 bucket. This is automatically applied.
  • Viewer Protocol Policy: This is to specify how users should access your origin domain name. If you have a website that accepts both HTTP and HTTPS, then choose that. CloudFront will fetch the contents based on this viewer policy. That is, if a user typed in http://iknowtech.com.au then CloudFront will fetch content over HTTP. If HTTPS is used, then CloudFront will fetch contents over HTTPS. If your website only accepts HTTPS, then choose that option.
  • Allowed HTTP Methods: This is mainly used for commerce websites or websites with login forms, which require data from end users. You can keep the default “GET, HEAD”. Nevertheless, if you do allow the full set of methods, make sure to configure your web server to handle “DELETE” appropriately, otherwise users might be able to delete content.
  • Cached HTTP Methods: You will have an additional “Options”, if you choose the specified HTTP Methods shown above. This is to specify the methods in which you want CloudFront to do caching.

In the second part of the configuration:

6-cdn

  • Price Class: This is to specify which regions of available Edge Locations you want to “serve” your website from.
  • AWS WAF Web ACL: Web Application Firewall (WAF) is a set of ACL rules which you create to protect your website from attacks, e.g. SQL Injections etc. I highlighted that on purpose as there will be another blog for that alone.
  • CNAME: If you don’t want to use CloudFront’s URL, e.g. 123456.cloudfront.net, and instead want to use your own domain, then specifying a CNAME is a good idea; you can specify up to 100 CNAMEs. Nevertheless, there may be a catch. Most DNS hosting services may not allow you to edit the apex zone of your records. If you create a CNAME for http://www.domain.com pointing to 123456.cloudfront.net, then any requests coming from http://domain.com will not go through CloudFront. And if you have a redirection set up in your web server for any request coming from http://www.domain.com to go to http://domain.com, then there’s no point configuring CloudFront.
  • SSL Certificates: You could either use CloudFront’s certificate, and it is a wildcard certificate of *.cloudfront.net, or you can request to use your own domain’s certificate.
  • Supported HTTP Versions: What you need to know is that CloudFront always forwards requests to the origin using HTTP/1.1; this setting only affects the connection between the viewer and CloudFront. Most modern browsers support the HTTP versions shown above, and HTTP/2 is usually faster. Read more here for more info on HTTP/2 support. In theory this sounds ideal; technically, however, nothing much changes in the backend of CloudFront.
  • Logging: Choose to have logging on. Logs are saved in an S3 bucket.
  • Bucket for Logs: Specify the bucket you want to save the logs onto.
  • Log Prefix: Choose a prefix for your logs. I like to include the domain name for each log of each domain.
  • Cookie logging: Not quite important to have it turned on.
  • Enable IPv6: You can have IPv6 enabled, however as of this writing this is still being deployed.
  • Distribution State: You can choose to deploy/create your CloudFront distribution with either in an enabled or disabled state.

Once you’ve completed the steps above, click on “Create Distribution”. It might take anywhere from 10 to 30 minutes for the CloudFront distribution to be deployed; the average waiting time is 15 minutes.
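If you prefer to poll the deployment status from PowerShell rather than refreshing the console, a hedged one-liner is below; the distribution ID is an example, and it assumes the AWSPowerShell module is installed and configured.

# Status changes from "InProgress" to "Deployed" once all edge locations are updated
(Get-CFDistribution -Id "E1EXAMPLE12345").Status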

Setup and Configure your DNS Records

Once the Distribution is started, head over to “Distributions” in CloudFront, then click on the Distribution ID, take note of the domain name: d2hpa1xghjdn8b.cloudfront.net.

Head over to your DNS records and add the CNAME (or CNAMEs) you have specified in earlier steps to point to d2hpa1xghjdn8b.cloudfront.net.

Do not wait until the distribution is complete; add the DNS records while the distribution is being deployed. This will at least give time for your DNS to propagate, since CloudFront takes anywhere between 10 and 30 minutes to deploy.

17-dns

18-dns

If you’re delivering the website through an ELB, then you can use the ELB’s CNAME to point your site to it.

Here’s what will appear eventually once the CloudFront Distribution is complete. Notice the URLs: http://d80u8wk4w5p58.cloudfront.net/nasa_blue_marble.jpg (I may remove this link in the future).

8-cdn

You can also access it via: http://cdn.iknowtech.com.au/nasa_blue_marble.jpg (I may also remove this link in the future).

10-cdn

Configuring CloudFront with Custom Origin

Creating a CloudFront distribution based on a custom origin – that is, allowing CloudFront to cache directly from your domain – is pretty much the same as above, with some differences shown below. Every other setting is the same as above.

9-cdn

The changes, as you can see relate to SSL Protocols, HTTP and HTTPS ports.

  • Origin SSL Protocols: This specifies which SSL/TLS protocols CloudFront will use when establishing a connection to your origin. If you don’t have SSLv3, keep it disabled. If you do, and your origin does not support TLS v1, v1.1 and v1.2, then choose SSLv3.
  • Origin Protocol Policy: This is the same as Viewer Protocol Policy discussed above. If you choose “Match Viewer” then it will work with both HTTP and HTTPS. Obviously, it also depends on how your website is set up.
  • HTTP and HTTPS ports: Leave default ports.

Configure CloudFront with WordPress

If you have a WordPress site, this is probably the easiest way to configure CloudFront. Through the use of plugins, you can change the hostname.

  1. Install the W3 Total Cache plugin in your WordPress page.
  2. Enable CDN and choose CloudFront. This is found in the “General Settings” tab of the W3 plugin.

11-cdn
  3. While scrolling down to CDN, you may enable other forms of “caching” found in settings.
  4. After saving your changes, click on “CDN” tab of the W3 plugin.
  5. Key in the required information. I suggest you create an IAM user with permissions to CloudFront to be used here.

Note that I used cdn2.iknowtech.com.au because I had already used cdn.iknowtech.com.au. CloudFront will detect this setting and give you an error if you try and use the same CNAME.

12-cdn

Once your settings are saved, here’s how it’ll look.

Note the URLs: http://d2hpa1xghjdn8b.cloudfront.net (I may remove this link in the future).

13-cdn

You can also access it via: http://cdn2.iknowtech.com.au (I may also remove this link in the future).

14-cdn

To make sure your CDN is working, you can perform some tests using any of the following: gtmetrix.com, pingdom.com or webpagetest.org.

Here are the results from gtmetrix, tested for iknowtech.com.au

15-cdn

The same result for cdn2.iknowtech.com.au.

Notice the page load time after the second request.

16-cdn
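Besides the tools above, you can also confirm CloudFront is serving your content by inspecting the response headers; a minimal sketch using the CDN hostname from this post follows. The X-Cache header typically reads “Miss from cloudfront” on the first request and “Hit from cloudfront” on subsequent ones.

# Request the object and inspect the CloudFront response headers
$response = Invoke-WebRequest -Uri "http://cdn.iknowtech.com.au/nasa_blue_marble.jpg" -Method Head
$response.Headers["X-Cache"]
$response.Headers["Via"]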

And that’s it. That’s all you need to know to create a CloudFront distribution.

Conclusion

CloudFront is definitely a product to use if you’re looking for a CDN. It is true there are many other CDN products out there, but CloudFront is one of the easiest, highly available CDNs on the market.

Before you actually utilise CloudFront or any other CDN solutions, just be mindful of your hostnames. You need your primary domain or record to be cached.

I hope you found this blog informative. Please feel free to post your comments below.

Thanks for reading.

Forcing MFA in Amazon Web Services

Many organisations will want to enforce MFA as an added security layer for their users. As each service is different, in some cases enforcing MFA may not be as easy as it sounds.

In AWS, an administrator cannot simply “tick” a box to enable MFA on all users (as of this writing). However, MFA can be enforced on API calls to “force” a user to set up MFA. Think of it as a backdoor to forcing or enabling MFA on all your IAM users.

The only way that can be achieved is by creating a policy. This policy is based on a JSON document, which you can write in the AWS console or in an editor such as Visual Studio.

In this blog, I will demonstrate how to enforce MFA in AWS for all your IAM users. This blog will not go through setting up MFA for IAM users; that can be achieved by following the steps here.

This blog assumes you have some AWS knowledge and experience, and will not go into detail on attaching a policy or step-by-step policy creation; that can be found here.

How Does AWS define MFA?

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

You can enable MFA for your AWS account and for individual IAM users you have created under your account. MFA can be also be used to control access to AWS service APIs.

You can read more about it here.

What is API Calling?

Enforcing MFA on API calls means that an IAM user, for example, will not be able to make changes to EC2 instances or launch new EC2 instances unless they’re MFA-authenticated, even if they have full admin permissions.

This is not to be confused with enforcing MFA upon signing in; the IAM user will still have to set up MFA to complete tasks in AWS.

They are not very descriptive, but an IAM user will be faced with the following error messages if MFA isn’t set up.

ec2

In terms of Route53, the IAM users will have different error messages.

route53

In this case, it is up to administrators to let IAM users know that MFA should be set up upon signing in.

Setting up and creating the MFA Policy

The MFA policy that we will create and apply will only allow the use of IAM services so that users can set up their MFA, among other configurations.

iam

Creating the MFA script

Before you start, you will need your IAM ARN (Amazon Resource Name). For IAM users, it will look like this: “arn:aws:iam::123456789:user/test”.

For groups, it will look like this: “arn:aws:iam::123456789:group/AmazonEC2FullAccess”.

The account number in the ARN is your AWS account ID, which you can find under “My Account”.

The ARN is used throughout the script, and also, if required, for particular users or groups (explained in the examples below). Nevertheless, in this script we will enforce MFA on all groups and users by default.

Don’t forget to replace the account ID with your own.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllUsersToListAccounts",
      "Effect": "Allow",
      "Action": [
        "iam:ListAccountAliases",
        "iam:ListUsers"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/*"
      ]
    },
    {
      "Sid": "AllowIndividualUserToSeeTheirAccountInformation",
      "Effect": "Allow",
      "Action": [
        "iam:ChangePassword",
        "iam:CreateLoginProfile",
        "iam:DeleteLoginProfile",
        "iam:GetAccountPasswordPolicy",
        "iam:GetAccountSummary",
        "iam:GetLoginProfile",
        "iam:UpdateLoginProfile"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/${aws:username}"
      ]
    },
    {
      "Sid": "AllowIndividualUserToListTheirMFA",
      "Effect": "Allow",
      "Action": [
        "iam:ListVirtualMFADevices",
        "iam:ListMFADevices"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/mfa/*",
        "arn:aws:iam::123456789:user/${aws:username}"
      ]
    },
    {
      "Sid": "AllowIndividualUserToManageTheirMFA",
      "Effect": "Allow",
      "Action": [
        "iam:CreateVirtualMFADevice",
        "iam:DeactivateMFADevice",
        "iam:DeleteVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ResyncMFADevice"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/mfa/${aws:username}",
        "arn:aws:iam::123456789:group/${aws:username}"
      ]
    },
    {
      "Sid": "DoNotAllowAnythingOtherThanAboveUnlessMFAd",
      "Effect": "Deny",
      "NotAction": "iam:*",
      "Resource": "*",
      "Condition": {
        "Null": {
          "aws:MultiFactorAuthAge": "true"
        }
      }
    }
  ]
}

If you want to enforce MFA on particular groups or users only, you will have to include them as follows.

Example:

"Resource": [
  "arn:aws:iam::123456789:group/AmazonEC2FullAccess:mfa/*",
  "arn:aws:iam::123456789:group/AmazonEC2FullAccess:user/${aws:username}",
  "arn:aws:iam::123456789:group/AmazonAppStreamFullAccess:mfa/*",
  "arn:aws:iam::123456789:group/AmazonAppStreamFullAccess:user/${aws:username}"
]
},

Note that in the script, in the Resource element, we specified all groups by adding * at the end. The same applies for users if you need to include all users.

Example:

"Resource": [
  "arn:aws:iam::123456789:group/*"
]
},

Users created before this policy exists may have to have it attached to them manually; IAM users created after the policy is in place will have it attached by default.
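If you would rather attach the policy from the command line than through the console, a sketch using the AWS Tools for PowerShell follows; the policy ARN, user and group names are examples only, and it assumes the policy has already been created as a customer-managed policy.

# Example ARN of the customer-managed "force MFA" policy created above
$policyArn = "arn:aws:iam::123456789:policy/ForceMFA"

# Attach the managed policy to a single user
Register-IAMUserPolicy -UserName "test" -PolicyArn $policyArn

# Or attach it to a group so that new members pick it up automatically
Register-IAMGroupPolicy -GroupName "AllUsers" -PolicyArn $policyArn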

I hope you found this blog informative. Please feel free to post your comments below.

Thanks for reading.