ADFS Service Communication Certificate Renewal Steps

Hi guys, the ADFS service comprises certificates which serve different purposes for the federation service. In this blog post I will share a brief description of these certificates and their purpose, and discuss the renewal process for the service communication certificate.

 

Types of ADFS certificates and their purpose

 

Certificate type, description and purpose:

Service Communication certificate: standard Secure Sockets Layer (SSL) certificate that is used for securing communications between federation servers, clients, Web Application Proxy, and federation server proxy computers. Purpose: ensures the identity of a remote computer and proves your identity to a remote computer.

Encryption certificate: standard X.509 certificate used to decrypt tokens received by the federation service. Purpose: token decryption.

Signing certificate: standard X.509 certificate that is used for securely signing all tokens. Purpose: token signing.

 

 

Renewal Steps

Service Communication certificate

In comparison, this certificate is very similar to an IIS certificate used to secure a website. It is generally issued by a trusted certificate authority (CA) and can be either a SAN or a wildcard certificate. This certificate is installed on all ADFS servers in the farm, and the update procedure should be performed on the primary ADFS server. Below is the list of steps involved in the renewal.

 

  1. Generate a CSR from the primary ADFS server. This can be done via IIS.
  2. Once the certificate is issued, add the new certificate to the certificate store.
  3. Verify the private key on the certificate. Make sure the new certificate has its private key.
  4. Assign permissions to the private key for the ADFS service account. Right-click the certificate, click Manage Private Keys, add the ADFS service account and assign permissions as shown in the screenshot below.

 

 adfs

  5. From the ADFS console select “Set Service Communication Certificate”.
  6. Select the new certificate from the prompted list of certificates.
  7. Run Get-AdfsSslCertificate and make a note of the thumbprint of the new certificate.
  8. If it’s unclear which certificate is new, open the MMC certificates snap-in, locate the new certificate and scroll down the list of properties to see the thumbprint.
  9. Run Set-AdfsSslCertificate with the thumbprint of the new certificate (a hedged sketch follows).
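A hedged sketch of the command for step 9, assuming ADFS on Windows Server 2012 R2 or later (the thumbprint is a placeholder for the value noted in step 7):

# Bind the new SSL certificate to the federation service
Set-AdfsSslCertificate -Thumbprint "0123456789ABCDEF0123456789ABCDEF01234567"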

 

  10. Restart the ADFS service.
  11. Copy and import the new certificate to the Web Application Proxy/Proxies.
  12. On each WAP server run the following cmdlet (a hedged sketch follows).
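A minimal sketch of the WAP update, assuming the new certificate has already been imported into the Local Machine store on each proxy (the thumbprint is a placeholder and the service name is an assumption):

# Update the SSL binding on the Web Application Proxy
Set-WebApplicationProxySslCertificate -Thumbprint "0123456789ABCDEF0123456789ABCDEF01234567"

# Restart the proxy service so the new binding takes effect
Restart-Service appproxysvc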

 

That’s it, you are all done. You can verify that the new certificate has been assigned to the ADFS service by running Get-AdfsSslCertificate again. Another verification step is to open a browser and navigate to the federation page, where you should see the new certificate presented. I will discuss the encryption and signing certificate renewal process in upcoming blogs.

 

 

Resolving presence not up-to-date & unable to dial-in the conferences via PSTN issues Lync 2013

 

Recently I’ve been working with a Skype for Business (SFB) customer. I ran into a unique issue and would like to share the experience of how I solved the problem.

Issue Description: After SQL patching on the Lync servers, all users’ presence was not up-to-date and people were unable to dial in to scheduled conferences.

Investigation:

When I used the Lync shell on one FE server to move a test user from SBA pool A to pool B, the user pool info queried on SBA pool A still showed the test user under pool A. This indicated that either the FE Lync databases were not syncing with each other properly or there was database corruption. A sketch of this move-and-verify test is below.
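For illustration, a rough sketch of that move-and-verify test in the Lync Server Management Shell (the user identity and pool FQDNs are placeholders):

# Move the test user from SBA pool A to pool B
Move-CsUser -Identity "sip:testuser@contoso.com" -Target "poolB.contoso.com" -Confirm:$false

# Verify which registrar pool the user is now homed on
# (in our case this still reported pool A)
Get-CsUser -Identity "sip:testuser@contoso.com" | Select-Object DisplayName, RegistrarPool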

I checked all the Lync FE servers: all the Lync services were running and everything looked good. I re-tested the conference scenarios; the PSTN conference bridge number was unavailable, while people could still make incoming and outgoing calls.

I decided to go back and check the logs on all the Lync FE servers. On one of them I noticed “Warning: Revocation status unknown. Cannot contact the revocation server specified in certificate”. Weird. Did this mean there was something wrong with the cert on this FE server? I didn’t see this error on the other FE server, and both FE servers are supposed to use the same certs, so it wasn’t the cert itself. Something was wrong with the FE server.

Next, I tried turning off all the Lync services on the problematic FE server to see if it made any difference. Something interesting happened: once I did that, all users’ presence became up-to-date and the PSTN conference bridge number became available. I could dial in from my mobile after that, which verified it was a server issue.

Root Cause:

What caused the cert error on this FE server? Which cert was it using? I manually relaunched the deployment wizard to compare the certs between the two FE servers. Then I noticed that the Lync server configuration was not up-to-date at the database store level. This was a surprise to me because there had been no change to the topology, so I had never thought to re-run the deployment wizard after the FE SQL patching. On the other FE server, which was working as expected, I could see green checks on each step of the deployment wizard. Bingo: I believed all the inconsistencies users were seeing were related to the inconsistent SQL databases across the two FE servers.


 

Solution:

Eventually, after the change request was approved by the CAB, re-running the deployment wizard to sync the SQL store and re-assigning the certs to the Lync services resolved the issue.

 

Hopefully this can help someone else who has similar issues.


MIM2016 Upgrade Hanging on Custom Action – SetPermissionEval

I was upgrading a client’s environment from FIM 2010 R2 to MIM 2016. During the upgrade of the Synchronization Service, the installer appeared stuck; I waited for over an hour with no activity and no progress updates. I checked the MSI installation log and found the last activity was CustomAction = SetPermissionEval, ActionType=3073. Other than this, there were no errors or any indication of failure.

msilog

According to this TechNet article, SetPermissionEval sets access permissions (ACLs) for file folders, the registry, DCOM launch/access permissions and WMI.

ExtensionsCache

So I opened Process Monitor and discovered the cause: the hidden folder Microsoft Forefront Identity Manager\2010\Synchronization Service\ExtensionsCache contained over 260,000 folders with approximately 2 million objects, and the SetPermissionEval custom action was applying an ACL to each of them!

procmon

I couldn’t find the exact purpose of the ExtensionsCache; there is no Microsoft documentation on it, nor any mention in the official upgrade guidance or best practices. However, looking at the contents of the folder, I suspect FIM/MIM creates these folders when running synchronisation or export using custom code-based extension rules.

Based on an earlier forum post, I decided to delete the contents of this folder.

delete

Once all the items were deleted, I restarted the Synchronization Service upgrade; it continued and finished without delay.

I still don’t understand why the installer should try to set file permissions on the cache directory when the whole directory’s content can be removed without a problem. Why bother?

Anyway, if you are upgrading or patching your FIM or MIM instance, it might be worthwhile checking your hidden ExtensionsCache directory. If you have too many folders there, stop the synchronization service and delete those cache folders to avoid this problem, as sketched below.
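A quick sketch of that check and cleanup, assuming the default installation path:

# Stop the synchronization service before touching the cache
Stop-Service -Name FIMSynchronizationService

# Count the hidden cached extension folders
$cache = 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\ExtensionsCache'
(Get-ChildItem -Path $cache -Directory -Force | Measure-Object).Count

# If the count is huge, clear the cache contents before running the installer
Get-ChildItem -Path $cache -Force | Remove-Item -Recurse -Force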

Community AppStack (Part 1) – Decentralised Realtime Chat Demo With IPFS PubSub and Cycle.js

Introduction

I’ve been trying to come up with a good general-purpose tech stack for building web apps for small, local community user-bases. There are lots of great “share-economy-style” collaborative use cases at this level of organisation and I want a vehicle to test out ideas, to see what works and what doesn’t in my local area, from a social standpoint.

I obviously want to make what I develop freely available in the hope that others might contribute ideas or code, and I want people to be able to trust that it does what it says on the tin, so everything needs to go up on github.

Constraints

I’m a big believer in the power of constraints to focus the design process and these were nice and clear for this endeavour in regards to the technology to be used.

  1. Cost – needs to be as close to free to operate as possible
  2. Usability – the barriers to someone using the apps built on the stack need to be as low as possible
  3. Administrative simplicity – spinning up an instance of the stack needs to be as straightforward as possible
  4. Open Source tech is to be favoured over proprietary (where feasible)
  5. Flexibility to extend into native mobile versions in future

Serverless Cloud Provider

Being a huge fan of the serverless paradigm and the opportunities it unlocks (not least from a cost perspective), I knew I would be leaning heavily on the serverless capabilities of one of the major cloud providers to build out my solution. Already being well versed in what Azure has to offer (and admittedly knowing precious little about the alternatives), the choice was easy.

Realtime comms on a shoe-string

Getting an effectively free static website up is laughably easy these days. So many great services like now and glitch exist in addition to the big-name cloud providers, who all have free tiers, but for me the catch was that real-time communication (via websockets or similar) was never covered on those platforms (or at least not at the scale I needed to potentially support). The same goes for the many real-time “pub-sub-as-a-service” 3rd party offerings – their free tiers all stop at 100 simultaneous connections (which is probably way more than I need, but still a constraint that I could well bump up against at some point). Plus, as per point 3 above, I don’t particularly want people to have to sign up to multiple cloud providers to get this all working.

The solution: the distributed magic that is IPFS pubsub.

Interplanetary File System (IPFS)

I won’t go into any detail about what IPFS is here – the official website is as good an intro as you could want. Suffice it to say, it is a distributed storage platform for web content, including publish-subscribe realtime messaging capabilities (an experimental feature). You don’t have to pay to host an IPFS node, and through some voodoo that I have no idea about, you can even host one in a web browser (a decent modern one). So basically, we get ephemeral (don’t expect it to stick around in the IPFS network) realtime comms for free.

N.B. In the next installment in this series of posts, I’ll be persisting the realtime messages to Azure Table Storage… stay tuned…

Reactive Web Framework – Cycle.js

I’m a relative newcomer to React (love React Native, less enamoured with the original web version), but having worked with RxJS on a large Angular project recently, I was keen to see if I could find a framework that does a better job of managing state (yes, React has Redux and various side-effect plugins, but they all seemed a little tacked-on) and uses the power of observables. I found one in Cycle.js.

Cycle has state management as its raison d’être, or at least, the way that it is structured kind of relegates state to a by-product of how data flows through your application, as opposed to treating it as an object you need to explicitly maintain.

IPFS Driver

Cycle uses an abstraction called a “driver” to handle any external effects (incoming or outgoing) in your application. The primary ones are drivers for interacting with the virtual DOM and making HTTP requests, but there are many others, including myriad community efforts. I couldn’t find one for IPFS, so I created one to wrap the ipfs-pubsub-room helper library.

Here we set up the incoming (listening for new messages being broadcast) and outgoing (broadcasting messages input by the user) observable streams. There are more API methods for ipfs-pubsub-room (e.g. for sending private messages to individual peers), but for this example we’ll stick with the basics.

A quick note on importing modules: I wasn’t able to get IPFS-JS to bundle cleanly on my Windows machine, so I’m just loading it from a CDN in a script tag. That works just fine for the purposes of this demo.

Show me the demo already!

Alright, you’ve been patient – here you go: https://ipfs-cycle-chat.azurewebsites.net/

N.B. Use Chrome for best results – haven’t seen this work in IE or Edge.

I won’t paste the rest of the code here, but you can check out the full sample at https://github.com/balbany/ipfs-cycle-demo. Note that the main logic of the Cycle.js app is lifted pretty much wholesale from CreaturePhil’s cyclejs-chat socket.io sample (cheers Phil!) and the styling is pilfered from here.

Just the beginning…

As alluded to before, this is just the first in a planned series of posts as I build out my little project that I’m calling Community AppStack for now. Please check back for the next installment (or follow me on Twitter @BruceAlbany) and let me know if you liked this one in the comments!

OWIN and Microsoft Account Authentication Bug

When accessing email claims using OWIN and a Microsoft Account, you might encounter situations where your application does not receive an email claim when you expect one. We’ve experienced this issue most commonly with hotmail.com and outlook.com logins. Note below that the email claim is not populated in the left screen (a hotmail.com account) but is available for my kloud.com.au email address.

In this situation, there is a bug in the OWIN Microsoft Account identity provider where a value isn’t correctly checked for null. You can see the detail here: https://github.com/aspnet/AspNetKatana/issues/107.

This bug will be fixed in the 4.0.0 version of the Microsoft.Owin.Security.MicrosoftAccount NuGet package, but if you need a fix now, you can add a new class called “MSProvider” to your project and include it as part of the configuration options for your MicrosoftAccountAuthentication:

This will solve the issue, and your claims will now always return email addresses.

Show me the code!

The code for this demo is hosted on github: https://github.com/bsmithb2/MicrosoftAccountEmailIssue. You will need to create a Converged App in the Microsoft portal (https://apps.dev.microsoft.com/#/appList) and then add the Client Id and Secret to your Microsoft Account configuration above. You’ll also need to add a “Web Platform” about halfway down with these details and then save:

microsoft account.PNG

 

Exchange Online – Mapi over Http Transition

Microsoft has announced that from 31st October 2017, Outlook clients using the RPC over HTTP protocol to connect to Office 365 will no longer be supported; only MAPI over HTTP clients will work from that point on. This announcement has left many administrators wondering: what exactly does this mean for my organization? What actions are required to avoid any business impact? Is it time to update Outlook clients, and to what level? And last but not least, how can I verify that all necessary steps have been taken to ensure business as usual? Let’s try to answer these questions one by one.

So what does this announcement mean for an organization? In simple words, any Outlook client which still uses RPC over HTTP to connect to Office 365 will stop working and hence needs to be updated where possible. This means that Outlook 2007 and earlier versions are retired and will no longer be able to connect to Exchange Online. This requires the following actions from Office 365 administrators:

  1. Update Outlook 2007 or earlier versions to the latest Outlook version.
  2. For Outlook 2010 and higher, the minimum required updates are:

  • Office 2016: the December 8, 2015 update (build 16.0.6568.20xx for subscription installs; 16.0.4312.1001 for MSI)
  • Office 2013: Office 2013 Service Pack 1 (SP1) and the December 8, 2015 update (build 15.0.4779.1002)
  • Office 2010: Office 2010 Service Pack 2 (SP2) and the December 8, 2015 update (build 14.0.7164.5002)

Note: The December 8, 2015 updates for Office are listed in Microsoft Knowledge Base article 3121650: “December 8, 2015, update for Office”. It is recommended that you keep Outlook clients updated with the most recent product updates, as several MAPI over HTTP issues have been fixed since December 2015.

Additionally, you may have to make sure that Outlook clients aren’t using a registry key to disable MAPI over HTTP. For more information, see Microsoft Knowledge Base article 2937684: “Outlook 2013 or 2016 may not connect using MAPI over HTTPs as expected”.

Now, while you make every effort to meet the deadline and take all the necessary steps to update your environment, you also need assurance that the job is complete. A simple report from Office 365 about the Outlook clients connecting to your tenant should do the job. Let’s get this report by following the steps below.

To retrieve this information, enable owner access auditing for each mailbox, and then query the audit log for the Outlook version that’s used to log on to the mailbox. To do this, follow these steps:

  1. Connect to Exchange Online using remote PowerShell.
  2. Enable mailbox auditing for the owner, either for one mailbox or for all mailboxes (hedged sketches follow the note below).

Note: Mailbox auditing may take up to 24 hours to get enabled.
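Hedged sketches of those commands, using the standard Exchange Online cmdlets:

# For one mailbox: audit owner logons
Set-Mailbox -Identity "user@contoso.com" -AuditOwner MailboxLogin -AuditEnabled $true

# For all mailboxes
Get-Mailbox -ResultSize Unlimited | Set-Mailbox -AuditOwner MailboxLogin -AuditEnabled $true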

  3. Search the audit log, either for one mailbox or for all mailboxes with the results exported to a .csv file (sketches below).
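Again as hedged sketches; the Outlook client version is reported in the ClientInfoString property of the audit entries:

# For one mailbox
Search-MailboxAuditLog -Identity "user@contoso.com" -LogonTypes Owner -ShowDetails |
    Where-Object { $_.ClientInfoString -like "*Outlook*" } |
    Select-Object MailboxOwnerUPN, LastAccessed, ClientInfoString

# For all mailboxes, exporting the results to a .csv file
Get-Mailbox -ResultSize Unlimited | ForEach-Object {
    Search-MailboxAuditLog -Identity $_.Identity -LogonTypes Owner -ShowDetails
} | Where-Object { $_.ClientInfoString -like "*Outlook*" } |
    Select-Object MailboxOwnerUPN, LastAccessed, ClientInfoString |
    Export-Csv -Path "C:\temp\OutlookClientVersions.csv" -NoTypeInformation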

The above PowerShell commands will produce a comprehensive report which you can use as a guide to ensure that all your clients are ready for the switch to MAPI over HTTP. Here is a sample output.

Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager

Update: An element of this solution details checking passwords online (using the Have I Been Pwned API). Troy explains succinctly in his blog post announcing the pwned passwords list why this is a bad idea. If you are looking to implement the concept I detail in this post, then we strongly recommend using a local copy of the pwned password list.
This post here details using a local SQL database to hold the Pwned Passwords datasets and the change to the Management Agent to query the SQL DB instead of the HIBP API.

Introduction

Last week (3 Aug 2017) Troy Hunt released a sizeable list of Pwned Passwords: 320 million, in fact. I strongly encourage you to have a read about the details here.

Troy also extended his HaveIBeenPwned API to include the ability to query as to whether a password has been pwned and is likely to be used in a brute force attack.

Microsoft provides a premium license feature in Azure Active Directory (Azure Active Directory Identity Protection) whereby leaked credential sets are checked and admins are alerted via reports. But what if you aren’t licensed for the Azure AD Premium features, or you want something a little more customised, and you have Microsoft/Forefront Identity Manager? That is what this post covers.

Overview

The following diagram looks a little more complicated than it really is. The essence is that password changes can come from a multitude of different scenarios. Using Microsoft’s Password Change Notification Service (PCNS) we can capture password changes and send them to Microsoft Identity Manager, so that we can synchronise the password to other systems or, as in this use case, look up whether the user’s new password is on the pwned password list.

This post will cover creating the Pwned Password FIM/MIM Management Agent and flagging a boolean attribute in the MIM Service to indicate whether a user’s password is on the pwned password list or not.

PwnedPassword Overview.png

Prerequisites

There are a few components to this solution depicted above. You will need:

  • FIM/MIM Synchronisation Server
    • with an Active Directory Management Agent configured (most likely you will have a Projection Rule on this MA to get your users into the Metaverse)
    • not shown in the diagram above you will also need the MIM MA configured to sync users from the Metaverse to the MIM Service
  • FIM/MIM Service and Portal Server (can be on the same server as above)
  • Microsoft Password Change Notification Service (PCNS). This MS PFE PCNS implementation document covers it quite well, and you will need:
    • the PCNS AD Schema Extension installed
    • the PCNS AD Password Filters installed on all your (writeable) Domain Controllers
    • PCNS configured to send password changes to your FIM/MIM Sync Server
  • Granfeldt PowerShell Management Agent (that we will use to check users passwords against the Have I Been Pwned pwned password API)
  • Lithnet Resource Management PowerShell Module
    • download it from here and install it on your FIM/MIM Server as the Pwned Password MA will use this module to populate the Pwned Password Status for users in the MIM Service
  • Windows Management Framework (PowerShell) 5.x

Getting Started with the Granfeldt PowerShell Management Agent

If you don’t already have it go get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM and this blog post is no different.

Four items of note for this solution:

  • You must have an Export.ps1 file. Even though we’re not doing exports on this MA, the PS MA configuration requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA are for an account that has permissions to the on-premises Active Directory from which we will import users to join to our Metaverse, so we can pass password changes to this Management Agent
  • The same account as above will also need to have permissions in the MIM Service as we will be using the account to update the new attribute we are going to create
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use dir /x to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\PwnedPWD

With the Granfeldt PowerShell Management Agent downloaded from Codeplex and installed on your FIM/MIM Server we can create our Pwned Password Management Agent.

Creating the Pwned PowerShell Management Agent

On your FIM/MIM Sync Server create a new sub-directory under your Extensions directory, e.g. PwnedPWD in C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions, then create a sub-directory under PwnedPWD named Debug: C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions\PwnedPWD\Debug

Copy the following scripts (schema.ps1, import.ps1, export.ps1, password.ps1) and put them into the C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions\PwnedPWD directory

Schema.ps1

The following schema.ps1 script sets up the object class (user) and a handful of attributes from Active Directory that will be useful for logic we may implement in the future based on users’ password status.
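A minimal sketch of such a schema script, assuming the Granfeldt PSMA conventions (note properties named “Name|Type”, with sAMAccountName as the anchor; the attribute list is an assumption based on the attributes discussed in this post):

$obj = New-Object -TypeName PSCustomObject
$obj | Add-Member -MemberType NoteProperty -Name "Anchor-sAMAccountName|String" -Value "accountName"
$obj | Add-Member -MemberType NoteProperty -Name "objectClass|String" -Value "user"
$obj | Add-Member -MemberType NoteProperty -Name "distinguishedName|String" -Value "dn"
$obj | Add-Member -MemberType NoteProperty -Name "pwdLastSet|String" -Value "0"
$obj | Add-Member -MemberType NoteProperty -Name "adminCount|String" -Value "0"
$obj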

Import.ps1

The import.ps1 script connects to Active Directory to import our AD users into the Pwned Password Management Agent so we can join them to the Metaverse objects already present for users on the Active Directory Management Agent. The user needs to be joined to the Metaverse on our new MA so they are addressable as a target for PCNS.
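Likewise, a hedged sketch of the import, assuming the Granfeldt PSMA import conventions (each user emitted as a hashtable whose keys match the schema attributes):

PARAM ($Username, $Password, $OperationType)

Import-Module ActiveDirectory

Get-ADUser -Filter * -Properties pwdLastSet, adminCount | ForEach-Object {
    $obj = @{}
    $obj.Add("[DN]", $_.DistinguishedName)
    $obj.Add("objectClass", "user")
    $obj.Add("sAMAccountName", $_.SamAccountName)
    $obj.Add("distinguishedName", $_.DistinguishedName)
    $obj.Add("pwdLastSet", [string]$_.pwdLastSet)
    $obj.Add("adminCount", [string]$_.adminCount)
    $obj
}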

Export.ps1

As detailed earlier, we aren’t using an Export script in this solution.

Password.ps1

The password script receives password changes as they occur from Active Directory, looks up the Have I Been Pwned API to see if the new password is present on the list, and sets a boolean attribute for the pwned password status in the MIM Service.
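A heavily hedged sketch of the core logic: the 2017-era HIBP v2 endpoint returned HTTP 200 for a pwned password and 404 otherwise; the MIM Service attribute name (PwnedPassword), the service address and the parameter signature are assumptions, and per the update at the top of this post a production version should query a local copy of the list instead:

PARAM ($Username, $Password, $Action, $OldPassword, $NewPassword)

Import-Module LithnetRMA
Set-ResourceManagementClient -BaseAddress "http://mimservice:5725"

# SHA1-hash the new password and query the (online) pwned password API
$sha1  = New-Object System.Security.Cryptography.SHA1Managed
$bytes = [System.Text.Encoding]::UTF8.GetBytes($NewPassword)
$hash  = ([System.BitConverter]::ToString($sha1.ComputeHash($bytes))).Replace("-", "")

try {
    # 200 OK means the password is on the pwned list
    Invoke-RestMethod -Uri "https://haveibeenpwned.com/api/v2/pwnedpassword/$hash" | Out-Null
    $pwned = $true
} catch {
    # 404 means it was not found on the list
    $pwned = $false
}

# Flag the user in the MIM Service (assumes $Username maps to AccountName)
$user = Get-Resource -ObjectType Person -AttributeName AccountName -AttributeValue $Username
$user.PwnedPassword = $pwned
Save-Resource $user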

On your FIM/MIM Sync Server, from the Synchronisation Manager, select Create Management Agent from the right-hand pane. Select PowerShell from the list of Management Agents. Select Next.

PwnedPwdMA1a

Give your MA a Name and a Description. Select Next. 

PwnedPwdMA1b

Provide the 8.3-style path to your Schema.ps1 script copied in the steps earlier. Provide an AD sAMAccountName and password that also has permissions to the MIM Service as detailed in the prerequisites. Select Next.

PwnedPwdMA2

Provide the paths to the Import.ps1, Export.ps1 and Password.ps1 scripts copied in earlier. Select Next.

PwnedPwdMA3

Select Next.

PwnedPwdMA4

Select the user checkbox. Select Next.

PwnedPwdMA5

Select all the attributes in the list. Select Next.

PwnedPwdMA6

Select Next.

PwnedPwdMA7

Select Next.

PwnedPwdMA8

Create a Join Rule for your environment, e.g. sAMAccountName => person:Accountname. Select Next.

PwnedPwdMA9

Create an import flow rule for user:pwdLastSet => person:pwdLastSet. Select Next.

PwnedPwdMA10

Select Next.

PwnedPwdMA11

Ensure that Enabled password management is selected, then select Finish.

PwnedPwdMA12

With the Pwned Password MA created and configured we need to create at least a Stage (Full Import) and Full Sync Run Profiles and execute them to bring in the users from AD and join them to the Metaverse.

This should be something you’re already familiar with.

RunProfiles

When running the synchronisation we get the joins we expect. In my environment PwdLastSet was configured to sync to the MIM Service, hence the outbound sync on the MIM Service MA.

Sync and join

MIM Service Configuration

In the MIM Service we will create a custom boolean attribute that will hold the pwned status of the users password.

Schema

Connect to your MIM Portal Server with Administrator privileges and select Schema Management from the right hand side menu.

Select All Attributes then select New

Provide an attribute name (System name) and a Display Name with a Data Type of Boolean. Provide a Description and select Finish

Select Submit

Search for User in Resource Types then select the User checkbox from the search results and select Binding then select New.

In the Resource Type box type User then click the validate field button (the one with the green tick). In the Attribute Type box type Pwned Password then click the validate field button (the one with the green tick). Select Finish

Select  Submit

Configure the Active Directory MA to send passwords to the Pwned Passwords MA

On your existing Active Directory Management Agent select Properties. Select Configure Directory Partitions then under Password Synchronization enable the checkbox Enable this partition as a password synchronization source. Select Targets and select your newly created Pwned Password MA. Select Ok then Ok again.

Password Target2.PNG

Testing the End to End Pwned Password Check

Now you should have configured;

  • PCNS including installation of the Active Directory filters
  • The existing Active Directory Management Agent as a Password Source
  • The existing Active Directory Management Agent to send password change events to the Pwned Password MA

Select a user in Active Directory Users and Computers, right click the user and select Reset Password.

ChangePassword1

I first provided a password I know is on the pwned list: Password1

ChangePassword2

ChangePassword3

With PCNS Logging enabled on the MIM Sync Server I can see the password event come through.

ChangePassword4

Checking the Pwned Password MA debug log, we can see the logging for the user whose password we changed, and that when it was checked against Have I Been Pwned the password was flagged as pwned.

Note: If you implement the solution in a production environment obviously remove the password from being logged. 

ChangePassword5

In the MIM Portal, search for and locate the user whose password we just changed.

ChangePassword7.PNG

Select the user. Scroll to the bottom and select Advanced View. Select the Extended Attributes tab. Scroll down and we can see the Pwned Password shows as checked.

ChangePassword6

Now repeat the process with a password that isn’t in the Pwned Password list. After changing the password in Active Directory Users and Computers, the password went through its sync path. The log shows the password isn’t on the list.

ChangePassword8

And the MIM Portal shows the Boolean value for Pwned Password is now not selected.

ChangePassword9

Summary

Using PCNS and FIM/MIM we can check whether our Active Directory users are using passwords that aren’t in the Pwned Password list.

What we can then do, if their password is on the Pwned Password list, is a number of things based on the security policy and even the type of user. You’ll notice that I’ve included additional attributes in the MA that we can flow through the Metaverse and into the MIM Service that may help with some of those decisions (such as adminCount, which indicates if the user is an Administrator).

Potentially for Admin users we could create a workflow in the MIM Service that forces their account to change password on next logon. For other users we could create a workflow that sends them a notification letting them know that they should change their password.

Either way, we now have visibility of the state of our users’ passwords. Big thanks to Troy for adding Pwned Passwords to his Have I Been Pwned API.

 

Reiterating: an element of this solution details checking passwords online (using the Have I Been Pwned API). Troy explains succinctly in his blog post announcing the pwned passwords list why this is a bad idea. If you are looking to implement the concept I detail in this post, then we strongly recommend using a local copy of the pwned password list.

Don’t Make This Cloud Storage Mistake

In recent months a number of high-profile data leaks have occurred which have made millions of customers’ personal details easily available to anyone on the internet. Three recent cases (GOP, Verizon and WWE) involved incorrectly configured Amazon S3 buckets (Amazon was not at fault in any way).

Even though it is unlikely anyone will simply stumble across the URLs to public cloud storage such as Amazon S3 buckets or Azure Storage accounts, they are surprisingly easy to find using the search engine SHODAN, which scours the internet for hidden URLs. This gives hackers, or anyone else, access to an enormous number of internet-connected devices and services, from cloud storage to web-cams.

Better understanding of the data that you wish to store in the Cloud can help you make a more informed decision on the method of storage.

Data Classification

Before you even look at storing your company or customer data in the Cloud you should be classifying your data in some way. Most companies classify their data according to sensitivity. This process then gives you a better understanding of how your data should be stored.

One possible method is to divide data into several categories, based upon the impact to the business in the event of an unauthorised release:

  • Public: intended for release; poses no risk to the business.
  • Low business impact (LBI): data or information that does not contain Personally Identifiable Information (PII) or cover sensitive topics, but would generally not be intended for public release.
  • Medium business impact (MBI): information about the company that might not be sensitive on its own, but when combined or analysed could provide competitive insights, or some PII that is not of a sensitive nature but should not be released for privacy protection.
  • High business impact (HBI): anything covered by regulatory constraints, involving reputational matters for the company or individuals, anything that could be used to provide competitive advantage, anything that has financial value that could be stolen, or anything that could violate sensitive privacy concerns.

Next, you should set policy requirements for each category of risk. For example, LBI might require no encryption. MBI might require encryption in transit. HBI, in addition to encryption in transit, would require encryption at rest.

The Mistake – Public vs Private Cloud Storage

When classifying the data to be stored in the Cloud the first and most important question is “Should this data be available to the public, or just to individuals within the company?”

Once you have answered this question you can configure your cloud storage accordingly, whether in Amazon S3, Azure Storage accounts or whichever provider you are using. One of the most important options available when configuring cloud storage is whether access is set to “Private” or “Public”. This is where the mistake was made in the cases mentioned earlier: in all of them the Amazon S3 buckets were set to “Public”, yet the data stored within them was of a private nature.

The problem here is the understanding of the term “Public” when configuring cloud storage. Some may think that “Public” means the data is available to all individuals within your company; however, this is not the case. “Public” means that your data is available to anyone who can reach your cloud storage URL, whether they are within your company or a member of the general public.
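While the headline cases involved S3, the same check applies on the Azure side. As an illustrative sketch using the classic Azure.Storage cmdlets (the account name and key are placeholders):

# Connect to the storage account
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<key>"

# List any containers that allow anonymous (public) access
Get-AzureStorageContainer -Context $ctx | Where-Object { $_.PublicAccess -ne "Off" }

# Lock a container down to private access
Set-AzureStorageContainerAcl -Name "customer-data" -Permission Off -Context $ctx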

This setting is of vital importance; once you are sure it is correct, you can then turn to other features that may be required, such as encryption in transit and encryption at rest.

This is a simple error with a big impact, one that can cost your company or customers a lot of money and, even more importantly, their reputation.

Brisbane O365 Saturday

On the weekend I had the pleasure of presenting at the O365 Saturday Brisbane event. Link below.

http://o365saturdayaustralia.com/

In my presentation I demonstrated a new feature within Azure AD that allows the automatic assignment of licences using Dynamic Groups. So what’s cool about this feature?

Well, if you have a well-established organisational structure within your on-premises AD, and you are synchronising the attributes needed to identify that structure, then you can have licences automatically assigned to users based on their job type, department or even location. The neat thing about this is that it drastically simplifies the management of your licence allocation, which until now has largely been done through complicated scripting processes, either through an enterprise IAM system or just through the service desk when users are set up for the first time. A hedged sketch of creating such a dynamic group follows.
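As a hedged example of the kind of group that drives this, using the AzureADPreview module of the time (the display name, mail nickname and membership rule are illustrative; the licence itself is then attached to the group via group-based licensing in the Azure portal):

Import-Module AzureADPreview
Connect-AzureAD

# Dynamic security group whose membership follows the synced department attribute
New-AzureADMSGroup -DisplayName "Licensing - Sales" `
    -Description "Drives automatic licence assignment for the Sales department" `
    -MailEnabled $false -MailNickname "licensing-sales" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"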

You can view or download the presentation by clicking on the following link.

O365 Saturday Automate Azure Licencing

Throughout the Kloud blog there is an enormous amount of material featuring new and innovative ways to use Azure AD to advance your business. If you would like a more detailed post on this topic then please leave a comment below and I’ll put something together.