Be SOLID: Uncle Bob

We have discussed STUPID issues in programming. Shared modules and tight coupling lead to dependency issues in design. The SOLID principles address those dependency issues in OOP.

The SOLID acronym was popularized by Robert Martin as a set of generic design principles dictated by common sense in OOP. They mainly address dependencies and tight coupling. We will discuss the SOLID principles one by one and try to relate each of them to the underlying problems and how they try to solve them.

          S – Single Responsibility Principle – SRP

“There should not be more than one reason for something to exist.”


As the name suggests, a module/class etc. should not have more than one responsibility in a system. The more a piece of code is doing, or trying to do, the more fragile, rigid, and difficult to (re)use it gets. Have a look at the code below:

 

class EmployeeService
{
    // constructor(s)

    public void Add(Employee emp)
    {
        // ...
        using (var db = new SomeDatabase()) // some database class/service, a SINGLETON, or a factory call like Database.Get()
        {
            try
            {
                db.Insert(emp);
                db.Commit();
                // more code
            }
            catch (Exception)
            {
                db.Rollback();
            }
        }
        // ...
    }
}

 

All looks good, yes? Yet there are genuine issues with this code. The EmployeeService has too many responsibilities. Database handling should not be a responsibility of this class. Because of the baked-in database handling details, the EmployeeService has become rigid and harder to reuse or extend, for multiple databases for example. It’s like a Swiss Army knife: it looks convenient, but it is rigid and inextensible.

Let’s KISS (keep it simple, stupid) it a bit.

 

// ...
Database db = null;

public EmployeeService()
{
    // ...
    db = Database.Get(); // or injected via a constructor, etc.
    // ...
}

public void Add(Employee emp)
{
    // ...
    db.Add<Employee>(emp);
    // ...
}

 

We have removed the database handling details from the EmployeeService class. This makes the code a bit cleaner and more maintainable. It also ensures that everything is doing its job and its job only. Now the class cares less about how the database is handled and more about Employee, its true purpose.

Also note that SRP does not mean a structure/class will only have a single function/property etc. It means a piece of code should only have one responsibility related to the business: an entity service should only be concerned with handling entities and not anything else, like database handling, logging, or handling sub-entities directly (such as saving an employee address explicitly).

SRP might increase the total number of classes in a module, but it also increases their simplicity and reusability. This means that in the long run the codebase remains flexible and adaptive to change. Singletons are often regarded as the opposite of SRP because they quickly become God objects doing too many things (the Swiss Army knife) and introducing too many hidden dependencies into a system.

           O – Open/Closed Principle – OCP

“Once done, don’t change it, extend it”


A class in a system must not be open to any changes, except bug fixing. That means we should not introduce changes to a class to add new features/functionality to it. Now this does not sound practical, because every class evolves with the business it represents. The OCP says that to add new features, classes must be extended (open) instead of modified (closed). This introduces abstractions as part of a business need to add new features into classes, instead of just a nice-to-have.

Developing our classes in the form of abstractions (interfaces/abstract classes) provides multiple-implementation flexibility and greater reusability. It also ensures that once a piece of code is tested, it does not go through another cycle of code changes and retesting for new features. Have a look at the EmployeeService class from above.

 

class EmployeeService
{
    public void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ...
    }
    // ...
}

 

Now suppose a new requirement arrives: an email must be sent to the Finance department if the newly added employee is a contractor, say. We would have to make changes to this class. Let’s redo the service for the new feature.

 

public void Add(Employee emp)
{
    // ...
    db.Add<Employee>(emp);

    if (emp.Type == EmployeeType.Contractor)
    {
        // ... send email to finance
    }
    // ...
}
// ...

 

The above, though it seems straightforward and a lot easier, is a code smell. It introduces rigid code and hardwired conditions into a class, which would demand retesting all existing use cases related to EmployeeService on top of the new ones. It also makes the code cluttered and harder to manage and reuse as the requirements evolve over time. Instead, what we could do is be closed to modification and open to extension.

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

 

And then:

 

class EmployeeService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // .. add the employee
    }
}

class ContractorService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }
}

 

Of course, we could have an abstract employee service class instead of the interface, with a virtual Add method holding the add-the-employee functionality; that would be DRY.
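As a rough sketch of that abstract-class variant (the name EmployeeServiceBase and the override shown here are illustrative, not part of the original example):

abstract class EmployeeServiceBase
{
    public virtual void Add(Employee employee)
    {
        // the shared add-the-employee logic lives here once (DRY)
    }
}

class ContractorService : EmployeeServiceBase
{
    public override void Add(Employee employee)
    {
        base.Add(employee); // reuse the tested base implementation
        // then send email to finance
    }
}

ContractorService extends the behaviour without touching the tested base implementation, which is exactly the open-for-extension, closed-for-modification idea.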

Now instead of a single EmployeeService class, we have separate classes that are extensions of the employee service abstraction. This way we can keep adding new features to the service without needing to retest any existing ones. It also removes the unnecessary clutter and rigidity from the code and makes it more reusable.

          L – Liskov Substitution Principle – LSP

“If your duck needs batteries, it’s not a duck”

So Liskov worded the principle as:

               If for each object obj1 of type S, there is an object obj2 of type T, such that for all programs P defined in terms of T, the behaviour of P is unchanged when obj1 is substituted for obj2 then S is a subtype of T

Sounds too complex? I know. Let us say that in English instead.

If we have a piece of code using an object of class Parent, then the code should not have any issues, if we replace Parent object with an object of its Child, where Child inherits Parent.


Take the same employee service code and try to add a new feature to it: get leaves for an employee.

 

interface IEmployeeService
{
    void Add(Employee employee);
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // .. add the employee
    }

    public int GetLeaves(int employeeId)
    {
        // calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }

    public int GetLeaves(int employeeId)
    {
        // throw some exception
    }
}

 

Since the ContractorService does not have any business need to calculate leaves, the GetLeaves method just throws a meaningful exception. Makes sense, right? Now let’s see the client code using these classes, with IEmployeeService as the parent and EmployeeService and ContractorService as its children.

 

IEmployeeService employeeService = new EmployeeService();
IEmployeeService contractorService = new ContractorService();

employeeService.GetLeaves(<id>);
contractorService.GetLeaves(<id2>);

 

The second GetLeaves call will throw an exception at RUNTIME. At this level, it does not mean much. So what? Just don’t invoke GetLeaves if it’s a ContractorService. OK, let’s modify the client code a little to highlight the problem even more.

 

List<IEmployeeService> employeeServices = new List<IEmployeeService>();
employeeServices.Add(new EmployeeService());
employeeServices.Add(new ContractorService());
CalculateMonthlySalary(employeeServices);
// ..

void CalculateMonthlySalary(IEnumerable<IEmployeeService> employeeServices)
{
    foreach (IEmployeeService eService in employeeServices)
    {
        int leaves = eService.GetLeaves(<id>); // this will break on the second iteration
        // ... bla bla
    }
}

 

The above code will break the moment it tries to invoke GetLeaves in that loop the second time. The CalculateMonthlySalary knows nothing about ContractorService and only understands IEmployeeService, as it should. But its behaviour changes (it breaks) when a child of IEmployeeService (ContractorService) is used, at runtime. Let’s solve this:

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

interface ILeaveService
{
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService, ILeaveService
{
    public void Add(Employee employee)
    {
        // .. add the employee
    }

    public int GetLeaves(int employeeId)
    {
        // calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }
}

 

Now the client code to calculate leaves will be

 

void CalculateMonthlySalary(IEnumerable<ILeaveService> leaveServices)

 

and voilà, the code is as smooth as it gets. The moment we try to do the following:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new ContractorService()); // compile-time error
CalculateMonthlySalary(leaveServices);

 

It will give us a compile-time error, because the method CalculateMonthlySalary is now expecting an IEnumerable of ILeaveService to calculate leaves of employees; we have a List of ILeaveService, but ContractorService does not implement ILeaveService.

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new EmployeeService());
CalculateMonthlySalary(leaveServices);

 

LSP helps fine-grain the business requirements and operational boundaries of the code. It also helps identify the responsibilities of a piece of code and what kind of resources it needs to do its job. This strengthens SRP, enhances decoupling and reduces useless dependencies (CalculateMonthlySalary does not care about the whole IEmployeeService anymore, and only depends upon ILeaveService).

Breaking down responsibilities can sometimes be hard for complex business requirements, and LSP also tends to increase the number of isolated code units (classes, interfaces etc.). But its benefit becomes apparent in simple and carefully designed structures where tight coupling and duplication are avoided.

          I – Interface Segregation Principle – ISP

“Don’t give me something I don’t need”


In LSP, we saw that the method CalculateMonthlySalary had no use for the complete IEmployeeService; it only needed a subset of it, the GetLeaves method. This, in its basic form, is the ISP. It asks us to identify the resources needed by a piece of code to do its job and then provide only those resources to it, nothing more. ISP finds the real dependencies in code and eliminates unwanted ones. This greatly helps in decoupling code, helps in recognising code dependencies, and ensures code isolation and security (CalculateMonthlySalary does not have any access to the Add method anymore).

ISP advocates module customization based on OCP: identify requirements and isolate code by creating smaller abstractions, instead of modifications. Also, when we fine-grain pieces of code using ISP, the individual components become smaller. This increases their testability, manageability and reusability. Have a look:

 

class Employee
{
    // ...
    string Id;
    string Name;
    string Address;
    string Email;
    // ...
}

void SendEmail(Employee employee)
{
    // .. uses the Name and Email properties only
}

 

The above is a violation of ISP. The method SendEmail has no use for the class Employee; it only uses a name and an email to send out emails, yet it is dependent on the Employee class definition. This introduces unnecessary dependencies into the system, though it seems small at first. Now the SendEmail method can only be used for Employees and nothing else; no reusability. Also, it has access to all the other features of Employee without any requirement; security and isolation. Let’s rewrite it.

 

void SendEmail(string name, string email)
{
    // .. sends an email for whatever needs it
}

 

Now the method does not care about any changes in the Employee class; the dependency is identified and isolated. It can be reused and tested with anything, instead of just Employee. In short, don’t be misled by the word Interface in ISP. It has its uses everywhere.

          D – Dependency Inversion Principle – DIP

“To exist, I did not depend upon my sister, and my sister not upon me. We both depended upon our parents”

Remember the example we discussed in SRP, where we introduced the Database class into the EmployeeService.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService()
    {
        // ...
        db = Database.Get(); // or a constructor etc.; the EmployeeService is dependent upon Database
        // ...
    }

    public void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ...
    }
}

 

The DIP dictates that:

               No high-level modules (EmployeeService) should be dependent upon any low-level modules (Database), instead both should depend upon abstractions. And abstractions should not depend upon details, details should depend upon abstractions.


The EmployeeService here is a high-level module that is using, and dependent upon, a low-level module, Database. This introduces a hidden dependency on the Database class. It also increases the coupling between EmployeeService and the Database. The client code using EmployeeService now must have access to the Database class definition, even though it’s not exposed to it and does not know, apparently, that the Database class/service/factory/interface exists.

Also note that it does not matter whether we use a singleton, a factory, or a constructor to get a Database instance. Inverting a dependency does not mean replacing its constructor with a service/factory/singleton call, because then the dependency is just transformed into another class/interface while remaining hidden. One change in Database.Get, for example, could have unforeseen implications on the client code using the EmployeeService without it knowing. This makes the code rigid and tightly coupled to details, difficult to test and almost impossible to reuse.

Let’s change it a bit.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService(Database database)
    {
        // ...
        db = database;
        // ...
    }

    public void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ...
    }
    // ...
}

 

We have moved the acquisition of the Database to a constructor argument (because the scope of the db variable is class level). Now EmployeeService is not dependent upon the details of Database instantiation. This solves one problem, but the EmployeeService is still dependent upon a low-level module (Database). Let’s change that:

 

IDatabase db = null;

public EmployeeService(IDatabase database)
{
    // ...
    db = database;
    // ...
}

 

We have replaced the Database with an abstraction (IDatabase). EmployeeService does not depend upon, nor care about, any details of Database anymore. It only cares about the abstraction IDatabase. The Database class will implement the IDatabase abstraction (details depend upon abstractions).

Now the actual database implementation can be replaced at any time (for testing) with a mock or with any other database details, as per the requirements, and the service will not be affected.
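A minimal sketch of what that could look like (the IDatabase members and the FakeDatabase test stand-in shown here are assumptions for illustration, not part of the original code):

interface IDatabase
{
    void Add<T>(T entity);
    // ...
}

class Database : IDatabase // the detail depends upon the abstraction
{
    public void Add<T>(T entity)
    {
        // real persistence details live here
    }
}

class FakeDatabase : IDatabase // stand-in for tests, no real database involved
{
    public void Add<T>(T entity)
    {
        // record the call so a test can assert on it
    }
}

// e.g. in a test: var service = new EmployeeService(new FakeDatabase());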

We have covered, in some detail, the SOLID design principles, with a few examples to understand the underlying problems and how they can be solved using these principles. As can be seen, most of the time a SOLID principle is just common sense: not making STUPID mistakes in design and giving the business requirements some thought.

 

Set your eyes on the Target!


So in my previous posts I’ve discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms this could be your local Active Directory (authentication domain), but it could be anything, and with the adoption of cloud services these target systems are often what drives the need for robust IAM services.

Something that we are often asked as IAM consultants is why. Why should corporate applications be integrated with an IAM service at all? It’s a valid question. Sometimes, depending on what the system is and what it does, integrating with an IAM system isn’t a practical solution, but more often there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Central authentication services (if supported)

Requirements

With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system will have individual requirements. Some could be as simple as just needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be derived largely from the application vendor, and to a lesser extent the application owners and business requirements.

IAM systems are for the most part extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.

This could be as simple as requiring additional attributes that are not standard within both the IAM system and your source systems, or it could be the way in which you want the IAM system to interact with the application, i.e. utilising web services and building custom Management Agents to connect and synchronise data sets between the two.

At the root of all this, when using an IAM system you have a constant flow of data that is all stored within the “Vault”. This helps ensure that any changes to a user are flowed to all systems, and not just the phone book. It also ensures that any changes are tracked through governance processes that have been established and implemented as part of the IAM system. Changes made to a user’s identity information within a target application can be easily identified, to the point of saying this change was made on this date/time because a change to this person’s data occurred within the HR system at that time.

Integration

Most IAM systems will have management agents or connectors (the terms vary depending on the vendor you use) built for the typical “out of box” systems, and these will for the most part satisfy the requirements of many, so you don’t tend to have to worry so much about that. But if you have “bespoke” systems that have been developed and built up over the years for your business, this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM service the custom management agents would be done using an Extensible Connectivity Management Agent (ECMA). How you would build and develop management agents for FIM or MIM is quite an extensive discussion and something that would be better off in a separate post.

One of the “sticky” points here is that most of the time, in order to integrate applications, you need to have elevated access to the application’s back end to be able to populate data to and pull data from the application. The way this is done through any IAM system is through specific service accounts that are restricted to only perform the functions needed for the application.

Authentication and SSO

Application integration tightens the security of the data, with access to applications controlled through various mechanisms; authentication plays a large part in the IAM process.

During the provisioning process, passwords are usually set when an account is created. This is either through using random password generators (preferred), or setting a specific temporary password. When doing this though, it’s always done with the intent of the user resetting their password when they first logon. The Self Service functionality that can be introduced to do this enables the user to reset their password without ever having to know what the initial password was.

Depending on the application, separate passwords might be created that need to be managed. In most cases IAM consultants/architects will try and minimise this to not being required at all, but this isn’t always the case. In these situations, the IAM system has methods to manage this as well. In the Microsoft space this is something that can be controlled through password synchronisation using the “Password Change Notification Service” (PCNS). This basically means that if a user changes their main password, that change can be propagated to all the systems that have separate passwords.

Most applications today use standard LDAP authentication to provide access to their application services, which makes the password management process much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials which you control. Examples of these standard protocols are the likes of SAML, OAuth, WS-Fed/WS-Trust and many more.

There is a growing shift in the industry for these to be cloud services, the likes of Microsoft Azure Active Directory, or any number of other services that are available today.
The obvious benefit of SSO is that you have a single username and password to remember. It also greatly reduces the security risk your business carries from an auditing and compliance perspective: having a single authentication directory can help reduce the overall exposure your business has to compromise from external or internal threats.

Well, that about wraps it up. IAM for the most part is an enabler: it enables your business to be adequately prepared for the consumption of cloud services and cloud enablement, which can help reduce the overall IT spend your business has over the coming years. But one thing I think I’ve highlighted throughout this particular series is requirements, requirements, requirements… repetitive I know, but for IAM so crucially important.

If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.

 

What’s a DEA?

In my last post I made a reference to a “Data Exchange Agreement” or DEA, and I’ve since been asked a couple of times about this. So I thought it would be worth while writing a post about what it is, why it’s of value to you and to your business.

So what’s a DEA? Well, in simple terms it’s exactly what the name states: an agreement that defines the parameters in which data is exchanged between Service A and Service B, Service A being the producer of attributes X and Service B the consumer. Now I’ve intentionally used a vague example here, as a DEA is used amongst many services in business and/or government and is not specifically related to IT or IAM services. But if your business adopts a controlled data governance process, it can play a pivotal role in how IAM services are implemented and adopted throughout the entire enterprise.

So what does a DEA look like? Well, in an IAM service it’s quite simple: you specify your “Source” and your “Target” services. An example of this could be the following:

Source
ServiceNow
AurionHR
PROD Active Directory
Microsoft Exchange
Target
PROD Active Directory
Resource Active Directory Domain
Microsoft Online Services (Office 365)
ServiceNow

As you can see this only tells you where the data is coming from and where it’s going to; it doesn’t go into any of the details around what data is being transported and in which direction. A separate section in the DEA details this, and an example is provided below:

MIM | Flow | ServiceNow | Source | User Types | Notes
accountName | –> | useraccountname | MIM | All |
employeeID | –> | employeeid | AurionHR | All |
employeeType | –> | employeetype | AurionHR | All |
mail | <– | email | Microsoft Exchange | All |
department | –> | department | AurionHR | All |
telephoneNumber | –> | phone | PROD AD | All |
o365SourceAnchor | –> | ImmutableID | Resource Domain | All |
employeeStatus | –> | status | AurionHR | All |
dateOfBirth | –> | dob | AurionHR | CORP Staff | yyyy-MM-dd
division | –> | region | AurionHR | CORP Staff |
firstName | –> | preferredName | AurionHR | CORP Staff |
jobTitle | –> | jobtitle | AurionHR | CORP Staff |
positionNumber | –> | positionNumber | AurionHR | CORP Staff |
legalGivenNames | <– | firstname | ServiceNow | Contractors |
localtionCode | <– | location | ServiceNow | Contractors |
ManagerID | <– | manager | ServiceNow | Contractors |
personalTitle | <– | title | ServiceNow | Contractors |
sn | <– | sn | ServiceNow | Contractors |
department | <– | department | ServiceNow | Contractors |
employeeID | <– | employeeid | ServiceNow | Contractors |
employeeType | <– | employeetype | ServiceNow | Contractors |

This might seem like a lot of detail, but this is actually only a small section of what would be included in a DEA of this type, as the whole purpose of this agreement is to define what attributes are managed by which systems and going to which target systems, and, as many IAM consultants can tell you, it would be substantially more than what’s provided in this example. And this is just an example for a single system; this is something that’s done for all applications that consume data related to your organisation’s staff members.

One thing that you might also notice is that I’ve highlighted two attributes in the sample above in bold. Why, you might ask? Well, the point of including this was to highlight data sets that are considered “sensitive”; within the DEA you would specify these as classified sensitive data, with specific conditions around that data set. This is something your business would define and word appropriately, but it could be as simple as a section stating the following:

“Two attributes are classed as sensitive data in this list and cannot be reproduced, presented or distributed under any circumstances”

One challenge that is often confronted within any business is application owners wanting “ownership” of the data they consume. Utilising a DEA provides clarity over who owns the data and what your applications can do with the data they consume, removing any uncertainty.

To summarise this post, the point of this wasn’t to provide you with a template or example DEA to use; it was to help explain what a DEA is, what it’s used for, and what parts of it can look like. No DEA is the same, and providing you with a full example DEA is only going to make you end up recreating it from scratch anyway. But it is intended to help you with understanding what is needed.

As with any of my posts if you have any questions please pop a comment or reach out to me directly.

 

The Vault!


The vault, or more precisely the “Identity Vault”, is a single-pane view of all the collated data of your users, from the various data source repositories. This sounds like a lot of jargon but it’s quite simple really.

In the diagram below we look at a really simple attribute, firstName (givenName within AD).

As you will see, at the centre is the attribute, and branching off it are all the connected systems, e.g. Active Directory. What this doesn’t illustrate very well is the specific data flow: where this data is coming from and where it’s going to. This comes down to import and export rules, as well as any precedence rules that you need to put in place.

The Identity Vault, or central data repository, provides a central store of an identity’s information, aggregated from a number of sources. It’s also able to identify the data that exists within each of the connected systems from which it collects identity information, or to which it provides information as a target system. Sounds pretty simple, right?

Further to all the basics described above, each object in the Vault has a unique identifier, or an anchor. This is a unique value that is automatically generated when the user is created, to ensure that regardless of what happens to the user’s details throughout the lifecycle of the user object, we are able to track the user and update changes accordingly. This is particularly useful when you have multiple users with the same name, for example; it avoids the wrong person being updated when changes occur.

Attribute | User 1 | User 2
FirstName | John | John
LastName | Smith | Smith
Department | Sales | Sales
UniqueGUID | 10294132 | 18274932

So the table above shows the simplest form of a user’s identity profile, whereas a complete identity profile will consist of many more attributes, some of which may be custom attributes for specific purposes, as in the example demonstrated below:

Attribute | ContributingMA | Value
AADAccountEnabled | AzureAD Users | TRUE
AADObjectID | AzureAD Users | 316109a6-7178-4ba5-b87a-24344ce1a145
accountName | MIM Service | jsmith
cn | PROD CORP AD | Joe Smith
company | PROD CORP AD | Contoso Corp
csObjectID | AzureAD Users | 316109a6-7178-4ba5-b87a-24344ce1a145
displayName | MIM Service | Joe Smith
domain | PROD CORP AD | CORP
EXOPhoto | Exchange Online Photos | System.Byte[]
EXOPhotoChecksum | Exchange Online Photos | 617E9052042E2F77D18FEFF3CE0D09DC621764EC8487B3517CCA778031E03CEF
firstName | PROD CORP AD | Joe
fullName | PROD CORP AD | Joe Smith
mail | PROD CORP AD | joe.smith@contoso.com.au
mailNickname | PROD CORP AD | jsmith
o365AccountEnabled | Office365 Licensing | TRUE
o365AssignedLicenses | Office365 Licensing | 6fd2c87f-b296-42f0-b197-1e91e994b900
o365AssignedPlans | | Deskless, MicrosoftCommunicationsOnline, MicrosoftOffice, PowerAppsService, ProcessSimple, ProjectWorkManagement, RMSOnline, SharePoint, Sway, TeamspaceAPI, YammerEnterprise, exchange
o365ProvisionedPlans | | MicrosoftCommunicationsOnline, SharePoint, exchange
objectSid | PROD CORP AD | AQUAAAAAAAUVAAAA86Yu54D8Hn5pvugHOA0CAA==
sn | PROD CORP AD | Smith
source | PROD CORP AD | WorkDay
userAccountControl | PROD CORP AD | 512
userPrincipalName | PROD CORP AD | jsmith@contoso.com.au

So now we have a more complete picture of the data, where it’s come from and how we connect that data to a user’s identity profile. We can start to look at how we synchronise that data to any and all managed targets. It’s very important to control this flow though; to do so we need to have in place strict governance controls over what data is to be distributed throughout the environment.

One practical approach to managing this is by using a data exchange agreement. This helps the organisation have a more defined understanding of what data is being used by what application and for what purpose. It also helps define strict controls on what the application owners can do with the data being consumed, for example strictly prohibiting the application owners from sharing that data with anyone without the written consent of the data owners.

In my next post we will start to discuss how we then manage target systems, how we use the data we have to provision services and manage the user information through what’s referred to as synchronisation rules.

As with all my posts, if you have any questions please drop me a note.

 

Protect Your Business and Users from Email Phishing in a Few Simple Steps

The goal of email phishing attacks is to obtain personal or sensitive information from a victim, such as credit card numbers, passwords or usernames, for malicious purposes. That is to say, to trick a victim into performing an unwitting action aimed at stealing sensitive information from them. This form of attack is generally conducted by means of spoofed emails or instant messaging communications which try to deceive their target as to the nature of the sender and purpose of the email they’ve received. An example of which would be an email claiming to be from a bank asking for credential re-validation in the hope of stealing them by means of a cloned website.

Some examples of email Phishing attacks.

Spear phishing

Phishing attempts directed at specific individuals or companies have been termed spear phishing. Attackers may gather personal information about their target to increase their probability of success. This technique is by far the most successful on the internet today, accounting for 91% of attacks. [Wikipedia]

Clone phishing

Clone phishing is a type of phishing attack whereby a legitimate, and previously delivered, email containing an attachment or link has had its content and recipient address(es) taken and used to create an almost identical or cloned email. The attachment or link within the email is replaced with a malicious version and then sent from an email address spoofed to appear to come from the original sender. It may claim to be a resend of the original or an updated version to the original. This technique could be used to pivot (indirectly) from a previously infected machine and gain a foothold on another machine, by exploiting the social trust associated with the inferred connection due to both parties receiving the original email. [Wikipedia]

Whaling

Several phishing attacks have been directed specifically at senior executives and other high-profile targets within businesses, and the term whaling has been coined for these kinds of attacks. In the case of whaling, the masquerading web page/email will take a more serious executive-level form. The content will be crafted to target an upper manager and the person’s role in the company. The content of a whaling attack email is often written as a legal subpoena, customer complaint, or executive issue. Whaling scam emails are designed to masquerade as a critical business email, sent from a legitimate business authority. The content is meant to be tailored for upper management, and usually involves some kind of falsified company-wide concern. Whaling phishers have also forged official-looking FBI subpoena emails, and claimed that the manager needs to click a link and install special software to view the subpoena. [Wikipedia]

Staying ahead of the game from an end user perspective

  1. Take a very close look at the sender’s email address.

Phishing emails will generally use an address that looks genuine but isn’t (e.g. accounts@paypals.com), or try to disguise the email’s real sender with what looks like a genuine address but isn’t, using HTML trickery (see below).

  2. Is the email addressed to you personally?

Companies with whom you have valid accounts will always address you formally by means of your name and surname. Formulations such as ‘Dear Customer’ are a strong indication the sender doesn’t know you personally, and such emails should perhaps be avoided.

  3. What web address is the email trying to lure you to?

Somewhere within a phishing email, often surrounded by links to completely genuine addresses, will be one or more links to the means by which the attacker is to steal from you. In many cases a web site that looks genuine enough, however there are a number of ways of confirming it’s validity.

  4. If you’re unsure, hover your cursor over any link you receive in an email before you click it; it will reveal the real destination, which is sometimes hidden behind deceptive HTML. Also look at the address very closely. The deceit may be obvious or well hidden in a subtle typo (e.g. accouts@app1e.com).

a. Be wary of URL redirection services such as bit.ly which hide the ultimate destination of a link.

b. Be wary of very long URLs. If in doubt, do a Google search for the root domain.

  5. Does the email contain poor grammar and spelling mistakes?

Many times the quality of a phishing email isn’t up to the general standard of a company’s official communications. Look for spelling mistakes, barbarisms, grammatical errors and odd characters in the email as a sign that something may be wrong.

 

Mitigating the impact of Phishing attacks against an organization

  1. Implement robust email and web access filtering.

  2. User education.

  3. Deploy an antivirus endpoint protection solution.

  4. Deploy Phishing attack aware endpoint protection software.

 

Where’s the source!

In this post I will talk about data (aka the source)! In IAM there’s really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality then the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions data is going to come from multiple sources. Most IAM professionals will agree the best place to source the majority of your user data is going to be the HR system. Why? Well, simply put, it’s where all important information about the individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR systems are going to be updated to reflect the change to your job title, as well as any direct report changes which may come as a result of this sort of change.
I also said that data can, and normally will, come from multiple sources. A typical example of this: generally speaking, temporary and contract staff will not be managed within the central HR system; simply put, the HR team don’t care about contractors. So where do they come from, and how are they managed? For smaller organisations this is usually something that’s manually done in AD with no real governance in place. For larger organisations this is less ideal; it can be a nightmare for the IT team to manage and can create quite a large security risk to the business, so a primary data source for contractors becomes necessary. What this is is entirely up to the business and what works for them. I have seen a standard SQL web application being used to populate a database, I’ve seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is through the MIM Portal).
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email, phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try and keep it simple for the purpose of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository, or “Identity Vault”. This keeps an accurate record of the information coming from multiple sources to compile the user’s complete identity profile. From this we can then start to manage what information is flowed to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it can be updated in the user’s profile in any attached system that is managed through the enterprise IAM services.
In my next post I will go into the finer details of the central repository, or the “Identity Vault”.

So in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or the Identity Vault. This will go into detail around how we can set precedence on data from specific systems, to ensure that if there is a difference in the data coming from the different sources only the highest precedence will be applied. We will also discuss how we augment the data sets to ensure that we are only collecting the information necessary for the management of that user and the applications in use within your business.

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

Security Vulnerability Revealed in Azure Active Directory Connect


The existence of a new and potentially serious privilege escalation and password reset vulnerability in Azure Active Directory Connect (AADC) was recently made public by Microsoft.

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnectsync-whatis

Fixing the problem can be achieved by means of an upgrade to the latest available release of AADC 1.1.553.0.

https://www.microsoft.com/en-us/download/details.aspx?id=47594

The Microsoft security advisory qualifies the issue as important and was published on Technet under reference number 4033453:

https://technet.microsoft.com/library/security/4033453.aspx#ID0EN

Azure Active Directory Connect, as we know, takes care of all operations related to the synchronisation of identity information between on-premises environments and Azure Active Directory in the cloud. The tool is also the recommended successor to Azure AD Sync and DirSync.

Microsoft were quoted as saying…

The update addresses a vulnerability that could allow elevation of privilege if Azure AD Connect Password writeback is mis-configured during enablement. An attacker who successfully exploited this vulnerability could reset passwords and gain unauthorized access to arbitrary on-premises AD privileged user accounts.

When setting up the permission, an on-premises AD Administrator may have inadvertently granted Azure AD Connect with Reset Password permission over on-premises AD privileged accounts (including Enterprise and Domain Administrator accounts)

In this case, as stated by Microsoft, the risk consists of a situation where a malicious administrator resets the password of an Active Directory user using “password writeback”, allowing the administrator in question to gain privileged access to a customer’s on-premises Active Directory environment.

Password writeback allows Azure Active Directory to write passwords back to an on-premises Active Directory environment. It helps simplify the process of setting up and managing complicated on-premises self-service password reset solutions, and it also provides a rather convenient cloud-based means for users to reset their on-premises passwords.

Users may look for confirmation of their exposure to this vulnerability by checking whether the feature in question (password writeback) is enabled and whether AADC has been granted reset password permission over on-premises AD privileged accounts.

A further statement from Microsoft on this issue read…

If the AD DS account is a member of one or more on-premises AD privileged groups, consider removing the AD DS account from the groups.

CVE reference number CVE-2017-8613 was attributed to the vulnerability.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8613

Using ADFS on-premises MFA with Azure AD Conditional Access

With the recent announcement of General Availability of the Azure AD Conditional Access policies in the Azure Portal, it is a good time to reassess your current MFA policies particularly if you are utilising ADFS with on-premises MFA; either via a third party provider or with something like Azure MFA Server.

Prior to conditional MFA policies being possible, when utilising on-premises MFA with Office 365 and/or Azure AD, the MFA rules were generally enabled on the ADFS relying party trust itself.  The main limitation with this of course is the inability to define different MFA behaviours for the various services behind that relying party trust.  That is, within Office 365 (Exchange Online, SharePoint Online, Skype for Business Online etc.) or through different Azure AD Apps that may have been added via the app gallery (e.g. ServiceNow, SalesForce etc.).  In some circumstances you may have been able to define some level of granularity utilising custom authorisation claims, such as bypassing MFA for ActiveSync and legacy authentication scenarios, but that method was reliant on special client headers or the authentication endpoints that were being used and hence was quite limited in its use.

Now with Azure AD Conditional Access policies, the definition and logic of when to trigger MFA can, and should, be driven from the Azure AD side given the high level of granularity and varying conditions you can define. This doesn’t mean though that you can’t keep using your on-premises ADFS server to perform the MFA, you’re simply letting Azure AD decide when this should be done.

In this article I’ll show you the method I like to use to ‘migrate’ from on-premises MFA rules to Azure AD Conditional Access.  Note that this is only applicable for the MFA rules for your Azure AD/Office 365 relying party trust.  If you are using ADFS MFA for other SAML apps on your ADFS farm, they will remain as is.

Summary

At a high level, the process is as follows:

  1. Configure Azure AD to pass ‘MFA execution’ to ADFS using the SupportsMFA parameter
  2. Port your existing ADFS MFA rules to an Azure AD Conditional Access (CA) Policy
  3. Configure ADFS to send the relevant claims
  4. “Cutover” the MFA execution by disabling the ADFS MFA rules and enabling the Azure AD CA policy

The ordering here is important, as by doing it like this, you can avoid accidentally forcing users with a ‘double MFA’ prompt.

Step 1:  Using the SupportsMFA parameter

The crux of this configuration is the use of the SupportsMFA parameter within your MSOLDomainFederationSettings configuration.

Setting this parameter to True will tell Azure AD that your federated domain is running an on-premises MFA capability and that whenever it determines a need to perform MFA, it is to send that request to your STS IDP (i.e. ADFS) to execute, instead of triggering its own ‘Azure Cloud MFA’.

To perform this step is a simple MSOL PowerShell command:

Set-MsolDomainFederationSettings -domain yourFederatedDomain.com -SupportsMFA $true

Pro Tip:  This setting can take up to 15-30 mins to take effect.  So make sure you factor this into your change plan.  If you don’t wait for this to kick in before cutting over, your users will get ‘double MFA’ prompts.

Step 2:  Porting your existing MFA Rules to Azure AD Conditional Access Policies

There’s a whole article in itself talking about what Azure AD CA policies can do nowadays, but for our purposes let’s use the two most common examples of MFA rules:

  1. Bypass MFA for users that are a member of a group
  2. Bypass MFA for users on the internal network*

Item 1 is pretty straightforward; just ensure your Azure AD CA policy has the following:

  • Assignment – Users and Groups:
    • Include:  All Users
    • Exclude:  Bypass MFA Security Group  (simply reuse the one used for ADFS if it is synced to Azure AD)

MFABypass1

Item 2 requires the use of the Trusted Locations feature.  Note that at the time of writing, this feature is still the ‘old’ MFA Trusted IPs feature hosted in the Azure Classic Portal.   Note*:  If you are using Windows 10 Azure AD Join machines this feature doesn’t work.  Why this is the case will be an article in itself, so I’ll add a link here when I’ve written that up.

So within your Azure AD CA policy do the following:

  • Conditions – Locations:
    • Include:  All Locations
    • Exclude:  All Trusted IPs

MFABypass2.png

Then make sure you click on Configure all trusted locations to be taken to the Azure Classic Portal.  From there you must set Skip multi-factor authentication for requests from federated users on my intranet.

MFABypass3.png

This effectively tells Azure AD that a ‘trusted location’ is any authentication request that comes in with an InsideCorporateNetwork claim.

Note:  If you don’t use ADFS or an IDP that can send that claim, you can always use the actual ‘Trusted IP addresses’ method.

Now you can define exactly which Azure AD apps you want MFA to be enabled for, instead of all of them as you had originally.

MFABypass7.png

Pro Tip:  If you are going to enable MFA on All Cloud Apps to start off with, check the end of this article for some extra caveats you should consider, or else you’ll start breaking things.

Finally, to make this Azure AD CA policy actually perform MFA, set the access controls:

MFABypass8.png

For now, don’t enable the policy just yet as there is more prep work to be done.

Step 3:  Configure ADFS to send all the relevant claims

So now that Azure AD is ready for us, we have to configure ADFS to actually send the appropriate claims across to ‘inform’ it of what is happening or what it is doing.

The first is to make sure we send the InsideCorporateNetwork claim so Azure AD can apply the ‘bypass for all internal users’ rule.  This is well documented everywhere, but the short version is: within your Microsoft Office 365 Identity Platform relying party trust in ADFS, add a new Issuance Transform Rule to pass through the Inside Corporate Network claim:

MFABypass4

Fun fact:   The Inside Corporate Network claim is automatically generated by ADFS when it detects that the authentication was performed on the internal ADFS server, rather than through the external ADFS proxy (i.e. WAP).  This is why it’s a good idea to always use an ADFS proxy as opposed to simply reverse proxying your ADFS.  Without it you can’t easily tell whether it was an ‘internal’ or ‘external’ authentication request (plus it’s more secure).

The other important claim to send through is the authnmethodsreferences claim.  Now you may already have this if you were following some online Microsoft Technet documentation when setting up ADFS MFA.  If so, you can skip this step.

This claim is what is generated when ADFS successfully performs MFA.  So think of it as a way for ADFS to tell Azure AD that it has performed MFA for the user.

MFABypass6

Step 4: “Cutover” the MFA execution

So now that everything is prepared, the ‘cutover’ can be performed by doing the following:

  1. Disable the MFA rules on the ADFS Relying Party Trust
    Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $null
  2. Enable the Azure AD CA Policy

Now if it all goes as planned, what should happen is this:

  1. User attempts sign into an Azure AD application.  Since their domain is federated, they are redirected to ADFS to sign in.
  2. User will perform standard username/password authentication.
    • If internal, this is generally ‘SSO’ with Windows Integrated Auth (WIA).  Most importantly this user will get an ‘InsideCorporateNetwork’ = true claim
    • If external, this is generally a Forms Based credential prompt
  3. Once successfully authenticated, they will be redirected back to Azure AD with a SAML token.  Now is actually when Azure AD will assess the CA policy rules and determine whether the user requires MFA or not.
  4. If they do, Azure AD actually generates a new ADFS sign in request, this time specifically stating via the wauth parameter to use multipleauthn. This will effectively tell ADFS to execute MFA using its configured providers
  5. Once the user successfully completes MFA, they will go back to Azure AD with this new SAML token that contains a claim telling Azure AD that MFA has now been performed and subsequently lets the user through

This is what the above flow looks like in Fiddler:

MFABypass9.png

This is what your end-state SAML token should look like as well:

MFABypass10

The main takeaway is that step 4 is the new auth flow that is introduced by moving MFA evaluation into Azure AD.  Prior to this, step 2 would have simply performed both username/password authentication and MFA in the same instance, rather than over two requests.

Extra Considerations when enabling MFA on All Cloud Apps

If you decide to take an ‘exclusion’ approach to MFA enforcement for Cloud Apps, be very careful with this.  In fact you’ll even see Microsoft giving you a little extra warning about this.

MFABypass12

The main difference with taking this approach compared to just doing MFA enforcement at the ADFS level is that you are now enforcing MFA on all cloud identities as well!  This may very well unintentionally break some things, particularly if you’re using ‘cloud identity’ service accounts (e.g. for provisioning scripts or the like).  One thing that will definitely break is the AADConnect account that is created for directory synchronisation.

So at a very minimum, make sure you remember to add the On-Premises Directory Synchronization Service Account(s) into the exclusion list for your Azure AD MFA CA policy.

The very last thing to call out is that some Azure AD applications, such as the Intune Company Portal and Azure AD Powershell cmdlets, can cause a ‘double ADFS prompt’ when MFA evaluation is being done in Azure AD.   The reason for this and the fix is covered in my next article Resolving the ‘double auth’ prompt issue with Azure AD Conditional Access MFA and ADFS so make sure you check that out as well.

 

The Art Of War – Is your Tech Department Combat Ready?

Over the course of a series of articles, I plan to address strategic planning and why it’s becoming more important in the technology-fuelled world we live in. It’s critical that an organisation’s response to shifting external events is measured & appropriate. The flow-on effects of change to the nature and structure of the IT department have to be addressed. Is a defensive or attack formation needed for what lies ahead?

In this first post, I’ll introduce what is meant by strategy and provide a practical planning process. In future posts, I’ll aim to address subsets of the processes presented in this post.

Operational vs Strategic

I often see technology departments so focussed on operations, they begin to lose sight of what’s coming on the horizon & how their business will change as a result. What’s happening in the industry? What are competitors up to? What new technologies are relevant? These questions are typically answered through strategic planning.

This can, however, be quite challenging for an IT function with an operational mindset (focussing on the now with some short-term planning). This typically stems from IT being viewed as an operational cost i.e. used to improve internal processes that don’t explicitly improve the organisation’s competitive position or bring it closer to fulfilling its strategic vision.

So, what exactly is “strategy”? A quick crash course suggests it aims to answer four key questions:

  • Where are we now? Analyse the current position and likely future scenarios.
  • Where are we going? Develop a plausible and sustainable desired future state (goals & objectives)
  • How will we get there? Develop strategies and programs to get there
  • How will we know we are on track? Develop KPI’s, monitoring and review processes to make sure you get where you intended in the most effective and efficient way

Strategy has plagued influential minds throughout history. It’s interesting to note this style of thinking developed through war-times as it answers similar questions e.g. How are we going to win the war?

While it is sometimes hard to see the distinction, an organisation that confuses operational applications for strategic uses will be trapped into simply becoming more efficient at the same things it does now.

Why does IT need to think strategically?
The Menzies Research Centre has released a Statement of National Challenges1 citing the need for Australian organisations to embrace digital disruption. The report highlights that Australia has been fortunate to have had 25 years of continued economic growth, which has bred complacency about organisations’ capability to constantly explore new value-creating opportunities.

“Australian businesses cannot wish these disruptive technologies away, and nor should they do so, as they represent an opportunity to be part of a reshaping of the global economy and should be embraced.” – Mr Tony Shepherd, Commission of Audit Chair 2014 & Shepherd Review Author 2017

We also see the Australian Innovation System Report 20162 providing a comparison of “innovation-active” businesses versus “non-innovation-active” businesses, which offers some interesting insights:

Innovation

Australia’s Chief Economist, Mark Cully is calling for organisations to look at ways to reinvent themselves through the application of new technologies. He argues persistent innovators significantly outgrow other businesses in terms of sales, value added, employment and profit growth.

Here comes the problem: in a technology-fuelled world, organisations will struggle to innovate with an operationally focussed technology department. We believe there’s a relationship between an organisation’s ability to compete & its strategic use of technology. Operational IT departments typically lack agility, the ability to influence enterprise strategy & a clear sense of purpose; all of which are required to innovate, adapt & remain relevant.

I want to be clear that technology isn’t the only prerequisite for innovation; other elements include culture, creativity & leadership. These aren’t addressed in this post, perhaps topics for another blog.

What does a strategic planning process look like?
In today’s rapidly evolving landscape, a technology-focussed strategic planning activity should be short, sharp & deliver high value. This approach helps ensure the impact of internal and external forces is properly accounted for in the planning and estimation of ICT activity over the planning period, typically 3-5 years. Below is our approach:

Process

  1. Looking out involves checking the industry, what competitors are doing and what technology is relevant.
  2. Looking in focusses on understanding the current environment, the business context & what investments have already been made.
  3. These two areas are then aligned to the organisation’s strategy and relevant business unit plans, which in turn inform the technology areas of focus, including architecture, operating model & governance.
  4. From an execution perspective, investment portfolios are established, from which business cases are developed. Risk is also factored in.
  5. Measuring & monitoring ensure the strategy is being executed correctly and provide the intelligence & data on which to base strategic revisions as needed.

There’s value in hiring external assistance to guide you through the process. They’ll inject new thinking, approach your challenges from new perspectives and give a measured assessment of the status quo. This comes from being immersed in this stuff on a daily basis, just like you are with your organisation. When these two areas are combined, the timing & sequence of the plans is laid down to ensure your tech department is ‘combat ready’!

1 The Shepherd Review 2017: Statement of National Challenges – https://www.menziesrc.org/images/PDF/TheShepherdReview_StatementOfNationalChallenges_March2017web.pdf

2 Australian Innovation System Report 2016 –  https://industry.gov.au/Office-of-the-Chief-Economist/Publications/Documents/Australian-Innovation-System/2016-AIS-Report.pdf

Azure AD Connect – Upgrade Errors

 

 

Azure AD Connect is the latest release of the Azure AD sync service, previously known as DirSync. It comes with some new features which make it even more efficient and useful in a hybrid environment. Despite the many new features, the primary purpose of the application remains the same, i.e. to sync identities from your local (on-premises) AD to Azure AD.

Recently I upgraded an AD sync service to AD Connect, and during the install process I ran into a few issues which I felt are not widely discussed or posted on the web, yet are real-world scenarios people can face during AD Connect installation and configuration. Let’s discuss them below.

 

Installation Errors

The very first error I stumbled upon was a sync service install failure. The installation process started smoothly: the Visual C++ package was installed and the SQL database created without any issue, but during the synchronization service installation the process failed and the message below was displayed.

Issue:

Event Viewer logs suggested that the installation process failed because the install package could not install the required DLL files, the primary suggested reason being a corrupt install package.

 

sync install error

 

Actions Taken:

Though I was not convinced, for the sake of ruling this out I downloaded a new AD Connect install package and reinstalled the application, but unfortunately it failed at the same point.

Next, I switched from my domain account to the service account being used to run the AD sync service on the current server. This account had higher privileges than mine, but unfortunately the result was the same.

Next, I started reviewing the application logs located at the following path.

 

At first look I found access denied errors logged. What was blocking the installation files? None other than the AV. I immediately contacted the security administrator and requested that AV scanning be stopped temporarily. The result was a smooth install on the next attempt.
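As an aside, if the AV in question happens to be Windows Defender and you’re allowed to manage it yourself, a temporary path exclusion can be a lighter-touch alternative to stopping scanning altogether. Below is a rough sketch only; the paths are placeholders, so adjust them to wherever your install media and the AAD Connect program files actually live.

# Assumes Windows Defender is the AV in play; the paths below are examples only
Add-MpPreference -ExclusionPath "C:\Temp\AADConnectInstall"
Add-MpPreference -ExclusionPath "C:\Program Files\Microsoft Azure AD Sync"

# Remove the exclusions again once the install has completed successfully
Remove-MpPreference -ExclusionPath "C:\Temp\AADConnectInstall"
Remove-MpPreference -ExclusionPath "C:\Program Files\Microsoft Azure AD Sync"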

I have shared below some of the related errors I found in the log files.


Configuration Errors:

One of the important configurations in AD Connect is the Azure AD account with global administrator permissions. If you have created a new account for this purpose and have not yet logged on with it to change the first-time password, then you may be faced with the error below.

badpassword

 

Nothing to panic about. All you need to do is log into the Azure portal using this account, change the password, and then enter the credentials with the newly set password into the configuration console.
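Before re-running the wizard, it can also be worth confirming from PowerShell that the account is no longer stuck on its temporary password. A minimal sketch using the MSOnline module; if the password still has to be changed at first sign-in, the connection attempt should fail with an error rather than succeed.

# Prompt for the global administrator credentials and attempt a sign-in
$cred = Get-Credential
Connect-MsolService -Credential $cred

# A successful connection can be confirmed with a simple tenant query
Get-MsolCompanyInformation | Select-Object DisplayName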

Another error related to the Azure AD sync account was encountered by one of my colleagues, Lucian, and he has beautifully narrated the whole scenario in one of his cool blogs here: Azure AD Connect: Connect Service error

 

Other Errors and Resolutions:

Before I conclude, I would like to share some more scenarios which you might face during install/configuration and post-install. My fellow Kloudies have done their best to explain them. Have a look, and happy AAD connecting.

 

Proxy Errors

Configuring Proxy for Azure AD Connect V1.1.105.0 and above

 

Sync Errors:

Azure AD Connect manual sync cycle with powershell, Start-ADSyncSyncCycle (a quick example is included below)

 

AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded

 

Azure Active Directory Connect Export profile error: stopped-server-down
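For quick reference, the manual sync cycle linked above under Sync Errors is kicked off like this on the AAD Connect server, where the ADSync PowerShell module is available:

# Delta sync: processes only the changes since the last cycle
Start-ADSyncSyncCycle -PolicyType Delta

# Initial (full) sync: re-evaluates all in-scope objects
Start-ADSyncSyncCycle -PolicyType Initial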