EU GDPR – is it relevant to Australian companies?

The European Union's new General Data Protection Regulation (GDPR) imposes new rules on organisations that offer goods and services to people in the EU, or that collect and analyse data tied to EU residents, no matter where the organisation is located or where the data processing happens. GDPR comes into force in May 2018.

If your customers reside in the EU, then GDPR applies to you, whether you have a presence in the EU or not. The internet lets you interact with customers wherever they are, and GDPR applies to anyone who deals with EU residents, wherever they are.

And the term personal data covers everything from IP addresses, to cookie data, to submitted forms, to CCTV footage, and even to a photo of a landscape that can be tied to an identity. Then there is sensitive personal data, such as ethnicity, sexual orientation and genetic data, which has enhanced protections.

And for the first time there are very strong penalties for non-compliance: the maximum fine for a GDPR breach is €20M or 4% of worldwide annual turnover, whichever is greater. The maximum fine can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core Privacy by Design concepts.

Essentially GDPR states that organisations must:

  • provide clear notice of data collection
  • outline the purposes the data will be used for
  • collect only the data needed for those purposes
  • ensure the data is kept only as long as needed for those purposes
  • disclose whether the data will be shared within or outside the EU
  • protect personal data using appropriate security
  • ensure individuals can access, correct and erase their personal data, and can stop an organisation processing their data
  • notify authorities of personal data breaches.

Specific criteria for companies required to comply are:

  • A presence in an EU country
  • No presence in the EU, but it processes personal data of European residents
  • More than 250 employees
  • Fewer than 250 employees, but the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data. That effectively means almost all companies.

What does this mean in real terms for typical large companies? Well…

  • Apple turned over about USD$230B in 2017, so the maximum fine applicable to Apple would be USD$9.2B
  • CBA turned over AUD$26B in 2017 and so their maximum fine would “only” be AUD$1B
  • Telstra turned over AUD$28.2B in 2017, the maximum fine would be AUD$1.1B.

Ouch.
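For the arithmetic behind those numbers, here's a quick sketch (a simplification: the regulation sets the floor at €20M, and currency conversion between EUR, USD and AUD is ignored here for illustration):

using System;

class GdprMaxFine
{
    // Maximum fine: the greater of 20M (euros) or 4% of worldwide annual turnover.
    static decimal MaxFine(decimal annualTurnover) =>
        Math.Max(20_000_000m, annualTurnover * 0.04m);

    static void Main()
    {
        Console.WriteLine(MaxFine(230_000_000_000m)); // Apple:   9.2B
        Console.WriteLine(MaxFine(26_000_000_000m));  // CBA:     1.04B
        Console.WriteLine(MaxFine(28_200_000_000m));  // Telstra: 1.128B
    }
}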

The GDPR legislation won't impact Australian businesses, will it? What if an EU resident gets a Telstra phone or a CBA credit/travel card while on holiday in Australia? Or what if your organisation has local regulatory data retention requirements that appear, on the surface at least, to be at odds with GDPR obligations…

I would get legal advice if your organisation provides services that may be used by EU residents.

In a recent PwC pulse survey, "US Companies ramping up General Data Protection Regulation (GDPR) budgets", 92% of respondents said GDPR is one of their top priorities.

Technology alone cannot make an organisation GDPR compliant; there must be policy, process and people changes to support GDPR. But technology can greatly assist organisations that need to comply.

Microsoft has invested in providing assistance to organisations impacted by GDPR.

Office 365 Advanced Data Governance enables you to intelligently manage your organisation's data with classifications. Classifications can be applied automatically; for example, if a document contains German PII covered by GDPR, it can be marked as confidential when saved. Once a document is marked, the data can be protected, whether that means encrypting the file, assigning permissions based on user IDs, or adding watermarks indicating sensitivity.

An organisation can choose to encrypt its data at rest in Office 365, Dynamics 365 or Azure with its own encryption keys; alternatively, a Microsoft-generated key can be used. Customer-managed keys sound like a no-brainer, but the customer must have an HSM (Hardware Security Module) and a proven key management capability.

Azure Information Protection enables an organisation to track and control marked data. Distribution of data can be monitored, and access and access attempts logged. This information can allow an organisation to revoke access from an employee or partner if data is being shared without authorisation.

Azure Active Directory (AD) can provide risk-based conditional access controls. Can the user's credentials be found in public data breaches? Is it an unmanaged device? Are they trying to access a sensitive app? Are they a privileged user? Have they just completed an impossible trip (logged in five minutes ago from Australia, and the current attempt is from somewhere that is a 12-hour flight away)? Based on the assessed risk of the user and the session, access can be granted, multi-factor authentication (MFA) can be requested, or access can be limited or denied.

Microsoft Enterprise Mobility + Security (EMS) can protect your cloud and on-premises resources. Advanced behavioural analytics are the basis for identifying threats before data is compromised: Advanced Threat Analytics (ATA) detects abnormal behaviour and provides advanced threat detection for on-premises resources; Azure AD provides protection from identity-based attacks and cloud-based threat detection; and Cloud App Security detects anomalies for cloud apps. Cloud App Security can discover which cloud apps are being used, control access, and support compliance efforts with regulatory mandates such as the Payment Card Industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley (SOX), the General Data Protection Regulation (GDPR) and others. Cloud App Security can apply policies to apps from Microsoft or other vendors, such as Box, Dropbox, Salesforce, and more.

Microsoft provides a set of compliance and security tools to help organisations meet their regulatory obligations. To reiterate: policy, process and people changes are required to support GDPR.

Please discuss your obligations with a legal professional to clarify what the EU GDPR may require of your organisation. Remember, May 2018 is only a few months away.

Azure ARM architecture pattern: a DMZ design with a firewall appliance

I'm in the process of putting together a new Azure design for a client. As always in Azure, the network components form the core of the design. There were a couple of key requirements the existing environment had outgrown: the lack of any layer-7 heightened security controls at the edge, and the lack of a DMZ.

I was going through some designs I've previously done and checking the Microsoft literature on what some fresh design patterns might look like, in case anything has changed in recent times. There is still only a single reference in the Microsoft Azure documentation, and it still references ASM rather than ARM.

For me then, it seems that the existing pattern I’ve used is still valid. Therefore, I thought I’d share what that architecture would look like via this quick blog post.

My DMZ with a firewall appliance design

Here’s an overview of some key points on the design:

  • Firewall appliances will have multiple interfaces, but there are typically two we are mostly concerned with: an internal interface and an external interface
  • Network interfaces in ARM are now independent objects from compute resources
    • As such, an interface needs to be associated with a subnet
    • Both the internal and external interfaces could in theory be associated with the same subnet, but that's a design for another blog post some day
  • My DMZ design features two DMZ subnets across two zones (a third zone is shown for reference)
    • Zone 1 = “Untrusted”
    • Zone 2 = “Semi-trusted”
    • Zone 3 = “Trusted” – this is for reference purposes only so you get the overall picture
  • Simple subnet design
    • Subnet 1 = “External DMZ”
    • Subnet 2 = “Internal DMZ”
    • Subnet 3 = Trusted
  • Network Security Groups (NSGs) are also used to encapsulate the DMZ subnets and to isolate traffic from the VNET
  • Through this topology there are effectively three layers of firewall between the untrusted zone and the trusted zone
    • External DMZ NSG, firewall appliance and Internal DMZ NSG
  • With two DMZ subnets (external DMZ and internal DMZ), there are two scenarios for deploying DMZ workloads
    • External DMZ = workloads that do not require heightened security controls by way of the firewall
      • Workload example: proxy server, jump host, additional firewall appliance management or monitoring interfaces
      • Alternatively, this subnet does not need to be used for anything other than the firewall edge interface
    • Internal DMZ = workloads that require heightened security controls and that (as a best practice) shouldn't be deployed in the trusted zone
      • Workload example: front end web server, Windows WAP server, non domain joined workloads
  • Using the firewall as an edge device requires route tables to force all traffic leaving a subnet to be directed to the firewall's internal interface (see the sketch after this list)
    • This applies to the internal DMZ subnet and the trusted subnet
    • The external DMZ subnet does not have a route table configured, so the firewall itself is able to route out to the internet
  • Through VNET peering, other VNETs and subnets could also route out to the internet via this firewall appliance – again, leverage route tables
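As a rough sketch of how one of those route tables could be created with the Azure Management Fluent SDK (the resource names, region and firewall internal IP below are assumptions for illustration, not from the actual design):

// Sketch only: a user-defined route forcing all traffic leaving the trusted
// subnet to the firewall's internal interface.
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

class RouteTableSketch
{
    static void Main()
    {
        IAzure azure = Azure.Authenticate("my.azureauth").WithDefaultSubscription();

        var routeTable = azure.RouteTables.Define("rt-trusted")
            .WithRegion(Region.AustraliaEast)
            .WithExistingResourceGroup("rg-network")
            .DefineRoute("default-via-firewall")
                .WithDestinationAddressPrefix("0.0.0.0/0")   // all traffic leaving the subnet
                .WithNextHopToVirtualAppliance("10.0.2.4")   // firewall internal interface
                .Attach()
            .Create();

        // The route table is then associated with the internal DMZ and trusted
        // subnets (but deliberately not with the external DMZ subnet).
    }
}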

Cheers 🙂

Azure ARM architecture pattern: the correct way to deploy a DMZ with NSGs

Isolating any subnet in Azure can effectively create a DMZ. Doing this is certainly super easy, but it can just as easily be done incorrectly.

Firstly, all that is required is an NSG associated with any given subnet (caveat: remember that NSGs are not compatible with the GatewaySubnet). Doing this will deny most traffic to and from that subnet, mostly relating to the "Internet" tag. What is easily missed is applying an explicit deny-all rule set in both the inbound and outbound rules of the NSG itself.

I've seen some clients that have put an NSG on a subnet and assumed that subnet was protected. Unfortunately, that's not correct.

I've seen some clients that have put a deny-all inbound from the internet (and, vice versa, a deny-all outbound to the internet) and assumed that the subnet was protected and isolated. Unfortunately, that's also not correct.

How to correctly isolate a subnet to create a DMZ

Azure has 3 default rules that apply to an NSG in each direction.

The default inbound rules are (the outbound set mirrors them):

  • 65000 – AllowVnetInBound – allows all traffic from within the virtual network (including peered VNETs)
  • 65001 – AllowAzureLoadBalancerInBound – allows traffic from the Azure load balancer
  • 65500 – DenyAllInBound – denies everything else

To view these default rules, select the "Default rules" button at the top of the NSG inbound or outbound rules.

Of these 3 rules, it's the two higher-priority allow rules that can trip people up. The main culprit is rule 65000: it means that any other subnet in your VNET, and any other VNET that is peered with your VNET, is allowed to communicate with your given subnet.

To correctly isolate a subnet in a VNET, we need to create a new rule (I always use 4096, the lowest-priority rule number available) for both inbound and outbound, to deny all ports from all IPs or subnets, overriding these default rules. Azure NSGs work by way of precedence: the lower the rule priority number, the higher… um… you guessed it: the precedence when processing the rules (much like any other firewall or network appliance vendor's ACLs). This 4096 deny rule should look something like this:
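As a rough code equivalent (the portal achieves the same thing; resource names and region here are assumptions), the deny-all rules could be created with the Azure Management Fluent SDK like this:

// Sketch only: explicit deny-all inbound and outbound rules at priority 4096.
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

class DenyAllNsgSketch
{
    static void Main()
    {
        IAzure azure = Azure.Authenticate("my.azureauth").WithDefaultSubscription();

        var nsg = azure.NetworkSecurityGroups.Define("nsg-dmz-internal")
            .WithRegion(Region.AustraliaEast)
            .WithExistingResourceGroup("rg-network")
            .DefineRule("DenyAllInbound")
                .DenyInbound()
                .FromAnyAddress().FromAnyPort()
                .ToAnyAddress().ToAnyPort()
                .WithAnyProtocol()
                .WithPriority(4096)   // lowest precedence: any allow rule overrides it
                .Attach()
            .DefineRule("DenyAllOutbound")
                .DenyOutbound()
                .FromAnyAddress().FromAnyPort()
                .ToAnyAddress().ToAnyPort()
                .WithAnyProtocol()
                .WithPriority(4096)
                .Attach()
            .Create();
    }
}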

You'll also notice two warnings when you attempt to save that rule, which explain more succinctly what happens:

  • Warning #1 – This rule denies traffic from AzureLoadBalancer and may affect virtual machine connectivity. To allow access, add an inbound rule with higher priority to allow AzureLoadBalancer to VirtualNetwork.
  • Warning #2 – This rule denies virtual network access. If you wish to allow access to your virtual network, add an inbound rule with higher priority to Allow VirtualNetwork to VirtualNetwork.

With that, the Azure load balancer and VNET traffic from other subnets within the same VNET (or from other VNETs through peering) will be denied. Thus, we have a truly isolated subnet, one that can be set up as a DMZ.

Cheers!

BONUS

Before I go, I just wanted to quickly mention that in Azure, NSGs can also be applied to a single network interface.

If you flip the methodology there, it's quite easily possible to have no NSGs on any subnets and instead apply NSGs to every interface associated with servers and instances in a VNET. There is a key drawback with this approach though: administrative overhead.

With an NSG associated with a server instance, again following the correct deny-all rule mentioned earlier, a single NIC or a single server instance can be isolated in a pseudo DMZ. The challenge then is, if this process is repeated across 100 servers, keeping all those NSGs up to date and replicating rules when servers need to communicate on various ports or protocols. Administrative overhead indeed!
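Continuing the earlier fluent SDK sketch (reusing the azure and nsg objects from above; the NIC name is assumed), attaching the NSG to a single NIC is short:

// Sketch only: apply the deny-all NSG to one network interface instead of a subnet.
var nic = azure.NetworkInterfaces.GetByResourceGroup("rg-network", "vm01-nic");
nic.Update()
    .WithExistingNetworkSecurityGroup(nsg)
    .Apply();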


Original posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.

Writing for the Web – that includes your company intranet!

You can have a pool made out of gold, but if the water in it is as dirty and old as a swamp, no one will swim in it!

The same can be said about the content of an intranet. You can have the best design, the best developers and the most carefully planned out navigation and taxonomy but if the content and documents are outdated and hard to read, staff will lose confidence in its authority and relevance and start to look elsewhere – or use it as an excuse to get a coffee.

The content of an intranet is usually left to a representative from each department (if you're lucky!), usually people who have been working at the company for years. Or, worse yet, to the IT guy. They are going to use very different language to a new starter, a bookkeeper, or the CEO. Often content is written for an intranet "because it has to be there", or to "cover ourselves", or because "the big boss said so", with no real thought given to how easy it is to read or who will be reading it.


Content on the internet has changed: it's written to meet a user's need and to help them find information as quickly as possible. Why isn't the same attitude applied to your company intranet? If your workers weren't so frustrated finding the information they need to do their job, maybe they'd perform better; maybe that would result in faster sales. Maybe investing in the products your staff use is just as important as the products your consumers use.

I’m not saying that you have to employ a copywriter for your intranet but at least train the staff you nominate to be custodians of your land (your intranet- your baby).

Below are some tips for your nominated content authors.

The way people read has changed

People read differently thanks to the web. They don’t read. They skim.

  • They don’t like to feel passive
  • They’re reluctant to invest too much time at one site or your intranet
  • They don’t want to work for the information

People DON’T READ MORE because

  • What they find isn’t relevant to what they need or want.
  • They’re trying to answer a question and they only want the answer.
  • They’re trying to do a task and they only want what’s necessary.

Before you write, identify your audience

People come to an intranet page with a specific task in mind. When developing your page's content, keep your users' tasks in mind and write to help them accomplish those tasks. If your page doesn't help them complete that task, they'll leave (or call your department!)

Ask these questions

  • Who are they? New starters? Experienced staff? Both? What is the lowest common denominator?
  • Where are they? At work? At home? On the train? (Desktop, laptop, mobile, iPad)
  • What do they want?
  • How educated are they? Are they familiar with business jargon?

Identify the purpose of your text

The main purpose of an intranet is to inform and educate, not so much to entertain or sell.

When writing to present information, ensure:

  • Consistency
  • Objectivity
  • Appropriate use of tables, diagrams or graphs

Structuring your content

Headings and Sub headings

Use headings and sub headings for each new topic. This provides context and informs readers about what is to come. It provides a bridge between chunks of content.

Sentences and Paragraphs

Use short sentences. And use only 1-3 sentences per paragraph.

‘Front Load’ your sentences. Position key information at the front of sentences and paragraphs.

Chunk your text. Break blocks of text into smaller chunks. Each chunk should address a single concept. Chunks should be self-contained and context-independent.

Layering vs scrolling

It is OK if a page scrolls. It just depends how you break up your page! Users' habits have changed in the past 10 years due to mobile devices; scrolling is not a dirty word, as long as the user knows there's more content on the page through visual cues.


Use lists to improve comprehension and retention

  • Bullets for list items that have no logical order
  • Numbered lists for items that have a logical sequence
  • Avoid the lonely bullet point
  • Avoid death by bullet point

General Writing tips

  • Write in plain English
  • Use personal pronouns. Don’t say “Company XYZ prefers you do this” Say “We prefer this”
  • Make your point quickly
  • Reduce print copy – aim for 50% less copy than what you’d write for print
  • Be objective and don’t exaggerate
  • USE WHITE SPACE – this makes content easier to scan, and it is more obvious to the eye that content is broken up into chunks.
  • Avoid jargon
  • Don’t use inflated language

Hyperlinks

  • Avoid explicit link expressions (e.g. "Click here")
  • Describe the information readers will find when they follow the link
  • Use VERBS (doing words) as links.
  • Warn users of a large file size before they start downloading
  • Use links to remove secondary information from the bulk of the text (layer the content)

Remove

  • Empty words and phrases
  • Long words or phrases that could be shorter
  • Unnecessary jargon and acronyms
  • Repetitive words or phrases
  • Adverbs (e.g., quite, really, basically, generally, etc.)

Avoid Fluff

  • Don’t pad write with unnecessary sentences
  • Stick to the facts
  • Use objective language
  • Avoid adjectives, adverbs, buzzwords and unsubstantiated claims

Tips for proofreading

  1. Give it a rest
  2. Look for one type of problem at a time
  3. Double-check facts, figures, dates, addresses, and proper names
  4. Review a hard copy
  5. Read your text aloud
  6. Use a spellchecker
  7. Trust your dictionary
  8. Read your text backwards
  9. Create your own proofreading checklist
  10. Ask for help!

A Useful App

Hemingwayapp.com assesses how good your content is for the web.

A few examples (from a travel page)

Bad Example

Our Approved and Preferred Providers

Company XYZ has contracted arrangements with a number of providers for travel.  These arrangements have been established on the basis of extracting best value by aggregating spend across all of Company XYZ.

Why it’s Bad

Use personal pronouns such as we and you, so the user knows you are talking to them. They know where they work. Remove Fluff.

Better Example

Our Approved and Preferred Providers

We have contracted arrangements with a number of providers for travel to provide best value

Bad Example

Travel consultant:  XYZ Travel Solutions is the approved provider of travel consultant services and must be used to make all business travel bookings.  All airfare, hotel and car rental bookings must be made through XYZ Travel Solutions

Why it’s bad

The author is saying the same thing twice in two different ways. This can easily be said in one sentence.

Better Example

Travel consultant

XYZ Travel Solutions must be used to make all airfare, hotel and car rental bookings.

Bad Example

Qantas is Company XYZ preferred airline for both domestic and international air travel and must be used where it services the route and the “lowest logical fare” differential to another airline is less than $50 for domestic travel and less than $400 for international travel

Why it’s bad

This sentence is too long, and it uses too much jargon. What does "lowest logical fare" even mean? And the second part does not make any sense. What exactly are they trying to say here? I am not entirely sure, but if my guess is correct, it should read something like the version below.

Better Example

Qantas is our preferred airline for both domestic and international air travel. When flying, choose the cheapest rate available within reason. You can only choose another airline if it is cheaper by $50 for domestic and cheaper by $400 for international travel.

Bad Example

Ground transportation:  Company XYZ preferred provider for rental vehicle services is Avis.  Please refer to the list of approved rental vehicle types in the “Relevant Documents” link to the right hand side of this page.

Why it’s bad

Front-load your sentences, with the most important information first. Don't make a user dig for a document; have the relevant document right there. Link the verb. Don't say CLICK HERE!

Better Example

Ground transportation

Avis is our preferred provider for rental vehicles.

View our list of approved rental vehicles.

Bad Example

Booking lead times:  To ensure that the best airfare and hotel rate can be obtained, domestic travel bookings should be made between 14-21 days prior to travel, and international travel bookings between 21 and 42 days prior to travel.  For international bookings, also consider lead times for any visas that may need to be obtained.

Why it’s bad

Front load your sentence… most important information first. This is a good opportunity to chunk your text.

Better Example

Booking lead times

Ensure you book your travel early:

14-21 days prior to travel for domestic

21-42 days prior to travel for international (also consider lead times for visas)

This will ensure that the best airfare and hotel rate can be obtained.

 

Xamarin Application Architecture

In this post, I will talk about strategies for developing a cross-platform Xamarin application with a focus on code sharing, increasing testability, and reducing overall development and maintenance effort.

The application architecture is itself problem specific, but there are certain design patterns that can guide the overall structure of the application. The ones I mostly work with are Model-View-Controller, Model-View-Presenter, and Model-View-ViewModel.

MVC should be adopted for small applications or proofs of concept. Since Android and iOS both natively support MVC, it means fewer roadblocks and faster implementation.

MVVM reduces platform-specific code, and most of the logic is shared across platforms using PCLs. There are great MVVM frameworks out there that work really well with Xamarin, such as MvvmCross and the official Xamarin.Forms.

MvvmCross is a data-driven framework, and the presentation logic is quite centralised. This means upgrades to the library or to the system can break custom navigation and screen transitions within the application. Android and iOS are constantly upgraded, with newer restrictions around security and permissions (background tasks / app permissions) and ongoing improvements to the user interface (gestures / notifications / newer design guidelines), and supporting these can be costly in both time and effort. That being said, it is still a very powerful framework, and most applications can be written with it, with great results.

Xamarin.Forms is a UI-first framework that allows .NET developers to use their existing XAML skills for developing mobile applications. However, this platform is still maturing and needs to reach a more stable stage. Even though native controls can now be used in XAML, one loses the advantage of data binding, and the extra effort required to maintain proper navigation in the application and cater for various screen types outweighs the benefit for now.

MVP is another pattern that works really well for Xamarin cross-platform applications. It is a variant of the MVC pattern and allows full use of platform-native capabilities, with great control over the UI of the application. It is the pattern of choice for native development, and we will look into it more below.

MVP has 3 major components:

Model: This is the data we want to show in our views. Apart from data classes, this layer carries responsibility for retrieving, storing and validating data sets, and for advanced functions such as syncing and conflict resolution, cache maintenance, and offline capabilities.

View: This module is responsible for showing data to the user and responding to user gestures and actions. It should depend only upon the presenter and on native platform capabilities for styling.

Presenter: The presenter is the layer below the view and is responsible for providing data the view can use. It relies on the model layer to retrieve data, and can also publish other events to the UI, such as loading-data, timeout and error states.

The golden rule of mobile development is to keep the UI layer as dumb as possible. That means the UI only needs to know about its data, view transitions and animations, and publishing interaction events such as gestures and button clicks. As mentioned, the view is completely dependent upon the presenter, and the presenter is dependent upon the view for human interaction. However, to promote unit testing of each module, the presenter and view need to be decoupled.

To achieve this, we can define a contract between the presenter and the view. For a login screen, the view's contract can look something like this (a sketch; member names are illustrative):
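// Sketch only: the view exposes what the presenter needs, nothing more.
public interface ILoginView
{
    string UserName { get; }
    string Password { get; }
    void ShowProgress(bool visible);
    void ShowLoginError(string message);
    void NavigateToHome();
}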

and the presenter contract can be something like:
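// Sketch only: member names are illustrative assumptions.
public interface ILoginPresenter
{
    void AttachView(ILoginView view);   // called when the view appears
    void DetachView();                  // called when the view disappears
    void OnLoginClicked();              // reads credentials from the view and signs in
}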

The presenter should be platform independent so that it can be shared across different mobile platforms. This means it does not respond to UI life-cycle methods unless explicitly told to.

Controllers in iOS and Activities in Android are never created directly via their constructors, which means we cannot use constructor injection to provide a presenter to our view. In this scenario I like to use an anti-pattern called "Service Locator".

The service locator is considered an anti-pattern because of the complexities of managing dependencies and their child dependencies; this is highly error prone, and in multi-threaded programs, where application startup can run many different initialisation threads, duplicate instances of a service can be created. The pattern works well in simple scenarios, though, and that is exactly what we are solving here: a dependency service can be used to locate the presenter implementation inside the view.

If the scenario were any more complex, it would mean the view has been assigned more work than it needs to perform, which is against MVP principles. A sample of this is shown below; it's an illustrative sketch, with type and member names assumed.
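// Sketch only: a minimal service locator for presenters.
using System;
using System.Collections.Generic;

public static class ServiceLocator
{
    static readonly Dictionary<Type, object> services = new Dictionary<Type, object>();

    public static void Register<T>(T instance) => services[typeof(T)] = instance;

    public static T Resolve<T>() => (T)services[typeof(T)];
}

// At startup (e.g. in AppDelegate.FinishedLaunching on iOS, or Application.OnCreate
// on Android), every presenter is registered:
//     ServiceLocator.Register<ILoginPresenter>(new LoginPresenter());
// When the view loads (e.g. in ViewDidLoad), it retrieves its presenter:
//     presenter = ServiceLocator.Resolve<ILoginPresenter>();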

As shown above, the AppDelegate loads all the presenters in the application at startup. When the view loads, it uses the service locator instance to retrieve its presenter.

Abstracting the presenter from the view gives us many advantages:

  • One presenter will not work on all devices. Fingerprint functionality for login, for example, can only be used on devices that have the required hardware. In this case inheritance can be used to create presenter versions that handle the extra features, while base presenters contain no such functionality.
  • Creating the presenter outside the view makes our UI truly passive, and thus we are able to test the UI independently of any other module.
  • This approach also allows us to create presenters that help automate UI testing, performing common testing scenarios automatically. This can also be used for automatic product demos.

 

The above implementation of MVP suffers from one disadvantage: it ignores the asynchronous nature of mobile applications. Both Android and iOS run on quite capable devices, and even a simple notes application performs a lot of activities when loading: logging in the user, loading notes from the cache, syncing down notes created on other devices, syncing up notes created while offline, resolving conflicts, showing progress to the user, and restoring state to what it was when the application was last closed. These activities can take from a few milliseconds to a few seconds. The longer the application takes to boot up, the higher the chance the user will stop using it, and eventually remove it from the device.

Enterprise applications are much more complex. In a perfect world they are developed using a mobile-first approach. However, not all companies can follow this, and instead of interacting with a unified set of APIs, the application ends up interacting with multiple legacy systems, combining the results, and then providing them to the UI layer. This means application load time can be really high, especially if there is stale data on the device.

In such a scenario, instead of defining contracts between the presenter and the view, a more reactive approach can be used. This allows us to build the application with a user-centric approach: we think about how the user is going to interact with the application, and what the user sees while waiting for data to become available. The sketch below shows this for a login screen (again, member names are illustrative assumptions).
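// Sketch only: the presenter pushes state to the view through events;
// IAuthService and all member names here are illustrative assumptions.
using System;
using System.Threading.Tasks;

public interface IAuthService
{
    Task LoginAsync(string userName, string password);
}

public class LoginPresenter
{
    readonly IAuthService authService;

    // The view subscribes to these and simply reacts as state is pushed to it.
    public event EventHandler LoginStarted;
    public event EventHandler LoginSucceeded;
    public event EventHandler<string> LoginFailed;

    public LoginPresenter(IAuthService authService)
    {
        this.authService = authService;
    }

    public async void Login(string userName, string password)
    {
        LoginStarted?.Invoke(this, EventArgs.Empty);        // view shows a spinner
        try
        {
            await authService.LoginAsync(userName, password);
            LoginSucceeded?.Invoke(this, EventArgs.Empty);  // view navigates home
        }
        catch (Exception ex)
        {
            LoginFailed?.Invoke(this, ex.Message);          // view shows the error
        }
    }
}

// A view (controller/activity) wires up and reacts, e.g.:
//     presenter.LoginStarted   += (s, e)   => ShowProgress(true);
//     presenter.LoginFailed    += (s, msg) => ShowLoginError(msg);
//     presenter.LoginSucceeded += (s, e)   => NavigateToHome();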

The above code shows the asynchronous nature of the view: it follows a push approach, reacting to events raised by the presenter. This further decouples the view from the presenter, allowing for easier testing, and also helps model real-life scenarios within the app, such as data being delayed or no connectivity.

Thus, we see how MVP aims at building a very user-centric app, with a very passive and light UI layer. Both the iOS and Android platforms are competing to enhance user experience. In the future we may see apps that react and adapt to user habits using machine learning. This also means they will have to work in a stricter environment: stricter guidelines for UI, app permissions, reduced background tasks, and changes in application notification patterns. MVP will allow us to handle these changes without affecting user experience.

Be SOLID: uncle Bob

We have discussed STUPID issues in programming. Shared modules and tight coupling lead to dependency issues in design. The SOLID principles address those dependency issues in OOP.

The SOLID acronym was popularised by Robert Martin for a set of generic design principles, dictated by common sense, in OOP. They mainly address dependencies and tight coupling. We will discuss the SOLID principles one by one and try to relate each of them to the underlying problems and how they try to solve them.

          S – Single Responsibility Principle – SRP

“There should not be more than one reason for something to exist.”


As the name suggests, a module/class etc. should not have more than one responsibility in a system. The more a piece of code is doing, or trying to do, the more fragile, rigid, and difficult to (re)use it gets. Have a look at the code below:

 

class EmployeeService
{
    //constructor(s)//

    Add(Employee emp)
    {
        //…..//
        using (var db = new <Some Database class/ service>()) // or some SINGLETON or factory call, Database.Get()
        {
            try
            {
                db.Insert(emp);
                db.Commit();
                //more code
            }
            catch (…)
            {
                db.Rollback();
            }
        }
        //….
    }
}

 

All looks good, yes? There are genuine issues with this code. The EmployeeService has too many responsibilities: database handling should not be one of them. Because of the baked-in database handling details, the EmployeeService has become rigid and harder to reuse or extend, for example for multiple databases. It's like a Swiss Army knife: it looks easy, but it's very rigid and inextensible.

Let’s KISS (keep it simple, stupid) it a bit.

 

//…
Database db = null;

public EmployeeService()
{
    //…
    db = Database.Get(); // or a constructor etc.
    //..
}

Add(Employee emp)
{
    //…
    db.Add<Employee>(emp);
    //..
}

 

We have removed the database handling details from the EmployeeService class. This makes the code a bit cleaner and more maintainable. It also ensures that everything is doing its job, and its job only. Now the class cares less about how the database is handled and more about Employee, its true purpose.

Also note that SRP does not mean a structure/class can only have a single function/property etc. It means a piece of code should have only one responsibility related to the business: an entity service should only be concerned with handling entities, and not with anything else like database handling, logging, or handling sub-entities directly (like saving an employee's address explicitly).

SRP might increase the total number of classes in a module, but it also increases their simplicity and reusability. This means that in the long run the codebase remains flexible and adaptive to change. Singletons are often regarded as the opposite of SRP because they quickly become God objects doing too many things (the Swiss Army knife) and introduce too many hidden dependencies into a system.

           O – Open/Closed Principle – OCP

“Once done, don’t change it, extend it”


A class in a system must not be open to any changes, except bug fixes. That means we should not change a class to add new features/functionality to it. Now, this does not sound practical, because every class evolves with the business it represents. The OCP says that to add new features, classes must be extended (open) instead of modified (closed). And this introduces abstractions as part of a business need to add new features, instead of as just a fancy nice-to-have.

Developing our classes in the form of abstractions (interfaces/abstract classes) provides multiple-implementation flexibility and greater reusability. It also ensures that once a piece of code is tested, it does not go through another cycle of code changes and retesting just to gain new features. Have a look at the EmployeeService class again:

 

class EmployeeService
{
    void Add(Employee emp)
    {
        //..
        db.Add<Employee>(emp);
        //…
    }
    //…
}

 

Now suppose a new requirement asks that an email be sent to the Finance department when the newly added employee is a contractor. We would have to make changes to this class. Let's redo the service for the new feature.

 

void Add(Employee emp)
{
    //..
    db.Add<Employee>(emp);

    if (emp.Type == EmployeeType.Contractor)
    {
        //… send email to finance
    }
    //…
}
//…

 

The above, though it seems straightforward and a lot easier, is a code smell. It introduces rigid code and hardwired conditions into a class, which would demand retesting all existing use cases related to EmployeeService on top of the new ones. It also makes the code cluttered and harder to manage and reuse as requirements evolve over time. Instead, we could be closed to modification and open to extension.

 

interface IEmployeeService
{
    void Add(Employee employee);
    //…
}

 

And then:

 

class EmployeeService : IEmployeeService
{
    void Add(Employee employee)
    {
        //.. add the employee
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance.
    }
}

 

Of course, instead of the interface we could have an abstract employee service class with a virtual Add method containing the add-the-employee functionality; that would be DRY. A quick sketch of that variant (names assumed):
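abstract class EmployeeServiceBase
{
    public virtual void Add(Employee employee)
    {
        //.. add the employee (common behaviour shared by all services)
    }
}

class ContractorService : EmployeeServiceBase
{
    public override void Add(Employee employee)
    {
        base.Add(employee); // reuse the base behaviour (DRY)
        // send email to finance.
    }
}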

Now instead of a single EmployeeService class we have separate classes that are extensions of the employee service abstraction. This way we can keep adding new features to the service without needing to retest existing ones. It also removes the unnecessary clutter and rigidity from the code and makes it more reusable.

          L – Liskov Substitution Principle – LSP

“If your duck needs batteries, it’s not a duck”

So Liskov worded the principle as:

               If for each object obj1 of type S, there is an object obj2 of type T, such that for all programs P defined in terms of T, the behaviour of P is unchanged when obj1 is substituted for obj2, then S is a subtype of T.

Sounds too complex? I know. Let us say that in English instead.

If we have a piece of code using an object of class Parent, then the code should not have any issues, if we replace Parent object with an object of its Child, where Child inherits Parent.


Take the same employee service code and try to add a new feature to it: get leaves for an employee.

 

interface IEmployeeService
{
    void Add(Employee employee);
    int GetLeaves(int employeeId);
    //…
}

class EmployeeService : IEmployeeService
{
    void Add(Employee employee)
    {
        //.. add the employee
    }

    int GetLeaves(int employeeId)
    {
        // Calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance.
    }

    int GetLeaves(int employeeId)
    {
        //throw some exception
    }
}

 

Since the ContractorService has no business need to calculate leaves, the GetLeaves method just throws a meaningful exception. Makes sense, right? Now let's see the client code using these classes, with IEmployeeService as the parent and EmployeeService and ContractorService as its children.

 

IEmployeeService employeeService = new EmployeeService();
IEmployeeService contractorService = new ContractorService();

employeeService.GetLeaves(<id>);
contractorService.GetLeaves(<id2>);

 

The second line will throw an exception at RUNTIME. At this level it does not mean much. So what? Just don't invoke GetLeaves if it's a ContractorService. OK, let's modify the client code a little to highlight the problem even more.

 

List<IEmployeeService> employeeServices = new List<IEmployeeService>();
employeeServices.Add(new EmployeeService());
employeeServices.Add(new ContractorService());
CalculateMonthlySalary(employeeServices);
//..

void CalculateMonthlySalary(IEnumerable<IEmployeeService> employeeServices)
{
    foreach (IEmployeeService eService in employeeServices)
    {
        int leaves = eService.GetLeaves(<id>); //this will break on the second iteration
        //… bla bla
    }
}

 

The above code will break the moment it tries to invoke GetLeaves in that loop the second time. CalculateMonthlySalary knows nothing about ContractorService and only understands IEmployeeService, as it should. But its behaviour changes (it breaks) when a child of IEmployeeService (ContractorService) is used at runtime. Let's solve this:

 

interface IEmployeeService
{
    void Add(Employee employee);
    //…
}

interface ILeaveService
{
    int GetLeaves(int employeeId);
    //…
}

class EmployeeService : IEmployeeService, ILeaveService
{
    void Add(Employee employee)
    {
        //.. add the employee
    }

    int GetLeaves(int employeeId)
    {
        // Calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance.
    }
}

 

Now the client code to calculate leaves will be:

 

void CalculateMonthlySalary(IEnumerable<ILeaveService> leaveServices)

 

and voilà, the code is as smooth as it gets. The moment we try to do:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new ContractorService()); //Compile-time error.
CalculateMonthlySalary(leaveServices);

 

It will give us a compile-time error, because CalculateMonthlySalary now expects an IEnumerable of ILeaveService to calculate employees' leaves, and ContractorService does not implement ILeaveService. The valid version of the client code becomes:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new EmployeeService());
CalculateMonthlySalary(leaveServices);

 

LSP helps fine-grain the business requirements and operational boundaries of the code. It also helps identify the responsibilities of a piece of code and the kind of resources it needs to do its job. This increases SRP, enhances decoupling, and reduces useless dependencies (CalculateMonthlySalary no longer cares about the whole IEmployeeService; it only depends upon ILeaveService).

Breaking down responsibilities can sometimes be a bit hard with complex business requirements, and LSP also tends to increase the number of isolated code units (classes, interfaces etc.). But its value becomes apparent in simple and carefully designed structures where tight coupling and duplication are avoided.

          I – Interface Segregation Principle – ISP

“Don’t give me something I don’t need”


In LSP, we saw that the method CalculateMonthlySalary had no use for the complete IEmployeeService; it only needed a subset of it, the GetLeaves method. This, in its basic form, is the ISP. It asks us to identify the resources a piece of code needs to do its job, and then provide only those resources, nothing more. ISP finds real dependencies in code and eliminates unwanted ones. This helps greatly in decoupling code, helps in recognising code dependencies, and ensures code isolation and security (CalculateMonthlySalary no longer has any access to the Add method).

ISP advocates module customisation based on OCP: identify requirements and isolate code by creating smaller abstractions, instead of making modifications. Also, when we fine-grain pieces of code using ISP, the individual components become smaller. This increases their testability, manageability and reusability. Have a look:

 

class Employee
{
    //…
    string Id;
    string Name;
    string Address;
    string Email;
    //…
}

void SendEmail(Employee employee)
{
    //.. uses Name and Email properties only
}

 

The above is a violation of ISP. The method SendEmail has no use for the whole Employee class; it only uses a name and an email to send out emails, yet it is dependent on the Employee class definition. This introduces unnecessary dependencies into the system, though it seems small at the start. Now the SendEmail method can only be used for employees and nothing else: no reusability. Also, it has access to all the other features of Employee without any requirement to: security and isolation. Let's rewrite it.

 

void SendEmail(string name, string email)
{
    //.. sends email of whatever
}

 

Now the method does not care about any changes in the Employee class: the dependency is identified and isolated. It can be reused and tested with anything, not just Employee. In short, don't be misled by the word Interface in ISP; the principle has its uses everywhere.

          D – Dependency Inversion Principle – DIP

“To exist, I did not depend upon my sister, and my sister not upon me. We both depended upon our parents”

Remember the example we discussed in SRP, where we introduced the Database class into the EmployeeService.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService()
    {
        //…
        db = Database.Get(); // or a constructor etc.; the EmployeeService is dependent upon Database
        //..
    }

    Add(Employee emp)
    {
        //…
        db.Add<Employee>(emp);
        //..
    }
}

 

The DIP dictates that:

               No high-level module (EmployeeService) should depend upon any low-level module (Database); instead, both should depend upon abstractions. And abstractions should not depend upon details; details should depend upon abstractions.


The EmployeeService here is a high-level module that uses, and depends upon, a low-level module, Database. This introduces a hidden dependency on the Database class and increases the coupling between EmployeeService and Database. Client code using EmployeeService must now have access to the Database class definition, even though it is not exposed to it and does not, apparently, even know the Database class/service/factory/interface exists.

Also note that it does not matter whether we use a singleton, a factory, or a constructor to get a Database instance. Inverting a dependency does not mean replacing its constructor with a service/factory/singleton call, because then the dependency is just transformed into another class/interface while remaining hidden. One change in Database.Get, for example, could have unforeseen implications on client code using the EmployeeService, without it knowing. This makes the code rigid and tightly coupled to details, difficult to test, and almost impossible to reuse.

Let’s change it a bit.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService(Database database)
    {
        //…
        db = database;
        //..
    }

    Add(Employee emp)
    {
        //…
        db.Add<Employee>(emp);
        //..
    }
    //…
}

 

We have moved the acquisition of the Database into a constructor argument (the scope of the db variable is class level). Now EmployeeService is not dependent upon the details of Database instantiation. This solves one problem, but the EmployeeService is still dependent upon a low-level module (Database). Let's change that:

 

IDatabase db = null;

public EmployeeService(IDatabase database)
{
    //…
    db = database;
    //..
}

 

We have replaced the Database with an abstraction (IDatabase). EmployeeService does not depend upon, nor care about, any details of Database anymore; it only cares about the abstraction IDatabase. The Database class will implement the IDatabase abstraction (details depend upon abstractions). A minimal sketch of that arrangement (member names assumed):
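interface IDatabase
{
    void Add<T>(T entity);
    //…
}

class Database : IDatabase // the detail depends upon the abstraction
{
    public void Add<T>(T entity)
    {
        // actual persistence details live here
    }
}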

Now the actual database implementation can be replaced at any time, whether with a mock (for testing) or with any other database implementation as requirements change, and the service will not be affected.

We have covered, in some detail, the SOLID design principles, with a few examples to understand the underlying problems and how they can be solved using these principles. As can be seen, most of the time a SOLID principle is just common sense: not making STUPID mistakes in design, and giving the business requirements some thought.

 

Set your eyes on the Target!


So in my previous posts I've discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms this could be your local Active Directory (authentication domain), but it could be anything; and with the adoption of cloud services, these target systems are often what drives the need for robust IAM services.

Something we are often asked as IAM consultants is: why? Why should corporate applications be integrated with any IAM service? These are valid questions. Sometimes, depending on what the system is and what it does, integrating with an IAM system isn't a practical solution, but more often there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Central authentication services (if supported)

Requirements

With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system will have individual requirements. Some could be as simple as just needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be driven largely by the application vendor and, to a lesser extent, by the application owners and business requirements.

IAM systems are for the most part extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.

This could be as simple as requiring additional attributes that are not standard within either the IAM system or your source systems, or it could be the way in which you want the IAM system to interact with the application, i.e. utilising web services and building custom management agents to connect and synchronise data sets between them.

At the root of all this is the data: when using an IAM system you have a constant flow of data, all stored within the "Vault". This helps ensure that any change to a user flows through to all systems, not just the phone book. It also ensures that any changes are tracked through the governance processes that have been established and implemented as part of the IAM system. Changes made to a user's identity information within a target application can be easily identified, to the point of saying this change was made on this date/time because a change to this person's data occurred within the HR system at this time.

Integration

Most IAM systems will have management agents or connectors (the phrase varies depending on the vendor you use) built for the typical out-of-the-box systems, and these will for the most part satisfy the requirements of many, so you don't tend to have to worry so much about that. But if you have bespoke systems that have been developed and built up over the years for your business, this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM service, custom management agents would be built using an Extensible Connectivity Management Agent (ECMA). How you would build and develop management agents for FIM or MIM is quite an extensive discussion, and something better suited to a separate post.

One of the "sticky" points here is that most of the time, in order to integrate applications, you need elevated access to the application's back end to be able to populate data to, and pull data from, the application. The way this is done through any IAM system is via specific service accounts that are restricted to performing only the functions the application requires.

Authentication and SSO

Application integration tightens the security of data, with access to applications controlled through various mechanisms; authentication plays a large part in the IAM process.

During the provisioning process, passwords are usually set when an account is created, either using random password generators (preferred) or by setting a specific temporary password. When doing this, though, it's always done with the intent that the user resets their password when they first log on. The self-service functionality that can be introduced for this enables the user to reset their password without ever having to know what the initial password was.

Depending on the application, separate passwords might be created that need to be managed. In most cases IAM consultants/architects will try to minimise this, ideally to the point of it not being required at all, but this isn't always possible. In these situations, the IAM system has methods to manage this as well. In the Microsoft space this can be controlled through password synchronisation using the Password Change Notification Service (PCNS). This basically means that if a user changes their main password, that change can be propagated to all the systems that have separate passwords.

Most applications today use standard LDAP authentication to provide access to their application services, which makes the password management process much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials which you control. Examples of these standard protocols are SAML, OAuth, WS-Fed/WS-Trust and many more.

There is a growing shift in the industry towards these being cloud services, such as Microsoft Azure Active Directory or any number of other services available today.
The obvious benefit of SSO is that you have a single username and password to remember. It also greatly reduces the security risk to your business: from an auditing and compliance perspective, having a single authentication directory can help reduce your business's overall exposure to compromise from external or internal threats.

Well, that about wraps it up. IAM is for the most part an enabler: it prepares your business for the consumption of cloud services and cloud enablement, which can help reduce your overall IT spend over the coming years. But one thing I think I've highlighted throughout this particular series is requirements, requirements, requirements… repetitive, I know, but for IAM so crucially important.

If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.

 

What’s a DEA?

In my last post I made a reference to a "Data Exchange Agreement" or DEA, and I've since been asked a couple of times about it. So I thought it would be worthwhile writing a post about what it is and why it's of value to you and your business.

So what's a DEA? Well, in simple terms it's exactly what the name states: an agreement that defines the parameters within which data is exchanged between Service A and Service B, Service A being the producer of attributes X and Service B the consumer. I've intentionally used a vague example here, as a DEA is used across many services in business and government and is not specific to IT or IAM services. But if your business adopts a controlled data governance process, it can play a pivotal role in how IAM services are implemented and adopted throughout the entire enterprise.

So what does a DEA look like? Well, in an IAM service it's quite simple: you specify your "Source" and your "Target" services. An example could be the following:

Source

  • ServiceNow
  • AurionHR
  • PROD Active Directory
  • Microsoft Exchange

Target

  • PROD Active Directory
  • Resource Active Directory Domain
  • Microsoft Online Services (Office 365)
  • ServiceNow

As you can see, this only tells you where the data is coming from and where it's going to; it doesn't go into any detail about what data is being transported or in which direction. A separate section in the DEA details this; an example is provided below:

MIM Attribute    | Flow | ServiceNow Attribute | Source             | User Types  | Notes
accountName      | –>   | useraccountname      | MIM                | All         |
employeeID       | –>   | employeeid           | AurionHR           | All         |
employeeType     | –>   | employeetype         | AurionHR           | All         |
mail             | <–   | email                | Microsoft Exchange | All         |
department       | –>   | department           | AurionHR           | All         |
telephoneNumber  | –>   | phone                | PROD AD            | All         |
o365SourceAnchor | –>   | ImmutableID          | Resource Domain    | All         |
employeeStatus   | –>   | status               | AurionHR           | All         |
dateOfBirth*     | –>   | dob                  | AurionHR           | CORP Staff  | yyyy-MM-dd
division         | –>   | region               | AurionHR           | CORP Staff  |
firstName        | –>   | preferredName        | AurionHR           | CORP Staff  |
jobTitle         | –>   | jobtitle             | AurionHR           | CORP Staff  |
positionNumber   | –>   | positionNumber       | AurionHR           | CORP Staff  |
legalGivenNames* | <–   | firstname            | ServiceNow         | Contractors |
locationCode     | <–   | location             | ServiceNow         | Contractors |
ManagerID        | <–   | manager              | ServiceNow         | Contractors |
personalTitle    | <–   | title                | ServiceNow         | Contractors |
sn               | <–   | sn                   | ServiceNow         | Contractors |
department       | <–   | department           | ServiceNow         | Contractors |
employeeID       | <–   | employeeid           | ServiceNow         | Contractors |
employeeType     | <–   | employeetype         | ServiceNow         | Contractors |

This might seem like a lot of detail, but it is actually only a small section of what would be included in a DEA of this type. The whole purpose of the agreement is to define which attributes are managed by which systems and flow to which target systems, and, as many IAM consultants can tell you, the real thing would be substantially longer than this example. And this is just an example for a single system; this is something that’s done for every application that consumes data related to your organisation’s staff members.
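To give you a feel for how the attribute-flow section can be made machine-readable, here is a rough Python sketch that captures a few of the example rows as data and runs a trivial consistency check over them. The structure and field names are my own illustration, not a standard DEA format.

```python
# A rough sketch of the attribute-flow section as data. The service and
# attribute names come from the example table above; the structure itself
# is illustrative only.
from dataclasses import dataclass

@dataclass
class AttributeFlow:
    mim_attribute: str      # attribute name on the MIM side
    target_attribute: str   # attribute name on the ServiceNow side
    direction: str          # "export" (MIM -> ServiceNow) or "import" (<-)
    source: str             # authoritative system for this attribute
    user_types: str         # population the flow applies to
    sensitive: bool = False

flows = [
    AttributeFlow("accountName", "useraccountname", "export", "MIM", "All"),
    AttributeFlow("employeeID", "employeeid", "export", "AurionHR", "All"),
    AttributeFlow("dateOfBirth", "dob", "export", "AurionHR", "CORP Staff", sensitive=True),
    AttributeFlow("legalGivenNames", "firstname", "import", "ServiceNow", "Contractors", sensitive=True),
]

# Trivial consistency check: sensitive flows must name an agreed authoritative source.
APPROVED_SOURCES = {"AurionHR", "ServiceNow"}
for flow in flows:
    if flow.sensitive and flow.source not in APPROVED_SOURCES:
        raise ValueError(f"{flow.mim_attribute}: unapproved source {flow.source}")
```

Keeping the agreement in a form like this means documentation and enforcement can come from the same artefact, rather than drifting apart over time.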

One thing you might also notice is that I’ve highlighted two attributes in the table above in bold. Why, you might ask? The point is to call out data sets that are considered “sensitive”; within the DEA you would classify these as sensitive data, with specific conditions attached to that data set. Your business would define and word this appropriately, but it could be as simple as a section stating the following:

“Two attributes are classed as sensitive data in this list and cannot be reproduced, presented or distributed under any circumstances”

One challenge often confronted within any business is application owners wanting “ownership” of the data they consume. Utilising a DEA provides clarity over who owns the data and what your applications can do with the data they consume, removing any uncertainty.
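If you wanted to back a clause like the one above with a technical control, a minimal sketch might look like the following: identity records are redacted before they are handed to a consuming application. The attribute names follow the DEA example; the function itself is purely illustrative.

```python
# A sketch of a technical control backing the sensitivity clause: strip
# classified attributes from a record before a consuming application sees it.
SENSITIVE_ATTRIBUTES = {"dateOfBirth", "legalGivenNames"}

def redact_for_consumer(record):
    """Return a copy of the identity record without sensitive attributes."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_ATTRIBUTES}

staff = {"accountName": "jsmith", "dateOfBirth": "1980-01-01", "department": "Sales"}
print(redact_for_consumer(staff))  # {'accountName': 'jsmith', 'department': 'Sales'}
```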

To summarise: the point of this post wasn’t to provide you with a template or example DEA to use; it was to help explain what a DEA is, what it’s used for, and what its parts can look like. No two DEAs are the same, and handing you a complete example would only see you recreating it from scratch anyway. It is, however, intended to help you understand what is needed.

As with any of my posts, if you have any questions please pop a comment or reach out to me directly.

 

The Vault!


The vault, or more precisely the “Identity Vault”, is a single-pane view of all the collated data about your users, drawn from the various data source repositories. This sounds like a lot of jargon but it’s quite simple really.

In the diagram below we look at a really simple attribute: firstName (givenName within AD).

[Diagram: firstName attribute data flow between the vault and its connected systems]

As you will see, at the centre is the attribute, and branching off it are all the connected systems, e.g. Active Directory. What this doesn’t illustrate very well is the specific data flow: where the data is coming from and where it’s going to. That comes down to import and export rules, as well as any precedence rules you need to put in place.

The Identity Vault, or Central Data Repository, provides a central store of an identity’s information, aggregated from a number of sources. It is also able to identify the data that exists within each of the connected systems, whether it collects identity information from them or provides information to them as target systems. Sounds pretty simple, right?
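As a rough illustration of how precedence might be resolved in code, the sketch below picks an attribute value from whichever contributing system ranks highest. The system names follow the examples in this post; the precedence order itself is an assumption for illustration.

```python
# A rough sketch of attribute precedence: the vault receives firstName from
# several connected systems and keeps the value from the highest-ranked
# source. The precedence order below is an assumption for illustration.
PRECEDENCE = ["AurionHR", "PROD CORP AD", "ServiceNow"]  # highest priority first

def resolve(contributions):
    """Return the value from the highest-precedence contributing system."""
    for system in PRECEDENCE:
        if system in contributions:
            return contributions[system]
    raise LookupError("no contributing system supplied a value")

# PROD CORP AD outranks ServiceNow, so "Joe" wins:
print(resolve({"ServiceNow": "Jo", "PROD CORP AD": "Joe"}))  # Joe
```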

Further to the basics described above, each object in the vault has a unique identifier, or anchor. This is a unique value that is automatically generated when the user is created, ensuring that regardless of what happens to the user’s details throughout the lifecycle of the user object, we are able to track the user and apply changes correctly. This is particularly useful when you have multiple users with the same name, for example: it avoids the wrong person being updated when changes occur, as the sketch and table below illustrate.
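Here is a minimal sketch of the anchor idea, using a random UUID as a stand-in for whatever anchor format your vault actually generates: the identifier is created once, and all later updates match on it rather than on the user’s mutable, possibly duplicated, details.

```python
# A minimal sketch of the anchor concept: uuid4 stands in for whatever anchor
# format your vault actually generates. The anchor is created once, never
# changes, and updates are matched on it rather than on mutable details.
import uuid

vault = {}  # anchor -> identity record

def create_user(first_name, last_name, department):
    anchor = str(uuid.uuid4())  # generated at creation, immutable thereafter
    vault[anchor] = {"FirstName": first_name, "LastName": last_name,
                     "Department": department}
    return anchor

# Two users with identical details remain distinct objects in the vault:
a = create_user("John", "Smith", "Sales")
b = create_user("John", "Smith", "Sales")
vault[b]["Department"] = "Marketing"  # the right John Smith is updated via his anchor
```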

| Attribute | User 1 | User 2 |
|---|---|---|
| FirstName | John | John |
| LastName | Smith | Smith |
| Department | Sales | Sales |
| UniqueGUID | 10294132 | 18274932 |

The table above shows the simplest form of a user’s identity profile. A complete identity profile will consist of many more attributes, some of which may be custom attributes for specific purposes, as in the example demonstrated below:

| Attribute | Contributing MA | Value |
|---|---|---|
| AADAccountEnabled | AzureAD Users | TRUE |
| AADObjectID | AzureAD Users | 316109a6-7178-4ba5-b87a-24344ce1a145 |
| accountName | MIM Service | jsmith |
| cn | PROD CORP AD | Joe Smith |
| company | PROD CORP AD | Contoso Corp |
| csObjectID | AzureAD Users | 316109a6-7178-4ba5-b87a-24344ce1a145 |
| displayName | MIM Service | Joe Smith |
| domain | PROD CORP AD | CORP |
| EXOPhoto | Exchange Online Photos | System.Byte[] |
| EXOPhotoChecksum | Exchange Online Photos | 617E9052042E2F77D18FEFF3CE0D09DC621764EC8487B3517CCA778031E03CEF |
| firstName | PROD CORP AD | Joe |
| fullName | PROD CORP AD | Joe Smith |
| mail | PROD CORP AD | joe.smith@contoso.com.au |
| mailNickname | PROD CORP AD | jsmith |
| o365AccountEnabled | Office365 Licensing | TRUE |
| o365AssignedLicenses | Office365 Licensing | 6fd2c87f-b296-42f0-b197-1e91e994b900 |
| o365AssignedPlans | Office365 Licensing | Deskless, MicrosoftCommunicationsOnline, MicrosoftOffice, PowerAppsService, ProcessSimple, ProjectWorkManagement, RMSOnline, SharePoint, Sway, TeamspaceAPI, YammerEnterprise, exchange |
| o365ProvisionedPlans | Office365 Licensing | MicrosoftCommunicationsOnline, SharePoint, exchange |
| objectSid | PROD CORP AD | AQUAAAAAAAUVAAAA86Yu54D8Hn5pvugHOA0CAA== |
| sn | PROD CORP AD | Smith |
| source | PROD CORP AD | WorkDay |
| userAccountControl | PROD CORP AD | 512 |
| userPrincipalName | PROD CORP AD | jsmith@contoso.com.au |

So now we have a more complete picture of the data, where it has come from, and how we connect that data to a user’s identity profile, and we can start to look at how we synchronise that data to any and all managed targets. It’s very important to control this flow, though; to do so we need strict governance controls over what data is to be distributed throughout the environment.

One practical approach to managing this is a data exchange agreement (DEA). This gives the organisation a more defined understanding of what data is being used by which application and for what purpose. It also helps define strict controls on what application owners can do with the data being consumed, for example strictly prohibiting them from sharing that data with anyone without the written consent of the data owners.

In my next post we will start to discuss how we manage target systems and how we use the data we have to provision services and manage user information, through what’s referred to as synchronisation rules.

As with all my posts, if you have any questions please drop me a note.

 

Protect Your Business and Users from Email Phishing in a Few Simple Steps

The goal of an email phishing attack is to obtain personal or sensitive information from a victim, such as credit card numbers, passwords or usernames, for malicious purposes: that is to say, to trick a victim into performing an unwitting action aimed at stealing sensitive information from them. This form of attack is generally conducted by means of spoofed emails or instant messaging communications which try to deceive their target as to the nature of the sender and the purpose of the message. An example would be an email claiming to be from a bank, asking for credential re-validation in the hope of stealing those credentials by means of a cloned website.

Some examples of email phishing attacks:

Spear phishing

Phishing attempts directed at specific individuals or companies have been termed spear phishing. Attackers may gather personal information about their target to increase their probability of success. This technique is by far the most successful on the internet today, accounting for 91% of attacks. [Wikipedia]

Clone phishing

Clone phishing is a type of phishing attack whereby a legitimate, and previously delivered, email containing an attachment or link has had its content and recipient address(es) taken and used to create an almost identical or cloned email. The attachment or link within the email is replaced with a malicious version and then sent from an email address spoofed to appear to come from the original sender. It may claim to be a resend of the original or an updated version to the original. This technique could be used to pivot (indirectly) from a previously infected machine and gain a foothold on another machine, by exploiting the social trust associated with the inferred connection due to both parties receiving the original email. [Wikipedia]

Whaling

Several phishing attacks have been directed specifically at senior executives and other high-profile targets within businesses, and the term whaling has been coined for these kinds of attacks. In the case of whaling, the masquerading web page/email will take a more serious executive-level form. The content will be crafted to target an upper manager and the person’s role in the company. The content of a whaling attack email is often written as a legal subpoena, customer complaint, or executive issue. Whaling scam emails are designed to masquerade as a critical business email, sent from a legitimate business authority. The content is meant to be tailored for upper management, and usually involves some kind of falsified company-wide concern. Whaling phishers have also forged official-looking FBI subpoena emails, and claimed that the manager needs to click a link and install special software to view the subpoena. [Wikipedia]

Staying ahead of the game from an end user perspective

  1. Take a very close look at the sender’s email address.

A phishing email will generally use an address that looks genuine but isn’t (e.g. accounts@paypals.com), or will try to disguise the email’s real sender using HTML trickery so that it appears to be a genuine address (see below).

  2. Is the email addressed to you personally?

Companies with whom you have valid accounts will always address you formally, by name and surname. Formulations such as ‘Dear Customer’ are a strong indication that the sender doesn’t know you personally, and such emails should be treated with suspicion.

  3. What web address is the email trying to lure you to?

Somewhere within a phishing email, often surrounded by links to completely genuine addresses, will be one or more links to the means by which the attacker intends to steal from you, in many cases a website that looks genuine enough. There are, however, a number of ways of confirming its validity.

  4. Hover your cursor over any link in an email before you click it; this will reveal the real destination, which is sometimes hidden behind deceptive HTML (see the sketch after this list). Look at the address very closely: the deceit may be obvious, or well hidden in a subtle typo (e.g. accouts@app1e.com).

a. Be wary of URL redirection services such as bit.ly, which hide the ultimate destination of a link.

b. Be wary of very long URLs. If in doubt, do a Google search for the root domain.

  5. Does the email contain poor grammar and spelling mistakes?

The quality of a phishing email is often not up to the general standard of a company’s official communications. Look for spelling mistakes, awkward phrasing, grammatical errors and odd characters in the email as a sign that something may be wrong.
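For the technically inclined, here is a small Python sketch of the link check described in point 4: it pulls every link out of an HTML email body and flags any whose visible text claims one domain while the underlying href points at another. It uses only the standard library, and the heuristic is deliberately simple, an illustration rather than a production filter.

```python
# Flag links in an HTML email whose visible text looks like a URL on a
# different host from the href they actually point at. Standard library only.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None   # href of the anchor currently being parsed
        self._text = []     # visible text inside that anchor
        self.findings = []  # (visible text, real destination) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            real_host = urlparse(self._href).hostname or ""
            # Visible text claims a URL, but the href goes somewhere else:
            if text.startswith("http") and (urlparse(text).hostname or "") != real_host:
                self.findings.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://accouts.app1e.com/login">http://www.apple.com</a>')
print(auditor.findings)  # [('http://www.apple.com', 'http://accouts.app1e.com/login')]
```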

 

Mitigating the impact of phishing attacks against an organisation

  1. Implement robust email and web access filtering (a DMARC lookup sketch follows this list).

  2. Invest in user education.

  3. Deploy an antivirus endpoint protection solution.

  4. Deploy phishing-aware endpoint protection software.
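As a small illustration of the kind of check a filtering layer performs for item 1, the sketch below looks up the DMARC policy a sending domain has published. It assumes the third-party dnspython package is installed, and the domain shown is a placeholder.

```python
# Look up the DMARC policy a domain publishes in DNS, one of the signals a
# mail-filtering layer uses before trusting mail claiming to come from that
# domain. Assumes dnspython (pip install dnspython); the domain is a placeholder.
import dns.resolver

def dmarc_policy(domain):
    """Return the domain's raw DMARC TXT record, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
    return None

print(dmarc_policy("example.com"))
```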