Kloud Managed Services team responds to the global (WannaCry) ransomware attack

It’s been a busy week for our Managed Services team, who have collaborated across our globally distributed team to provide a customer-first security response to this most recent ransomware attack.

From the outset our teams acted swiftly to ensure the several hundred servers, platforms & desktop infrastructure under our management were secure.

Based on our deep knowledge of our customers, their environments and landscapes, Kloud Managed Services was able to swiftly develop individual security approaches to remediate discovered security weak points.

In cases where customers have opted for Kloud’s robust approach to patch management, the process ensured customer environments were proactively protected against the threat. For these customers, no action was required other than confirming the relevant patches were in place for all infrastructure Kloud manages.

Communication is key in responding to these types of threats and the Managed Services team’s ability to collaborate and share knowledge quickly internally and then externally with our customers has ensured we remain on top of this threat and are ever vigilant for the next one.

VPC (Virtual Private Cloud) Configuration


This blog is Part 01 of a 02-part series on custom VPC configurations.

Part 01 discusses the following scenario

  • Creating a VPC with 02 subnets ( Public and Private )
  • Creating a bastion host server in the public subnet
  • Allowing the Bastion host to connect to the servers in the Private Subnet using RDP.

Part 02 will discuss the following

  • Configuring NAT Instances
  • Configuring VPC Peering
  • Configuring VPC flow Logs.

What is a VPC

A VPC can be described as a logical datacenter where AWS resources can be deployed.

The logical datacenter can be connected to your physical datacenter through VPN or direct connect. Details (https://blog.kloud.com.au/2014/04/10/quickly-connect-to-your-aws-vpc-via-vpn/)

This section deals with the following tasks.

  • Creating the VPC
  • Creating Subnets
  • Configuring Subnets for internet access

1 Creating the VPC

The following steps should be followed for configuring the VPC. We could use the wizard to create a VPC, but this document will focus on the detailed method where every configuration parameter is defined by the user.

Step 01.01 : Logon to the AWS console

Step 01.02 : Click on VPC

Step 01.03 : Select Your VPCs


Step 01.04 : Select Create VPC


Step 01.05 Enter the following details in the Create VPC option

  • Enter the details of the Name Tag
  • Enter the CIDR Block. Keep in mind that the block size cannot be greater than /16.

Step 01.06: Click on Yes,Create


We have now created a VPC. The following resources are also created automatically

  • Routing table for the VPC
  • Default VPC Security Group
  • Network ACL for the VPC
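For readers who prefer scripting, the VPC creation above can also be sketched with the AWS CLI. The CIDR block (10.0.0.0/16) and the `vpc-xxxxxxxx` placeholder are illustrative assumptions; substitute the values for your own environment.

```shell
# Create the VPC; 10.0.0.0/16 is an example CIDR block (must be /16 or smaller)
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Tag the returned VPC ID with a Name (placeholder ID shown)
aws ec2 create-tags --resources vpc-xxxxxxxx --tags Key=Name,Value=MyVPC
```

The route table, security group and network ACL listed above are created automatically alongside the VPC, exactly as in the console flow.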

Default Routing Table ( Route Table Id = rtb-ab1cc9d3)

Check the Routing table below for the VPC. If you check the routes of the route table, you see the following

  • Destination :
  • Target : Local
  • Status : Active
  • Propagated : No


This route ensures that all the subnets in the VPC are able to connect with each other. All subnets created in the VPC are assigned to the default route table, therefore it's best practice not to change the default route table. For any route modification, a new route table can be created and assigned to subnets specifically.

Default Network Access Control List ( NACL id= acl-ded45ca7)

Mentioned below is the snapshot of the default NACL created when the VPC was created.


Default security group for the VPC ( Group id = sg-5c088122)

Mentioned below is the snapshot of the default Security Group created when the VPC was created.


Now we need to create subnets, keeping in mind that the considered scenario needs 02 subnets ( 01 Public and 01 Private ).

2 Creating Subnets

Step 02.01 : Go to the VPC Dashboard and select Subnets


Step 02.02 : Click on Create Subnet


Step 02.03: Enter the following details in the Create Subnet window

  • Name Tag: Subnet "<IPv4 CIDR Block> – <Availability Zone>" = – us-east-1a
  • VPC : Select the newly created VPC = vpc-cd54beb4 | MyVPC
  • Availability Zone: us-east-1a
  • IPv4 CIDR Block :

Step 02.04: Click on Yes,Create


Now we have created the first subnet.

We will use the same steps to create another subnet in availability zone us-east-1b

  • Name Tag: Subnet "<IPv4 CIDR Block> – <Availability Zone>" = – us-east-1b
  • VPC : Select the newly created VPC = vpc-cd54beb4 | MyVPC
  • Availability Zone: us-east-1b
  • IPv4 CIDR Block :
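The two subnet creations can be sketched with the AWS CLI as follows. The /24 CIDR blocks and the VPC ID are illustrative assumptions; the tutorial's actual values are shown only in the console screenshots.

```shell
# Carve two example /24 subnets out of the VPC range, one per availability zone
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
```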


3 Configuring subnets

Now that we have 02 subnets, we need to configure one as the public subnet and the other as the private subnet. The following tasks need to be performed for this activity:

  • Internet Gateway creation and configuration
  • Route table Creation and configuration
  • Auto Assign Public IP Configuration.

3.1 Internet gateway Creation and Configuration ( IGW Config )

Internet gateways, as the name suggests, provide access to the internet. An internet gateway is attached to a VPC, and the routing table is configured to direct all internet-bound traffic to it.

Mentioned below are the steps for creating and configuring the internet gateway.

Step 03.01.01 : Select Internet Gateways  from the VPC dashboard and click on Create Internet Gateway


Step 03.01.02 : Enter the name tag and click on Yes,Create


The internet gateway is created but not attached to any VPC ( internet gateway Id = igw-90c467f6 ).

Step 03.01.03: Select the Internet Gateway and click on Attach to VPC


Step 03.01.04 : Select your VPC and click on Yes,Attach


We have now attached the Internet Gateway to the VPC. Now we need to configure the route tables for internet access.
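A CLI sketch of the internet gateway steps, using the gateway ID from this walkthrough and a placeholder VPC ID:

```shell
# Create the internet gateway, then attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-90c467f6 --vpc-id vpc-xxxxxxxx
```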

3.2 Route Table creation and Configuration ( RTBL Config)

A default route table ( with id rtb-ab1cc9d3 ) was created when the VPC was created. It's best practice to create a separate route table for internet access.

Step 03.02.01 : Click on the Route Table section in the VPC Dashboard and click Create Route table


Step 03.02.02: Enter the following details in the Create Route Table window and click on Yes,Create

  • Name tag: Relevant Name = InternetAccessRoutetbl
  • VPC : Your VPC = vpc-cd54b3b4 | MyVPC


Step 03.02.03 : Select a newly created Route table( Route Table Id = rtb-3b78ad43 | InternetAccessRouteTbl) and Click Routes and then Edit


Step 03.02.04: Click on Add Another Route


Step 03.02.05 : Enter the following values in the route and click on Save

  • Destination:
  • Target : Your Internet Gateway ID  = igw-90c467f6 ( in my case )


A route table needs subnet associations. The subnets which we want to make public should be associated with the route table. In our case, we will associate the public subnet with the route table.

Step 03.02.06: Click on Subnet Associations


You should be able to see the message “You do not have any subnet associations”

Step 03.02.07: Click on Edit

Step 03.02.08: Select the subnet you want to configure as a public subnet and click on Save
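The route table configuration above can be sketched with the AWS CLI. The route table and subnet IDs are placeholders; 0.0.0.0/0 is the standard destination for a default internet route.

```shell
# Create a dedicated route table for internet access in the VPC
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx

# Add a default route pointing all internet-bound traffic at the internet gateway
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-90c467f6

# Associate the public subnet with the new route table
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx
```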


3.3 Auto Assign Public IP Configuration

Neither of the subnets created will assign public IP addresses to the instances deployed in them, as per their default configuration.

We need to configure the public subnet to provide public IPs automatically.

Step 03.03.01: Go to the Subnets section in the VPC dashboard.

Step 03.03.02: Select the Public Subnet

Step 03.03.03: Click on Subnet Actions

Step 03.03.04: Select Modify auto-assign IP Settings


Step 03.03.05: Check Enable Auto-assign Public IPv4 Addresses in the Modify Auto-Assign IP Settings window and click on Save


After this configuration, any EC2 instance deployed in the subnet will be assigned a public IP.
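The same auto-assign setting can be flipped from the CLI (subnet ID is a placeholder):

```shell
# Make the public subnet hand out public IPv4 addresses to new instances
aws ec2 modify-subnet-attribute --subnet-id subnet-xxxxxxxx --map-public-ip-on-launch
```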

4 Security

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with it. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When AWS decides whether to allow traffic to reach an instance, it evaluates all the rules from all the security groups associated with the instance.

We will create 02 security groups:

  • Public-Private ( will contain access rules from the public subnet to the private subnet )
  • Public-Internet ( will contain the ports allowed from the internet to the public subnet )

Step 4.1  : Click on Security Groups in the Network and Security section

Step 4.2 : Click on Create Security Group


Step 4.3 : Enter the following details on the Create Security group Window and click on Create

  • Security Group Name : Public-Private
  • Description : Rules between Private subnet and Public subnets
  • VPC : Select the VPC we created in the exercise.
  • Click on Add Rules to Add the following rules
    • Type = RDP , Protocol = TCP , Port Range = 3389 , Source = Custom :
    • Type = All ICMP – IPV4, Protocol = ICMP , Port Range = 0 – 65535 , Source = Custom,


Step 4.4 : Enter the following details on the Create Security group Window and click on Create

  • Security Group Name : Public-Internet
  • Description : Rules between Public and the internet
  • VPC : Select the VPC we created in the exercise.
  • Click on Add Rules to Add the following rules
    • Type = RDP , Protocol = TCP , Port Range = 3389 , Source = Anywhere
    • Type = All ICMP – IPV4, Protocol = ICMP , Port Range = 0 – 65535 , Source = Anywhere
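The Public-Internet group above can be sketched with the CLI; the Public-Private group is analogous, with the public subnet's CIDR as the source instead of 0.0.0.0/0. Group and VPC IDs are placeholders.

```shell
# Create the security group for internet-facing access
aws ec2 create-security-group --group-name Public-Internet \
    --description "Rules between Public and the internet" --vpc-id vpc-xxxxxxxx

# Allow RDP (TCP 3389) from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 3389 --cidr 0.0.0.0/0

# Allow all ICMP from anywhere (port -1 means all ICMP types/codes)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol icmp --port -1 --cidr 0.0.0.0/0
```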


5 EC2 Installation

Now we will deploy 02 EC2 instances: one named PrivateInstance in the private subnet and one named PublicInstance in the public subnet.

Public Instance Configuration :

  • Instance Name : Public Instance
  • Network : MyVPC
  • Subnet :
  • Auto-Assign Public ip : Use subnet setting ( enabled )
  • Security Group : Public-Internet security group
  • IAM Role : As per requirement

Private Instance Configuration :

  • Instance Name : Private Instance
  • Network : MyVPC
  • Subnet :
  • Auto-Assign Public ip : Use subnet setting ( disabled)
  • Security Group : Public-Private security group
  • IAM Role : As per requirement

Once the deployment of the EC2 instance is complete, you can connect to the PublicInstance through RDP and from there connect further to the Private instances.
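A CLI sketch of the public instance launch; the AMI, key pair, instance type and IDs are all illustrative assumptions. The private instance is the same command with the private subnet, the Public-Private security group, and `--no-associate-public-ip-address`.

```shell
# Launch the public instance into the public subnet with the Public-Internet group
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
    --subnet-id subnet-xxxxxxxx --security-group-ids sg-xxxxxxxx \
    --associate-public-ip-address --key-name my-key-pair \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=PublicInstance}]'
```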

Patching EC2 through SSM


Why Patch Manager?

AWS SSM Patch Manager is an automated tool that helps you simplify your operating system patching process, including selecting the patches you want to deploy, the timing for patch roll-outs, controlling instance reboots, and many other tasks. You can define auto-approval rules for patches with an added ability to black-list or white-list specific patches, control how the patches are deployed on the target instances (e.g. stop services before applying the patch), and schedule the automatic roll out through maintenance windows.

These capabilities help you automate your patch maintenance process to save you time and reduce the risk of non-compliance. All capabilities of EC2 Systems Manager, including Patch Manager, are available free of charge so you only pay for the resources you manage.

This article can be used to configure patching for instances hosted on the AWS platform.

You will need the necessary prerequisite knowledge of the EC2 and IAM sections of AWS. If so, then please read on.

The configuration has three major sections

  • EC2 instance configuration for patching
  • Default Patching Baseline Configuration
  • Maintenance Window configuration.

1  Instance Configuration

We will start with the First section which is configuring the Instances to be patched. This requires the following tasks.

  1. Create Amazon EC2 Role for patching with two policies attached
    • AmazonEC2RoleForSSM
    • AmazonSSMFullAccess
  2. Assign Roles to the EC2 Instances
  3. Configure Tags to ensure patching in groups.

Important: The machines to be patched should be able to contact Windows Update services. The article mentioned below contains the URLs which should be accessible for proper patch management.


Mentioned below are the detailed steps for the creation of an IAM role for Instances to be Patched using Patch Manager.

Step 1: Select IAM —> Roles and click on Create New Role


Step 2: Select Role Type —-> Amazon EC2


Step 3: Under Attach Policy Select the following and Click Next

  • AmazonEC2RoleForSSM
  • AmazonSSMFullAccess


Step 4: Enter the Role Name and Select Create Role (At the bottom of the page)


Now you have gone through the first step in your patch management journey.

Instances should be configured to use the role created above (or any role which has the AmazonEC2RoleforSSM and AmazonSSMFullAccess policies attached) to ensure proper patch management.


We need to group our AWS hosted servers, because no one in the right frame of mind wants to patch all the servers in one go.

To accomplish that we need to use Patch Groups (explained later).

For example:

We can configure Patch Manager to patch EC2 instances with Patch Group tag = PatchGroup01 on Wednesday and EC2 instances with Patch Group tag = PatchGroup02 on Friday.

To utilize patch groups, all EC2 instances should be tagged to support cumulative patch management based on Patch Groups.
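Tagging can be sketched with the CLI; the instance ID and group value are placeholders. Note that "Patch Group" (with the space) is the literal tag key Systems Manager looks for.

```shell
# Put an instance into a patch group by tagging it with the "Patch Group" key
aws ec2 create-tags --resources i-xxxxxxxxxxxxxxxxx \
    --tags "Key=Patch Group,Value=PatchGroup01"
```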

Congratulations, you have completed the first section of the configuration. Keep following; just two to go.

Default Patch Baseline configuration.

Patches are categorized using the following attributes :

  • Product Type: like Windows version etc.
  • Classification: CriticalUpdates, SecurityUpdates, ServicePacks, UpdateRollups
  • Severity: Critical, Important, Low etc.

Patches are prioritized based on the above factors.

A Patch baseline can be used to configure the following using rules

  • Products to be included in Patching
  • Classification of Patches
  • Severity of Patches
  • Auto Approval Delay: Time to wait (in days) before automatic approval

Patch Baseline is configured as follows.

Step 01: Select EC2 —> Select Patch Baselines (under the Systems Manager Services Section)

Step 02: Click on Create Patch Baseline


Step 03: Fill in the details of the baseline and click on Create


Go to Patch Baseline and make the newly created baseline as your default.
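A baseline along the lines described above can be sketched with the CLI. The name, classifications, severities and seven-day approval delay are illustrative assumptions; the baseline ID in the second command is the placeholder returned by the first.

```shell
# Create a Windows patch baseline auto-approving critical/security updates after 7 days
aws ssm create-patch-baseline --name "MyWindowsBaseline" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=CLASSIFICATION,Values=[CriticalUpdates,SecurityUpdates]},{Key=MSRC_SEVERITY,Values=[Critical,Important]}]},ApproveAfterDays=7}]"

# Make the new baseline the default
aws ssm register-default-patch-baseline --baseline-id pb-xxxxxxxxxxxxxxxxx
```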


At this point, the instances to be patched are configured and we have also configured the patch policies. In the next section we provide AWS with the when (date and time) and the what (task) of the patching cycle.

Maintenance Windows Configuration

As the name specifies, Maintenance Windows give us the option to Run Tasks on EC2 Instances on a specified schedule.

What we wish to accomplish with Maintenance Windows is to run a command (AWS-ApplyPatchBaseline), but on a given schedule and on a subset of our servers. This is where all the above configurations gel together to make patching work.

Configuring Maintenance windows consist of the following tasks.

  • IAM role for Maintenance Windows
  • Creating the Maintenance Window itself
  • Registering Targets (Selecting servers for the activity)
  • Registering Tasks (Selecting tasks to be executed)

Mentioned below are the detailed steps for configuring all the above.

Step 01: Create a Role with the following policy attached

  • AmazonSSMMaintenanceWindowRole


Step 02: Enter the Role Name and Role Description


Step 03: Click on Role and copy the Role ARN

Step 04: Click on Edit Trust Relationships


Step 05: Add the following values under the Principal section of the JSON file as shown below

“Service”: “ssm.amazonaws.com”

Step 06: Click on Update Trust Relationships (on the bottom of the page)


At this point the IAM role for the maintenance window has been configured. The next section details the configuration of the maintenance window.
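The role creation and trust relationship above can be sketched with the CLI in one pass, rather than editing the trust JSON afterwards. The role name matches the MaintenanceWindowsRole used later; the trust policy grants ssm.amazonaws.com the ability to assume the role, as per Step 05.

```shell
# Trust policy letting Systems Manager assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the maintenance window managed policy
aws iam create-role --role-name MaintenanceWindowsRole \
    --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name MaintenanceWindowsRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole
```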

Step 01: Click on EC2 and select Maintenance Windows (under the Systems Manager Shared Resources section)


Step 02: Enter the details of the maintenance window and click on Create Maintenance Window


At this point the Maintenance Window has been created. The next task is to Register Targets and Register Tasks for this maintenance window.
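A CLI sketch of the window creation; the name, the 2 a.m. Wednesday cron expression, and the 4-hour duration with a 1-hour cutoff are illustrative assumptions.

```shell
# Create a weekly maintenance window (Wednesdays 02:00, 4h long, stop scheduling 1h before close)
aws ssm create-maintenance-window --name "PatchWednesday" \
    --schedule "cron(0 2 ? * WED *)" --duration 4 --cutoff 1 \
    --allow-unassociated-targets
```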

Step 01: Select the Maintenance Window created and click on Actions

Step 02: Select Register Targets


Step 03: Enter Owner Information and select the Tag Name and Tag Value

Step 04: Select Register Targets


At this point the targets for the maintenance window have been configured. This leaves us with the last activity in the configuration which is to register the tasks to be executed in the maintenance window.
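Registering targets by Patch Group tag can be sketched as follows (window ID and tag value are placeholders):

```shell
# Target every instance carrying the matching "Patch Group" tag
aws ssm register-target-with-maintenance-window \
    --window-id mw-xxxxxxxxxxxxxxxxx --resource-type INSTANCE \
    --targets "Key=tag:Patch Group,Values=PatchGroup01"
```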

Step 01: Select the Maintenance Window and Click on Actions

Step 02: Select Register Task


Step 03: Select AWS-ApplyPatchBaseline from the Document section


Step 04: Click on Registered targets and select the instances based on the Patch Group Tag

Step 05: Select Operation SCAN or Install based on the desired function (Keep in mind that an Install will result in a server restart).

Step 06: Select the MaintenanceWindowsRole

Step 07: Click on Register Tasks


After completing the configuration, the Registered Task will run on the Registered Targets based on the schedule specified in the Maintenance Window
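The task registration can be sketched with the CLI. The window ID, window target ID (returned by the target registration), role ARN and concurrency limits are illustrative placeholders; the Operation parameter is Install here, or Scan for a report-only run.

```shell
# Register AWS-ApplyPatchBaseline as a Run Command task against the registered targets
aws ssm register-task-with-maintenance-window \
    --window-id mw-xxxxxxxxxxxxxxxxx \
    --targets "Key=WindowTargetIds,Values=<window-target-id>" \
    --task-arn "AWS-ApplyPatchBaseline" --task-type RUN_COMMAND \
    --service-role-arn "arn:aws:iam::<account-id>:role/MaintenanceWindowsRole" \
    --max-concurrency 1 --max-errors 1 \
    --task-parameters '{"Operation":{"Values":["Install"]}}'
```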

The status of the Maintenance Window can be seen in the History section (as Shown below)


Hope this guide gets you through the initial patching configuration for your EC2 instances in AWS.

The full configuration can also be scripted with the AWS CLI. Let's leave that for another blog for now.

Thanks for Reading.

When to use Azure Load Balancer or Application Gateway


One thing Microsoft Azure is very good at is giving you choices – choices on how you host your workloads and how you let people connect to those workloads.

In this post I am going to take a quick look at two technologies that are related to high availability of solutions: Azure Load Balancer and Application Gateway.

Application Gateway is a bit of a dark horse for many people. Anyone coming to Azure has to get to grips with Azure Load Balancer (ALB) as a means to provide highly available solutions, but many don’t progress beyond this to look at Application Gateway (AG) – hopefully this post will change that!

The OSI model

A starting point to understand one of the key differences between the ALB and AG offerings is the OSI model which describes the logical layers required to implement computer networking. Wikipedia has a good introduction, which…


A Closer Look at Amazon Chime

In news this quarter, AWS have released a web conferencing cloud service to join their existing ‘Business Productivity‘ services, which already include Amazon WorkDocs and Amazon WorkMail. So my thought was to help you gauge where this sits in relation to Skype for Business. I don’t want to turn this into a Microsoft versus Amazon review, but I do want you to understand the product names that ‘somewhat’ align with each other.

Exchange = WorkMail

SharePoint/OneDrive for Business  =  WorkDocs

Skype for Business  = Chime

The Microsoft products are reasonably well known in the world, so I’ll give you a quick one-liner about the Amazon products:

WorkMail “Hosted Email”

WorkDocs “Hosted files accessible via web, PC, mobile devices with editing and sharing capability”

So what is Chime?

Chime isn’t exactly a one-to-one feature set for Skype for Business so be wary of articles conveying this sentiment as they haven’t really done their homework. Chime can be called either a web meeting, online meeting, or online video conferencing platform. Unlike Skype for Business, Chime is not a PBX replacement. So what does it offer?

  • Presence
  • 1-1 and group IM
  • Persistent chat / chat rooms
  • Transfer files
  • Group HD video calling
  • Schedule meetings using Outlook or Google calendar
  • Join meeting from desktop or browser
  • Join meeting anonymously
  • Meeting controls for presenter
  • Desktop sharing
  • Remote desktop control
  • Record audio and video of meetings
  • Allow participants to dial-in with no Chime client (PSTN conferencing bridge)
  • Enable legacy H.323 endpoints to join meetings (H.323 bridge)
  • Customisable / personalised meeting space URL

The cloud-hosted products that I see as similar to Chime are WebEx, Google Hangouts, GoToMeeting and ReadyTalk, to name just a few. As Kloud is a Microsoft Gold Partner with a Unified Communications team that delivers Skype for Business solutions rather than the other products I have mentioned, I will tell you a few things that differentiate Skype for Business from Chime.

  • Direct Inward Dial (DID) user assignment
  • Call forwarding and voicemail
  • Automated Call Distribution (ACD)
  • Outbound PSTN dialling and route based on policy assignment
  • Interactive Voice Response (IVR)
  • Hunt Groups / Call Pickup Groups
  • Shared Line Appearance
  • SIP IP-Phone compatibility
  • Attendant / Reception call pickup
  • On-premises, hybrid and hosted deployment models

Basically, Skype for Business will do all the things that replace a PBX solution. Now this isn't a cheap shot at Amazon, because that isn't where they are positioning their product. What I hope to have done is clarify any misconception about where the product sits in the market and how it relates to features in a well-known product like Microsoft Skype for Business.

For the price that Amazon is offering Chime at in the online meeting market, it is very competitive against other hosted products. Their ‘Rolls-Royce’ plan is simply purchased for $15 USD per user per month. If you’re not invested in the Office 365 E3/E5 license ecosystem and you need an online meeting platform at a low cost, then Chime might be right for you. Amazon offers a 30-day free trial, which is a great way to test it out.



UX Process: A groundwork for effective design teams

User Experience practice is about innovating and finding solutions to real-world problems. This means we need to find problems and validate them before trying to fix them. So how do we go about doing all this? Read on…

I’ve been asked to explain a “Good UX Process” numerous times over the years in consulting. Customers want a formula that can solve all their design problems. But unfortunately, it doesn’t exist; there is no single UX process that applies to all.

Every organisation and its problems are unique. They all require different sets of UX activities to determine a positive outcome.

However, there are some general guidelines on:

  • What type of UX artefacts can we deliver?
  • Who do we engage and collaborate with?
  • What kind of UX activities / workshops can we suggest?

To answer the above, I put together a general UX design process with the help of my colleagues a few years back. So here it goes.


Phase I – Discovery

People Involved
  1. Product Owner (Whoever is funding the project)
  2. Project Manager (Whoever is overseeing the project)
  3. Business Analyst (Whoever is managing different teams)
  4. Analytics
  5. Marketing
  6. Information Technology
  7. User Experience Designer

Artefacts
  1. Problem Statements
  2. User Needs
  3. Design principles
  4. Benchmarking
  5. Personas
  6. Service maps
  7. Hypothesis

Activities
  1. Discover pain-points
  2. Discuss solutions to pain points (utopia)
  3. Analyse competitors in similar space
  4. Discover potential constraints (IT or culture related)
  5. Come up with a basic information architecture (homepage elements, navigation and unique pages)

Phase II – Ideation and Concept

People Involved
  1. Product Owner
  2. Project Manager
  3. Business Analyst
  4. Developers
  5. Information Technology
  6. User Experience Designer
  7. SEO

Artefacts
  1. Concept Vision
  2. High Level Requirements
  3. UX Estimates
  4. Dev Estimates
  5. UX Epics
  6. Story Boards
  7. Experience Maps
  8. Navigation

Activities
  1. User Testing
  2. Feasibility Prototyping
  3. Workshop Facilitations

Phase III – Design and Build

People Involved
  1. Project Manager
  2. Business Analyst
  3. Marketing
  4. Developers
  5. User Experience Designer
  6. Visual Designer
  7. SEO

Artefacts
  1. Wireframes
  2. Visual Designs
  3. User Interface Specs
  4. Process Flows

Activities
  1. Collaborative Design Sessions
  2. 6-ups Designs
  3. Rapid User Testing
  4. Wireframes
  5. UI Trends

Phase IV – Measure and Respond

People Involved
  1. Project Manager
  2. Analytics
  3. Developers
  4. SEO

Artefacts
  1. UI Improvements
  2. UX Enhancements

Activities
  1. Advanced Analytics
  2. Collaborative Design Sessions
  3. User Testing (A/B Testing)

The best way to use this UX process is, once you understand your client’s requirements, to extract the best bits that suit your needs and take it from there.

I hope you guys find this useful!

Back to Basics – Design Patterns – Part 2

In the previous post, we discussed design patterns, their structure and usage. Then we discussed the three fundamental types and started off with the first one – Creational Patterns.

Continuing with creational patterns we will now discuss Abstract Factory pattern, which is considered to be a super set of Factory Method.

Abstract Factory

In the Factory method, we discussed how it targets a single family of subclasses using a corresponding set of factory method classes, or a single factory method class via parametrised/ static factory method. But if we target families of related classes (multiple abstractions having their own subclasses) and need to interact with them using a single abstraction then factory method will not work.

A good example could be the creation of doors and windows for a room. A room could offer a combination of wooden door, sliding door etc. and wooden window, glass window etc. The client machine will however, interact with a single abstraction (abstract factory) to create the desired door and window combination based on selection/ configuration. This could be a good candidate for an Abstract Factory.

So abstract factory allows instantiation of families (plural) of related classes through a single interface (the abstract factory), independent of the underlying concrete classes.


When a system needs to use families of related or dependent classes, it might need to instantiate several subclasses. This leads to code duplication and complexity. Taking the above example of a room, the client machine will need to instantiate classes for doors and windows for one combination and then do the same for the others, one by one. This breaks the abstraction of those classes, exposes their encapsulation, and puts the instantiation complexity on the client. Even if we use a factory method for every single family of classes, this will still require several factory methods, unrelated to each other, and managing them to make combination offerings (rooms) will clutter the code.

We will use abstract factory when:

–          A system is using families of related or dependent objects without any knowledge of their concrete types.

–          The client does not need to know the instantiation details of subclasses.

–          The client does not need to use subclasses in a concrete way.


There are four components of this pattern.

–          Abstract Factory

The abstraction the client interacts with to create door and window combinations. This is the core factory that provides interfaces for the individual factories to implement.

–          Concrete Factories

These are the concrete factories (CombinationFactoryA, CombinationFactoryB) that create the concrete products (doors and windows).

–          Abstract Products

These are the abstract products that will be visible to the client (AbstractDoor & AbstractWindow).

–          Concrete Products

The concrete implementations of products offered. WoodenDoor, WoodenWindow etc.




Sample code

Using the above example, our implementation would be:

    public interface Door
    {
        double GetPrice();
    }

    class WoodenDoor : Door
    {
        public double GetPrice()
        {
            return 0; // return price of a wooden door
        }
    }

    class GlassDoor : Door
    {
        public double GetPrice()
        {
            return 0; // return price of a glass door
        }
    }

    public interface Window
    {
        double GetPrice();
    }

    class WoodenWindow : Window
    {
        public double GetPrice()
        {
            return 0; // return price of a wooden window
        }
    }

    class GlassWindow : Window
    {
        public double GetPrice()
        {
            return 0; // return price of a glass window
        }
    }



The concrete classes and factories should ideally have protected or private constructors and should have appropriate access modifiers. e.g.

    protected WoodenWindow() { }



The factories would be like:

    public interface AbstractFactory
    {
        Door GetDoor();
        Window GetWindow();
    }

    class CombinationA : AbstractFactory
    {
        public Door GetDoor()
        {
            return new WoodenDoor();
        }

        public Window GetWindow()
        {
            return new WoodenWindow();
        }
    }

    class CombinationB : AbstractFactory
    {
        public Door GetDoor()
        {
            return new GlassDoor();
        }

        public Window GetWindow()
        {
            return new GlassWindow();
        }
    }



And the client:

    public class Room
    {
        Door _door;
        Window _window;

        public Room(AbstractFactory factory)
        {
            _door = factory.GetDoor();
            _window = factory.GetWindow();
        }

        public double GetPrice()
        {
            return this._door.GetPrice() + this._window.GetPrice();
        }
    }



    AbstractFactory woodFactory = new CombinationA();
    Room room1 = new Room(woodFactory);

    AbstractFactory glassFactory = new CombinationB();
    Room room2 = new Room(glassFactory);


The above showcases how abstract factory could be utilised to instantiate and use related or dependent families of classes via their respective abstractions without having to know or understand the corresponding concrete classes.

The Room class only knows about the Door and Window abstractions and lets the configuration/ client code input dictate which combination to use at runtime.

Sometimes abstract factory also uses Factory Method or Static Factory Method for factory configurations:

    public static class FactoryMaker
    {
        public static AbstractFactory GetFactory(string type)   // some configuration
        {
            // configuration switches
            if (type == "wood")
                return new CombinationA();
            else if (type == "glass")
                return new CombinationB();
            else   // default or fault config
                return null;
        }
    }



Which changes the client:

    AbstractFactory factory = FactoryMaker.GetFactory("wood"); // configurations or inputs

Room room1 = new Room(factory);

As can be seen, polymorphic behaviour is at the core of these factories, as is the usage of related families of classes.


Creational patterns, particularly Factory, can work along with other creational patterns. The advantages of Abstract factory include:

–          Isolation of the creation mechanics from its usage for related families of classes.

–          Adding new products/ concrete types does not affect the client code rather the configuration/ factory code.

–          Provide way for the client to work with abstractions instead of concrete types. This gives flexibility to the client code to the related use cases.

–          Usage of abstractions reduce dependencies across components and increases maintainability.

–          Design often starts with Factory method and evolves towards Abstract factory (or other creational patterns) as the families of classes expand and their relationships develop.


Abstract factory does introduce some disadvantages in the system.

–          It has a fairly complex implementation, and as the families of classes grow, so does the complexity.

–          Relying heavily on polymorphism does require expertise for debugging and testing.

–          It introduces factory classes, which can be seen as added workload without having any direct purpose except for instantiation of other classes, particularly in bigger systems.

–          Factory structures are tightly coupled with the relationships of the families of classes. This introduces maintainability issues and rigid design.

For example, adding a new type of window or door in the above example would not be as easy. Adding another family of classes, like Carpet and its subtypes, would be even more complex, though this does not affect the client code.


Abstract Factory is a widely used creational pattern, particularly because of its ability to handle the instantiation of several families of related classes. This is helpful in real-world solutions, where entities are often interrelated and used in combination across a variety of use cases.

Abstract Factory simplifies designs that target business processes by eliminating concrete types, replacing them with abstractions, maintaining their encapsulation, and removing the added complexity of object creation. It also removes a lot of duplicate code on the client side, making business processes testable, robust, and independent of the underlying concrete types.

In the third part of creational patterns, we will discuss a pattern slightly similar to Abstract Factory: the Builder. The two can sometimes be competitors in design decisions, but they differ in real-world applications.


Further reading





Back to Basics – Design Patterns – Part 1

Design Patterns

Design patterns are reusable solutions to recurring problems of software design and engineering in the real world. Patterns make it easier to reuse proven techniques to resolve design and architectural complications, and to communicate and document those techniques with better understanding, making them more accessible to developers in an abstract way.


Design patterns enhance the classic techniques of object-oriented programming by encouraging the reuse and communication of solutions to common problems at an abstract level, and they improve the maintainability of code as a by-product at the implementation level.

The “Y” in Design Patterns

Apart from the obvious advantage of providing better techniques to map the real world into programming models, a prime objective of OOP is design and code reusability. However, this is easier said than done. In reality, designing reusable classes is hard and takes time, and few developers write code with long-term reusability in mind. This becomes obvious when dealing with recurring problems in design and implementation, and it is where design patterns come into the picture. Any proven technique that provides a solution to a recurring problem in an abstract, reusable way, independent of implementation barriers such as programming-language details and data structures, is categorised as a design pattern.

The design patterns:

  • help analyse common problems in a more abstract way.
  • provide proven solutions to those problems.
  • decrease the overall coding time at the implementation level.
  • encourage code reusability by providing common solutions.
  • increase code lifetime and maintainability by enhancing the capacity for change.
  • increase understanding of the solutions to recurring problems.

A pattern will describe a problem, provide its solution at an abstract level, and elaborate the result.


The problem part of a pattern describes the issue(s) a program or piece of code is facing, along with its context. It might highlight an inflexible class structure, or issues related to the usage of an object, particularly at runtime.


A solution is always defined as a template: an abstract design that describes its elements, their relationships and responsibilities, and details how this abstract design addresses the problem at hand. A pattern never provides a concrete implementation of the solution, which enhances its reusability and flexibility. The actual implementation of a pattern may vary across programming languages.

Understanding software design patterns requires a solid knowledge of object-oriented programming concepts such as abstraction, inheritance, and polymorphism.

Types of Design Patterns

Design patterns are often divided into three fundamental types.

  • Creational – deals with the creation/ instantiation of objects for specific business use cases. Polymorphism, along with inheritance, is at the core of these patterns.
  • Structural – targets the structure and composition of classes, relying heavily on the inheritance and composition concepts of OOP.
  • Behavioural – underlines the interaction between classes and the separation and delegation of responsibilities.


Creational Patterns

Often the implementation and usage of a class, or a group of classes, is tightly coupled with the way objects of those classes are created. This decreases the flexibility of those classes, particularly at runtime. For example, suppose we have a group of cars providing driving functionality. The creation of each car then requires a piece of code (a new-constructor call in most modern languages), which couples the creation of cars with their usage:

Holden car1 = new Holden();


Mazda car2 = new Mazda();


Even if we use the base type:

Car car1 = new Holden();


Car car2 = new Mazda();


If you look at the above examples, you will notice that the actual class being instantiated is selected at compile time. This creates problems when designing common functionality, because it hardwires concrete types into the usage code. It also exposes the constructors of the classes, which breaks encapsulation.

Creational patterns provide the means of creating and using objects of related classes without having to identify their concrete types or expose their creation mechanism. This gives more flexibility in using the instantiated objects at runtime without worrying about their types. It also results in less code and eliminates creational complexity at the point of use, allowing the code to focus on what to do rather than what to create.

CarFactory[] factories = /* create factories */;

foreach (CarFactory factory in factories)
{
    Car car = factory.CreateCar();
}

The above code removes the creational logic and delegates it to subclasses and factories. This decouples the usage of classes from their creation and lets the runtime dictate the instantiated types, making the code independent of the number of concrete types (cars) in use. The code will work regardless of the number of concrete types, enhancing reusability and the separation of creation from usage. We will discuss the above example in detail under the Factory Method pattern.

The two common traits of creational patterns are:

  • Encapsulation of concrete types and exposure using common abstractions.
  • Encapsulation of instantiation and encouraging polymorphic behaviour.

A system leveraging creational patterns does not need to know or understand concrete types; it handles abstractions only (interfaces, abstract classes). This gives the flexibility to configure a set of related classes at runtime, based on use cases and requirements, without having to alter the code.
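As a sketch of that runtime configurability, using the car example from this article (the registry, the configuration keys, and the Name member are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;

public abstract class Car { public abstract string Name { get; } }
public class Holden : Car { public override string Name => "Holden"; }
public class Mazda : Car { public override string Name => "Mazda"; }

public interface ICarFactory { Car CreateCar(); }
public class HoldenFactory : ICarFactory { public Car CreateCar() => new Holden(); }
public class MazdaFactory : ICarFactory { public Car CreateCar() => new Mazda(); }

public static class FactoryConfig
{
    // A registry maps configuration values to factories, so the rest of the
    // system handles only the ICarFactory and Car abstractions at runtime.
    private static readonly Dictionary<string, ICarFactory> Registry =
        new Dictionary<string, ICarFactory>
        {
            ["holden"] = new HoldenFactory(),
            ["mazda"] = new MazdaFactory()
        };

    public static ICarFactory For(string key) => Registry[key];
}
```

A caller can write FactoryConfig.For("mazda").CreateCar() and receive a Car without ever naming a concrete type; swapping the configured set of factories requires no change to that caller.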

There are five fundamental creational patterns:

  • Factory method
  • Abstract factory
  • Prototype
  • Singleton
  • Builder

Factory Method

This pattern specifies a way of creating instances of related classes but lets the subclasses decide which concrete type to instantiate at runtime; it is also called Virtual Constructor. The pattern encourages the use of interfaces and abstract classes over concrete types. The decision is based upon input supplied by either the client code or configuration.


When client code instantiates a class, it knows the concrete type of that class. This breaks the polymorphic abstraction when dealing with a family of classes. We may use Factory Method when:

  • Client code doesn't know the concrete types of the subclasses to create.
  • The instantiation needs to be deferred to subclasses.
  • The job needs to be delegated to subclasses, and the client code does not need to know which subclass is doing the job.



Abstract <Product> (Car)

The interface/ abstraction the client code understands. This describes the family of subclasses to be instantiated.

<Product> (Holden, Mazda, …)

This implements the abstract <product>. The subclass(es) that needs to be created.

Abstract Factory (CarFactory)

This provides the interface/ abstraction for the creation of abstract product, called factory method. This might also use configuration for creating a default product.

ConcreteFactory (HoldenFactory, MazdaFactory, …)

This provides the instantiation of the concrete subclass/ product by implementing the factory method.

Sample code

Going back to our example of cars, we will now provide a detailed implementation of the Factory Method.

public abstract class Car // could be an interface instead, if no default behaviour is required
{
    public virtual void Drive()
    {
        Console.Write("Driving a car");
    }
}

public class Holden : Car
{
    public override void Drive()
    {
        Console.Write("Driving Holden");
    }
}

public class Mazda : Car
{
    public override void Drive()
    {
        Console.Write("Driving Mazda");
    }
}

public interface ICarFactory // would be a class/ abstract class instead, if a default factory implementation is required
{
    Car CreateCar();
}

public class HoldenFactory : ICarFactory
{
    public Car CreateCar()
    {
        return new Holden();
    }
}

public class MazdaFactory : ICarFactory
{
    public Car CreateCar()
    {
        return new Mazda();
    }
}

Now the client code could be:

var factories = new ICarFactory[2];

factories[0] = new HoldenFactory();

factories[1] = new MazdaFactory();

foreach (var factory in factories)
{
    var car = factory.CreateCar();
    car.Drive();
}

Now we can keep introducing new Car types and the client code will behave the same way for this use case. The creation of factories could further be abstracted by using parametrised factory method. This will modify the CarFactory interface.


public class CarFactory
{
    public virtual Car CreateCar(string type) // any configuration or input from client code
    {
        // switches based on the configuration or input from client code
        if (type == "holden")
            return new Holden();
        else if (type == "mazda")
            return new Mazda();
        else // default instantiation/ fault condition etc.
            return null; // default/ throw etc.
    }
}

Which will change the client code:


CarFactory factory = new CarFactory();

Car car1 = factory.CreateCar("holden"); // Configuration/ input etc.

Car car2 = factory.CreateCar("mazda"); // Configuration/ input etc.


The above code shows the flexibility of the factory method, particularly the Factory class.


Factory Method is the simplest of the creational patterns that target the creation of a family of classes.

  • It separates the client code from a family of classes by weakening the coupling and abstracting away the concrete subclasses.
  • It reduces changes in client code caused by changes in concrete types.
  • It provides a configuration mechanism for subclasses by moving their instantiation out of the client code and into the factory method(s).
  • The default constructors of the subclasses can be marked protected/ private to prevent direct creation.
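The last point can be sketched as follows. Nesting the factory is one common C# idiom for keeping a hidden constructor reachable; this nesting is an assumption on my part, not part of the sample code above:

```csharp
using System;

public class Holden
{
    // Hidden constructor: client code cannot write "new Holden()" directly.
    private Holden() { }

    // Nesting the factory inside the class gives it access
    // to the private constructor.
    public class Factory
    {
        public Holden CreateCar()
        {
            return new Holden();
        }
    }
}
```

Client code is then forced through new Holden.Factory().CreateCar(), so the only creation path is the factory.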


There are some disadvantages that should be considered before applying factory method:

  • Factory Method demands an individual factory implementation for every subclass in the family. This might introduce unnecessary complexity.
  • A concrete parametrised factory leverages switch conditions to identify the concrete type to instantiate. This introduces cluttered and hardwired code in the factory. Any change in the configuration, the client-code input, or the implementation of the concrete types demands a review of the factory method.
  • The subtypes must share the same base type. Factory Method cannot handle subtypes of different base types; that requires more complex creational patterns.


Factory Method is simple to understand and implement in a variety of languages. The most important consideration is to look for many subclasses of the same base type/ interface being handled at that abstract level in a system. Conditions like the ones below could warrant a Factory Method implementation.

Car car1 = new Holden();


Car car2 = new Mazda();


. . .


On the other hand, if you have some concrete usage of subclasses like:


if (car.GetType() == typeof(Holden))



Then Factory Method might not be the answer, and you should perhaps reconsider the class hierarchy.
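One sketch of that reconsideration: move the behaviour the type check was selecting into the hierarchy itself. The ServiceIntervalKm member below is a made-up example, not from the article's sample code:

```csharp
using System;

public abstract class Car
{
    // The behaviour that "if (car.GetType() == typeof(Holden))" was
    // selecting in client code now lives in the hierarchy, so client
    // code stays polymorphic and free of type checks.
    public abstract int ServiceIntervalKm { get; }
}

public class Holden : Car
{
    public override int ServiceIntervalKm => 10000;
}

public class Mazda : Car
{
    public override int ServiceIntervalKm => 15000;
}
```

Client code simply reads car.ServiceIntervalKm through the Car abstraction, and adding a new subclass requires no changes to that client code.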

In part 2, we will discuss the next creational pattern, considered a superset of Factory Method: Abstract Factory.


Further Reading





Microservices – An Agile Architecture Introduction

In the ever-evolving and dynamic world of software architecture, you will hear new buzzwords every other month. Microservices is the latest of them; though not as new as it appears, it has been around for some time in different forms.

Microservices – The micro of SOA?

        “… the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” that Fowler guy.

So, to break it all down: microservices is a form of fine-grained service-oriented architecture (SOA) that can be used to build independent, flexible, scalable, and reusable software applications in a decoupled manner. It relies heavily on separately deployable and maintainable subsystems/ processes that communicate over a network using technology-agnostic protocols. That, in turn, paves the way for DevOps and continuous-deployment pipelines.

Principia Microservices – nutshell

You will not find a defined set of characteristics of a microservices-based system; a formal definition is still missing. We can, however, outline the basic and most commonly mentioned points.

–          The services are small, fine-grained representations of a single business function.

–          Designed and maintained around business capabilities.

–          Flexible, robust, replaceable, and independent of each other in nature.

–          Designed to embrace faults and failures. An unavailable service cannot bring the system down. Emphasis on monitoring and logging.

–          Advocates the practices of continuous improvement, continuous integration, and continuous deployment.

Services as Components

Defining software systems as a set of smaller, independently maintainable components has always been a goal of well-defined architectures. Current programming languages lack a mechanism for explicit component boundaries; often it is just documentation, published best practices, and code reviews that stop a developer from mixing logical components in a monolithically designed application.

This differs a bit from the idea of libraries. Libraries are in-process components of an application that, though logically separate, cannot be deployed and maintained independently.

In microservices, components are explicitly defined as independent and autonomously deployable services. This enables continuous processes (CI & CD) and enhances maintainability of individual components, increases reusability and overall robustness of the system. It also enables a better realization of business processes into the application.



Built for Business


Most of the time, organisations looking to build a decoupled system focus on technical capabilities. This produces teams of UI devs, backend devs, DB devs, and so on, and leads to siloed architectures and applications.

Microservices does things differently. The organisation splits teams along business capabilities, which results in cross-functional, dynamic, and independent teams with full technical capabilities.




A consequence of a monolithic design is central management and support of the whole system. This creates problems for scalability and reusability, and such systems tend to favour a single set of tools and technologies.

Microservices advocates a "you build it, you run it" approach. Teams are designed to take end-to-end responsibility for the product. This disperses capabilities across the teams, helps tie services to business capabilities and functions, and enables the selection of underlying technologies based on needs and availability.



Designed for Failure

A failure in a monolithic design can be a catastrophe if it is not handled gracefully and with a fall-back approach. Microservices, however, are designed for failure from the beginning. Because of their independent and granular nature, the overall user experience in case of a failure tends to remain manageable; the unavailability of one service does not bring the whole system down. Their agility and explicit boundaries also make it easier to drill down and identify the root cause. Real-time monitoring, logging, and other cross-cutting concerns are easier to implement and are heavily emphasized.
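As a minimal illustration of embracing failure (the names and the fallback strategy here are assumptions, not from any particular microservices framework), a call wrapper that degrades gracefully instead of propagating the fault:

```csharp
using System;

public static class Resilient
{
    // Call a downstream service; if it fails, log the fault and return a
    // degraded fallback response instead of letting the failure cascade
    // through the system.
    public static T CallWithFallback<T>(Func<T> serviceCall, T fallback)
    {
        try
        {
            return serviceCall();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Service unavailable, degrading gracefully: {ex.Message}");
            return fallback;
        }
    }
}
```

For example, a page could call Resilient.CallWithFallback(() => recommendations.Fetch(), emptyList) so the page still renders when a recommendation service is down; production systems layer retries, timeouts, and circuit breakers on top of this idea.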

      “If we look at the characteristics of an agile software architecture, we tend to think of something that is built using a collection of small, loosely coupled components/services that collaborate together to satisfy an end-goal. This style of architecture provides agility in several ways. Small, loosely coupled components/services can be built, modified and tested in isolation, or even ripped out and replaced depending on how requirements change. This style of architecture also lends itself well to a very flexible and adaptable deployment model, since new components/services can be added and scaled if needed.” — Simon Brown

Microservices architecture tends to result in independent products/ services designed and implemented using different languages, databases, and hardware and software environments, as per their capabilities and requirements. They are granular, with explicit boundaries, autonomously developed, independently deployable, decentralized, and built and released through automated processes.

Issues – Architecture of the wise

However, microservices has certain drawbacks as well: a higher cost of communication because of network latency, runtime overhead (message processing) as opposed to in-process calls, and blurry definitions of the boundaries between services. A poorly designed application based on microservices can yield more complexity, since the complexity is merely shifted onto the communication protocols, particularly in distributed transactional systems.

“You can move it about but it’s still there” — Robert Annett: Where is the complexity?

Decentralized management, especially of databases, has implications for updates and patches, apart from transactional issues. It can lead to inconsistent databases and needs well-defined processes for management, updates, and transactional data consistency.


Overall system performance often demands serious consideration. Since microservices target distributed applications in isolated environments, the communication cost between components tends to be higher, which leads to lower overall system performance if the system is not designed carefully. Netflix is a leading example of how a distributed system based on microservices can outperform some of the best monolithic systems in the world.

Microservices tend to be small and stateless, yet business processes are often stateful and transactional in nature, so partitioning business processes and data models can be challenging. Poorly partitioned business processes create blurry boundaries and responsibility issues.

The potential for failure and unavailability rises significantly in distributed systems built from isolated components communicating over networks. Microservices can suffer in the same way because of their high dependence on isolated components/ services. Systems designed with little or no attention to Design for Failure incur higher running costs and unavailability issues.

Coping with the issues of distributed systems can lead to complexity in the implementation details. Less-skilled teams will suffer greatly in this case, especially when handling performance and reliability issues. Therefore, microservices is the architecture of the wise. Often, building a monolithic system first and then migrating to a more distributed system using microservices works best. This also gives teams more insight into the business processes and experience of handling the existing complexity, which comes in handy when segregating processes into services with clearer boundaries.

When multiple isolated components work together in a system, version control becomes a problem, as with all distributed systems. Dependencies between services create issues when incorporating changes, and this only grows as the system grows. Microservices emphasises simpler interfaces and handshakes between services and advocates immutable designs to resolve versioning issues, though this still requires managing and supporting multiple versions of services at times.

Last words

There is no binary choice between monolithic and distributed systems. Both have their advantages and shortcomings. The choice of architecture depends heavily upon the business structure, team capabilities and skill sets, and the dispersed knowledge of the business models. Microservices does solve a lot of the problems classic single-package systems face, but it comes at a cost. More than the architectural choices, it is the culture of the team and the mindset of its people that makes the difference between success and failure; there is no holy grail.


Further reading



Google 🙂


Azure Functions: Build an ecommerce processor using Braintree’s API


In this blog I am continuing with my series covering useful scenarios for using Azure Functions – today I’m going to cover how you can process payments by using Functions with Braintree’s payment gateway services.

I’m not going to go into the details of setting up and configuring your Braintree account, but what I will say is that the model you should apply to make this scenario work is one where your Function plays the Server role as documented in Braintree’s configuration guide.

My sample below is pretty basic, and for the Function to be truly useful (and secure) to use you will need to consider a few things:

  1. Calculate the total monetary amount to charge elsewhere and pass it to the Function as an argument (my preference is via a message on a Service Bus Queue or Topic). Don’t do the calculation here – make the Function do precisely…
