A Closer Look at Amazon Chime

In news this quarter, AWS has released a web conferencing cloud service to add to its existing 'Business Productivity' services, which already include Amazon WorkDocs and Amazon WorkMail. My thought was to help you gauge where this sits in relation to Skype for Business. I don't want to turn this into a Microsoft versus Amazon review, but I do want you to understand the product names that 'somewhat' align with each other.

Exchange = WorkMail

SharePoint/OneDrive for Business  =  WorkDocs

Skype for Business  = Chime

The Microsoft products are reasonably well known in the world, so I'll give you a quick one-liner about the Amazon products:

WorkMail “Hosted Email”

WorkDocs “Hosted files accessible via web, PC, mobile devices with editing and sharing capability”

So what is Chime?

Chime isn't an exact one-to-one feature match for Skype for Business, so be wary of articles conveying that sentiment; they haven't really done their homework. Chime can be called a web meeting, online meeting, or online video conferencing platform. Unlike Skype for Business, Chime is not a PBX replacement. So what does it offer?

  • Presence
  • 1-1 and group IM
  • Persistent chat / chat rooms
  • Transfer files
  • Group HD video calling
  • Schedule meetings using Outlook or Google calendar
  • Join meeting from desktop or browser
  • Join meeting anonymously
  • Meeting controls for presenter
  • Desktop sharing
  • Remote desktop control
  • Record audio and video of meetings
  • Allow participants to dial-in with no Chime client (PSTN conferencing bridge)
  • Enable legacy H.323 endpoints to join meetings (H.323 bridge)
  • Customisable / personalised meeting space URL

The cloud-hosted products I see as similar to Chime are WebEx, Google Hangouts, GoToMeeting and ReadyTalk, to name just a few. Here at Kloud we are Microsoft Gold Partners with a Unified Communications team that delivers Skype for Business solutions rather than the other products mentioned above, so I'll tell you a few things that differentiate Skype for Business from Chime.

  • Direct Inward Dial (DID) user assignment
  • Call forwarding and voicemail
  • Automated Call Distribution (ACD)
  • Outbound PSTN dialling and routing based on policy assignment
  • Interactive Voice Response (IVR)
  • Hunt Groups / Call Pickup Groups
  • Shared Line Appearance
  • SIP IP-Phone compatibility
  • Attendant / Reception call pickup
  • On-premises, hybrid and hosted deployment models

Basically, Skype for Business will do all the things needed to replace a PBX solution. This isn't a cheap shot at Amazon, because that isn't where they are positioning their product. What I hope I've done is clarify any misconception about where the product sits in the market and how it relates to features in a well-known product like Microsoft Skype for Business.

For what Amazon is charging, Chime is very competitive against other hosted products in the online meeting market. Their 'Rolls-Royce' plan is a flat $15 USD per user per month. If you're not invested in the Office 365 E3/E5 licence ecosystem and you need an online meeting platform at a low cost, then Chime might be right for you. Amazon offers a free 30-day trial that is a great way to test it out.



UX Process: A groundwork for effective design teams

User Experience practice is about innovating and finding solutions to real-world problems. This means we need to find problems and validate them before trying to fix them. So how do we go about doing all this? Read on…

I've been asked to explain a "good UX process" numerous times over my years in consulting. Customers want a formula, per se, that can solve all their design problems. Unfortunately, it doesn't exist; there is no set UX process that applies to all.

Every organisation and its problems are unique. They all require different sets of UX activities to determine a positive outcome.

However, there are some general guidelines on:

  • What type of UX artefacts can we deliver?
  • Who do we engage and collaborate with?
  • What kind of UX activities / workshops can we suggest?

To answer the above, I put together a general UX design process with the help of my colleagues a few years back. So here goes.


Phase I – Discovery

People Involved
  1. Product Owner (whoever is funding the project)
  2. Project Manager (whoever is overseeing the project)
  3. Business Analyst (whoever is managing different teams)
  4. Analytics
  5. Marketing
  6. Information Technology
  7. User Experience Designer

Artefacts
  1. Problem Statements
  2. User Needs
  3. Design Principles
  4. Benchmarking
  5. Personas
  6. Service Maps
  7. Hypothesis

Activities
  1. Discover pain points
  2. Discuss solutions to pain points (utopia)
  3. Analyse competitors in a similar space
  4. Discover potential constraints (IT or culture related)
  5. Come up with a basic information architecture (homepage elements, navigation and unique pages)

Phase II – Ideation and Concept

People Involved
  1. Product Owner
  2. Project Manager
  3. Business Analyst
  4. Developers
  5. Information Technology
  6. User Experience Designer
  7. SEO

Artefacts
  1. Concept Vision
  2. High-Level Requirements
  3. UX Estimates
  4. Dev Estimates
  5. UX Epics
  6. Story Boards
  7. Experience Maps
  8. Navigation

Activities
  1. User Testing
  2. Feasibility Prototyping
  3. Workshop Facilitation

Phase III – Design and Build

People Involved
  1. Project Manager
  2. Business Analyst
  3. Marketing
  4. Developers
  5. User Experience Designer
  6. Visual Designer
  7. SEO

Artefacts
  1. Wireframes
  2. Visual Designs
  3. User Interface Specs
  4. Process Flows

Activities
  1. Collaborative Design Sessions
  2. 6-ups Designs
  3. Rapid User Testing
  4. Wireframes
  5. UI Trends

Phase IV – Measure and Respond

People Involved
  1. Project Manager
  2. Analytics
  3. Developers
  4. SEO

Artefacts
  1. UI Improvements
  2. UX Enhancements

Activities
  1. Advanced Analytics
  2. Collaborative Design Sessions
  3. User Testing (A/B Testing)

The best way to use this UX process is to first understand your client's requirements, then extract the bits that suit your needs and take it from there.

I hope you guys find this useful!

Back to Basics – Design Patterns – Part 2

In the previous post, we discussed design patterns, their structure and usage. Then we discussed the three fundamental types and started off with the first one – Creational Patterns.

Continuing with creational patterns, we will now discuss the Abstract Factory pattern, which is considered a superset of Factory Method.

Abstract Factory

In the Factory Method post, we discussed how it targets a single family of subclasses using a corresponding set of factory classes, or a single factory class with a parametrised/static factory method. But if we target families of related classes (multiple abstractions, each with its own subclasses) and need to interact with them through a single abstraction, then factory method will not work.

A good example is the creation of doors and windows for a room. A room could offer a combination of a wooden door, sliding door etc. and a wooden window, glass window etc. The client code will, however, interact with a single abstraction (the abstract factory) to create the desired door and window combination based on selection/configuration. This is a good candidate for an Abstract Factory.

So abstract factory allows instantiation of families (plural) of related classes through a single interface (the abstract factory), independent of the underlying concrete classes.


When a system needs to use families of related or dependent classes, it might need to instantiate several subclasses. This leads to code duplication and complexity. Taking the above example of a room, the client code would need to instantiate the door and window classes for one combination, then do the same for the others, one by one. This breaks the abstraction of those classes, exposes their internals, and puts the instantiation complexity on the client. Even if we used a factory method for every single family of classes, we would still end up with several factory methods unrelated to each other, and managing them to offer combinations (rooms) would clutter the code.

We will use abstract factory when:

–          A system is using families of related or dependent objects without any knowledge of their concrete types.

–          The client does not need to know the instantiation details of subclasses.

–          The client does not need to use subclasses in a concrete way.


There are four components of this pattern.

–          Abstract Factory

The abstraction the client interacts with to create door and window combinations. This is the core factory that provides the interface the individual factories implement.

–          Concrete Factories

These are the concrete factories (CombinationA, CombinationB) that create the concrete products (doors and windows).

–          Abstract Products

These are the abstract products that will be visible to the client (AbstractDoor & AbstractWindow).

–          Concrete Products

The concrete implementations of the products offered: WoodenDoor, WoodenWindow etc.




Sample code

Using the above example, our implementation would be:

    public interface Door
    {
        double GetPrice();
    }

    class WoodenDoor : Door
    {
        public double GetPrice()
        {
            return 100; // placeholder price
        }
    }

    class GlassDoor : Door
    {
        public double GetPrice()
        {
            return 120; // placeholder price
        }
    }

    public interface Window
    {
        double GetPrice();
    }

    class WoodenWindow : Window
    {
        public double GetPrice()
        {
            return 60; // placeholder price
        }
    }

    class GlassWindow : Window
    {
        public double GetPrice()
        {
            return 80; // placeholder price
        }
    }

The concrete classes and factories should ideally have protected or private constructors and appropriate access modifiers, e.g.

    protected WoodenWindow()
    {
    }

The factories would be like:

    public interface AbstractFactory
    {
        Door GetDoor();
        Window GetWindow();
    }

    class CombinationA : AbstractFactory
    {
        public Door GetDoor()
        {
            return new WoodenDoor();
        }

        public Window GetWindow()
        {
            return new WoodenWindow();
        }
    }

    class CombinationB : AbstractFactory
    {
        public Door GetDoor()
        {
            return new GlassDoor();
        }

        public Window GetWindow()
        {
            return new GlassWindow();
        }
    }

And the client:

    public class Room
    {
        Door _door;
        Window _window;

        public Room(AbstractFactory factory)
        {
            _door = factory.GetDoor();
            _window = factory.GetWindow();
        }

        public double GetPrice()
        {
            return this._door.GetPrice() + this._window.GetPrice();
        }
    }

    AbstractFactory woodFactory = new CombinationA();
    Room room1 = new Room(woodFactory);

    AbstractFactory glassFactory = new CombinationB();
    Room room2 = new Room(glassFactory);


The above showcases how an abstract factory can be utilised to instantiate and use related or dependent families of classes via their respective abstractions, without having to know or understand the corresponding concrete classes.

The Room class only knows about the Door and Window abstractions and lets the configuration/client code input dictate which combination to use at runtime.

Sometimes an abstract factory also uses a Factory Method or a Static Factory Method for factory configuration:

    public static class FactoryMaker
    {
        public static AbstractFactory GetFactory(string type) // some configuration
        {
            // configuration switches
            if (type == "wood")
                return new CombinationA();
            else if (type == "glass")
                return new CombinationB();
            else // default or fault config
                return null;
        }
    }

Which changes the client:

AbstractFactory factory = FactoryMaker.GetFactory("wood"); // configuration or input

Room room1 = new Room(factory);

As can be seen, polymorphic behaviour is at the core of these factories, as is the usage of related families of classes.


Creational patterns, particularly factories, can work along with other creational patterns. Abstract Factory provides:

–          Isolation of the creation mechanics from the usage of related families of classes.

–          Adding new products/concrete types affects the configuration/factory code rather than the client code.

–          A way for the client to work with abstractions instead of concrete types. This gives the client code flexibility across the related use cases.

–          Usage of abstractions reduces dependencies across components and increases maintainability.

–          A design often starts with Factory Method and evolves towards Abstract Factory (or other creational patterns) as the families of classes expand and their relationships develop.


Abstract factory does introduce some disadvantages in the system.

–          It has a fairly complex implementation, and as the families of classes grow, so does the complexity.

–          Relying heavily on polymorphism requires expertise for debugging and testing.

–          It introduces factory classes, which can be seen as added workload without any direct purpose except the instantiation of other classes, particularly in bigger systems.

–          Factory structures are tightly coupled with the relationships of the families of classes. This introduces maintainability issues and a rigid design.

For example, adding a new type of window or door in the above example would not be as easy. Adding another family of classes, like Carpet and its subtypes, would be even more complex, but this does not affect the client code.
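To make that extension cost concrete, here is a minimal sketch (Python used for brevity; the Carpet family and the prices are invented, and Window is omitted) showing that adding a new product family forces a change in the abstract factory and in every concrete factory, while client code that depends only on the abstractions is untouched.

```python
from abc import ABC, abstractmethod

# Abstract products
class Door(ABC):
    @abstractmethod
    def get_price(self) -> float: ...

class Carpet(ABC):  # the NEW product family being added
    @abstractmethod
    def get_price(self) -> float: ...

# Concrete products (placeholder prices)
class WoodenDoor(Door):
    def get_price(self) -> float:
        return 120.0

class WoolCarpet(Carpet):
    def get_price(self) -> float:
        return 60.0

class AbstractFactory(ABC):
    @abstractmethod
    def get_door(self) -> Door: ...

    # Supporting Carpet forces a new method here AND in every
    # concrete factory below: the rigidity discussed above.
    @abstractmethod
    def get_carpet(self) -> Carpet: ...

class CombinationA(AbstractFactory):
    def get_door(self) -> Door:
        return WoodenDoor()

    def get_carpet(self) -> Carpet:  # every combination must now implement this
        return WoolCarpet()

# Client code depends only on the abstractions and is unaffected:
def room_price(factory: AbstractFactory) -> float:
    return factory.get_door().get_price() + factory.get_carpet().get_price()

print(room_price(CombinationA()))  # 180.0
```

The ripple effect is confined to the factory hierarchy; a Room-style client keeps working against AbstractFactory without modification.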


Abstract factory is a widely used creational pattern, particularly because of its ability to handle the instantiation of several families of related classes. This is helpful in real-world solutions where entities are often interrelated and used in combination across a variety of use cases.

Abstract factory simplifies designs targeting business processes by eliminating the concrete types and replacing them with abstractions, while maintaining their encapsulation and removing the added complexity of object creation. It also reduces a lot of duplicate code on the client side, making business processes testable, robust and independent of the underlying concrete types.

In the third part of creational patterns, we will discuss a pattern slightly similar to Abstract Factory: the Builder. The two are sometimes competitors in design decisions but differ in real-world applications.



Back to Basics – Design Patterns – Part 1

Design Patterns

Design patterns are reusable solutions to recurring problems of software design and engineering in the real world. Patterns make it easier to reuse proven techniques to resolve design and architectural complications, and to communicate and document them with better understanding, making them more accessible to developers in an abstract way.


Design patterns enhance the classic techniques of object-oriented programming by encouraging the reusability and communication of solutions to common problems at abstract levels, and improve the maintainability of code as a by-product at implementation levels.

The “Y” in Design Patterns

Apart from the obvious advantage of providing better techniques to map the real world into programming models, a prime objective of OOP is design and code reusability. However, this is easier said than done. In reality, designing reusable classes is hard and takes time, and few developers write code with long-term reusability in mind. This becomes obvious when dealing with recurring problems in design and implementation. This is where design patterns come into the picture: dealing with problems that seem to appear again and again. Any proven technique that provides a solution to a recurring problem in an abstract, reusable way, independent of implementation barriers like programming language details and data structures, is categorised as a design pattern.

Design patterns:

  • help analyse common problems in a more abstract way.
  • provide proven solutions to those problems.
  • help decrease overall coding time at the implementation level.
  • encourage code reusability by providing common solutions.
  • increase code lifetime and maintainability by enhancing the capacity for change.
  • increase the understanding of solutions to recurring problems.

A pattern will describe a problem, provide its solution at an abstract level, and elaborate the result.


The problem part of a pattern describes the issue(s) a program or piece of code is facing, along with its context. It might highlight an inflexible class structure, or issues related to the usage of an object, particularly at runtime.


A solution is always defined as a template: an abstract design that describes its element(s), their relationships and responsibilities, and details how this abstract design addresses the problem at hand. A pattern never provides a concrete implementation of the solution, which enhances its reusability and flexibility. The actual implementation of a pattern might vary across programming languages.

Understanding software design patterns requires a respectable knowledge of object-oriented programming concepts like abstraction, inheritance, and polymorphism.

Types of Design Patterns

Design patterns are often divided into three fundamental types.

  • Creational – deals with the creation/ instantiation of objects specific to business use cases. Polymorphic concepts, along with inheritance, are the core of these patterns.
  • Structural – targets the structure and composition of classes. Heavily relies upon the inheritance and composition concepts of OOP.
  • Behavioural – underlines the interaction between classes, separation and delegation of responsibilities.


Creational Patterns

Often the implementation and usage of a class, or a group of classes, is tightly coupled with the way objects of those classes are created. This decreases the flexibility of those classes, particularly at runtime. For example, say we have a group of cars providing the functionality of driving. The creation of each car requires a piece of code: a new-constructor call in most modern languages. This couples the creation of cars with their usage:

Holden car1 = new Holden();


Mazda car2 = new Mazda();


Even if we use base type;

Car car1 = new Holden();


Car car2 = new Mazda();


If you look at the above examples, you will notice that the actual class being instantiated is selected at compile time. This creates problems when designing common functionality and forces hardwired code into the usage based on concrete types. It also exposes the constructors of the classes, which breaks their encapsulation.

Creational patterns provide the means of creating and using objects of related classes without having to identify their concrete types or expose their creation mechanism. This gives more flexibility in the usage of the instantiated objects at runtime without worrying about their types. It also results in less code and eliminates the creational complexity at the usage site, allowing the code to focus on what to do rather than what to create.

CarFactory[] factories = <Create factories>;

foreach (CarFactory factory in factories)
{
    Car car = factory.CreateCar();
}
The above code removes the creational logic and delegates it to subclasses and factories. This gives flexibility in the usage of classes independent of their creation, and lets the runtime dictate the instantiated types, while making the code independent of the number of concrete types (cars) to be used. This code will work regardless of the number of concrete types, enhancing reusability and the separation of creation from usage. We will discuss the above example in detail in the Factory Method pattern.

The two common traits of creational patterns are:

  • Encapsulation of concrete types and exposure using common abstractions.
  • Encapsulation of instantiation and encouraging polymorphic behaviour.

A system leveraging creational patterns does not need to know or understand concrete types; it handles abstractions only (interfaces, abstract classes). This gives flexibility in configuring a set of related classes at runtime, based on use cases and requirements, without having to alter the code.

There are five fundamental creational patterns:

  • Factory method
  • Abstract factory
  • Prototype
  • Singleton
  • Builder

Factory Method

This pattern specifies a way of creating instances of related classes but lets the subclasses decide which concrete type to instantiate at runtime; it is also called Virtual Constructor. The pattern encourages the use of interfaces and abstract classes over concrete types. The decision is based upon input supplied by either the client code or configuration.


When client code instantiates a class, it knows the concrete type of that class. This breaks the polymorphic abstraction when dealing with a family of classes. We may use a factory method when:

  • Client code doesn't know the concrete types of subclasses to create.
  • The instantiation needs to be deferred to subclasses.
  • The job needs to be delegated to subclasses and client code does not need to know which subclass is doing the job.



Abstract <Product> (Car)

The interface/ abstraction the client code understands. This describes the family of subclasses to be instantiated.

<Product> (Holden, Mazda, …)

This implements the abstract <Product>; the subclass(es) that need to be created.

Abstract Factory (CarFactory)

This provides the interface/abstraction for the creation of the abstract product, called the factory method. This might also use configuration for creating a default product.

ConcreteFactory (HoldenFactory, MazdaFactory, …)

This provides the instantiation of the concrete subclass/ product by implementing the factory method.

Sample code

Going back to our earlier example of cars, we will provide a detailed implementation of the factory method.

    public abstract class Car // could be an interface instead, if no default behaviour is required
    {
        public virtual void Drive()
        {
            Console.Write("Driving a car");
        }
    }

    public class Holden : Car
    {
        public override void Drive()
        {
            Console.Write("Driving Holden");
        }
    }

    public class Mazda : Car
    {
        public override void Drive()
        {
            Console.Write("Driving Mazda");
        }
    }

    public interface ICarFactory // this will be a class/abstract class if there is a default factory implementation
    {
        Car CreateCar();
    }

    public class HoldenFactory : ICarFactory
    {
        public Car CreateCar()
        {
            return new Holden();
        }
    }

    public class MazdaFactory : ICarFactory
    {
        public Car CreateCar()
        {
            return new Mazda();
        }
    }

Now the client code could be:

var factories = new ICarFactory[2];

factories[0] = new HoldenFactory();
factories[1] = new MazdaFactory();

foreach (var factory in factories)
{
    var car = factory.CreateCar();
    car.Drive();
}

Now we can keep introducing new Car types and the client code will behave the same way for this use case. The creation of factories could be abstracted further by using a parametrised factory method, replacing the individual factories with a single CarFactory class.


    public class CarFactory
    {
        public virtual Car CreateCar(string type) // any configuration or input from client code
        {
            // switches based on the configuration or input from client code
            if (type == "holden")
                return new Holden();
            else if (type == "mazda")
                return new Mazda();
            else // default instantiation/ fault condition etc.
                return null; // default/ throw etc.
        }
    }


Which will change the client code:


CarFactory factory = new CarFactory();

Car car1 = factory.CreateCar("holden"); // configuration/ input etc.

Car car2 = factory.CreateCar("mazda"); // configuration/ input etc.


The above code shows the flexibility of the factory method, particularly the Factory class.


Factory method is the simplest of the creational patterns that targets the creation of a family of classes.

  • It separates the client code from a family of classes by weakening the coupling and abstracting the concrete subclasses.
  • It reduces changes in client code due to changes in concrete types.
  • It provides a configuration mechanism for subclasses by moving their instantiation out of the client code and into the factory method(s).
  • The default constructors of the subclasses can be marked protected/private to prevent direct creation.


There are some disadvantages that should be considered before applying factory method:

  • Factory method demands an individual factory implementation for every subclass in the family. This might introduce unnecessary complexity.
  • A concrete parametrised factory leverages switch conditions to identify the concrete type to be instantiated. This introduces cluttered and hardwired code in the factory: any change in the configuration/client input, or in the implementation of the concrete types, demands a review of the factory method.
  • The subtypes must be of the same base type. Factory method cannot handle subtypes of different base types; that requires more complex creational patterns.


Factory method is simple to understand and implement in a variety of languages. The most important consideration is to look for many subclasses of the same base type/interface being handled at that abstract level in a system. Conditions like the below could warrant a factory method implementation:

Car car1 = new Holden();


Car car2 = new Mazda();


. . .


On the other hand, if you have some concrete usage of subclasses like:


if (car.GetType() == typeof(Holden))
{
    // concrete, type-specific handling
}

then factory method might not be the answer, and you should perhaps reconsider the class hierarchy.

In part 2, we will discuss the next creational pattern, considered a superset of factory method: Abstract Factory.



Microservices – An Agile Architecture Introduction

In the ever-evolving and dynamic world of software architecture, you hear new buzzwords every other month. Microservices is the latest of them; though not as new as it appears, it has been around for some time now in different forms.

Microservices – The micro of SOA?

        "… the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies." – Martin Fowler

So, to break it all down, microservices is a form of fine-grained service-oriented architecture (SOA) that can be used to build independent, flexible, scalable, and reusable software applications in a decoupled manner. It relies heavily on separately deployable and maintainable subsystems/processes that communicate over a network using technology-agnostic protocols. That, in turn, paves the way for DevOps and continuous deployment pipelines.
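To ground "communicating with lightweight mechanisms, often an HTTP resource API", here is a toy sketch (Python standard library only; the "pricing" service, its route and its payload are invented for illustration). One tiny service runs in its own thread and a consumer calls it over HTTP/JSON; either side of that boundary could be rewritten in any language or backed by any data store without the other noticing.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny "pricing" service: one business capability, one HTTP resource.
class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": self.path.strip("/"), "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PricingHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The consumer only knows the URL and the JSON contract, not the
# implementation language or database behind the service.
url = f"http://127.0.0.1:{server.server_port}/ABC-123"
with urllib.request.urlopen(url) as resp:
    quote = json.load(resp)

print(quote["price"])  # 9.99
server.shutdown()
```

In a real deployment the two sides would be separate processes on separate hosts; the point is that the only coupling between them is the URL and the JSON contract.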

Principia Microservices – in a nutshell

You will not find a defined set of characteristics of a microservices-based system; a formal definition is still missing. We can, however, outline the most commonly mentioned ones:

–          The services are small, fine-grained representations of single business functions.

–          They are designed and maintained around business capabilities.

–          They are flexible, robust, replaceable, and independent of each other.

–          They are designed to embrace faults and failures. An unavailable service cannot bring the system down. Emphasis is on monitoring and logging.

–          They advocate the practices of continuous improvement, continuous integration and continuous deployment.

Services as Components

Defining software systems as sets of smaller, independently maintainable components has always been a goal of well-defined architectures. Current programming languages lack a mechanism for explicit boundary definitions between components, and often it is just documentation and published best practices/code reviews that stop a developer from mixing logical components in a monolithically designed application.

This differs a bit from the idea of libraries. Libraries are in-process components of an application that, though logically separate, cannot be deployed and maintained independently.

In microservices, components are explicitly defined as independent and autonomously deployable services. This enables continuous processes (CI and CD), enhances the maintainability of individual components, increases reusability, and improves the overall robustness of the system. It also enables a better realisation of business processes in the application.



Built for Business


Most of the time, organisations looking to build a decoupled system focus on technical capabilities. This leads to teams of UI devs, backend devs, DB devs etc., and in turn to siloed architectures and applications.

Microservices does things differently. The organisation splits teams according to business capabilities. This results in cross-functional, dynamic and independent teams with full technical capabilities.




A consequence of a monolithic design is central management and support of the whole system. This creates problems in scalability and reusability. These systems tend to favour a single set of tools and technologies.

Microservices advocates the "you build it, you run it" approach. Teams are designed to take end-to-end responsibility for the product. This disperses capabilities across the teams and helps tie the services to business capabilities and functions. It also enables the selection of underlying technologies based on needs and availability.



Designed for Failure

A failure in a monolithic design can be a catastrophe if not dealt with gracefully and with a fallback approach. Microservices, however, are designed for failure from the beginning. Because of their independent and granular nature, the overall user experience in case of a failure tends to be manageable: the unavailability of one service does not bring the whole system down. It is also easier to drill down in microservices to identify the root cause, because of their agility and explicit boundaries. Real-time monitoring, logging and other cross-cutting concerns are easier to implement and are heavily emphasised.
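The "embrace failure" idea can be reduced to a small sketch (the remote call, retry count and fallback value below are all hypothetical; real systems add jitter, timeouts and circuit breakers): each call to another service is wrapped with retries and a fallback, so one unavailable service degrades a single feature instead of failing the whole response.

```python
import time

def call_with_fallback(remote_call, fallback, retries=2, delay=0.01):
    """Try a remote call a few times; degrade gracefully on failure."""
    for attempt in range(retries + 1):
        try:
            return remote_call()
        except ConnectionError:
            if attempt < retries:
                time.sleep(delay)  # naive backoff for the sketch
    return fallback  # e.g. cached or default data instead of an error page

# A hypothetical recommendations service that is currently down:
def broken_recommendations():
    raise ConnectionError("recommendations service unavailable")

# The page still renders, just with a generic list:
recs = call_with_fallback(broken_recommendations, fallback=["top-sellers"])
print(recs)  # ['top-sellers']
```

The same wrapper leaves healthy services untouched: a call that succeeds returns its own result immediately.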

      “If we look at the characteristics of an agile software architecture, we tend to think of something that is built using a collection of small, loosely coupled components/services that collaborate together to satisfy an end-goal. This style of architecture provides agility in several ways. Small, loosely coupled components/services can be built, modified and tested in isolation, or even ripped out and replaced depending on how requirements change. This style of architecture also lends itself well to a very flexible and adaptable deployment model, since new components/services can be added and scaled if needed.” — Simon Brown

Microservices architecture tends to result in independent products/ services designed and implemented using different languages, databases, and hardware and software environments, as per the capabilities and requirements. They are granular, with explicit boundaries, autonomously developed, independently deployable, decentralized and built and released with automated processes.

Issues – Architecture of the wise

However, there are certain drawbacks to microservices as well: a higher cost of communication because of network latency, runtime overhead (message processing) as opposed to in-process calls, and blurry definitions of boundaries between services. A poorly designed application based on microservices can yield more complexity, since the complexity is only shifted onto the communication protocols, particularly in distributed transactional systems.

“You can move it about but it’s still there” — Robert Annett: Where is the complexity?

Decentralized management, especially of databases, has implications for updates and patches, apart from transactional issues. It can lead to inconsistent databases and needs well-defined processes for management, updates, and transactional data consistency.


Overall system performance often demands serious consideration. Since microservices target distributed applications in isolated environments, the communication cost between components tends to be higher, which leads to lower overall system performance if the system is not designed carefully. Netflix is a leading example of how a distributed system based on microservices can outperform some of the best monolithic systems in the world.

Microservices tend to be small and stateless; yet business processes are often stateful and transactional in nature, so dividing business processes and data models can be challenging. Poorly partitioned business processes create blurry boundaries and responsibility issues.

The potential for failure and unavailability rises significantly in distributed systems built from isolated components communicating over networks, and microservices can suffer the same fate because of their high dependence on isolated components/services. Systems designed with little or no attention to Design for Failure can incur higher running costs and unavailability issues.
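A common design-for-failure tactic is to retry transient failures with backoff rather than letting a single unavailable dependency cascade. A minimal shell sketch (the flaky service below is simulated, and all names are illustrative):

```shell
#!/bin/sh
# Retry a flaky call with exponential backoff -- a minimal
# design-for-failure sketch. The "service" is simulated.
retry_with_backoff() {
  max_attempts=$1; shift
  delay=1
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1   # caller can now degrade gracefully
    fi
    sleep "$delay"
    delay=$((delay * 2))       # exponential backoff
    attempt=$((attempt + 1))
  done
  echo "succeeded on attempt $attempt"
}

# Simulate a service that fails twice, then recovers.
calls=0
flaky_service() { calls=$((calls + 1)); [ "$calls" -ge 3 ]; }

retry_with_backoff 5 flaky_service   # prints "succeeded on attempt 3"
```

In a real system the same idea usually sits behind a circuit breaker as well, so a persistently failing service is cut off rather than hammered with retries.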

Coping with the issues of distributed systems can add complexity to implementation details. Less skilled teams will suffer greatly here, especially when handling performance and reliability issues. Therefore, microservices is the architecture of the wise. Often, building a monolithic system first and then migrating to a more distributed system using microservices works best. This also gives teams more insight into the business processes and experience in handling existing complexities, which comes in handy when segregating processes into services with clearer boundaries.

When we have multiple isolated components working together in a system, version control becomes a problem, as with all distributed systems. Services' dependencies upon each other create issues when incorporating changes, and this only grows as the system grows. Microservices emphasise simpler interfaces and handshakes between services and advocate immutable designs to resolve versioning issues. This still requires managing and supporting multiple versions of a service at times.

Last words

There is no zero-one choice between monolithic and distributed systems; both have their advantages and shortcomings. The choice of architecture depends heavily on the business structure, team capabilities and skill set, and the dispersed knowledge of the business models. Microservices do solve a lot of the problems classical one-package systems face, but they come at a cost. More than the architectural choices, it is the culture of the team and the mindset of people that make the difference between success and failure – there is no holy grail.


Further reading



Google 🙂


Azure Functions: Build an ecommerce processor using Braintree’s API


In this blog I am continuing with my series covering useful scenarios for using Azure Functions – today I’m going to cover how you can process payments by using Functions with Braintree’s payment gateway services.

I’m not going to go into the details of setting up and configuring your Braintree account, but what I will say is that the model you should apply to make this scenario work is one where your Function plays the Server role as documented in Braintree’s configuration guide.

My sample below is pretty basic, and for the Function to be truly useful (and secure) to use you will need to consider a few things:

  1. Calculate the total monetary amount to charge elsewhere and pass it to the Function as an argument (my preference is via a message on a Service Bus Queue or Topic). Don’t do the calculation here – make the Function do precisely…


Six Competencies For Strong CX Management

Customer Experience Management is bloody challenging. Delivering a consistent (and pleasurable) experience across dozens of channels requires well-defined CX practices that are deeply ingrained within the organisational culture.

Forrester first published its CXM maturity framework in 2011, defining the CX practices that every firm needs to master. Naturally, a lot of companies jumped on board with the framework. For some it was hugely successful; for others, not so much.

Now, five years on, Forrester has published an updated CX Index based on interviews with 19 companies, new research, and lessons learned from helping clients over time. In this post, I will discuss the updated CXM framework and the six competencies required for strong CX management.

84% of brands got “OK” scores or worse from customers, which is a clear sign that their current approach to CXM needs work. Forrester’s CX Index, Q3 2015

Customer Understanding

No surprises here — the better we understand our customers, the better we can adapt to the changing landscape. Gaining insight into what customers are thinking, feeling, and doing will keep us one step ahead of our competition.

In the last five years, more and more companies have ramped up their UX capabilities to help them understand their target audience. Activities such as ethnographic studies, behavioural research and user interviews provide deep insights on how customers engage with our brand.

While working with YellowPages, we would not release a product (or iteration) without multiple rounds of user interviews and usability testing. Believe it or not, this saves a lot of time and re-work in the future.


Mature companies focus on what’s most important to their brand’s success. They don’t try to manage every nook and cranny of every customer interaction. Instead, they concentrate on the parts of CX that are critical to the business.

And this is where prioritisation comes in handy. Make a list of all the potential CX challenges and give it a ranking of severity. Tackle the ones that are on top of the list and make your way down. For example, human resources technology firm Aon focuses CX efforts on its self-service portal, in part because 90% of interactions happen there. But high usage isn’t the only reason that the portal gets top prioritisation. Good portal experiences satisfy two of the company’s three constituencies — end users (the client’s employees) and HR technology buyers. Users get the services they need, and buyers rest easy knowing that benefits issues aren’t distracting employees.


There is a reason why even banks such as CBA and ANZ are focusing and investing more on human-centered design practices. There is no doubt that the first layer of customer interaction with your brand needs to be impressive and engaging.

The only way you can achieve that in today’s design-agnostic economy is by understanding what role design plays in people’s lives and how we can leverage it to create an experience that not only meets but exceeds their expectations.

Marriott Group (of hotels) uses human-centered design to expand high-level concepts like “lively social spaces” into CX blueprints that employees and partners can actually execute. Designers ground their work in primary research, like the 300-guest diary study that digital teams used to figure out which “mobile moments” to include in an app redesign.  The Marriott Innovation Lab gives employees a place to prototype and test new experiences to reimagine the hotel lobby as a mobile-enabled social hub instead of just a place that guests pass through to get to their room.


Helmuth von Moltke, a famous 19th century Prussian army field marshal, observed that “no battle plan survives first contact with the enemy.” As the same is so often true with CX designs, mature companies make sure that customers experience what designers intend them to.

That’s why Travelodge created one-page guides that explain the “right” way to do things like cook breakfast or clean a room. Each guide uses simple drawings, which it patterns after directions for assembling Ikea furniture, to explain the process. According to Andrew Archibald, director of CX, “Anyone can pick it up, even if they’ve never worked here before, and follow most of the documentation.” Regular guest feedback reports tell managers how well they’re hitting the mark. If data signals a failure to deliver CX as it’s designed, hotel employees own the issue. But if guests complain even when staff executed well, the problem lies in the design, which the centralised CX team takes ownership for fixing.


Customer perception metrics are the cornerstone of mature brands’ CX measurement programs. In 2012, CIBC chose Net Promoter Score (NPS) as its high-level CX metric for all client-facing areas of the bank, including branches, contact centres, and other teams like fraud and collection.

Continuously monitoring these scores lets CIBC leaders know that their CX efforts are working — NPS is up 5 to 10 points or more in every area that adopted it. BMO Bank of Montreal, a CIBC competitor, also uses NPS as its main CX metric. BMO leaders recently added four new CX metrics to the mix to measure each pillar of the bank’s CX vision. For example, the ideal BMO CX calls for employees to proactively reach out to customers with relevant offers, like a checking account that’s a better fit for their needs or a mortgage refinance with a lower interest rate. Surveys ask customers how proactive they feel that employees have been, with responses rolled up into a “proactivity score” on a weekly dashboard that managers at the company, region, and branch level track.


In mature companies, employees manage CX because it’s the right thing to do, not just because their boss tells them to do so. Customer-centric behaviours are ingrained into culture alongside traits like empathy, trust, fairness, and cooperation. And executives work hard to make sure that it stays that way.

For example, HubSpot’s “culture code” aligns job applicants and employees around core beliefs like, “Solve for the customer — not just their happiness, but their success,” and, “You shouldn’t penalise the many for the mistakes of the few”. A newly appointed vice president of culture and experience makes sure that the code doesn’t fade into obscurity as the company grows but keeps driving who and how HubSpot hires, how people work, and how managers evaluate performance.

Customer experience management is a blend of discipline and empathy; high levels of both create a mature, CX-driven organisation. Another key area to focus on is finding existing pockets of maturity and building on them, instead of trying to find what’s missing.

Experiences with the new AWS Application Load Balancer

Originally posted on Andrew’s blog @ cloudconsultancy.info


Recently I had an opportunity to test drive the AWS Application Load Balancer, as my client had a requirement to make their websocket application fault tolerant. The implementation was a complete Windows stack and utilised ADFS 2.0 for SAML authentication; however, this should not affect other implementations.

The AWS Application Load Balancer is a fairly new feature which provides layer 7 load balancing and support for HTTP/2 as well as websockets. In this blog post I will include examples of the configuration that I used, as well as some of the troubleshooting steps I needed to resolve.

The application load balancer is an independent AWS resource from classic ELB and is defined as aws elbv2 with a number of different properties.

Benefits of Application Load Balancer include:

  • Content-based routing, i.e. route /store to a different set of instances from /apiv2
  • Support for websockets
  • Support for HTTP/2, over HTTPS only (much larger throughput, as it’s a single multiplexed stream, which is great for mobile and other high-latency apps)
  • Roughly 10% cheaper than the Classic Load Balancer
  • Cross-zone load balancing is always enabled for the ALB

Some changes that I’ve noticed:

  • Load balancing algorithm used for application load balancer is currently round robin.
  • Cross-zone load balancing is always enabled for an Application Load Balancer and is disabled by default for a Classic Load Balancer.
  • With an Application Load Balancer, the idle timeout value applies only to front-end connections, not the LB-to-server connection, which prevents the LB cycling the connection.
  • The Application Load Balancer is exactly that and operates at layer 7, so if you want to perform SSL bridging, use the Classic Load Balancer with TCP and configure SSL certificates on your server endpoint.
  • A cookie-expiration-period value of 0 (deferring session timeout to the application) is not supported. I ended up configuring the stickiness.lb_cookie.duration_seconds value; I’d suggest making this 1 minute longer than the application session timeout, which in my example gave a value of 1860.
  • The X-Forwarded-For header is still supported and should be utilised if you need to track client IP addresses, which is particularly useful when going through a proxy server.
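As a concrete illustration, the stickiness duration described above is set as a target group attribute via the elbv2 CLI. This is a sketch only: the target group ARN is a placeholder, and the command is printed for review rather than executed here.

```shell
# Sketch: applying the stickiness settings via the elbv2 CLI.
# TG_ARN is a placeholder for your real target group ARN.
TG_ARN="arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:targetgroup/example/0123456789abcdef"

# Build the command and print it for review instead of running it.
CMD="aws elbv2 modify-target-group-attributes --target-group-arn $TG_ARN \
--attributes Key=stickiness.enabled,Value=true \
Key=stickiness.type,Value=lb_cookie \
Key=stickiness.lb_cookie.duration_seconds,Value=1860"
echo "$CMD"
```

Remove the echo (or paste the printed command) to apply it against your account once you have confirmed the ARN.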

For more detailed information from AWS see http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html.

Importing SSL Certificate into AWS – Windows

(http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html )

  1. Convert the existing pkcs key into .pem format for AWS

You’ll need openssl for this, the pfx and the password for the SSL certificate.

I like to use chocolatey as my Windows package manager, similar to yum or apt-get for Windows, which is a saviour for downloading package and managing dependencies in order to support automation, but enough of that, check it out @ https://chocolatey.org/

Once choco is installed I simply execute the following from an elevated command prompt.

“choco install openssl.light”

Thereafter I run the following two commands, which break out the private and public keys (during which you’ll be prompted for the password):

openssl pkcs12 -in keyStore.pfx -out SomePrivateKey.key -nodes -nocerts

openssl pkcs12 -in keyStore.pfx -out SomePublic.cert -nokeys -clcerts

NB: I’ve found that sometimes copy and paste doesn’t work when trying to convert keys.

  2. Next you’ll need to break out the trust chain into one contiguous file, like the following.

-----BEGIN CERTIFICATE-----
(Intermediate certificate 2)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Intermediate certificate 1)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Optional: Root certificate)
-----END CERTIFICATE-----

Save the file for future use.


The example above is for a Thawte trust chain with the following properties:

“thawte Primary Root CA”, thumbprint 91 c6 d6 ee 3e 8a c8 63 84 e5 48 c2 99 29 5c 75 6c 81 7b 81

with intermediate

“thawte SSL CA - G2”, thumbprint 2e a7 1c 36 7d 17 8c 84 3f d2 1d b4 fd b6 30 ba 54 a2 0d c5

Ordinarily you’ll only have a root and an intermediate CA, although sometimes there will be a second intermediate CA.

Ensure that your certificates are base 64 encoded when you export them.
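Assembling the contiguous chain file is just concatenation in the right order. A sketch using placeholder certificate contents standing in for your real exported intermediates and root:

```shell
# Demo placeholders standing in for your real base64-encoded exports:
printf -- '-----BEGIN CERTIFICATE-----\nINTERMEDIATE-2\n-----END CERTIFICATE-----\n' > intermediate2.pem
printf -- '-----BEGIN CERTIFICATE-----\nINTERMEDIATE-1\n-----END CERTIFICATE-----\n' > intermediate1.pem
printf -- '-----BEGIN CERTIFICATE-----\nROOT\n-----END CERTIFICATE-----\n' > root.pem

# The actual step: intermediate 2, then intermediate 1, then
# (optionally) the root, in one contiguous file.
cat intermediate2.pem intermediate1.pem root.pem > TrustChain.pem
```

The order matters: the immediate signing certificate must come first, as the MalformedCertificate error below reminds you.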

  3. Finally, authenticate to the AWS CLI (v1.11.14+ is required for the aws elbv2 commands): run “aws configure” and supply your access and secret keys, region and output format. Then execute the certificate upload, which includes some of the above elements: the trust chain plus the public and private keys.
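For reference, the upload itself is the iam upload-server-certificate call (the operation named in the error below). A sketch with a placeholder certificate name, reusing the file names from the earlier openssl steps; the command is printed for review rather than executed here.

```shell
# Sketch of the IAM server-certificate upload. The certificate name is
# a placeholder; file names match the earlier openssl output. Printed
# for review instead of being run against a live account.
CMD="aws iam upload-server-certificate \
--server-certificate-name service-example-com \
--certificate-body file://SomePublic.cert \
--private-key file://SomePrivateKey.key \
--certificate-chain file://TrustChain.pem"
echo "$CMD"
```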

If you get an error like the one below:

“A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to validate certificate chain. The certificate chain must start with the immediate signing certificate, followed by any intermediaries in order. The index within the chain of the invalid certificate is: 2”

Please check the contents of the original root and intermediate files, as they probably still contain headers (and maybe some intermediate data) like the following:


Bag Attributes
    localKeyID: 01 00 00 00
    friendlyName: serviceSSL
subject=/C=AU/ST=New South Wales/L=Sydney/O=Some Company/OU=IT/CN=service.example.com
issuer=/C=US/O=thawte, Inc./CN=thawte SSL CA - G2
Bag Attributes
    friendlyName: thawte
subject=/C=US/O=thawte, Inc./OU=Certification Services Division/OU=(c) 2006 thawte, Inc. - For authorized use only/CN=thawte Primary Root CA
issuer=/C=US/O=thawte, Inc./OU=Certification Services Division/OU=(c) 2006 thawte, Inc. - For authorized use only/CN=thawte Primary Root CA

AWS Application LB Configuration

Follow this gist with comments embedded. Comments provided based on some gotchas during configuration.

You should now be good to go. The load balancer takes a little while to warm up, but it will be available within multiple availability zones.

If you have issues connecting to the ALB, validate connectivity directly to the server using curl. Again, chocolatey comes in handy: “choco install curl”.

Also, double-check the security group registered against the ALB and confirm the NACLs.

WebServer configuration

You’ll need to import the SSL certificate into the local computer certificate store. Some of the third-party issuing CAs (Entrust, Thawte, etc.) may not have the intermediate CA within the computer’s trusted Root CA store, especially if the server was built in a network isolated from the internet, so after installing the SSL certificate on the server make sure the trust chain is correct.

You won’t need to update the local hosts file on the servers to point to the load-balanced address.

Implementation using CNAMEs

In large enterprises where I’ve worked there have been long lead times associated with fairly simple DNS changes, which defeats some of the agility provided by cloud computing. A pattern I’ve often seen adopted is to use multiple CNAMEs to work around such lead times. Generally you’ll have a subdomain somewhere that the Ops team has more control over, or shorter lead times for. Within the target domain (example.com), create a CNAME pointing to an address within the ops-managed domain (aws.corp.internal), and have a CNAME created within that zone pointing to the ALB address, i.e.

Service.example.com -> service.aws.corp.internal -> elbarn.region.elb.amazonaws.com

With this approach I can update service.aws.corp.internal to reflect a new service which I’ve built via a new ELB and avoid the enterprise change lead times associated with a change in .example.com.

Auto-Acceleration for SharePoint Online

Working with one of my colleagues recently, we were tasked with implementing Smart Links to speed up the login processes for a client’s SharePoint Online implementation.

The client was working towards replacing their on-premises implementation of SharePoint and OpenSpaces with SharePoint Online. The issue they faced was that when a user tries to access a SharePoint Online site collection and is not already authenticated with Office 365, the user is directed to the default Microsoft Online login page. The user then provides their email address and something called home-realm discovery is performed. This is where the Microsoft Online login screen uses the domain portion of the user’s email address to determine whether the domain is managed by Office 365, in which case the user will need to provide a password. If the domain is federated (i.e. not managed by Office 365), the user is redirected to the Identity Provider (IdP) for the user’s organisation, i.e. AD FS. At this point, provided the internet security settings within the user’s web browser are correct, they will be authenticated using their logged-in credentials. Although this is the default SharePoint Online experience, it was different to the seamless on-premises SharePoint experience with Windows Integrated Authentication.

After spending a little time working through David Ross’ great blog on the topic, we discovered that due to a change in the URL construct with AD FS 3.0, the Smart Link process is no longer supported by Microsoft and has been replaced with Auto-Acceleration for SharePoint Online. This feature reduces logon prompts for the user by instructing SharePoint Online to use a pre-defined home-realm, rather than perform home-realm discovery, when a user accesses the site and “accelerating” the user through the logon process.

However, there are two caveats for getting Auto-Acceleration to work:

  • You can have multiple domains, but there must be a single SSO endpoint to authenticate users (AD FS, or a 3rd-party IdP such as Okta, can be used).
  • Auto-acceleration only works with sites that are accessible internally to the organisation; it will not work for external sites.

There isn’t much documentation around for this feature, however it’s pretty simple:

  1. If you haven’t already, download the SharePoint Online Management Shell https://www.microsoft.com/en-au/download/details.aspx?id=35588
  2. Connect to SharePoint Online using the SPO Management Shell:
    $adminUPN="[UPN for Office 365 Admin]"
    $orgName="[Your tenant name]"
    $userCredential = Get-Credential -UserName $adminUPN -Message "Type the password."
    Connect-SPOService -Url https://$orgName-admin.sharepoint.com -Credential $userCredential 
  3. Type the command Set-SPOTenant -SignInAccelerationDomain "yourdomain.com"
  4. If your IdP supports guest users, you can also run Set-SPOTenant -EnableGuestSignInAcceleration $true

AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded

I was recently working with a customer on cleaning up their Azure AD Connect synchronisation configuration.

Initially, the customer had enabled sync for all OUs in the forest (as a lot of companies do), and had now reached a point of maturity where they could look at optimising the solution.

We identified an OU with approximately 7000 objects which did not need to be synced.


I logged onto the AAD Connect server and launched the configuration utility. After authenticating with my Office 365 global admin account, I navigated to the OU sync configuration and deselected the required OU.

At this point, everything appeared to be working as expected. The configuration utility saved my changes successfully and started a delta sync based on the checkbox which was automatically selected in the tool. The delta sync also completed successfully.

I went to validate my results, and noticed that no changes had been made and no objects had been deleted from Azure AD.

It occurred to me that a full sync was probably required in order to force the deletion to occur. I kicked off a full synchronisation using the following command.

Start-ADSyncSyncCycle -PolicyType initial

When the sync cycle reached the export phase, however, I noticed that the task had thrown a stopped-deletion-threshold-exceeded error.


It would seem I’m trying to delete too many objects. Well that does make sense considering we identified 7000 objects earlier. We need to disable the Export deletion threshold before we can move forward!

Ok, so now we know what we have to do! What does the order of events look like? See below:

  1. Update the OU synchronisation configuration in the Azure AD Connect utility
  2. Deselect the run synchronisation option before saving in the AADC utility
  3. Run the following PowerShell command to disable the deletion threshold
    1. Disable-ADSyncExportDeletionThreshold
  4. Run the following PowerShell command to start the full synchronisation
    1. Start-ADSyncSyncCycle -PolicyType initial
  5. Wait for the full synchronisation cycle to complete
  6. Run the following PowerShell command to re-enable the deletion threshold
    1. Enable-ADSyncExportDeletionThreshold

I hope this helps save some time for some of you out there.