Azure AD Connect – Upgrade Errors

Azure AD Connect is the latest release of the Azure AD sync service, previously known as DirSync. It comes with some new features which make it even more efficient and useful in hybrid environments. Besides the many new features, the primary purpose of this application remains the same: to sync identities from your local (on-premises) AD to Azure AD.

Of late, I upgraded an AD Sync service installation to AD Connect, and during the install process I ran into a few issues which I felt are not widely discussed or posted on the web, yet are real-world scenarios people can face during AD Connect installation and configuration. Let’s discuss them below.

 

Installation Errors

The very first error I stumbled upon was a Sync service install failure. The installation process started smoothly: the Visual C++ package was installed and the SQL database created without any issue, but during the synchronization service installation the process failed and the below screen message was displayed.

Issue:

Event Viewer logs suggested that the installation process failed because the install package could not install the required DLL files. The primary reason suggested was that the install package was corrupt.

 

[Screenshot: Sync service installation error]

 

Actions Taken:

Though I was not convinced, for the sake of busting this theory I downloaded a new AD Connect install package and reinstalled the application, but unfortunately it failed at the same point.

Next, I switched from my domain account to another service account which was being used to run the AD Sync service on the current server. This account had higher privileges than mine, but unfortunately the result was the same.

Next, I started reviewing the application logs.

 

At first look, I found access denied errors logged in them. What was blocking the installation files? Yes, none other than the AV. I immediately contacted the security administrator and requested that AV scanning be temporarily stopped. The result was a smooth install on the next attempt.

I have shared below some of the related errors I found in the log files.

[Screenshots: access denied errors from the log files]

Configuration Errors:

One of the important configurations in AD Connect is the Azure AD account with global administrator permissions. If you are creating a new account for this purpose and have not yet logged on with it to change its first-time password, then you may face the below error.

[Screenshot: bad password error]

 

Nothing to panic about. All you need to do is log into the Azure portal using this account, change the password, and then enter the credentials with the newly set password into the configuration console.

Another error related to the Azure AD sync account was encountered by one of my colleagues, Lucian, who has beautifully narrated the whole scenario in one of his cool blogs here: Azure AD Connect: Connect Service error

 

Other Errors and Resolutions:

Before I conclude, I would like to share some more scenarios which you might face during install/configuration and post-install. My Kloudie fellows have done their best to explain them. Have a look, and happy AAD connecting.

 

Proxy Errors

Configuring Proxy for Azure AD Connect V1.1.105.0 and above

 

Sync Errors:

Azure AD Connect manual sync cycle with powershell, Start-ADSyncSyncCycle

 

AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded

 

Azure Active Directory Connect Export profile error: stopped-server-down

Back to Basics – Design Patterns – Part 2

In the previous post, we discussed design patterns, their structure and usage. Then we discussed the three fundamental types and started off with the first one – Creational Patterns.

Continuing with creational patterns, we will now discuss the Abstract Factory pattern, which is considered to be a superset of Factory Method.

Abstract Factory

In the Factory Method post, we discussed how it targets a single family of subclasses using a corresponding set of factory method classes, or a single factory method class via a parametrised/ static factory method. But if we target families of related classes (multiple abstractions, each having their own subclasses) and need to interact with them using a single abstraction, then Factory Method will not work.

A good example could be the creation of doors and windows for a room. A room could offer a combination of a wooden door, sliding door etc. and a wooden window, glass window etc. The client will, however, interact with a single abstraction (the abstract factory) to create the desired door and window combination based on selection/ configuration. This makes it a good candidate for an Abstract Factory.

So the abstract factory allows instantiation of families (plural) of related classes using a single interface (the abstract factory), independent of the underlying concrete classes.

Reasons

When a system needs to use families of related or dependent classes, it might need to instantiate several subclasses. This leads to code duplication and complexity. Taking the above example of a room, the client would need to instantiate the door and window classes for one combination and then do the same for the others, one by one. This breaks the abstraction of those classes, exposes their encapsulation, and puts the instantiation complexity on the client. Even if we use a factory method for every single family of classes, we still end up with several factory methods, unrelated to each other, and managing them to offer combinations (rooms) clutters the code.

We will use abstract factory when:

–          A system is using families of related or dependent objects without any knowledge of their concrete types.

–          The client does not need to know the instantiation details of subclasses.

–          The client does not need to use subclasses in a concrete way.

Components

There are four components of this pattern.

–          Abstract Factory

The abstraction the client interacts with to create door and window combinations. This is the core factory that provides the interface for the individual factories to implement.

–          Concrete Factories

These are the concrete factories (CombinationFactoryA, CombinationFactoryB) that create concrete products (doors and windows).

–          Abstract Products

These are the abstract products that will be visible to the client (AbstractDoor & AbstractWindow).

–          Concrete Products

The concrete implementations of products offered. WoodenDoor, WoodenWindow etc.

 

[Diagram: Abstract Factory components]

 

Sample code

Using the above example, our implementation would be:

    public interface Door
    {
        double GetPrice();
    }

    class WoodenDoor : Door
    {
        public double GetPrice()
        {
            return 150; // sample price, for illustration
        }
    }

    class GlassDoor : Door
    {
        public double GetPrice()
        {
            return 200; // sample price
        }
    }

    public interface Window
    {
        double GetPrice();
    }

    class WoodenWindow : Window
    {
        public double GetPrice()
        {
            return 80; // sample price
        }
    }

    class GlassWindow : Window
    {
        public double GetPrice()
        {
            return 120; // sample price
        }
    }

The concrete classes and factories should ideally have protected or private constructors and should have appropriate access modifiers. e.g.

    protected WoodenWindow()
    {
    }

The factories would be like:

    public interface AbstractFactory
    {
        Door GetDoor();
        Window GetWindow();
    }

    class CombinationA : AbstractFactory
    {
        public Door GetDoor()
        {
            return new WoodenDoor();
        }

        public Window GetWindow()
        {
            return new WoodenWindow();
        }
    }

    class CombinationB : AbstractFactory
    {
        public Door GetDoor()
        {
            return new GlassDoor();
        }

        public Window GetWindow()
        {
            return new GlassWindow();
        }
    }

And the client:

    public class Room
    {
        Door _door;
        Window _window;

        public Room(AbstractFactory factory)
        {
            _door = factory.GetDoor();
            _window = factory.GetWindow();
        }

        public double GetPrice()
        {
            return this._door.GetPrice() + this._window.GetPrice();
        }
    }

    AbstractFactory woodFactory = new CombinationA();
    Room room1 = new Room(woodFactory);
    Console.Write(room1.GetPrice());

    AbstractFactory glassFactory = new CombinationB();
    Room room2 = new Room(glassFactory);
    Console.Write(room2.GetPrice());

The above showcases how an abstract factory could be utilised to instantiate and use related or dependent families of classes via their respective abstractions, without having to know or understand the corresponding concrete classes.

The Room class only knows about the Door and Window abstractions and lets the configuration/ client code input dictate which combination to use at runtime.

Sometimes an abstract factory also uses a Factory Method or Static Factory Method for factory configuration:

    public static class FactoryMaker
    {
        public static AbstractFactory GetFactory(string type) // some configuration
        {
            // configuration switches
            if (type == "wood")
                return new CombinationA();
            else if (type == "glass")
                return new CombinationB();
            else // default or fault config
                return null;
        }
    }

Which changes the client:

    AbstractFactory factory = FactoryMaker.GetFactory("wood"); // configuration or input
    Room room1 = new Room(factory);

As can be seen, polymorphic behaviour is at the core of these factories, as is the usage of related families of classes.

Advantages

Creational patterns, particularly factories, can work along with other creational patterns. Abstract Factory provides:

–          Isolation of the creation mechanics from their usage for related families of classes.

–          Adding new products/ concrete types affects the configuration/ factory code rather than the client code.

–          A way for the client to work with abstractions instead of concrete types, which gives the client code flexibility across the related use cases.

–          Usage of abstractions reduces dependencies across components and increases maintainability.

–          Design often starts with Factory Method and evolves towards Abstract Factory (or other creational patterns) as the families of classes expand and their relationships develop.

Drawbacks

Abstract factory does introduce some disadvantages in the system.

–          It has a fairly complex implementation, and as the families of classes grow, so does the complexity.

–          Relying heavily on polymorphism requires expertise for debugging and testing.

–          It introduces factory classes, which can be seen as added workload without any direct purpose except the instantiation of other classes, particularly in bigger systems.

–          Factory structures are tightly coupled with the relationships of the families of classes, which introduces maintainability issues and a rigid design.

For example, adding a new type of window or door in the above example would not be as easy. Adding another family of classes, like Carpet and its subtypes, would be even more complex, though it would not affect the client code, as the sketch below shows.
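
To make this concrete, here is a minimal sketch (the Carpet family is hypothetical and only extends the earlier sample): adding it forces a change to the abstract factory interface, which breaks every concrete factory until each one implements the new method.

    public interface Carpet
    {
        double GetPrice();
    }

    public interface AbstractFactory
    {
        Door GetDoor();
        Window GetWindow();
        Carpet GetCarpet(); // new member: CombinationA, CombinationB, ... must all change
    }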

Conclusion

Abstract Factory is a widely used creational pattern, particularly because of its ability to handle the instantiation mechanism of several families of related classes. This is helpful in real-world solutions where entities are often interrelated and work in a blend across a variety of use cases.

Abstract Factory simplifies designs targeting business processes by eliminating the concrete types and replacing them with abstractions, while maintaining their encapsulation and removing the added complexity of object creation. It also reduces a lot of duplicate code on the client side, making business processes testable, robust and independent of the underlying concrete types.

In the third part on creational patterns, we will discuss a pattern slightly similar to Abstract Factory: the Builder. Sometimes the two can be competitors in design decisions, but they differ in real-world applications.

 

Further reading

 

http://www.dofactory.com/net/abstract-factory-design-pattern

http://www.oodesign.com/abstract-factory-pattern.html


Back to Basics – Design Patterns – Part 1

Design Patterns

Design patterns are reusable solutions to recurring problems of software design and engineering in the real world. Patterns make it easier to reuse proven techniques to resolve design and architectural complications, and to communicate and document them with better understanding, making them more accessible to developers in an abstract way.


Design patterns enhance the classic techniques of object-oriented programming by encouraging the reusability and communication of solutions to common problems at an abstract level, and they improve the maintainability of code as a by-product at the implementation level.

The “Y” in Design Patterns

Apart from the obvious advantage of providing better techniques for mapping the real world into programming models, a prime objective of OOP is design and code reusability. However, this is easier said than done. In reality, designing reusable classes is hard and takes time, and few developers write code with long-term reusability in mind. This becomes obvious when dealing with recurring problems in design and implementation. This is where design patterns come into the picture: whenever a proven technique provides a solution to a recurring problem in an abstract, reusable way, independent of implementation barriers like programming language details and data structures, it is categorised as a design pattern.

The design patterns:

  • help analyse common problems in a more abstract way.
  • provide proven solutions to those problems.
  • decrease the overall coding time at the implementation level.
  • encourage code reusability by providing common solutions.
  • increase code lifetime and maintainability by enhancing the capacity for change.
  • increase the understanding of solutions to recurring problems.

A pattern will describe a problem, provide its solution at an abstract level, and elaborate on the result.

Problem

The problem part of a pattern describes the issue(s) a program/ piece of code is facing, along with its context. It might highlight an inflexible class structure, or issues related to the usage of an object, particularly at runtime.

Solution

A solution is always defined as a template: an abstract design that describes its element(s), their relationships and responsibilities, and details how this abstract design addresses the problem at hand. A pattern never provides a concrete implementation of the solution, which enhances its reusability and flexibility. The actual implementation of a pattern might vary across programming languages.

Understanding software design patterns requires respectable knowledge of object oriented programming concepts like abstraction, inheritance, and polymorphic behaviour.

Types of Design Patterns

Design patterns are often divided into three fundamental types.

  • Creational – deals with the creation/ instantiation of objects specific to business use cases. Polymorphic concepts, along with inheritance, are the core of these patterns.
  • Structural – targets the structure and composition of classes. Heavily relies upon the inheritance and composition concepts of OOP.
  • Behavioural – underlines the interaction between classes, separation and delegation of responsibilities.

[Diagram: the three types of design patterns]

Creational Patterns

Often the implementation and usage of a class, or a group of classes, is tightly coupled with the way objects of those classes are created. This decreases the flexibility of those classes, particularly at runtime. For example, suppose we have a group of cars providing the functionality of driving. The creation of each car then requires a piece of code: a new constructor call in most modern languages. This couples the creation of cars with their usage:

    Holden car1 = new Holden();
    car1.Drive();

    Mazda car2 = new Mazda();
    car2.Drive();

Even if we use the base type:

    Car car1 = new Holden();
    car1.Drive();

    Car car2 = new Mazda();
    car2.Drive();

If you look at the above examples, you will notice that the actual class being instantiated is selected at compile time. This creates problems when designing common functionality and forces hardwired code based on concrete types into the usage. It also exposes the constructors of the classes, which breaks their encapsulation.

Creational patterns provide the means of creating and using objects of related classes without having to identify their concrete types or expose their creational mechanism. This gives more flexibility in the usage of the instantiated objects at runtime, without worrying about their types. It also results in less code and eliminates the creational complexity at the usage site, allowing the code to focus on what to do rather than what to create.

    CarFactory[] factories = <Create factories>; // pseudocode: factory creation is shown in the Factory Method sample below

    foreach (CarFactory factory in factories)
    {
        Car car = factory.CreateCar();
        car.Drive();
    }

The above code removes the creational logic and delegates it to subclasses and factories. This gives flexibility in the usage of classes independent of their creation, and lets the runtime dictate the instantiated types, while making the code independent of the number of concrete types (cars) to be used. This code will work regardless of the number of concrete types, enhancing reusability and the separation of creation from usage. We will discuss the above example in detail under the Factory Method pattern.

The two common traits of creational patterns are:

  • Encapsulation of concrete types and exposure using common abstractions.
  • Encapsulation of instantiation and encouraging polymorphic behaviour.

A system leveraging creational patterns does not need to know or understand concrete types; it handles abstractions only (interfaces, abstract classes). This gives flexibility in configuring a set of related classes at runtime, based on use cases and requirements, without having to alter the code.

There are five fundamental creational patterns:

  • Factory method
  • Abstract factory
  • Prototype
  • Singleton
  • Builder

Factory Method

This pattern specifies a way of creating instances of related classes but lets the subclasses decide which concrete type to instantiate at runtime; it is also called Virtual Constructor. The pattern encourages the use of interfaces and abstract classes over concrete types. The decision is based upon input supplied by either the client code or configuration.

Reasons

When client code instantiates a class, it knows the concrete type of that class. This breaks the polymorphic abstraction when dealing with a family of classes. We may use a factory method when:

  • Client code doesn’t know the concrete types of subclasses to create.
  • The instantiation needs to be deferred to subclasses.
  • The job needs to be delegated to subclasses, and client code does not need to know which subclass is doing the job.

Components

factorymethod1

Abstract <Product> (Car)

The interface/ abstraction the client code understands. This describes the family of subclasses to be instantiated.

<Product> (Holden, Mazda, …)

This implements the abstract <Product>; these are the subclass(es) that need to be created.

Abstract Factory (CarFactory)

This provides the interface/ abstraction for the creation of the abstract product, called the factory method. It might also use configuration to create a default product.

ConcreteFactory (HoldenFactory, MazdaFactory, …)

This provides the instantiation of the concrete subclass/ product by implementing the factory method.

Sample code

Going back to our earlier example of cars, we will provide a detailed implementation of the factory method.

    public abstract class Car // could be an interface instead, if no default behaviour is required
    {
        public virtual void Drive()
        {
            Console.Write("Driving a car");
        }
    }

    public class Holden : Car
    {
        public override void Drive()
        {
            Console.Write("Driving Holden");
        }
    }

    public class Mazda : Car
    {
        public override void Drive()
        {
            Console.Write("Driving Mazda");
        }
    }

    public interface CarFactory // this would be a class/ abstract class if there were a default factory implementation
    {
        Car CreateCar();
    }

    public class HoldenFactory : CarFactory
    {
        public Car CreateCar()
        {
            return new Holden();
        }
    }

    public class MazdaFactory : CarFactory
    {
        public Car CreateCar()
        {
            return new Mazda();
        }
    }

Now the client code could be:

    var factories = new CarFactory[2];
    factories[0] = new HoldenFactory();
    factories[1] = new MazdaFactory();

    foreach (var factory in factories)
    {
        var car = factory.CreateCar();
        car.Drive();
    }

 

Now we can keep introducing new Car types, and the client code will behave the same way for this use case. The creation of factories could be further abstracted by using a parametrised factory method. This will modify the CarFactory definition:

 

    public class CarFactory
    {
        public virtual Car CreateCar(string type) // any configuration or input from client code
        {
            // switches based on the configuration or input from client code
            if (type == "holden")
                return new Holden();
            else if (type == "mazda")
                return new Mazda();
            // ...
            else // default instantiation/ fault condition etc.
                return null; // default/ throw etc.
        }
    }

 

Which will change the client code:

 

    CarFactory factory = new CarFactory();
    Car car1 = factory.CreateCar("holden"); // configuration/ input etc.
    Car car2 = factory.CreateCar("mazda"); // configuration/ input etc.

 

The above code shows the flexibility of the factory method, particularly the Factory class.

Advantages

Factory method is the simplest of the creational patterns that targets the creation of a family of classes.

  • It separates the client code and a family of classes by weakening the coupling and abstracting the concrete subclasses.
  • It reduces the changes in client code due to changes in concrete types.
  • Provides configuration mechanism for subclasses by removing their instantiation from the client code and into the factory method(s).
  • The default constructors of the subclasses could be marked protected/ private to shield direct creation, as the sketch below shows.
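
As a quick sketch of that last point (an illustrative variation, not part of the sample above): in C#, one way to shield direct creation while still letting a factory instantiate the class is a nested factory, since a nested type can access the enclosing type’s private constructor. CarFactory here refers to the interface version from the first sample.

    public class Mazda : Car
    {
        private Mazda() { } // direct "new Mazda()" is blocked outside this class

        public class Factory : CarFactory
        {
            public Car CreateCar()
            {
                return new Mazda(); // the nested factory can still reach the private constructor
            }
        }
    }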

Drawbacks

There are some disadvantages that should be considered before applying factory method:

  • Factory Method demands individual implementations of a factory method for every subclass in the family. This might introduce unnecessary complexity.
  • A concrete parametrised factory leverages switch conditions to identify the concrete type to be instantiated, which introduces cluttered and hardwired code in the factory. Any change in the configuration/ client code input, or in the implementation of the concrete types, demands a review of the factory method.
  • The subtypes must be of the same base type. A factory method cannot handle subtypes of different base types; that requires more complex creational patterns.

Conclusion

Factory Method is simple to understand and implement in a variety of languages. The most important consideration is to look for many subclasses of the same base type/ interface being handled at that abstract level in a system. Conditions like the ones below could warrant a factory method implementation:

    Car car1 = new Holden();
    car1.GetDiscount();

    Car car2 = new Mazda();
    car2.GetDiscount();

    . . .

 

On the other hand, if you have some concrete usage of subclasses like:

 

    if (car.GetType() == typeof(Holden))
        ((Holden)car).GetHoldenDiscount();

 

then the factory method might not be the answer, and you should perhaps reconsider the class hierarchy.

In part 2, we will discuss the next creational pattern, which is considered a superset of Factory Method: Abstract Factory.

 

Further Reading

https://msdn.microsoft.com/en-us/library/orm-9780596527730-01-05.aspx

http://www.oodesign.com/abstract-factory-pattern.html

http://www.dofactory.com/net/factory-method-design-pattern

 

7 tips for making UX work in Agile teams

Agile is here to stay. Corporates love it, start-ups embrace it and developers live by it. So there is no denying that Agile is not going anywhere, and we have to work with it. For a number of years, I’ve tried to align User Experience practices with Agile methods and haven’t met with great success every time.

But nevertheless, there are a lot of lessons that I’ve learnt during the process and I’m going to share 7 tips that always worked for me.

agile-and-ux

Create a shared vision early on

Get all the decision makers (dev leads, project managers and project sponsors) in one room. Get a whiteboard and discuss: why are we developing this product? What problems are we trying to solve? Once you have an overall theme, ask more specific questions such as: how many app downloads are we targeting in the first week?

This workshop will give you a snapshot of a shared vision and common goals of the organisation. During every checkpoint of this project, this shared vision will serve as a guide, helping teams prioritise user stories and make the right trade-offs along the way.

Engage stakeholders wherever possible

Regardless of how many people in your team are in agreement, most of the time the decision makers are the project sponsors or division managers. You do not want them to appear randomly during sprint 3 planning and poop on it.

I highly recommend cultivating strong relationships with these stakeholders early on in the project. Invite them to all UX workshops, and if they can’t or don’t attend, find a way to communicate the summary of the meeting in an engaging way (not an email with a PDF attachment). I used to put together a Keynote slide and have it ready on my iPad for a quick 5-minute summary.

Work at least one sprint ahead of the Dev team

The chances of getting everything – research, wireframes, designs and development – done for a single card in one sprint are slim.

You’ll struggle to get everything going at the same time. When you are designing, the developers are counting sheep because they are waiting on you to give them something to work with. You don’t want to be the reason behind the declining burn chart. Always be at least one sprint (if not two sprints) ahead of the development team. Sometimes it takes longer to research and validate design decisions, but if you are a sprint ahead – you are not holding up the developers, and you have ample time to respond to design challenges.

Foster a collaborative culture

Needless to say: collaborate as much as you can. Try to get the team involved (just the people sitting around you is fine) even for small things such as changing a button’s colour. It makes them feel important, makes them feel good and fosters a culture of collaboration.

If you don’t collaborate with the team on small (or big) things, don’t expect them to tell you everything either. Your opinion might not be very valuable in most of the dev discussions, such as whether to use ReactJS or Angular, but knowing that the devs are going to use a certain JS library will definitely help you (one way or the other) in planning future sprints.

Follow an Iterative Design Process

DO NOT design mock-ups to begin with. I know all the customers want to see something real that they can sell to their bosses. But the pretty-design approach falls on its face every time. I want my customers to detach themselves from aesthetics and focus on structure and interaction first. Once we have worked out the hardware, then we can look at building the software.

Try an iterative design process. Sketch on the whiteboard, get the stakeholders to put a vision on paper and come up with a structure first. Then iterate. Here is my design process:

  1. Paper sketches
  2. Low fidelity wireframes (on white board / PC)
  3. Interactive wireframes – B/W (on PC)
  4. Draft Designs – in Colour
  5. Final Designs
  6. Pass onto the build team.

Do a round of user testing with at least 5 people

User testing is not expensive, it does not take days or weeks, and you don’t have to talk to 25 people.

There is a lot of research showing that testing with only 5 users is highly effective and valuable to product development. Pick users from different demographics, put an interactive wireframe together and run it past them for about 30 to 45 minutes. After 3 users, you’ll start noticing common themes appearing. And after 5, you’ll have enough pointers to take back to the team for another round of iteration. Repeat this process every two to four sprints.

Hold a brief stand-up meeting every day

Hold a stand-up meeting first thing in the morning. The aim is to keep everyone updated on progress, recognise blockers and pick up new cards. This ensures all the team members are on the same page and are working towards a common goal.

However, be mindful of the time since some discussions are lengthier and may need to be taken offline. We generally time-box stand-ups for 15 minutes.

Microservices – An Agile Architecture Introduction

In the ever-evolving and dynamic world of software architecture, you will hear new buzzwords every other month. Microservices is the latest of them; though not as new as it appears, it has been around for some time now in different forms.

Microservices – The micro of SOA?

        “… the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” that Fowler guy.

So, to break it all down, microservices is a form of fine-grained service-oriented architecture (SOA) implementation that could be used to build independent, flexible, scalable, and reusable software applications in a decoupled manner. It relies heavily on separately deployable and maintainable subsystems/ processes that can communicate over a network using technology-agnostic protocols. That, in turn, paves the way for DevOps and continuous deployment pipelines.

Principia Microservices – nutshell

You will not find a defined set of characteristics of a microservices-based system. A formal definition is still missing, though we can outline the basic and most commonly mentioned pointers:

–          The services are small, fine-grained representations of a single business function.

–          Designed and maintained around capabilities.

–          Flexible, robust, replaceable, and independent of each other in nature.

–          Designed to embrace faults and failures. An unavailable service cannot bring the system down. Emphasis on monitoring and logging.

–          Advocates the practices of continuous improvement, continuous integration and deployment.

Services as Components

Defining software systems as a set of smaller, independently maintainable components has always been a craving of well-defined architectures. Current programming languages lack a mechanism for explicit boundary definitions for components, and often it’s just documentation and published best practices/ code reviews that stop a developer from mixing logical components in a monolithically designed application.

This differs a bit from the idea of libraries. Libraries are in-process components of an application that, though logically separate, cannot be deployed and maintained independently.

In microservices, components are explicitly defined as independent and autonomously deployable services. This enables continuous processes (CI & CD) and enhances maintainability of individual components, increases reusability and overall robustness of the system. It also enables a better realization of business processes into the application.


 

Built for Business

[Image: Conway’s Law]

Most of the time, organisations looking to build a decoupled system focus on technical capabilities. This leads to teams of UI devs, backend devs, DB devs etc., and in turn to siloed architectures and applications.

Microservices does things differently. The organisation splits teams according to business capabilities, resulting in cross-functional, dynamic and independent teams with full technical capabilities.

[Diagram: cross-functional teams organised around business capabilities]

 

Decentralized

A consequence of a monolithic design is central management and support of the whole system. This creates problems in scalability and reusability. These systems tend to favour a single set of tools and technologies.

Microservices advocates a “you build it, you run it” approach. Teams are designed to take end-to-end responsibility for the product. This disperses capabilities across the teams and helps tie the services to business capabilities and functions. It also enables the selection of underlying technologies based on needs and availability.

[Diagram: monolithic vs microservice team structures]

 

Designed for Failure

A failure in a monolithic design could be a catastrophe if not dealt with gracefully and with a fall-back approach. Microservices, however, are designed for failure from the beginning. Because of their independent and granular nature, the overall user experience in case of a failure tends to be manageable. Unavailability of one service does not bring the whole system down. It is also easier to drill down in microservices to identify the root cause, because of their agility and explicit boundaries. Real-time monitoring, logging and other cross-cutting concerns are easier to implement and are heavily emphasised.
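
As a rough illustration of that mindset, a caller can time-box its request to a dependency and degrade gracefully when that dependency is unavailable. This is only a sketch: the catalogue service, its URL and the fallback value are all made up.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class RecommendationClient
    {
        // Fail fast rather than letting a slow dependency hang the caller.
        private static readonly HttpClient http = new HttpClient
        {
            Timeout = TimeSpan.FromSeconds(2)
        };

        public async Task<string> GetRecommendationsAsync(string userId)
        {
            try
            {
                // Hypothetical downstream microservice.
                return await http.GetStringAsync("http://catalogue-service/recommendations/" + userId);
            }
            catch (Exception ex) when (ex is HttpRequestException || ex is TaskCanceledException)
            {
                // Log and fall back to an empty result; the rest of the system stays up.
                Console.Error.WriteLine("catalogue-service unavailable: " + ex.Message);
                return "[]";
            }
        }
    }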

      “If we look at the characteristics of an agile software architecture, we tend to think of something that is built using a collection of small, loosely coupled components/services that collaborate together to satisfy an end-goal. This style of architecture provides agility in several ways. Small, loosely coupled components/services can be built, modified and tested in isolation, or even ripped out and replaced depending on how requirements change. This style of architecture also lends itself well to a very flexible and adaptable deployment model, since new components/services can be added and scaled if needed.” — Simon Brown

Microservices architecture tends to result in independent products/ services designed and implemented using different languages, databases, and hardware and software environments, as capabilities and requirements dictate. The services are granular, with explicit boundaries, autonomously developed, independently deployable, decentralised, and built and released with automated processes.

Issues – Architecture of the wise

However, there are certain drawbacks to microservices as well: a higher cost of communication because of network latency, runtime overhead (message processing) as opposed to in-process calls, and blurry definitions of boundaries between services. A poorly designed application based on microservices could yield more complexity, since the complexity will only be shifted onto the communication protocols, particularly in distributed transactional systems.

“You can move it about but it’s still there” — Robert Annett: Where is the complexity?

Decentralised management, especially of databases, has some implications related to updates and patches, apart from transactional issues. It might lead to inconsistent databases and needs well-defined processes for management, updates, and transactional data consistency.


Overall system performance often demands serious consideration. Since microservices targets distributed applications in isolated environments, the communication cost between components tends to be higher, which leads to lower overall system performance if the design is not done carefully. Netflix is a leading example of how a distributed system based on microservices can outperform some of the best monolithic systems in the world.

Microservices tend to be small and stateless; still, business processes are often stateful and transactional in nature, so dividing business processes and data modelling can be challenging. Poorly partitioned business processes create blurry boundaries and responsibility issues.

The potential for failure and unavailability rises significantly in distributed systems based on isolated components communicating over networks, and microservices can suffer from the same because of their high dependence on isolated components/ services. Systems with little or no attention to design for failure can face higher running costs and unavailability issues.

Coping with the issues of distributed systems can lead to complexity in implementation details. Less skilled teams will suffer greatly in this case, especially when handling performance and reliability issues. Therefore, microservices is the architecture of the wise. Often, building a monolithic system first and then migrating to a more distributed system using microservices works best. This also gives the teams more insight into the business processes and experience in handling existing complexities, which comes in handy when segregating processes into services with clearer boundaries.

When we have multiple isolated components working together in a system, version control becomes a problem, as with all distributed systems. Services’ dependencies upon each other create issues when incorporating changes, and this tends to get bigger as the system grows. Microservices emphasises simpler interfaces and handshakes between services and advocates immutable designs to resolve versioning issues. This still requires managing and supporting multiple services at times.

Last words

There is no black-and-white choice between monolithic and distributed systems. Both have their advantages and shortcomings. The choice of an architecture depends heavily upon the business structure, team capabilities and skill set, and the dispersed knowledge of the business models. Microservices does solve a lot of problems that classic one-package systems face, but it does come at a cost. More than the architectural choices, it is the culture of the team and the mindset of people that makes the difference between success and failure – there is no holy grail.

 

Further reading

https://martinfowler.com/articles/microservices.html

https://smartbear.com/learn/api-design/what-are-microservices/

Google 🙂

 

Configuring AWS Web Application Firewall

In a previous blog, we discussed Site Delivery with AWS CloudFront CDN; one aspect not covered in that blog was WAF (Web Application Firewall).

What is Web Application Firewall?

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
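
As a rough sketch of that automation using the AWS SDK for .NET (the IP set name and address range are illustrative only; every mutating call to the classic WAF API needs a fresh change token):

    using System.Collections.Generic;
    using Amazon.WAF;
    using Amazon.WAF.Model;

    class WafAutomation
    {
        public static void CreateBlockList()
        {
            // Credentials and region come from the usual SDK configuration.
            var waf = new AmazonWAFClient();

            string token = waf.GetChangeToken(new GetChangeTokenRequest()).ChangeToken;
            var ipSet = waf.CreateIPSet(new CreateIPSetRequest
            {
                Name = "manual-block-list", // hypothetical name
                ChangeToken = token
            }).IPSet;

            token = waf.GetChangeToken(new GetChangeTokenRequest()).ChangeToken;
            waf.UpdateIPSet(new UpdateIPSetRequest
            {
                IPSetId = ipSet.IPSetId,
                ChangeToken = token,
                Updates = new List<IPSetUpdate>
                {
                    new IPSetUpdate
                    {
                        Action = ChangeAction.INSERT,
                        IPSetDescriptor = new IPSetDescriptor
                        {
                            Type = IPSetDescriptorType.IPV4,
                            Value = "192.0.2.0/24" // illustrative documentation range
                        }
                    }
                }
            });
        }
    }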

When to create your WAF?

Since this blog is related to CloudFront, and WAF is tightly integrated with CloudFront, our focus will be on that. Your WAF rules will run and be applied in all the edge locations you specify during your CloudFront configuration.

It is recommended that you create your WAF before deploying your CloudFront distribution: since CloudFront takes a while to be deployed, any changes applied after its deployment will take equal time to be updated, even after you attach the WAF to the CloudFront distribution (this is done in “Choose Resource” during WAF configuration – shown below).

Although there is no general rule here, it is up to the organisation or administrator to apply WAF rules before or after the deployment of CloudFront distribution.

Main WAF Features

In terms of security, WAF protects your applications from the following attacks:

  • SQL Injection
  • DDoS attacks
  • Cross Site Scripting

In terms of visibility, WAF gives you the ability to monitor requests and attacks through CloudWatch integration (excluded from this blog). It gives you raw data on location, IP Addresses and so on.

How to setup WAF?

Setting up WAF can be done in a few ways: you could either use a CloudFormation template, or configure the settings on the WAF page.

Since each organisation is different, and requirements change based on applications and websites, the configuration presented in this blog is considered general practice and recommendation. You will still need to tailor the WAF rules according to your own needs.

WAF Conditions

For the rules to function, you need to set up filter conditions for your application’s or website’s ACL.

I already have WAF set up in my AWS account, and here’s a sample of how conditions look.

[Screenshot: list of existing WAF conditions]

If you have no Conditions already setup, you will see something like “There is no IP Match conditions, please create one”.

To Create a condition, have a look at the following images:

Here, we’re creating a filter that checks whether an HTTP method contains a threat after HTML decoding.

[Screenshot: creating a condition filter]

Once you’ve selected your filters, click on “Add Filter”. The filter will be added to the list of filters, and once you’re done adding all your filters, create your condition.

[Screenshot: condition with its filters added]

You need to follow the same procedure to create your conditions for SQL Injection for example.

WAF Rules

When you are done configuring conditions, you can create a rule and attach it to your web ACL. You can attach multiple rules to an ACL.

Creating a rule: here’s where you specify the conditions you created in a previous step.

[Screenshot: creating a rule]

From the list of rules, select the rule you have created from the drop-down menu, and attach it to the ACL.

[Screenshot: attaching the rule to the web ACL]

In the next steps you will have the option to choose your AWS resource, in this case one of my CloudFront distributions. Review and create your web ACL.

[Screenshots: reviewing and creating the web ACL]

Once you click on create, go to your CloudFront distribution and check its status; it should show “In progress”.

WAF Sample

Since there isn’t a single way of creating WAF rules, if you’re not sure where to begin, AWS gives you a good way to start: a CloudFormation template that will create sample WAF rules for you.

This sample WAF rule set (found here) will include the following:

  • A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
  • An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function, such as is shown in How to Import IP Address Reputation Lists to Automatically Update AWS WAF IP Blacklists and How to Use AWS WAF to Block IP Addresses That Generate Bad Requests.
  • A SQL injection rule and condition to match SQL injection-like patterns in URI, query string, and body.
  • A cross-site scripting rule and condition to match XSS-like patterns in URI and query string.
  • A size-constraint rule and condition to match requests with a URI or query string >= 8192 bytes, which may assist in mitigating buffer-overflow-type attacks.
  • ByteHeader rules and conditions (split into two sets) to match user agents that include spiders for non–English-speaking countries that are commonly blocked in a robots.txt file, such as sogou, baidu, and etaospider, and tools that you might choose to monitor use of, such as wget and cURL. Note that the WordPress user agent is included because it is used commonly by compromised systems in reflective attacks against non–WordPress sites.
  • ByteUri rules and conditions (split into two sets) to match request strings containing install, update.php, wp-config.php, and internal functions including $password, $user_id, and $session.
  • A whitelist IP condition (empty) that is included and added as an exception to the ByteURIRule2 rule, as an example of how to block unwanted user agents unless they match a list of known good IP addresses.

Follow this link to create a stack in the Sydney region.

I recommend that you review the filters, conditions, and rules created with this sample web ACL. You can easily update and edit the conditions as desired, according to your applications and websites.

Conclusion

In conclusion, there are certain aspects of WAF that need to be considered, like choosing an appropriate WAF solution and managing its availability, and you have to be sure that your WAF solution can keep up with your applications.

The best feature of WAF is that, since it is integrated with CloudFront, it can be used to protect websites even if they’re not hosted in AWS.

I hope you found this blog informative. Please feel free to add your comments below.

Thanks for reading.

Azure API Management Step by Step – Use Cases

Use Cases

In this second post about Azure API Management, let’s discuss use cases. Why “use cases”?

Use cases help to manage complexity, since they focus on one specific usage aspect at a time. I am grouping and versioning use cases to facilitate your learning process and to help keep track of future changes. You are welcome to use these diagrams to demonstrate Azure API Management features.

API on-boarding is a key aspect of API governance and the first thing to be discussed. How can I publish my existing and future API back-ends to API Management?

API description formats like the Swagger Specification (aka the Open API Initiative, https://openapis.org/) are fundamental to properly implementing automation and DevOps in your APIM initiative. APIs can be imported using Swagger, created manually, or created as part of a custom automation/integration process.

Azure API Management administrators can group APIs by product, allowing a subscription workflow. Product visibility is linked with user groups, providing restricted access to APIs. You can manage your API policies as code through a dedicated Git source control repository available to your APIM instance. Secrets and constants used by policies are managed by a key/value (string) service called Properties.

[Diagram: APIM use cases – administrator API on-boarding]

The Azure API Management platform provides a rich developer portal. Developers can create an account/profile, discover APIs and subscribe to products. API documentation, source code samples in multiple languages, a console to try APIs, API subscription key management and analytics are the main features provided.

[Diagram: APIM use cases – developer]

The management and operation of the platform plays an important role in daily tasks. For enterprises, user groups and users (developers) can be fully integrated with Active Directory. Analytics dashboards and reports are available. Email notifications and templates are customisable. The APIM REST API and PowerShell commands are available for most platform features, including exporting analytics reports.

[Diagram: APIM use cases – administrator]

Security administration use cases group different configurations. Delegation allows custom development of portal sign-in, sign-up and product subscription. OAuth 2.0 and OpenID provider registrations are used by the developer portal console to generate the required tokens when trying APIs. Client certificate upload and management are done here or using automation. The developer portal identity configuration brings out-of-the-box integration with social providers. Git source control settings/management and APIM REST API tokens are available as well.

[Diagram: APIM use cases – security administration]

Administrators can customise the developer portal using built-in content management system functionality. Custom pages and modern JavaScript development are now allowed. The blog feature provides out-of-the-box blog/post publish/unpublish functionality. Applications submitted by developers can be published/unpublished by an administrator, to be displayed in the developer portal.

[Diagram: APIM use cases – administrator developer portal]

In summary, Azure API Management is a mature and live platform with a few new features under development, bringing strong integration with the Azure cloud. Click here for the roadmap.

In my next post, I will deep-dive into API on-boarding strategies.

Thanks for reading @jorgearteiro

Posts: 1) Introduction  2) Use Cases

Enterprise Cloud Take Up Accelerating Rapidly According to New Study By McKinsey

A pair of studies published a few days ago by global management consulting firm McKinsey & Company, entitled IT as a service: From build to consume, shows enterprise adoption of Infrastructure as a Service (IaaS) accelerating rapidly over the next two years into 2018.

Of the two, one examined the ongoing migrations of 50 global businesses. The other saw a large number of CIOs, from small businesses up to Fortune 100 companies, interviewed on the progress of their transitions, and the results speak for themselves.

1. Compute and storage is shifting massively to cloud service providers.


“The data reveals that a notable shift is under way for enterprise IT vendors, with on-premise shipped server instances and storage capacity facing compound annual growth rates of –5 percent and –3 percent, respectively, from 2015 to 2018.”

With on-premise storage and server sales growth going into negative territory, it’s clear the next couple of years will see the hyperscalers of this world consume an ever increasing share of global infrastructure hardware shipments.

2. Companies of all sizes are shifting to off-premise cloud services.


“A deeper look into cloud adoption by size of enterprise shows a significant shift coming in large enterprises (Exhibit 2). More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

The report also anticipates that the number of enterprises hosting at least one workload on an IaaS platform will increase by 41% in the three-year period to 2018, while that of small and medium-sized businesses will increase by a somewhat less aggressive 12% and 10% respectively.

3. A fundamental shift is underway from a build to consume model for IT workloads.

[Chart: a fundamental shift from build to consume]

“The survey showed an overall shift from build to consume, with off-premise environments expected to see considerable growth (Exhibit 1). In particular, enterprises plan to reduce the number of workloads housed in on-premise traditional and virtualized environments, while dedicated private cloud, virtual private cloud, and public infrastructure as a service (IaaS) are expected to see substantially higher rates of adoption.”

Another takeaway is that the share of traditional and virtualized on-premise workloads will shrink significantly from 77% and 67% in 2015 to 43% and 57% respectively in 2018. While virtual private cloud and IaaS will grow from 34% and 25% in 2015 to 54% and 37% respectively in 2018.

Cloud adoption will have far-reaching effects

The report concludes “McKinsey’s global ITaaS Cloud and Enterprise Cloud Infrastructure surveys found that the shift to the cloud is accelerating, with large enterprises becoming a major driver of growth for cloud environments. This represents a departure from today, and we expect it to translate into greater headwinds for the industry value chain focused on on-premise environments; cloud-service providers, led by hyperscale players and the vendors supplying them, are likely to see significant growth.”

About McKinsey & Company

McKinsey & Company is a worldwide management consulting firm. It conducts qualitative and quantitative analysis in order to evaluate management decisions across the public and private sectors. Widely considered the most prestigious management consultancy, McKinsey’s clientele includes 80% of the world’s largest corporations, and an extensive list of governments and non-profit organizations.

Web site: McKinsey & Company
The full report: IT as a service: From build to consume

Azure Functions or WebJobs? Where to run my background processes on Azure?


Introduction

Azure WebJobs have been quite a popular way of running background processes on Azure. They have been around since early 2014. When they were released, they were a true PaaS alternative to Cloud Services Worker Roles, bringing many benefits like the WebJobs SDK, easy configuration of scalability and availability, a dashboard and, more recently, all the advantages of Azure Resource Manager and a very flexible continuous delivery model. My colleague Namit previously compared WebJobs to Worker Roles.

Meanwhile, Azure Functions were announced earlier this year (March 2016). Azure Functions, or “Function Apps” as they appear on the Azure Portal, are Microsoft’s Function as a Service (FaaS) offering. With them, you can create microservices or small pieces of code which can run synchronously or asynchronously as part of composite and distributed cloud solutions. Even though they are still in the making (at the time of this writing they are in Public Preview version 0.5), Azure Functions are now an appealing alternative for running background processes. Azure Functions are being built on top of the WebJobs SDK but with the option of being deployed with a Serverless model.

So, the question is: which option better suits my requirements for running background processes? In this post, I will try to contrast the two and shed some light so you can better decide between them.

Comparing Available Triggers

Let’s see what trigger options we have for each:

WebJobs Triggers

WebJobs can be initiated by:

  • messages in Azure Service Bus queues or topics (when created using the SDK and configured to run continuously),
  • messages in an Azure storage queue (when created using the SDK and configured to run continuously),
  • blobs added to a container in an Azure Storage account (when created using the SDK and configured to run continuously),
  • a schedule configured with a CRON expression (if configured to run on-demand),
  • an HTTP call to the Kudu WebJobs API (when configured to run on-demand).

Additionally, with the SDK extensions, the triggers below were added:

  • file additions or changes in a particular directory (of the Web App File System),
  • queue messages containing a record id of Azure Mobile App table endpoints,
  • queue messages containing a document id of documents on DocumentDB collections, and
  • third-party WebHooks (requires the Kudu credentials).

Furthermore, the SDK 2.0 (currently in beta) is adding support to:

Azure Functions Triggers

Function Apps being founded on the WebJobs SDK, most of the triggers listed above for WebJobs are supported by Azure Functions. The options available at the time of writing this post are:

And, currently provided as experimental options:

  • files added in Cloud File Storage SaaS platforms, such as Box, DropBox, OneDrive, FTP and SFTP (SaaSFileTrigger Template).

I believe the main difference between the two in terms of triggers is the HTTP trigger option, as detailed below:

Authentication for HTTP Triggers

WebJobs being hosted on the Kudu SCM site, to trigger them via an HTTP call we need to use the Kudu credentials, which is not ideal. Azure Function Apps, on the other hand, provide more authentication options, including Azure Active Directory and third-party identity providers like Facebook, Google, Twitter, and Microsoft accounts.

HTTP Triggers Metadata

Functions support exposing their API metadata based on the OpenAPI specification, which eases the integration with consumers. This option is not available for WebJobs.

Comparing Outbound Bindings

After comparing the trigger bindings for both options, let’s have a look at the output bindings for each.

WebJobs Outputs

The WebJobs SDK provides the following out-of-the-box output bindings:

Azure Functions Outputs

Function Apps can output messages to different means. Options available at the time of writing are detailed below:

In regard to supported outputs, the only difference between the two is that Azure Functions can return a response to the caller when triggered via HTTP. Otherwise, they provide pretty much the same capabilities.
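
To make that HTTP response capability concrete, below is a minimal sketch of a preview-era C# scripted Function (run.csx). The greeting logic is a made-up example; namespaces such as System.Linq, System.Net.Http and System.Threading.Tasks are imported automatically by the Functions host:

```csharp
using System.Net;

// HTTP-triggered function: unlike a WebJob, it can hand a response
// straight back to the caller.
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("HTTP-triggered function invoked.");

    // Read an optional "name" query string parameter.
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    return req.CreateResponse(HttpStatusCode.OK, $"Hello {name ?? "stranger"}");
}
```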

Supported Languages

Both WebJobs and Function Apps support a wide variety of languages, including: bash (.sh), batch (.bat/.cmd), C#, F#, Node.js, PHP, PowerShell, and Python.

So there is no real difference here, except perhaps that WebJobs require some of these languages to be compiled as an executable (.exe) program.

Tooling

WebJobs can easily be created using Visual Studio and the WebJobs SDK. Those WebJobs which are compiled console applications can be run and tested locally, which always comes in very handy.

At the time of writing, there is no way to program, compile and test your Functions with Visual Studio, so you might need to code all your functions using the online functions editor, which provides different templates. However, Functions being a very promising offering, I believe Microsoft will provide better tooling by the time they reach General Availability. In the meantime, there is an alpha version tool and a ScriptCs Functions emulator by my colleague Justin Yoo.

Managing “VM” Instances, Scaling, and Pricing

This is probably the most significant difference between WebJobs and Azure Functions.

WebJobs require you to create and manage an Azure App Service (Web App) and the underlying App Service Plan (a.k.a. server farm). If you want your WebJob to run continuously, you need at least one instance on a Basic App Service Plan to support “Always On”. For WebJobs you always pay for at least one VM instance (as PaaS), regardless of whether it is busy or idle, and the App Service Plan pricing applies. However, you can always deploy more than one App Service on a single App Service Plan. If you have larger loads or load peaks and need auto-scaling, you require at least a Standard App Service Plan.

Conversely, with Azure Functions and the Dynamic Service Plan, the creation and management of VM instances and the configuration of scaling are all abstracted away. We can write functions without caring about server instances and get the benefits of a serverless architecture. Functions scale out automatically and dynamically as load increases, and scale back down as it decreases. Scaling is performed based on traffic, which depends on the configured triggers.

With Functions, you get billed only for the resources you actually use. The cost is calculated from the number of executions, the memory size, and the execution time, measured in gigabyte-seconds. If you have background processes which don’t require a dedicated instance and you only want to pay for the compute resources actually in use, then a Dynamic plan makes a lot of sense.
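
To put the gigabyte-seconds measure in concrete terms (the figures are illustrative only, not published pricing): a function allocated 512 MB of memory that runs for 2 seconds consumes 0.5 GB × 2 s = 1 GB-s per execution, so 100,000 such executions would amount to roughly 100,000 GB-s of billable compute, plus the per-execution charge.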

It’s worth noting that if you already have an App Service Plan, which you are already managing and paying for, and has resources available, you can deploy your Functions on it and avoid extra costs.

One point to consider with the Dynamic Service Plan (the serverless model) is that, because you don’t control which instances host your Azure Functions, there might be a cold-start overhead. This wouldn’t be the case for Functions running on your own App Service Plan (server farm), or for WebJobs running as continuous on an “Always On” Web App, where you have “dedicated” instances and can benefit from having your components already loaded in memory.

Summary

As we have seen, because Azure Functions are built on top of the WebJobs SDK, they provide a lot of the previously available and already mature functionality, but with additional advantages.

In terms of triggers, Functions now provide HTTP triggers without requiring publish profile credentials, and bring the ability to integrate authentication with Azure AD or third-party identity providers. Additionally, Functions offer the option of exposing an OpenAPI specification.

In terms of binding outputs and supported languages, both provide pretty much the same.

In regard to tooling, at the time of writing, WebJobs allow you to develop and test offline with Visual Studio. It is expected that by the time Azure Functions reach General Availability, Microsoft will provide much better tools for them.

I would argue that the most significant difference between Azure Functions and WebJobs is the ability to deploy Functions on the new Dynamic Service Plan. With this service plan, you get the advantage of not having to worry about the underlying instances or scaling; it’s all managed for you. It also means that you pay only for the compute resources you actually use. However, when needed, or when you are already paying for an App Service Plan, you have the option of squeezing your Functions into the same instances and avoiding additional costs.

Coming back to the original question: which technology better suits your requirements? I would say that if you prefer a “serverless” approach, in which you don’t need or want to worry about the underlying instances and scaling, then Functions are the way to go (considering you are OK with the temporary lack of mature tooling). But if you still favour managing your instances, WebJobs might be a better fit for you.

I will update this post once Functions reach GA and tools are there. Probably (just probably), Azure Functions will provide the best of both worlds and the question will only be whether to choose a Dynamic Service Plan or not. We will see. 🙂

Feel free to share your experiences or add comments or queries below.

Azure API Management Step by Step

Introduction

As a speaker and cloud consultant, I have learned and received a lot of feedback about Azure API management platform from customers and community members. I will share some of my learnings in this series of blog posts. Let’s get started!


APIs – application programming interfaces – are everywhere! They are already part of many companies’ strategies. But how do you consolidate internal and external APIs? And how do you productize and monetize them for your company?

We often build APIs to be consumed by a single application. However, we can also build these APIs to be shared. If you write HTTP APIs around a single, specific business requirement, you encourage API re-usability and adoption. Bleeding-edge technologies like containers and serverless architectures are pushing this approach even further.

API strategy and governance come into play to help build a gateway on top of your APIs. Companies are developing MVPs (minimum viable products) and time to market is fundamental; we do not have time to write authentication, caching and analytics over and over again. Azure API Management can help you make this happen.


This demo API Management instance that I created for Kloud Solutions illustrates how you could create a unified API endpoint to expose your APIs. Multiple “services” are published there behind a single authentication layer. If your email service’s back-end implementation uses an external API, like SendGrid, you can inject that authentication at the API Management gateway layer, making it transparent to end users.
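
As a hedged sketch of what that injection could look like (the named value sendgrid-api-key is an assumption, stored as an APIM property rather than hard-coded), an inbound policy can set the back-end authorization header so consumers never see the key:

```xml
<!-- Inbound policy sketch: inject a back-end API key at the gateway. -->
<inbound>
    <set-header name="Authorization" exists-action="override">
        <!-- {{sendgrid-api-key}} refers to a named value configured in APIM. -->
        <value>Bearer {{sendgrid-api-key}}</value>
    </set-header>
    <base />
</inbound>
```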

Azure API Management provides a highly scalable, multi-regional gateway that can be deployed in any Azure region around the world. It is a fully PaaS (platform-as-a-service) API management solution, where you do not have to manage any infrastructure. This, combined with other Azure offerings like App Services (Web Apps, API Apps, Logic Apps and Functions), provides an enterprise-grade platform for delivering any API strategy.

(Diagram: the main Azure API Management components)

Looking at the diagram above, we can break API Management down into three main components:

  • Developer Portal – a customizable web site, exclusive to your company, that allows internal and external developers to engage, discover and consume APIs.
  • Gateway (proxy) – the engine of APIM, where policies can be applied to your inbound, back-end and outbound traffic. It is very scalable and supports multi-regional deployment, Azure Virtual Network VPNs, Azure Active Directory integration and a native caching solution. Policies are written in XML with C# expressions to define complex rules such as rate limits, quotas, caching, JWT token validation, authentication, XML-to-JSON and JSON-to-XML transformations, URL rewriting, CORS, IP restrictions, setting headers, etc. (see the policy sketch after this list).
  • Administration Portal (aka Publisher Portal) – administration of your APIM instance can be done via this portal. Automation and DevOps teams can use the APIM management REST API and/or PowerShell commands to fully integrate APIM into your onboarding, build and release processes.
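
To give a flavour of gateway policies, here is a minimal inbound policy sketch combining a rate limit with JWT validation. The {tenant} placeholder and the specific limits are illustrative assumptions, not values from a real instance:

```xml
<!-- Applied to requests before they reach the back end. -->
<inbound>
    <!-- Illustrative limit: at most 100 calls per subscription per 60 seconds. -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- Reject requests without a valid Azure AD issued JWT. -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401">
        <openid-config url="https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration" />
    </validate-jwt>
    <base />
</inbound>
```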

Please keep in mind that this strategy can be applied to any environment and architecture where HTTP APIs are exposed, whether they are new microservices or older legacy applications.

Feel free to create a user at https://kloud.portal.azure-api.net. I will try to keep this Azure API Management instance usable, but it is intended for demo purposes only, with no guarantees. You can then create your own development instance from the Azure Portal later.

In my next post, I will talk about API Management use cases and give you a broader view of how deep this platform can go. Click here.

Thanks for reading! @jorgearteiro

Posts: 1) Introduction  2) Use Cases