Set your eyes on the Target!


So in my previous posts I’ve discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms this could be your local Active Directory (authentication domain), but it could be anything, and with the adoption of cloud services these target systems are often what drives the need for robust IAM services.

Something that we are often asked as IAM consultants is why. Why should corporate applications be integrated with an IAM service at all? It’s a valid question. Depending on what the system is and what it does, integrating with an IAM system isn’t always a practical solution, but more often than not there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Central authentication services (where supported)

Requirements

With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system will have individual requirements. Some could be as simple as just needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be driven largely by the application vendor, and to a lesser extent by the application owners and business requirements.

IAM systems are for the most part extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.

This could be as simple as requiring additional attributes that are not standard within either the IAM system or your source systems, or it could be the way in which you want the IAM system to interact with the application, i.e. utilising web services and building custom Management Agents to connect and synchronise data sets between the two.

But at the root of all this is that when using an IAM system you have a constant flow of data that is all stored within the “Vault”. This helps ensure that any change to a user flows to all systems, not just the phone book, and it also ensures that any changes are tracked through the governance processes that have been established and implemented as part of the IAM system. Changes made to a user’s identity information within a target application can be easily identified, to the point of saying this change was made on this date/time because a change to this person’s data occurred within the HR system at this time.

Integration

Most IAM systems will have management agents or connectors (the terms vary depending on the vendor you use) built for the typical “out of box” systems, and these will for the most part satisfy the requirements of many organisations, so you don’t tend to have to worry so much about those. But if you have “bespoke” systems that have been developed and built up over the years for your business, then this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM service the custom management agents would be built using an Extensible Connectivity Management Agent (ECMA). How you would build and develop management agents for FIM or MIM is quite an extensive discussion and something that would be better off in a separate post.
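
To give a feel for the responsibilities a custom connector carries, here is a minimal conceptual sketch in Python. This is not the ECMA interface itself; the endpoint, attribute names and service account are all assumptions for illustration, showing the two jobs every management agent has: importing records from the application and exporting changes back to it.

```python
# Conceptual sketch only: NOT the ECMA interface, just an illustration of the
# import/export responsibilities a custom connector typically carries.
# The endpoint URL, attribute names and service-account credentials are hypothetical.
import requests

API_BASE = "https://hr-app.example.com/api"          # hypothetical bespoke application
SERVICE_ACCOUNT = ("svc-iam-connector", "********")  # restricted service account (see below)

def import_entries():
    """Pull identity records from the application so they can be staged in the IAM system."""
    response = requests.get(f"{API_BASE}/users", auth=SERVICE_ACCOUNT, timeout=30)
    response.raise_for_status()
    for record in response.json():
        # Map application attributes onto the IAM system's schema.
        yield {
            "accountName": record["username"],
            "mail": record["email"],
            "employeeID": record["employee_id"],
        }

def export_entry(entry):
    """Push a changed identity record back out to the application."""
    payload = {"email": entry["mail"], "employee_id": entry["employeeID"]}
    response = requests.put(f"{API_BASE}/users/{entry['accountName']}",
                            json=payload, auth=SERVICE_ACCOUNT, timeout=30)
    response.raise_for_status()
```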

One of the “sticky” points here is that most of the time, in order to integrate applications, you need elevated access to the application’s back end to be able to populate data to and pull data from the application. The way this is done through any IAM system is via specific service accounts that are restricted to only perform the functions required for that application.

Authentication and SSO

Application integration is about tightening the security of your data, with access to applications controlled through various mechanisms, and authentication plays a large part in the IAM process.

During the provisioning process, passwords are usually set when an account is created. This is done either by using a random password generator (preferred) or by setting a specific temporary password. Either way, it’s always done with the intent that the user resets their password when they first log on. The self-service functionality that can be introduced to do this enables the user to reset their password without ever having to know what the initial password was.
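
For the random password generator approach, a minimal sketch might look like the following, assuming a simple length and complexity policy (both of which would really come from your own password standard); the generated value would then be flagged so the user is forced to reset it at first logon.

```python
# A minimal sketch, assuming a policy of 16 characters drawn from letters, digits and
# a small symbol set; the exact complexity rules would come from your own password policy.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_initial_password(length: int = 16) -> str:
    """Generate a random, single-use initial password for a newly provisioned account."""
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        # Re-draw until the candidate meets basic complexity requirements.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)):
            return candidate

print(generate_initial_password())
```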

Depending on the application, separate passwords might be created that then need to be managed. In most cases IAM consultants/architects will try to minimise this to the point of it not being required at all, but this isn’t always possible. In these situations the IAM system has methods to manage this as well. In the Microsoft space this can be controlled through password synchronisation using the “Password Change Notification Service” (PCNS); this basically means that if a user changes their main password, that change can be propagated to all the systems that hold separate passwords.
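
Conceptually, the propagation side looks something like the sketch below. This is not the PCNS API, just an illustration of fanning a captured password change out to the connected systems that still hold their own credentials; the connector objects are hypothetical.

```python
# Conceptual only: not the PCNS API, just the fan-out idea. The connected-system
# connectors and their set_password() callables are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ConnectedSystem:
    name: str
    set_password: Callable[[str, str], None]  # (account_name, new_password) -> None

def propagate_password_change(account_name: str, new_password: str,
                              systems: list[ConnectedSystem]) -> None:
    """Push a captured password change to every system that maintains a separate password."""
    for system in systems:
        try:
            system.set_password(account_name, new_password)
            print(f"Password updated in {system.name} for {account_name}")
        except Exception as error:  # a real implementation would queue and retry
            print(f"Failed to update {system.name}: {error}")
```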

Most applications today use standard LDAP authentication to provide access to their application services, which keeps the password management process much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials that you control. Examples of these standard protocols include SAML, OAuth, WS-Fed/WS-Trust and many more.
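
To make the OAuth/OpenID Connect case a little more concrete, a relying application typically validates the token the identity provider issues rather than ever seeing a password. Below is a minimal sketch using the PyJWT library, assuming an RS256-signed JSON Web Token; the JWKS URL, audience and issuer values are placeholders for whatever your identity provider publishes.

```python
# A minimal sketch of validating an OAuth/OIDC bearer token issued by a central
# identity provider. The JWKS URL, audience and issuer are placeholders; they come
# from your own identity provider's metadata.
import jwt  # PyJWT

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # hypothetical
EXPECTED_AUDIENCE = "api://my-application"                   # hypothetical
EXPECTED_ISSUER = "https://idp.example.com/"                 # hypothetical

def validate_bearer_token(token: str) -> dict:
    """Verify the token's signature and claims, returning the decoded claims if valid."""
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )
```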

There is a growing shift in the industry for these to be cloud services, however, such as Microsoft Azure Active Directory or any number of other services that are available today.

The obvious benefit of SSO is that you have a single username and password to remember. It also greatly reduces the security risk your business carries from an auditing and compliance perspective: having a single authentication directory can help reduce your business’s overall exposure to compromise from external or internal threats.

Well, that about wraps it up. IAM is for the most part an enabler: it enables your business to be adequately prepared for the consumption of cloud services and cloud enablement, which can help reduce your overall IT spend over the coming years. But one thing I think I’ve highlighted throughout this particular series is requirements, requirements, requirements… repetitive I know, but for IAM so crucially important.

If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.

 

What’s a DEA?

In my last post I made a reference to a “Data Exchange Agreement” or DEA, and I’ve since been asked a couple of times about it. So I thought it would be worthwhile writing a post about what it is and why it’s of value to you and to your business.

So what’s a DEA? Well, in simple terms it’s exactly what the name states: an agreement that defines the parameters in which data is exchanged between Service A and Service B, Service A being the producer of attribute X and Service B the consumer. I’ve intentionally used a vague example here because a DEA is used among many services in business and/or government and is not specifically related to IT or IAM services. But if your business adopts a controlled data governance process, it can play a pivotal role in how IAM services are implemented and adopted throughout the entire enterprise.

So what does a DEA look like? Well, in an IAM service it’s quite simple: you specify your “Source” and your “Target” services. An example could be the following:

Source
  • ServiceNow
  • AurionHR
  • PROD Active Directory
  • Microsoft Exchange

Target
  • PROD Active Directory
  • Resource Active Directory Domain
  • Microsoft Online Services (Office 365)
  • ServiceNow

As you can see, this only tells you where the data is coming from and where it’s going to; it doesn’t go into any of the detail around what data is being transported and in which direction. A separate section in the DEA details this, and an example is provided below:

| MIM Attribute | Flow | ServiceNow Attribute | Source | User Types | Notes |
| --- | --- | --- | --- | --- | --- |
| accountName | –> | useraccountname | MIM | All | |
| employeeID | –> | employeeid | AurionHR | All | |
| employeeType | –> | employeetype | AurionHR | All | |
| mail | <– | email | Microsoft Exchange | All | |
| department | –> | department | AurionHR | All | |
| telephoneNumber | –> | phone | PROD AD | All | |
| o365SourceAnchor | –> | ImmutableID | Resource Domain | All | |
| employeeStatus | –> | status | AurionHR | All | |
| dateOfBirth | –> | dob | AurionHR | CORP Staff | yyyy-MM-dd |
| division | –> | region | AurionHR | CORP Staff | |
| firstName | –> | preferredName | AurionHR | CORP Staff | |
| jobTitle | –> | jobtitle | AurionHR | CORP Staff | |
| positionNumber | –> | positionNumber | AurionHR | CORP Staff | |
| legalGivenNames | <– | firstname | ServiceNow | Contractors | |
| localtionCode | <– | location | ServiceNow | Contractors | |
| ManagerID | <– | manager | ServiceNow | Contractors | |
| personalTitle | <– | title | ServiceNow | Contractors | |
| sn | <– | sn | ServiceNow | Contractors | |
| department | <– | department | ServiceNow | Contractors | |
| employeeID | <– | employeeid | ServiceNow | Contractors | |
| employeeType | <– | employeetype | ServiceNow | Contractors | |
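
If you want to keep the agreed flows in a machine-readable form alongside the written agreement, the same information can be captured as simple structured data. The sketch below is illustrative only and reuses a few rows from the table above; the field names are assumptions.

```python
# Illustrative only: a handful of rows from the table above expressed as structured data,
# so the agreed flows can be validated or diffed programmatically. Field names are assumptions.
DEA_FLOWS = [
    {"mim": "accountName", "direction": "->", "servicenow": "useraccountname",
     "source": "MIM", "user_types": "All", "notes": ""},
    {"mim": "employeeID", "direction": "->", "servicenow": "employeeid",
     "source": "AurionHR", "user_types": "All", "notes": ""},
    {"mim": "dateOfBirth", "direction": "->", "servicenow": "dob",
     "source": "AurionHR", "user_types": "CORP Staff", "notes": "yyyy-MM-dd"},
    {"mim": "legalGivenNames", "direction": "<-", "servicenow": "firstname",
     "source": "ServiceNow", "user_types": "Contractors", "notes": ""},
]

# Example: list every attribute AurionHR is authoritative for under this agreement.
aurion_owned = [flow["mim"] for flow in DEA_FLOWS if flow["source"] == "AurionHR"]
print(aurion_owned)
```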

This might seem like a lot of detail, but it is actually only a small section of what would be included in a DEA of this type. The whole purpose of the agreement is to define which attributes are managed by which systems and flow to which target systems, and as many IAM consultants can tell you, the full list would be substantially more than what’s provided in this example. And this is just an example for a single system; this is something that’s done for every application that consumes data related to your organisation’s staff members.

One thing that you might also notice is that I’ve highlighted two attributes in the sample above in bold. Why, you might ask? Well, the point of including this was to highlight data sets that are considered “sensitive”; within the DEA you would classify these as sensitive data with specific conditions around that data set. This is something your business would define and word appropriately, but it could be as simple as a section stating the following:

“Two attributes are classed as sensitive data in this list and cannot be reproduced, presented or distributed under any circumstances”

One challenge that is often confronted within any business is application owners wanting “ownership” of the data they consume. Utilising a DEA provides clarity over who owns the data and what your applications can do with the data they consume, removing any uncertainty.

To summarise: the point of this post wasn’t to provide you with a template or example DEA to use; it was to help explain what a DEA is, what it’s used for, and what parts of it can look like. No two DEAs are the same, and providing you with a full example DEA would only lead you to recreate it from scratch anyway. But hopefully this helps you understand what is needed.

As with any of my posts if you have any questions please pop a comment or reach out to me directly.

 

The Vault!


The vault, or more precisely the “Identity Vault”, is a single-pane view of all the collated data about your users from the various source data repositories. This sounds like a lot of jargon, but it’s quite simple really.

In the diagram below we look at a really simple attribute: firstName (givenName within AD).

DataFlow

As you will see, at the centre is the attribute, and branching off it are all the connected systems, e.g. Active Directory. What this doesn’t illustrate very well is the specific data flow: where the data is coming from and where it’s going to. That comes down to import and export rules, as well as any precedence rules that you need to put in place.
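
To make the precedence idea concrete, here is a minimal sketch of how a vault might choose between competing values for the same attribute. The precedence order is an assumption for illustration, the system names are borrowed from examples elsewhere in these posts, and a real IAM product has its own configuration model for this.

```python
# A minimal sketch of attribute precedence, assuming the vault holds candidate values
# contributed by each connected system and a simple ordered precedence list per attribute.
# The structure and ordering are illustrative only.
PRECEDENCE = {
    "firstName": ["AurionHR", "PROD CORP AD", "ServiceNow"],  # highest precedence first
    "mail":      ["Microsoft Exchange", "PROD CORP AD"],
}

def resolve_attribute(attribute: str, contributed: dict[str, str]) -> str | None:
    """Pick the value from the highest-precedence system that actually supplied one."""
    for system in PRECEDENCE.get(attribute, []):
        if system in contributed and contributed[system]:
            return contributed[system]
    return None

# Two systems disagree on firstName; the precedence order decides which value the vault keeps.
print(resolve_attribute("firstName", {"PROD CORP AD": "Joe", "AurionHR": "Joseph"}))  # Joseph
```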

The Identity Vault, or central data repository, provides a central store of an identity’s information aggregated from a number of sources. It’s also able to identify the data that exists within each of the connected systems, from which it either collects the identity information or to which it provides the information as a target system. Sounds pretty simple, right?

Further to the basics described above, each object in the Vault has a unique identifier, or an anchor. This is a unique value that is automatically generated when the user is created, to ensure that regardless of what happens to the user’s details throughout the lifecycle of the user object, we are able to track the user and apply changes accordingly. This is particularly useful when you have multiple users with the same name, for example; it avoids the wrong person being updated when changes occur.

| Attribute | User 1 | User 2 |
| --- | --- | --- |
| FirstName | John | John |
| LastName | Smith | Smith |
| Department | Sales | Sales |
| UniqueGUID | 10294132 | 18274932 |
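
A minimal sketch of the anchor idea, assuming the vault simply generates a GUID at creation time and keys every record by it (the record fields are illustrative only):

```python
# Generate an immutable anchor at creation time and key the vault record by that anchor
# rather than by name, so two users who share a name can never collide.
import uuid

def create_vault_record(first_name: str, last_name: str, department: str) -> dict:
    return {
        "uniqueGUID": str(uuid.uuid4()),   # never changes, even if every other attribute does
        "firstName": first_name,
        "lastName": last_name,
        "department": department,
    }

john_1 = create_vault_record("John", "Smith", "Sales")
john_2 = create_vault_record("John", "Smith", "Sales")
assert john_1["uniqueGUID"] != john_2["uniqueGUID"]  # identical names, distinct identities
```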

So the table above shows the simplest form of a user’s identity profile, whereas a complete identity profile will consist of many more attributes, some of which may be custom attributes for specific purposes, as in the example below:

| Attribute | ContributingMA | Value |
| --- | --- | --- |
| AADAccountEnabled | AzureAD Users | TRUE |
| AADObjectID | AzureAD Users | 316109a6-7178-4ba5-b87a-24344ce1a145 |
| accountName | MIM Service | jsmith |
| cn | PROD CORP AD | Joe Smith |
| company | PROD CORP AD | Contoso Corp |
| csObjectID | AzureAD Users | 316109a6-7178-4ba5-b87a-24344ce1a145 |
| displayName | MIM Service | Joe Smith |
| domain | PROD CORP AD | CORP |
| EXOPhoto | Exchange Online Photos | System.Byte[] |
| EXOPhotoChecksum | Exchange Online Photos | 617E9052042E2F77D18FEFF3CE0D09DC621764EC8487B3517CCA778031E03CEF |
| firstName | PROD CORP AD | Joe |
| fullName | PROD CORP AD | Joe Smith |
| mail | PROD CORP AD | joe.smith@contoso.com.au |
| mailNickname | PROD CORP AD | jsmith |
| o365AccountEnabled | Office365 Licensing | TRUE |
| o365AssignedLicenses | Office365 Licensing | 6fd2c87f-b296-42f0-b197-1e91e994b900 |
| o365AssignedPlans | Office365 Licensing | Deskless, MicrosoftCommunicationsOnline, MicrosoftOffice, PowerAppsService, ProcessSimple, ProjectWorkManagement, RMSOnline, SharePoint, Sway, TeamspaceAPI, YammerEnterprise, exchange |
| o365ProvisionedPlans | Office365 Licensing | MicrosoftCommunicationsOnline, SharePoint, exchange |
| objectSid | PROD CORP AD | AQUAAAAAAAUVAAAA86Yu54D8Hn5pvugHOA0CAA== |
| sn | PROD CORP AD | Smith |
| source | PROD CORP AD | WorkDay |
| userAccountControl | PROD CORP AD | 512 |
| userPrincipalName | PROD CORP AD | jsmith@contoso.com.au |

So now we have a more complete picture of the data, where it’s come from, and how we connect that data to a user’s identity profile. We can start to look at how we synchronise that data to any and all managed targets. It’s very important to control this flow, though; to do so we need to have strict governance controls in place about what data is to be distributed throughout the environment.

One practical approach to managing this is by using a data exchange agreement. This helps the organisation have a more defined understanding of what data is being used by which application and for what purpose. It also helps define strict controls on what the application owners can do with the data being consumed, for example strictly prohibiting the application owners from sharing that data with anyone without the written consent of the data owners.

In my next post we will start to discuss how we manage target systems: how we use the data we have to provision services and manage user information through what’s referred to as synchronisation rules.

As with all my posts, if you have any questions please drop me a note.

 

Protect Your Business and Users from Email Phishing in a Few Simple Steps

The goal of email phishing attacks is to obtain personal or sensitive information from a victim, such as credit card, password or username data, for malicious purposes; that is to say, to trick a victim into performing an unwitting action aimed at stealing sensitive information from them. This form of attack is generally conducted by means of spoofed emails or instant messaging communications which try to deceive the target as to the nature of the sender and the purpose of the email they’ve received. An example would be an email claiming to be from a bank, asking for credential re-validation in the hope of stealing credentials by means of a cloned website.

Some examples of email Phishing attacks.

Spear phishing

Phishing attempts directed at specific individuals or companies have been termed spear phishing. Attackers may gather personal information about their target to increase their probability of success. This technique is by far the most successful on the internet today, accounting for 91% of attacks. [Wikipedia]

Clone phishing

Clone phishing is a type of phishing attack whereby a legitimate, and previously delivered, email containing an attachment or link has had its content and recipient address(es) taken and used to create an almost identical or cloned email. The attachment or link within the email is replaced with a malicious version and then sent from an email address spoofed to appear to come from the original sender. It may claim to be a resend of the original or an updated version to the original. This technique could be used to pivot (indirectly) from a previously infected machine and gain a foothold on another machine, by exploiting the social trust associated with the inferred connection due to both parties receiving the original email. [Wikipedia]

Whaling

Several phishing attacks have been directed specifically at senior executives and other high-profile targets within businesses, and the term whaling has been coined for these kinds of attacks. In the case of whaling, the masquerading web page/email will take a more serious executive-level form. The content will be crafted to target an upper manager and the person’s role in the company. The content of a whaling attack email is often written as a legal subpoena, customer complaint, or executive issue. Whaling scam emails are designed to masquerade as a critical business email, sent from a legitimate business authority. The content is meant to be tailored for upper management, and usually involves some kind of falsified company-wide concern. Whaling phishers have also forged official-looking FBI subpoena emails, and claimed that the manager needs to click a link and install special software to view the subpoena. [Wikipedia]

Staying ahead of the game from an end user perspective

  1. Take a very close look at the sender’s email address.

Phishing emails will generally use an address that looks genuine but isn’t (e.g. accounts@paypals.com), or try to disguise the email’s real sender with what looks like a genuine address but isn’t, using HTML trickery (see below).

  2. Is the email addressed to you personally?

Companies with whom you have valid accounts will always address you formally by name and surname. Formulations such as ‘Dear Customer’ are a strong indication that the sender doesn’t know you personally, and such emails should be treated with caution.

  3. What web address is the email trying to lure you to?

Somewhere within a phishing email, often surrounded by links to completely genuine addresses, will be one or more links to the means by which the attacker intends to steal from you, in many cases a web site that looks genuine enough. However, there are a number of ways of confirming its validity.

  4. Hover your cursor over any link you receive in an email before you click it if you’re unsure, because it will reveal the real destination, sometimes hidden behind deceptive HTML. Also look at the address very closely. The deceit may be obvious or well hidden in a subtle typo (e.g. accouts@app1e.com).

a. Be wary of URL redirection services such as bit.ly which hide the ultimate destination of a link.

b. Be wary of very long URLs. If in doubt, do a Google search for the root domain.

  5. Does the email contain poor grammar and spelling mistakes?

Many times the quality of a phishing email isn’t up to the general standard of a company’s official communications. Look for spelling mistakes, barbarisms, grammatical errors and odd characters in the email as a sign that something may be wrong.

 

Mitigating the impact of Phishing attacks against an organization

  1. Implement robust email and web access filtering.

  2. User education.

  3. Deploy an antivirus endpoint protection solution.

  4. Deploy Phishing attack aware endpoint protection software.

 

Where’s the source!

In this post I will talk about data (aka the source)! In IAM there’s really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality then the results are going to be unfavourable at best and catastrophic at worst.

With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree that the best place to source the majority of your user data is the HR system. Why? Well, simply put, it’s where all the important information about the individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR system is going to be updated to reflect the change to your job title, as well as any direct-report changes which may come as a result of that sort of move.

I also said that data can, and normally will, come from multiple sources. A typical example: generally speaking, temporary and contract staff will not be managed within the central HR system; simply put, the HR team don’t care about contractors. So where do they come from, and how are they managed? For smaller organisations this is usually something that’s done manually in AD with no real governance in place. For larger organisations this is less than ideal, can be a nightmare for the IT team to manage, and can create quite a large security risk for the business, so a primary data source for contractors becomes necessary. What that is is entirely up to the business and what works for them: I have seen a standard SQL web application being used to populate a database, I’ve seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
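
As a concrete illustration of the “SQL web application populating a database” option, the sketch below reads contractor records from a hypothetical table and shapes them into the same structure as HR-sourced staff; the table and column names are assumptions.

```python
# Illustrative only: read contractor records from a hypothetical SQL table and normalise
# them into the same shape as HR-sourced staff records. Table and column names are assumptions.
import sqlite3

def load_contractors(db_path: str) -> list[dict]:
    connection = sqlite3.connect(db_path)
    try:
        rows = connection.execute(
            "SELECT contractor_id, first_name, last_name, manager_id, end_date "
            "FROM contractors WHERE end_date >= date('now')"
        ).fetchall()
    finally:
        connection.close()
    return [
        {
            "employeeID": contractor_id,
            "firstName": first_name,
            "sn": last_name,
            "managerID": manager_id,
            "employeeType": "Contractor",   # distinguishes these records from HR-sourced staff
            "contractEndDate": end_date,
        }
        for contractor_id, first_name, last_name, manager_id, end_date in rows
    ]
```
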
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email, phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try to keep it simple for the purpose of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository, or “Identity Vault”. This keeps an accurate record of the information coming from multiple sources to compile the user’s complete identity profile. From this we can then start to manage what information flows to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it is updated in the user’s profile in any attached system that is managed through the enterprise IAM services.

In my next post I will go into the finer details of the central repository, or the “Identity Vault”.

So, in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or Identity Vault. That will go into detail around how we set precedence on data from specific systems, to ensure that if there is a difference in the data coming from the different sources, only the highest-precedence value is applied. We will also discuss how we augment the data sets to ensure that we only collect the information necessary for the management of that user and the applications they use within your business.

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

The Art Of War – Is your Tech Department Combat Ready?

Over the course of a series of articles, I plan to address strategic planning and why it’s becoming more important in the technology-fuelled world we live in. It’s critical that an organisation’s response to shifting external events is measured and appropriate. The flow-on effects of change on the nature and structure of the IT department also have to be addressed. Is a defensive or attack formation needed for what lies ahead?

In this first post, I’ll introduce what is meant by strategy and provide a practical planning process. In future posts, I’ll aim to address subsets of the processes presented in this post.

Operational vs Strategic

I often see technology departments so focussed on operations, they begin to lose sight of what’s coming on the horizon & how their business will change as a result. What’s happening in the industry? What are competitors up to? What new technologies are relevant? These questions are typically answered through strategic planning.

This can, however, be quite challenging for an IT function with an operational mindset (focussing on the now with some short-term planning). This typically stems from IT being viewed as an operational cost i.e. used to improve internal processes that don’t explicitly improve the organisation’s competitive position or bring it closer to fulfilling its strategic vision.

So, what exactly is “strategy”? A quick crash course suggests it aims to answer four key questions:

  • Where are we now? Analyse the current position and likely future scenarios.
  • Where are we going? Develop a plausible and sustainable desired future state (goals & objectives)
  • How will we get there? Develop strategies and programs to get there
  • How will we know we are on track? Develop KPIs, monitoring and review processes to make sure you get where you intended in the most effective and efficient way

Strategy has plagued influential minds throughout history. It’s interesting to note this style of thinking developed through war-times as it answers similar questions e.g. How are we going to win the war?

While it is sometimes hard to see the distinction, an organisation that confuses operational applications for strategic uses will be trapped into simply becoming more efficient at the same things it does now.

Why does IT need to think strategically?
The Menzies Research Centre has released a Statement of National Challenges [1] citing the need for Australian organisations to embrace digital disruption. The report highlights that Australia has been fortunate in having 25 years of continued economic growth, which has caused complacency with regard to organisations’ capability to constantly explore new value-creating opportunities.

“Australian businesses cannot wish these disruptive technologies away, and nor should they do so, as they represent an opportunity to be part of a reshaping of the global economy and should be embraced.” – Mr Tony Shepherd, Commission of Audit Chair 2014 & Shepherd Review Author 2017

We also see the Australian Innovation System Report 2016 [2] providing a comparison of “innovation-active” businesses versus “non-innovation-active” businesses, with some interesting insights:

Innovation

Australia’s Chief Economist, Mark Cully, is calling for organisations to look at ways to reinvent themselves through the application of new technologies. He argues persistent innovators significantly outgrow other businesses in terms of sales, value added, employment and profit growth.

Here comes the problem: in a technology-fuelled world, organisations will struggle to innovate with an operationally focussed technology department. We believe there’s a relationship between an organisation’s ability to compete and its strategic use of technology. Operational IT departments are typically challenged by a lack of agility, an inability to influence enterprise strategy and not having a clear sense of purpose, all of which are required to innovate, adapt and remain relevant.

I want to be clear that technology isn’t the only prerequisite for innovation; other elements include culture, creativity & leadership. These aren’t addressed in this post, perhaps topics for another blog.

What does a strategic planning process look like?
In today’s rapidly evolving landscape, a technology-focussed strategic planning activity should be short, sharp and deliver high value. This approach helps ensure the impact of internal and external forces is properly accounted for in the planning and estimation of ICT activity over the planning period, typically 3-5 years. Below is our approach:

Process

  1. Looking out involves checking the industry, what competitors are doing and what technology is relevant.
  2. Looking in focusses on understanding the current environment, the business context & what investments have already been made.
  3. These two areas are then aligned to the organisation’s strategy and relevant business unit plans, which then inform the technology areas of focus, including architecture, operating model & governance.
  4. From an execution perspective, investment portfolios are established, from which business cases are developed. Risk is also factored in.
  5. Measuring & monitoring will ensure the strategy is being executed correctly and gives the intelligence & data on which to base strategic revisions as needed.

There’s value in hiring external assistance to guide you through the process. They’ll inject new thinking, approach your challenges from new perspectives and give a measured assessment of the status quo. This comes from being immersed in this stuff on a daily basis, just like you are with your organisation. When these two areas are combined, the timing & sequence of the plans is laid down to ensure your tech department is ‘combat ready’!

[1] The Shepherd Review 2017: Statement of National Challenges – https://www.menziesrc.org/images/PDF/TheShepherdReview_StatementOfNationalChallenges_March2017web.pdf

[2] Australian Innovation System Report 2016 – https://industry.gov.au/Office-of-the-Chief-Economist/Publications/Documents/Australian-Innovation-System/2016-AIS-Report.pdf

7 tips for making UX work in Agile teams

Agile is here to stay. Corporates love it, start-ups embrace it and developers live by it. So there is no denying that Agile is going nowhere, and we have to work with it. For a number of years I’ve tried to align User Experience practices with Agile methods, and I haven’t always met with great success.

But nevertheless, there are a lot of lessons that I’ve learnt during the process and I’m going to share 7 tips that always worked for me.

agile-and-ux

Create a shared vision early on

Get all the decision makers (Dev leads, Project managers and Project sponsors) in one room. Get a whiteboard and discuss why are we developing this product? What problems are we trying to solve? Once you have an overall theme, ask more specific questions such as how many app downloads are we targeting in the first week?

This workshop will give you a snapshot of a shared vision and common goals of the organisation. During every checkpoint of this project, this shared vision will serve as a guide, helping teams prioritise user stories and make the right trade-offs along the way.

Engage stakeholders wherever possible

Regardless of how many people in your team are in agreement, most of the time the decision makers are the Project Sponsors or Division Managers. You do not want them to appear randomly during sprint 3 planning and poop on it.

I highly recommend cultivating strong relationships with these stakeholders early on in the project. Invite them to all UX workshops, and if they can’t or don’t attend, find a way to communicate the summary of the meeting in an engaging way (not an email with a PDF attachment). I used to put together a Keynote slide and have it ready on my iPad for a quick 5-minute summary.

Work at least one sprint ahead of the Dev team

The chances of getting everything (research, wireframes, designs and development) done for a single card in one sprint are slim.

You’ll struggle to get everything going at the same time. When you are designing, the developers are counting sheep because they are waiting on you to give them something to work with. You don’t want to be the reason behind the declining burn chart. Always be at least one sprint (if not two sprints) ahead of the development team. Sometimes it takes longer to research and validate design decisions, but if you are a sprint ahead you are not holding up the developers, and you have ample time to respond to design challenges.

Foster a collaborative culture

Needless to say, collaborate as much as you can. Try to get the team involved (just the people sitting around you is fine) even for small things such as changing a button’s colour. It makes them feel important, makes them feel good, and fosters a culture of collaboration.

If you don’t collaborate with the team on small (or big) things, don’t expect them to tell you everything either. Your opinion might not be very valuable in most of the Dev discussions, such as whether to use ReactJS or Angular, but knowing that the Devs are going to use a certain JS library will definitely help you, in one way or another, in planning future sprints.

Follow an Iterative Design Process

DO NOT design mock-ups to begin with. I know all the customers want to see something real that they can sell to their bosses. But the pretty-design-first approach falls on its face every time. I want my customers to detach themselves from aesthetics and focus on structure and interaction first. Once we have worked out the hardware, then we can look at building the software.

Try Iterative Design Process. Sketch on the whiteboard, get the stakeholders to put a vision on paper and come up with a structure first. Then iterate. Here is my design process:

  1. Paper sketches
  2. Low fidelity wireframes (on white board / PC)
  3. Interactive wireframes – B/W (on PC)
  4. Draft Designs – in Colour
  5. Final Designs
  6. Pass onto the build team.

Do a round of user testing with at least 5 people

User testing is not expensive, it does not take days or weeks, and you don’t have to talk to 25 people.

There is a lot of research showing that testing with only 5 users is highly effective and valuable to product development. Pick users from different demographics, put an interactive wireframe together and run it past them for about 30–45 minutes. After 3 users you’ll start noticing common themes appearing, and after 5 you’ll have enough pointers to take back to the team for another round of iteration. Repeat this process every two to four sprints.

Hold a brief stand-up meeting every day

Hold a stand-up meeting first thing in the morning. The aim is to keep everyone updated on progress, recognise blockers and pick up new cards. This ensures all the team members are on the same page and are working towards a common goal.

However, be mindful of the time since some discussions are lengthier and may need to be taken offline. We generally time-box stand-ups for 15 minutes.

Azure API Management Step by Step – Use Cases

Use Cases

In this second post about Azure API Management, let’s discuss use cases. Why “use cases”?

Use cases help to manage complexity, since each focuses on one specific usage aspect at a time. I am grouping and versioning the use cases to facilitate your learning process and to help keep track of future changes. You are welcome to use these diagrams to demonstrate Azure API Management features.

API on-boarding is a key aspect of API governance and the first thing to be discussed. How can I publish my existing and future API back ends to API Management?

API description formats like the Swagger Specification (aka the Open API Initiative, https://openapis.org/) are fundamental to properly implementing automation and DevOps in your APIM initiative. APIs can be imported using Swagger, created manually, or created as part of a custom automation/integration process.

Azure API Management administrators can group APIs by product, enabling a subscription workflow. Product visibility is linked to user groups, providing restricted access to APIs. You can manage your API policies as code through a dedicated Git source control repository available to your APIM instance. Secrets and constants used by policies are managed by a key/value (string) service called properties.

apim-use-cases-adm-api-onboarding

The Azure API Management platform provides a rich developer portal. Developers can create an account/profile, discover APIs and subscribe to products. API documentation, source code samples in multiple languages, a console to try APIs, API subscription key management and analytics are the main features provided.

apim-use-cases-developer

The management and operation of the platform plays an important role in daily tasks. For enterprises, user groups and users (developers) can be fully integrated with Active Directory. Analytics dashboards and reports are available. Email notifications and templates are customisable. The APIM REST API and PowerShell commands are available for most platform features, including exporting analytics reports.

apim-use-cases-administrator

Security administration use cases group different configurations. Delegation allows custom development of portal sign-in, sign-up and product subscription. OAuth 2.0 and OpenID provider registrations are used by the developer portal console, when trying APIs, to generate the required tokens. Client certificate upload and management are done here or via automation. The developer portal identities configuration brings out-of-the-box integration with social providers. Git source control settings/management and APIM REST API tokens are available as well.

apim-use-cases-adm-security

Administrators can customise the developer portal using built-in content management system functionality. Custom pages and modern JavaScript development are now allowed. The blogs feature allows out-of-the-box blog/post publish/unpublish functionality. Applications submitted by developers can be published or unpublished by administrators, to be displayed on the developer portal.

apim-use-cases-adm-developer-poral

In summary, Azure API Management is a mature and live platform with a few new features under development, bringing strong integration with the Azure cloud. Click here for the RoadMap.

In my next post, I will deep-dive into API on-boarding strategies.

Thanks for reading @jorgearteiro

Posts: 1) Introduction  2) Use Cases

Enterprise Cloud Take Up Accelerating Rapidly According to New Study By McKinsey

A pair of studies published a few days ago by global management consulting firm McKinsey & Company, entitled IT as a service: From build to consume, show enterprise adoption of Infrastructure as a Service (IaaS) accelerating rapidly over the next two years, into 2018.

Of the two, one examined the on-going migrations of 50 global businesses. The other saw a large number of CIOs, from small businesses up to Fortune 100 companies, interviewed on the progress of their transitions and the results speak for themselves.

1. Compute and storage is shifting massively to cloud service providers.


“The data reveals that a notable shift is under way for enterprise IT vendors, with on-premise shipped server instances and storage capacity facing compound annual growth rates of –5 percent and –3 percent, respectively, from 2015 to 2018.”

With on-premise storage and server sales growth going into negative territory, it’s clear the next couple of years will see the hyperscalers of this world consume an ever increasing share of global infrastructure hardware shipments.

2. Companies of all sizes are shifting to off-premise cloud services.


“A deeper look into cloud adoption by size of enterprise shows a significant shift coming in large enterprises (Exhibit 2). More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

The report also anticipates that the number of enterprises hosting at least one workload on an IaaS platform will increase by 41% in the three-year period to 2018, while for small and medium-sized businesses the increase will be a somewhat less aggressive 12% and 10% respectively.

3. A fundamental shift is underway from a build to consume model for IT workloads.

a-fundamental-shift

“The survey showed an overall shift from build to consume, with off-premise environments expected to see considerable growth (Exhibit 1). In particular, enterprises plan to reduce the number of workloads housed in on-premise traditional and virtualized environments, while dedicated private cloud, virtual private cloud, and public infrastructure as a service (IaaS) are expected to see substantially higher rates of adoption.”

Another takeaway is that the share of traditional and virtualized on-premise workloads will shrink significantly, from 77% and 67% in 2015 to 43% and 57% respectively in 2018, while virtual private cloud and IaaS will grow from 34% and 25% in 2015 to 54% and 37% respectively in 2018.

Cloud adoption will have far-reaching effects

The report concludes “McKinsey’s global ITaaS Cloud and Enterprise Cloud Infrastructure surveys found that the shift to the cloud is accelerating, with large enterprises becoming a major driver of growth for cloud environments. This represents a departure from today, and we expect it to translate into greater headwinds for the industry value chain focused on on-premise environments; cloud-service providers, led by hyperscale players and the vendors supplying them, are likely to see significant growth.”

About McKinsey & Company

McKinsey & Company is a worldwide management consulting firm. It conducts qualitative and quantitative analysis in order to evaluate management decisions across the public and private sectors. Widely considered the most prestigious management consultancy, McKinsey’s clientele includes 80% of the world’s largest corporations, and an extensive list of governments and non-profit organizations.

Web site: McKinsey & Company
The full report: IT as a service: From build to consume

Azure API Management Step by Step

Introduction

As a speaker and cloud consultant, I have learned a lot and received a lot of feedback about the Azure API Management platform from customers and community members. I will share some of my learnings in this series of blog posts. Let’s get started!

apim-image

APIs, or application programming interfaces, are everywhere! They are already part of many companies’ strategies. But how could we consolidate internal and external APIs? How could you productize and monetize them for your company?

We often build APIs to be consumed by a single application. However, we could also build these APIs to be shared. If you write HTTP APIs around a single, specific business requirement, you encourage API re-usability and adoption. Bleeding-edge technologies like containers and serverless architectures are pushing this approach even further.

API strategy and governance come into play to help build a gateway on top of your APIs. Companies are developing MVPs (minimum viable products) and time to market is fundamental; for example, we do not have time to write authentication, caching and analytics over and over again. Azure API Management can help you make this happen.

apim-consolidation

apim-consolidation-operations

This demo API Management instance that I created for Kloud Solutions illustrates how you could create a unified API endpoint to expose your APIs. Multiple “services” are published there behind a single authentication layer. If your email service back-end implementation uses an external API, like SendGrid, you can inject this authentication at the API Management gateway layer, making it transparent to end users.

Azure API Management provides a highly scalable, multi-regional gateway that can be deployed in any Azure region around the world. It is a fully PaaS (platform-as-a-service) API management solution, where you do not have to manage any infrastructure. This, combined with other Azure offerings like App Services (Web Apps, API Apps, Logic Apps and Functions), provides an enterprise-grade platform to deliver any API strategy.

apim-diagram

Looking at the diagram above, we can decouple API Management into 3 main components:

  • Developer Portal – Customizable web site exclusive to your company to allow internal and external developers to engage, discover and consume APIs.
  • Gateway (proxy) – the engine of APIM, where policies can be applied to your inbound, back-end and outbound traffic. It’s very scalable and allows multi-regional deployment, Azure Virtual Network VPN, Azure Active Directory integration and a native caching solution. Policies are written in XML and C# expressions to define complex rules like rate limits, quotas, caching, JWT token validation, authentication, XML to JSON and JSON to XML transformations, URL rewriting, CORS, IP restrictions, setting headers, etc.
  • Administration Portal (aka Publisher Portal) – administration of your APIM instance can be done via the portal. Automation and DevOps teams can use the APIM management REST API and/or PowerShell commands to fully integrate APIM into your onboarding, build and release processes.

Please keep in mind that this strategy can apply to any environment and architecture where HTTP APIs are exposed, whether they are new microservices or older legacy applications.
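
To show what the gateway looks like from a consumer’s point of view, here is a minimal sketch of calling an API published behind API Management using a subscription key. The host name and API path are placeholders; the Ocp-Apim-Subscription-Key header carries the key issued to the caller’s subscription.

```python
# A minimal sketch of calling an API that is published behind Azure API Management.
# The host and path are placeholders; the subscription key comes from the developer portal.
import requests

APIM_HOST = "https://your-instance.azure-api.net"   # hypothetical APIM gateway host
API_PATH = "/email-service/messages"                # hypothetical published API path
SUBSCRIPTION_KEY = "<your-subscription-key>"        # issued per product subscription

response = requests.get(
    f"{APIM_HOST}{API_PATH}",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    timeout=30,
)
response.raise_for_status()
print(response.status_code, response.json())
```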

Feel free to create a user at https://kloud.portal.azure-api.net. I will try to keep this Azure API Management instance usable for demo purposes only, no guarantees. You can then create your own development instance from the Azure Portal later.

In my next post, I will talk about API Management use cases and give you a broader view of how deep this platform can go. Click here.

Thanks for reading! @jorgearteiro

Posts: 1) Introduction  2) Use Cases