Options to consider for SharePoint Framework solutions deployment

There are various options to package and deploy a SharePoint Framework solution, and as part of the packaging and deployment process, developers have to identify the best approach for their team. Planning the right approach for your solution can become a nightmare if you haven’t weighed the options properly.

Having worked on multiple implementations of SPFx solutions for some time now, I have formed a view of the various options and the approach for each. Hence, in this blog we will look at these options and at the merits and limitations of each.

At a high level, below are the main components that are deployed as part of SPFx package deployment:

  1. The minified JavaScript file for all code
  2. The web part manifest file
  3. The compiled web part file
  4. The package (.sppkg) file with all package information

Deployment Options

Please check this blog for an overview of the steps for building and packaging an SPFx solution. The packaged solution (.sppkg) file can then be deployed to an App Catalog site. The assets of the package (items 1-3 above) can be deployed by any of the four options below. We will look at the merits and limitations of each.

1. Deploy to Azure CDN or Office 365 CDN

The assets can be deployed to an Azure CDN. The deployment script is already part of the SPFx solution, so the deployment can be done from within the solution. More detailed steps for setting this up are here.

Note: Please remember to enable CORS on the storage account before deploying the package. If CORS is not enabled before the CDN profile is used, you might have to delete and recreate the storage account.
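
As a minimal sketch (assuming the standard SPFx project layout; the account, container and key values below are placeholders for your own storage account), the config/deploy-azure-storage.json file in the solution would look something like this:

{
  "workingDir": "./temp/deploy/",
  "account": "mystorageaccount",
  "container": "spfx-assets",
  "accessKey": "<storage account access key>"
}

With that in place, the assets can be bundled and pushed from the solution folder:

gulp bundle --ship
gulp deploy-azure-storage
gulp package-solution --ship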

Merits:

  • Easy deployment using gulp
  • Faster access to assets and web part resources because of CDN hosting
  • Ability to add geographical restrictions (if needed)

Limitations:

  • Dependency on an Azure subscription
  • Proper setup steps are required for the Azure CDN. In some cases, if the CDN is not set up properly, the deployment has to be done again.
  • Since the assets are deployed to a CDN endpoint, this option may not be recommended if the assets need restricted access

2. Deploy to Office 365 Public CDN

For this option, you will need to enable and set up the Office 365 CDN in your tenancy before deployment. For more details on setting this up, check the link here.
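
As a hedged sketch, enabling the public CDN and adding an origin can be done with the SharePoint Online Management Shell (the tenant and origin URLs below are placeholders):

Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true
Add-SPOTenantCdnOrigin -CdnType Public -OriginUrl "sites/cdn/siteassets/spfx"

# Verify the configured origins
Get-SPOTenantCdnOrigins -CdnType Public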

Merits:

  • Faster access to assets and web part resources because of CDN hosting
  • No Azure subscription requirement
  • Content is accessible from SharePoint Interface

Limitations:

  • Manual copy of asset files to the CDN-enabled SharePoint library
  • The Office 365 CDN is a tenant setting and has to be enabled for the whole tenancy
  • Since the assets are deployed to a CDN endpoint, this option may not be recommended if the assets need restricted access
  • Accidental deletion could cause issues

3. Deploy to SharePoint document library

Another option is to copy the compiled assets to a SharePoint document library anywhere in the tenancy. Setting this up is quite simple: first set “includeClientSideAssets”: false in the package-solution.json file, and then set the CDN details in the write-manifests.json file.
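
As a minimal sketch (the library URL is a placeholder for your own document library), the two settings would look like this.

./config/package-solution.json (only the relevant property shown):

{
  "solution": {
    "includeClientSideAssets": false
  }
}

./config/write-manifests.json:

{
  "cdnBasePath": "https://contoso.sharepoint.com/sites/cdn/SiteAssets/spfx"
}

After bundling with gulp bundle --ship, the contents of the temp/deploy folder are what gets copied to that library.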

Merits:

  • No need for additional Azure hosting or enabling the Office 365 CDN
  • Access to asset files from the SharePoint interface

Limitations:

  • Manual copy of asset files to the SharePoint library
  • Accidental deletion could cause issues

4. Deploy to ClientAssets in App Catalog

From SPFx version 1.4, it is possible to include assets as part of the package file and deploy them to the hidden ClientAssets library residing in the App Catalog. This is enabled by setting “includeClientSideAssets”: true in the package-solution.json file.

Merits:

  • No extra steps needed to package and deploy assets

Limitations:

  • Increases the payload of the package file
  • Tenant admins take on the risk of deploying script files to the Tenant App Catalog

Conclusion

In this blog, we saw the various options for SPFx solution deployment and the merits and limitations of each approach.

Happy Coding !!!

 

Provisioning complex Modern Sites with Azure Functions and Flow – Part 3 – Post Provisioning Site Configuration

In the previous two blogs, part 1 and part 2, we looked at the steps to create a Modern team site and apply a custom provisioning template to it. In this blog, we will look at the steps of the post provisioning process that implements site-specific requirements. Some of these could be:

1. Apply default values to list fields
2. Create a bunch of default folders
3. Manage Security groups (SP level) and permission level.
4. Navigation level changes
5. Add/Enable web parts or custom actions (SPFx extensions)

Most of the above steps have been part of SharePoint provisioning processes for a long time; they are just less complex now, with provisioning templates doing a lot of the heavy lifting. I will be writing very soon about PnP templates, dos and don’ts.

Prerequisites

One key point to add is that with Modern Team Sites and Office 365 Groups, we cannot add AD security groups into an Office 365 Unified Group. For more information, please see this link.

The apply template process (link) takes about 45-90 min for complex templates (it is a long-running process), so it wouldn’t be possible to start the post process from a Flow wait state. Hence, we could trigger the post provisioning process on the update of an inventory item, or poll an endpoint for the status. In our case, we triggered the process by updating the inventory list with a status indicating that the apply template process is complete.

Post Provisioning Process

1. The first step of the post provisioning process is to make sure that noscript is enabled on the team site (link) and that all dependencies are ready, such as term stores, navigation items, site pages, content types, site columns etc. For a complex site template, this step will check for failure conditions to make sure that all artefacts are in place (a quick sketch of such checks follows the note below).

Note: The below sequence of steps could vary based on the solution and site structure in place, but this was the faster way to isolate issues and ensure dependencies.
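
As an illustration, a hedged PnP PowerShell sketch of such failure checks might look like the below (the site URL, list and content type names are hypothetical):

# Connect to the newly provisioned site
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/newteamsite" -Credentials (Get-Credential)

# Fail fast if any expected artefact is missing
$requiredLists = @("Documents", "Site Pages")
$requiredContentTypes = @("Project Document")

foreach ($list in $requiredLists) {
    if (-not (Get-PnPList -Identity $list -ErrorAction SilentlyContinue)) {
        throw "Dependency check failed: list '$list' is missing"
    }
}

foreach ($ct in $requiredContentTypes) {
    if (-not (Get-PnPContentType -Identity $ct -ErrorAction SilentlyContinue)) {
        throw "Dependency check failed: content type '$ct' is missing"
    }
}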

After the failure checks are done, we will start with the site structure changes and navigation changes. For implementing navigation changes, check the blogs here and here.

2. Next, we will update any site-specific site columns, site content types and permission level changes. An upcoming blog will have more details on this.

3. After that, we will update the list structure and move to list-specific updates, such as setting default values, modifying list properties, etc.

4. Next, let’s move on to apps and site pages updates. These changes take time because of the SharePoint ALM lifecycle and possible duplications, so error handling is key. Please check the blog here, and for the steps to deploy apps and web parts, here.

5. Before we finalise the site, let’s provision folders and metadata. This is not a simple process if you have to set metadata values for a large number of folders, as in our case of 800 recursive folders (child in parent). So we will use the metadata file override (see the sketch after the note below). All the values in the defaults file have to be hardcoded before the override.

Note: The metadata file override is not generally a good approach, because possible corruption can render the library unusable, so do error handling for all cases. For the CSOM approach, check here.
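
As a hedged sketch, the folders themselves could be created with PnP PowerShell before the metadata override (the library and folder names below are hypothetical):

$folders = @(
    "Shared Documents/Projects",
    "Shared Documents/Projects/Active",
    "Shared Documents/Projects/Archived"
)

foreach ($path in $folders) {
    # Resolve-PnPFolder creates the folder, and any missing parents, if they don't exist
    Resolve-PnPFolder -SiteRelativePath $path | Out-Null
}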

6. Finally, we will set the site property bag values to apply some of the tags that make the site searchable. Here is the blog for the same.
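
As a minimal sketch using PnP PowerShell (the key names and values are hypothetical), indexed property bag values are picked up by search:

Set-PnPPropertyBagValue -Key "SiteType" -Value "ProjectSite" -Indexed
Set-PnPPropertyBagValue -Key "BusinessUnit" -Value "Finance" -Indexed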

Conclusion

In this blog, we saw the final part of the site provisioning process: setting up site properties and attributes to prepare the site before handing it off to the business for use.

Provisioning complex Modern Sites with Azure Functions and Microsoft Flow – Part 1 – Architecture

In one of my previous blogs here, I discussed creating Office 365 Groups using an Azure Function and Flow. The same process can also be used to provision Modern Team Sites in SharePoint Online, because Modern Team Sites are Office 365 Groups too. However, if you are creating a complex Modern Team Site with lots of libraries, content types, term store associated columns etc., it will be challenging to do it with a single Azure Function.
Thus, in this blog (part 1), we will look at the architecture of a solution to provision a complex Modern Team Site using multiple Azure Functions and Flows. This is an approach that went through four months of validation and testing. There might be other options, but this one worked for a complex team site which takes around 45-90 mins to provision.
Solution Design
To start with, let’s look at the solution design. The solution consists of two major components:
1. Template Creation – Create a SharePoint Modern Team site to be used as a template and generate a Provisioning template from it
2. Provisioning Process – Create a SharePoint inventory list to run the Flow and Azure Functions. There will be three Azure Functions that will run three separate parts of the provisioning lifecycle. More details about the Azure Functions will be in an upcoming blog.
Get the Provisioning Template
The first step in the process is to create a clean site that will be used as a reference template site for the provisioning template. In this site, create all the lists, libraries, site columns and content types, and set any other necessary site settings.
In order to make sure that the generated template doesn’t have any elements which are not needed for provisioning, use the following PnP PowerShell cmdlet. The below cmdlet excludes any content type hub association, the ALM API handlers and site security, which are not needed for provisioning.

Get-PnPProvisioningTemplate -Out "" -ExcludeHandlers ApplicationLifecycleManagement, SiteSecurity -ExcludeContentTypesFromSyndication

The output of the above cmdlet is a ProvisioningTemplate.xml file which can be applied to new sites to set up the same SharePoint elements. To learn more about the provisioning template file, schema and allowed tags, check the link here.
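
For reference, a minimal sketch of applying the generated template to a target site with PnP PowerShell (the site URL is a placeholder; the upcoming blogs cover how this runs inside an Azure Function):

Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/newteamsite" -Credentials (Get-Credential)
Apply-PnPProvisioningTemplate -Path ".\ProvisioningTemplate.xml"
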
Team Site Provisioning Process
The second step in the process is to create and apply the template to a Modern SharePoint Team Site using Flow and Azure Functions. The detailed steps are as follows:
1. Create an inventory list to capture all the requirements for site creation
2. Create two Flows:
a) Create and Apply Template Flow, and
b) Post Provisioning Flow
3. Create three Azure Functions:
a) Create a blank Modern Team Site
b) Apply the provisioning template to the above site. This is a long-running process and can take about 45-90 min to apply a complex template with about 20 libraries, 20-30 site columns and 10-15 content types.
Note: Azure Functions on a Consumption plan have a timeout of 10 min. Host the Azure Function on an App Service Plan for the above to work without issues (see the host.json sketch below).
c) Post Provisioning, to apply changes that are not supported by the provisioning template, such as creating default folders etc.
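
As a hedged sketch of the note above: when hosted on an App Service Plan, the timeout can be raised in the function app’s host.json (the two-hour value below is an assumption sized to the 45-90 min apply-template window; exact limits depend on the plan and runtime version):

{
  "functionTimeout": "02:00:00"
}
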
Below is the process flow for the provisioning process. It has steps from 1 – 11, going from creating the site to applying the template. A brief list of the steps is as follows:

  1. Call the Create Site flow to start the Provisioning Process
  2. Call the Create Site Azure Function
  3. Create the Modern Team Site in the Azure Function and set any dependencies required for the apply template step, such as navigation items, pages etc., and then return to the Flow
  4. Call the Apply Template Azure Function.
  5. Get the previously generated ProvisioningTemplate.xml file from a shared location
  6. Apply the template to the newly created Modern site. Note: the Flow call times out because it cannot wait for such a long-running process
  7. Update the status column in the Site Directory for the post provisioning flow to start
  8. Call the Post Provisioning Flow to run the post provisioning Azure Function
  9. The post provisioning Azure Function will complete the remaining SharePoint changes which were not covered by the apply template step, such as setting field default values, creating folders in any libraries, associating default values to taxonomy fields etc.

Conclusion:
In this blog, we saw, at a high architectural level, how to create a provisioning process to handle complex Modern Team Site creation. Next, we will deep dive into the Azure Functions for the create site, apply template and post provisioning steps in upcoming blogs.
Happy Coding!!!

Planning Site structure and Navigation in SharePoint Modern Experience Communication and Team sites

If you are planning to implement, or are implementing, Modern Team Sites or Communication Sites, there is a change in best practices for planning and managing the site structure, site hierarchy and navigation. This is a very common question during my presentations – how do we manage site structures, navigation and content in the Modern experiences?
So, in this blog, we will look at a few strategies for planning site structure and navigation in Modern experience sites.
1. First and foremost, get rid of nested subsites and site hierarchy navigation. Recently, Microsoft has been pushing a flat site collection structure with Modern Team and Communication Sites, which adds a lot of benefit for managing isolation and content. So, the new approach is flat site collections and no subsites. (There are various advantages of flat-structure site collections, which will be listed in another upcoming blog.)
2. Secondly, to achieve a hierarchy relationship among sites for navigation, news items, search etc., use Hub Sites (see the PowerShell sketch after this list). Hub sites are the new way of connecting SharePoint site collections together. Besides, they have the added advantage of aggregating information, such as news and search results, from the associated sites. So, create a hub site for navigation across related sites.
3. The best candidate for hub sites, in my opinion, is Communication Sites. Communication sites have a top navigation that can be easily extended for hub sites. They are perfect for publishing information and showing aggregated content. However, it also depends on whether the hub site is meant for a team or business unit, or for the company as a whole. So, use a Communication Site as a hub site if targeting the whole company or a major group.
4. There is one navigation structure – the quick launch (left hand) is the top navigation for Communication Sites, so there is no need to maintain two navigations. If you ask me, this is a big win and removes a lot of confusion for end users.
5. The quick launch for Modern Team and Communication Sites allows three levels of sub-hierarchy, which lets you define a nested custom hierarchy structure for navigation that can be different from the content structure and site structure.
6. Focus on content, not on navigation or the location of content, through new Modern web parts such as Highlighted Content, Quick Links etc., which allow you to find content anywhere easily.
7. Finally, a few limitations of Modern site structure and navigation (as of June 2018) for reference. Hopefully, these will change soon.

    • Permissions management still needs to be handled at each site collection; there is no nested structure there yet. It is, however, possible to use AD groups for a consistent permissions set-up
    • Office 365 Unified Security Groups cannot have AD or other Office 365 groups nested for Modern Team Sites. But SharePoint site permissions can be managed through AD groups
    • Contextual navigation bolding is missing in hub sites, i.e. if you click on a link to move to a child site, the navigation is not automatically bolded, but this might be coming soon
    • Navigation headers in Modern sites cannot be blank and need to be set to a link
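
For point 2, a hedged sketch of registering a hub site and associating a related site with it, using the SharePoint Online Management Shell (all URLs are placeholders):

Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Promote a Communication site to a hub site
Register-SPOHubSite -Site "https://contoso.sharepoint.com/sites/marketing-hub" -Principals $null

# Associate a related site with the hub
Add-SPOHubSiteAssociation -Site "https://contoso.sharepoint.com/sites/campaigns" -HubSite "https://contoso.sharepoint.com/sites/marketing-hub"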

Conclusion:
In this blog, we looked at an approach to Modern site structures, hierarchy and navigation.

Defining IT Strategy

Information Technology (IT) Strategy is a comprehensive plan that outlines how technology should be used to meet IT and business goals.

The following approach can be used to define your organisation’s IT Strategy.

Inputs:

  1. Organisational Business Priorities
  2. Organisational Key Behaviours
  3. How Business will be Supported by IT
  4. Technology Influences
  5. IT Strategic Principles
  6. IT Service Management Operating Principles

First of all, in order to define an IT Strategy, we need to obtain the above inputs (as many as possible). The approach to defining the strategy is based on what the business priorities are and how IT is going to shape itself to support the business goals. Those IT priorities will then become the strategy, with key initiatives to support and achieve both IT and business goals.
Example Step 1: Organisational Business Priorities

  • Customers First
  • Efficient
  • People and Relationship
  • Secure Service
  • Outstanding Growth

Example Step 2: Organisational Key Behaviours

  • Accountable
  • Connected
  • Innovation
  • High Performance

Example Step 3: How the Business will be Supported by IT

  • Keep the Business Running
  • Execute Business Change
  • Leadership
  • Adaptability

Example Step 4: Technology Influences

  • Technology Trends
  • Current Industry
  • Agile
  • Innovative Solutions
  • Adaptable Changes

Example Step 5: IT Strategic Principles

  • Put Business at First
  • Efficiency
  • People and Relationships

Example Step 6: IT Service Management Operating Principles

  • Provide Effective Service
  • Driven by Agility
  • Create Efficiency
  • Enforce Resilience
  • Value People

Outcome 1:

Example: IT Strategy

Strengthen service and customer focus via:

  • Improving our customer satisfaction
  • Creating a service delivery culture

Promote agility and flexibility through the services we offer by:

  • Investing in our BYO consumerisation
  • Increasing our utilisation of virtualisation technology
  • Investigating emerging technologies to support flexible workforce

Innovate efficiency and strengthen our partnership by:

  • Reducing lifecycle cost through Cloud program and other cost saving initiatives
  • Optimising operations and strategically investing in improvements

Improve service quality, reliability, and maintainability by:

  • Focusing on the stability & robustness of our systems
  • Improving the quality of our processes by driving quality upstream

Invest in people to grow and support by:

  • Creating a collaborative, proactive, outside-in culture

 
Outcome 2:
The above strategy should have key initiatives that support the strategy (supporting both IT and business goals) and an implementation/transformation roadmap.
Summary
Hope this is useful. This is one of the approaches that can be used to define your IT Strategy and key initiatives.

EU GDPR – is it relevant to Australian companies?

The new General Data Protection Regulation (GDPR) from the European Union (EU) imposes new rules on organisations that offer goods and services to people in the EU, or that collect and analyse data tied to EU residents, no matter where the organisation or the data processing is located. GDPR comes into force in May 2018.
If your customers reside in the EU, whether you have a presence in the EU or not, then GDPR applies to you. The internet lets you interact with customers wherever they are, and GDPR applies to anyone that deals with EU people, wherever they are.
And the term personal data covers everything from IP addresses, to cookie data, to submitted forms, to CCTV, and even to a photo of a landscape that can be tied to an identity. Then there is sensitive personal data, such as ethnicity, sexual orientation and genetic data, which has enhanced protections.
And for the first time there are very strong penalties for non-compliance – the maximum fine for a GDPR breach is €20M, or 4% of worldwide annual turnover, whichever is greater. The maximum fine can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core of Privacy by Design concepts.
Essentially GDPR states that organisations must:

  • provide clear notice of data collection
  • outline the purpose the data is being used for
  • collect only the data needed for that purpose
  • ensure that the data is only kept as long as required for processing
  • disclose whether the data will be shared within or outside of the EU
  • protect personal data using appropriate security
  • ensure individuals can access, correct and erase their personal data, and stop an organisation processing their data
  • notify authorities of personal data breaches.

Specific criteria for companies required to comply are:

  • A presence in an EU country
  • No presence in the EU, but it processes personal data of European residents
  • More than 250 employees
  • Fewer than 250 employees, but the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data. That effectively means almost all companies.

What does this mean in real terms to common large companies? Well…

  • Apple turned over about USD$230B in 2017, so the maximum fine applicable to Apple would be USD$9.2B
  • CBA turned over AUD$26B in 2017 and so their maximum fine would “only” be AUD$1B
  • Telstra turned over AUD$28.2B in 2017, the maximum fine would be AUD$1.1B.

Ouch.
The GDPR legislation won’t impact Australian businesses, will it? What if an EU resident gets a Telstra phone or a CBA credit/travel card whilst on holiday in Australia? Or what if your organisation has local regulatory data retention requirements that appear, on the surface at least, to be at odds with GDPR obligations?
I would get legal advice if the organisation provides services that may be used by EU nationals.
In a recent PwC pulse survey, “US Companies ramping up General Data Protection Regulation (GDPR) budgets”, 92% of respondents stated that GDPR is one of several top priorities.
Technology alone cannot make an organisation GDPR compliant. There must be policy, process and people changes to support GDPR. But technology can greatly assist organisations that need to comply.
Microsoft has invested in providing assistance to organisations impacted by GDPR.
Office 365 Advanced Data Governance enables you to intelligently manage your organisation’s data with classifications. The classifications can be applied automatically; for example, if German PII data covered by GDPR is present in a document, the document can be marked as confidential when saved. With the document marked, the data can be protected, whether that is encrypting the file, assigning permissions based on user IDs, or adding watermarks indicating sensitivity.
An organisation can choose to encrypt its data at rest in Office 365, Dynamics 365 or Azure with its own encryption keys. Alternatively, a Microsoft-generated key can be used. Sounds like a no-brainer: all customers will use customer keys. However, the customer must have an HSM (Hardware Security Module) and a proven key management capability.
Azure Information Protection enables an organisation to track and control marked data. Distribution of data can be monitored, and access and access attempts logged. This information can allow an organisation to revoke access from an employee or partner if data is being shared without authorisation.
Azure Active Directory (AD) can provide risk-based conditional access controls that assess the risk of the user and of the session: can the user credentials be found in public data breaches? Is it an unmanaged device? Are they trying to access a sensitive app? Are they a privileged user, or have they just completed an impossible trip (logged in five minutes ago from Australia, and the current attempt is from somewhere that is a 12-hour flight away)? Based on that assessment, access can be provided, multi-factor authentication (MFA) can be requested, or access can be limited or denied.
Microsoft Enterprise Mobility + Security (EMS) can protect your cloud and on-premises resources. Advanced behavioural analytics are the basis for identifying threats before data is compromised. Advanced Threat Analytics (ATA) detects abnormal behaviour and provides advanced threat detection for on-premises resources. Azure AD provides protection from identity-based attacks and cloud-based threat detection, and Cloud App Security detects anomalies for cloud apps. Cloud App Security can detect which cloud apps are being used, control access, and support compliance efforts with regulatory mandates such as Payment Card Industry (PCI), the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley (SOX), the General Data Protection Regulation (GDPR) and others. Cloud App Security can apply policies to apps from Microsoft or other vendors, such as Box, Dropbox, Salesforce, and more.
Microsoft provides a set of compliance and security tools to help organisations meet their regulatory obligations. To reiterate: policy, process and people changes are required to support GDPR.
Please discuss your legal obligations with a legal professional to clarify any obligations that the EU GDPR may place on your organisation. Remember May 2018 is only a few months away.

Writing for the Web – that includes your company intranet!

You can have a pool made out of gold – if the water in it is as dirty and old as a swamp, no one will swim in it!

The same can be said about the content of an intranet. You can have the best design, the best developers and the most carefully planned out navigation and taxonomy but if the content and documents are outdated and hard to read, staff will lose confidence in its authority and relevance and start to look elsewhere – or use it as an excuse to get a coffee.
The content of an intranet is usually left to a representative from each department (if you’re lucky!), usually people who have been working in the company for years. Or, worse yet, to the IT guy. They are going to use very different language to a new starter, or to a bookkeeper, or to the CEO. Often content is written for an intranet “because it has to be there”, or to “cover ourselves”, or because “the big boss said so”, with no real thought about how easy it is to read or who will be reading it.
Content on the internet has changed and adapted to meet users’ needs and to help them find information as quickly as possible. Why isn’t the same attitude applied to your company intranet? If your workers weren’t so frustrated finding the information they need to do their jobs, maybe they’d perform better; maybe that would result in faster sales; maybe investing in the products your staff use is just as important as investing in the products your consumers use.
I’m not saying that you have to employ a copywriter for your intranet, but at least train the staff you nominate to be custodians of your land (your intranet – your baby).
Below are some tips for your nominated content authors.

The way people read has changed

People read differently thanks to the web. They don’t read. They skim.

  • They don’t like to feel passive
  • They’re reluctant to invest too much time at one site or your intranet
  • They don’t want to work for the information

People DON’T READ MORE because

  • What they find isn’t relevant to what they need or want.
  • They’re trying to answer a question and they only want the answer.
  • They’re trying to do a task and they only want what’s necessary.

Before you write, identify your audience

People come to an intranet page with a specific task in mind. When developing your page’s content, keep your users’ tasks in mind and write to help them accomplish those tasks. If your page doesn’t help them complete that task, they’ll leave (or call your department!)
Ask these questions

  • Who are they? New starters? Experienced staff? Both? What is the lowest common denominator?
  • Where are they? At work? At home? On the train? (Desktop, mobile, laptop, iPad)
  • What do they want?
  • How educated are they? Are they familiar with business jargon?

Identify the purpose of your text

For an intranet, the main purpose is to inform and educate, not so much to entertain or sell.
When writing to present information ensure:

  • Consistency
  • Objectivity
  • Consider tables, diagrams or graphs

Structuring your content

Headings and Sub headings

Use headings and sub headings for each new topic. This provides context and informs readers about what is to come. It provides a bridge between chunks of content.

Sentences and Paragraphs

Use short sentences. And use only 1-3 sentences per paragraph.
‘Front Load’ your sentences. Position key information at the front of sentences and paragraphs.
Chunk your text. Break blocks of text into smaller chunks. Each chunk should address a single concept. Chunks should be self-contained and context-independent.

Layering vs scrolling

It is OK if a page scrolls. It just depends how you break up your page! Users’ habits have changed in the past 10 years due to mobile devices; scrolling is not a dirty word, as long as the user knows there’s more content on the page through visual cues.

Use lists to improve comprehension and retention

  • Bullets for list items that have no logical order
  • Numbered lists for items that have a logical sequence
  • Avoid the lonely bullet point
  • Avoid death by bullet point

General Writing tips

  • Write in plain English
  • Use personal pronouns. Don’t say “Company XYZ prefers you do this”; say “We prefer this”
  • Make your point quickly
  • Reduce print copy – aim for 50% less copy than what you’d write for print
  • Be objective and don’t exaggerate
  • USE WHITE SPACE – this makes content easier to scan, and it is more obvious to the eye that content is broken up into chunks.
  • Avoid jargon
  • Don’t use inflated language

Hyperlinks

  • Avoid explicit link expressions (e.g. “Click here”)
  • Describe the information readers will find when they follow the link
  • Use VERBS (doing words) as links
  • Warn users of a large file size before they start downloading
  • Use links to remove secondary information from the bulk of the text (layer the content)

Remove

  • Empty words and phrases
  • Long words or phrases that could be shorter
  • Unnecessary jargon and acronyms
  • Repetitive words or phrases
  • Adverbs (e.g., quite, really, basically, generally, etc.)

Avoid Fluff

  • Don’t pad your writing with unnecessary sentences
  • Stick to the facts
  • Use objective language
  • Avoid adjectives, adverbs, buzzwords and unsubstantiated claims

Tips for proofreading

  1. Give it a rest
  2. Look for one type of problem at a time
  3. Double-check facts, figures, dates, addresses, and proper names
  4. Review a hard copy
  5. Read your text aloud
  6. Use a spellchecker
  7. Trust your dictionary
  8. Read your text backwards
  9. Create your own proofreading checklist
  10. Ask for help!

A Useful App

Hemingwayapp.com assesses how good your content is for the web.

A few examples (from a travel page)

Bad Example

Our Approved and Preferred Providers

Company XYZ has contracted arrangements with a number of providers for travel.  These arrangements have been established on the basis of extracting best value by aggregating spend across all of Company XYZ.

Why it’s Bad

Use personal pronouns such as we and you, so the user knows you are talking to them. They know where they work. Remove Fluff.

Better Example

Our Approved and Preferred Providers

We have contracted arrangements with a number of providers for travel to provide best value.

Bad Example

Travel consultant:  XYZ Travel Solutions is the approved provider of travel consultant services and must be used to make all business travel bookings.  All airfare, hotel and car rental bookings must be made through XYZ Travel Solutions

Why it’s bad

The author is saying the same thing twice in two different ways. This can easily be said in one sentence.

Better Example

Travel consultant

XYZ Travel Solutions must be used to make all airfare, hotel and car rental bookings.

Bad Example

Qantas is Company XYZ preferred airline for both domestic and international air travel and must be used where it services the route and the “lowest logical fare” differential to another airline is less than $50 for domestic travel and less than $400 for international travel

Why it’s bad

This sentence is too long, and it uses too much jargon. What does “lowest logical fare” even mean? And the second part does not make any sense. What exactly are they trying to say? I am not entirely sure, but if my guess is correct, it should read something like the below.

Better Example

Qantas is our preferred airline for both domestic and international air travel. When flying, choose the cheapest rate available within reason. You can only choose another airline if it is cheaper by $50 for domestic and cheaper by $400 for international travel.

Bad Example

Ground transportation:  Company XYZ preferred provider for rental vehicle services is Avis.  Please refer to the list of approved rental vehicle types in the “Relevant Documents” link to the right hand side of this page.

Why it’s bad

Front load your sentences, with the most important information first. Don’t make a user dig for a document; have the relevant document right there. Link the verb. Don’t say CLICK HERE!

Better Example

Ground transportation

Avis is our preferred provider to rent vehicles.
View our list of approved rental vehicles.

Bad Example

Booking lead times:  To ensure that the best airfare and hotel rate can be obtained, domestic travel bookings should be made between 14-21 days prior to travel, and international travel bookings between 21 and 42 days prior to travel.  For international bookings, also consider lead times for any visas that may need to be obtained.

Why it’s bad

Front load your sentence… most important information first. This is a good opportunity to chunk your text.

Better Example

Booking lead times

Ensure you book your travel early:
14-21 days prior to travel for domestic
21-42 days prior to travel for international (also consider lead times for visas)
This will ensure that the best airfare and hotel rates can be obtained.

 

Set your eyes on the Target!

So in my previous posts I’ve discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms, this could be your local Active Directory (Authentication Domain), but it could be anything, and with the adoption of cloud services, these target systems are often what drives the need for robust IAM services.
Something that we are often asked as IAM consultants is why: why should corporate applications be integrated with an IAM service? These are valid questions. Sometimes, depending on what the system is and what it does, integrating with an IAM system isn’t a practical solution, but more often there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Central authentication services (if supported)

Requirements
With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system will have individual requirements. Some could be as simple as just needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be derived largely from the application vendor, and to a lesser extent from the application owners and business requirements.
IAM systems are for the most part extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.
This could be as simple as requiring additional attributes that are not standard within both the IAM system and your source systems, or it could be the way in which you want the IAM system to interact with the application, i.e. utilising web services and building custom management agents to connect and synchronise data sets between them.
At the root of all this, when using an IAM system you have a constant flow of data that is all stored within the “Vault”. This helps ensure that any change to a user flows to all systems, not just the phone book. It also ensures that any changes are tracked through the governance processes that have been established and implemented as part of the IAM system. Changes made to a user’s identity information within a target application can be easily identified, to the point of saying this change was made on this date/time because a change to this person’s data occurred within the HR system at that time.
Integration
Most IAM systems will have management agents or connectors (the terms can vary depending on the vendor you use) built for the typical “out of box” systems, and these will for the most part satisfy the requirements of many, so you don’t tend to have to worry so much about those. But if you have “bespoke” systems that have been developed and built up over the years for your business, then this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM service, the custom management agents would be built using an Extensible Connectivity Management Agent (ECMA). How to build and develop management agents for FIM or MIM is quite an extensive discussion and something that would be better off in a separate post.
One of the “sticky” points here is that, most of the time, in order to integrate applications you need elevated access to the application’s back end, to be able to push data to and pull data from the application. However, the way this is done through any IAM system is through specific service accounts that are restricted to only perform the functions of the applications.
Authentication and SSO
Application integration tightens the security of data, with access to applications controlled through various mechanisms, and authentication plays a large part in the IAM process.
During the provisioning process, passwords are usually set when an account is created, either by using random password generators (preferred) or by setting a specific temporary password. When doing this, though, it’s always done with the intent of the user resetting their password when they first log on. The self-service functionality that can be introduced to do this enables the user to reset their password without ever having to know what the initial password was.
Depending on the application, separate passwords might be created that need to be managed. In most cases IAM consultants/architects will try to minimise this to the point of it not being required at all, but this isn’t always possible. In these situations, the IAM system has methods to manage this as well. In the Microsoft space, this can be controlled through password synchronisation using the “Password Change Notification Service” (PCNS). This basically means that if a user changes their main password, that change can be propagated to all the systems that have separate passwords.
Most applications today use standard LDAP authentication to provide access to their application services, which enables the password management process to be much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials which you control. Examples of these standard protocols are SAML, OAuth, WS-Fed/WS-Trust and many more.
There is, however, a growing shift in the industry for these to be cloud services, the likes of Microsoft Azure Active Directory or any number of other services that are available today.
The obvious benefit of SSO is that you have a single username and password to remember. It also greatly reduces the security risk that your business has from an auditing and compliance perspective: having a single authentication directory can help reduce the overall exposure your business has to compromise from external or internal threats.
Well, that about wraps it up. IAM is for the most part an enabler: it enables your business to be adequately prepared for the consumption of cloud services and cloud enablement, which can help reduce the overall IT spend your business has over the coming years. But one thing I think I’ve highlighted throughout this particular series is requirements, requirements, requirements… repetitive, I know, but for IAM so crucially important.
If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.
 

What's a DEA?

In my last post I made a reference to a “Data Exchange Agreement” or DEA, and I’ve since been asked a couple of times about this. So I thought it would be worthwhile writing a post about what it is and why it’s of value to you and to your business.
So what’s a DEA? Well, in simple terms it’s exactly what the name states: an agreement that defines the parameters in which data is exchanged between Service A and Service B – Service A being the producer of attributes X, and Service B the consumer. Now, I’ve intentionally used a vague example here, as a DEA is used amongst many services in business and/or government and is not specifically related to IT or IAM services. But if your business adopts a controlled data governance process, it can play a pivotal role in how IAM services are implemented and adopted throughout the entire enterprise.
So what does a DEA look like? Well, in an IAM service it’s quite simple: you specify your “Source” and your “Target” services. An example could be the following:

Source:
  • ServiceNow
  • AurionHR
  • PROD Active Directory
  • Microsoft Exchange

Target:
  • PROD Active Directory
  • Resource Active Directory Domain
  • Microsoft Online Services (Office 365)
  • ServiceNow

As you can see, this only tells you where the data is coming from and where it’s going to; it doesn’t go into any of the details around what data is being transported and in which direction. A separate section in the DEA details this; an example is provided below:

MIM Attribute       Flow   ServiceNow Attribute   Source                  User Types    Notes
accountName         –>     useraccountname        MIM                     All
employeeID          –>     employeeid             AurionHR                All
employeeType        –>     employeetype           AurionHR                All
mail                <–     email                  Microsoft Exchange      All
department          –>     department             AurionHR                All
telephoneNumber     –>     phone                  PROD AD                 All
o365SourceAnchor    –>     ImmutableID            Resource Domain         All
employeeStatus      –>     status                 AurionHR                All
dateOfBirth         –>     dob                    AurionHR                CORP Staff    yyyy-MM-dd
division            –>     region                 AurionHR                CORP Staff
firstName           –>     preferredName          AurionHR                CORP Staff
jobTitle            –>     jobtitle               AurionHR                CORP Staff
positionNumber      –>     positionNumber         AurionHR                CORP Staff
legalGivenNames     <–     firstname              ServiceNow              Contractors
locationCode        <–     location               ServiceNow              Contractors
ManagerID           <–     manager                ServiceNow              Contractors
personalTitle       <–     title                  ServiceNow              Contractors
sn                  <–     sn                     ServiceNow              Contractors
department          <–     department             ServiceNow              Contractors
employeeID          <–     employeeid             ServiceNow              Contractors
employeeType        <–     employeetype           ServiceNow              Contractors

This might seem like a lot of detail, but it is actually only a small section of what would be included in a DEA of this type, as the whole purpose of the agreement is to define which attributes are managed by which systems and go to which target systems. As many IAM consultants can tell you, it would be substantially more than what’s provided in this example. And this is just an example for a single system; this is something that’s done for all applications that consume data related to your organisation’s staff members.
One thing that you might also notice is that I’ve highlighted two attributes in the sample above in bold. Why, might you ask? Well, the point of including this was to highlight data sets that are considered “sensitive”; within the DEA you would specify these as being classified as sensitive data, with specific conditions around that data set. This is something your business would define and word appropriately, but it could be as simple as a section stating the following:
“Two attributes are classed as sensitive data in this list and cannot be reproduced, presented or distributed under any circumstances”
One challenge that is often confronted within any business is application owners wanting “ownership” of the data they consume. Utilising a DEA provides clarity over who owns the data and what your applications can do with the data they consume, removing any uncertainty.
To summarise: the point of this post wasn’t to provide you with a template or example DEA to use; it was to help explain what a DEA is, what it’s used for, and what parts of it can look like. No two DEAs are the same, and providing you with a full example DEA would only make you end up recreating it from scratch anyway. But I hope it helps you understand what is needed.
As with any of my posts, if you have any questions, please pop a comment or reach out to me directly.
 

The Vault!

The vault, or more precisely the “Identity Vault”, is a single-pane view of all the collated data of your users from the various data source repositories. This sounds like a lot of jargon, but it’s quite simple really.
In the diagram below, we look at a really simple attribute: firstName (givenName within AD).
As you will see, at the centre is the attribute, and branching off it are all the connected systems, e.g. Active Directory. What this doesn’t illustrate very well is the specific data flow: where the data is coming from and where it’s going to. This comes down to import and export rules, as well as any precedence rules that you need to put in place.
The Identity Vault, or central data repository, provides a central store of an identity’s information, aggregated from a number of sources. It’s also able to identify the data that exists within each of the connected systems, from which it either collects the identity information or to which it provides the information as a target system. Sounds pretty simple, right?
Further to the basics described above, each object in the Vault has a unique identifier, or an anchor. This is a unique value that is automatically generated when the user is created, to ensure that regardless of what happens to the user’s details throughout the lifecycle of the user object, we are able to track the user and apply changes accordingly. This is particularly useful when you have multiple users with the same name, for example; it avoids the wrong person being updated when changes occur.
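
As a trivial PowerShell sketch of the idea (the values are hypothetical), the anchor is generated once at creation time and is never derived from mutable fields:

$user = [pscustomobject]@{
    FirstName  = "John"
    LastName   = "Smith"
    Department = "Sales"
    UniqueGUID = [guid]::NewGuid().ToString()   # immutable anchor, never recalculated from the name
}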

Attribute    User 1      User 2
FirstName    John        John
LastName     Smith       Smith
Department   Sales       Sales
UniqueGUID   10294132    18274932

So the table above shows the simplest form of a user’s identity profile, whereas a complete identity profile will consist of many more attributes, some of which may be custom attributes for specific purposes, as in the example demonstrated below:

Attribute              Contributing MA           Value
AADAccountEnabled      AzureAD Users             TRUE
AADObjectID            AzureAD Users             316109a6-7178-4ba5-b87a-24344ce1a145
accountName            MIM Service               jsmith
cn                     PROD CORP AD              Joe Smith
company                PROD CORP AD              Contoso Corp
csObjectID             AzureAD Users             316109a6-7178-4ba5-b87a-24344ce1a145
displayName            MIM Service               Joe Smith
domain                 PROD CORP AD              CORP
EXOPhoto               Exchange Online Photos    System.Byte[]
EXOPhotoChecksum       Exchange Online Photos    617E9052042E2F77D18FEFF3CE0D09DC621764EC8487B3517CCA778031E03CEF
firstName              PROD CORP AD              Joe
fullName               PROD CORP AD              Joe Smith
mail                   PROD CORP AD              joe.smith@contoso.com.au
mailNickname           PROD CORP AD              jsmith
o365AccountEnabled     Office365 Licensing       TRUE
o365AssignedLicenses   Office365 Licensing       6fd2c87f-b296-42f0-b197-1e91e994b900
o365AssignedPlans                                Deskless, MicrosoftCommunicationsOnline, MicrosoftOffice, PowerAppsService, ProcessSimple, ProjectWorkManagement, RMSOnline, SharePoint, Sway, TeamspaceAPI, YammerEnterprise, exchange
o365ProvisionedPlans                             MicrosoftCommunicationsOnline, SharePoint, exchange
objectSid              PROD CORP AD              AQUAAAAAAAUVAAAA86Yu54D8Hn5pvugHOA0CAA==
sn                     PROD CORP AD              Smith
source                 PROD CORP AD              WorkDay
userAccountControl     PROD CORP AD              512
userPrincipalName      PROD CORP AD              jsmith@contoso.com.au

So now that we have a more complete picture of the data, where it’s come from and how we connect that data to a user’s identity profile, we can start to look at how we synchronise that data to any and all managed targets. It’s very important to control this flow, though; to do so, we need to have strict governance controls in place about what data is to be distributed throughout the environment.
One practical approach to managing this is by using a data exchange agreement. This helps the organisation have a more defined understanding of what data is being used by which application and for what purpose. It also helps define strict controls on what the application owners can do with the data being consumed, for example, strictly prohibiting the application owners from sharing that data with anyone without the written consent of the data owners.
In my next post, we will start to discuss how we manage target systems and how we use the data we have to provision services and manage user information through what’s referred to as synchronisation rules.
As with all my posts, if you have any questions please drop me a note.
 
