Writing for the Web – that includes your company intranet!

You can have a pool made out of gold – but if the water in it is as dirty and old as a swamp, no one will swim in it!

The same can be said about the content of an intranet. You can have the best design, the best developers and the most carefully planned navigation and taxonomy, but if the content and documents are outdated and hard to read, staff will lose confidence in the intranet's authority and relevance and start to look elsewhere – or use it as an excuse to get a coffee.

The content of an intranet is usually left to a representative from each department (if you're lucky!) – usually people who have been working in the company for years, or worse yet, the IT guy. They are going to use very different language to a new starter, a bookkeeper, or the CEO. Often content is written for an intranet “because it has to be there”, to “cover ourselves”, or because “the big boss said so”, with no real thought about how easy it is to read or who will be reading it.

Content on the internet has changed and adapted so that users can meet a need and find information as quickly as possible. Why isn't the same attitude applied within your company? If your workers weren't so frustrated finding the information they need to do their job, maybe they'd perform better; maybe that would result in faster sales; maybe investing in the products your staff use is just as important as investing in the products your consumers use.

I'm not saying that you have to employ a copywriter for your intranet, but at least train the staff you nominate to be custodians of your land (your intranet – your baby).

Below are some tips for your nominated content authors.

The way people read has changed

People read differently thanks to the web. They don’t read. They skim.

  • They don’t like to feel passive
  • They’re reluctant to invest too much time at one site or your intranet
  • They don’t want to work for the information

People DON’T READ MORE because

  • What they find isn’t relevant to what they need or want.
  • They’re trying to answer a question and they only want the answer.
  • They’re trying to do a task and they only want what’s necessary.

Before you write, identify your audience

People come to an intranet page with a specific task in mind. When developing your page's content, keep your users' tasks in mind and write to ensure you are helping them accomplish those tasks. If your page doesn't help them complete that task, they'll leave (or call your department!)

Ask these questions

  • Who are they? New starters? Experienced staff? Both? What is the lowest common denominator?
  • Where are they? At work? At home? On the train? (Desktop, laptop, mobile, iPad)
  • What do they want?
  • How educated are they? Are they familiar with business jargon?

Identify the purpose of your text

For an intranet, the main purpose is to inform and educate – not so much to entertain or sell.

When writing to present information, ensure:

  • Consistency
  • Objectivity
  • Tables, diagrams or graphs are used where they aid understanding

Structuring your content

Headings and Sub headings

Use headings and sub headings for each new topic. This provides context and informs readers about what is to come. It provides a bridge between chunks of content.

Sentences and Paragraphs

Use short sentences. And use only 1-3 sentences per paragraph.

‘Front Load’ your sentences. Position key information at the front of sentences and paragraphs.

Chunk your text. Break blocks of text into smaller chunks. Each chunk should address a single concept. Chunks should be self-contained and context-independent.

Layering vs scrolling

It is OK if a page scrolls – it just depends how you break up your page! Users' habits have changed in the past 10 years due to mobile devices; scrolling is not a dirty word, as long as visual cues tell the user there's more content on the page.

Use lists to improve comprehension and retention

  • Bullets for list items that have no logical order
  • Numbered lists for items that have a logical sequence
  • Avoid the lonely bullet point
  • Avoid death by bullet point

General Writing tips

  • Write in plain English
  • Use personal pronouns. Don’t say “Company XYZ prefers you do this” Say “We prefer this”
  • Make your point quickly
  • Reduce print copy – aim for 50% less copy than what you’d write for print
  • Be objective and don’t exaggerate
  • USE WHITE SPACE – this makes content easier to scan, and it is more obvious to the eye that content is broken up into chunks.
  • Avoid jargon
  • Don’t use inflated language

Hyperlinks

  • Avoid explicit link expressions (e.g. “click here”)
  • Describe the information readers will find when they follow the link
  • Use VERBS (doing words) as links
  • Warn users of a large file size before they start downloading
  • Use links to remove secondary information from the bulk of the text (layer the content)

Remove

  • Empty words and phrases
  • Long words or phrases that could be shorter
  • Unnecessary jargon and acronyms
  • Repetitive words or phrases
  • Adverbs (e.g. quite, really, basically, generally)

Avoid Fluff

  • Don't pad your writing with unnecessary sentences
  • Stick to the facts
  • Use objective language
  • Avoid adjectives, adverbs, buzzwords and unsubstantiated claims

Tips for proofreading

  1. Give it a rest
  2. Look for one type of problem at a time
  3. Double-check facts, figures, dates, addresses, and proper names
  4. Review a hard copy
  5. Read your text aloud
  6. Use a spellchecker
  7. Trust your dictionary
  8. Read your text backwards
  9. Create your own proofreading checklist
  10. Ask for help!

A Useful App

Hemingwayapp.com assesses how readable your content is for the web.

A few examples (from a travel page)

Bad Example

Our Approved and Preferred Providers

Company XYZ has contracted arrangements with a number of providers for travel.  These arrangements have been established on the basis of extracting best value by aggregating spend across all of Company XYZ.

Why it’s Bad

Use personal pronouns such as we and you, so users know you are talking to them. They know where they work. Remove fluff.

Better Example

Our Approved and Preferred Providers

We have contracted arrangements with a number of providers for travel to provide best value.

Bad Example

Travel consultant:  XYZ Travel Solutions is the approved provider of travel consultant services and must be used to make all business travel bookings.  All airfare, hotel and car rental bookings must be made through FCM Travel Solutions

Why it’s bad

The author is saying the same thing twice in two different ways. This can easily be said in one sentence.

Better Example

Travel consultant

XYZ Travel Solutions must be used to make all airfare, hotel and car rental bookings.

Bad Example

Qantas is Company XYZ preferred airline for both domestic and international air travel and must be used where it services the route and the “lowest logical fare” differential to another airline is less than $50 for domestic travel and less than $400 for international travel

Why it’s bad

This sentence is too long, and it is a case of using too much jargon. What does “lowest logical fare” even mean? And the second part does not make any sense. What exactly are they trying to say here? I am not entirely sure, but if my guess is correct it should read something like the example below.

Better Example

Qantas is our preferred airline for both domestic and international air travel. When flying, choose the cheapest rate available within reason. You can only choose another airline if it is cheaper by $50 for domestic and cheaper by $400 for international travel.

Bad Example

Ground transportation:  Company XYZ preferred provider for rental vehicle services is Avis.  Please refer to the list of approved rental vehicle types in the “Relevant Documents” link to the right hand side of this page.

Why it’s bad

Front load your sentences, with the most important information first. Don't make users dig for a document – link the relevant document right there. Link the verb. Don't say CLICK HERE!

Better Example

Ground transportation

Avis is our preferred provider to rent vehicles.

View our list of approved rental vehicles.

Bad Example

Booking lead times:  To ensure that the best airfare and hotel rate can be obtained, domestic travel bookings should be made between 14-21 days prior to travel, and international travel bookings between 21 and 42 days prior to travel.  For international bookings, also consider lead times for any visas that may need to be obtained.

Why it’s bad

Front load your sentence – most important information first. This is also a good opportunity to chunk your text.

Better Example

Booking lead times

Ensure you book your travel early:

14-21 days prior to travel for domestic

21-42 days prior to travel for international (also consider lead times for visas)

This will ensure that the best airfare and hotel rate can be obtained.

 

How to set Property Bag values in SharePoint Modern Sites using SharePoint Online .Net CSOM

If you think setting Property Bag values programmatically for modern SharePoint sites using CSOM is as straightforward as in the old classic SharePoint sites, there is a surprise waiting for you.

As you can see below, in the old C# CSOM this code would set the Property Bag values as follows:
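
A minimal sketch of that classic pattern (the site URL and credentials are placeholders):

using System.Security;
using Microsoft.SharePoint.Client;

var securePassword = new SecureString();
foreach (char c in "<password>") securePassword.AppendChar(c);

using (var context = new ClientContext("https://yourtenant.sharepoint.com/sites/modernteam"))
{
    context.Credentials = new SharePointOnlineCredentials("admin@yourtenant.onmicrosoft.com", securePassword);

    // Set a key/value pair in the web's property bag and persist it
    Web web = context.Web;
    web.AllProperties["PropertyBagValue1"] = "Property";
    web.Update();
    context.ExecuteQuery();
}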

The challenge with the above method is that the Property Bag values are not persisted after saving. So, if you load the context object again, the values are back to their initial values (i.e. without “PropertyBagValue1” = “Property”).

The cause of the issue is that modern sites have the NoScript setting enabled (IsNoScriptSite = true), which prevents us from updating the object model through script. Below is a screenshot of the documentation from MS docs – https://docs.microsoft.com/en-us/sharepoint/dev/solution-guidance/modern-experience-customizations-customize-sites

ModernTeamSitesPropertyBag_Limitation

Resolution:

Using PowerShell

Resolving this is quite easy using PowerShell, by setting -DenyAddAndCustomizePages to $false:

Set-SPOSite -Identity <SiteURL> -DenyAddAndCustomizePages $false

However, when working with the CSOM model we need to take another approach (see below).

Using .NET CSOM Model

When using the CSOM Model, we must use the SharePoint PnP Online CSOM for this.  Start by downloading it from Nuget or the Package Manager in Visual Studio.  Next, add the code below.
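
Below is a minimal sketch of that code, assuming the PnP SetSiteProperties extension method is available from the package (URLs and credentials are placeholders):

using System.Security;
using Microsoft.Online.SharePoint.TenantAdministration;
using Microsoft.SharePoint.Client;

var securePassword = new SecureString();
foreach (char c in "<password>") securePassword.AppendChar(c);
var credentials = new SharePointOnlineCredentials("admin@yourtenant.onmicrosoft.com", securePassword);

// Initialize Tenant Administration against the admin site
using (var adminContext = new ClientContext("https://yourtenant-admin.sharepoint.com"))
{
    adminContext.Credentials = credentials;
    var tenant = new Tenant(adminContext);

    // PnP extension method: set noScriptSite to false on the target site collection
    tenant.SetSiteProperties("https://yourtenant.sharepoint.com/sites/modernteam", noScriptSite: false);
}

// The final block: property bag values can now be set and will persist
using (var context = new ClientContext("https://yourtenant.sharepoint.com/sites/modernteam"))
{
    context.Credentials = credentials;
    Web web = context.Web;
    web.AllProperties["PropertyBagValue1"] = "Property";
    web.Update();
    context.ExecuteQuery();
}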

Walking through the code, we are first initializing Tenant Administration and then accessing the Site Collection using the SiteUrl property. Then we use the SetSiteProperties method to set noScriptSite to false.  After that, we can use the final block of code to set the property bag values.

When done, you may want to set the noScriptSite back to true, as there are implications on modern sites as specified in this article here – https://support.office.com/en-us/article/security-considerations-of-allowing-custom-script-b0420ab0-aff2-4bbc-bf5e-03de9719627c

 

Quick start guide for PnP PowerShell

It is quite easy and quick to set up PnP PowerShell on a local system and start using it. Considering that PnP PowerShell has been gaining a lot of momentum among Devs and Admins, I thought it would be good to have a post for everyone’s reference.

So why PnP PowerShell? Because it is the recommended and most updated PowerShell module for IT Pros to work on SharePoint Online, SharePoint on-premise and Office 365 Groups. It allows us to remotely maintain a SharePoint Online and Office 365 tenancy as we will see below.

Installation:

First, we have to make sure that we have a recent version of PowerShell (at least v4.0) on the system. We can check it using the command $PSVersionTable.PSVersion, which will give the output below.

PowerShell Version Check

If your version isn't sufficient, go to this link to upgrade – https://www.microsoft.com/en-us/download/details.aspx?id=40855. Since there are OS requirements accompanying PowerShell, please check the System Requirements to make sure the OS matches the specs.

Also, I would recommend installing the SharePoint Online Management Shell, because there are various tenant commands that you may not find in PnP PowerShell. It can be found here – https://www.microsoft.com/en-au/download/details.aspx?id=35588. Again, check the System Requirements to make sure the OS matches the specs.

After you have the proper PowerShell version, run the following commands to install the PnP PowerShell module, depending on whether you're using the Online or On-Premise version. You only need to install the version(s) you need to interact with (no need to install on-premise versions if you're only working online).

  • SharePoint Online – Install-Module SharePointPnPPowerShellOnline
  • SharePoint 2016 – Install-Module SharePointPnPPowerShell2016
  • SharePoint 2013 – Install-Module SharePointPnPPowerShell2013

The above command(s) will likely take a few minutes to run and complete.

Getting Started Guide:

To start using PnP PowerShell, we first have to understand how it works.

PnP PowerShell works in the context of the current connection – the site it is connected to. Assuming it is connected to a site collection, all commands refer by default to that site collection. This is different from SPO commands, which require you to have Tenant Admin rights. A benefit of the PnP approach is that you can have separate admin personnel to manage separate site collections or sites if needed.

Overview of Basic SharePoint Operations with PnP PowerShell

1. Connect to PnP Online

Connect-PnPOnline -Url <SiteURL>

In the above case, if you are not given a prompt to enter your credentials, create a credential object and pass it to the Connect command (as seen below).

$user = ""
$secpasswd = ConvertTo-SecureString "" -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential ($user, $secpasswd)
Connect-PnPOnline -Url $siteAdminURL -Credentials $mycreds

2. Get the web object using Get-PnPSite and then get the subsites in it

## For Site Collection (root web)
$web = Get-PnPSite 
## For Sub web
$web = Get-PnPWeb -Identity ""

Remember, PnP works in the context of the current connection.

3. To work on lists or objects of the site, use the “Includes” parameter to request them or else they will not be initialized.

$web = Get-PnPSite -Includes RootWeb.Lists

PnPPowerShell output

After you get the list object, you can traverse it as needed using the object.

4. To work on list items, you must request the list items; the same goes for updating the items.

$listItem = Get-PnPListItem -List "Site Assets"

DocumentResults

Set-PnPListItem -List "Site Assets" -Identity 2 -Values @{"" = ""}

PnPPowerShell_UpdateItem

So, as we saw above, we can use PnP PowerShell to maintain SharePoint assets remotely.

Please feel free to let me know if there are any specific questions you may have regarding the approach above and I will post about those as well.

Automate SharePoint Document Libraries Management using Flow, Azure Functions and SharePoint CSOM

I've been working on a client requirement to automate SharePoint library management via scripts, implementing a document lifecycle with many document libraries that have custom content types and require regular housekeeping of ownership and permissions.

Solution overview

To provide a seamless user experience, we decided to do the following:

1. Create a document library template (.stp) with all the prerequisite folders and content types applied.

2. Create a list to store the data about entries for said libraries. Add the owner and contributors for the library as columns in that list.

3. Whenever the title, owners or contributors are changed, the destination document library will be updated.

Technical Implementation

The solution has two main elements to automate this process:

1. Microsoft Flow – Trigger when an item is created or modified

2. Two Azure Functions – Create the library and update permissions

The broad steps and code are as follows:

1. When the flow is triggered, we would check the status field to find if it is a new entry or a change.

Note: Since Microsoft Flow doesn't have conditional triggers to differentiate between created and modified list item events, use a text column in the SharePoint list, set to Start, In Progress and Completed values, to identify create and update events.

2. The flow will call an Azure Function via an HTTP POST action. Below is the configuration of this.

AzureFunctionFromFlow

3. For the “Create Library” Azure Function, create an HTTP-triggered C# function in your Azure subscription.

4. In the Azure Function, open Properties -> App Service Editor. Then add a folder called bin and copy two files to it:

  • Microsoft.SharePoint.Client.dll
  • Microsoft.SharePoint.Client.Runtime.dll

KuduTools_AzureFunction

Create Lib App Service Editor

Please make sure to get the latest copy of the SharePoint PnP Online CSOM NuGet package. To do that, you can set up a VS solution and copy the files from there, or download the NuGet package directly and extract the files from it.

5. After copying the files, reference them in the Azure Function using the code below:

#r "Microsoft.SharePoint.Client.dll"
#r "Microsoft.SharePoint.Client.Runtime.dll"
#r "System.Configuration"
#r "System.Web"

6. Then create the SharePoint client context and a connection to the source list.
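
A sketch of this step (the list name and the siteUrl/userName/securePassword variables are hypothetical and would come from the function's configuration or the request payload):

var clientContext = new ClientContext(siteUrl);
clientContext.Credentials = new SharePointOnlineCredentials(userName, securePassword);

// Connect to the source list that stores the library entries
List sourceList = clientContext.Web.Lists.GetByTitle("Library Requests"); // hypothetical list name
clientContext.Load(sourceList);
clientContext.ExecuteQuery();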

7. After that, use the ListCreationInformation class to create the Document library from the library template using the code below.
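
A sketch of that step, assuming the .stp has been saved to the site collection's list template gallery (the template and variable names are placeholders; requires using System.Linq):

// Find the custom list template (.stp) in the site collection's gallery
ListTemplateCollection templates = clientContext.Site.GetCustomListTemplates(clientContext.Web);
clientContext.Load(templates);
clientContext.ExecuteQuery();
ListTemplate template = templates.First(t => t.Name == "DocLibTemplate"); // hypothetical template name

// Create the document library from that template
var creationInfo = new ListCreationInformation
{
    Title = libraryTitle, // taken from the list item that triggered the flow
    TemplateFeatureId = template.FeatureId,
    TemplateType = template.ListTemplateTypeKind
};
List newLibrary = clientContext.Web.Lists.Add(creationInfo);
clientContext.ExecuteQuery();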

8. After the library is created, break the role inheritance for the library as per the requirement.

9. Update the library permissions using a role assignment object.

10. To differentiate between people, SharePoint groups and AD groups, find the unique ID and add the principal as per the script below.
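
A combined sketch of steps 8-10 (the principal variables and the isSharePointGroup flag are hypothetical):

// Step 8: stop inheriting permissions from the parent web
newLibrary.BreakRoleInheritance(false, false);

// Step 10: resolve the principal. SharePoint groups come from the site's group
// collection; people and AD groups resolve via EnsureUser (AD security groups
// can be resolved by their claims token, e.g. "c:0t.c|tenant|<groupId>").
Principal principal = isSharePointGroup
    ? (Principal)clientContext.Web.SiteGroups.GetByName(groupName)
    : clientContext.Web.EnsureUser(loginNameOrEmail);

// Step 9: grant the principal a role (e.g. Contribute) via a role assignment
var roleBindings = new RoleDefinitionBindingCollection(clientContext);
roleBindings.Add(clientContext.Web.RoleDefinitions.GetByType(RoleType.Contributor));
newLibrary.RoleAssignments.Add(principal, roleBindings);
clientContext.ExecuteQuery();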

Note: In case you have people objects that are not in AD anymore because they have left the organisation, please refer to this blog for validating them before updating – https://blog.kloud.com.au/2017/11/07/resolving-user-not-found-issue-while-assigning-permissions-using-sharepoint-csom/

Note: Avoid calling item.Update() from the Azure Function, as that will trigger a second flow run and cause an iterative loop; instead, use item.SystemUpdate().

11. After the update is done, return the success value from the Azure Function to the flow, which will complete the loop.

As shown above, we can automate document library creation from a template, and manage permissions, using Flow and Azure Functions.

Migrating SharePoint 2013 on-premise to Office 365 using Sharegate

Recently I completed a migration project which brought a number of sub-sites within SharePoint 2013 on-premise to the cloud (SharePoint Online). We decided to use Sharegate as the primary tool due to its simplicity.

Although it might sound like a straightforward process, there are a few things worth checking pre- and post-migration, and I have summarized them here. I found it easier to have this information recorded in a spreadsheet with different tabs:

Pre-migration check:

  1. First thing, Get Site Admin access!

    This is the first and most important step: get yourself admin access. It could be a lengthy process, especially in a large corporate environment. The best level of access is being granted Site Collection Admin for all sites, but sometimes this might not be possible. Hence, getting Site Administrator access is the bare minimum for getting the migration to work.

    You will likely be granted Global Admin on the new tenant in most cases, but if not, ask for it!

  2. List down active site collection features

    Whatever features are activated on the source site will need to be activated on the destination site as well. Therefore, we need to record what has been activated on the source site. If any third-party feature is activated, you will need to liaise with the relevant stakeholders about whether it is still required on the new site. If it is, it is highly likely that a separate license is required, as the new environment is cloud-based rather than on-premise. Take Nintex Workflow for example: Nintex Workflow Online is a separate license from Nintex Workflow 2013.

  3. Segregate the list of sites, inventory analysis

    I found it important to list all the sites you are going to migrate and distinguish whether they are site collections or just subsites. What I did was put each site under a new tab, with all its site contents listed. Next to each list/library, I have fields for the list type, number of items and comments (if any).

    Go through each piece of content, preferably sitting down with the site owner to get into the details. Some useful questions to ask:

  • Is this still relevant? Can it be deleted or skipped for the migration?
  • Is this heavily used? How often does it get accessed?
  • Does this list have a custom edit/new form? Sometimes owners might not even know, so you might have to take an extra look by scanning through the forms.
  • Check if pages have custom script with site URL references, as these will need to be changed to accommodate the new site URL.

It would also be useful to get comprehensive knowledge of how much storage each site holds. This can help you work out which site has the most content, and hence is likely to take the longest during the migration. Sharegate has an inventory reporting tool which can help, but it requires Site Collection Admin access.

  4. Discuss some of the limitations

    Pages library

    Pages libraries under each site need specific attention, especially if you don't have site collection admin! Pages which inherit a content type and master page from the parent site will not have these migrated across by Sharegate, meaning these pages will either not be created on the new site, or they will simply show as using the default master page. This needs to be communicated and discussed with each owner.

    External Sharing

    External users will not be migrated across to the new site! These are users who won't be provisioned in the new tenant but still require access to SharePoint. They will need to be added (invited) manually to a site using their O365 email account or a Microsoft account.

    An O365 account would be whatever account they have been using to get on to their own SharePoint Online. If they have not had one, they would need to use their Microsoft account, which would be a Hotmail/Outlook account. Once they have been invited, they need to respond to the email by signing into the portal in order to get provisioned. The new SPO site collection will need to have external sharing enabled before external access can happen. For more information, refer to: https://support.office.com/en-us/article/Manage-external-sharing-for-your-SharePoint-Online-environment-C8A462EB-0723-4B0B-8D0A-70FEAFE4BE85

    What can’t Sharegate do?

    Some of the following minor things cannot be migrated to O365:

  • User alerts – user will need to reset their alerts on new site
  • Personal views – user will need to create their personal views again on new site
  • Web part connections – any web part connections will not be preserved

For more, refer: https://support.share-gate.com/hc/en-us/categories/115000076328-Limitations

Performing the migration:

  1. Pick the right time

    Doing the migration in a low-activity period would be ideal. User communications should be sent out as early as possible to inform users of the actual date. I tend to stick to the middle of the week, as that way we still have a couple of days left to solve any issues, instead of doing it on a Friday or Saturday.

  2. Locking old sites

    During the migration, we do not want any users making changes to the old site. If you are migrating site collections, fortunately there's a way to lock them down, provided you have access to the central admin portal. See https://technet.microsoft.com/en-us/library/cc263238.aspx

    However, if you are migrating sub-sites, there's no way to lock down a single sub-site, except by changing its site permissions. That also means changing the site permissions risks losing all this permissions information, so it would be ideal to record these permissions before making any changes. Also, take extra note of lists or libraries with unique permissions: they do not inherit site permissions, and hence won't be “locked” unless changed manually.

  3. Beware of O365 traffic jam

    Always stick to Insane mode when running the migration in Sharegate. Insane mode makes use of the new Office 365 Migration API, which is the fastest way to migrate huge volumes of data to Office 365. While exporting the data to Office 365 is fast, I did find a delay waiting for Office 365 to import it into the SharePoint tenant; sometimes it could sit there for an hour before continuing with the import. Also, avoid running too many sessions if your VM is not powerful enough.

  4. Delta migration

    The good thing about using Sharegate is that you can do delta migration, which means you only migrate files which have been modified or added since the last migration. However, it doesn't handle deletion! If any files have been removed since you last migrated, running a delta sync will not delete them from the destination. Therefore, best practice is still to delete the list from the destination site and re-create it using the Site Object wizard.

Post-migration check:

Things to check:

  • Users can still access relevant pages, list and libraries
  • Users can still CRUD files/ items
  • Users can open Office web apps (there can be a different experience related to authentication when opening Office files; in most cases, users should only get prompted the very first time)

Global Navigation and Branding for Modern Site using SharePoint Framework Extensions

Last month at Microsoft Ignite 2017, SharePoint Framework Extensions became GA. This gave us whole new capabilities for customizing Modern Team sites and Communication sites.

Even though there are lots of PnP examples of SPFx extensions, while presenting at the Office 365 Bootcamp, Melbourne and running a hands-on lab, I realised not many people are aware of the new capabilities that SPFx extensions provide. One of the burning questions we often get from clients is whether we can have a custom header, footer and global navigation in modern sites – and the answer is YES. Here is an example where the global navigation has been derived from Managed Metadata:

Communication Site with header and footer:

Modern Team Site with header and footer (same navigation):

With the latest Yeoman SharePoint generator, along with SPFx web parts, we now have options to create extensions:

To create header and footer for the modern site, we need to select the Application Customizer extension.

After the solution has been created, one noticeable difference is that TenantGlobalNavBarApplicationCustomizer extends BaseApplicationCustomizer and not BaseClientSideWebPart.

export default class TenantGlobalNavBarApplicationCustomizer
extends BaseApplicationCustomizer

Basic Header and Footer

Now, to create a very basic Application Customizer with a header/footer, make sure to import React, ReactDom, PlaceholderContent and PlaceholderName:

import * as ReactDom from 'react-dom';
import * as React from 'react';
import {
  BaseApplicationCustomizer,
  PlaceholderContent,
  PlaceholderName
} from '@microsoft/sp-application-base';

In the onInit() function, the top (header) and bottom (footer) placeholders need to be created:

const topPlaceholder =  this.context.placeholderProvider.tryCreateContent(PlaceholderName.Top);
const bottomPlaceholder =  this.context.placeholderProvider.tryCreateContent(PlaceholderName.Bottom);

Create the appropriate elements for the header and footer:

const topElement =  React.createElement('h1', {}, 'This is Header');
const bottomElement =   React.createElement('h1', {}, 'This is Footer');

Those elements can then be rendered within each placeholder's domElement:

ReactDom.render(topElement, topPlaceholder.domElement);
ReactDom.render(bottomElement, bottomPlaceholder.domElement);

If you now run the solution:

  • gulp serve --nobrowser
  • Copy the id from the src\extensions\.manifest.json file, e.g. “7650cbbb-688f-4c62-b5e3-5b3781413223”
  • Open a modern site and append the following URL (change the id as per your solution):
    ?loadSPFX=true&debugManifestsFile=https://localhost:4321/temp/manifests.js&customActions={"7650cbbb-688f-4c62-b5e3-5b3781413223":{"location":"ClientSideExtension.ApplicationCustomizer"}}

The above six lines of code will give the following outcome:

Managed Metadata Navigation

Now back to the first example. That solution was copied from an SPFx sample and then updated.

To get the above header and footer:

Go to command line and run

  • npm i
  • gulp serve --nobrowser
  • To see the code running, go to the SharePoint Online modern site and append the following to the site URL:

    ?loadSPFX=true&debugManifestsFile=https://localhost:4321/temp/manifests.js&customActions={"b1efedb9-b371-4f5c-a90f-3742d1842cf3":{"location":"ClientSideExtension.ApplicationCustomizer","properties":{"TopMenuTermSet":"TopNav","BottomMenuTermSet":"Footer"}}}

Deployment

Create an Office 365 CDN

Update the “cdnBasePath” in the config\write-manifests.json file to the new location, e.g.

"cdnBasePath":"https://publiccdn.sharepointonline.com/<YourTenantName>.sharepoint.com//<YourSiteName>//<YourLibName>/<FoldeNameifAny>"

In the command line, run

  • gulp bundle --ship
  • gulp package-solution --ship

and upload all artefacts from \temp\deploy\ to the CDN location

Upload \sharepoint\solution.sppkg to the App Catalog

Go to the Modern Site > Site Content > New > App > select and add the app:

Hopefully this post has given an overview of how to implement SPFx Application Customizers. There are many samples available in the SharePoint GitHub repositories.

Integrating Yammer data within SharePoint web-part using REST API

Background

We were developing a SharePoint application for one of our clients and had some web parts that had to retrieve data from Yammer. As we were developing on SharePoint Online (SPO) using the popular SharePoint Framework (SPFx), for most of our engagement we were using a client-side library named React to deliver what was required of us.

In order for us to integrate the client's Yammer data into our web parts, we used the JavaScript SDK provided by Yammer.

Scenario

We had around 7-8 different calls to the Yammer API in different web parts to extract data from Yammer on behalf of the logged-in user. For each API call, the user has to be authenticated before the call to the Yammer API is made, and this has to be done without the user being redirected to Yammer for login or being presented with a popup or a button to log in first.

If you follow Yammer's JavaScript SDK instructions, you will not meet our client's requirement of not asking the user to go to Yammer first (as this would change their user flow) and not presenting a pop-up with a login/sign-in dialog.

Approach

After looking on the internet to fulfil the above requirements, I could not find anything that served us. The closest match I found was a PnP sample, but it only works if the user has already consented to your Yammer app before. In our case, this wasn't possible, as many users would be accessing the SharePoint home page for the first time and had never accessed Yammer before.

What we did was break our API login calls into two groups. One of the calls was chosen at random to log the user in to Yammer, get the access token in the background and cache it with the Yammer API, while the other API login calls wait for the first login to complete and then use the Yammer API to log in.

Step-1

This function uses the standard Yammer API to check login status. If successful, it proceeds with issuing the API data-retrieval calls; if the user could not be logged in the first time, it waits and checks again every 2 seconds until it times out after 30 seconds.

  public static loginToYammer(callback: Function, requestLogin = true) {
    SPComponentLoader.loadScript('https://assets.yammer.com/assets/platform_js_sdk.js', { globalExportsName: "yam"}).then(() => {
      const yam = window["yam"];

        yam.getLoginStatus((FirstloginStatusResponse) => {
        if (FirstloginStatusResponse.authResponse) {
          callback(yam);
        }
        else {
          let timerId = setInterval(()=>{
              yam.getLoginStatus((SecondloginStatusResponse) => {
                if (SecondloginStatusResponse.authResponse) {
                  clearInterval(timerId);
                  callback(yam);
                }
              });
          }, 2000);

          setTimeout(() => {
              yam.getLoginStatus((TimeOutloginStatusResponse) => {
                if (TimeOutloginStatusResponse.authResponse) {
                  clearInterval(timerId);
                }
                else {
                  console.error("iFrame - user could not log in to Yammer even after waiting");
                }
              });
          }, 30000);
        }
      });
    });
  }

Step-2

This method again uses the standard Yammer API to check login status. It then tries to log in the user in the background using an iframe approach, as called out in the PnP sample; if that approach doesn't work either, it redirects the user to the Smart URL in the same window to get user consent for the Yammer app, with a redirect URI set to the home page of the SharePoint site where the web parts with the Yammer API are hosted.

  public static logonToYammer(callback: Function, requestLogin = true) {
    SPComponentLoader.loadScript('https://assets.yammer.com/assets/platform_js_sdk.js', { globalExportsName: "yam"}).then(() => {
      const yam = window["yam"];

      yam.getLoginStatus((loginStatusResponse) => {
        if (loginStatusResponse.authResponse) {
          callback(yam);
        }
        else if (requestLogin) {
          this._iframeAuthentication()
              .then((res) => {
                callback(yam);
              })
              .catch((e) => {
                window.location.href="https://www.yammer.com/[your-yammer-network-name]/oauth2/authorize?client_id=[your-yammer-app-client-id]&response_type=token&redirect_uri=[your-sharepoint-home-page-url]";
                console.error("iFrame - user could not log in to Yammer due to error. " + e);
              });
        } else {
          console.error("iFrame - it was not called and user could not log in to Yammer");
        }
      });
    });
  }

The function _iframeAuthentication is copied from the PnP sample with some modifications to fit our needs, as per the client requirements we were developing against.


  private static _iframeAuthentication(): Promise<any> {
      let yam = window["yam"];
      let clientId: string = "[your-yammer-app-client-id]";
      let redirectUri: string = "[your-sharepoint-home-page-url]";
      let domainName: string = "[your-yammer-network-name]";

      return new Promise((resolve, reject) => {
        let iframeId: string = "authIframe";
        let element: HTMLIFrameElement = document.createElement("iframe");

        element.setAttribute("id", iframeId);
        element.setAttribute("style", "display:none");
        document.body.appendChild(element);

        element.addEventListener("load", _ => {
            try {
                let elem: HTMLIFrameElement = document.getElementById(iframeId) as HTMLIFrameElement;
                let token: string = elem.contentWindow.location.hash.split("=")[1];
                yam.platform.setAuthToken(token);
                yam.getLoginStatus((res: any) => {
                    if (res.authResponse) {
                        resolve(res);
                    } else {
                        reject(res);
                    }
                });
            } catch (ex) {
                reject(ex);
            }
        });

        let queryString: string = `client_id=${clientId}&response_type=token&redirect_uri=${redirectUri}`;

       let url: string = `https://www.yammer.com/${domainName}/oauth2/authorize?${queryString}`;

        element.src = url;
      });
    }

Conclusion

This resulted in authenticating the Office 365 tenant user within the same window as the SharePoint home page, either with the help of an iframe [case: the user had consented to the Yammer app before] or by getting Yammer app consent from the Office 365 tenant user without being redirected to Yammer for OAuth-based authentication [case: the user is accessing Yammer-integrated web parts for the first time].

We hope future releases of the Yammer API will cater for seamless integration among O365 products, without having to go through the hassle of getting access tokens in the way described in this post.

Angular Bag of Tricks for SharePoint

Introduction

I've been using Angular 1.x for building custom UI components and SPAs for SharePoint for years now. There's a lot of rinse and repeat here, and over time my “stack” of open-source custom Directives and Services that make Angular such a powerful framework has settled down to a few key ones that I wanted to highlight – hence this post.

Some may be wondering why I am still working almost exclusively with Angular 1.x. Why not Angular 2, or React, or Aurelia? A few reasons:

  • It’s often already in use. Quite often a customer is already leveraging Angular 1.x in a custom masterpage, so it makes sense not to add another framework to the mix.
  • Performance improvements are not a high priority. I work almost entirely with SharePoint Online. The classic ASP.NET pages served up there aren’t exactly blindingly fast to load, so Angular 1 (used carefully) doesn’t slow things down measurably. Will this change when SPFx finally GA’s? Of course! But in the meantime, Angular 1.x is very comfortable, which leads to…
  • Familiarity = Productivity. Ramping up a custom application in SharePoint with Angular is now very quick to do. This is the whole “go with what you know well and can iterate fast on” approach to framework selection. Spend your time building out the logic of your app rather than fighting an unfamiliar framework.
  • The absolute smorgasbord of community-produced libraries that enhance Angular. A lot of the major ones have Angular 2 versions, but there are some notable exceptions (highlighted below).

So here, in order of frequency of use, are the plugins that I go to time and time again. Check them out and star those github repos!

UI-Router 

ui-router

An awesome state-based routing service for Angular (and there are Angular 2 and React versions as well) – more widely used than the default Angular 1 router, as it has a fantastic API which allows you to resolve asynchronous data calls (via promises) before you transition to the state that needs them. This keeps your controllers/components light and clean. I use this in every custom webpart/SPA I build that has more than one view (which is almost all of them).

If you need modals in your app, you can add in the uib-modal extension that allows UI-Bootstrap modals to be declared as state in your UI-Router state config. Great for deep linking through to modal windows!

Angular Formly

formly

Sick of labouring over large form templates? They are time consuming to wire up and maintain – that’s a lot of markup! Formly allows you to declare your form fields in JavaScript instead. This allows for a lot more control and being able to generate the UI on the fly at run time is a killer feature (that I haven’t done enough with to date!). I hope to have another post on this topic very soon…

Formly makes using custom controls / directives in forms really easy and gives you uniform validation rules over all of them. It’s also wonderfully extensible – there’s so much you can do with it, once you learn the basics. I put off trying it for AGES and now I wouldn’t be without it – if you have any user input in your Angular app, do yourself a favour and use Formly!

AG-Grid

ag-grid

The best JavaScript grid component. Period. Like all libraries I’ve mentioned so far, this one is also not just for Angular (this one supports nearly all of the major frameworks, including Aurelia and Vue). There’s a Free and an Enterprise version with a lot of extra bells and whistles. I haven’t had to shell out for Enterprise yet – Free is very fully featured as is. If you have a table of data in your app – you should give this a try for all but the most simple scenarios.

Angular-Gantt

angular-gantt

Here's the first Angular 1-only library in my toolbox. It makes creating complex Gantt-chart interfaces, if not dead easy, at least feasible! I shudder to think what a nightmare it would be to write this kind of functionality from scratch…

There are loads of features here, just like the other libraries listed.

Angular-Wizard

 

angular-wizard

Another Angular 1-only library (although similar Angular 2 projects exist). A great little wizard form directive that allows you to declare the steps of your wizard in your template or, when teamed up with Formly, in your controller. The latter allows you to create dynamic wizard steps by filtering the questions in the next step based on the response to the previous one (once again – I need to document this in another post in future).

A few extra tricks for SharePoint…

A few other more generic practices when slinging Angular in SharePoint:

  • Don’t be afraid to use $q to wrap your own promises – yes it is overused in a lot of example code on Stack Overflow (hint: if you are calling the $http service, you don’t need $q, just return the result of the $http call), but it’s great if you want/need to use CSOM. Just wrap the result of executeQueryAsync in a promise’s resolve method and you’ve got a far cleaner implementation (no callbacks when you utilise it), so it’s easily packaged up in a service.
  • Create a reusable service layer – lots of people don’t bother to use Angular services, as most example code just keeps the $http calls in the controller for simplicity. Keep all your REST and CSOM calls to interact with SharePoint in a service module and you’ll get a lot more reuse of your code from application to application. Ideally, use ui-router to resolve the promises from your service before the controller is even initialised (as mentioned above).
  • Use Widget Wrangler for hosting your Angular apps in SharePoint pages – this handles all your script dependencies cleanly and lets you host in a Script Editor webpart (easily deployed with PnP PowerShell).
  • Think about caching your data or real-time sync – the excellent Angular-Cache is great for client-side caching of data and if your application’s data is frequently updated, you may want to consider a real-time data option to enhance the solution and prevent the need for page refreshes (another post on this coming soon too), such as Firebase or GunJS.
  • Azure Functions-All-The-Things! No more PowerShell running in a scheduled task on a VM for background processing. There is a better (and even cheaper) way.

I hope some people find this useful. Please leave a comment if you’ve got some other Angular-goodness you’d like to share!

Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post suggests the idea of automatically copying SharePoint Online (SPO) workflow task metadata into an external data warehouse. In this scenario, workflow tasks become the subject of another workflow that automatically copies the task data into an external database, using a custom Web API endpoint as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to create a meaningful workflow reporting system with large amounts of workflow tasks. The easiest approach to solve the problem is to use a Nintex workflow to “listen” to changes in the workflow tasks, then request the task data via the SPO REST API and, finally, send the data to the external data warehouse's Web API endpoint.

Some SPO solutions require creation of a reporting system that includes workflow task metadata. For example, it could be a report about documents with the statuses of the workflows linked to these documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data is unfeasible, as SPO throttles requests for workflow data. In fact, the throttling is so tight that generating reports with more than a hundred records is unrealistic. In addition, many companies would like to create Business Intelligence (BI) systems analysing workflow task data. Having a data warehouse with all the workflow task metadata can assist with this job very well.

To implement the solution, a few prerequisites must be met. You must know the basics of Nintex workflow creation and be able to create a backend solution with the database of your choice and a custom Web API endpoint that allows you to write the data model to that database. In this post we have used Visual Studio 2015 and created an ordinary REST Web API 2.0 project with an Azure SQL Database.

The solution involves the following steps:

  1. Get sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing data model to the database.
  3. Expose one POST endpoint method of the Web REST API that accepts JSON model of the workflow task metadata.
  4. Create Nintex workflow in the SPO list storing your workflow tasks.
  5. Design Nintex workflow: call SPO REST API to get JSON metadata and pass this JSON object to your Web API hook.

Below is a detailed description of each step.

We are looking to export the metadata of a workflow task. We need to find the SPO list that holds all your workflow tasks and navigate there. You will need the name of the list to be able to start calling the SPO REST API. It is better to use a REST tool to perform Web API requests; many people use Fiddler or Postman (a Chrome extension) for this job. Request the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:

Picture 1
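
For example, a request of this shape (site and list names are hypothetical), with $top=1 limiting the response to a single task:

GET https://yourtenant.sharepoint.com/sites/yoursite/_api/web/lists/getbytitle('Workflow Tasks')/items?$top=1
Accept: application/json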

The key element in this request is getbytitle(“list name”), where “list name” is the SPO list name of your workflow tasks. Please remember to add the header “Accept” with the value “application/json”; it tells SPO to return JSON instead of HTML. As a result, you will get one JSON object that contains the metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse; we need to create a data model containing only the fields of our choice. For example, it can look like this one in C#, with all properties based on the model returned earlier:
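
For instance, a hypothetical model (adjust the properties to the fields you selected from the task list):

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class WorkflowTaskModel
{
    // The SPO list item ID is used as the key, so it is not database-generated
    [Key, DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    public string Title { get; set; }
    public string Status { get; set; }
    public int? AssignedToId { get; set; }
    public DateTime? DueDate { get; set; }
    public DateTime Modified { get; set; }
}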

The next step is to create a Web API that exposes a single method that accepts our model as a parameter from the body of the request. You can choose any REST Web API design. We have created a simple Web API 2.0 in Visual Studio 2015 using the general wizard for an MVC, Web API 2.0 project. Then, we added an empty controller and filled it with code that uses Entity Framework to write the data model to the database. We also created a code-first EF database context that works with just the one entity described above.

The code of the controller:
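
A sketch along these lines (the upsert via AddOrUpdate is an assumption; it keeps repeated task modifications mapped to the same row):

using System.Data.Entity.Migrations;
using System.Threading.Tasks;
using System.Web.Http;

public class EmployeeFileReviewTasksWebHookController : ApiController
{
    // POST api/EmployeeFileReviewTasksWebHook
    public async Task<IHttpActionResult> Post([FromBody] WorkflowTaskModel model)
    {
        if (model == null)
            return BadRequest("Request body must contain a workflow task model.");

        using (var db = new WorkflowTasksContext())
        {
            db.WorkflowTasks.AddOrUpdate(model); // insert or update by primary key
            await db.SaveChangesAsync();
        }
        return Ok();
    }
}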

The code of the database context for Entity Framework:
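
A minimal code-first context (“WorkflowTasksDb” is a placeholder connection string name pointing at the Azure SQL Database):

using System.Data.Entity;

public class WorkflowTasksContext : DbContext
{
    public WorkflowTasksContext() : base("name=WorkflowTasksDb") { }

    public DbSet<WorkflowTaskModel> WorkflowTasks { get; set; }
}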

Once you have created the Web API, you should be able to call the Web API method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Also, don't forget to include the proper headers for your authentication and the header “Accept” with “application/json”, and set the type of the request to POST. Once you've tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

Picture 4

Next, we will create a new Nintex workflow in the SPO list with our workflow tasks. It is all straightforward: click Nintex Workflows, then create a new workflow and start designing it.

Picture 5

Picture 6

Once you've created a new workflow, click on the Workflow Settings button. In the displayed form, set the parameters as shown on the screenshot below. We set “Start when items are created” and “Start when items are modified”. In this scenario, any modification of our workflow task will start this workflow automatically. This also covers cases where the workflow task has been modified by other workflows.

Picture 7.1

Create 5 steps in this workflow as shown on the following screenshots, labelled 1 to 5. Please keep in mind that blocks 3 and 5 are there to assist in debugging only and are not required in production use.

Picture 7

Step 1. Create a Dictionary variable that contains the SPO REST API request headers. You can add any required headers, including authentication headers. It is essential to include the Accept header with “application/json” in it, to tell SPO that we want JSON in responses. We set the output variable to SPListRequestHeaders so we can use it later.

Picture 8

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your workflow tasks list, as discussed before. The list of fields that we want returned is defined in the “$select=…” parameter of the OData request; we need only the fields that are included in our data model. Other settings are straightforward: we supply our request headers created in Step 1 and create two more variables for the response. SPListResponseContent will hold the resulting JSON object that we are going to need in Step 4.

Picture 9

Step 3 is optional. We've added it to debug our workflow. It will send an email with the contents of our JSON response from the previous step, showing us what was returned by the SPO REST API.

Picture 10

Step 4. Here we call our custom API endpoint, passing the JSON object model that we got from the SPO REST API. We supply the full URL of our Web Hook, set the method to POST, and in the request body we inject SPListResponseContent from Step 2. We also capture the response code to display later in the workflow history.

Picture 11

Step 5 is also optional. It writes a log message with the response code that we received from our API endpoint.

Picture 12

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of our workflow tasks. Click on any task, modify any of the task's properties and save. This will initiate our workflow automatically. You can monitor workflow execution in the workflow history. Once the workflow is completed, you should be able to see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

Picture 13

To make sure that everything went well, open your database and check the records updated by your Web API. After every Workflow Task modification you will see corresponding changes in the database. For example:

Picture 14

 

In this post we have shown that automatic copying of workflow task metadata into your data warehouse can be done with a simple Nintex workflow setup, performing only two REST Web API requests. The solution is quite flexible, as you can select the required properties from the SPO list and export them into the data warehouse. We can easily add more tables if there is more than one workflow tasks list. This solution enables the creation of a powerful reporting system using the data warehouse and also allows you to employ the BI data analytics tool of your choice.

Implement a SharePoint Timer job using Azure WebJob

The SharePoint Timer service runs in the background to do long-running tasks. The Timer service does some important SharePoint clean-up tasks in the background, but can also be used to provide useful functional tasks. For instance, there may be a situation where you want to send newsletters to your users on a regular basis, or want to keep your customers up to date with some regular timed information.

I will be using a timer job to send an email to newly registered customers/users for this demo. The newly registered customers/users are stored in a SharePoint list, with a status field capturing whether an email has been sent or not.

There are some implementation choices when developing a SharePoint Timer service:

  1. Azure Web Job
  2. Azure Worker Role
  3. Windows Service (can be hosted on premise or vm on Cloud)
  4. Task Scheduler (hosted on premise)

I am choosing a WebJob as it is free of cost and I can leverage my console application as a WebJob. Please check http://www.troyhunt.com/2015/01/azure-webjobs-are-awesome-and-you.html for why to choose a WebJob.

An Azure WebJob does not live on its own; it sits under an Azure Web App. For this purpose I am going to create a dummy web app and host my Azure WebJob in it. I will be hosting all my CSOM code in this WebJob.

There are two types of web job:

  • Continuous – best fit for queuing applications where the job keeps receiving messages from a queue.
  • On demand – can be scheduled to run hourly, weekly, monthly and so on.

The WebJob is used to host and execute the CSOM code that gets information about the users/customers from SharePoint and sends the email. The following code snippets show what the WebJob is doing.

Querying SharePoint using CSOM and CAML Query:
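
For example, a sketch like the following (the list and field names are placeholders; clientContext is an authenticated ClientContext):

// Query customers that have not yet been sent a welcome email
List customerList = clientContext.Web.Lists.GetByTitle("Customers");
var query = new CamlQuery
{
    ViewXml = @"<View><Query><Where>
                  <Eq><FieldRef Name='EmailSent' /><Value Type='Boolean'>0</Value></Eq>
                </Where></Query></View>"
};
ListItemCollection newCustomers = customerList.GetItems(query);
clientContext.Load(newCustomers);
clientContext.ExecuteQuery();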

Sending email using Office 365 Exchange Web Services:
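
A sketch using the EWS Managed API (NuGet package Microsoft.Exchange.WebServices; the addresses and credentials are placeholders):

using Microsoft.Exchange.WebServices.Data;

var service = new ExchangeService(ExchangeVersion.Exchange2013_SP1)
{
    Credentials = new WebCredentials("sender@yourtenant.onmicrosoft.com", "<password>"),
    Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx")
};

var email = new EmailMessage(service);
email.ToRecipients.Add(customerEmail); // taken from the SharePoint list item
email.Subject = "Welcome!";
email.Body = new MessageBody(BodyType.HTML, emailBody); // emailBody produced by the Razor template below
email.SendAndSaveCopy();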

Composing the email using the RazorEngine templating engine:
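
A sketch using the RazorEngine NuGet package (the template content is illustrative):

using RazorEngine;
using RazorEngine.Templating;

// Compile the template once (cached under the "WelcomeEmail" key), then run it per customer
string template = "<p>Hello @Model.FirstName,</p><p>Welcome aboard!</p>";
string emailBody = Engine.Razor.RunCompile(template, "WelcomeEmail", null, new { FirstName = firstName });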

And finally update SharePoint list item using CSOM:
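
And a sketch of the final update (the status field name is a placeholder):

// Flag the customer as emailed so the next run skips them
ListItem item = newCustomers[0]; // in practice, loop over the queried items
item["EmailSent"] = true;
item.Update();
clientContext.ExecuteQuery();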

You can download full source code from Codeplex: https://webjobforsptimer.codeplex.com/

When writing a Web Job, the following points should be considered to make your web job diagnosable and reusable:

  1. Do not absorb exceptions. Handle them first, then rethrow to let the WebJob know something went wrong.
  2. Try to use interfaces so that they can be mocked for unit testing.
  3. Always log major steps and errors using Console.WriteLine etc.
  4. Write your code so that it can be used as a console application, and can therefore also run in the Task Scheduler.
  5. Try to avoid hardcoding. Maximise the use of configuration – it can be set from the Azure portal as well.

It is time to publish this WebJob. There are lots of articles out there on how to create a schedule for a WebJob. I will simply use Visual Studio to create the schedule before publishing it. In Visual Studio, right-click the project and click “Publish as Azure WebJob…”, and it will launch a UI to specify your schedule, as shown below:

Schedule settings

That’s it. Happy SharePointing 🙂