SharePoint 2013 Cumulative Updates Patching Overview

A couple of months ago, I had the opportunity to help a client patch their on-premise SharePoint 2013 farm, which had last been patched in 2014. It was a challenging but interesting experience. We decided to apply the N-1 cumulative update, which was March 2018 at the time. It was a five-week engagement, where we had to break down the steps and processes from backup preparations and rollback strategies to patching strategies and post-patching testing. There were lots of inputs and discussions among the DBAs, system engineers, IT manager and SharePoint engineers. All in all, it was a worthwhile experience, as we all learnt a lot from it. Here I am breaking down a summary of what we did.

First of all, we need to lay out the path from the early stages through to implementation and completion. The high-level steps are as follows:

Inventory check on SharePoint sites

It is very important to do an inventory check of the existing sites. Build a breakdown list of the major sites and their owners, find out which sites are the most used, how much storage is being used, and how many custom farm solutions there are.
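
As a starting point, a sketch like this (run from the SharePoint 2013 Management Shell; the CSV path and column choices are just examples) can pull the site list, owners, storage usage and farm solutions:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List every site collection with its owner and storage footprint
Get-SPSite -Limit All |
    Select-Object Url, Owner, @{n='StorageMB';e={[math]::Round($_.Usage.Storage / 1MB, 1)}} |
    Export-Csv -Path .\SiteInventory.csv -NoTypeInformation

# List the farm solutions and where they are deployed
Get-SPSolution | Select-Object Name, Deployed, DeployedServers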

Test through all aspects of SharePoint

Application level and backend (Central Admin) level – we need to be confident with the existing functionality in SharePoint, whether out of the box or custom. Testing needs to be done before anything else; this is to ensure that what you test after the patching still works the same way as before the patching.

I would also recommend backing up the existing Search service application, just in case it becomes corrupted after patching. Remember, anything can happen during patching, regardless of whether it worked on the DEV or TEST farms, as each farm can have different configurations and settings.
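
The Search service application backup can be scripted; a minimal sketch, assuming your service application carries the default name "Search Service Application" and \\backupserver\spbackup is a writable share:

# Back up the Search service application (index and databases) to a share
$ssa = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
Backup-SPEnterpriseSearchServiceApplication -Identity $ssa -BackupFolder "\\backupserver\spbackup"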

I would also suggest taking lots of screenshots in Central Admin so that you have a record of the existing settings and configurations to compare against at a later stage.

Back up content databases

Liaise with the DBAs to perform backups of the content databases. This will likely take overnight, depending on the size of your farm and the tool being used. Once the backups have completed successfully, liaise with the system engineers to take snapshots of all the VMs. This should take less time than the content database backups; however, all VMs need to be shut down first.
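
For reference, a scripted version of the content database backup step might look like this sketch (it assumes the SqlServer PowerShell module is available; the server name SPSQL01 and the share are placeholders your DBAs would substitute):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Take a full backup of every content database to a network share
Get-SPContentDatabase | ForEach-Object {
    Backup-SqlDatabase -ServerInstance "SPSQL01" `
                       -Database $_.Name `
                       -BackupFile "\\backupserver\spbackup\$($_.Name).bak"
}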

The important thing to note is that we are trying to capture the current state of the farm, so any mis-sync between the VMs or databases would result in a corrupted farm.

Stopping all SharePoint services

It is important to stop these critical SharePoint services: IISAdmin, SPTimerV4 and Search. There are PowerShell scripts that can do this, but you could also stop them in each local server's Services window.
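
A sketch of the PowerShell route, assuming the default SharePoint 2013 service names (the Search service instance is OSearch15 here; adjust if your farm differs):

# Stop the critical services on each server before patching
$services = "IISAdmin", "SPTimerV4", "OSearch15"
foreach ($svc in $services) {
    Stop-Service -Name $svc -Force -ErrorAction SilentlyContinue
    # Optionally disable them so a reboot doesn't bring them back mid-patch
    Set-Service -Name $svc -StartupType Disabled
}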

Applying CU patches

Always log in using the farm account on each server.

The order in which the SharePoint patch is applied is critical. The order we chose was as follows:

  • Front end servers
  • Backend servers
  • Search servers

You can run the patch on all servers in parallel, as the installers don't interfere with each other. The patch only updates the local file system; it doesn't update the databases just yet. Each patch should take no longer than half an hour to complete. Upon completion, restart the server.

Run PSConfig.exe

This must be run in order for the patching process to complete, or else the server status page in Central Admin will not show the updates as applied! I recommend using the command-line version rather than the GUI version, as we found the GUI version tends to hide error messages and just show a completion message.

The command we used was this:

PSConfig.exe -cmd upgrade -inplace b2b -wait -cmd applicationcontent -install -cmd installfeatures -cmd secureresources -cmd services -install

The order of running this is also critical. We ran it on the app servers first, then the front end servers and lastly the search servers. Always check the results; any errors or issues will likely be reported here. You need to fix all issues and re-run the command until a success message is shown.

If the patch has been applied successfully, the Central Admin -> Upgrade and Migration -> Check product and patch installation status page will immediately reflect the update. Also, check that the database configuration version reflects the new version number.
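
Both checks can be done quickly from PowerShell; a minimal sketch:

# Confirm the farm build number matches the CU you just applied
(Get-SPFarm).BuildVersion

# Make sure no content database still reports a pending upgrade
Get-SPContentDatabase | Select-Object Name, NeedsUpgrade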

Note: one issue we had to fix was the removal of orphan features found in various sites. A number of these orphan features were reported by the tool, and we used a PowerShell script to remove them one by one.
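
The clean-up was along these lines; a hedged sketch, where the feature ID and site URL are placeholders for whatever gets reported:

# Forcibly deactivate an orphaned feature reported against a site
$featureId = "00000000-0000-0000-0000-000000000000"
Disable-SPFeature -Identity $featureId -Url "http://sharepoint/sites/somesite" -Force -Confirm:$false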

Post-patching testing

Once all patches have been applied, ensure all servers are back up and running. Check that all the major services are running (i.e. Central Admin, IIS, Search, etc.). This is when the screenshots from your pre-patching checks come in handy, in case you can't remember which services should be running on which server. You can also scan through the ULS logs to see if any critical issues have been logged.

Then we started the post-patching testing. This should use the same test plans as the pre-patching testing, and the results should match as well.


“Cannot complete this action” error in SharePoint team site

Lately I was assigned a long-standing issue at a well-known organisation running SharePoint 2013 on-premise, in which some users had been getting a weird “Cannot complete this action” error screen whenever they deleted a document from a library or modified a list view in their team sites.

Lots and lots of testing was done over a few days, and I came up with the following analysis summary:

  • The issue existed in some sub-sites of a team site under a team site collection (SharePoint 2013 on-premise)
  • The error occurred consistently for team site users (including the site collection admin), although the changes did get actioned/saved
  • Users had to click back to return to the previous screen and get back to the site
  • The error didn't occur on some other team sites
  • There was no specific error correlation ID, nor anything suspicious in the ULS logs

Luckily, I was able to find an answer from Microsoft. The cause was a load balancer running without HTTP compression enabled, combined with these sub-sites having the Minimal Download Strategy (MDS) site feature enabled.

The Minimal Download Strategy feature is supposed to optimise loading speed for users, as it allows the browser to retrieve only the changes required for a web page, rather than the whole page. However, when you have a load balancer running in front of SharePoint, HTTP compression needs to be enabled in order for MDS to work properly. Otherwise, simply turn off the site feature and this error will disappear.
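
Turning the feature off can also be scripted per sub-site; a minimal sketch, assuming the affected sub-site's URL (MDSFeature is the feature's internal name in SharePoint 2013):

# Deactivate Minimal Download Strategy on an affected sub-site
Disable-SPFeature -Identity "MDSFeature" -Url "http://sharepoint/sites/teamsite/subsite" -Confirm:$false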

See: https://support.microsoft.com/en-au/help/2934590/modifying-a-list-view-returns-cannot-complete-this-action-when-mds-is

 

SharePoint Approval Workflow – Updating existing assignees

Workflows play a big role in everyday SharePoint 'business as usual' activities. Users need to get things approved by stakeholders before publishing content to others. In SharePoint, there's a built-in Approval workflow template that you can enable on any list or library (this template will only be visible once you have activated the Workflows site collection feature). Once enabled, we can set up an Approval workflow by configuring a few settings in the association form.

(Screenshot: the Approval workflow initiation form, without any default values filled in.)

The initiator fills in the list of approvers and sets either serial or parallel approval. Each approver listed in the workflow receives a task assigned to them, along with an email notification. For further information on how the Approval workflow works, read: https://support.office.com/en-us/article/understand-approval-workflows-in-sharepoint-2010-a24bcd14-0e3c-4449-b936-267d6c478579

Modifying an existing workflow

What I would like to bring up in this topic is that, with workflows set up using these existing templates, you can actually go in and change the list of approvers by clicking Add or update assignees of Approval on the in-progress workflow's status page.


Query multiple object classes from AD using an LDAP query

Recently I had to query Active Directory to get a list of users and contacts. To achieve this, I used an LDAP query. See the following function:

///<summary>
/// Queries the Active Directory using LDAP
///</summary>
///<param name="entry">Directory entry</param>
///<param name="search">Directory searcher with properties to load and filters</param>
///<returns>A dictionary with ObjectGuid as the key</returns>
// Requires references to System.DirectoryServices and System.Configuration.
public static Dictionary<string, SearchResult> QueryLDAP(DirectoryEntry entry, DirectorySearcher search)
{
    // Bind to the directory over SSL using credentials from the config file
    entry.AuthenticationType = AuthenticationTypes.SecureSocketsLayer;
    entry.Path = ConfigurationManager.AppSettings["LDAP.URL"];
    entry.Username = ConfigurationManager.AppSettings["LDAP.Username"];
    entry.Password = ConfigurationManager.AppSettings["LDAP.Password"];

    // Load any attributes you want to retrieve
    search.SearchRoot = entry;
    search.PropertiesToLoad.Add("name");
    search.PropertiesToLoad.Add("telephonenumber");
    search.PropertiesToLoad.Add("mobile");
    search.PropertiesToLoad.Add("mail");
    search.PropertiesToLoad.Add("title");
    search.PropertiesToLoad.Add("department");
    search.PropertiesToLoad.Add("objectguid");
    search.PropertiesToLoad.Add("sn");
    search.PropertiesToLoad.Add("userAccountControl");
    search.PropertiesToLoad.Add("userPrincipalName");
    search.PropertiesToLoad.Add("msexchhidefromaddresslists");
    search.PropertiesToLoad.Add("samaccountname");

    // Match objects of class user OR contact, searching the whole subtree
    search.Filter = "(|(ObjectClass=user)(ObjectClass=contact))";
    search.SearchScope = SearchScope.Subtree;

    SearchResultCollection result = search.FindAll();
    Dictionary<string, SearchResult> dicResult = new Dictionary<string, SearchResult>();
    foreach (SearchResult profile in result)
    {
        // Key each result by its objectGUID, which is unique in the directory
        if (profile.Properties["objectGUID"] != null && profile.Properties["objectGUID"].Count > 0)
        {
            Guid guid = new Guid((Byte[])profile.Properties["objectGUID"][0]);
            dicResult.Add(guid.ToString(), profile);
        }
    }
    result.Dispose();
    entry.Close();
    entry.Dispose();

    return dicResult;
}

What this function does is query Active Directory and return all profiles (matching the filter) in a dictionary object. Notice the search filter is set to return all objects of class user OR contact. The settings come from a config file, as below. Replace the tags with your own settings:

<appSettings>
  <!--LDAP settings-->
  <add key="LDAP.URL" value="LDAP://OU=<OU_NAME>,DC=<DC_NAME>,DC=com" />
  <add key="LDAP.Username" value="<SERVICE_ACCOUNT_USERNAME>" />
  <add key="LDAP.Password" value="<SERVICE_ACCOUNT_PWD>" />
</appSettings>


To use it, we do the following:

using (DirectoryEntry entry = new DirectoryEntry())
using (DirectorySearcher search = new DirectorySearcher())
{
    // Extract all AD profiles
    sbLog.AppendLine("Preparing to query LDAP...");
    Dictionary<string, SearchResult> AD_Results = QueryLDAP(entry, search);

    // Iterate the dictionary's values (each value is a SearchResult)
    foreach (SearchResult ADProfile in AD_Results.Values)
    {
        string email = ADProfile.GetDirectoryEntry().Properties["mail"].Value?.ToString();
        // etc.
    }
}

You can now loop through the dictionary to get each profile. 🙂


Migrating SharePoint 2013 on-premise to Office 365 using Sharegate

Recently I completed a migration project which brought a number of sub-sites within SharePoint 2013 on-premise to the cloud (SharePoint Online). We decided to use Sharegate as the primary tool due to its simplicity.

Although it might sound like a straightforward process, there are a few things worth checking pre- and post-migration, and I have summarised them here. I found it easier to have this information recorded in a spreadsheet with different tabs:

Pre-migration check:

  1. First thing: get Site Admin access!

    This is the first and most important step: get yourself admin access. It could be a lengthy process, especially in a large corporate environment. The best level of access is being granted Site Collection Admin for all sites, but sometimes this might not be possible. Hence, getting Site Administrator access is the bare minimum needed for the migration to work.

    In most cases you will be granted Global Admin on the new tenant, but if not, ask for it!

  2. List down active site collection features

    Whatever features are activated on the source site will need to be activated on the destination site as well. Therefore, we need to record what has been activated on the source site (see the sketch after this checklist for one way to script this). If any third-party feature is activated, you will need to liaise with the relevant stakeholders about whether it is still required on the new site. If it is, it is highly likely that a separate licence is required, as the new environment is cloud-based rather than on-premise. Take Nintex Workflow, for example: Nintex Workflow Online is a separate licence compared to Nintex Workflow 2013.

  3. Segregate the list of sites and do an inventory analysis

    I found it important to list all the sites you are going to migrate and distinguish whether they are site collections or just sub-sites. What I did was put each site under a new tab, with all its site contents listed. Next to each list/library, I had fields for the list type, number of items and a comment (if any).

    Go through each piece of content, preferably sitting down with the site owner, and get into the details of it. Some useful questions to ask:

  • Is this still relevant? Can it be deleted or skipped for the migration?
  • Is this heavily used? How often does it get accessed?
  • Does this list have a custom edit/new form? Sometimes owners might not even know, so you might have to take an extra look by scanning through the forms.
  • Check if pages have custom script with site URL references, as these will need to be changed to accommodate the new site URL.

It would also be useful to get comprehensive knowledge of how much storage each site holds. This can help you work out which site has the most content, and hence is likely to take the longest time during the migration. Sharegate has an inventory reporting tool which can help, but it requires Site Collection Admin access.

  4. Discuss some of the limitations

    Pages library

    The Pages library under each site needs specific attention, especially if you don't have Site Collection Admin access! Pages which inherit a content type or master page from the parent site will not have these migrated across by Sharegate, meaning these pages will either not be created on the new site, or they will simply show as using the default master page. This needs to be communicated and discussed with each owner.

    External Sharing

    External users will not be migrated across to the new site! These are users who won't be provisioned in the new tenant but still require access to SharePoint. They will need to be added (invited) manually to a site using their O365 email account or a Microsoft account.

    An O365 account would be whatever account they have been using to get onto their own SharePoint Online. If they don't have one, they will need to use a Microsoft account, i.e. a Hotmail/Outlook account. Once they have been invited, they need to respond to the email by signing into the portal in order to get provisioned. The new SPO site collection will need to have external sharing enabled before external access can happen. For more information, refer to: https://support.office.com/en-us/article/Manage-external-sharing-for-your-SharePoint-Online-environment-C8A462EB-0723-4B0B-8D0A-70FEAFE4BE85

    What can’t Sharegate do?

    The following minor things cannot be migrated to O365:

  • User alerts – users will need to reset their alerts on the new site
  • Personal views – users will need to re-create their personal views on the new site
  • Web part connections – any web part connections will not be preserved

For more, refer to: https://support.share-gate.com/hc/en-us/categories/115000076328-Limitations
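
As mentioned in step 2, recording the active features on the source farm can be scripted; a minimal sketch with placeholder URLs:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Record every feature activated at site collection scope
Get-SPFeature -Site "http://sharepoint/sites/teamsite" |
    Sort-Object DisplayName |
    Select-Object DisplayName, Id, Scope |
    Export-Csv -Path .\ActiveFeatures.csv -NoTypeInformation

# Repeat with -Web for features activated on individual sub-sites
# Get-SPFeature -Web "http://sharepoint/sites/teamsite/subsite"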

Performing the migration:

  1. Pick the right time

    Doing the migration during a low-activity period is ideal. User communications should be sent out to inform people of the actual date as early as possible. I tend to stick to the middle of the week so that we still have a couple of days left to solve any issues, instead of doing it on a Friday or Saturday.

  2. Locking old sites

    During the migration, we do not want any users making changes to the old site. If you are migrating site collections, fortunately there's a way to lock them down, provided you have access to the Central Admin portal (see the sketch after this list). See https://technet.microsoft.com/en-us/library/cc263238.aspx

    However, if you are migrating sub-sites, there's no way to lock down a single sub-site except by changing its site permissions. That also means changing the site permissions risks having the permission information lost, so it would be wise to record these permissions before making any changes. Also, take extra note of lists or libraries with unique permissions: they do not inherit site permissions, and hence won't be "locked" unless changed manually as well.

  3. Beware of O365 traffic jam

    Always stick to Insane mode when running the migration in Sharegate. Insane mode makes use of the new Office 365 Migration API, which is the fastest way to migrate huge volumes of data to Office 365. While exporting the data to Office 365 was fast, I did find a delay waiting for Office 365 to import it into the SharePoint tenant. Sometimes it could sit there for an hour before continuing with the import. Also, avoid running too many sessions if your VM is not powerful enough.

  4. Delta migration

    The good thing about using Sharegate is that you can do a delta migration, which means you only migrate the files which have been modified or added since the last migration. However, it doesn't handle deletions! If any files have been removed since you last migrated, running a delta sync will not delete those files from the destination end. In that case, the best practice is still to delete the list from the destination site and re-create it using the Site Object wizard.
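
For the locking step in point 2 above, a minimal on-premise sketch (the URL is a placeholder; ReadOnly blocks changes while keeping the site readable):

# Set the old site collection to read-only for the duration of the migration
Set-SPSite -Identity "http://sharepoint/sites/teamsite" -LockState "ReadOnly"

# Unlock it afterwards if needed (e.g. when rolling back)
# Set-SPSite -Identity "http://sharepoint/sites/teamsite" -LockState "Unlock"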

Post-migration check:


Things to check:

  • Users can still access the relevant pages, lists and libraries
  • Users can still CRUD files/items
  • Users can open Office web apps (there can be a different experience related to authentication when opening Office files; in most cases, users should only get prompted the very first time)

Restoring deleted OneDrive sites in Office 365

A customer asked whether it was possible to restore a OneDrive site that had been deleted when the user's account was marked for deletion in AD. After a bit of research, I was able to restore the site and retrieve the files (luckily, it had been deleted less than 30 days earlier).
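
The gist of the fix, as a hedged sketch using the SharePoint Online Management Shell (the tenant and user URLs are placeholders):

# Connect to the tenant admin site
Connect-SPOService -Url "https://tenant-admin.sharepoint.com"

# Find the deleted site in the tenant recycle bin (kept for ~30 days)
Get-SPODeletedSite | Select-Object Url, DaysRemaining

# Restore the user's OneDrive (personal) site
Restore-SPODeletedSite -Identity "https://tenant-my.sharepoint.com/personal/john_doe_contoso_com"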
