Notes From The Field – Enabling GAL Segmentation in Exchange Online

First published at https://nivleshc.wordpress.com

Introduction

A few weeks back, I was tasked with configuring Global Address List (GAL) Segmentation for one of my clients. GAL Segmentation is not a new concept, and if you were to Google it (as you would do in this day and age), you would find numerous posts on it.

However, during my research, I didn’t find any ONE article that helped me. Instead, I had to rely on multiple articles and blog posts to guide me to the result.

For those that are new to GAL Segmentation, this can be a daunting task. That is the inspiration for this blog: to provide the steps from an implementer’s view, so that you get the full picture of the preparation, the steps involved and the gotchas, and feel confident about carrying out this simple yet scary change.

This blog will focus on GAL Segmentation for an Exchange Online hybrid setup.

So what is GAL Segmentation?

I am glad you asked 😉

By default, in Exchange Online (and in on-premises Exchange environments as well), a global address list is present. This GAL contains all mail-enabled objects in the Exchange organisation: mailboxes, contacts, rooms, etc.

This is all well and good; however, at times a company might not want everyone to see all the objects in the Exchange environment. This might be for various reasons. For instance, the company has too many employees and it doesn’t make sense to have a GAL a mile long. Or the company might have different divisions which do not need to correspond with each other. Or the company might be trying to sell off one of its divisions, and to start the process, is trying to separate that division from the rest of the company.

For this blog, we will use the last reason stated above. A “filter” will be applied to all users who are in the division to be sold off, so that when they open their GAL, they only see objects from their own division and not everyone in the company. In similar fashion, the rest of the company will see all objects except those of the division that will be sold off. Users will still be able to send/receive emails with that particular division, however the GAL will not show its members.

I would like to make it extremely clear that GAL Segmentation DOES NOT DELETE any mail-enabled objects. It just creates a filtered version of the GAL for the user.

Introducing the stars

Let’s assume there was once a company called TailSpin Toys. They owned the email namespace tailspintoys.com and had their own Exchange Online tenant.

One day, the board of TailSpin Toys decided to acquire a similar company called WingTip Toys. WingTip Toys had their own Exchange Online tenant and used the email namespace wingtiptoys.com. After the acquisition, WingTip Toys’ email resources were merged into the TailSpin Toys Exchange Online tenant; however, WingTip Toys still used their wingtiptoys.com email namespace.

After a few years, the board of TailSpin Toys decided it was time to sell off WingTip Toys. As a first step, they decided to implement GAL Segmentation between TailSpin Toys and WingTip Toys users.

Listed below is what was decided:

  • TailSpin Toys users should only see email objects in their GAL corresponding to their own email namespace (any object with a primary SMTP address of @tailspintoys.com). They should not be able to see any WingTip Toys email objects.
  • Only TailSpin Toys users will be able to see Public Folders in their GAL.
  • WingTip Toys users should only see email objects in their GAL corresponding to their own email namespace (any object with a primary SMTP address of @wingtiptoys.com). They should not be able to see any TailSpin Toys email objects.
  • The All Contacts address list in the GAL will be accessible to both WingTip Toys and TailSpin Toys users.

The Steps

Performing a GAL Segmentation is a very low-risk change. The steps that will be carried out are as follows:

  • Create new Global Address Lists, Address Lists, Offline Address Books and Address Book Policies for TailSpin Toys and WingTip Toys users.
  • Assign the respective policy to TailSpin Toys users and WingTip Toys users.

The only issue is that by default, no users are assigned an Address Book Policy (ABP) in Exchange Online (ABPs are the “filter” that specifies what a user sees in the GAL).
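You can verify this quickly from PowerShell; this is just a sanity check and assumes nothing has been assigned yet:

#Count mailboxes that already have an Address Book Policy assigned - in a default tenant this returns 0
(Get-Mailbox -ResultSize Unlimited | Where-Object {$_.AddressBookPolicy}).Count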

Due to this, when we are creating the new address lists, users might see them in their GAL as well and get confused as to which one to use. If you wish to carry out this change within business hours, the simple remedy to the above issue is to provide clear communications to the users about what they can expect during the change window and what they should do (in this case, use the GAL that they always use). Having said that, it is always good practice to carry out changes outside of business hours.

OK, let’s begin.

  • By default, the Address Lists management role is not assigned in Exchange Online. The easiest way to assign it is to log in to the Exchange Online portal using a Global Administrator account and add this role to the Organization Management role group. This will then make all the Address List cmdlets available to Global Administrators.
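    If you prefer to script it, a minimal sketch (assuming the default Organization Management role group) is:

    #Add the Address Lists role to the Organization Management role group
    New-ManagementRoleAssignment -Role "Address Lists" -SecurityGroup "Organization Management"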
  • Next, connect to Exchange Online using PowerShell
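    At the time of writing, one common way to connect is via remote PowerShell; a minimal sketch is below (the newer Exchange Online PowerShell module’s Connect-ExchangeOnline works too, if you have it installed):

    $cred = Get-Credential
    $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $cred -Authentication Basic -AllowRedirection
    Import-PSSession $session -DisableNameChecking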
  • For TailSpin Toys
    • Create a default Global Address List called Default TST Global Address List
    • New-GlobalAddressList -Name "Default TST Global Address List" -RecipientFilter {((Alias -ne $null) -and (((ObjectClass -eq 'user') -or (ObjectClass -eq 'contact') -or (ObjectClass -eq 'msExchSystemMailbox') -or (ObjectClass -eq 'msExchDynamicDistributionList') -or (ObjectClass -eq 'group') -or (ObjectClass -eq 'publicFolder'))) -and (WindowsEmailAddress -like "*@tailspintoys.com") )}
    • Create the following Address Lists
      • All TST Distribution Lists
      • New-AddressList -Name "All TST Distribution Lists" -RecipientFilter {((Alias -ne $null) -and (ObjectCategory -like 'group') -and (WindowsEmailAddress -like "*@tailspintoys.com"))}
      • All TST Rooms
      • New-AddressList -Name "All TST Rooms" -RecipientFilter {((Alias -ne $null) -and (((RecipientDisplayType -eq 'ConferenceRoomMailbox') -or (RecipientDisplayType -eq 'SyncedConferenceRoomMailbox'))) -and (WindowsEmailAddress -like "*@tailspintoys.com"))}
      • All TST Users
      • New-AddressList -Name "All TST Users" -RecipientFilter {((Alias -ne $null) -and (((((((ObjectCategory -like 'person') -and (ObjectClass -eq 'user') -and (-not(Database -ne $null)) -and (-not(ServerLegacyDN -ne $null)))) -or (((ObjectCategory -like 'person') -and (ObjectClass -eq 'user') -and (((Database -ne $null) -or (ServerLegacyDN -ne $null))))))) -and (-not(RecipientTypeDetailsValue -eq 'GroupMailbox')))) -and (WindowsEmailAddress -like "*@tailspintoys.com"))}
    • Create an Offline Address Book called TST Offline Address Book (this uses the Default Global Address List that we had just created)
    • New-OfflineAddressBook -Name "TST Offline Address Book" -AddressLists "Default TST Global Address List"
    • Create an Address Book Policy called TST ABP
    • New-AddressBookPolicy -Name "TST ABP" -AddressLists "All Contacts", "All TST Distribution Lists", "All TST Users", "Public Folders" -RoomList "All TST Rooms" -OfflineAddressBook "TST Offline Address Book" -GlobalAddressList "Default TST Global Address List"
  • For WingTip Toys
    • Create a default Global Address List called Default WTT Global Address List
    • New-GlobalAddressList -Name "Default WTT Global Address List" -RecipientFilter {((Alias -ne $null) -and (((ObjectClass -eq 'user') -or (ObjectClass -eq 'contact') -or (ObjectClass -eq 'msExchSystemMailbox') -or (ObjectClass -eq 'msExchDynamicDistributionList') -or (ObjectClass -eq 'group') -or (ObjectClass -eq 'publicFolder'))) -and (WindowsEmailAddress -like "*@wingtiptoys.com") )}
    • Create the following Address Lists
      • All WTT Distribution Lists
      • New-AddressList -Name "All WTT Distribution Lists" -RecipientFilter {((Alias -ne $null) -and (ObjectCategory -like 'group') -and (WindowsEmailAddress -like "*@wingtiptoys.com"))}
      • All WTT Rooms
      • New-AddressList -Name "All WTT Rooms" -RecipientFilter {((Alias -ne $null) -and (((RecipientDisplayType -eq 'ConferenceRoomMailbox') -or (RecipientDisplayType -eq 'SyncedConferenceRoomMailbox'))) -and (WindowsEmailAddress -like "*@wingtiptoys.com"))}
      • All WTT Users
      • New-AddressList -Name "All WTT Users" -RecipientFilter {((Alias -ne $null) -and (((((((ObjectCategory -like 'person') -and (ObjectClass -eq 'user') -and (-not(Database -ne $null)) -and (-not(ServerLegacyDN -ne $null)))) -or (((ObjectCategory -like 'person') -and (ObjectClass -eq 'user') -and (((Database -ne $null) -or (ServerLegacyDN -ne $null))))))) -and (-not(RecipientTypeDetailsValue -eq 'GroupMailbox')))) -and (WindowsEmailAddress -like "*@wingtiptoys.com"))}
    • Create an Offline Address Book called WTT Offline Address Book (this uses the Default Global Address List that we had just created)
    • New-OfflineAddressBook -Name "WTT Offline Address Book" -AddressLists "Default WTT Global Address List"
    • Create an Address Book Policy called WTT ABP
    • New-AddressBookPolicy -Name "WTT ABP" -AddressLists "All Contacts", "All WTT Distribution Lists", "All WTT Users" -RoomList "All WTT Rooms" -OfflineAddressBook "WTT Offline Address Book" -GlobalAddressList "Default WTT Global Address List"
  • Once you create all the Address Lists, after a few minutes, you will be able to see them using Outlook Client or Outlook Web Access. One of the obvious things you will notice is that they are all empty! If you are wondering if the recipient filter is correct or not, you can use the following to confirm the membership
  • Get-Recipient -RecipientPreviewFilter (Get-AddressList -Identity "your address list name here").RecipientFilter

    Aha, you might say at this stage: I will just run the Update-AddressList cmdlet. Unfortunately, this won’t work, since that cmdlet is only available for on-premises Exchange servers. There is none for Exchange Online. Hmm. How do I update my Address Lists? It’s not too difficult. All you have to do is change some attribute on the members and they will start popping into the Address List! For a hybrid setup, this means we will have to change the setting using the on-premises Exchange server and use the Azure Active Directory Connect server to replicate the changes to Azure Active Directory, which in turn will update the Exchange Online objects, thereby populating the newly created Address Lists. Simple? Yes. Lengthy? Yes indeed.

  • I normally use a CustomAttribute for such occasions. Before using any CustomAttribute, ensure it is not used by anything else. You can ascertain this by checking whether that CustomAttribute currently holds a value for any object (a quick check is shown below). Let’s assume CustomAttribute10 can be used.
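    A quick way to check whether CustomAttribute10 is already in use, run from the on-premises Exchange Management Shell (the same check works in Exchange Online):

    #List any mailboxes or distribution groups that already have a value in CustomAttribute10 - expect no output
    Get-Mailbox -ResultSize Unlimited | Where-Object {$_.CustomAttribute10} | Select-Object Name,CustomAttribute10
    Get-DistributionGroup -ResultSize Unlimited | Where-Object {$_.CustomAttribute10} | Select-Object Name,CustomAttribute10
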
    #Get all On-Premise Mailboxes
    $OnPrem_MBXs = Get-Mailbox -Resultsize unlimited
    
    #Get all Exchange Online Mailboxes
    $EXO_MBXs = Get-RemoteMailbox -Resultsize Unlimited
    
    #Get all the Distribution Groups
    $All_DL = Get-DistributionGroup -Resultsize unlimited
    
    #Update the CustomAttribute10 Value
    #Since Room mailboxes are a special type of mailbox, the following update will
    #address Room Mailboxes as well
    
    $OnPrem_MBXs | Set-Mailbox -CustomAttribute10 "GAL"
    $EXO_MBXs | Set-RemoteMailbox -CustomAttribute10 "GAL"

    $All_DL | Set-DistributionGroup -CustomAttribute10 "GAL"
  • Using your Azure Active Directory Connect server run a synchronization cycle so that the updates are synchronized to Azure Active Directory and subsequently to Exchange Online
  • One gotcha here is if you have any Distribution Groups that are not synchronised from on-premises. You will have to find these and update them as well. One simple way to find them is to use the isDirSynced property. Connect to Exchange Online using PowerShell and then run the following:
  • $All_NonDirSyncedDL = Get-DistributionGroup -ResultSize Unlimited | ?{$_.IsDirSynced -eq $FALSE}
    
    #Now, we will update CustomAttribute10 (please check to ensure this customAttribute doesn't have any values)
     
    $All_NonDirSyncedDL | Set-DistributionGroup -CustomAttribute10 "GAL"
  • Check using Outlook Client or Outlook Address Book to see that the new Address Lists are now populated
  • Once you have confirmed that the new Address Lists have been populated, let’s assign the new Address Book Policies to the TailSpin Toys and WingTip Toys users. Note that it can take anywhere from 30 minutes to 1 hour for an Address Book Policy to take effect.
  • $allUserMbx = Get-Mailbox -RecipientTypeDetails UserMailbox -Resultsize unlimited
    
    #assign "TST ABP" Address Book Policy to TailSpin Toys users
    
    $allUserMbx | ?{($_.primarysmtpaddress -like "*@tailspintoys.com")} | Set-Mailbox -AddressBookPolicy "TST ABP"

    #assign "WTT ABP" Address Book Policy to WingTip Toys users
    $allUserMbx | ?{($_.primarysmtpaddress -like "*@wingtiptoys.com")} | Set-Mailbox -AddressBookPolicy "WTT ABP"
  • While waiting, remove the CustomAttribute10 values you had populated. Using PowerShell on the on-premises Exchange server, run the following:
  • #Get all On-Premise Mailboxes
    
    $OnPrem_MBXs = Get-Mailbox -Resultsize unlimited
    
    #Get all Exchange Online Mailboxes
    
    $EXO_MBXs = Get-RemoteMailbox -Resultsize Unlimited
    
    #Get all the Distribution Groups
    
    $All_DL = Get-DistributionGroup -Resultsize unlimited
    
    #Set the CustomAttribute10 Value to null
    
    #Since Room mailboxes are a special type of mailbox, the following update will
    
    #address Room Mailboxes as well
    
    $OnPrem_MBXs | Set-Mailbox -CustomAttribute10 $null
    
    $EXO_MBXs | Set-RemoteMailbox -CustomAttribute10 $null
    
    $All_DL | Set-DistributionGroup -CustomAttribute10 $null
  • Connect to Exchange Online using PowerShell and remove the value that was set for CustomAttribute10 on the non-DirSynced Distribution Groups
  • $All_NonDirSyncedDL = Get-DistributionGroup -ResultSize Unlimited | ?{$_.IsDirSynced -eq $FALSE}
    
    #Change CustomAttribute10 to $null
    
    $All_NonDirSyncedDL | Set-DistributionGroup -CustomAttribute10 $null

     

    That’s it, folks! Your GAL Segmentation is now complete. TailSpin Toys users will only see TailSpin Toys mail-enabled objects and WingTip Toys users will only see WingTip Toys mail-enabled objects.

A few words of wisdom

In the above steps, I would advise that once the new Address Lists have been populated:

  • apply the Address Book Policy to a few test mailboxes
  • wait between 30 minutes and 1 hour, then confirm that the Address Book Policy has been successfully applied to the test mailboxes and has the desired result (a quick check is shown below)
  • once you have confirmed that the test mailboxes show the desired result, then and ONLY then continue with the rest of the mailboxes
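A quick way to confirm the policy has applied to a test mailbox (the identity below is just an example):

#Check which Address Book Policy is assigned to a test mailbox
Get-Mailbox -Identity "test.user@tailspintoys.com" | Select-Object DisplayName,AddressBookPolicy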

This will give you confidence that the change will be successful. Also, if you find that there are issues, the rollback is not too difficult or time consuming.

Another thing to note is that when users have their Outlook client configured to use cached mode, they might notice that their new GAL is not fully populated. This is because their Outlook client uses the Offline Address Book to show the GAL, and at that time the Offline Address Book would not have regenerated to include all the new members. Unfortunately, in Exchange Online the Offline Address Book cannot be regenerated on demand and we have to wait for the Exchange Online servers to do this for us. I have noticed the regeneration happens twice in 24 hours, around 4am and 4pm AEST (your times might vary). So if users are complaining that their Outlook client GAL doesn’t show all the users, confirm using Outlook Web Access that the members are there (or you can run Outlook in non-cached mode) and then advise the users that the issue will be resolved when the Offline Address Book gets regenerated (in approximately 12 hours). Also, once the Offline Address Book has regenerated, it is best for users to manually download the latest Offline Address Book, otherwise the Outlook client will download it at a random time in the next 24 hours.

The next gotcha is around which Address Lists are available in Offline mode (refer to the screenshot below)

GAL01

When in Offline mode, the only list available is the Offline Global Address List. This is the one pointed to by the green arrow. Note that the red arrow is pointing to an Offline Global Address List as well; however, this is an “Address List” that has been named Offline Global Address List by Microsoft to confuse people! To repeat, the Offline Global Address List pointed to by the green arrow is available in Offline mode, however the one pointed to by the red arrow is not!

In our case, the Offline Global Address List is named Default TST Global Address List (and Default WTT Global Address List for WingTip Toys users).

If you try to access any others in the drop down list when in Offline mode, you will get the following error

AddressListError

This has always been the case; it is just that hardly anyone tries to access the other Address Lists in Offline mode. However, after GAL Segmentation, if users receive the above error, it is very easy to blame the GAL Segmentation implementation 😦 Rest assured, this is not the case and this “feature” has always been present.

Lastly, the user on-boarding steps will have to be modified to ensure that when their mailbox is created, the appropriate Address Book Policy is applied. This will ensure they only see the address lists that they are supposed to (on the flip side, if no address book policy is applied, they will see all address lists, which will cause a lot of confusion!)
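For example, the on-boarding script could include something along these lines once the mailbox exists (the identity below is a placeholder):

#Assign the appropriate Address Book Policy to the newly created mailbox
Set-Mailbox -Identity "new.user@tailspintoys.com" -AddressBookPolicy "TST ABP"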

With these words, I will now stop. I hope this blog comes in handy to anyone trying to implement GAL Segmentation.

If you have any more gotchas or things you can think of regarding GAL Segmentation, please leave them in the comments below.

Till the next time, Enjoy 😉


Proactive Problem Management – Benefits and Considerations

IT Service Management – Proactive Problem Management

The goal of Proactive Problem Management is to prevent Incidents by identifying weaknesses in the IT infrastructure and applications before any issues occur.

Benefits

PPM.jpg

  • Greater system stability – This leads to increased user satisfaction.
  • Increased user productivity – This adds to a sizable productivity gain across the enterprise.
  • Positive customer feedback – When we proactively approach users who have been experiencing issues and offer to fix their problems the feedback will be positive.
  • Improved security – When we reduce security incidents, this leads to increased enterprise security.
  • Improved quality of software/product – The data we collect is used to improve quality.
  • Reduced volume of problems – Lowers the ratio of immediate (reactive) support effort to planned support effort in the overall Problem Management process.

Considerations

  • Proactive Problem Management can be made easier by the use of a Network Monitoring System.
  • Proactive Problem Management is also involved with getting information out to your customers to allow them to solve issues without the need to log an Incident with the Service Desk.
    • This would be achieved by the establishment of a searchable Knowledgebase of resolved Incidents, available to your customers over the intranet or internet, or the provision of a useable Frequently Asked Question page that is easily accessible from the home page of the Intranet, or emailed regularly.
  • Many organisations are performing Reactive Problem Management; very few are successfully undertaking the proactive part of the process simply because of the difficulties involved in implementation.
    • Proactive Problem Management to Business Value
    • Cost involved with Proactive vs. Reactive Problem Management
    • Establishment of other ITIL processes such as configuration Management, Availability Management and Capacity Management.

 

Proactive Problem Management – FAQ

Q – At what stage of our ITIL process implementation should we look at Implementing Proactive Problem Management?

  • A – Proactive Problem Management cannot be contemplated until you have Configuration Management, Availability Management and Capacity Management well established as the outputs of these processes will give you the information that is required to pinpoint weaknesses in the IT infrastructure that may cause future Incidents.

Q – How can we performance measure and manage?

  • A – Moving from reactive to proactive maintenance management requires time, money, human resources, as well as initial and continued support from management. Before improving a process, it is necessary to define the improvement. That definition will lead to the identification of a measurement, or metric. Instead of intuitive expectations of benefits, tangible and objective performance facts are needed. Therefore, the selection of appropriate metrics is an essential starting point for process improvement.

 

Proactive Problem Management – High Level Process Diagram

PPMP1.jpg

Summary

Implementing proactive problem management will require an agreed, uniform approach, especially when multiple managed service providers (MSPs) are involved with an organisation. I hope you found this useful.

Removing Specific Azure Tags – PowerShell

Azure Tags

You apply tags to your Azure resources to logically organize them by categories. Each tag consists of a name and a value. For example, you can apply the name “Environment” and the value “Production” to all the resources in production.

After you apply tags, you can retrieve all the resources in your subscription with that tag name and value. Tags enable you to retrieve related resources from different resource groups. This approach is helpful when you need to organize resources for billing or management.

 

Problem:

Sometimes tags are applied in environments prior to developing a tagging strategy. The problem grows exponentially with the size of the environment and the number of users creating resources.

Currently we are looking for a solution to remove specific unwanted tags from Virtual Machines.

For this purpose, the script below was developed to solve the problem.

Solution:

The script below performs the following tasks:

  • Gets the list of all the VMs based on the scope
  • Gets all the tag values for each VM into a $VMtags variable
  • Copies all the values from $VMtags to $newtag, except the $TagtoRemove value
  • Configures the resources with the $newtag values using the Set-AzureRmResource cmdlet

 

Code:

#Getting the list of VMs based on the resource group. The scope can be changed to include more resources.

$VMs = Get-AzureRmVM -ResourceGroupName ResourceGroupName

#Details of the tag to remove are stored in the $TagtoRemove variable.

$TagtoRemove = @{Key="TestVmName";Value="abcd"}

foreach ($VM in $VMs)
{
    $VMtags = $VM.Tags # Getting the list of all the tags for the VM.
    $newtag = @{}      # Creating a new hashtable to store the tag values.

    foreach ($KVP in $VMtags.GetEnumerator())
    {
        Write-Host "`n`n`n"
        If ($KVP.Key -eq $TagtoRemove.Key)
        {
            Write-Host $TagtoRemove.Key "exists in" $VM.Name "and will be removed`n"
        }
        Else
        {
            $newtag.Add($KVP.Key, $KVP.Value) # Adding all the tags to $newtag except the $TagtoRemove.Key value
        }
    }

    #Updating the virtual machine with the updated tag values in $newtag.
    Set-AzureRmResource -ResourceGroupName $VM.ResourceGroupName -ResourceName $VM.Name -Tag $newtag -Force -ResourceType Microsoft.Compute/VirtualMachines
}
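
Once the script has run, a quick way to confirm the tag has gone from a particular VM (the resource group and VM names below are placeholders) is:

#Check the remaining tags on a VM
(Get-AzureRmVM -ResourceGroupName "ResourceGroupName" -Name "VMName").Tags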

 

 

Automation and Creation of Office 365 groups using Flow, Microsoft Graph and Azure Function – Part 2

In the Part 1 blog here, we discussed an approach for the Group creation process and important considerations for provisioning groups. In this blog, we will look at getting a Graph App ID and App secret for invoking the graph service and then implementation of the group provisioning process.

MS Graph App Set up

Before we start creating groups we will need to set up a Graph App that will be used to create the group in the Office 365 tenancy. The details are in this blog here on how to create a Microsoft Graph app.

Regarding the permissions, below are the settings that are necessary to allow creating groups through the graph service.

GroupApp_Rights

Creating a Group

As discussed in Part 1 here, below are the broad level steps for automating group creation using a SharePoint inventory list, Microsoft Flow and Azure Function

1. Create a SharePoint list, with the metadata necessary for Group and SharePoint assets provisioning

We can use a SharePoint list to act as a trigger to create groups with the custom metadata necessary for provisioning the groups such as Owners and metadata necessary for creating site assets for SharePoint sites. As a best practice, I recommend you create multiple master lists to manage the details separately if there are too many to manage. In our case, we have created three separate lists for managing the Group details.

1. Group details and metadata
2. Owners and Team Members List
3. Site Assets configuration list

2. Create a Microsoft flow. The flow will validate a new or existing group and pick the unique Group Alias from the list which will allow us to find the group if it exists.

The flow will act as a trigger to start the provisioning process and call the Azure function passing the appropriate metadata as shown below. The flow also allows error handling scenarios as described in the Part 1 blog here

Note: The GroupAlias is the unique name of the Group and is not necessarily the SharePoint URL. For example, in the case where a group was created and subsequently deleted, the unique alias could be used again but the Site URL will be different (unless cleared from the SharePoint recycle bin).

Group_FlowAzureFunctionCall

3. Create the Group in an Azure Function using SharePoint Online CSOM

In order to create the group, we will need to authenticate to the Graph service using the Graph App created earlier. For authenticating the app through Azure AD, please install the NuGet Package for Microsoft.IdentityModel.Clients.ActiveDirectory.

After authenticating, we will create the group using the UnifiedGroup Utility provided through the SharePoint Online CSOM.

Below is a quick snapshot of the code. Note the inclusion of Graph module of the OfficeDevPnP class.

Note: one important bit to note is that, in the above code, the owners and members email arrays are the same. If the owners and members email arrays differ, then group provisioning is delayed significantly. Also, it is important to keep the other parameters the same as during creation in the method below, because it might otherwise reset the other properties to their defaults. For example, if isPrivate is not set, then the group becomes public.
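The original snippet uses the OfficeDevPnP UnifiedGroupsUtility from C# inside the Azure Function. Purely as an illustration of the same call, a rough PnP PowerShell equivalent is shown below (the app ID, secret, domain and group details are placeholders):

#Connect to Microsoft Graph using the app registration created earlier
Connect-PnPOnline -AppId "<app id>" -AppSecret "<app secret>" -AADDomain "contoso.onmicrosoft.com"

#Create the Office 365 group; the owners and members are kept the same to avoid provisioning delays
New-PnPUnifiedGroup -DisplayName "Finance Team" -Description "Finance collaboration space" -MailNickname "financeteam" -Owners "user1@contoso.com" -Members "user1@contoso.com" -IsPrivate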

4. After the group is created, we can fetch the details of the group as below.

5. The group generally takes about 2-3 minutes to provision. However, if there are multiple hits, then one request might override another, causing the group creation to fail. In such cases, we can sequence the Azure Functions to run by modifying the host.json file. A quick blog covering this can be found here.

Provisioning SharePoint Assets in Azure Function after Group Creation

1. For provisioning of the SharePoint assets, we might have to wait for the Office 365 AD sync to finish granting access to the Admin account.

Sometimes the AD sync process takes much longer, so to grant direct access to the SharePoint Site Collection using the tenant admin, we could use the code below. Recommendation: only proceed with this approach if access fails for more than a few minutes.

Note: As a best practice, I would recommend using a Service Account when working on the SharePoint Site. We could also use an App as suggested in the Site Scripting blog here.

2. Once you have access, you can use the normal SharePoint CSOM to do the activities that are pertaining to SharePoint asset provisioning such as Libraries, Site Pages content, Lists, etc.

3. After you’re done, you can return the success from the Azure function as below.

Note: use HttpStatusCode.Accepted instead of HttpStatusCode.Error if there is error handling in the Flow; otherwise Flow will trigger another instance of the flow when the Azure Function fails.

Conclusion:

Above we saw how we can have a SharePoint Inventory list and create groups using Flow and Azure Functions. For a quick reference, below are the links to the other related blogs.

Part 1 – Automation and Creation of Office 365 groups approach

How to create a Microsoft Graph App

Sequencing calls in Azure Functions

ViewModel-first approach in Xamarin forms

There are primarily three patterns to choose from when developing mobile applications: MVC, MVVM and MVP. For a detailed discussion about them, check Xamarin application architecture.

The focus of this post will be on the MVVM pattern. One of the earliest and most stable MVVM libraries for Xamarin has been the MVVM Cross library. Like most libraries, it follows a ViewModel-first approach. What this means is that the focus of the developer is always on the ViewModel and the data in the application. All transitions and animations are also data dependent, which means a user moves from one ViewModel to another and not from one screen to another. It does suffer from certain limitations, such as no preview of Android layouts due to embedded binding information in ‘.axml’ files, no header view for ListViews, etc. Also, to use it, a developer requires native UI development experience as well. That being said, it is still quite a capable library and most applications can be written using it.

Xamarin Forms, on the other hand, allows a better preview of the UI, even live updates (using Live Player). Most developers with ASP.NET and XAML experience, and without any mobile experience, can start writing mobile applications targeting Android, iOS and Windows. However, it is a UI-first approach, which means the developer is concerned more with the UI, and data becomes secondary.

Below we will look at a ViewModel-first approach in Xamarin Forms, allowing developers with XAML experience to quickly start writing mobile apps. The complete application can be downloaded from here: ViewModel-first Xamarin Forms sample

First is our BaseViewModel

ViewModel-first navigation

Next is our main contract for navigation within the application

The implementation of this contract will allow transition from one screen to another, by only providing the ViewModel’s type and optional parameter.

In the sample application, Autofac is being used for IoC. For more information check it out here.

For implementing the ViewModel-first approach, we will use an anti-pattern: Service Locator. It is labelled as such because of the complexity of resolving dependencies when the code base is quite large; it only works well in simple scenarios. This is exactly the case with a View and its ViewModel. In most applications, each view will only have one view model, and each view model only caters to one view. Thus, this pattern becomes a perfect candidate for this situation.

Service Container with IoC

Below is the code for AppContainer.cs, where we register all services. Note the mocked services used for Automation. For more discussion about Automation in Xamarin check out Enterprise testing in Xamarin.

The LoadViewModel method is utilised for locating the view model for a view.

A BaseView class is created, which acts as the parent for all views inside the application and takes care of loading its view model, as shown below. This is where we use the Service Locator pattern to load the ViewModel for the view.

Implementation of ViewModel-first contract

Below is the NavigationService class that implements INavigationService contract.

UI transition using ViewModel

In the sample application, we have a search screen and a result screen. Using the above service, we can move from the search screen to the result screen using the code below.

Thus, using the Service Locator pattern and an IoC-based service container, we are able to develop mobile applications in Xamarin Forms using the ViewModel-first approach.

 

Key Vault Secrets and ARM Templates

What is Azure Key Vault

Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud applications and services. By using Key Vault, you can encrypt keys and secrets (such as authentication keys, storage account keys, data encryption keys, .PFX files, and passwords) using keys protected by hardware security modules (HSMs).

Key Vault streamlines the key management process and enables you to maintain control of keys that access and encrypt your data. Developers can create keys for development and testing in minutes, and then seamlessly migrate them to production keys. Security administrators can grant (and revoke) permission to keys, as needed.

Anybody with an Azure subscription can create and use key vaults. Although Key Vault benefits developers and security administrators, it could be implemented and managed by an organization’s administrator who manages other Azure services for an organization. For example, this administrator would sign in with an Azure subscription, create a vault for the organization in which to store keys, and then be responsible for operational tasks, such as:

  • Create or import a key or secret
  • Revoke or delete a key or secret
  • Authorize users or applications to access the key vault, so they can then manage or use its keys and secrets
  • Configure key usage (for example, sign or encrypt)
  • Monitor key usage

This administrator would then provide developers with URIs to call from their applications, and provide their security administrator with key usage logging information.

( Ref: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-whatis ).

 

Current Scenario of the Key Vault.

In the current scenario, we are utilizing Key Vault for the provisioning of ARM resources. Instead of any usernames and passwords, the template only contains references to these values stored in Azure Key Vault.

These secrets are extracted while the resources are being deployed using the template and the parameter file together.

Utilizing the Key Vault.

The following tasks are involved in utilizing the Key Vault:

  • Creating the key Vault.
  • Adding Keys and Secrets in the Vault
  • Securing the Key Vault
  • Referencing Keys

Creating a Key Vault

Step 1: Log in to the Azure Portal, click on All Services and select Key Vault

1.png

Step 2: Click on Add, enter the following details and click on Create:

  • Key Vault Name
  • Subscription
  • Resource Group
  • Pricing Tier

2.PNG

Step 3: Select the Key Vault name

Step 4: Select Secrets

Step 5: Click on Generate/Import

3

Step 6: Select Manual in the upload options

Step 7: Enter the following information:

  • Name of the secret (e.g. MyPassword)
  • Value of the secret (e.g. P@ssword1)
  • Set an activation date (if required)
  • Set an expiration date (if required)

Step 8: Select Yes for the Enabled option

Step 9: Click on Create

4

Securing the Key Vault

Step 1: Select the newly created Key Vault
Step 2: Select Access Policies
Step 3: Select “Click to show advanced access policies”

5

Step 4: Select the checkboxes as shown in the snapshot below.

Step 5: Click on Add New

6

Step 6: Select Secret Management in the “Configure from template” option.
Step 7: Select the Principal (the name of the resource which needs access to the secret).
Step 8: Select the permissions required from the Secret permissions list.
Step 9: Select OK.

8
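The same vault, secret and access policy can also be created from PowerShell. A minimal sketch using the AzureRM module (the vault, resource group, location and principal below are examples only):

#Create the vault and allow ARM template deployments to reference its secrets
New-AzureRmKeyVault -VaultName "MyKeyVault" -ResourceGroupName "MyResourceGroup" -Location "Australia East" -EnabledForTemplateDeployment

#Add the secret
$secretValue = ConvertTo-SecureString "P@ssword1" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "MyKeyVault" -Name "MyPassword" -SecretValue $secretValue

#Grant the deploying principal permission to read secrets
Set-AzureRmKeyVaultAccessPolicy -VaultName "MyKeyVault" -ServicePrincipalName "<app id>" -PermissionsToSecrets get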

 

Referencing the Secrets.

Currently we are referencing the secrets stored in the Key Vault from the ARM templates.
A parameter of type “securestring” is created in the parameters section of the ARM template file, armtemplate.json.

kvsecret

We add the parameter in the parameters file of the template, armtemplate.parameters.json, with the following details:

  • ID of the Key Vault (the resource ID of the Key Vault, found under its Properties section)
  • Name of the secret to extract (MyPassword)

kvsecretparamters.PNG
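Both values can be retrieved with PowerShell (the vault and secret names below are the ones used in the example above):

#Resource ID of the Key Vault - this goes into the "id" property in the parameters file
(Get-AzureRmKeyVault -VaultName "MyKeyVault").ResourceId

#Confirm the secret exists
Get-AzureKeyVaultSecret -VaultName "MyKeyVault" -Name "MyPassword" | Select-Object Name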

 

 

Summary:

Based on the above example, we achieved the following:

  • Secure information (usernames and passwords) is not stored anywhere in the template or the parameter files
  • The secure values are acquired only while the template is being deployed
  • The values are only accessible to those who have access to the Key Vault.

 

Adding Paging capability into HTML table using AngularJS

Background

This is a continuation of my previous post, in which we made an HTML table sortable; if you haven’t read it yet, give it a read first at URL. The next obvious request from end users is to introduce pagination.

Solution

We will be coming up with the following table, in which data can be sorted and paged at the same time; all of its data is retrieved on the client side for paging and sorting.

Dashboard

The first part of this blog will help our users select/set the number of records displayed on a page by displaying a list of available options.

NoOfRecords
The second part will let users navigate to the next or previous page, and they will also have an option to jump directly to the first and/or last pages.

Pager

The overall approach and solution are described in the following sections.

Directive Template

Our HTML template for the directive will be as follows; it will have some basic HTML and events to respond to changes in the user interface.

Angular watch for a Change in items

The first thing we have to do in our directive is create a copy of the original items on which we have to apply pagination.

Then we need to populate the options for page sizes and set the default, so the initial rendering will use the default page size and start from page #1.

Thirdly, as we are doing client-side paging, we need to maintain the original array and only show a slice of it in our HTML table; then, as the user selects or changes the page number or the number of records on a page, we will render that page accordingly.

Change Page Size (number of records on a page)

We attach an event to our drop-down list to ensure that whenever a user selects a new page size, the correct number of records is shown by recalculating as follows:

  1. Resets the start position (begin) to start from index 0.
  2. Sets the number of records to be shown on the page as per the user’s selection (10, 20 or 50)
  3. Resets the current page to page no. 1
  4. Recalculates the last page based on the total number of items in the array and the page size desired by the user

Change of Page No

We attach an event to our next/previous and first/last buttons to ensure that whenever a user selects a new page, the correct records are shown by recalculating the indexes as follows:

  1. Check what value has been passed and take action accordingly
    • if value ‘-2’ is passed > the user requested the FIRST page
    • if value ‘-1’ is passed > the user requested the PREVIOUS page
    • if value ‘1’ is passed > the user requested the NEXT page
    • if value ‘2’ is passed > the user requested the LAST page
  2. Then reset the current page based on the mapping above in point #1
  3. Calculate the start position (begin) to start showing records, based on the current page index and page size
  4. Set the number of records to be shown on the page as per the user’s selection of page size (10, 20 or 50)

Conclusion

By playing around with AngularJS, we can create custom directives best suited to our needs, without the overhead of adding an external library just to reuse a part of it. This gives us significant control over how the functionality is implemented.

 

Highly Available deployment and availability pattern for heritage Windows applications on AWS

Quite often I am presented with the challenge of deploying Windows COTS applications onto the AWS platform with a requirement to take advantage of cloud-native patterns like auto-scaling and auto-healing. In this blog post I’m going to describe how I’ve used Auto Scaling Groups, Load Balancers, Cloudwatch Alarms and Route 53 to provide a self-healing implementation for a heritage COTS Windows application. This pattern was also extended to use lifecycle hooks to support a Blue/Green deployment with zero downtime.

This pattern works quite nicely for heritage applications which use an Active/Passive configuration and where the web tier is stateless, i.e. the passive node does not have full application capability or is write only. When the primary node is unavailable during a failure or a blue/green upgrade, clients are transparently redirected to the passive node. I like to use the term “heritage” as it seems to have a softer ring than “legacy”. During an actual failure, the outage is less than 2 minutes for automatic failover to complete.

The diagram below summarises a number of the key components used in the design. In essence we have two Auto Scaling groups, each with a minimum and maximum of 1, across two availability zones. We have a private Route 53 hosted zone (int.aws) to host custom CNAME records, which typically point to load balancers. A cross-zone load balancer is used; in the example below I’m using a Classic Load Balancer as I’m not doing SSL offload, however it could just as easily be an Application Load Balancer. Route 53 and custom Cloudwatch alarms have been utilised to reduce the time required to fail over between nodes and to support separate configuration of the primary and secondary nodes.

A number of other assumptions:

  • Cloudwatch Alarm is set to detect where number of healthy nodes in AutoScaleGroup (ASG) ELB is less than 1. Current minimum polling interval is 60 seconds.
  • Independent server components – can support different configurations, i.e. primary/secondary config
  • Route 53 record (TTL 30 seconds) with a CNAME created in internal DNS (app.corp.com) to point to the Route 53 CNAME (dns.master.int.aws).
  • ASG health checks on TCP port 443 configured (5 second interval, healthy and unhealthy threshold of 2). No point in setting these any more granular, as failover is dependent on the Cloudwatch alarm interval.
  • Single ASG deployed within each availability zone
  • Web tier is stateless
  • ELB still deployed over two availability zones.
  • TCP port monitors configured without SSL offload
  • No session stickiness configured as there is only a single server behind each ASG/ELB. In failover scenario clients will need to re-authenticate.
  • Use pre-baked AMIs to support shortest possible healing times.

Normal behaviour: client traffic is directed to the Active node in AZ A.

 

An instance fails, and within 60 seconds the Cloudwatch alarm is triggered.

 

The Route 53 health check is updated and Route 53 points the DNS record to the passive node. Clients now access the secondary/passive server. Clients may need to re-authenticate if the application requires a stateful session.

 

Auto-healing rebuilds the failed server within AZ A.

 

The primary node now passes the Route 53 health check, so Route 53 updates the DNS record back to the primary node. Clients may need to re-authenticate if the application requires a stateful session.

Secondary Node Failure

If the secondary instance fails, there is no disruption to service, as traffic is never actively sent to this node except during a primary node failure.

Availability Zone Failure

Availability zone failures behave in a similar manner to instance failures and are dependent upon the Cloudwatch alarm being raised.

Blue Green deployments

Blue/Green deployments can be achieved using similar behaviour to that described above.

On the left we see the existing release/build of the application stack, whilst on the right is the environment to be built. These are all within the same account and the same availability zones, just different CloudFormation stacks. Two stages are described: a deploy stage, where the new environment is deployed, and a release stage, where DNS is cut over. No additional build activities are conducted during the release stage.

DEPLOY Stage

1. Servers are built as independent components and then baked as AMIs.

2. The server 2 component from the previous build is scaled down.

3. Server 2 is scaled up as part of the deploy stage. The team can now test and validate this release, via the ELB for the second instance, prior to release. I like to include custom host headers with the server name and specific build number in order to easily identify which server I am hitting, which can be checked through the Chrome debugger or Fiddler.

RELEASE STAGE

4. Route 53 DNS is automatically updated to point to the server 2 ELB. No service outage.

5. The previous primary instance of the build is terminated, and the primary server is now built within the new stack.

 

6. The server 1 bootstrap is initiated within the new CloudFormation stack.

7. Route 53 DNS is updated to the CNAME of the ELB in front of the primary node, and normal service resumes in the newly deployed/released environment.

 

Originally published at http://cloudconsultancy.info

ITSM – Continual Service Improvement (CSI) – All you need to know

Why Continual Service Improvement (CSI) is Required?

  • The goal of Continual Service Improvement (CSI) is to align and realign IT Services to changing business needs by identifying and implementing improvements to the IT services that support Business Processes.
  • The perspective of CSI on improvement is the business perspective of service quality, even though CSI aims to improve process effectiveness, efficiency and cost effectiveness of the IT processes through the whole life-cycle.
  • To manage improvement, CSI should clearly define what should be controlled and measured.

It is also important to understand the difference between Continuous and Continual:

csi-cVC

What are the Main Objectives of Continual Service Improvement (CSI)

CSI OBJ.jpg

Continual Service Improvement (CSI) – Approach

pdca.jpg

Continual Service Improvement (CSI) – 7 Step Process

The CSI measurement and improvement process has 7 steps. These steps help to define the corrective action plan.

csi7 steps.jpg

Continual Service Improvement (CSI) – Challenges, CSFs & Risks

Like all programs, CSI also has its challenges, critical success factors and risks. Some of these are listed below. It is absolutely important to have senior management’s buy-in to implement the CSI program.

CSI csfs.jpg

Please remember transforming IT is a Process/Journey, not an Event.

Hope these are useful.

Latest updates to Modern Libraries experience in SharePoint Communication sites (Apr 2018)

Modern Libraries in Communication Sites have had a welcome facelift during the last few months (April 2018) and there have been many great changes. I am going to list a few of these updates here.

Note: some of these updates might be limited to Targeted release (or First release) tenants only. If these changes are not available, they might not be in the Standard release (or GA release) yet.

1. Full page view of SharePoint libraries

The SharePoint libraries now have a full page view, which allows them to use the full home page layout of Communication sites. It looks great 🙂

ModernLibExperienceSitePages

2. Custom Metadata support for Site Pages.

Now it is possible for newly created Communication sites (created after March 2018) to have custom metadata associated with the Site Pages content type.

Modern_Site_Pages_with_custom_content_type

A few catches in this scenario are:

1. It is still not possible to create a page by selecting a child Site Page content type unless the child content type is set as the default. When a page is created, it is set to the default content type of the Site Pages library.

2. Any communication sites created prior to March 2018 may not get this update. For associating metadata with site pages on sites created prior to March 2018, please check this blog for a custom approach to associating custom metadata with Site Pages. This will require custom code.

3. Support for more columns types through Modern UI Panel

Now we can create columns of additional metadata types, such as Date, Choice and Picture, within Modern Libraries, so we don’t have to go to the classic experience, which is great from a UX and usability perspective.

ModernLibExperience_O3651

4. New command bar on Modern Libs

The library command bar now provides a seamless experience, with search and command items at the same level. Though small, this is a great change, because it will drive users to search for content prior to creating new content.

NewCommandBar_SitePages

Conclusion

The above are some great updates for SharePoint modern libraries.