Adding Paging capability into HTML table using AngularJS

Background

This post continues my previous one, in which we made an HTML table sortable. If you haven’t read it yet, give it a read first at URL. The next obvious request from end users is to introduce pagination.

Solution

We will end up with the following table, in which data can be sorted and paged at the same time. All of its data is retrieved on the client side for paging and sorting.
Dashboard
The first part of this blog will help our users to select/set no. of records displayed on a page by displaying a list of available options.
NoOfRecords
The second part will let users navigate to the next or previous page, and they will also have an option to jump directly to first and/or last pages.
Pager
The overall approach and solution are described in the following sections.

Directive Template

Our HTML template for the directive will be as follows; it has some basic HTML and events to respond to changes in the user interface.
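The template markup itself did not survive publishing, so here is a minimal sketch of what such a directive could look like. The element name, scope properties and handler names (pageSizes, pageSize, currentPage, lastPage, pageSizeChanged, pageChanged) are assumptions for illustration, not the exact code from the project:
[code language="javascript"]
// Minimal sketch of the pager directive (names are assumptions, not the
// original project code). The template exposes a page-size drop-down and
// first/previous/next/last buttons wired to controller handlers.
angular.module('app').directive('tablePager', function () {
    return {
        restrict: 'E',
        scope: { items: '=' },
        controller: 'pagerController',
        template:
            '<div class="pager">' +
            '  <select ng-model="pageSize" ng-change="pageSizeChanged()"' +
            '          ng-options="size for size in pageSizes"></select>' +
            '  <button ng-click="pageChanged(-2)">First</button>' +
            '  <button ng-click="pageChanged(-1)">Previous</button>' +
            '  <span>Page {{currentPage}} of {{lastPage}}</span>' +
            '  <button ng-click="pageChanged(1)">Next</button>' +
            '  <button ng-click="pageChanged(2)">Last</button>' +
            '</div>'
    };
});
[/code]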

Angular watch for a Change in items

The first thing we must do in our directive is create a copy of the original items on which we have to apply pagination.
Then we need to populate the options for page sizes and set the default, so the initial rendering uses the default page size and starts from page 1.
Thirdly, as we are doing client-side paging, we need to maintain the original array and only show a slice of it in our HTML table; then, as the user selects or changes the page number or the number of records on a page, we render that page accordingly. A sketch of this logic is shown below.
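A minimal sketch of that watch logic, assuming the controller and scope names used in the template sketch above (allItems, pagedItems and so on are illustrative, not the original implementation):
[code language="javascript"]
// Sketch of the controller's watch on items (names are assumptions).
angular.module('app').controller('pagerController', function ($scope) {
    $scope.$watch('items', function (newItems) {
        if (!newItems) { return; }
        // 1. keep a copy of the original items for client-side paging
        $scope.allItems = angular.copy(newItems);
        // 2. populate the page-size options and set the default
        $scope.pageSizes = [10, 20, 50];
        $scope.pageSize = $scope.pageSizes[0];
        // 3. start from page 1 and show only the first slice in the table
        $scope.currentPage = 1;
        $scope.begin = 0;
        $scope.lastPage = Math.ceil($scope.allItems.length / $scope.pageSize);
        $scope.pagedItems = $scope.allItems.slice($scope.begin, $scope.begin + $scope.pageSize);
    });
});
[/code]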

Change Page Size (no# of records on a page)

We attached an event to our drop-down list to ensure that whenever a user selects a new page size, the correct records are shown by recalculating as follows (a sketch follows the list):

  1. Resets the start position (begin) to index 0.
  2. Sets the number of records to be shown on the page as per user selection (10, 20 or 50)
  3. Resets the current page to page 1
  4. Recalculates the last page based on the total number of items in the array and the page size selected by the user
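A sketch of a handler implementing the four steps above, inside the same controller (names carried over from the earlier sketches, so treat them as assumptions):
[code language="javascript"]
// Sketch: recalculate paging when the user picks a new page size.
$scope.pageSizeChanged = function () {
    $scope.begin = 0;                        // 1. reset the start position to index 0
    // 2. $scope.pageSize is already bound to the drop-down selection (10, 20 or 50)
    $scope.currentPage = 1;                  // 3. reset the current page to page 1
    $scope.lastPage = Math.ceil($scope.allItems.length / $scope.pageSize); // 4. recalculate the last page
    $scope.pagedItems = $scope.allItems.slice($scope.begin, $scope.begin + $scope.pageSize);
};
[/code]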

Change of Page No

We attached an event to our next/previous and first/last buttons to ensure that whenever a user selects a new page, the correct records are shown by recalculating the indexes as follows (a sketch follows the list):

  1. Check what value has been passed and take action accordingly
    • if value ‘-2’ is passed > user requested the FIRST page
    • if value ‘-1’ is passed > user requested the PREVIOUS page
    • if value ‘1’ is passed > user requested the NEXT page
    • if value ‘2’ is passed > user requested the LAST page
  2. Then reset the current page based on the mapping in point 1
  3. Calculate the start position (begin) from the current page index and page size
  4. Set the number of records to be shown on the page as per the user's page size selection (10, 20 or 50)
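A sketch of a handler implementing the steps above, again inside the same controller and using the -2/-1/1/2 convention (names are assumptions carried over from the earlier sketches):
[code language="javascript"]
// Sketch: work out the new current page from the value passed by the buttons.
$scope.pageChanged = function (direction) {
    if (direction === -2) {            // FIRST page
        $scope.currentPage = 1;
    } else if (direction === -1) {     // PREVIOUS page
        $scope.currentPage = Math.max(1, $scope.currentPage - 1);
    } else if (direction === 1) {      // NEXT page
        $scope.currentPage = Math.min($scope.lastPage, $scope.currentPage + 1);
    } else if (direction === 2) {      // LAST page
        $scope.currentPage = $scope.lastPage;
    }
    // recalculate the start position from the current page and page size,
    // then show the slice for that page
    $scope.begin = ($scope.currentPage - 1) * $scope.pageSize;
    $scope.pagedItems = $scope.allItems.slice($scope.begin, $scope.begin + $scope.pageSize);
};
[/code]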

Conclusion

By playing around with AngularJS, we can create custom directives best suited to our needs, without the overhead of adding an external library just to reuse a small part of it. This gives us significant control over how the functionality is implemented.
 

Making HTML table sortable in AngularJS

 

Problem

We were working on a project whose front end was built on AngularJS v1.6.6 along with Office UI Fabric, using the community-driven library ngofficeuifabric. When we came to table sorting, we realised it wasn’t a great fit for the custom sorting and filtering requirements raised during later sprints by our product owners and the business in general.

Solution

We replaced ‘uif-table’ with ‘table’ and added custom events for sorting; the markup is as follows:
[code language="html"]
<!-- Reconstructed markup: the original HTML tags were lost in publishing,
     so the table/ng-repeat structure and ng-click handlers below are an
     assumption; the column headers, bindings and headerClick() function
     come from the post itself. -->
<table>
  <thead>
    <tr>
      <th ng-click="headerClick('item.ID', !sortAsc)">ID</th>
      <th ng-click="headerClick('item.ObjectID', !sortAsc)">Object ID</th>
      <th ng-click="headerClick('item.DateSubmitted', !sortAsc)">Object Date Submitted</th>
      <th ng-click="headerClick('item.ObjectDate', !sortAsc)">Object Date</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-repeat="item in items">
      <td>{{item.ID}}</td>
      <td>{{item.ObjectID}}</td>
      <td>{{item.DateSubmitted | date:'dd MMM yyyy'}}</td>
      <td>{{item.ObjectDate | date:'dd MMM yyyy'}}</td>
    </tr>
  </tbody>
</table>
[/code]
You need two variables to store which ‘order’ is required (ascending/descending) and which ‘field’ (column) to sort records on; we initialised these variables as follows:
[code language="javascript"]
// setting sorting variables for the first page load
$scope.sortAsc = false;
$scope.sortField = 'item.ID';
[/code]
We attached a function to the table headers to record a click on a specific column along with the column’s desired sorting order; the sort field and direction are set at the start of the function.
[code language="javascript"]
function headerClick(fieldName, sortType) {
    $scope.sortField = fieldName;
    $scope.sortAsc = sortType;
    var sortedItems = [];
    angular.forEach($scope.items, function (item) {
        sortedItems.push(item);
    });
    $scope.items = [];
    if (fieldName === "item.ObjectDate") {
        // sorting for the custom date field as it can include text
        // (compareObjectDate is the custom comparator described below)
        sortedItems.sort(function (a, b) {
            return compareObjectDate(a.ObjectDate, b.ObjectDate);
        });
    }
    else if (fieldName === "item.DateSubmitted") {
        // sorting for any date field
        sortedItems.sort(function (a, b) {
            return compareDate(a.DateSubmitted, b.DateSubmitted);
        });
    }
    else {
        // sorting for any text field - strip the 'item.' prefix so the
        // property can be read dynamically from each record
        var prop = fieldName.replace('item.', '');
        sortedItems.sort(function (a, b) {
            return String(a[prop]).localeCompare(String(b[prop]));
        });
    }
    // check what type of sorting (ascending/descending) is required
    if ($scope.sortAsc) { // ASC
        $scope.items = sortedItems;
    }
    else { // DESC
        $scope.items = sortedItems.reverse();
    }
}
[/code]
Here is a sample function to sort based on a date field. You can create your own sorting function for each column type following this simple logic:

  • if the two values are equal, the function should return 0
  • if the first value is less than the second, the function should return -1
  • if the first value is greater than the second, the function should return 1

[code language="javascript"]
function compareDate(date1, date2) {
    var dateFirst = new Date(date1);
    var dateSecond = new Date(date2);
    // compare the underlying time values; comparing Date objects with ===
    // would compare object references and never match
    if (dateFirst.getTime() === dateSecond.getTime()) {
        return 0;
    }
    else if (dateFirst < dateSecond) {
        return -1;
    }
    else {
        return 1;
    }
}
[/code]
The pictures below show custom sorting on the Object Date column, as it contains a combination of dates and text and the client wanted it sorted in a specific order.

Ascending order (custom field)


Descending order (custom field)

Another view of the same table, this time sorting on the ‘Object Date Submitted’ column, which contains only dates.

Ascending order (date field)


Descending order (date field)

It’s an easy way to use plain Angular and HTML to order/sort your table as per your needs. This can be enhanced further to cater for specific requirements, or you can use a third-party library to deliver richer functionality.

Migrate SharePoint contents using ShareGate

Background

The first step in any migration project is to do an inventory and see the size of the data you are looking to migrate. For simplicity, this post assumes you have already done this activity and have planned new destination sites for your existing content. This means you have an Excel sheet somewhere identifying where your content is going to reside in the migrated site, and every existing site in scope has at least a new home pointer if it qualifies for migration.
The project I was working on had six levels of sites and subsites within a site collection. For simplicity, we consider just one site collection in scope for this post; after discussing it with the business, we had finalised that it would initially have a maximum of three levels of sites and subsites.
In this blog post, we will be migrating SharePoint 2010 and 2013 contents from on-premise to SharePoint Online (SPO) in our customer’s Office 365 tenant.

Creating Structure

After doing the magic of mapping the existing site collection, sites and subsites, we came up with an Excel sheet holding the mapping for our desired structure, as shown below:

Level1.csv

Level2.csv

Level3.csv


The above files will be used as a reference master sheet to create the site structure before pulling the trigger on migrating contents. We will use the PowerShell script below to create the structure for our desired state within the new site collection.

SiteCreation.ps1

[code language="powershell"]
$url = "https://your-client-tenant-name.sharepoint.com/"
function CreateLevelOne(){
    Connect-PnPOnline -Url $url -Credentials 'O365-CLIENT-CREDENTIALS'
    filter Get-LevelOne {
        New-PnPWeb -Title $_.SiteName -Url $_.URL1 -Template BLANKINTERNETCONTAINER#0 -InheritNavigation -Description "Site Structure created as part of Migration"
    }
    Import-Csv C:\_temp\Level1.csv | Get-LevelOne
}
function CreateLevelTwo(){
    filter Get-LevelTwo {
        $connectUrl = $url + $_.Parent
        Connect-PnPOnline -Url $connectUrl -Credentials 'O365-CLIENT-CREDENTIALS'
        New-PnPWeb -Title $_.SiteName -Url $_.URL2 -Template BLANKINTERNETCONTAINER#0 -InheritNavigation -Description "Site Structure created as part of Migration"
    }
    Import-Csv C:\_temp\Level2.csv | Get-LevelTwo
}
function CreateLevelThree(){
    filter Get-LevelThree {
        $connectUrl = $url + $_.GrandPrent + '/' + $_.Parent
        Connect-PnPOnline -Url $connectUrl -Credentials 'O365-CLIENT-CREDENTIALS'
        New-PnPWeb -Title $_.SiteName -Url $_.URL3 -Template BLANKINTERNETCONTAINER#0 -InheritNavigation -Description "Site Structure created as part of Migration"
    }
    Import-Csv C:\_temp\Level3.csv | Get-LevelThree
}
CreateLevelOne
CreateLevelTwo
CreateLevelThree
[/code]

Migrating Contents

Once you have successfully created your site structure, it is now time to start migrating contents to the newly created structure as per the mapping you identified earlier. The CSV file format looks like below:
MG-Batch.csv

The final step is to execute the PowerShell script and migrate content using ShareGate commands from your source site to your destination site (as defined in your mapping file above).

Migration-ShareGate.ps1

[code language="powershell"]
# folder where files will be produced
$folderPath = "C:\_Kloud\SGReports\"
$folderPathBatches = "C:\_Kloud\MigrationBatches\"
filter Migrate-Content {
    # URLs for source and destination
    $urlDes = $_.DesintationURL
    $urlSrc = $_.SourceURL
    # file where a migration log entry will be added against each run of this script
    $itemErrorFolderPath = $folderPath + 'SG-Migration-Log-Webs.csv'
    # migration settings used by ShareGate commands
    $copysettings = New-CopySettings -OnContentItemExists IncrementalUpdate -VersionOrModerationComment "updated while migration to Office 365"
    #
    $pwdDest = ConvertTo-SecureString "your-user-password" -AsPlainText -Force
    $siteDest = Connect-Site -Url $urlDes -Username your-user-name -Password $pwdDest
    $listsToCopy = @()
    $siteSrc = Connect-Site -Url $urlSrc
    $listsInSrc = Get-List -Site $siteSrc
    foreach ($list in $listsInSrc)
    {
        if ($list.Title -ne "Content and Structure Reports" -and
            $list.Title -ne "Form Templates" -and
            $list.Title -ne "Master Page Gallery" -and
            $list.Title -ne "Web Part Gallery" -and
            $list.Title -ne "Pages" -and
            $list.Title -ne "Style Library" -and
            $list.Title -ne "Workflow Tasks"){
            $listsToCopy += $list
        }
    }
    # building a log entry with details for the migration run
    $date = Get-Date -format d
    $rowLog = '"' + $siteSrc.Address + '","' + $siteSrc.Title + '","' + $listsInSrc.Count + '","' + $siteDest.Address + '","' + $siteDest.Title + '","' + $listsToCopy.Count + '","' + $date + '"'
    $rowLog | Out-File $itemErrorFolderPath -Append -Encoding ascii
    Write-Host Copying $listsToCopy.Count out of $listsInSrc.Count lists and libraries to '('$siteDest.Address')'
    #Write-Host $rowLog
    $itemLogFolderPath = $folderPath + $siteSrc.Title + '.csv'
    #Write-Host $itemLogFolderPath
    $result = Copy-List -List $listsToCopy -DestinationSite $siteDest -CopySettings $copysettings -InsaneMode -NoCustomPermissions -NoWorkflows
    Export-Report $result -Path $itemLogFolderPath
}
function Start-Migration($batchFileName)
{
    $filePath = $folderPathBatches + $batchFileName
    Import-Csv $filePath | Migrate-Content
}
Start-Migration -batchFileName "MG-Batch.csv"
[/code]

Conclusion

In this blog post, we used ShareGate’s silent mode, which lets us run multiple migration jobs in parallel from different workstations (depending on your ShareGate licensing).
For a complete list of ShareGate PowerShell commands, you can refer to the list at URL.
I hope you have found this post useful for creating a site structure and migrating contents (lists/libraries) to their new home inside SharePoint Online.

Export/Import SharePoint Designer Workflows using PowerShell

Background

Your organisation is using SharePoint as a collaboration and productivity platform, and your business units have built their solutions on it. As part of these solutions, IT teams have built workflows using SharePoint Designer to cater for business needs. One of the challenges developers and IT professionals face is how to export and import these workflows to other SharePoint sites when you have multiple environments like Dev, Test, UAT and Prod.
In today’s blog post, we will use PowerShell to export and import workflows across different site collections/tenants.

Solution

PowerShell comes to the rescue when dealing with such problems; the scripts listed below will help you with that task. They also use commands from PnP-PowerShell.
First, you need to export the workflow definition that you built using SharePoint Designer in your existing site collection. The export script relies on parameters like URL, Application Client ID, Application Client Secret and Workflow name to connect to SharePoint, extract the workflow definition and write it to a XAML file.

Export.ps1

[code language="powershell"]
[CmdletBinding(SupportsShouldProcess = $true)]
param (
    $Url,
    $ClientId,
    $ClientSecret,
    $WFDefinitionName
)
# include the common script
. "$PSScriptRoot\Common.ps1"
$FilePath = "$PSScriptRoot\WorkFlowDefinition\$($WFDefinitionName).xaml"
Connect-PnPOnline -Url $Url -AppId $ClientId -AppSecret $ClientSecret
Write-Host " "
Write-Host "Connected successfully to: $($Url)" -ForegroundColor Yellow
Get-PnPWorkflowDefinition -Name $WFDefinitionName
Save-WorkflowDefinition (Get-PnPContext) $WFDefinitionName $FilePath
[/code]
Once you have successfully exported the workflow definition, you need to import it into the destination site. The import script relies on parameters like URL, Application Client ID, Application Client Secret and Workflow name to connect to SharePoint, import the workflow definition from the file and then publish it to the SharePoint site.

Import.ps1

[code language="powershell"]
[CmdletBinding(SupportsShouldProcess = $true)]
param (
    $Url,
    $ClientId,
    $ClientSecret,
    $WFDefinitionName
)
# include the common script
. "$PSScriptRoot\Common.ps1"
$FilePath = "$PSScriptRoot\WorkFlowDefinition\$($WFDefinitionName).xaml"
Connect-PnPOnline -Url $Url -AppId $ClientId -AppSecret $ClientSecret
Write-Host " "
Write-Host "Connected successfully to: $($Url)" -ForegroundColor Yellow
$ctx = Get-PnPContext
$parentContentTypeId = $null
$wf = Get-PnPWorkflowDefinition -Name $WFDefinitionName
if ($wf -eq $null)
{
    # Load workflow definition
    Publish-WorkflowDefinition (Get-PnPContext) $FilePath $WFDefinitionName
    Add-PnPWorkflowSubscription -Name $WFDefinitionName -DefinitionName $WFDefinitionName -List "YourSharePointListName" -HistoryListName "Workflow History" -TaskListName "Workflow Tasks" -StartOnCreated
}
else
{
    Write-Host "Workflow definition $($WFDefinitionName) already exists.." -ForegroundColor Yellow
}
[/code]
The above export and import scripts rely on some PowerShell functions that are referenced in Common.ps1 and are listed below:

Common.ps1

[code language="powershell"]
function Save-WorkflowDefinition($ctx, $wfName, $filePath) {
    $web = $ctx.Web
    $wfm = New-Object Microsoft.SharePoint.Client.WorkflowServices.WorkflowServicesManager -ArgumentList $ctx, $web
    $wfds = $wfm.GetWorkflowDeploymentService()
    $wfdefs = $wfds.EnumerateDefinitions($false)
    $ctx.Load($wfdefs);
    $ctx.ExecuteQuery();
    $wdef = $wfdefs | Where-Object {$_.DisplayName -eq $wfName}
    if (!$wdef) {
        Write-Error "Could not find Workflow definition to Export";
        return;
    }
    $wdef | Export-Clixml $filePath
    Write-Host " "
    Write-Host "Workflow definition '$($wfName)' exported successfully.." -ForegroundColor Green
}
function Publish-WorkflowDefinition($ctx, $filePath, $wfName) {
    $ctx = Get-PnPContext
    $web = $ctx.Web
    Write-Host " "
    Write-Host "Loading Workflow definition: $($filePath)"
    $wfLoadedDefinition = Import-Clixml $filePath
    $wfDefinition = Get-PnPWorkflowDefinition -Name $wfName
    if ($wfDefinition) {
        Write-Host "Updating existing workflow definition for $($wfName)"
    }
    else {
        Write-Host "Creating new workflow definition: $($wfName)"
        $wfDefinition = New-Object Microsoft.SharePoint.Client.WorkflowServices.WorkflowDefinition -ArgumentList $ctx
    }
    $xamlActivity = $wfLoadedDefinition.Xaml;
    $wfDefinition.DisplayName = $wfLoadedDefinition.DisplayName;
    $wfDefinition.Description = $wfLoadedDefinition.Description;
    $wfDefinition.Xaml = $xamlActivity;
    $wfDefinition.FormField = $wfLoadedDefinition.FormField;
    $wfDefinition.SetProperty("ContentTypeId", $wfLoadedDefinition.Properties["ContentTypeId"]);
    $wfDefinition.SetProperty("FormField", $wfLoadedDefinition.Properties["FormField"]);
    $wfDefinition.RequiresAssociationForm = $wfLoadedDefinition.RequiresAssociationForm;
    $wfDefinition.RequiresInitiationForm = $wfLoadedDefinition.RequiresInitiationForm;
    $wfDefinition.RestrictToType = $wfLoadedDefinition.RestrictToType;
    $wfm = New-Object Microsoft.SharePoint.Client.WorkflowServices.WorkflowServicesManager -ArgumentList $ctx, $web
    $wfDeploymentService = $wfm.GetWorkflowDeploymentService()
    $definitionId = $wfDeploymentService.SaveDefinition($wfDefinition)
    $ctx.Load($wfDefinition)
    $ctx.ExecuteQuery()
    Write-Host "Workflow definition saved successfully for workflow: $($wfName)"
    $wfDeploymentService.PublishDefinition($definitionId.Value)
    $ctx.Load($wfDefinition)
    $ctx.ExecuteQuery()
    Write-Host " "
    Write-Host "Workflow definition '$($wfName)' published successfully.." -ForegroundColor Green
}
[/code]
I hope the above set of scripts will help some of you to move SPD workflows around and deploy them across multiple development, test and/or UAT sites.

Adding Bot to Microsoft Teams

If you have been following my previous blog posts about Bots and integrating LUIS with them, you are almost done building bots and have already had some fun with them. Now it’s time to bring them to life and let internal or external users interact with the Bot via a front-end channel accessible to them. If you haven’t read my previous posts on the subject yet, please give them a read at Creating a Bot and Creating a LUIS app before reading further.
In this blog post, we will integrate our previously created intelligent Bot into a Microsoft Teams channel. Following this step-by-step process, you can add your bot to an MS Teams channel.

Bringing Bot to Life

  1. As a first step, you need to create a package as outlined here and build a manifest as per the schema listed here. This will include your Bot logos and a manifest file as shown below.
  2. Once the manifest file is created, you need to zip it along with the logos, as shown above, to make a package (*.zip)
  3. Open the Microsoft Teams interface, select the particular team you want to add the Bot to and go to the Manage team section as highlighted below.
  4. Click on the Bots tab, then select Sideload a bot as highlighted and upload your previously created zip file
  5. Once successful, it will show the bot that you have just added to your selected team as shown below.
  6. If everything went well, your Bot is now ready and available in the team’s conversation window to interact with. While addressing the Bot, you need to start your message with @BotName to direct messages to the Bot as shown below.
  7. Based on the configuration you have done as part of the manifest file, your command list will be available against your Bot name.
  8. Now you can ask your Bot the questions that you have trained your LUIS app with, and it will respond as programmed.
  9. You just need to ensure your Bot is programmed to respond to the possible questions your end users can ask it.
  10. You can program a bot to acknowledge the user first and then respond in detail to the user’s question. If the response contains multiple records, you can represent it using cards as shown below.
  11. Or, if a response requires some additional actions, you can have a link or a button to launch a URL directly from your team conversation.
  12. Besides adding a Bot to teams, you can add tabs to a team as well, which can show any SPA (single page application) or even a dashboard built as per your needs. Below is just an example of what can be achieved using tabs inside MS Teams.

As MS Teams evolves as group chat software, it can be leveraged to build useful integrations as a front face to many of an organisation’s needs, with Bots being one example.

Using a Bot Framework to build LUIS enabled Bots

History

In this post, we are going to build a bot using the Microsoft Bot Framework and add intelligence to it to extract meaning from conversations with users, utilising the Microsoft cognitive service named LUIS. The last post discussed LUIS in detail, so give it a read before you continue. This post assumes a basic understanding of Language Understanding Intelligent Service (LUIS) and the Bot Framework; further details can be read at LUIS and Bot Framework.

Pre-requisites

You need to download a few items to start your bot development; please get all of them before you jump to the next section.

  • Bot template is available at URL (this will help you in scaffolding your solution)
  • Bot SDK is available at NuGet (this is mandatory to build a Bot)
  • Bot emulator is available at GitHub (this helps you in testing your bot during development)

Building a Bot

  1. Create an empty solution in your Visual Studio and add a Bot template project as an existing solution.
  2. Your solution directory should look like the one below:
  3. Replace parameters $safeprojectname$ and $guid1$ with some meaningful name for your project and set a unique GUID
  4. Next step is to restore and update NuGet packages and ensure all dependencies are resolved.
  5. Run the application from Visual Studio and you should see bot application up and running
  6. Now open Bot emulator and connect to your Bot as follows:
  7. Once connected, you can send a test text message to see if Bot is responding
  8. At this point, your bot is up and running and in this step you will add Luis dialogue to it. Add a new class named RootLuisDialog under Dialogs folder and add methods as shown below against each intent that you have defined under your LUIS app. Ensure you have your LUIS app id and a key to decorate your class as shown below:
  9. Let’s implement a basic response from LUIS against intent ‘boot’ as shown in the code below.
  10. Open up an emulator, and try to use any utterance we have trained our LUIS application with. A sample bot response should be received as we have implemented in the code above. LUIS will identify intent ‘boot’ from a user message as shown below.
  11. And now we will be implementing a bit advanced response from LUIS against our intent ‘status’ as shown in the code below.
  12. And now you can send a bit complex message to your bot and it will send a message to LUIS to extract entity and intent from the utterance and respond to the user accordingly as per your implementation.

And the list of intent implementations goes on; you can customise the behaviour as per your needs, as your LUIS framework is ready to rock and roll within your bot, and users can take advantage of it to issue specific commands or inquire about entities using your Bot. Happy Botting 🙂

How LUIS can help BOTs in understanding natural language

Since bots are evolving, you need a mechanism to better understand what the user wants from his/her language and take action or respond to user queries appropriately. In these days of increasing automation, bots can certainly help, provided they are backed by tools that understand user language both naturally and contextually.
Azure Cognitive Services has an API that can help identify what a user wants and extract concepts and entities from a sentence (user input), using an intelligent service named Language Understanding Intelligent Service (LUIS). It can process natural language using custom-trained language models and can incorporate the concept of Active Learning based on how it was trained.
In this blog post, we will be building a LUIS app that can be utilised in a Bot or any other client application to respond to the user in a more meaningful way.

Create a LUIS app

  1. Go to https://www.luis.ai/ and sign up.
  2. You need to create a LUIS app by clicking ‘New App’ – this is the app you will be using in Bot Framework
  3. Fill out a form and give your app a unique name
  4. Your app will be created, and you can see details as below (page will be redirected to Overview)
  5. You need to create entities to identify concepts; they are a very important part of utterances (input from a user). Let’s create a few simple entities using the form below
  6. You can also reuse pre-built entities like email, URL, date etc.
  7. The next step is to build intents, which represent a task or an action from an utterance (input from a user). By default, you will have None, which is for utterances irrelevant to your LUIS app.
  8. Once you have defined the series of intents, you need to add possible utterances against each intent, which forms the basis of Active Learning. Make sure to include varied terminology and different phrases to help LUIS identify them. You can build a Phrase list to include words that must be treated similarly, like company names or phone models etc.
  9. As you write utterances, you need to identify or tag entities, like we selected $service-request in the utterance. Remember: you are identifying possible phrases to help LUIS extract intents and entities from utterances.
  10. The next step is to train your LUIS app to help it identify entities and intents from utterances. Ensure you click Train Application when you are done with enough training (you can also do such training on a per-entity or per-intent basis)
  11. You can repeat step 10 as many times as you like to ensure the LUIS app is trained well enough on your language model.
  12. Publish the app once you have identified all possible entities, intents and utterances, and have trained LUIS well to extract them from user input.
  13. Keep a note of the Programmatic API key from the MyKey section and the Application ID from the Settings menu of your LUIS app; you will need these two keys when integrating LUIS with your client application.

Now you are ready to go ahead and use your LUIS app in your Bot or any other client application to process natural language in a meaningful manner – Cheers!

Integrating Yammer data within SharePoint web-part using REST API

Background

We were developing a SharePoint application for one of our clients and had some web-parts that had to retrieve data from Yammer. As we were developing on SharePoint Online (SPO) using the popular SharePoint Framework (SPFx), for most of our engagement we were developing with a client-side library named React to deliver what was required of us.
In order to integrate the client’s Yammer data into our web-parts, we used the JavaScript SDK provided by Yammer.

Scenario

We had around 7-8 different calls to the Yammer API in different web-parts to extract data from Yammer on behalf of a logged-in user. For each API call, the user has to be authenticated before the call to the Yammer API is made, and this has to be done without the user being redirected to Yammer for login or presented with a popup or a button to log in first.
If you follow Yammer’s JavaScript SDK instructions, we would not meet our client’s requirement of not sending the user to Yammer first (as this would change their user flow) or presenting a pop-up with a login/sign-in dialog.

Approach

After looking on the internet to fulfil the above requirements, I could not find anything that served us. The closest match I found was a PnP sample, but it only works if the user has already consented to your Yammer app before. In our case, this isn’t possible, as many users will be accessing the SharePoint home page for the first time and have never accessed Yammer before.
What we did was split our API login calls into two groups. One of the calls was randomly chosen to let the user log in to Yammer, get an access token in the background and cache it with the Yammer API, while the other API login calls wait for that first login and then use the Yammer API to log in.

Step-1

This function uses the standard Yammer API to check the login status; if successful, it proceeds with issuing the API data-retrieval calls, but if the first login check fails, it waits and checks again every 2 seconds until it times out after 30 seconds.
[code language="javascript" highlight="5,10,13,19"]
public static loginToYammer(callback: Function, requestLogin = true) {
    SPComponentLoader.loadScript('https://assets.yammer.com/assets/platform_js_sdk.js', { globalExportsName: "yam"}).then(() => {
        const yam = window["yam"];
        yam.getLoginStatus((FirstloginStatusResponse) => {
            if (FirstloginStatusResponse.authResponse) {
                callback(yam);
            }
            else {
                let timerId = setInterval(()=>{
                    yam.getLoginStatus((SecondloginStatusResponse) => {
                        if (SecondloginStatusResponse.authResponse) {
                            clearInterval(timerId);
                            callback(yam);
                        }
                    });
                }, 2000);
                setTimeout(() => {
                    yam.getLoginStatus((TimeOutloginStatusResponse) => {
                        if (TimeOutloginStatusResponse.authResponse) {
                            clearInterval(timerId);
                        }
                        else {
                            console.error("iFrame – user could not log in to Yammer even after waiting");
                        }
                    });
                }, 30000);
            }
        });
    });
}
[/code]

Step-2

This method again uses the standard Yammer API to check the login status, then tries to log the user in in the background using an iframe approach as called out in the PnP sample; if that approach doesn’t work either, it redirects the user to a Smart URL in the same window to get user consent for the Yammer app, with the redirect URI set to the home page of your SharePoint site where the web-parts using the Yammer API are hosted.
[code language="javascript" highlight="5,10,15"]
public static logonToYammer(callback: Function, requestLogin = true) {
    SPComponentLoader.loadScript('https://assets.yammer.com/assets/platform_js_sdk.js', { globalExportsName: "yam"}).then(() => {
        const yam = window["yam"];
        yam.getLoginStatus((loginStatusResponse) => {
            if (loginStatusResponse.authResponse) {
                callback(yam);
            }
            else if (requestLogin) {
                this._iframeAuthentication()
                    .then((res) => {
                        callback(yam);
                    })
                    .catch((e) => {
                        window.location.href="https://www.yammer.com/[your-yammer-network-name]/oauth2/authorize?client_id=[your-yammer-app-client-id]&response_type=token&redirect_uri=[your-sharepoint-home-page-url]";
                        console.error("iFrame – user could not log in to Yammer due to error. " + e);
                    });
            } else {
                console.error("iFrame – it was not called and user could not log in to Yammer");
            }
        });
    });
}
[/code]
The function _iframeAuthentication is copied from the PnP sample with some modifications to fit the client requirements we were developing against.
[code language="javascript" highlight="13,19,36"]
private static _iframeAuthentication(): Promise<any> {
    let yam = window["yam"];
    let clientId: string = "[your-yammer-app-client-id]";
    let redirectUri: string = "[your-sharepoint-home-page-url]";
    let domainName: string = "[your-yammer-network-name]";
    return new Promise((resolve, reject) => {
        let iframeId: string = "authIframe";
        let element: HTMLIFrameElement = document.createElement("iframe");
        element.setAttribute("id", iframeId);
        element.setAttribute("style", "display:none");
        document.body.appendChild(element);
        element.addEventListener("load", _ => {
            try {
                let elem: HTMLIFrameElement = document.getElementById(iframeId) as HTMLIFrameElement;
                let token: string = elem.contentWindow.location.hash.split("=")[1];
                yam.platform.setAuthToken(token);
                yam.getLoginStatus((res: any) => {
                    if (res.authResponse) {
                        resolve(res);
                    } else {
                        reject(res);
                    }
                });
            } catch (ex) {
                reject(ex);
            }
        });
        let queryString: string = `client_id=${clientId}&response_type=token&redirect_uri=${redirectUri}`;
        let url: string = `https://www.yammer.com/${domainName}/oauth2/authorize?${queryString}`;
        element.src = url;
    });
}
[/code]

Conclusion

This resulted in authenticating the Office 365 tenant user within the same SharePoint home page window with the help of an iframe [case: the user had consented to the Yammer app before], or getting Yammer app consent from the Office 365 tenant user without being redirected to Yammer for OAuth-based authentication [case: the user is accessing the Yammer-integrated web-parts for the first time].
We hope future releases of the Yammer API will cater for seamless integration among O365 products without having to go through this hassle to get access tokens in the way described in this post.

Sharing a report using a Power BI app

History

You have created reports and built dashboards in Power BI Desktop to surface your data from multiple data sources; now it is time to share those dashboards with a wider audience in your organisation, and you are wondering how to do it. The Power BI service has a powerful feature, Power BI apps, to cater for such scenarios.
If you have not yet created reports or set up a gateway for leveraging your on-premises data, please follow my earlier posts Setup a Power BI Gateway and Create reports using a Power BI Gateway to do so.

Approach

Sharing and collaborating in the Power BI service is a three-step process; each step is explained in this blog post. At a high level, the tasks are as follows:

  1. Creation of an App Workspace
  2. Publishing reports to an App Workspace
  3. Publishing a Power BI App

A typical usage scenario for a Power BI apps in Office 365 services is depicted below:

1) Create an App Workspace

An App Workspace is a new concept in Power BI with which you can collaborate on datasets, reports and dashboards (authored by members) and build/package Power BI apps to be distributed to your wider audience.

  1. Log in to your Power BI service https://app.powerbi.com and click on your Workspace list menu on the left

    If this is your first-time login, you need to create a new app workspace. (it’s just a new name for group workspaces)

  2. A form needs to be filled in inside your Office 365 Power BI service to create it, and a unique name is required for each app workspace
  3. Whilst creating the workspace, you need to set the privacy which can’t be changed later – so please decide carefully.
  4. And you need to set Permission levels for your workspace accordingly, please only add members who can edit content as viewers can be added later during publishing your Power BI app.

  5. The next step is to add users to it and set admins for the workspace (the default role is Member; change it to Owner for users you intend to give administrator permissions). Note: you can only add individual users to this list; security group and modern group support is not yet available at the time of writing this post.
  6. Upon reaching this step, your app workspace has been created successfully and is ready for use.

2) Publishing Reports to an App Workspace

The Power BI app workspace is a collaboration tool; any member can create a model using Power BI Desktop and then publish it to a workspace so that members can take advantage of existing datasets, reports and dashboards. Follow the steps listed below to share your model in an app workspace.

  1. Open your Power BI desktop file (*.pbix) you have created earlier and hit the Publish button
  2. Select the app workspace you want to publish your reports to, and Power BI Desktop will start publishing reports to your Office 365 Power BI service
  3. Subsequent publishing to the same app workspace will remind you if your dataset already exists.
  4. Depending on the size of your data and your internet speed, it may take some time to publish reports to the Power BI service. Sooner or later you will receive a success message
  5. Upon reaching this step your reports, datasets and dashboards are published and available in your Power BI service.

3) Publishing Power BI App

  1. Login into your Power BI service and go to your app workspaces list and select your newly created workspace from the list
  2. On the right top, you will see a button to publish an app
  3. Provide a description for the app in the ‘Details’ tab, as your Power BI app will get the same name as your app workspace
  4. In the next ‘Content’ tab, you will see a list of all contents within the app workspace that will be published within this app. In this step, you can set the landing page of the Power BI app which users will see when they click on your Power BI app. I have selected a specific dashboard to be shown
  5. You will then need to set the audience for your app in the ‘Access’ tab; it can be either the whole organisation or a combination of users or groups. On the top right corner, it will show you how many artefacts will be published within this Power BI app.
  6. Once you publish it, Power BI service will advise you the URL of your app as shown below:

AppSource and Power BI

Power BI users intending to use apps shared by other users or organisations must get the apps first in order to use the dashboards and reports in them.

  1. You need to go to ‘Apps’ menu in Power BI service (in the left menu)
  2. Selecting Apps from the menu will list the apps you are subscribed to; if you are using it for the first time the list is usually empty, and you need to click ‘Get apps’ to get Power BI apps from the AppSource store
  3. You can then select which apps you want to subscribe to from the list, they are listed by category

Behind the Scenes

The moment a user creates an app workspace, an Office 365 group is created in the background with the same name as the app workspace, and the users are maintained as the Office 365 group’s users.

  • Admins of the workspace will become Owners of the group
  • Members of the workspace will become Members of the group


And a SharePoint site will be created as well with same members as of Power BI app workspace and Office 365 group.

You can see the details of users (admins/members) by checking ‘Site permissions’ menu under site settings


Create reports using a Power BI Gateway

Background

Once you have a Power BI gateway set up to ensure data flows from your on-premises data sources to the Power BI service in the cloud, the next step is to create reports using Power BI Desktop, building them with data from multiple on-premises data sources.
Note: If you don’t have a gateway set up already, please follow my earlier post to set it up before you continue reading this post.

Scenario

All on-premises data is stored in SQL Server instances and spread across a few data warehouses and multiple databases built and managed by your internal IT teams.
Before building reports, you need to ensure following key points:

  1. Each data source should have connectivity to your gateway with minimal latency; this must be ensured.
  2. Every data source intended to be used within reports needs to be configured within a gateway in the Power BI service
  3. A list of people who can publish reports using each data source needs to be configured against that data source

An interaction between on-premises data sources and cloud services is depicted below:

Pre-requisites

Before you build reports, you need to set up on-premises data sources in the gateway, to ensure the Power BI service knows which data sources the gateway administrator has allowed data to be pulled from.
Log in to https://app.powerbi.com with Power BI service administrator credentials.

  1. Click on Manage gateways to modify settings
  2. You will see a screen with the gateway options that you set up earlier while configuring the gateway on-premises
  3. The next step is to set up gateway administrators, who will have permission to set up on-premises data sources as and when required
  4. After gateway configuration, you need to add data sources one by one so published reports can use on-premises data sources (pre-configured within the gateway)
  5. You need to set up users against each data source within the gateway who can use that data source to pull data from on-premises sources within their published reports
  6. Repeat the above steps for each of your on-premises data sources, selecting the appropriate data source type and allowing the users who can use them while building reports

Reports

Upon reaching this step, you are all good to create reports.

  1. Open Power BI desktop
  2. Select sources you want to retrieve data from
  3. Just ensure that while creating reports, the data source details are the same as what was configured in the Power BI service while you were setting up data sources.
  4. Great! Once you publish reports to your Power BI service, your gateway will be able to connect to the relevant on-premises data sources if you have followed the steps above.

 
