Unfortunately (as with most things auth-related), there are some gotchas to be aware of. One relates to how ADAL obtains refresh tokens in this crazy world of implicit auth.
Implicit Auth Flow
Implicit auth spares the application developer from having to host their own token authentication service. ADAL.js and the Azure AD auth endpoint do all the heavy lifting:
It’s the bottom third of the diagram (after the token expires) that causes the issue I’m addressing in this post. This is where ADAL.js creates a hidden iframe that sends a request to get a fresh token. This will show up in the DOM (if you inspect it in the browser dev tools) as an iframe element with an ID of “adalRenewFrame” followed by the endpoint the token is being renewed for (in the below example this is https://graph.microsoft.com).
What’s the problem?
What’s the solution?
The ADAL.js team recommends a couple of different approaches to getting around this issue on their FAQ page on GitHub. The simpler solution is to control how your app is bootstrapped so that it only loads if window === window.parent (i.e. it isn’t in an iframe), which is fine if you have this kind of control over how your app starts (as with AngularJS or React). But this won’t always suit.
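As a minimal sketch of that bootstrap guard (the helper name is mine, not from ADAL.js):

```javascript
// Only bootstrap the SPA in the top-level window, never inside ADAL's
// hidden token-renewal iframe. Helper name is illustrative.
function isTokenRenewalFrame(win) {
  // In ADAL's renewal iframe, window.parent is the top-level app window,
  // so the two references differ.
  return win !== win.parent;
}

// In the browser (guarded so this sketch also runs outside one):
if (typeof window !== 'undefined' && !isTokenRenewalFrame(window)) {
  // e.g. angular.bootstrap(document, ['myApp']);
}
```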
The other option is to have an alternative redirect page that is targeted after the iframe renews the token (by specifying it in the ADAL config with the redirectUri property). N.B. you have to specify the exact url in the Azure AD app settings for your app as well.
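The config change itself is just one property. Something like this (the client ID and URLs here are placeholders, not values from this post):

```javascript
// Illustrative ADAL.js config: send the silent token-renewal iframe to a
// lightweight redirect page rather than reloading the whole SPA.
var adalConfig = {
  clientId: '11111111-2222-3333-4444-555555555555', // placeholder
  cacheLocation: 'localStorage',
  // The alternative redirect page (must also be registered verbatim as a
  // reply URL in the Azure AD app settings):
  redirectUri: 'https://contoso.sharepoint.com/frameRedirect.html',
  // Resources we want tokens for:
  endpoints: {
    'https://graph.microsoft.com': 'https://graph.microsoft.com'
  }
};
```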
Suffice it to say, just pointing to an empty page doesn’t do the trick and there is a bunch of hassle with getting everything working (see the comments on this gist for the full adventure), but to cut a long story short – here’s what worked for me.
The redirect page itself redirects back to our SPA (in this case, the root of the web app) only if window === window.parent (i.e. it isn’t in an iframe), passing the token along in window.location.hash as well. See the below example.
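In sketch form (paths and names are illustrative), the redirect page’s only script checks whether it is the top-level window and, if so, bounces back to the SPA root with the token fragment intact so ADAL.js can pick it up:

```javascript
// Script on the alternate redirect page (e.g. frameRedirect.html).
function buildAppRedirect(appRoot, hash) {
  // Carry the #id_token=... / #access_token=... fragment back to the SPA
  // so ADAL.js can process it there.
  return appRoot + (hash || '');
}

if (typeof window !== 'undefined' && window === window.parent) {
  // Real (top-level) redirect: go back to the SPA root with the token.
  window.location.href = buildAppRedirect('/', window.location.hash);
}
// If we ARE in the hidden renewal iframe, do nothing: ADAL reads the
// renewed token straight out of the iframe's URL hash.
```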
Hope this saves you some time/grief with getting this all working. I haven’t seen a satisfactory write up of a solution to this issue before (hence this post).
Any questions, suggested improvements, please let me know in the comments!
Till next time…
If you’re a technical consultant working with cloud services like Office 365 or Azure on behalf of various clients, you have to deal with many different logins and passwords for the same URLs. This is painful, as your default browser instance doesn’t handle multiple accounts and you generally have to resort to InPrivate (IE) or Incognito (Chrome) modes which mean a lot of copying and pasting of usernames and passwords to do your job. If this is how you operate today: stop. There is an easier way.
Two tools for seamless logins
OK, the first one is technically a feature. The most important part of removing the login bottleneck is Chrome Profiles. This essential feature of Chrome lets you maintain completely separate profiles for Chrome, including saved passwords, browser cache, bookmarks, plugins, etc. Fantastic.
Set one up for each customer that you have a dedicated account for. Once you log in once, the credentials will be cached and you’ll be able to pass through seamlessly.
This is obviously a great improvement, but only half of the puzzle. It’s when Profiles are combined with another tool that the magic happens…
SlickRun your Chrome sessions
If you haven’t heard of the venerable SlickRun (which must be pushing 20 years if it’s a day) – download it right now. It gives you the godlike power of launching any application or browsing to any URL nearly instantaneously. Just hit ALT-Q and type the “magic word” (which autocompletes nicely) that corresponds to the command you want to execute and Bob’s your Mother’s Brother! I tend to hide the SlickRun prompt by default, so it only shows up when I use the global ALT-Q hotkey.
First we have to set up our magic word. If you simply put a URL into the ‘Filename or URL’ box, SlickRun will open it using your default browser. We don’t want that. Instead, put ‘chrome.exe’ in the box and use the ‘--profile-directory’ command-line switch to target the profile you want, followed by the URL to browse to. N.B. You don’t seem to be able to reference the profiles by name. Instead you have to put “Profile n” (where n is the number of the profile in the order you created it).
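For example, the ‘Filename or URL’ entry for your second-created profile might look like this (the profile number and URL are just examples):

```shell
# SlickRun 'Filename or URL' entry (one line): launch Chrome under a
# specific profile, straight to the client's portal.
chrome.exe --profile-directory="Profile 2" https://portal.azure.com
```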
That’s all there is to it. Once you’ve set up your magic words for the key web apps you need to access for each client (I go with a naming convention of ‘client-appname’ and extend that further if I have multiple test accounts I need to log in as, etc), you can get to any of them in seconds, usually as seamlessly as single sign-on would provide.
This is hands-down my favourite productivity trick, and yet I’ve never seen anyone else do it, or seen a better solution to the multiple-logins problem. Hence this post! Hope you find it as awesome a shortcut as I do…
Till next time!
I love Azure Functions. So much power for so little effort or cost. The only downside is that the consumption model that keeps the cost so dirt-cheap means that unless you are using your Function constantly (in which case, you might be better off with the non-consumption options anyway), you will often be hit with a long delay as your Function wakes up from hibernation.
So very cold…
This isn’t a big deal if you are dealing with a fire-and-forget queue trigger scenario, but if you have a web app that is calling the HTTP trigger and you need to wait for the Function to do its job before responding with a 200 OK… that’s a long wait (well over 15 seconds in my experience with a PowerShell function that loads a bunch of modules).
Now, the blunt way to mitigate this (as suggested by some in GitHub issues on the subject) is to set up a timer function in the same Function App to run every 5 minutes to keep things warm. That seems like a wasteful and potentially expensive approach to me. For my use-case, there was a better way.
The Classic CRUD Use-case
Here’s my use case: I’m building some custom SharePoint forms for a customer and using my preferred JS framework, good old Angular 1.x. Don’t believe the hype around the newer frameworks – ng1 still gets the job done with no performance problems at the scale I’m dealing with in SharePoint. It also comes with a very strong ecosystem of libraries to support doing amazing things in the browser. But that’s a topic for another blog.
Anyway, the only thing I couldn’t do effectively on the client side was break permissions on the list item created via the form and secure it to the creator and some other users (e.g. their manager). You need elevated permissions for that. I called on the awesome power of the PnP PowerShell library (specifically the Set-PnPListItemPermission cmdlet) to do this and wrapped it in a PowerShell Azure Function:
Pretty simple. Nice and clean – gets the job done. My Angular service calls this right after saving the item based on the user’s form input. If the Azure Function is starting from cold, that adds an extra 20 seconds to the save operation. Unacceptable and avoidable.
A more nuanced warmup for those who can…
This seemed like such an obvious solution once I hit on it – but I hadn’t thought of it before. When the form is first opened, it’s pretty reasonable to assume that (unless the form is a monster), the user should be hitting that submit/save button in under 5 mins. So that’s when we hit our function with a modified HTTP payload of ‘WARMUP’.
Just a simple ‘if’ statement to bypass the function body if the ‘WARMUP’ payload is detected. The Function immediately responds with a 200. We ignore that – this is effectively fire-and-forget. Yes, this would be even simpler if we had a separate warmup Function that did absolutely nothing except warm up the Function App, but I like that this ensures my dependent PnP dlls (in the ‘modules’ folder of my Function) have been fired up on the current instance before I hit the function for real. Maybe it makes no difference. Don’t really care – this is simple enough anyway.
Here’s the Angular code that calls the Function (both as a warmup and for real):
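A stripped-down sketch of that idea (the service and function names here are mine; the ‘WARMUP’ sentinel is the real payload from this post). The factory is written as a plain function so the Angular registration is just a one-liner:

```javascript
// Plain factory so the warmup/save logic is easy to test; $http is the
// Angular 1.x service, functionUrl is your Function's HTTP trigger URL.
function createPermissionService($http, functionUrl) {
  return {
    // Called when the form first loads; fire-and-forget, response ignored.
    warmup: function () {
      $http.post(functionUrl, 'WARMUP');
    },
    // Called right after the list item is saved; returns the promise so
    // the caller can wait for the permissions to be set.
    secureItem: function (itemId) {
      return $http.post(functionUrl, { itemId: itemId });
    }
  };
}

// Angular 1.x registration (illustrative):
// angular.module('myApp').factory('permissionService', ['$http',
//   function ($http) { return createPermissionService($http, FUNC_URL); }]);
```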
Anyway, nothing revolutionary here, I know. But I hadn’t come across this approach before, so I thought it was worth writing up as it suits this standard CRUD forms over data scenario so nicely.
Till next time!
I’ve been trying to come up with a good general-purpose tech stack for building web apps for small, local community user-bases. There are lots of great “share-economy-style” collaborative use cases at this level of organisation and I want a vehicle to test out ideas, to see what works and what doesn’t in my local area, from a social standpoint.
I obviously want to make what I develop freely available in the hope that others might contribute ideas or code, and I want people to be able to trust that it does what it says on the tin, so everything needs to go up on GitHub.
I’m a big believer in the power of constraints to focus the design process and these were nice and clear for this endeavour in regards to the technology to be used.
Cost – needs to be as close to free to operate as possible
Usability – the barriers to someone using the apps built on the stack need to be as low as possible
Administrative simplicity – spinning up an instance of the stack needs to be as straightforward as possible
Openness – open-source tech is to be favoured over proprietary (where feasible)
Flexibility – room to extend into native mobile versions in future
Serverless Cloud Provider
Being a huge fan of the serverless paradigm and the opportunities it unlocks (not least from a cost perspective), I knew I would be leaning heavily on the serverless capabilities of one of the major cloud providers to build out my solution. Already being well versed in what Azure has to offer (and admittedly knowing precious little about the alternatives), the choice was easy.
Realtime comms on a shoe-string
Getting an effectively free static website up is laughably easy these days. Plenty of great services like now and glitch exist in addition to the big-name cloud providers, who all have free tiers. The catch for me was real-time communication (via WebSockets or similar), which was never covered on those platforms (or at least not to the scale I might need to support). The same goes for the many real-time “pub-sub-as-a-service” third-party offerings – their free tiers all stop at 100 simultaneous connections (probably way more than I need, but still a constraint that I could well bump up against at some point). Plus, as per the administrative-simplicity constraint above, I don’t particularly want people to have to sign up to multiple cloud providers to get this all working.
The solution: the distributed magic that is IPFS pubsub.
Interplanetary File System (IPFS)
I won’t go into any detail on what IPFS is here – the official website is as good an intro as you could want. Suffice it to say, it is a distributed storage platform for web content, including publish-subscribe realtime messaging capabilities (an experimental feature). You don’t have to pay to host an IPFS node, and through some voodoo that I have no idea about, you can even host one in a (decent, modern) web browser. So basically, we get ephemeral (don’t expect it to stick around in the IPFS network) realtime comms for free. N.B. In the next installment in this series of posts, I’ll be persisting the realtime messages to Azure Table Storage… stay tuned…
Reactive Web Framework – Cycle.js
I’m a relative newcomer to React (love React Native, less enamoured with the original web version), but having worked with RxJS on a large Angular project recently, I was keen to find a framework that does a better job of managing state (yes, React has Redux and various side-effect plugins, but they all seemed a little tacked-on) and harnesses the power of observables. I found one in Cycle.js.
Cycle has state management as its raison d’être. Or at least, the way it is structured relegates state to a by-product of how data flows through your application, as opposed to treating it as an object you need to explicitly maintain.
Cycle uses an abstraction called a “driver” to handle any external effects (incoming or outgoing) to your application. The primary ones are drivers for interacting with the virtual DOM and making HTTP requests, but there are many others, including myriad community efforts. I couldn’t find one for IPFS, so I created one to wrap the ipfs-pubsub-room helper library.
Here we are setting up the incoming (listening for new messages being broadcasted) and outgoing (broadcasting messages input by the user) observable streams. There are more API methods for ipfs-pubsub-room (e.g. for sending private messages to individual peers) but for this example, we’ll stick with the basics. A quick note on importing modules: I wasn’t able to get IPFS-JS to bundle cleanly on my Windows machine, so am just loading from a CDN in a script tag. Works just fine for the purposes of this demo.
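For reference, here is a boiled-down sketch of what such a driver can look like (the function name is mine; the room API – broadcast() and on('message', …) – is ipfs-pubsub-room’s, and the listener shape follows xstream’s conventions):

```javascript
// A Cycle.js-style driver wrapping an ipfs-pubsub-room instance.
// Outgoing stream -> room.broadcast; incoming room messages -> source stream.
function makeIpfsPubsubDriver(room) {
  return function ipfsPubsubDriver(outgoing$) {
    // Sink side: broadcast every message the app emits to the room.
    outgoing$.addListener({
      next: function (msg) { room.broadcast(msg); },
      error: function (err) { console.error(err); },
      complete: function () {}
    });
    // Source side: a minimal stream-like object the app can listen to.
    return {
      addListener: function (listener) {
        room.on('message', function (msg) { listener.next(msg); });
      }
    };
  };
}
```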
As alluded to before, this is just the first in a planned series of posts as I build out my little project that I’m calling Community AppStack for now. Please check back for the next installment (or follow me on Twitter @BruceAlbany) and let me know if you liked this one in the comments!
As an ex-Excel Developer, I tend to resolve any perceived inefficiency in dealing with tabular data by automating the snot out of it with Excel and VBA. As a SharePoint developer, there is no shortage of stuff that we need to provision into SharePoint to build business solutions, often repeatedly throughout the development cycle or between environments (Dev, Test, UAT, etc).
PowerShell was always going to be the automation tool of choice, and while PowerShell scripts are commonly fed with CSV files of flat tabular data, editing those has always been painful if you’ve got more than a handful of rows. Excel has always been a far better tool than a text editor for quickly editing tabular data, thanks to features like autofill, but that’s just the tip of the iceberg…
PowerShell Deployment Agent
So, I’ve been using Excel and VBA to deploy things for a long time (about 10 years at best guess). I’ve changed companies a handful of times over that period, and every time I do I create a new and better Excel model for generating stuff in whatever platform I was working with. Last year, I started to open source that model as it had reached a maturity level where I no longer want to start over – it’s pretty darned solid. The simplest model of this was called CSV-Exporter and it did what it said on the tin. I’ve extended it into what I now call my PowerShell Deployment Agent (PSDA), which doesn’t just export CSVs from Excel Tables, but also launches a PowerShell script of your choosing to streamline the deployment process.
By careful script design, Excel filtering, and of course some VBA, this allows for some very fine-grained control over exactly what you want to deploy. To the extent that you can just highlight a bunch of rows in a sheet and hit CTRL-SHIFT-D to deploy them to your target:
When / Why would you use it?
Excel’s always been a good way to store configuration data, but it really comes into its own as a launching pad for PowerShell when you have rows of data that need to be pumped through the same cmdlet or function.
To prove how easy it is to get started, we’ll go with the use-case that I’m usually working with: SharePoint.
So let’s open the workbook and pick something simple that we want to deploy. How about folders in a document library?
All we need to do once we’ve input our target site is:
Find the PnP PowerShell cmdlet we want (Add-PnPFolder in this case)
Click the ‘New Blank Sheet’ button
Select the name for our sheet (‘Folders’ in this case – a new .ps1 file with the same name will be created from a template in the ‘Functions’ subfolder under the path to your deployment script)
Copy and paste the cmdlet signature to let PSDA know what columns to map to the cmdlet parameters
Fill in our data
Double click the URL of the target site we want to deploy to
This is obviously the most basic scenario and there’s a good chance that you’ll want to customize both the auto-generated script and the table (with some of the advanced features below).
Speaking of… what other benefits of using Excel over raw CSVs are there? I’m glad you asked.
You obviously can’t have a calculated value in a CSV file, which means your PowerShell script is more complex than it needs to be, performing that calculation on each row at run-time. Excel is clearly the superior tool here – you can see whether your calculation is correct right there in the cell. PSDA Perk – formulas are exported as values when you are deploying against a target, but preserved when you export your data for versioning, etc.
You obviously can’t have comments in a CSV, but these are very useful in Excel, particularly in column headers to advise what data/format to put in that column. PSDA Perk – You can preserve column header comments when exporting from the workbook. They will be reinstated when re-importing.
Want to guard against data entry errors while keeping your script clean? Use Excel to prevent erroneous entries before deployment with data validation. You can restrict based on a formula, or a reference cell range (there’s a reference sheet in the workbook for that). PSDA Perk – Data validation rules will be exported and reinstated on re-import.
In addition to data validation, conditional formatting is a powerful way to show that some data is incorrect or missing under certain conditions. Obviously, from a data entry standpoint, we don’t need anything too fancy here – usually setting the font or background of the cell to red when a formula evaluates to false is all we need to prompt the user. PSDA Perk – Basic conditional formatting (as per the above) can be exported and reinstated on re-import.
A Note on Credentials
If you went to the trouble of downloading the workbook and poking it with a stick, you’ll note that there is a single column in the launch table on the control sheet for the credential to use for each target environment. Nowhere to put a password. Because you shouldn’t be storing passwords in Excel workbooks. Ever.
Instead, you should be using something like CredentialManager, which leverages the Windows Credential Manager (a safe place to store your admin passwords). Which means that you just refer to the label of the credential in WCM in the workbook. Nice and clean and means that anyone getting your workbook doesn’t have the keys to your environments listed.
If you are using the outstanding PnP PowerShell cmdlets for deploying to SharePoint Online/On Premises, you get this functionality OOTB (no need to use CredentialManager).
Thanks for checking this out and please try out the workbook and let me know what you like or needs improvement.
I’ve been using Angular 1.x for building custom UI components and SPAs for SharePoint for years now. There’s a lot of rinse and repeat here, and over time my “stack” of the open-source custom directives and services that make Angular such a powerful framework has settled down to a few key ones that I wanted to highlight – hence this post.
Some may be wondering why I am still working almost exclusively with Angular 1.x. Why not Angular 2, or React, or Aurelia? A few reasons:
It’s often already in use. Quite often a customer is already leveraging Angular 1.x in a custom masterpage, so it makes sense not to add another framework to the mix.
Performance improvements are not a high priority. I work almost entirely with SharePoint Online. The classic ASP.NET pages served up there aren’t exactly blindingly fast to load, so Angular 1 (used carefully) doesn’t slow things down measurably. Will this change when SPFx finally GA’s? Of course! But in the meantime, Angular 1.x is very comfortable, which leads to…
Familiarity = Productivity. Ramping up a custom application in SharePoint with Angular is now very quick to do. This is the whole “go with what you know well and can iterate fast on” approach to framework selection. Spend your time building out the logic of your app rather than fighting an unfamiliar framework.
The absolute smorgasbord of community-produced libraries that enhance Angular. A lot of the major ones have Angular 2 versions, but there are some notable exceptions (highlighted below).
So here, in order of frequency of use, are the plugins that I go to time and time again. Check them out and star those github repos!
An awesome state-based routing service for Angular (with Angular 2 and React versions as well) – more widely used than the default Angular 1 router, as it has a fantastic API which allows you to resolve asynchronous data calls (via promises) before you transition to the state that needs them. This keeps your controllers/components light and clean. I use this in every custom webpart/SPA I build that has more than one view (which is almost all of them).
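As a flavour of that resolve pattern (the state, service, and controller names here are made up for illustration):

```javascript
// ui-router (Angular 1.x): resolve the data promise before the controller
// for the state is instantiated.
function configureStates($stateProvider) {
  $stateProvider.state('itemDetail', {
    url: '/items/:id',
    templateUrl: 'itemDetail.html',
    controller: 'ItemDetailCtrl',
    resolve: {
      // 'item' is injected into ItemDetailCtrl only once this resolves.
      item: ['dataService', '$stateParams',
        function (dataService, $stateParams) {
          return dataService.getItem($stateParams.id);
        }]
    }
  });
}
```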
If you need modals in your app, you can add in the uib-modal extension that allows UI-Bootstrap modals to be declared as state in your UI-Router state config. Great for deep linking through to modal windows!
Formly makes using custom controls / directives in forms really easy and gives you uniform validation rules over all of them. It’s also wonderfully extensible – there’s so much you can do with it, once you learn the basics. I put off trying it for AGES and now I wouldn’t be without it – if you have any user input in your Angular app, do yourself a favour and use Formly!
Here’s the first Angular 1-only library in my toolbox. Makes creating complex Gantt-chart interfaces if not dead easy, then at least feasible! I shudder to think what a nightmare it would be to write this kind of functionality from scratch…
There are loads of features here, just like the other libraries listed.
Another Angular 1-only library (although similar Angular 2 projects exist). A great little wizard-form directive that lets you define the steps of your wizard declaratively in your template or, when teamed up with Formly, in your controller. The latter allows you to create dynamic wizard steps by filtering the questions in the next step based on the responses to the previous one (once again – I need to document this in another post in future).
A few extra tricks for SharePoint…
A few other more generic practices when slinging Angular in SharePoint:
Don’t be afraid to use $q to wrap your own promises – yes, it is overused in a lot of example code on Stack Overflow (hint: if you are calling the $http service, you don’t need $q, just return the result of the $http call), but it’s great if you want/need to use CSOM. Just resolve the promise in the success callback of executeQueryAsync and you’ve got a far cleaner implementation (no nested callbacks when you utilise it), easily packaged up in a service.
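A small sketch of that wrapping as a service helper (the helper name is illustrative; executeQueryAsync with success/failure callbacks is the standard JSOM call):

```javascript
// Wrap CSOM's callback-based executeQueryAsync in a $q promise so callers
// can just .then() on it.
function executeQuery($q, clientContext) {
  var deferred = $q.defer();
  clientContext.executeQueryAsync(
    function () { deferred.resolve(); },               // success callback
    function (sender, args) { deferred.reject(args); } // failure callback
  );
  return deferred.promise;
}

// Usage inside a service (illustrative):
// var items = list.getItems(camlQuery);
// ctx.load(items);
// return executeQuery($q, ctx).then(function () { return items; });
```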
Create a reusable service layer – lots of people don’t bother to use Angular services, as most example code just keeps the $http calls in the controller for simplicity. Keep all your REST and CSOM calls to interact with SharePoint in a service module and you’ll get a lot more reuse of your code from application to application. Ideally, use ui-router to resolve the promises from your service before the controller is even initialised (as mentioned above).
Use Widget Wrangler for hosting your Angular apps in SharePoint pages – it handles all your script dependencies cleanly and lets you host in a Script Editor webpart (easily deployed with PnP PowerShell).
Think about caching your data or real-time sync – the excellent Angular-Cache is great for client-side caching of data and if your application’s data is frequently updated, you may want to consider a real-time data option to enhance the solution and prevent the need for page refreshes (another post on this coming soon too), such as Firebase or GunJS.
Azure Functions-All-The-Things! No more PowerShell running in a scheduled task on a VM for background processing. There is a better (and even cheaper) way.
I hope some people find this useful. Please leave a comment if you’ve got some other Angular-goodness you’d like to share!
Azure Functions have officially reached ‘hammer’ status
I’ve been enjoying the ease with which we can now respond to events in SharePoint and perform automation tasks, thanks to the magic of Azure Functions. So many nails, so little time!
The seminal blog post that started so many of us down that road was, of course, John Liu’s Build your PnP Site Provisioning with PowerShell in Azure Functions and run it from Flow, and that pattern is fantastic for many event-driven scenarios.
One place where it currently (at the time of writing) falls down is dealing with a list item delete event. MS Flow can’t respond to this, and nor can a SharePoint Designer workflow.
Without wanting to get into Remote Event Receivers (errgh…), the other way to deal with this is after the fact via the SharePoint change log (if the delete isn’t time sensitive). In my use case it wasn’t – I just needed to be able to clean up some associated items in other lists.
SharePoint Change Logs
SharePoint has had an API for getting a log of changes of certain types, against certain objects, in a certain time window since the dawn of time. The best post showing how to query it from the client side is (in my experience) Paul Schaeflin’s Reading the SharePoint change log from CSOM, which was my primary reference for the below PowerShell-based Function.
In my case, I am only interested in items deleted from a single list, but this could easily be scoped to an entire site and capture more/different event types (see Paul’s post for the specifics).
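My Function does this in PowerShell, but the shape of the change query is easiest to show in Paul’s client-side (JSOM) flavour. A hedged sketch, with only the settings my scenario needs (the helper name is mine; the SP.ChangeQuery calls are the standard JSOM API):

```javascript
// Build a change query scoped to list item deletions since a stored token.
// SP.ChangeQuery(false, false) starts with nothing selected; we then opt in
// to exactly the object/change types we care about.
function buildDeleteChangeQuery(SP, lastChangeToken) {
  var query = new SP.ChangeQuery(false, false);
  query.set_item(true);         // only list item changes...
  query.set_deleteObject(true); // ...and only deletions
  if (lastChangeToken) {
    // Resume from where the previous run left off.
    query.set_changeTokenStart(lastChangeToken);
  }
  return query;
}

// Then: var changes = list.getChanges(query); load and iterate the
// collection, and persist the last change's token for the next run.
```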
The biggest challenge in getting this working was persisting the change token to Azure Storage – and that wasn’t difficult in and of itself, it’s just that the PowerShell bindings for Azure Functions are as yet woefully under-documented (TIP: Get-Content and Set-Content are the key to the PowerShell bindings… easy when you know how). In my case I have an input and an output binding to a single Blob Storage blob (to persist the change token for reference the next time the Function runs), and another output to Queue Storage to trigger a second Function that actually does the cleanup of the other list items linked to the one now sitting in the recycle bin. The whole thing is triggered by an hourly timer. If nothing has been deleted, no action is taken (other than the persisted token blob update).
A Note on Scaling
Note that if multiple delete events occurred since the last check, these are all deposited in one queue message. This won’t cause a problem in my use case (there will never be more than a handful of items deleted in one pass of the Function), but it obviously doesn’t scale well, as too many items handled by the receiving Function would threaten to bump up against the 5-minute execution time limit. I wanted to use the OOTB queue output binding for simplicity, but if you needed to push multiple messages, you could simply use the Azure Storage PowerShell cmdlets instead of an output binding.
Code Now Please
Here’s the Function code (following the PnP PowerShell Azure Functions implementation as per John’s article above and liberally stealing from Paul’s guide above).
This is obviously a simple example with a single objective, but you could take this pattern and ramp it up as high as you like. By targeting the web instead of a single list, you could push a lot of automation through this single pipeline, perhaps ramping up the execution recurrence to every 5 mins or less if you needed that level of reduced latency. Although watch out for your Functions consumption if you turn up the executions too high!