Writing for the Web – that includes your company intranet!

You can have a pool made of gold, but if the water in it is as dirty and stale as a swamp, no one will swim in it!

The same can be said about the content of an intranet. You can have the best design, the best developers and the most carefully planned navigation and taxonomy, but if the content and documents are outdated and hard to read, staff will lose confidence in the intranet’s authority and relevance and start to look elsewhere – or use it as an excuse to get a coffee.

The content of an intranet is usually left to a representative from each department (if you’re lucky!) – usually people who have been working in the company for years, or worse yet, the IT guy. They will use very different language from a new starter, a bookkeeper or the CEO. Often content is written for an intranet “because it has to be there”, to “cover ourselves” or because “the big boss said so”, with no real thought about how easy it is to read or who will be reading it.


Content on the internet has changed and adapted so that users can meet their needs and find information as quickly as possible. Why isn’t the same attitude applied to your company? If your workers weren’t so frustrated trying to find the information they need to do their jobs, maybe they’d perform better; maybe that would result in faster sales; maybe investing in the products your staff use is just as important as investing in the products your consumers use.

I’m not saying that you have to employ a copywriter for your intranet, but at least train the staff you nominate to be custodians of your land (your intranet – your baby).

Below are some tips for your nominated content authors.

The way people read has changed

People read differently thanks to the web. They don’t read. They skim.

  • They don’t like to feel passive
  • They’re reluctant to invest too much time at one site or your intranet
  • They don’t want to work for the information

People DON’T READ MORE because

  • What they find isn’t relevant to what they need or want.
  • They’re trying to answer a question and they only want the answer.
  • They’re trying to do a task and they only want what’s necessary.

Before you write, identify your audience

People come to an intranet page with a specific task in mind. When developing your page’s content, keep your users’ tasks in mind and write to help them accomplish those tasks. If your page doesn’t help them complete that task, they’ll leave (or call your department!)

Ask these questions

  • Who are they? New starters? Experienced staff? Both? What is the lowest common denominator?
  • Where are they? At work? At home? On the train? (Desktop, mobile, laptop, iPad?)
  • What do they want?
  • How educated are they? Are they familiar with business jargon?

Identify the purpose of your text

For an intranet, the main purpose is to inform and educate – not so much to entertain or sell.

When writing to present information:

  • Be consistent
  • Be objective
  • Consider whether a table, diagram or graph would present it more clearly

Structuring your content

Headings and subheadings

Use headings and subheadings for each new topic. This provides context and informs readers about what is to come, and it acts as a bridge between chunks of content.

Sentences and Paragraphs

Use short sentences. And use only 1-3 sentences per paragraph.

‘Front Load’ your sentences. Position key information at the front of sentences and paragraphs.

Chunk your text. Break blocks of text into smaller chunks. Each chunk should address a single concept. Chunks should be self-contained and context-independent.

Layering vs scrolling

It is OK if a page scrolls – it just depends on how you break up your page! Users’ habits have changed over the past 10 years thanks to mobile devices, so scrolling is not a dirty word, as long as visual cues let the user know there’s more content on the page.


Use lists to improve comprehension and retention

  • Bullets for list items that have no logical order
  • Numbered lists for items that have a logical sequence
  • Avoid the lonely bullet point
  • Avoid death by bullet point

General Writing tips

  • Write in plain English
  • Use personal pronouns. Don’t say “Company XYZ prefers you do this”; say “We prefer this”
  • Make your point quickly
  • Reduce print copy – aim for 50% less copy than what you’d write for print
  • Be objective and don’t exaggerate
  • USE WHITE SPACE – this makes content easier to scan, and it is more obvious to the eye that content is broken up into chunks.
  • Avoid jargon
  • Don’t use inflated language

Hyperlinks

  • Avoid explicit link expressions (e.g. “click here”)
  • Describe the information readers will find when they follow the link
  • Use VERBS (doing words) as links
  • Warn users of a large file size before they start downloading
  • Use links to remove secondary information from the bulk of the text (layer the content)

Remove

  • Empty words and phrases
  • Long words or phrases that could be shorter
  • Unnecessary jargon and acronyms
  • Repetitive words or phrases
  • Adverbs (e.g. quite, really, basically, generally)

Avoid Fluff

  • Don’t pad your writing with unnecessary sentences
  • Stick to the facts
  • Use objective language
  • Avoid adjectives, adverbs, buzzwords and unsubstantiated claims

Tips for proofreading

  1. Give it a rest
  2. Look for one type of problem at a time
  3. Double-check facts, figures, dates, addresses, and proper names
  4. Review a hard copy
  5. Read your text aloud
  6. Use a spellchecker
  7. Trust your dictionary
  8. Read your text backwards
  9. Create your own proofreading checklist
  10. Ask for help!

A Useful App

Hemingwayapp.com assesses how good your content is for the web.

A few examples (from a travel page)

Bad Example

Our Approved an​​​d Preferred Providers

Company XYZ has contracted arrangements with a number of providers for travel.  These arrangements have been established on the basis of extracting best value by aggregating spend across all of Company XYZ.

Why it’s bad

Use personal pronouns such as “we” and “you”, so users know you are talking to them. They know where they work. Remove fluff.

Better Example

Our Approved an​​​d Preferred Providers

We have contracted arrangements with a number of providers for travel to provide best value.

Bad Example

Travel consultant:  XYZ Travel Solutions is the approved provider of travel consultant services and must be used to make all business travel bookings.  All airfare, hotel and car rental bookings must be made through FCM Travel Solutions

Why it’s bad

The author is saying the same thing twice in two different ways. This can easily be said in one sentence.

Better Example

Travel consultant

XYZ Travel Solutions must be used to make all airfare, hotel and car rental bookings.

Bad Example

Qantas is Company XYZ preferred airline for both domestic and international air travel and must be used where it services the route and the “lowest logical fare” differential to another airline is less than $50 for domestic travel and less than $400 for international travel

Why it’s bad

This sentence is too long, and it uses too much jargon. What does “lowest logical fare” even mean? The second part does not make sense either – what exactly are they trying to say? I am not entirely sure, but if my guess is correct it should read something like the example below.

Better Example

Qantas is our preferred airline for both domestic and international air travel. When flying, choose the cheapest rate available within reason. You can only choose another airline if it is cheaper by $50 for domestic and cheaper by $400 for international travel.

Bad Example

Ground transportation:  Company XYZ preferred provider for rental vehicle services is Avis.  Please refer to the list of approved rental vehicle types in the “Relevant Documents” link to the right hand side of this page.

Why it’s bad

Front load your sentences, with the most important information first. Don’t make the user dig for a document – link the relevant document right there. Link the verb; don’t say “CLICK HERE”!

Better Example

Ground transportation

Avis is our preferred provider to rent vehicles.

View our list of approved rental vehicles.

Bad Example

Booking lead times:  To ensure that the best airfare and hotel rate can be obtained, domestic travel bookings should be made between 14-21 days prior to travel, and international travel bookings between 21 and 42 days prior to travel.  For international bookings, also consider lead times for any visas that may need to be obtained.

Why it’s bad

Front load your sentence… most important information first. This is a good opportunity to chunk your text.

Better Example

Booking lead times

Ensure you book your travel early:

14-21 days prior to travel for domestic

21-42 days prior to travel for international (also consider lead times for visas)

This will ensure that the best airfare and hotel rate can be obtained.

 

Using MIMWAL to mass update users

The generalised Workflow Activity Library for Microsoft Identity Manager (MIMWAL) is not particularly new, but I’m regularly finding new ways of using it.

TL;DR: [//Queries/Key/Attribute] can be used as a target to update multiple accounts at once

Working from colleague Michael’s previous post Introduction to MIM Advanced Workflows with MIMWAL (Update Resource workflow section), user accounts can be populated with location details when a location code is set or updated.

But, consider the question: what happens when the source location object is updated with new details, without moving the user between locations? A common occurrence is when the building name/number/street changes due to typing errors. New accounts and accounts moved into the location have the updated details, but accounts already in the location are stuck with old address details. The same can also occur with department codes and department names, or a number of other value->name mappings.

This is a scenario I’ve seen built poorly several times, with a variety of external script hackery used to address it, if it is addressed at all, and I’m here to say the MIMWAL makes it ridiculously easy. If you don’t have the MIMWAL deployed into your MIM (or FIM) environment, I seriously recommend doing so – it will repay the effort taken to build and deploy very quickly (Check the post above for build/deploy notes).

Mass Updates Solution

All it takes with the MIMWAL is one workflow, containing just one activity, paired with a policy rule (not documented here).

Start a new workflow definition:

  • Name: Update all people in location when Location is updated
  • Type: Action
  • Run on policy update: False

CreateWorkflowLocationUpdate1

Add Activity -> Activity Picker -> “WAL: Update Resources” -> Select

CreateWorkflowLocationUpdate2

You’ll have to tick Advanced Features, and then tick Query Resources when it is revealed, to be able to enter the query.

Here, we’re searching for all person objects which have their location reference set to the location object which has just been updated. If you’re not using location references, you could use a search such as “/Person[_locationCode = ‘[//Target/_locationCode]’]” instead.

  • Advanced Features: True
  • Query Resources: True
  • Queries:
    • Key: Users
    • XPath Filter: /Person[_locationObject = ‘[//Target/ObjectID]’]

CreateWorkflowLocationUpdate3

Here is where the magic happens. I haven’t found many examples on the web; hopefully this makes it more obvious how updating multiple objects at a time works.

The target expression is the result set from the above query, and the particular attribute required. In this example, we’re collecting the Address attribute from the updated location object ([//Target/Address]) if it exists, or null otherwise, and sending it to the Address attribute on the query result set called Users ([//Queries/Users/Address]).

Updates:

  • Value Expression: IIF(IsPresent([//Target/Address]),[//Target/Address],Null())
  • Target: [//Queries/Users/Address]
  • Allow Null: True

and so on, for all appropriate attributes.
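For example, if your location objects also carry a City attribute (a hypothetical attribute name – substitute whatever exists in your schema on both the location and person objects), the mapping follows exactly the same pattern:

  • Value Expression: IIF(IsPresent([//Target/City]),[//Target/City],Null())
  • Target: [//Queries/Users/City]
  • Allow Null: True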

CreateWorkflowLocationUpdate4

Notes

This is very simple to set up, but it can be slow to execute across large result sets, as each object (e.g. each Person) is updated in a separate request. Try to make changes to location data during quiet processing times, or on an admin service instance… but you do that anyway, right?

Cosmos DB Server-Side Programming with TypeScript – Part 6: Build and Deployment

So far in this series we’ve been compiling our server-side TypeScript code to JavaScript locally on our own machines, and then copying and pasting it into the Azure Portal. However, an important part of building a modern application – especially a cloud-based one – is having a reliable automated build and deployment process. There are a number of reasons why this is important, ranging from ensuring that a developer isn’t building code on their own machine – and therefore may be subject to environmental variations or differences that cause different outputs – through to running a suite of tests on every build and release. In this post we will look at how Cosmos DB server-side code can be built and released in a fully automated process.

This post is part of a series:

  • Part 1 gives an overview of the server side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 (this post) explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Build and Release Systems

There are a number of services and systems that provide build and release automation. These range from systems you need to install and manage yourself, such as Atlassian Bamboo, Jenkins, and Octopus Deploy, through to managed systems like Amazon CodePipeline/CodeBuild, Travis CI, and AppVeyor. In our case, we will use Microsoft’s Visual Studio Team Services (VSTS), which is a managed (hosted) service that provides both build and release pipeline features. However, the steps we use here can easily be adapted to other tools.

I will assume that you have a VSTS account, that you have loaded the code into a source code repository that VSTS can access, and that you have some familiarity with the VSTS build and release system.

Throughout this post, we will use the same code that we used in part 5 of this series, where we built and tested our stored procedure. The exact same process can be used for triggers and user-defined functions as well. I’ll assume that you have a copy of the code from part 5 – if you want to download it, you can get it from the GitHub repository for that post. If you want to refer to the finished version of the whole project, you can access it on GitHub here.

Defining our Build Process

Before we start configuring anything, let’s think about what we want to achieve with our build process. I find it helpful to think about the start point and end point of the build. We know that when we start the build, we will have our code within a Git repository. When we finish, we want to have two things: a build artifact in the form of a JavaScript file that is ready to deploy to Cosmos DB; and a list of unit test results. Additionally, the build should pass if all of the steps ran successfully and the tests passed, and it should fail if any step or any test failed.

Now that we have the start and end points defined, let’s think about what we need to do to get us there.

  • We need to install our NPM packages. On VSTS, every time we run a build our build environment will be reset, so we can’t rely on any files being there from a previous build. So the first step in our build pipeline will be to run npm install.
  • We need to build our code so it’s ready to be tested, and then we need to run the unit tests. In part 5 of this series we created an NPM script to help with this when we run locally – and we can reuse the same script here. So our second build step will be to run npm run test.
  • Once our tests have run, we need to report their results to VSTS so it can visualise them for us. We’ll look at how to do this below. Importantly, VSTS won’t fail the build automatically if there are any test failures, so we’ll look at how to do this ourselves shortly.
  • If we get to this point in the build then our code is successfully passing the tests, so now we can create the real release build. Again we have already defined an NPM script for this, so we can reuse that work and call npm run build.
  • Finally, we can publish the release JavaScript file as a build artifact, which makes it available to our release pipeline.

We’ll soon see how we can actually configure this. But before we can write our build process, we need to figure out how we’ll report the results of our unit tests back to VSTS.

Reporting Test Results

When we run unit tests from inside a VSTS build, the unit test runner needs some way to report the results back to VSTS. There are some built-in integrations with common tools like VSTest (for testing .NET code). For Jasmine, we need to use a reporter that we configure ourselves. The jasmine-tfs-reporter NPM package does this for us – its reporter will emit a specially formatted results file, and we’ll tell VSTS to look at this.

Let’s open up our package.json file and add the following line into the devDependencies section:

Run npm install to install the package.

Next, create a file named spec/vstsReporter.ts and add the following lines, which will configure Jasmine to send its results to the reporter we just installed:
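A minimal sketch of that helper, assuming the package’s default export is a ready-to-use Jasmine reporter (the exact export shape is an assumption – check the jasmine-tfs-reporter README), looks like this:

// spec/vstsReporter.ts – register the TFS/VSTS reporter with Jasmine so that
// test results are also written out as an XML file that VSTS can understand.
import tfsReporter = require('jasmine-tfs-reporter');

jasmine.getEnv().addReporter(tfsReporter);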

Finally, let’s edit the jasmine.json file. We’ll add a new helpers section, which will tell Jasmine to run that script before it starts running our tests. Here’s the new jasmine.json file we’ll use:

Now run npm run test. You should see that a new testresults folder has been created, and it contains an XML file that VSTS can understand.

That’s the last piece of the puzzle we need to have VSTS build our code. Now let’s see how we can make VSTS actually run all of these steps.

Creating the Build Configuration

VSTS has a great feature – currently in preview – that allows us to specify our build definition in a YAML file, check it into our source control system, and have the build system execute it. More information on this feature is available in a previous blog post I wrote. We’ll make use of this feature here to write our build process.

Create a new file named build.yaml. This file will define all of our build steps. Paste the following contents into the file:

This YAML file tells VSTS to do the following:

  • Run the npm install command.
  • Run the npm run test command. If we get any test failures, this command will cause VSTS to detect an error.
  • Regardless of whether an error was detected, take the test results that have been saved into the testresults folder and publish them. (Publishing just means showing them within the build; they won’t be publicly available.)
  • If everything worked up till now, run npm run build to build the releasable JavaScript file.
  • Publish the releasable JavaScript file as a build artifact, so it’s available to the release pipeline that we’ll configure shortly.

Commit this file and push it to your Git repository. In VSTS, we can now set up a new build configuration, point it to the YAML file, and let it run. After it finishes, you should see something like this:

release-1

We can see that four tests ran and passed. If we click on the Artifacts tab, we can view the artifacts that were published:

release-2

And by clicking the Explore button and expanding the drop folder, we can see the exact file that was created:

release-3

You can even download the file from here, and confirm that it looks like what we expect to be able to send to Cosmos DB. So, now we have our code being built and tested! The next step is to actually deploy it to Cosmos DB.

Deciding on a Release Process

Cosmos DB can be used in many different types of applications, and the way that we deploy our scripts can differ as well. In some applications, like those that are heavily server-based and have initialisation logic, we might provision our database, collections, and scripts through our application code. In other systems, like serverless applications, we want to provision everything we need during our deployment process so that our application can immediately start to work. This means there are several patterns we can adopt for installing our scripts.

Pattern 1: Use Application Initialisation Logic

If we have an Azure App Service, Cloud Service, or another type of application that provides initialisation lifecycle events, we can use the initialisation code to provision our Cosmos DB database and collection, and to install our stored procedures, triggers, and UDFs. The Cosmos DB client SDKs provide a variety of helpful methods to do this. For example, the .NET and .NET Core SDKs provide this functionality. If the platform you are using doesn’t have an SDK, you can also use the REST API provided by Cosmos DB.

This approach is also likely to be useful if we dynamically provision databases and collections while our application runs. We can also use this approach if we have an application warm-up sequence during which the existence of the collection can be confirmed and any missing pieces can be added.

Pattern 2: Initialise Serverless Applications with a Custom Function

When we’re using serverless technologies like Azure Functions or Azure Logic Apps, we may not have the opportunity to initialise our application the first time it loads. We could check the existence of our Cosmos DB resources whenever we are executing our logic, but this is quite wasteful and inefficient. One pattern that can be used is to write a special ‘initialisation’ function that is called from our release pipeline. This can be used to prepare the necessary Cosmos DB resources, so that by the time our callers execute our code, the necessary resources are already present. However, this presents some challenges, including the fact that it necessitates mixing our deployment logic and code with our main application code.

Pattern 3: Deploying from VSTS

The approach that I will adopt in this post is to deploy the Cosmos DB resources from our release pipeline in VSTS. This keeps our release process separate from our main application code, and gives us the flexibility to use the Cosmos DB resources at any point in our application logic. This may not suit all applications, but for many applications that use Cosmos DB, this type of workflow will work well.

There is a lot more to release configuration than I’ll be able to discuss here – that could easily be its own blog series. I’ll keep this particular post focused just on installing server-side code onto a collection.

Defining the Release Process

As with builds, it’s helpful to think through the process we want the release to follow. Again, we’ll think first about the start and end points. When we start the release pipeline, we will have the build that we want to release (which will include our compiled JavaScript script). For now, I’ll also assume that you have a resource group containing a Cosmos DB account with an existing database and collection, and that you know the account key. In a future post I will elaborate on how some of this process can also be automated, but that is outside the scope of this series. Once the release process finishes, we expect that the collection will have the server-side resource installed and ready to use.

VSTS doesn’t have built-in support for Cosmos DB. However, we can easily use a custom PowerShell script to install Cosmos DB scripts on our collection. I’ve written such a script, and it’s available for download here. The script uses the Cosmos DB API to deploy stored procedures, triggers, and user-defined functions to a collection.

We need to include this script into our build artifacts so that we can use it from our deployment process. So, download the file and save it into a deploy folder in the project’s source repository. Now that we have that there, we need to tell the VSTS build process to include it as an artifact, so open the build.yaml file and add this to the end of the file, being careful to align the spaces and indentation with the sections above it:

Commit these changes, and then run a new build.

Now we can set up a release definition in VSTS and link it to our build configuration so it can receive the build artifacts. We only need one step currently, which will deploy our stored procedure using the PowerShell script we included as a build artifact. Of course, a real release process is likely to do a lot more, including deploying your application. For now, though, let’s just add a single PowerShell step, and configure it to run an inline script with the following contents:

This inline script does the following:

  • It loads in the PowerShell file from our build artifact, so that the functions within that file are available for us to use.
  • It then runs the DeployStoredProcedure function, which is defined in that PowerShell file. We pass in some parameters so the function can contact Cosmos DB:
    • AccountName – this is the name of your Cosmos DB account.
    • AccountKey – this is the key that VSTS can use to talk to Cosmos DB’s API. You can get this from the Azure Portal – open up the Cosmos DB account and click the Keys tab.
    • DatabaseName – this is the name of the database (in our case, Orders).
    • CollectionName – this is the name of the collection (in our case again, Orders).
    • StoredProcedureName – this is the name we want our stored procedure to have in Cosmos DB. This doesn’t need to match the name of the function inside our code file, but I recommend it does to keep things clear.
    • SourceFilePath – this is the path to the JavaScript file that contains our script.

Note that in the script above I’ve assumed that the build configuration’s name is CosmosServer-CI, so that appears in the two file paths. If you have a build configuration that uses a different name, you’ll need to replace it. Also, I strongly recommend you don’t hard-code the account name, account key, database name, and collection name like I’ve done here – you would instead use VSTS variables and have them dynamically inserted by VSTS. Similarly, the account key should be specified as a secret variable so that it is encrypted. There are also other ways to handle this, including creating the Cosmos DB account and collection within your deployment process, and dynamically retrieving the account key. This is beyond the scope of this series, but in a future blog post I plan to discuss some ways to achieve this.

After configuring our release process, it will look something like this:

release-4

Now that we’ve configured our release process we can create a new release and let it run. If everything has been configured properly, we should see the release complete successfully:

release-5

And if we check the collection through the Azure Portal, we can see the stored procedure has been deployed:

release-6

This is pretty cool. It means that whenever we commit a change to our stored procedure’s TypeScript file, it can automatically be compiled, tested, and deployed to Cosmos DB – without any human intervention. We could now adapt the exact same process to deploy our triggers (using the DeployTrigger function in the PowerShell script) and UDFs (using the DeployUserDefinedFunction function). Additionally, we can easily make our build and deployments into true continuous integration (CI) and continuous deployment (CD) pipelines by setting up automated builds and releases within VSTS.

Summary

Over this series of posts, we’ve explored Cosmos DB’s server-side programming capabilities. We’ve written a number of server-side scripts including a UDF, a stored procedure, and two triggers. We’ve written them in TypeScript to ensure that we’re using strongly typed objects when we interact with Cosmos DB and within our own code. We’ve also seen how we can unit test our code using Jasmine. Finally, in this post, we’ve looked at how our server-side scripts can be built and deployed using VSTS and the Cosmos DB API.

I hope you’ve found this series useful! If you have any questions or similar topics that you’d like to know more about, please post them in the comments below.

Key Takeaways

  • Having an automated build and release pipeline is very important to ensure reliable, consistent, and safe delivery of software. This should include our Cosmos DB server-side scripts.
  • It’s relatively easy to adapt the work we’ve already done with our build scripts to work on a build server. Generally it will simply be a matter of executing npm install and then npm run build to create a releasable build of our code.
  • We can also run our unit tests by simply executing npm run test.
  • Test results from Jasmine can be published into VSTS using the jasmine-tfs-reporter package. Other integrations are available for other build servers too.
  • Deploying our server-side scripts onto Cosmos DB can be handled in different ways for different applications. With many applications, having server-side code deployed within an existing release process is a good idea.
  • VSTS doesn’t have built-in support for Cosmos DB, but I have provided a PowerShell script that can be used to install stored procedures, triggers, and UDFs.
  • You can view the code for this post on GitHub.

Cosmos DB Server-Side Programming with TypeScript – Part 5: Unit Testing

Over the last four parts of this series, we’ve discussed how we can write server-side code for Cosmos DB, and the types of situations where it makes sense to do so. If you’re building a small sample application, you now have enough knowledge to go and build out UDFs, stored procedures, and triggers. But if you’re writing production-grade applications, there are two other major topics that need discussion: how to unit test your server-side code, and how to build and deploy it to Cosmos DB in an automated and predictable manner. In this part, we’ll discuss testing. In the next part, we’ll discuss build and deployment.

This post is part of a series:

  • Part 1 gives an overview of the server side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 (this post) discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Unit Testing Cosmos DB Server-Side Code

Testing JavaScript code can be complex, and there are many different ways to do it and different tools that can be used. In this post I will outline one possible approach for unit testing. There are other ways that we could also test our Cosmos DB server-side code, and your situation may be a bit different to the one I describe here. Some developers and teams place different priorities on some of the aspects of testing, so this isn’t a ‘one size fits all’ approach. In this post, the testing approach we will build out allows for:

  • Mocks: mocking allows us to pass in mocked versions of our dependencies so that we can test how our code behaves independently of a working external system. In the case of Cosmos DB, this is very important: the getContext() method, which we’ve looked at throughout this series, provides us with access to objects that represent the request, response, and collection. Our code needs to be tested without actually running inside Cosmos DB, so we mock out the objects it sends us.
  • Spies: spies are often a special type of mock. They allow us to inspect the calls that have been made to the object to ensure that we are triggering the methods and side-effects that we expect.
  • Type safety: as in the rest of this series, it’s important to use strongly typed objects where possible so that we get the full benefit of the TypeScript compiler’s type system.
  • Working within the allowed subset of JavaScript: although Cosmos DB server-side code is built using the JavaScript language, it doesn’t provide all of the features of JavaScript. This is particularly important when testing our code, because many test libraries make assumptions about how the code will be run and the level of JavaScript support that will be available. We need to work within the subset of JavaScript that Cosmos DB supports.

I will assume some familiarity with these concepts, but even if they’re new to you, you should be able to follow along. Also, please note that this series only deals with unit testing. Integration testing your server-side code is another topic, although it should be relatively straightforward to write integration tests against a Cosmos DB server-side script.

Challenges of Testing Cosmos DB Server-Side Code

Cosmos DB ultimately executes JavaScript code, and so we will use JavaScript testing frameworks to write and run our unit tests. Many of the popular JavaScript and TypeScript testing frameworks and helpers are designed specifically for developers who write browser-based JavaScript or Node.js applications. Cosmos DB has some properties that can make these frameworks difficult to work with.

Specifically, Cosmos DB doesn’t support modules. Modules in JavaScript allow individual JavaScript files to expose a public interface to other blocks of code in different files. When I was preparing for this blog post I spent a lot of time trying to figure out a way to handle the myriad testing and mocking frameworks that assume modules can be used in our code. Ultimately I came to the conclusion that it doesn’t really matter if we use modules inside our TypeScript files, as long as the module code doesn’t make it into our release JavaScript files. This means that we’ll have to build our code twice – once for testing (which includes the module information we need), and again for release (which doesn’t include modules). This isn’t uncommon – many development environments have separate ‘Debug’ and ‘Release’ build configurations, for example – and we can use some tricks to achieve our goals while still getting the benefit of a good design-time experience.

Defining Our Tests

We’ll be working with the stored procedure that we built out in part 3 of this series. The same concepts could be applied to unit testing triggers, and also to user-defined functions (UDFs) – and UDFs are generally easier to test as they don’t have any context variables to mock out.
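As a quick refresher on the overall shape of that code – this is only a rough sketch; the entry-point name and parameter list here are assumptions, and the real implementation lives in the part 3 code – the stored procedure separates a thin entry point from an implementation function, which is exactly what makes it testable:

function getGroupedOrders(productIds: string[]) {
  // Gather the Cosmos DB server-side objects at the boundary...
  const context = getContext();
  const collection = context.getCollection();

  // ...then hand them to the implementation function that holds the business
  // logic. Our unit tests call getGroupedOrdersImpl directly, passing in a
  // mocked collection instead of the real one.
  const groupedOrders = getGroupedOrdersImpl(productIds, collection);
  context.getResponse().setBody(groupedOrders);
}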

Looking back at the stored procedure, its purpose is to return the list of customers who have ordered any of a specified list of product IDs, grouped by product ID, so an initial set of test cases might be as follows:

  1. If the productIds parameter is empty, the method should return an empty array.
  2. If the productIds parameter contains one item, it should execute a query against the collection containing the item’s identifier as a parameter.
  3. If the productIds parameter contains one item, the method should return a single CustomersGroupedByProduct object in the output array, which should contain the productId that was passed in, and whatever customerIds the mocked collection query returned.
  4. If the method is called with a valid productIds array, and the queryDocuments method on the collection returns false, an error should be returned by the function.

You might have others you want to focus on, and you may want to split some of these out – but for now we’ll work with these so we can see how things work. Also, in this post I’ll assume that you’ve got a copy of the stored procedure from part 3 ready to go – if you haven’t, you can download it from the GitHub repository for that part.

If you want to see the finished version of the whole project, including the tests, you can access it on GitHub here.

Setting up TypeScript Configurations

The first change we need to make is to rearrange our TypeScript configuration a little. Currently we only have one tsconfig.json file that we use to build. Now we’ll need to add a second file. The two files will be used for different situations:

  • tsconfig.json will be the one we use for local builds, and for running unit tests.
  • tsconfig.build.json will be the one we use for creating release builds.

First, open up the tsconfig.json file that we already have in the repository. We need to change it to the following:
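As a sketch (keep whatever existing compiler options your project already has – only the settings called out below change, and the include globs are assumptions), the local/test configuration ends up looking something like this:

{
  "compilerOptions": {
    // Existing options (target, strict, etc.) stay as they were.
    "module": "commonjs",
    "outDir": "output/test"
  },
  "include": [
    "src/**/*.ts",
    "spec/**/*.ts"
  ]
}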

The key changes we’re making are:

  • We’re now including files from the spec folder in our build. This folder will contain the tests that we’ll be writing shortly.
  • We’ve added the line "module": "commonjs". This tells TypeScript that we want to compile our code with module support. Again, this tsconfig.json will only be used when we run our builds locally or for running tests, so we’ll later make sure that the module-related code doesn’t make its way into our release builds.
  • We’ve changed from using outFile to outDir, and set the output directory to output/test. When we use modules like we’re doing here, we can’t use the outFile setting to combine our files together, but this won’t matter for our local builds and for testing. We also put the output files into a test subfolder of the output folder so that we keep things organised.

Now we need to create a tsconfig.build.json file with the following contents:
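Again as a sketch, with your existing compiler options carried over, the release configuration looks something like this:

{
  "compilerOptions": {
    // No module support in the release build, so everything can be combined
    // into a single file that Cosmos DB will accept.
    "module": "none",
    "outFile": "output/build/sp-getGroupedOrders.js"
  },
  "include": [
    "src/**/*.ready.ts"
  ]
}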

This looks more like the original tsconfig.json file we had, but there are a few minor differences:

  • The include element now looks for files matching the pattern *.ready.ts. We’ll look at what this means later.
  • The module setting is explicitly set to none. As we’ll see later, this isn’t sufficient to get everything we need, but it’s good to be explicit here for clarity.
  • The outFile setting – which we can use here because module is set to none – is going to emit a JavaScript file within the build subfolder of the output folder.

Now let’s add the testing framework.

Adding a Testing Framework

In this post we’ll use Jasmine, a testing framework for JavaScript. We can import it using NPM. Open up the package.json file and replace it with this:

There are a few changes to our previous version:

  • We’ve now imported the jasmine module, as well as the Jasmine type definitions, into our project; and we’ve imported moq.ts, a mocking library, which we’ll discuss below.
  • We’ve also added a new test script, which will run a build and then execute Jasmine, passing in a configuration file that we will create shortly.

Run npm install from a command line/terminal to restore the packages, and then create a new file named jasmine.json with the following contents:

We’ll understand a little more about this file as we go on, but for now, we just need to understand that this file defines the Jasmine specification files that we’ll be testing against. Now let’s add our Jasmine test specification so we can see this in action.

Starting Our Test Specification

Let’s start by writing a simple test. Create a folder named spec, and within it, create a file named getGroupedOrdersImpl.spec.ts. Add the following code to it:
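As a sketch – the import path and the exact parameter list of getGroupedOrdersImpl come from the part 3 code, so treat them as assumptions – the first test looks something like this:

import { getGroupedOrdersImpl } from '../src/getGroupedOrders';

describe('getGroupedOrdersImpl', () => {
  it('should return an empty array', () => {
    // With no product IDs there is nothing to query, so a null collection is
    // fine here, and we simply expect an empty result back.
    const result = getGroupedOrdersImpl([], null);
    expect(result).toEqual([]);
  });
});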

This code does the following:

  • It sets up a new Jasmine spec named getGroupedOrdersImpl. This is the name of the method we’re testing for clarity, but it doesn’t need to match – you could name the spec whatever you want.
  • Within that spec, we have a test case named should return an empty array.
  • That test executes the getGroupedOrdersImpl function, passing in an empty array, and a null object to represent the Collection.
  • Then the test confirms that the result of that function call is an empty array.

This is a fairly simple test – we’ll see a slightly more complex one in a moment. For now, though, let’s get this running.

There’s one step we need to do before we can execute our test. If we tried to run it now, Jasmine would complain that it can’t find the getGroupedOrdersImpl method. This is because of the way that JavaScript modules work. Our code needs to export its externally accessible methods so that the Jasmine test can see it. Normally, exporting a module from a Cosmos DB JavaScript file will mean that Cosmos DB doesn’t accept the file anymore – we’ll see a solution to that shortly.

Open up the src/getGroupedOrders.ts file, and add the following at the very bottom of the file:
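It’s a single line:

export { getGroupedOrdersImpl };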

The export statement sets up the necessary TypeScript compilation instruction to allow our Jasmine test spec to reach this method.

Now let’s run our test. Execute npm run test, which will compile our stored procedure (including the export), compile the test file, and then execute Jasmine. You should see that Jasmine executes the test and shows 1 spec, 0 failures, indicating that our test successfully ran and passed. Now let’s add some more sophisticated tests.

Adding Tests with Mocks and Spies

When we’re testing code that interacts with external services, we often will want to use mock objects to represent those external dependencies. Most mocking frameworks allow us to specify the behaviour of those mocks, so we can simulate various conditions and types of responses from the external system. Additionally, we can use spies to observe how our code calls the external system.

Jasmine provides a built-in mocking framework, including spy support. However, the Jasmine mocks don’t support TypeScript types, and so we lose the benefit of type safety. In my opinion this is an important downside, and so instead we will use the moq.ts mocking framework. You’ll see we have already installed it in the package.json.

Since we’ve already got it available to us, we need to add this line to the top of our spec/getGroupedOrdersImpl.spec.ts file:
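Something along these lines – the exact set of names you need may vary, but Mock, It and Times cover the scenarios described below:

import { Mock, It, Times } from 'moq.ts';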

This tells TypeScript to import the relevant mocking types from the moq.ts module. Now we can use the mocks in our tests.

Let’s set up another test, in the same file, as follows:

This test does a little more than the last one:

  • It sets up a mock of the ICollection interface.
  • This mock will send back a hard-coded string (self-link) when the getSelfLink() method is called.
  • It also provides mock behaviour for the queryDocuments method. When the method is called, it invokes the callback function, passing back a list of documents with a single empty string, and then returns true to indicate that the query was accepted.
  • The mock.object() method is used to convert the mock into an instance that can be provided to the getGroupedOrdersImpl function, which then uses it in place of the real Cosmos DB collection. This means we can test how our code will behave, and we can emulate the behaviour of Cosmos DB as we wish.
  • Finally, we call mock.verify to ensure that the getGroupedOrdersImpl function executed the queryDocuments method on the mock collection exactly once.

You can run npm run test again now, and verify that it shows 2 specs, 0 failures, indicating that our new test has successfully passed.

Now let’s fill out the rest of the spec file – here’s the complete file with all of our test cases included:

You can execute the tests again by calling npm run test. Try tweaking the tests so that they fail, then re-run them and see what happens.

Building and Running

All of the work we’ve just done means that we can run our tests. However, if we try to build our code to submit to Cosmos DB, it won’t work anymore. This is because the export statement we added to make our tests work will emit code that Cosmos DB’s JavaScript engine doesn’t understand.

We can remove this code at build time by using a preprocessor. This will remove the export statement – or anything else we want to take out – from the TypeScript file. The resulting cleaned file is the one that then gets sent to the TypeScript compiler, and it emits a Cosmos DB-friendly JavaScript file.

To achieve this, we need to chain together a few pieces. First, let’s open up the src/getGroupedOrders.ts file. Replace the line that says export { getGroupedOrdersImpl } with this section:

The extra lines we’ve added are preprocessor directives. TypeScript itself doesn’t understand these directives, so we need to use an NPM package to process them. The one I’ve used here is jspreproc. It will look through the file, handle the directives it finds in specially formatted comments, and then emit the resulting cleaned file. Unfortunately, the preprocessor only works on a single file at a time. This is OK for our situation, as we have all of our stored procedure code in one file, but we might not do that in every situation. Therefore, I have also used the foreach-cli NPM package to search for all of the *.ts files within our src folder and process them. It saves the cleaned files with a .ready.ts extension, which our tsconfig.build.json file refers to.

Open the package.json file and replace it with the following contents:

Now we can run npm install to install all of the packages we’re using. You can then run npm run test to run the Jasmine tests, and npm run build to build the releasable JavaScript file. This is emitted into the output/build/sp-getGroupedOrders.js file, and if you inspect that file, you’ll see it doesn’t have any trace of module exports. It looks just like it did back in part 3, which means we can send it to Cosmos DB without any trouble.

Summary

In this post, we’ve built out the necessary infrastructure to test our Cosmos DB server-side code. We’ve used Jasmine to run our tests, and moq.ts to mock out the Cosmos DB server objects in a type-safe manner. We also adjusted our build script so that we can compile a clean copy of our stored procedure (or trigger, or UDF) while keeping the necessary export statements to enable our tests to work. In the final post of this series, we’ll look at how we can automate the build and deployment of our server-side code using VSTS, and integrate it into a continuous integration and continuous deployment pipeline.

Key Takeaways

  • It’s important to test Cosmos DB server-side code. Stored procedures, triggers, and UDFs contain business logic and should be treated as a fully fledged part of our application code, with the same quality criteria we would apply to other types of source code.
  • Because Cosmos DB server-side code is written in JavaScript, it is testable using JavaScript and TypeScript testing frameworks and libraries. However, the lack of support for modules means that we have to be careful in how we use these since they may emit release code that Cosmos DB won’t accept.
  • We can use Jasmine for testing. Jasmine also has a mocking framework, but it is not strongly typed.
  • We can get strong typing using a TypeScript mocking library like moq.ts.
  • By structuring our code correctly – using a single entry-point function, which calls out to getContext() and then sends the necessary objects into a function that implements our actual logic – we can easily mock and spy on our calls to the Cosmos DB server-side libraries.
  • We need to export the functions we are testing using the export statement. This makes them available to the Jasmine test spec.
  • However, these export statements need to be removed before we can compile our release version. We can use a preprocessor to remove those statements.
  • You can view the code for this post on GitHub.

Automating the submission of WordPress Blog Posts to your Microsoft MVP Community Activities Profile using PowerShell

Introduction

In November last year (2017) I was honored to be awarded Microsoft MVP status for Enterprise Mobility – Identity and Access. MVP status is awarded based on community activities, and even once you’ve attained it you need to keep your community activity contributions updated on your profile.

Until recently this was done by accessing the portal and updating your profile manually. However, in mid-2017 an MVP PowerShell module (big thanks to Francois-Xavier Cat and Emin Atac) was released that allows for some automation.

In this post I’ll detail using the MVP PowerShell Module to retrieve your latest WordPress Blog Post and submit it to your MVP Profile as an MVP Community Contribution.

Prerequisites

In order for this to work you will need to:

  • be a Microsoft MVP
  • register at the MS MVP API Developer Portal using your MVP-registered email/profile address
    • subscribe to the MVP Production Application
    • copy your API key (you’ll need this in the script below)

The Script

Update the script below for:

  • MVP API Key
    • Update Line 5 with your API key as detailed above
  • WordPress Blog URL (mine is blog.darrenjrobinson.com)
  • Your Award Category
    • Update Line 17 with your category $contributionTechnology = “Identity and Access”
    • type New-MVPContribution -ContributionTechnology and you’ll get a list of the MVP Award Categories

Award Categories.PNG

The Script

Here is the simple script. Run it after publishing a new entry that you also want added to your Community Contributions profile. There’s no error handling and nothing complex, but you’re an MVP, so you can plagiarise away and make it do what suits your needs.

Summary

Hopefully that makes it simple to keep your MVP profile up to date. If you’re using a different blogging platform, I’m sure the basic process will work with a few tweaks to the query that retrieves the content. Enjoy.

How to set Property Bag values in SharePoint Modern Sites using SharePoint Online .Net CSOM

If you think setting property bag values programmatically for modern SharePoint sites using CSOM would be as straightforward as it is for classic SharePoint sites, then there is a surprise waiting for you.

As you can see below, in the old C# CSOM this code would set the Property Bag values as follows:

The challenge with the above method is that the PropertyBag values are not persisted after saving. So, if you load the context object again, the values are back to the initial values (i.e. without the “PropertyBagValue1” = “Property”).

The cause of the issue is that modern sites have NoScript enabled (IsNoScript = true), which prevents us from updating the object model through script. Below is a screenshot of the documentation from MS Docs – https://docs.microsoft.com/en-us/sharepoint/dev/solution-guidance/modern-experience-customizations-customize-sites

ModernTeamSitesPropertyBag_Limitation

Resolution:

Using PowerShell

Resolving this is quite easy using PowerShell – set -DenyAddAndCustomizePages to 0:

Set-SPOsite <SiteURL> -DenyAddAndCustomizePages 0

However, when working with the CSOM model we need to take another approach (see below).

Using .NET CSOM Model

When using the CSOM model, we must use the SharePoint PnP Online CSOM library for this. Start by installing it from NuGet or the Package Manager in Visual Studio. Next, add the code below.

Walking through the code, we are first initializing Tenant Administration and then accessing the Site Collection using the SiteUrl property. Then we use the SetSiteProperties method to set noScriptSite to false.  After that, we can use the final block of code to set the property bag values.

When done, you may want to set noScriptSite back to true, as there are security implications for modern sites, as described in this article – https://support.office.com/en-us/article/security-considerations-of-allowing-custom-script-b0420ab0-aff2-4bbc-bf5e-03de9719627c

 

Recommendations on using Terraform to manage Azure resources

siliconvalve

If you’ve been working in the cloud infrastructure space for the last few years you can’t have missed the buzz around HashiCorp’s Terraform product. Terraform provides a declarative model for infrastructure provisioning that spans multiple cloud providers as well as on-premises services from the likes of VMware.

I’ve recently had the opportunity to use Terraform to do some Azure infrastructure provisioning so I thought I’d share some recommendations on using Terraform with Azure (as at January 2018). I’ll also preface this post by saying that I have only been provisioning Azure PaaS services (App Service, Cosmos DB, Traffic Manager, Storage and Application Insights) and haven’t used any IaaS components at all.

In the beginning

I needed to provide an easy way to provision around 30 inter-related services that together constitute the hosting environment for a single customer solution. Ideally I wanted a way to make it easy to re-provision these…


Automating the creation of Azure IoT Hubs and the registration of IoT Devices with PowerShell and VS Code

The creation of an Azure IoT Hub is quick and simple, either through the Azure Portal or using PowerShell. But what can get more time-consuming is the registration of IoT Devices with the IoT Hub and generation of SAS Tokens for them for authentication.

In my experiments with micro-controllers and their integration with Azure IoT services, I often found myself manually doing tasks that should have been automated. So I automated them. In this post I’ll cover using PowerShell to:

  • create an Azure IoT Hub
  • register an Azure IoT Device
  • generate a SAS Token for the IoT Device to use for authentication to an Azure IoT Hub from a Mongoose OS enabled ESP8266 micro-controller

IoT Integration

Prerequisites

In order to fully test this, ideally you will have a micro-controller. I’m using an ESP8266 based micro-controller like this one. If you want to test this out without physical hardware, you could generate your own DeviceID (any text string) and use the AzureIoT Library detailed further on to send MQTT messages.

You will also require an Azure subscription. I detail using a Free Tier Azure IoT Hub, which is limited to 8,000 messages per day. And instead of PowerShell/PowerShell ISE, I’m using Visual Studio Code.

Finally you will need the AzureRM and AzureIoT PowerShell modules. With WMF 5.x you can get them from the PowerShell Gallery with:

install-module AzureRM
install-module AzureIoT

Create an Azure IoT Hub

The script below will create a Free Tier Azure IoT Hub. Change the location (line 15) to the Azure region you will use (the commands on the lines above it list the available regions), the Resource Group name that will be created to hold it (line 18) and the name of the IoT Hub (line 23), and let it rip.

From your micro-controller we will need the DeviceID. I’m using the ID generated by the device which I obtained from the Device Configuration => Expert View of my Mongoose OS enabled ESP8266.

Device Config.PNG

Register the IoT Device with our Azure IoT Hub

Using the AzureIoT PowerShell module we can automate the creation/registration of the IoT Device. Update the script below with the name of your IoT Hub and the Resource Group that contains it, which you created earlier (lines 7 and 11), and update line 21 with the DeviceID of your new IoT Device. With WMF 5.x you can install the AzureIoT module quickly from the gallery with install-module AzureIoT.

Looking at our IoTHub in the Azure Portal we can see the newly registered IoT Device.

DeviceCreated.png

Generate an IoT Device SAS Token

The final step is to create a SAS Token for our IoT Device to use to connect to the Azure IoT Hub. Historically you would use the IoT Device Explorer to do that. Alternatively you can also use the code samples to implement the SAS Device Token generation via an Azure Function App (examples exist for JavaScript and C#). However, as of mid-January 2018 you can do it directly from VS Code or Azure Cloud Shell using the Azure CLI and the IoT extension. I’m using this method here as it is the quickest and simplest way of generating the Device SAS Token.

The command to generate a token that would work for all Devices on an IoT Hub is:

az iot hub generate-sas-token --hub-name <your-iot-hub-name>

Here I show executing it via the Azure Cloud Shell after installing the IoT extension as detailed here. To open the Bash Cloud Shell select the >_ icon next to the notification bell in the top right menu.

Generate IOT Device SAS Token.PNG

As we have done everything else via PowerShell and VS Code, we can also do it easily from VS Code. Install the Azure CLI Tools extension (v0.4.0 or later) in VS Code as detailed here. Then from within VS Code press Control + Shift + P to open the Command Palette and enter Azure: Sign In, and sign in to Azure. Then press Control + Shift + P again and enter Azure: Open Bash in Cloud Shell to open a Bash Azure CLI shell. You can check whether you already have the Azure CLI IoT extension (if you’ve previously used the Azure CLI for IoT operations) by typing:

az extension show --name azure-cli-iot-ext

and install it if you don’t with:

az extension add --name azure-cli-iot-ext

Then run the same command from VS Code to generate the SAS Token:

az iot hub generate-sas-token --hub-name <your-iot-hub-name>

VSCode Generate SAS Token.PNG

NOTE: That token can then be used for any Device registered with that IoT Hub. Best practice is to have a token per device. To do that, type:

az iot hub generate-sas-token --hub-name <your-iot-hub-name> --device-id <your-device-id>

Generate SAS Token VS Code Per Device.PNG

By default you will get a token valid for 1 hour. Use the --duration switch to specify the token lifetime (in seconds) you require for your environment.
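For example, to generate a device-scoped token valid for 24 hours (86,400 seconds), using the same placeholders:

az iot hub generate-sas-token --hub-name <your-iot-hub-name> --device-id <your-device-id> --duration 86400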

We can now take the SAS Token and put it into our MQTT Config on our Mongoose OS IoT Device. Update the Device Configuration using Expert View and Save.

Mongoose SAS Config.PNG

We can then test our IoT Device sending updates to our Azure IoT Hub. Update Init.js using the telemetry sample code from Mongoose.

// Load the Mongoose OS APIs used below
load('api_config.js');
load('api_mqtt.js');
load('api_sys.js');
load('api_timer.js');

// Azure IoT Hub receives device-to-cloud messages on this MQTT topic
let topic = 'devices/' + Cfg.get('device.id') + '/messages/events/';

// Every second, publish the device's free RAM as a JSON telemetry message
Timer.set(1000, true /* repeat */, function() {
  let msg = JSON.stringify({ ram: Sys.free_ram() });
  let ok = MQTT.pub(topic, msg, 1);
  print(ok, topic, '->', msg);
}, null);

We can then see the telemetry being sent to our Azure IoT Hub using MQTT. In the device logs, after the datestamp and before devices/, if you see a 0 instead of a 1 (as shown below) then your connection information or SAS Token is not correct.

Mongoose IOT Events.png

On the Azure IoT side we can then check the metrics and see the incoming telemetry using the Telemetry Metrics Sent counter, as shown below.

Telemetry Metrics Sent.PNG

If you don’t have an IoT Device you can simulate one using PowerShell. The following example shows sending a message to our IoT Hub (using variables from previous scripts).

# Get the device's keys from the IoT Hub (uses variables from the previous scripts)
$deviceParams = @{
    iotConnString = $IoTConnectionString
    deviceId      = $deviceID
}
$deviceKeys = Get-IoTDeviceKey @deviceParams

# Get a device client to talk to the IoT Hub as that device
$device = Get-IoTDeviceClient -iotHubUri $IOTHubDeviceURI -deviceId $deviceID -deviceKey $deviceKeys.DevicePrimaryKey

# Send a test message
$deviceMessageParams = @{
    deviceClient  = $device
    messageString = "Azure IOT Hub"
}
Send-IoTDeviceMessage @deviceMessageParams

Summary

Using PowerShell we have quickly been able to:

  • Create an Azure IoT Hub
  • Register an IoT Device
  • Generate the SAS Token for the IoT Device to authenticate to our IoT Hub with
  • Configure our IoT Device to send telemetry to our Azure IoT Hub and verify integration/connectivity

We are now ready to implement logic onto our IoT Device for whatever it is you are looking to achieve.

 

Using Intune and AAD to protect against Spectre and Meltdown

Kieran Jacobsen is a Melbourne based IT professional specialising in Microsoft infrastructure, automation and security. Kieran is Head of Information Technology for Microsoft partner, Readify.

I’m a big fan of Intune’s device compliance policies and Azure Active Directory’s (AAD) conditional access rules. They’re one piece of the puzzle in moving to a BeyondCorp model, which I believe is the future of enterprise networks.

Compliance policies allow us to define what it takes for a device (typically a client) to be considered secure. The rules could include the use of a password, encryption, OS version or even if a device has been jail-broken or rooted. In Intune we can define policies for Windows 8.1 and 10, Windows Phone, macOS, iOS and Android.

One critical thing to highlight is that compliance policies don’t enforce settings and don’t make changes to a device. They’re simply a decision-making tool that allows Intune (and AAD) to determine the status of the device. If we want to make changes to a device, we need to use Intune configuration policies. It’s up to the admin or the user to make a non-compliant device compliant.

A common misconception with compliance policies is that the verification process occurs in real-time, that is, that the device’s compliance status is checked when a user tries to log in. In fact the check occurs on an hourly basis, though users and admins can trigger a check manually.

The next piece of the puzzle is conditional access policies. These are policies that allow us to target different sign-in experiences for different applications, devices and user accounts. A user on a compliant device may receive a different sign-in experience to someone using a web browser on some random unknown device.

How compliance policies and conditional access work together

To understand how compliance policies and conditional access work together, let’s look at a user story.

Fred works in the Accounting department at Capital Systems. Fred has a work PC issued by Capital’s IT Team, and a home PC that he bought from a local computer store.

The IT team has defined two Conditional Access policies:

  • For Office 365: a user can connect from a compliant device, or needs to pass an MFA check.
  • For the finance system: the user can only connect from a compliant device and must pass an MFA check.

How does this work in practice?

When Fred tries to access his email from his work device, perhaps through a browser, AAD will check his device’s compliance status during login. As Fred’s work PC is compliant, it will allow access to his email.

Fred now goes home; on the train he remembers he forgot to reply to an important email. When Fred gets home, he starts his home PC and navigates to the Office 365 portal. This time AAD doesn’t know the device, so it will treat it as non-compliant and Fred will be prompted to complete MFA before he can access his email.

Things are different for Fred when he tries to access Capital’s finance system. Fred will be able to access this system from his work PC as it’s compliant, assuming he also completes an MFA request. Fred won’t be able to access the finance system from his home PC, as that device isn’t compliant.

These rules allow Capital Systems’ IT team to govern who can access an application, which devices they can access it from, and whether they need to complete MFA.

Ensuring Spectre and Meltdown Patches are installed

We can use compliance policies to check whether a device’s OS version contains the Spectre and Meltdown patches. When Intune checks the device’s compliance, if it isn’t running the expected patch level it will be marked as non-compliant.

What does this mean for the user? In Fred’s case, if his work PC lacks those updates, he may receive extra MFA prompts and lose access to the finance system until he installs the right patches.

The Intune portal and Power BI can be used to generate reports on device compliance and identify devices that need attention. You can also configure Intune to email a user when their device becomes non-compliant. This email can be customised; I recommend that you include a link to a remediation guide or to your support system.

Configuring Intune Compliance Policies

Compliance policies can be created and modified in the Azure Portal via the Intune panel. Simply navigate to Device Compliance and then Policies. You’ll need to create a separate policy for each OS for which you want to manage compliance.

Within a compliance policy, we specify an OS version using a “major.minor.build” formatted string.

The major version numbers are:

  • Windows 10 – 10.0 (note that the .0 is important)
  • Windows 8.1 – 3
  • macOS – 10

We can express things like Windows 10 Fall Creators Update or macOS High Sierra using the minor version number.

  • Windows 10 Fall Creators Update – 10.0.16299
  • macOS High Sierra – 10.13

Finally, we can narrow down to a specific release or patch by using the build version number. For instance, the January updates for each platform are:

  • Windows 10 Fall Creators Update – 10.0.16299.192
  • macOS High Sierra – 10.13.2
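If you’re not sure which build a particular Windows 10 device is currently running (and therefore which string to enter), a quick way to check is with PowerShell. This is a general sketch that reads standard Windows registry values; it isn’t Intune-specific:

# Read the Windows version, build and update build revision (UBR) from the registry
$cv = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
'{0}.{1}.{2}.{3}' -f $cv.CurrentMajorVersionNumber, $cv.CurrentMinorVersionNumber, $cv.CurrentBuildNumber, $cv.UBR
# e.g. 10.0.16299.192 on a Fall Creators Update machine with the January 2018 patches installed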

You can specify the minimum and maximum OS version by navigating to Properties, Settings and then Device Properties.

 

Setting the minimum Windows 10 version in a compliance policy.

Setting the minimum macOS version in a compliance policy.

Once you have made this change, devices that don’t meet the minimum version will be marked as non-compliant during their next compliance evaluation.

Kieran Jacobsen

Automate deployment pipeline tasks using Gulpjs APIs


Introduction

In this post I will be talking about the Gulpjs APIs and how Gulp can be useful in automating deployment tasks. In a greenfield project there are a lot of post-development tasks that a developer has to focus on besides development, and with CI/CD now in focus, post-deployment tasks are expected to be automated to make the deployment pipeline more consistent and repeatable. These repetitive and common tasks not only add to the project time and effort but also take the developer’s focus away from the primary task.

Overview

In the JavaScript ecosystem there are many tool libraries that help the developer with various coding tasks. One of the most common is linting libraries, which are helpful during the development phase. A good linting tool will catch unhandled errors early and can also help make sure a project adheres to a coding standard. Another useful category from the toolkit is the task runner, which helps automate time-consuming tasks that have to be done over and over again.

Task runner tools are used to automate those repetitive and time consuming tasks.

GulpJs

Gulp is another tool out of the JavaScript toolbox. It is a stream-based build system and JavaScript task runner that lets you automate common tasks in a very simple manner.

Gulp has become very popular over the years and comes with a huge library of plugins. Gulp automates tasks such as handling the build pipeline, watching for changes, minifying and concatenating CSS or JavaScript files, handling vendor prefixes, and preparing JavaScript/HTML/CSS files for production or testing.

Gulp passes files through a stream: it starts with a set of source files, processes those files in the stream, and delivers the processed output at the end of the stream. The Gulpjs APIs are used to handle input, alter files and deliver output during the different phases. In this post I will focus on gulp.task, gulp.src, gulp.dest and gulp.watch.


In most project deployment scenarios, constant values or configuration variables have to be managed per environment, files need to be updated, or certain HTML needs to be injected at build time. This is where the Gulp APIs can be handy to automate those repetitive tasks in a consistent way. Gulp helps to improve quality, deliver faster and provides better transparency and control over the process.

Gulp APIs

gulp.task(name[, deps], fn): As you can see in the signature, gulp.task takes three parameters. The first defines the name of the task (for example “customTask”); the second, the dependencies array, is optional and only needs to be passed if there are dependent tasks that must run first; the last parameter is the callback function.

In the code snippet below, after the task definition, gulp.src takes in the path of the files to be parsed. Once the source has the files, they are piped to the next step to be altered. In this example it performs a minify action on the JS files, and in the last step it pipes them to the destination path defined in gulp.dest.

More steps can be added to the pipe here to alter the files or perform more actions on them before passing them to gulp.dest, as the sketch below shows.
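The original snippet isn’t reproduced in this version of the post, so the following is a minimal sketch of that kind of task, assuming gulp 3.x syntax with gulp-uglify standing in as the minifier and placeholder paths:

// gulpfile.js - minimal sketch; 'customTask' and the paths are placeholders
var gulp = require('gulp');
var uglify = require('gulp-uglify'); // example minifier plugin

gulp.task('customTask', function () {
  return gulp.src('src/scripts/*.js')      // source files to be parsed
    .pipe(uglify())                        // alter the files in the stream (minify)
    .pipe(gulp.dest('dist/scripts'));      // deliver the processed output
});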

gulp.src(glob[, options]): looks for files matching the glob pattern that we want to pass into the stream and reads them in.

gulp.dest(folder[, options]): this API defines where we are going to send our files once they have been altered in the gulp pipeline.

gulp.watch(glob[, options], tasks or callback): watches for an event/change on the file system matching the glob string or array parameter. If the watcher notices a change in files matching the glob, it will execute a task or series of tasks from the tasks array (in the example described below, task1 and task2). If required, a callback function can be added instead of passing in the tasks array.

Gulp watch is useful to run tests on changed files, look for code changes and perform tasks.

In the general syntax shown in the sketch below, a gulp task is defined with two parameters: the task name and a callback function. Inside it, gulp.watch takes two parameters: the first is the path of the files to be watched, the second is the single task or array of tasks it will execute. We can also add a callback function to the watch task.
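A minimal sketch of that watch syntax (the task names and path are placeholders):

// Placeholder tasks - in a real gulpfile these would do useful work
var gulp = require('gulp');

gulp.task('task1', function () { console.log('task1 ran'); });
gulp.task('task2', function () { console.log('task2 ran'); });

// Re-run task1 and task2 whenever a matching source file changes
gulp.task('watch', function () {
  gulp.watch('src/scripts/*.js', ['task1', 'task2']);
});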

Example

I will stick to the earlier example of updating configuration variables during deployment, and we can consider an SPFx project to understand it. Web parts usually have constant values that are used across different methods, and the values of those constants change with the environment they are deployed to. To make this consistent with the build and more robust, we can use a gulp task to define a build task.

//Command
gulp setConstants --env=uat

//File path read by the gulp source command: it will look for *.base.json files in the folder path shown in the sketch below


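The author’s original task definition isn’t embedded in this version of the post, so here is a minimal sketch of the kind of task being described; the folder paths, the __ENVIRONMENT__ token and the gulp-replace, gulp-rename and yargs packages are illustrative assumptions:

// gulpfile.js - 'setConstants' sketch (gulp 3.x style)
var gulp = require('gulp');
var replace = require('gulp-replace'); // simple token substitution
var rename = require('gulp-rename');   // drops the '.base' suffix on output
var argv = require('yargs').argv;      // reads --env=uat from the command line

gulp.task('setConstants', function () {
  var env = argv.env || 'dev';
  return gulp.src('config/env/*.base.json')   // e.g. constants.base.json
    .pipe(replace('__ENVIRONMENT__', env))    // swap the placeholder token for the environment name
    .pipe(rename(function (p) { p.basename = p.basename.replace('.base', ''); }))
    .pipe(gulp.dest('config'));               // write the environment-specific constants file
});

With a sketch like this, running gulp setConstants --env=uat would emit a constants file with the UAT value substituted in.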
In the code snippet above, a custom task called “setConstants” is added to the pipeline and an “env” parameter is passed in. When the task command is run it will read the files in the source path defined (the “config/env” folder) and pipe them to a function or another task to alter them. Once the files are altered they can be sent to the destination using the gulp.dest API.

This will help automate those repetitive update tasks and make the deployment pipeline more consistent.

Hopefully this helped in getting an understanding of the Gulpjs APIs and how they can be used in a project to automate tasks during deployment. Happy coding!