Xamarin Forms – Platform Specifics (iOS) : Blur Effect

As a Xamarin mobile developer, have you ever wondered why we need to write so much code in the PCL and iOS projects just to use a simple native feature, when many of these are one-liners natively?

Xamarin has now introduced a nifty feature that lets us write this kind of code directly in Xamarin.Forms: Platform Specifics.

In short, Platform Specifics let us consume features or functionality that are only available on one platform, such as iOS, without needing to implement custom renderers or effects in that platform's project.

One of the best examples for understanding this feature is the blur effect. Platform Specifics are baked into Xamarin.Forms and ready to use.

Below are the steps to test this feature.

Create a Xamarin.Forms project.

Namespaces: it is important to understand XAML namespaces when working with Platform Specifics. Below is the namespace declaration required on each page that uses them:

xmlns:ios="clr-namespace:Xamarin.Forms.PlatformConfiguration.iOSSpecific;assembly=Xamarin.Forms.Core"

Blur Effect

Below is how we can define the blur effect on a BoxView; the effect can be applied to any visual element in Xamarin.Forms.

<BoxView x:Name="boxView" ios:VisualElement.BlurEffect="Dark" HeightRequest="50" WidthRequest="50" />

The blur effect is set via an enumeration with the following values:

  1. Dark – applies a dark blur effect
  2. Light – applies a light blur effect
  3. ExtraLight – applies an extra-light blur effect
  4. None – applies no blur effect

Below is sample code demonstrating the various blur effects:

<StackLayout>
    <Image Source="Aus.png" HeightRequest="50" WidthRequest="50" />
    <BoxView x:Name="boxView" ios:VisualElement.BlurEffect="Dark" HeightRequest="50" WidthRequest="50" />
</StackLayout>
<StackLayout>
    <Image Source="Aus.png" HeightRequest="50" WidthRequest="50" />
    <BoxView x:Name="boxView1" ios:VisualElement.BlurEffect="Light" HeightRequest="50" WidthRequest="50" />
</StackLayout>
<StackLayout>
    <Image Source="Aus.png" HeightRequest="50" WidthRequest="50" />
    <BoxView x:Name="boxView2" ios:VisualElement.BlurEffect="ExtraLight" HeightRequest="50" WidthRequest="50" />
</StackLayout>

Below is the sample output on iOS

[Screenshot: blur effect sample output on iOS]

The sample is available on GitHub:

https://github.com/divikiran/PlatformSpecficBlurEffect.git

Microsoft Teams Q&A

This page is a collection of Microsoft Teams and Skype for Business related questions and answers. It’s regularly updated as more information becomes available.

Microsoft Teams Q&A – Last Updated: 15th March 2018

Q: What is Microsoft Teams?

A: Microsoft Teams is a complete communications platform that takes the best bits of Skype for Business, Yammer, SharePoint, email and other web sources and presents them in one easy-to-use application. You can send IMs, make voice and video calls, make phone calls, share documents, and collaborate, all from within the one application.

Q: How do I get Microsoft Teams?

A: Simple! Sign up for an Office 365 plan over at http://www.office.com. If you’re already an Office 365 subscriber, check to see if Teams is available – it more than likely is!

Q: We currently use Skype for Business and Yammer. Should we make the switch to Teams?

A: Yes, but gradually. At some point in the future, Microsoft Teams will likely completely replace Skype for Business Online in Office 365 (on-premises versions of Skype are safe for now). It’s wise to start evaluating Teams now so that you can familiarise yourself and prepare before your organisation moves across.

You can also check out this short video on the subject of Yammer and Teams.

Q: We currently use Slack, and our devs aren’t going to want to give up Slack easily.

A: Sure, change is scary. However, the ability to collaborate so simply in multiple ways within Teams makes it the ideal platform for dev teams to effectively work on. Give it a go!

Q: What about the investments we have made in Skype for Business hardware (phones, meeting room devices) and licensing?

A: Vendors are working hard to ensure their devices are compatible with Microsoft Teams. Right now, if the device is a year or two old and works with Skype for Business, there’s a good chance that with a firmware update from the manufacturer, it’ll work with Teams too.

Q: Are we able to develop custom applications for Teams?

A: Absolutely! There’s an entire section dedicated to creating custom applications for Microsoft Teams over at https://docs.microsoft.com/en-us/microsoftteams/platform/#pivot=home&panel=home-all

Q: I want to be able to send an SMS from Teams

A: Sure. You could write a custom application!

Q: For Teams to standard SIP video interop, will there be an open API available to develop against, or should we depend only on Teams-partnered service providers?

A: My understanding is that right now there’s only Teams/Skype integration. Whether that will change is a great question. Keep your eyes peeled on the Office 365 Roadmap.

Q: Will Teams support open Federation with other (on-premise or 3rd-party hosted) Skype for Business deployments?

A: Federated communication between Teams and other Skype for Business environments is on the Office 365 roadmap (https://products.office.com/en-us/business/office-365-roadmap?filters=#abc) and is currently in development.

Q: Should I be able to copy and paste Screenshots into the Wiki Feature? They keep disappearing.

A: From my testing, I am able to copy and paste screenshots that I took with the snipping tool in Windows into Wiki pages, and they remain in place.

Q: Am I able to invite people from other organisations into Teams chats, file sharing and other things?

A: Yes! Microsoft recently announced that guest access in Microsoft Teams is now fully rolled out, meaning you can now invite anyone into a Microsoft Teams team to chat, share files and more!

Do you have a question about Skype or Teams? Leave a comment below or tweet @cchiffers

Cosmos DB Server-Side Programming with TypeScript – Part 6: Build and Deployment

So far in this series we’ve been compiling our server-side TypeScript code to JavaScript locally on our own machines, and then copying and pasting it into the Azure Portal. However, an important part of building a modern application – especially a cloud-based one – is having a reliable automated build and deployment process. There are a number of reasons why this is important, ranging from ensuring that a developer isn’t building code on their own machine – and therefore may be subject to environmental variations or differences that cause different outputs – through to running a suite of tests on every build and release. In this post we will look at how Cosmos DB server-side code can be built and released in a fully automated process.

This post is part of a series:

  • Part 1 gives an overview of the server side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 (this post) explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Build and Release Systems

There are a number of services and systems that provide build and release automation. These include systems you need to install and manage yourself, such as Atlassian Bamboo, Jenkins, and Octopus Deploy, through to managed systems like Amazon CodePipeline/CodeBuild, Travis CI, and AppVeyor. In our case, we will use Microsoft’s Visual Studio Team Services (VSTS), which is a managed (hosted) service that provides both build and release pipeline features. However, the steps we use here can easily be adapted to other tools.

I will assume that you have a VSTS account, that you have loaded the code into a source code repository that VSTS can access, and that you have some familiarity with the VSTS build and release system.

Throughout this post, we will use the same code that we used in part 5 of this series, where we built and tested our stored procedure. The exact same process can be used for triggers and user-defined functions as well. I’ll assume that you have a copy of the code from part 5 – if you want to download it, you can get it from the GitHub repository for that post. If you want to refer to the finished version of the whole project, you can access it on GitHub here.

Defining our Build Process

Before we start configuring anything, let’s think about what we want to achieve with our build process. I find it helpful to think about the start point and end point of the build. We know that when we start the build, we will have our code within a Git repository. When we finish, we want to have two things: a build artifact in the form of a JavaScript file that is ready to deploy to Cosmos DB; and a list of unit test results. Additionally, the build should pass if all of the steps ran successfully and the tests passed, and it should fail if any step or any test failed.

Now that we have the start and end points defined, let’s think about what we need to do to get us there.

  • We need to install our NPM packages. On VSTS, every time we run a build our build environment will be reset, so we can’t rely on any files being there from a previous build. So the first step in our build pipeline will be to run npm install.
  • We need to build our code so it’s ready to be tested, and then we need to run the unit tests. In part 5 of this series we created an NPM script to help with this when we run locally – and we can reuse the same script here. So our second build step will be to run npm run test.
  • Once our tests have run, we need to report their results to VSTS so it can visualise them for us. We’ll look at how to do this below. Importantly, VSTS won’t fail the build automatically if there are any test failures, so we’ll look at how to do this ourselves shortly.
  • If we get to this point in the build then our code is successfully passing the tests, so now we can create the real release build. Again we have already defined an NPM script for this, so we can reuse that work and call npm run build.
  • Finally, we can publish the release JavaScript file as a build artifact, which makes it available to our release pipeline.

We’ll soon see how we can actually configure this. But before we can write our build process, we need to figure out how we’ll report the results of our unit tests back to VSTS.

Reporting Test Results

When we run unit tests from inside a VSTS build, the unit test runner needs some way to report the results back to VSTS. There are some built-in integrations with common tools like VSTest (for testing .NET code). For Jasmine, we need to use a reporter that we configure ourselves. The jasmine-tfs-reporter NPM package does this for us – its reporter will emit a specially formatted results file, and we’ll tell VSTS to look at this.

Let’s open up our package.json file and add the following line into the devDependencies section:
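
The line needed is the jasmine-tfs-reporter dependency; the version shown here is just an example, so use whichever version of the package is current:

  "jasmine-tfs-reporter": "^1.0.2",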

Run npm install to install the package.

Next, create a file named spec/vstsReporter.ts and add the following lines, which will configure Jasmine to send its results to the reporter we just installed:
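
A minimal sketch of that helper is below. The exact shape of the jasmine-tfs-reporter export varies between versions, so treat the import as an assumption and check the package’s README if registration fails.

// spec/vstsReporter.ts
// Registers the VSTS-friendly reporter with Jasmine so that a results XML file is written.
// Assumption: the package's export can be passed straight to addReporter.
import reporter = require('jasmine-tfs-reporter');

jasmine.getEnv().addReporter(reporter);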

Finally, let’s edit the jasmine.json file. We’ll add a new helpers section, which will tell Jasmine to run that script before it starts running our tests. Here’s the new jasmine.json file we’ll use:
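
A sketch of the updated file is below; the spec_dir and file pattern assume the output/test folder layout we set up in part 5, so adjust them to match your project:

{
  "spec_dir": "output/test/spec",
  "spec_files": ["**/*[sS]pec.js"],
  "helpers": ["vstsReporter.js"],
  "stopSpecOnExpectationFailure": false,
  "random": false
}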

Now run npm run test. You should see that a new testresults folder has been created, and it contains an XML file that VSTS can understand.

That’s the last piece of the puzzle we need to have VSTS build our code. Now let’s see how we can make VSTS actually run all of these steps.

Creating the Build Configuration

VSTS has a great feature – currently in preview – that allows us to specify our build definition in a YAML file, check it into our source control system, and have the build system execute it. More information on this feature is available in a previous blog post I wrote. We’ll make use of this feature here to write our build process.

Create a new file named build.yaml. This file will define all of our build steps. Paste the following contents into the file:
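
Here is a sketch of such a definition. The hosted queue name and the task versions are assumptions; adjust them to whatever is available in your VSTS account.

queue: Hosted VS2017

steps:
- script: npm install
  displayName: Install NPM packages

- script: npm run test
  displayName: Build and run unit tests

- task: PublishTestResults@2
  displayName: Publish test results
  condition: always()
  inputs:
    testResultsFiles: 'testresults/**/*.xml'

- script: npm run build
  displayName: Create release build

- task: PublishBuildArtifacts@1
  displayName: Publish release script as a build artifact
  inputs:
    PathtoPublish: 'output/build'
    ArtifactName: 'drop'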

This YAML file tells VSTS to do the following:

  • Run the npm install command.
  • Run the npm run test command. If we get any test failures, this command will cause VSTS to detect an error.
  • Regardless of whether an error was detected, take the test results that have been saved into the testresults folder and publish them. (Publishing just means showing them within the build; they won’t be publicly available.)
  • If everything worked up till now, run npm run build to build the releasable JavaScript file.
  • Publish the releasable JavaScript file as a build artifact, so it’s available to the release pipeline that we’ll configure shortly.

Commit this file and push it to your Git repository. In VSTS, we can now set up a new build configuration, point it to the YAML file, and let it run. After it finishes, you should see something like this:

[Screenshot: completed build summary in VSTS showing the test results]

We can see that four tests ran and passed. If we click on the Artifacts tab, we can view the artifacts that were published:

[Screenshot: the build's Artifacts tab]

And by clicking the Explore button and expanding the drop folder, we can see the exact file that was created:

[Screenshot: the drop folder expanded to show the built JavaScript file]

You can even download the file from here, and confirm that it looks like what we expect to be able to send to Cosmos DB. So, now we have our code being built and tested! The next step is to actually deploy it to Cosmos DB.

Deciding on a Release Process

Cosmos DB can be used in many different types of applications, and the way that we deploy our scripts can differ as well. In some applications, like those that are heavily server-based and have initialisation logic, we might provision our database, collections, and scripts through our application code. In other systems, like serverless applications, we want to provision everything we need during our deployment process so that our application can immediately start to work. This means there are several patterns we can adopt for installing our scripts.

Pattern 1: Use Application Initialisation Logic

If we have an Azure App Service, Cloud Service, or another type of application that provides initialisation lifecycle events, we can use the initialisation code to provision our Cosmos DB database and collection, and to install our stored procedures, triggers, and UDFs. The Cosmos DB client SDKs provide a variety of helpful methods to do this. For example, the .NET and .NET Core SDKs provide this functionality. If the platform you are using doesn’t have an SDK, you can also use the REST API provided by Cosmos DB.

This approach is also likely to be useful if we dynamically provision databases and collections while our application runs. We can also use this approach if we have an application warmup sequence where the existence of the collection can be confirmed and any missing pieces can be added.

Pattern 2: Initialise Serverless Applications with a Custom Function

When we’re using serverless technologies like Azure Functions or Azure Logic Apps, we may not have the opportunity to initialise our application the first time it loads. We could check the existence of our Cosmos DB resources whenever we are executing our logic, but this is quite wasteful and inefficient. One pattern that can be used is to write a special ‘initialisation’ function that is called from our release pipeline. This can be used to prepare the necessary Cosmos DB resources, so that by the time our callers execute our code, the necessary resources are already present. However, this presents some challenges, including the fact that it necessitates mixing our deployment logic and code with our main application code.

Pattern 3: Deploying from VSTS

The approach that I will adopt in this post is to deploy the Cosmos DB resources from our release pipeline in VSTS. This keeps our release process separate from our main application code, and provides us with the flexibility to use the Cosmos DB resources at any point in our application logic. This may not suit all applications, but for many applications that use Cosmos DB, this type of workflow will work well.

There is a lot more to release configuration than I’ll be able to discuss here – that could easily be its own blog series. I’ll keep this particular post focused just on installing server-side code onto a collection.

Defining the Release Process

As with builds, it’s helpful to think through the process we want the release to follow. Again, we’ll think first about the start and end points. When we start the release pipeline, we will have the build that we want to release (which will include our compiled JavaScript script). For now, I’ll also assume that you have a resource group containing a Cosmos DB account with an existing database and collection, and that you know the account key. In a future post I will elaborate on how some of this process can also be automated, but that is outside the scope of this series. Once the release process finishes, we expect that the collection will have the server-side resource installed and ready to use.

VSTS doesn’t have built-in support for Cosmos DB. However, we can easily use a custom PowerShell script to install Cosmos DB scripts on our collection. I’ve written such a script, and it’s available for download here. The script uses the Cosmos DB API to deploy stored procedures, triggers, and user-defined functions to a collection.

We need to include this script into our build artifacts so that we can use it from our deployment process. So, download the file and save it into a deploy folder in the project’s source repository. Now that we have that there, we need to tell the VSTS build process to include it as an artifact, so open the build.yaml file and add this to the end of the file, being careful to align the spaces and indentation with the sections above it:
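
The addition is another publish step, something like the following (the task version and artifact name are assumptions):

- task: PublishBuildArtifacts@1
  displayName: Publish deployment scripts
  inputs:
    PathtoPublish: 'deploy'
    ArtifactName: 'deploy'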

Commit these changes, and then run a new build.

Now we can set up a release definition in VSTS and link it to our build configuration so it can receive the build artifacts. We only need one step currently, which will deploy our stored procedure using the PowerShell script we included as a build artifact. Of course, a real release process is likely to do a lot more, including deploying your application. For now, though, let’s just add a single PowerShell step, and configure it to run an inline script with the following contents:
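
A sketch of that inline script is below. The PowerShell file name, the artifact paths, the stored procedure name and the placeholder account values are assumptions – substitute your own, and note the advice further down about using VSTS variables rather than hard-coding them.

# Dot-source the deployment functions from the 'deploy' build artifact.
# Assumption: the downloaded script is named DeployCosmosDBScripts.ps1.
. "$(System.DefaultWorkingDirectory)/CosmosServer-CI/deploy/DeployCosmosDBScripts.ps1"

# Install the stored procedure into the Orders collection.
DeployStoredProcedure `
    -AccountName "yourcosmosdbaccount" `
    -AccountKey "yourcosmosdbaccountkey" `
    -DatabaseName "Orders" `
    -CollectionName "Orders" `
    -StoredProcedureName "getGroupedOrders" `
    -SourceFilePath "$(System.DefaultWorkingDirectory)/CosmosServer-CI/drop/sp-getGroupedOrders.js"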

This inline script does the following:

  • It loads in the PowerShell file from our build artifact, so that the functions within that file are available for us to use.
  • It then runs the DeployStoredProcedure function, which is defined in that PowerShell file. We pass in some parameters so the function can contact Cosmos DB:
    • AccountName – this is the name of your Cosmos DB account.
    • AccountKey – this is the key that VSTS can use to talk to Cosmos DB’s API. You can get this from the Azure Portal – open up the Cosmos DB account and click the Keys tab.
    • DatabaseName – this is the name of the database (in our case, Orders).
    • CollectionName – this is the name of the collection (in our case again, Orders).
    • StoredProcedureName – this is the name we want our stored procedure to have in Cosmos DB. This doesn’t need to match the name of the function inside our code file, but I recommend it does to keep things clear.
    • SourceFilePath – this is the path to the JavaScript file that contains our script.

Note that in the script above I’ve assumed that the build configuration’s name is CosmosServer-CI, so that appears in the two file paths. If you have a build configuration that uses a different name, you’ll need to replace it. Also, I strongly recommend you don’t hard-code the account name, account key, database name, and collection name like I’ve done here – you would instead use VSTS variables and have them dynamically inserted by VSTS. Similarly, the account key should be specified as a secret variable so that it is encrypted. There are also other ways to handle this, including creating the Cosmos DB account and collection within your deployment process, and dynamically retrieving the account key. This is beyond the scope of this series, but in a future blog post I plan to discuss some ways to achieve this.

After configuring our release process, it will look something like this:

[Screenshot: the configured release definition in VSTS]

Now that we’ve configured our release process we can create a new release and let it run. If everything has been configured properly, we should see the release complete successfully:

[Screenshot: a successfully completed release]

And if we check the collection through the Azure Portal, we can see the stored procedure has been deployed:

[Screenshot: the stored procedure visible on the collection in the Azure Portal]

This is pretty cool. It means that whenever we commit a change to our stored procedure’s TypeScript file, it can automatically be compiled, tested, and deployed to Cosmos DB – without any human intervention. We could now adapt the exact same process to deploy our triggers (using the DeployTrigger function in the PowerShell script) and UDFs (using the DeployUserDefinedFunction function). Additionally, we can easily make our build and deployments into true continuous integration (CI) and continuous deployment (CD) pipelines by setting up automated builds and releases within VSTS.

Summary

Over this series of posts, we’ve explored Cosmos DB’s server-side programming capabilities. We’ve written a number of server-side scripts including a UDF, a stored procedure, and two triggers. We’ve written them in TypeScript to ensure that we’re using strongly typed objects when we interact with Cosmos DB and within our own code. We’ve also seen how we can unit test our code using Jasmine. Finally, in this post, we’ve looked at how our server-side scripts can be built and deployed using VSTS and the Cosmos DB API.

I hope you’ve found this series useful! If you have any questions or similar topics that you’d like to know more about, please post them in the comments below.

Key Takeaways

  • Having an automated build and release pipeline is very important to ensure reliable, consistent, and safe delivery of software. This should include our Cosmos DB server-side scripts.
  • It’s relatively easy to adapt the work we’ve already done with our build scripts to work on a build server. Generally it will simply be a matter of executing npm install and then npm run build to create a releasable build of our code.
  • We can also run our unit tests by simply executing npm run test.
  • Test results from Jasmine can be published into VSTS using the jasmine-tfs-reporter package. Other integrations are available for other build servers too.
  • Deploying our server-side scripts onto Cosmos DB can be handled in different ways for different applications. With many applications, having server-side code deployed within an existing release process is a good idea.
  • VSTS doesn’t have built-in support for Cosmos DB, but I have provided a PowerShell script that can be used to install stored procedures, triggers, and UDFs.
  • You can view the code for this post on GitHub.

Cosmos DB Server-Side Programming with TypeScript – Part 5: Unit Testing

Over the last four parts of this series, we’ve discussed how we can write server-side code for Cosmos DB, and the types of situations where it makes sense to do so. If you’re building a small sample application, you now have enough knowledge to go and build out UDFs, stored procedures, and triggers. But if you’re writing production-grade applications, there are two other major topics that need discussion: how to unit test your server-side code, and how to build and deploy it to Cosmos DB in an automated and predictable manner. In this part, we’ll discuss testing. In the next part, we’ll discuss build and deployment.

This post is part of a series:

  • Part 1 gives an overview of the server side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 (this post) discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Unit Testing Cosmos DB Server-Side Code

Testing JavaScript code can be complex, and there are many different ways to do it and different tools that can be used. In this post I will outline one possible approach for unit testing. There are other ways that we could also test our Cosmos DB server-side code, and your situation may be a bit different to the one I describe here. Some developers and teams place different priorities on some of the aspects of testing, so this isn’t a ‘one size fits all’ approach. In this post, the testing approach we will build out allows for:

  • Mocks: mocking allows us to pass in mocked versions of our dependencies so that we can test how our code behaves independently of a working external system. In the case of Cosmos DB, this is very important: the getContext() method, which we’ve looked at throughout this series, provides us with access to objects that represent the request, response, and collection. Our code needs to be tested without actually running inside Cosmos DB, so we mock out the objects it sends us.
  • Spies: spies are often a special type of mock. They allow us to inspect the calls that have been made to the object to ensure that we are triggering the methods and side-effects that we expect.
  • Type safety: as in the rest of this series, it’s important to use strongly typed objects where possible so that we get the full benefit of the TypeScript compiler’s type system.
  • Working within the allowed subset of JavaScript: although Cosmos DB server-side code is built using the JavaScript language, it doesn’t provide all of the features of JavaScript. This is particularly important when testing our code, because many test libraries make assumptions about how the code will be run and the level of JavaScript support that will be available. We need to work within the subset of JavaScript that Cosmos DB supports.

I will assume some familiarity with these concepts, but even if they’re new to you, you should be able to follow along. Also, please note that this series only deals with unit testing. Integration testing your server-side code is another topic, although it should be relatively straightforward to write integration tests against a Cosmos DB server-side script.

Challenges of Testing Cosmos DB Server-Side Code

Cosmos DB ultimately executes JavaScript code, and so we will use JavaScript testing frameworks to write and run our unit tests. Many of the popular JavaScript and TypeScript testing frameworks and helpers are designed specifically for developers who write browser-based JavaScript or Node.js applications. Cosmos DB has some properties that can make these frameworks difficult to work with.

Specifically, Cosmos DB doesn’t support modules. Modules in JavaScript allow for individual JavaScript files to expose a public interface to other blocks of code in different files. When I was preparing for this blog post I spent a lot of time trying to figure out a way to handle the myriad testing and mocking frameworks that assume modules are able to be used in our code. Ultimately I came to the conclusion that it doesn’t really matter if we use modules inside our TypeScript files as long as the module code doesn’t make it into our release JavaScript files. This means that we’ll have to build our code twice – once for testing (which includes the module information we need), and again for release (which doesn’t include modules). This isn’t uncommon – many development environments have separate ‘Debug’ and ‘Release’ build configurations, for example – and we can use some tricks to achieve our goals while still getting the benefit of a good design-time experience.

Defining Our Tests

We’ll be working with the stored procedure that we built out in part 3 of this series. The same concepts could be applied to unit testing triggers, and also to user-defined functions (UDFs) – and UDFs are generally easier to test as they don’t have any context variables to mock out.

Looking back at the stored procedure, its purpose is to return the list of customers who have ordered any of a specified list of product IDs, grouped by product ID. An initial set of test cases might be as follows:

  1. If the productIds parameter is empty, the method should return an empty array.
  2. If the productIds parameter contains one item, it should execute a query against the collection containing the item’s identifier as a parameter.
  3. If the productIds parameter contains one item, the method should return a single CustomersGroupedByProduct object in the output array, which should contain the productId that was passed in, and whatever customerIds the mocked collection query returned.
  4. If the method is called with a valid productIds array, and the queryDocuments method on the collection returns false, an error should be returned by the function.

You might have others you want to focus on, and you may want to split some of these out – but for now we’ll work with these so we can see how things work. Also, in this post I’ll assume that you’ve got a copy of the stored procedure from part 3 ready to go – if you haven’t, you can download it from the GitHub repository for that part.

If you want to see the finished version of the whole project, including the tests, you can access it on GitHub here.

Setting up TypeScript Configurations

The first change we’ll need to make is to change our TypeScript configuration around a bit. Currently we only have one tsconfig.json file that we use to build. Now we’ll need to add a second file. The two files will be used for different situations:

  • tsconfig.json will be the one we use for local builds, and for running unit tests.
  • tsconfig.build.json will be the one we use for creating release builds.

First, open up the tsconfig.json file that we already have in the repository. We need to change it to the following:
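
A sketch of the updated tsconfig.json is below; compiler options other than module and outDir are carried over from earlier in the series and are assumptions, so keep whatever settings you already have:

{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "outDir": "output/test"
  },
  "include": [
    "src/**/*.ts",
    "spec/**/*.ts"
  ]
}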

The key changes we’re making are:

  • We’re now including files from the spec folder in our build. This folder will contain the tests that we’ll be writing shortly.
  • We’ve added the line "module": "commonjs". This tells TypeScript that we want to compile our code with module support. Again, this tsconfig.json will only be used when we run our builds locally or for running tests, so we’ll later make sure that the module-related code doesn’t make its way into our release builds.
  • We’ve changed from using outFile to outDir, and set the output directory to output/test. When we use modules like we’re doing here, we can’t use the outFile setting to combine our files together, but this won’t matter for our local builds and for testing. We also put the output files into a test subfolder of the output folder so that we keep things organised.

Now we need to create a tsconfig.build.json file with the following contents:
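
Again, this is a sketch; the target setting is an assumption, while the include pattern and outFile follow the description below:

{
  "compilerOptions": {
    "target": "es5",
    "module": "none",
    "outFile": "output/build/sp-getGroupedOrders.js"
  },
  "include": [
    "src/**/*.ready.ts"
  ]
}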

This looks more like the original tsconfig.json file we had, but there are a few minor differences:

  • The include element now looks for files matching the pattern *.ready.ts. We’ll look at what this means later.
  • The module setting is explicitly set to none. As we’ll see later, this isn’t sufficient to get everything we need, but it’s good to be explicit here for clarity.
  • The outFile setting – which we can use here because module is set to none – is going to emit a JavaScript file within the build subfolder of the output folder.

Now let’s add the testing framework.

Adding a Testing Framework

In this post we’ll use Jasmine, a testing framework for JavaScript. We can import it using NPM. Open up the package.json file and replace it with this:
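
Something along these lines; the package name and version numbers are illustrative assumptions, so keep your existing entries and just add the new devDependencies and the test script:

{
  "name": "cosmos-server-side-scripts",
  "private": true,
  "scripts": {
    "build": "tsc -p tsconfig.build.json",
    "test": "tsc && jasmine --config=jasmine.json"
  },
  "devDependencies": {
    "typescript": "^2.7.2",
    "jasmine": "^3.1.0",
    "@types/jasmine": "^2.8.6",
    "moq.ts": "^2.7.0"
  }
}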

There are a few changes to our previous version:

  • We’ve now imported the jasmine module, as well as the Jasmine type definitions, into our project; and we’ve imported moq.ts, a mocking library, which we’ll discuss below.
  • We’ve also added a new test script, which will run a build and then execute Jasmine, passing in a configuration file that we will create shortly.

Run npm install from a command line/terminal to restore the packages, and then create a new file named jasmine.json with the following contents:
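
A sketch of jasmine.json is below; the spec_dir assumes the compiled tests end up under output/test/spec, as per the tsconfig.json above:

{
  "spec_dir": "output/test/spec",
  "spec_files": ["**/*[sS]pec.js"],
  "stopSpecOnExpectationFailure": false,
  "random": false
}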

We’ll understand a little more about this file as we go on, but for now, we just need to understand that this file defines the Jasmine specification files that we’ll be testing against. Now let’s add our Jasmine test specification so we can see this in action.

Starting Our Test Specification

Let’s start by writing a simple test. Create a folder named spec, and within it, create a file named getGroupedOrdersImpl.spec.ts. Add the following code to it:
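
A sketch of this first test is below; the relative import path assumes the stored procedure lives in src/getGroupedOrders.ts, as in part 3:

// spec/getGroupedOrdersImpl.spec.ts
import { getGroupedOrdersImpl } from '../src/getGroupedOrders';

describe('getGroupedOrdersImpl', () => {
    it('should return an empty array', () => {
        // Pass an empty list of product IDs and a null collection,
        // since the collection should never be queried in this case.
        const result = getGroupedOrdersImpl([], null);
        expect(result).toEqual([]);
    });
});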

This code does the following:

  • It sets up a new Jasmine spec named getGroupedOrdersImpl. This is the name of the method we’re testing for clarity, but it doesn’t need to match – you could name the spec whatever you want.
  • Within that spec, we have a test case named should return an empty array.
  • That test executes the getGroupedOrdersImpl function, passing in an empty array, and a null object to represent the Collection.
  • Then the test confirms that the result of that function call is an empty array.

This is a fairly simple test – we’ll see a slightly more complex one in a moment. For now, though, let’s get this running.

There’s one step we need to do before we can execute our test. If we tried to run it now, Jasmine would complain that it can’t find the getGroupedOrdersImpl method. This is because of the way that JavaScript modules work. Our code needs to export its externally accessible methods so that the Jasmine test can see it. Normally, exporting a module from a Cosmos DB JavaScript file will mean that Cosmos DB doesn’t accept the file anymore – we’ll see a solution to that shortly.

Open up the src/getGroupedOrders.ts file, and add the following at the very bottom of the file:
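
export { getGroupedOrdersImpl };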

The export statement sets up the necessary TypeScript compilation instruction to allow our Jasmine test spec to reach this method.

Now let’s run our test. Execute npm run test, which will compile our stored procedure (including the export), compile the test file, and then execute Jasmine. You should see that Jasmine executes the test and shows 1 spec, 0 failures, indicating that our test successfully ran and passed. Now let’s add some more sophisticated tests.

Adding Tests with Mocks and Spies

When we’re testing code that interacts with external services, we often will want to use mock objects to represent those external dependencies. Most mocking frameworks allow us to specify the behaviour of those mocks, so we can simulate various conditions and types of responses from the external system. Additionally, we can use spies to observe how our code calls the external system.

Jasmine provides a built-in mocking framework, including spy support. However, the Jasmine mocks don’t support TypeScript types, and so we lose the benefit of type safety. In my opinion this is an important downside, and so instead we will use the moq.ts mocking framework. You’ll see we have already installed it in the package.json.

Since we’ve already got it available to us, we need to add this line to the top of our spec/getGroupedOrdersImpl.spec.ts file:
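
import { Mock, It, Times } from 'moq.ts';

(Mock, It and Times are the moq.ts types used in the tests below; if your version of moq.ts exposes them differently, adjust the import accordingly.)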

This tells TypeScript to import the relevant mocking types from the moq.ts module. Now we can use the mocks in our tests.

Let’s set up another test, in the same file, as follows:

This test does a little more than the last one:

  • It sets up a mock of the ICollection interface.
  • This mock will send back a hard-coded string (self-link) when the getSelfLink() method is called.
  • It also provides mock behaviour for the queryDocuments method. When the method is called, it invokes the callback function, passing back a list of documents with a single empty string, and then returns true to indicate that the query was accepted.
  • The mock.object() method is used to convert the mock into an instance that can be provided to the getGroupedOrdersImpl function, which then uses that in place of the real Cosmos DB collection. This means we can test out how our code will behave, and we can emulate the behaviour of Cosmos DB as we wish.
  • Finally, we call mock.verify to ensure that the getGroupedOrdersImpl function executed the queryDocuments method on the mock collection exactly once.

You can run npm run test again now, and verify that it shows 2 specs, 0 failures, indicating that our new test has successfully passed.

Now let’s fill out the rest of the spec file – here’s the complete file with all of our test cases included:

You can execute the tests again by calling npm run test. Try tweaking the tests so that they fail, then re-run them and see what happens.

Building and Running

All of the work we’ve just done means that we can run our tests. However, if we try to build our code to submit to Cosmos DB, it won’t work anymore. This is because the export statement we added to make our tests work will emit code that Cosmos DB’s JavaScript engine doesn’t understand.

We can remove this code at build time by using a preprocessor. This will remove the export statement – or anything else we want to take out – from the TypeScript file. The resulting cleaned file is the one that then gets sent to the TypeScript compiler, and it emits a Cosmos DB-friendly JavaScript file.

To achieve this, we need to chain together a few pieces. First, let’s open up the src/getGroupedOrders.ts file. Replace the line that says export { getGroupedOrdersImpl } with this section:
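
// NOTE: the directive syntax below is an assumption based on jspreproc's comment-based #if/#endif support.
//#if TESTBUILD
export { getGroupedOrdersImpl };
//#endif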

The extra lines we’ve added are preprocessor directives. TypeScript itself doesn’t understand these directives, so we need to use an NPM package to do this. The one I’ve used here is jspreproc. It will look through the file and handle the directives it finds in specially formatted comments, and then emits the resulting cleaned file. Unfortunately, the preprocessor only works on a single file at a time. This is OK for our situation, as we have all of our stored procedure code in one file, but we might not do that in every situation. Therefore, I have also used the foreach-cli NPM package to search for all of the *.ts files within our src folder and process them. It saves the cleaned files with a .ready.ts extension, which our tsconfig.build.json file refers to.

Open the package.json file and replace it with the following contents:
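
A sketch is below. The jspp command name, the redirection-based preprocess script and the version numbers are assumptions; the post uses foreach-cli so that every .ts file under src is processed, whereas this sketch handles the single stored procedure file directly for simplicity:

{
  "name": "cosmos-server-side-scripts",
  "private": true,
  "scripts": {
    "preprocess": "jspp src/getGroupedOrders.ts > src/getGroupedOrders.ready.ts",
    "build": "npm run preprocess && tsc -p tsconfig.build.json",
    "test": "tsc && jasmine --config=jasmine.json"
  },
  "devDependencies": {
    "typescript": "^2.7.2",
    "jasmine": "^3.1.0",
    "@types/jasmine": "^2.8.6",
    "moq.ts": "^2.7.0",
    "jspreproc": "^0.2.7",
    "foreach-cli": "^1.8.1"
  }
}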

Now we can run npm install to install all of the packages we’re using. You can then run npm run test to run the Jasmine tests, and npm run build to build the releasable JavaScript file. This is emitted into the output/build/sp-getGroupedOrders.js file, and if you inspect that file, you’ll see it doesn’t have any trace of module exports. It looks just like it did back in part 3, which means we can send it to Cosmos DB without any trouble.

Summary

In this post, we’ve built out the necessary infrastructure to test our Cosmos DB server-side code. We’ve used Jasmine to run our tests, and moq.ts to mock out the Cosmos DB server objects in a type-safe manner. We also adjusted our build script so that we can compile a clean copy of our stored procedure (or trigger, or UDF) while keeping the necessary export statements to enable our tests to work. In the final post of this series, we’ll look at how we can automate the build and deployment of our server-side code using VSTS, and integrate it into a continuous integration and continuous deployment pipeline.

Key Takeaways

  • It’s important to test Cosmos DB server-side code. Stored procedures, triggers, and UDFs contain business logic and should be treated as a fully fledged part of our application code, with the same quality criteria we would apply to other types of source code.
  • Because Cosmos DB server-side code is written in JavaScript, it is testable using JavaScript and TypeScript testing frameworks and libraries. However, the lack of support for modules means that we have to be careful in how we use these since they may emit release code that Cosmos DB won’t accept.
  • We can use Jasmine for testing. Jasmine also has a mocking framework, but it is not strongly typed.
  • We can get strong typing using a TypeScript mocking library like moq.ts.
  • By structuring our code correctly – using a single entry-point function, which calls out to getContext() and then sends the necessary objects into a function that implements our actual logic – we can easily mock and spy on our calls to the Cosmos DB server-side libraries.
  • We need to export the functions we are testing using the export statement. This makes them available to the Jasmine test spec.
  • However, these export statements need to be removed before we can compile our release version. We can use a preprocessor to remove those statements.
  • You can view the code for this post on GitHub.

Automate deployment pipeline tasks using Gulpjs APIs


Introduction

In this post I will be talking about the GulpJS APIs and how gulp can be useful in automating deployment tasks. In a greenfield project there are a lot of post-development tasks that a developer has to focus on besides development, and with CI/CD now in focus, post-deployment tasks are expected to be automated to make the deployment pipeline more consistent and repeatable. These repetitive and common tasks not only add to the project time and effort for the developer but also take the focus away from the primary task.

Overview

In the JavaScript ecosystem there are many tool libraries that help the developer with various coding tasks. A common one is the linting library, which is helpful during the development phase: a good linting tool will catch unhandled errors early and can also help make sure a project adheres to a coding standard. Another useful one from the toolkit is the task runner, which helps automate certain time-consuming tasks that have to be done over and over again.

Task runner tools are used to automate those repetitive and time consuming tasks.

GulpJs

Gulp is another tool out of the JavaScript toolbox. It is a stream-based build system: a JavaScript-based task runner that lets you automate common tasks in a very simple manner.

Gulp has become very popular over the years and comes with a huge library of plugins. Gulp automates tasks, helps with handling the build pipeline, watches for changes, performs tasks like minification and concatenation of CSS or JavaScript files, handles vendor prefixes, and prepares JavaScript/HTML/CSS files for production or testing.

Gulp passes files through a stream: it starts with a set of source files, processes those files in the stream, and delivers the processed output at the end of the stream. The GulpJS APIs are used to handle input, alter files, and deliver output during the different phases. In this post I will focus on gulp.task, gulp.src, gulp.dest and gulp.watch.


In most project deployment scenarios, constant values or configuration variables have to be managed per environment, files need to be updated, or certain HTML needs to be injected at build time. This is where the gulp APIs can be handy to automate those repetitive tasks in a consistent way. Gulp helps to improve quality, deliver faster, and provides better transparency and control over the process.

Gulp APIs

gulp.task(name[, deps], fn): as the signature shows, gulp.task takes three parameters. The first parameter defines the name of the task (for example, "customTask"); the next one, the dependencies array, is optional and only needs to be supplied if other tasks must run first; the last parameter is the callback function.

In the code snippet below, after the task definition, gulp.src takes in the path of the files to be parsed. Once the source has the files, they are piped to the next step to be altered; in this example a minify action is performed on the JS files, and in the last step they are piped to the destination path defined in gulp.dest.

More steps can be added to the pipeline to alter the files or perform further actions on them before they are passed to gulp.dest.
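
A minimal sketch of such a task is below (gulp 3.x syntax); the gulp-uglify plugin and the folder paths are illustrative assumptions:

// gulpfile.js
const gulp = require('gulp');
const uglify = require('gulp-uglify');   // assumption: gulp-uglify performs the minify step

gulp.task('customTask', function () {
    return gulp.src('src/scripts/*.js')       // 1. gather the source files
        .pipe(uglify())                       // 2. alter them – here, minify the JavaScript
        .pipe(gulp.dest('dist/scripts'));     // 3. write the processed output to the destination
});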

gulp.src(glob[, options]): looks for files matching the source pattern (glob) that we want to pass into the stream, and reads them in.

gulp.dest(folder[, options]): defines where we send our files once they have been altered in the gulp pipeline.

gulp.watch(glob[, options, callback]): watches for an event or change on the file system matching the glob string or array parameter. If the watcher notices a change in files matching the path in the glob parameter, it executes a task or series of tasks from the tasks array – in this case task1 and task2. If required, a callback function can be passed instead of the tasks array.

gulp.watch is useful for running tests on changed files, watching for code changes, and performing tasks in response.

In the general syntax shown below, gulp.watch takes two parameters: the first is the path of the files to watch, and the second is the single task, or array of tasks, to execute. We can also add a callback function to the watch task.
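
For example (gulp 3.x syntax, with placeholder task names):

// Re-run task1 and task2 whenever a file matching the glob changes.
gulp.watch('src/**/*.js', ['task1', 'task2']);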

Example

I will stick to the earlier example of updating configuration variables during deployment, and use an SPFx project to illustrate it. Web parts usually have constant values that are used across different methods, and the values of those constants change with the environment they are deployed to. To make sure the values stay consistent with the build, and to make the process more robust, we can use a gulp task as part of the build.

//Command

gulp setConstants --env=uat

// File path read by the gulp.src command: the task looks for *.base.json files in the folder path shown below
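
A sketch of what the task might look like is below. Reading --env via yargs, the gulp-replace/gulp-rename transforms and the exact folder paths are illustrative assumptions; in a real SPFx project this would sit alongside the existing gulpfile.js tasks.

// gulpfile.js – sketch of the setConstants task
const gulp = require('gulp');
const replace = require('gulp-replace');   // assumption: token replacement handles the per-environment values
const rename = require('gulp-rename');
const args = require('yargs').argv;        // picks up --env=uat from the command line

gulp.task('setConstants', function () {
    const env = args.env || 'dev';
    return gulp.src('config/env/*.base.json')                                      // read the *.base.json template files
        .pipe(replace('{{ENVIRONMENT}}', env))                                     // alter them for the target environment
        .pipe(rename(path => { path.basename = path.basename.replace('.base', ''); }))
        .pipe(gulp.dest('config'));                                                // write the final constants files
});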


In the code snippet above, a custom task named "setConstants" is added to the pipeline and an "env" parameter is read by the task. When the task command runs, it reads the files from the source path defined (the "config/env" folder) and pipes them to a function or another task that alters them. Once the files are altered, they can be sent to the destination using the gulp.dest API.

This helps automate those repetitive tasks in the deployment pipeline.

Hopefully this has helped you understand how the GulpJS APIs can be used in a project to automate tasks during deployment. Happy coding!


How far to take response group

I have been working on a Skype for Business (SfB) Enterprise Voice implementation project recently. The client was very keen to use the native Response Group Service to create a corporate IVR for their receptions. The requirement ended up needing 4 workflows, 19 queues and 2 groups, and went well beyond a simple 2-level, 4-option IVR. The whole implementation couldn’t be completed through the GUI; instead, Lync PowerShell was the only way to meet the requirement.

I drew the reception IVR workflow below:

[Diagram: reception IVR workflow]

The root-level menu has 7 options, with option 9 looping back to the top, and the sub-menus also have up to 8 options to help the receptions reduce their workload.

I like to start with the GUI to quickly set up the IVR framework with the first 4 options, and then use scripts to extend the options and manage the IVR framework. Taking the “Reception Main Menu” as an example, I used the scripts below to add Option 5, Option 6 and Option 9.

##Create Option 5

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press5sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action5 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer5 = New-CsRgsAnswer -Action $Action5 -DtmfResponse 5 -VoiceResponseList "Option5"

$Question.AnswerList.Add($Answer5)

Set-CsRgsWorkflow -Instance $workflow

##Create Option 6

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press6sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action6 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer6 = New-CsRgsAnswer -Action $Action6 -DtmfResponse 6 -VoiceResponseList "Option6"

$Question.AnswerList.Add($Answer6)

Set-CsRgsWorkflow -Instance $workflow

##Create Option 9

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press9sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action9 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer9 = New-CsRgsAnswer -Action $Action9 -DtmfResponse 9 -VoiceResponseList "Option9"

$Question.AnswerList.Add($Answer9)

Set-CsRgsWorkflow -Instance $workflow

To manage the business hours of IVR workflows, I used the below scripts to reset/update the business hours:

##Business Hours update

$weekday = New-CsRgsTimeRange -Name "Weekday Hours" -OpenTime 08:30:00 -CloseTime 17:30:00

$x = Get-CsRgsHoursOfBusiness -Identity "service:ApplicationServer:nmlpoolaus01.company.com.au" -Name "Reception Main Menu_434d7c29-9893-4946-afcf-3bb9ac7aad8a"

$x.MondayHours1 = $weekday

$x.TuesdayHours1 = $weekday

$x.WednesdayHours1 = $weekday

$x.ThursdayHours1 = $weekday

$x.FridayHours1 = $weekday

Set-CsRgsHoursOfBusiness -Instance $x

$x

To manage the greeting/announcement of IVR workflows, I used the below scripts to reset/update the IVR greeting:

##greeting/announcement update

$audioFile = import-CsRgsAudioFile -Identity "service:ApplicationServer:nmlpoolaus01.company.com.au" -FileName "Greeting reception.wma" -Content (Get-Content "C:\temp\Greeting Reception.wma" -Encoding byte -readcount 0)

$prompt = New-CsRgsPrompt -AudioFilePrompt $audioFile -TextSpeechPrompt ""

$workflow.DefaultAction.Question.Prompt = $prompt

$workflow.DefaultAction.Question

Set-CsRgsWorkflow $workflow

The native Lync Response Group Service is a basic IVR platform that covers most simple cases, and it can even go as far as multi-level, multi-option IVR with text-to-speech and speech recognition (interactive workflows) – that’s not too shabby at all!

Hopefully my scripts can help you to extend your Lync IVR RGS workflow. 😊

Xamarin Forms: Microsoft.EntityFrameworkCore.Sqlite issue with Physical devices

Introduction

Building Xamarin.Forms apps using .NET Standard 2.0 is still pretty new to the industry; we are just starting to learn how differently we have to configure Xamarin settings to get it working compared to PCL-based projects.

I was building a Xamarin.Forms app using Microsoft’s Entity Framework Core SQLite provider to store the app’s data. Entity Framework with SQLite is an obvious choice when building an app on .NET Standard 2.0.

Simulator

The app works well on pretty much all simulators without any issue; all read/write operations work fine.

Issue – Physical Device

The app crashes on a physical device when it tries to read or write data from the SQLite database.

Error

System.TypeInitializationException: The type initializer for ‘Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions’ threw an exception. ---> System.InvalidOperationException: Sequence contains no matching element

Resolution

Change linker behavior to “Don’t Link”

Xamarin forms using .Net Standard 2.0

Introduction

All Xamarin developers, please welcome .NET Standard 2.0. This is the kind of class library we have been waiting for all these years. The .NET Standard 2.0 specification is now complete: it is implemented by .NET Core 2.0, .NET Framework 4.6.1 and later versions, and it can be used with Visual Studio 15.3 and up. .NET Standard 2.0 supports C#, and also F# and Visual Basic.

More APIs

.NET Standard 2.0 is for sharing code across various platforms. It includes the common APIs that all .NET implementations share, unifying the .NET frameworks to avoid fragmentation in future. There are more than 32,000 APIs in .NET Standard 2.0, most of which are already available in the .NET Framework. Microsoft has made it easy to port existing code to .NET Standard 2.0, and it is easy to move from .NET Standard to .NET Core 2.0 or any version that comes in future.

NuGet Support

Most NuGet packages currently target the .NET Framework, and not all projects are compatible with a move to .NET Standard 2.0, so a compatibility mode has been added to support them. Even with the compatibility mode, only up to around 70% of packages are supported.

Frameworks and Libraries

Below is a table listing all the supported frameworks and their minimum versions. Click here for more details.

.NET Standard               1.0    1.1    1.2    1.3    1.4    1.5           1.6           2.0
.NET Core                   1.0    1.0    1.0    1.0    1.0    1.0           1.0           2.0
.NET Framework              4.5    4.5    4.5.1  4.6    4.6.1  4.6.1/4.6.2   4.6.1/vNext   4.6.1
Mono                        4.6    4.6    4.6    4.6    4.6    4.6           4.6           5.4
Xamarin.iOS                 10.0   10.0   10.0   10.0   10.0   10.0          10.0          10.14
Xamarin.Mac                 3.0    3.0    3.0    3.0    3.0    3.0           3.0           3.8
Xamarin.Android             7.0    7.0    7.0    7.0    7.0    7.0           7.0           8.0
Universal Windows Platform  10.0   10.0   10.0   10.0   10.0   10.0.16299    10.0.16299    10.0.16299
Windows                     8.0    8.0    8.1    –      –      –             –             –
Windows Phone               8.1    8.1    8.1    –      –      –             –             –
Windows Phone Silverlight   8.0    –      –      –      –      –             –             –

Sample to convert PCL or Shared to .Net Standard 2.0

  1. Create a default PCL- or Shared-based Xamarin.Forms application, name it appropriately, and wait for the solution to load.
  2. Add a .NET Standard class library, selecting .NET Standard 2.0. The project should now look something like the structure described in the following steps.
  3. Remove the PCL- or Shared-based project (very important: only after moving all the required project files to the Netstandard20Test library) and compile.
  4. Rename NetStandard20Test to NetStandardTest (the same name as the deleted library), making sure to rename the default namespace and assembly to NetStandardTest as well.
  5. Build the project and see whether the build succeeds.
  6. The build should fail with errors; this is because of the deleted project. We now have to reference the newly created .NET Standard 2.0 library from both the Android and iOS projects.
  7. Edit the references on each platform project to add the newly created project.
  8. Once the references are applied correctly, you should still get some errors.
  9. Add the Xamarin.Forms NuGet package to all projects.
  10. Build the project again; you should no longer see any errors.
  11. Microsoft has also released a compatibility NuGet package that makes sure existing packages are compatible with .NET Standard 2.0.
  12. Add the NuGet package Microsoft.NETCore.Portable.Compatibility to the .NET Standard 2.0 project.

Hope this blog is useful to you.

Seamless Multi-identity Browsing for Cloud Consultants

If you’re a technical consultant working with cloud services like Office 365 or Azure on behalf of various clients, you have to deal with many different logins and passwords for the same URLs. This is painful, as your default browser instance doesn’t handle multiple accounts and you generally have to resort to InPrivate (IE) or Incognito (Chrome) modes which mean a lot of copying and pasting of usernames and passwords to do your job. If this is how you operate today: stop. There is an easier way.

Two tools for seamless logins

OK, the first one is technically a feature. The most important part of removing the login bottleneck is Chrome Profiles. This essential feature lets you maintain completely separate profiles in Chrome, each with its own saved passwords, browser cache, bookmarks, plugins, etc. Fantastic.

Set one up for each customer that you have a dedicated account for. Once you've logged in, the credentials will be cached and you'll be able to pass through seamlessly.

This is obviously a great improvement, but only half of the puzzle. It’s when Profiles are combined with another tool that the magic happens…

SlickRun your Chrome sessions

If you haven’t heard of the venerable SlickRun (which must be pushing 20 years if it’s a day), download it right now. It gives you the godlike power of being able to launch any application or browse to any URL nearly instantaneously. Just hit ALT-Q and input the “magic word” (which autocompletes nicely) that corresponds to the command you want to execute, and Bob’s your Mother’s Brother! I tend to hide the SlickRun prompt by default, so it only shows up when I use the global ALT-Q hotkey.

First we have to set up our magic word. If you simply put a URL into the ‘Filename or URL’ box, SlickRun will open it using your default browser. We don’t want that. Instead, put ‘chrome.exe’ in the box and use the ‘--profile-directory’ command line switch to target the profile you want, followed by the URL to browse to.

N.B. You don’t seem to be able to reference the profiles by name. Instead you have to put “Profile n” (where n is the number of the profile in the order you created it). You can confirm which directory a given profile uses by opening chrome://version in that profile and checking the Profile Path.
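For example, a magic word for a particular client's Azure portal might expand to a command line like the one below (the profile number and URL here are just examples; substitute your own):

chrome.exe --profile-directory="Profile 2" https://portal.azure.com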

SlickRun-MagicWord

That’s all there is to it. Once you’ve set up your magic words for the key web apps you need to be able to access for each client (I go with a naming convention of ‘clientappname‘ and extend that further if I have multiple test accounts I need to log in as, etc), you can get to any of them in seconds, usually as seamlessly as single sign-on would provide.

This is hands-down my favourite productivity trick, and yet I’ve never seen anyone else do it, or seen a better solution to the multiple logins problem. Hence this post! Hope you find it as awesome a shortcut as I do…

Till next time!

HoloLens – Continuous Integration

Continuous integration is best defined as the process of constantly merging development artifacts produced or modified by different members of a team into a central shared repository. This task of collating changes becomes more and more complex as the size of the team grows. Ensuring the stability of the central repository becomes a serious challenge in such cases.

A solution to this problem is to validate every merge with automated builds and automated testing. Modern code management platforms like Visual Studio Team Services (VSTS) offer built-in tools to perform these operations. Visual Studio Team Services is a hosted service offering from Microsoft which bundles a collection of DevOps services for application developers.

The requirement for a Continuous Integration workflow is important for HoloLens applications considering the agility of the development process. In a typical engagement, designers and developers will work on parallel streams sharing scripts and artifacts which constitute a scene. Having an automated process in place to validate every check-in to the central repository can add tremendous value to the quality of the application. In this blog, we will walk through the process of setting up a Continuous Integration workflow for a HoloLens application using VSTS build and release tools.

Build pipeline

A HoloLens application has multiple layers. Development starts with creating the game world in Unity and then proceeds to wiring up back-end scripts and services in Visual Studio. To build a HoloLens application package, we first need to build the front-end game world with the Unity compiler and then the back end with the Visual Studio compiler. The following diagram illustrates the build pipeline:

pipeline

In the following sections, we will walk through the process of setting up the infrastructure for building a HoloLens application package using VSTS.

Build agent setup

VSTS uses build agents to perform the task of compiling the application in the central repository. These build agents can either be Microsoft-hosted agents, which are available as a service in VSTS, or custom-deployed agents managed by you. A HoloLens application requires custom build agents because it needs custom build tasks to compile the Unity project. Following are the steps for creating a build agent to run the tasks required for building a HoloLens application:

1. Provision a hosting environment for the build agent

The first step in this process is to provision a machine to run the build agent as a service. I’d recommend using an Azure Virtual Machine hosted within an Azure DevTest Lab for this purpose. DevTest Labs comes with built-in features for managing start-up and shut-down schedules for the virtual machines, which are very effective in controlling consumption cost. Following are the steps for setting up the host environment for the build agent in Azure.

  1. Log in to the Azure portal and create a new DevTest Labs instance. Devtest labs
  2. Add a virtual machine to the lab. Add VM
  3. Pick an image with Visual Studio 2017 pre-installed. Image
  4. Choose hardware with a high number of CPUs and high IOPS, as the agents are heavy on disk and compute. I’d advise a D8S_v3 machine for a team of approximately 15 developers. imagesize
  5. Select the PowerShell artifacts to be added to the virtual machine. selected artefacts
  6. Provision the virtual machine and remote desktop into it.

2. Create an authorization token

The build agent requires an authorized channel to communicate with the build server, which in our case is the VSTS service. Following are the steps to generate a token:

  1. On the VSTS portal, navigate to the Security screen using the profile menu. menu
  2. Create a personal access token for the agent to authenticate to the server. Ensure that you have selected ‘Agent pools (read, manage)’ in the authorized scopes. Create PAT
  3. Note the generated token. This will be used to configure the agent on the build host virtual machine.

3. Installing and configuring the agent

Once the token is generated, we are ready to configure the VSTS agent. Following are the steps:

  1. Remote desktop into the build host virtual machine on Azure
  2. Open the VSTS portal in a browser and navigate to the ‘Agent Queue’ screen. (https://.visualstudio.com/Utopia/_admin/_AgentQueue)
  3. Click on the ‘Download Agent’ button. download agent
  4. Click on the ‘Download’ button to download the installer onto the disk of your VM. Choose the default download location.configuring account
  5. Follow the steps shown in the download dialog to configure the agent using PowerShell commands (a hypothetical example is sketched after this list). Detailed instructions can be found at the link below:

https://docs.microsoft.com/en-au/vsts/build-release/actions/agents/v2-windows

  6. Once configured, the agent should appear in the agent list within the selected pool. agent post creation
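For illustration, an unattended configuration run from the extracted agent folder might look something like the following (the organisation URL, token, pool and agent name are placeholders; check the documentation linked above for the exact options):

PS C:\agent> .\config.cmd --unattended `
    --url https://yourorganisation.visualstudio.com `
    --auth pat --token <personal-access-token> `
    --pool default --agent HoloLensBuildAgent --runAsService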

This completes the build environment setup. We can now configure a build definition for our HoloLens application.

Build definition

Creating the build definition involves queuing up a sequence of activities to be performed during a build. In our case, this includes the following steps.

  • Performing Unity build
  • Restoring NuGet packages
  • Performing Visual Studio build

Following are the steps to be performed:

  1. Log in to the VSTS portal and navigate to the Marketplace. Marketplace icon
  2. Search for the ‘HoloLens Unity Build’ extension and install it. Make sure that you select the right VSTS project while installing it. Install task
  3. Navigate to Builds on the VSTS portal and click on the ‘New’ button under ‘Build Definitions’. new buld defnition
  4. Select an empty template    template selection
  5. Add the following dev tasks
    1. HoloLens Unity Build
    2. NuGet
    3. Visual Studio Build

tasks

  6. Select the Unity project folder to configure the build task. Unity project folder
  7. Configure the NuGet build task to restore the packages. nuget restoration
  8. Configure the Visual Studio build task by selecting the solution path, platform, and configuration. visual studio build task
  9. Navigate to the ‘Triggers’ tab and enable the build to be triggered for every check-in. Trigger

You should now see a build being fired for every merge into the repository. The whole build process for an empty HoloLens application can take anywhere between four and six minutes on average.

To summarise, in this blog we learned about the build pipeline for a HoloLens application. We also explored the build agent setup and the build definition required to enable continuous integration for a HoloLens application.