Testing Precompiled Azure Functions

Azure Functions recently added a feature that allows functions to run from precompiled assemblies. This gives us much greater confidence with regard to unit testing. In this post, we walk through how to unit test functions with ease, just like the tests we write every day.

The sample code used in this post can be found HERE.

Function without Dependency

We're not digging too deeply into precompiled functions here, as they are covered in the documentation. Let's have a quick look at the HTTP trigger function code:
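
The original snippet isn't reproduced here, but a minimal precompiled HTTP trigger might look like the following sketch (the class name, namespace and greeting logic are illustrative, and the usual Web API NuGet packages are assumed to be referenced):

    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Http;

    using Microsoft.Azure.WebJobs.Host;

    namespace PrecompiledFunctionsSample
    {
        // A precompiled function is just a public static method in a class library.
        public static class GreetingFunction
        {
            public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
            {
                log.Info("C# HTTP trigger function processed a request.");

                // Read the "name" querystring parameter, if any.
                var name = req.GetQueryNameValuePairs()
                              .FirstOrDefault(q => string.Equals(q.Key, "name", StringComparison.OrdinalIgnoreCase))
                              .Value;

                return name == null
                    ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string")
                    : req.CreateResponse(HttpStatusCode.OK, $"Hello, {name}");
            }
        }
    }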

Nothing special. Now, let's write test code for this function using xUnit and FluentAssertions.
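
A test for the sketch above could look like this; StubTraceWriter is a hand-rolled no-op TraceWriter, and SetConfiguration() is needed so that req.CreateResponse() can perform content negotiation:

    using System.Diagnostics;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    using FluentAssertions;
    using Microsoft.Azure.WebJobs.Host;
    using PrecompiledFunctionsSample;
    using Xunit;

    public class GreetingFunctionTests
    {
        [Fact]
        public async Task Given_Name_Run_Should_Return_OK()
        {
            // Arrange: build the HTTP request as the runtime would pass it in.
            var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/greeting?name=Azure");
            req.SetConfiguration(new HttpConfiguration()); // required by req.CreateResponse()

            // Act
            var response = GreetingFunction.Run(req, new StubTraceWriter());

            // Assert
            response.StatusCode.Should().Be(HttpStatusCode.OK);
            (await response.Content.ReadAsAsync<string>()).Should().Be("Hello, Azure");
        }
    }

    // Minimal TraceWriter stub so the function can log without a real logger.
    public class StubTraceWriter : TraceWriter
    {
        public StubTraceWriter() : base(TraceLevel.Verbose) { }

        public override void Trace(TraceEvent traceEvent) { }
    }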

How does it look? It's the same unit test code as we write every day. Let's move on.

Function with Dependency

As I wrote in Managing Dependencies in Azure Functions the other day, dependency management is a bit tricky for Azure Functions due to their static nature. Therefore, we should introduce the Service Locator Pattern for dependency management (or injection). Here's the sample function code:
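
The original code isn't shown here, but the idea is a sketch like the one below: the function resolves its dependency from a static service locator instead of newing it up (FunctionFactory and IValueService are illustrative names):

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    using Microsoft.Azure.WebJobs.Host;

    public static class ValueFunction
    {
        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            // Resolve the dependency through the static service locator.
            var service = FunctionFactory.GetInstance<IValueService>();

            var value = await service.GetValueAsync();

            return req.CreateResponse(HttpStatusCode.OK, value);
        }
    }

    public interface IValueService
    {
        Task<string> GetValueAsync();
    }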

As we can see in the code above, the value is retrieved from the service locator instance. Of course, this is just a simple implementation of the service locator pattern (if we need a more sophisticated one, we should consider an IoC container library like Autofac). And here's the poor man's service locator:
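
A poor man's service locator can be as small as a static dictionary mapping a service type to an instance or a factory; the sketch below is one possible shape:

    using System;
    using System.Collections.Generic;

    public static class FunctionFactory
    {
        private static readonly IDictionary<Type, Func<object>> registrations =
            new Dictionary<Type, Func<object>>();

        // Register an existing instance (handy for injecting mocks in tests).
        public static void Register<TService>(TService instance) where TService : class
        {
            registrations[typeof(TService)] = () => instance;
        }

        // Register a factory delegate for lazy creation.
        public static void Register<TService>(Func<TService> factory) where TService : class
        {
            registrations[typeof(TService)] = () => factory();
        }

        public static TService GetInstance<TService>() where TService : class
        {
            return (TService)registrations[typeof(TService)]();
        }
    }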

Let's see the test code for the function with dependencies. With the service locator, we can inject a mocked object for unit testing, which is convenient for developers. For mocking, we use Moq in the following test code.
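
Against the sketches above, the test might look like this; the mocked IValueService is registered with the locator before the function runs:

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    using FluentAssertions;
    using Moq;
    using Xunit;

    public class ValueFunctionTests
    {
        [Fact]
        public async Task Given_MockedService_Run_Should_Return_OK()
        {
            // Arrange: mock the dependency and inject it through the service locator.
            var service = new Mock<IValueService>();
            service.Setup(p => p.GetValueAsync()).ReturnsAsync("hello world");
            FunctionFactory.Register<IValueService>(service.Object);

            var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/value");
            req.SetConfiguration(new HttpConfiguration());

            // Act
            var response = await ValueFunction.Run(req, new StubTraceWriter());

            // Assert
            response.StatusCode.Should().Be(HttpStatusCode.OK);
            service.Verify(p => p.GetValueAsync(), Times.Once());
        }
    }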

We create a mocked instance and inject it into the service locator. Then the injected value (or instance) is consumed within the function. How different is this from everyday testing? There's no difference at all. In other words, implementing a service locator gives us the same development experience on Azure Functions, from the testing point of view.

I wrote another article on testing a few months ago, using ScriptCs. That used to be the approach when Azure Functions didn't support precompiled assemblies. Now that precompiled functions are supported, I hope this post is useful for designing functions with better testability.

Is Azure Functions over Web API Beneficial?

Whenever I meet clients and give a talk about Azure Functions, they are immediately interested in replacing their existing Web API features with Azure Functions. In this post, I’d like to discuss:

  • Can Azure Functions replace Web API?
  • Is it worth doing?

It would be a good idea to have a read through this article, Serverless Architectures, before starting.

HTTP Trigger Function == Web API Action

One of the characteristics of serverless architecture is that it is "event-driven". In other words, all functions written in Azure Functions are triggered by events. Those events of course include HTTP requests. From this HTTP request point of view, an HTTP trigger function and a Web API action work in exactly the same way. Let's compare the two pieces of code:
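
The two snippets aren't included here, but side by side they might look roughly like this (the names and greeting logic are illustrative):

    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    using Microsoft.Azure.WebJobs.Host;

    // Web API action
    public class GreetingController : ApiController
    {
        [HttpGet]
        [Route("api/greeting")]
        public IHttpActionResult Get(string name)
        {
            return Ok($"Hello, {name}");
        }
    }

    // Azure Functions HTTP trigger
    public static class GreetingFunction
    {
        public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
        {
            var name = req.GetQueryNameValuePairs()
                          .FirstOrDefault(q => q.Key == "name")
                          .Value;

            return req.CreateResponse(HttpStatusCode.OK, $"Hello, {name}");
        }
    }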

How do they look? Pretty similar to each other. Both take an HTTP request, process it and return a response. Therefore, with minor modifications, it seems that Web API can be easily migrated to Azure Functions.

HTTP Trigger Function != Web API Action

However, life is not easy. There are some major differences we should know before migration:

Functions are always static methods

Even though Azure Functions is an extension of Azure WebJobs, each function has the static modifier by design, whereas Azure WebJobs can be written without it.

Actions of Web API, by the way, don't have the static modifier. This results in a significant architectural change during the migration, especially around dependency injection (DI). We will touch on this later.

Functions always receive an HttpRequestMessage instance as a parameter

Within the HTTP request/response pipeline, a Web API controller internally creates an HttpContext instance to handle data like headers, cookies, sessions, querystrings and the request body (of course, querystrings and the request body can be handled in different ways). The HttpContext instance works as an internal property, so any action can directly access it. As a result, each action only takes the necessary details as its parameters.

On the other hand, each function goes through a different HTTP request/response pipeline from Web API, which passes the HttpRequestMessage instance to the function as a parameter. The HttpRequestMessage instance only handles headers, querystrings and the request body. It doesn't look after cookies or sessions. This is the huge difference between Web API and Azure Functions in terms of statelessness.

Functions define HTTP verbs and routes in function.json

In Web API, we put decorators like HttpGet, HttpPost, HttpPut, HttpPatch and HttpDelete on each action to declare which HTTP verb maps to which action, combined with the Route decorator.

On the other hand, each function has its HTTP verbs and routes defined in function.json. With this definition, different functions sharing the same route URI can handle requests based on the HTTP verb.

Functions define base endpoint URI in host.json

Other than the host part of the URI, e.g. https://api.myservice.com, the base URI is usually defined at the controller level of Web API by adding the Route decorator. This is dead simple.

However, as there are no controllers in Azure Functions, it is defined in host.json. The default value is api, but we can remove it or redefine it by modifying host.json.

While function.json can be managed at the function level through the GUI or an editor, unfortunately it's not possible to edit host.json directly in the function app. There is a workaround using the Azure App Service Editor to modify host.json, by the way.

Functions should consider service locator pattern for dependency management

There are many good IoC container libraries for Web API to manage dependencies. However, as discussed in my previous post, Managing Dependencies in Azure Functions, the Service Locator Pattern should be considered for DI in Azure Functions, and it is actually the only way to deal with dependencies for now. This is because every Azure Function has the static modifier, which prevents us from taking the same approach as in Web API.

We know there are differing opinions on the service locator pattern for Azure Functions, but that is beyond the topic of this post, so we will discuss it another time.

Is Azure Functions over Web API Beneficial?

So far, we have discussed what is the same and what is different between Web API and Azure Functions HTTP triggers. Back to the initial question: is it really worth migrating Web API to Azure Functions? Which of the situations below does yours fall under?

  • My Web API is designed for microservices architecture: Then it’s good to go for migration to Azure Functions.
  • My Web API takes a long time to respond: Then consider running Azure Functions on an existing App Service Plan instance, because it costs nothing more. The Consumption Plan (or Dynamic Service Plan) would cost too much in this case.
  • My Web API is refactored to use queues: Then calculate the price carefully: not only the price of Azure Functions, but also the price of Azure Service Bus Queue/Topic and Azure Storage Queue. In addition, check the number of executions, as each Web API call is refactored into one HTTP trigger function plus at least one queue trigger function (at least two executions in total). Based on the calculations, we can make a decision to stay or move.
  • My Web API needs a significant amount of efforts for refactoring: Then it’s better to stay until it’s restructured and suitable for microservices architecture.
  • My Web API is written in ASP.NET Core: Then stay there, do not even think of migration, until Azure Functions support ASP.NET Core.

To sum up, unless your Web API requires a significant amount of refactoring or is written in ASP.NET Core, it is surely worth considering a migration to Azure Functions. It is a much easier to use and more cost-effective solution for your Web API.

Debugging Azure Functions in Our Local Box

Because of the serverless nature of Azure Functions, it's a bit tricky to run them on a local machine for debugging purposes.

There is an approach related to this issue in the post Testing Azure Functions in Emulated Environment with ScriptCs. According to that article, we can use ScriptCs for local unit testing. However, the question of debugging still remains, because testing and debugging are different stories. Fortunately, Microsoft has recently released Visual Studio Tools for Azure Functions. It's still at a preview stage, but worth having a look at. In this post, I'm going to walk through how to debug Azure Functions within Visual Studio.

Azure Functions Project & Templates

After we install the tooling, we can create an Azure Functions project.

That gives us the same development experience. Pretty straightforward. Once we create the new project, we find nothing but a couple of .json files: appsettings.json and host.json. The appsettings.json file is only used for local development, not for production, to hook up the actual Azure Functions in the cloud. We are going to touch on this later in this post.

Now let's create a function in C#. Right-click on the project and add a new function.

Then we can see a list of templates to start with. We just select the HttpTrigger function in C#.

Now we have a fresh new function.

As we can see, we have a couple of other .json files for settings. function.json defines inputs and outputs, and project.json defines the list of NuGet packages to import, the same as .NET Core projects do.

We've got everything now. How do we debug the function then? Let's move on.

Debugging Functions – HTTP Trigger

Open the run.csx file and set a breakpoint wherever we want.

Now, it's time for debugging! Just hit the F5 key. If we haven't installed the Azure Functions CLI, we will be asked to install it.

We can manually install the CLI through npm by typing:
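
At the time of writing, the CLI was published as the azure-functions-cli package (it has since been renamed azure-functions-core-tools), so the command would be something like:

    npm install -g azure-functions-cli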

Once the CLI is installed, a command prompt console opens. The console shows various pieces of useful information:

  • Functions in debugging mode only use port 7071. If another application running locally has already taken that port, we will be in trouble.
  • The endpoint of the function is always http://localhost:7071/api/Function_Name, which is the same format as the actual Azure Functions in the cloud. We can't change it.

Now it's waiting for a request to come in. As it's basically an HTTP request, we can send it through our web browser, Postman or even curl.

When any of the requests above is executed, it hits the breakpoint within Visual Studio.

And the CLI console prints out the log like:

Super cool, isn't it? Now, let's do a queue trigger function.

Debugging Functions – Queue Trigger

Let's create another function called QueueTriggerCSharp. Unlike HTTP-triggered functions, this doesn't have an endpoint URL (at least not a publicly exposed one) as it relies on Azure Storage Queue (or Azure Service Bus Queue). We have the Azure Storage Emulator that runs on our dev machine, and with this emulator we can debug our queue-triggered functions.

In order to run this function, we need to set up both appsettings.json and function.json.

Here's the appsettings.json file. We simply assign the value UseDevelopmentStorage=true to AzureWebJobsStorage for now. Then, open function.json and assign AzureWebJobsStorage to the connection key.

We're all set. Hit the F5 key and see how it goes.

It seems nothing has happened. But when we look at the console, the queue-triggered function is certainly up and running. How can we pass the queue value then? We have to use the CLI command here. If the PATH environment variable doesn't recognise func.exe, we have to use the full path to run it.

Now we can see the break point at Visual Studio.

So far, we've briefly walked through how to debug Azure Functions in Visual Studio. There are, however, some known issues. One of them is that, due to the nature of the .csx format, IntelliSense doesn't work as expected. Other than that, it works great! So, if your organisation has been hesitant about using Azure Functions due to the lack of a debugging feature, now is the time to play around with it!

Implementing HTTP Request Handler on ASP.NET Core Applications

ASP.NET WebForms or MVC applications rely on global.asax to process HTTP request pipelines. Within global.asax, each HTTP request goes through the declared HTTP modules and HTTP handlers based on events. On the other hand, ASP.NET Core applications use OWIN-style middleware. Those middleware components now take care of what HTTP modules and HTTP handlers used to do. In this post, we are going to implement an HTTP request handler in an ASP.NET Core application.

Why Does It Matter?

Single Page Applications (SPAs) are now a trend in web application development. An SPA only contains UI logic, and all other business logic is processed by calling APIs. Calling APIs through AJAX is not a problem. For example, if we use jQuery, a typical AJAX call looks like:

If we need an API key for the call, the AJAX call might look like:

We might feel uneasy here because the auth key and API key are exposed within the JavaScript block, which is not what we want. In this case, we usually implement an API proxy (or API facade, or whatever we call it) so that we hide that sensitive information from the UI side. Instead, the proxy takes care of those details. Well, how can we implement the proxy then? We can implement it through MVC controllers or middleware. The good thing about handling this with middleware is that WE DON'T NEED CONTROLLERS AT ALL. Let's have a look.

Middleware

The main differences between HTTP modules and HTTP handlers are:

  • HTTP modules process requests and pass them on to other modules.
  • HTTP handlers process requests and return their responses to browsers.

As middleware in ASP.NET Core takes care of both HTTP modules and HTTP handlers, the implementation becomes a lot easier and simpler. Here's what a basic middleware looks like:
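
A minimal sketch, assuming the conventional ASP.NET Core middleware shape (a constructor taking RequestDelegate and an Invoke method):

    using System.Threading.Tasks;

    using Microsoft.AspNetCore.Http;

    public class MyMiddleware
    {
        private readonly RequestDelegate _next;

        public MyMiddleware(RequestDelegate next)
        {
            this._next = next;
        }

        public async Task Invoke(HttpContext context)
        {
            // Do something with the incoming request here...

            // ...then hand the HttpContext over to the next middleware in the pipeline.
            await this._next.Invoke(context);
        }
    }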

And, in order to easily use this middleware in Startup.cs, we can just create an extension method like:
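
Something along these lines (the extension method name is illustrative):

    using Microsoft.AspNetCore.Builder;

    public static class MyMiddlewareExtensions
    {
        public static IApplicationBuilder UseMyMiddleware(this IApplicationBuilder app)
        {
            return app.UseMiddleware<MyMiddleware>();
        }
    }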

And this extension method is placed into the Configure method of Startup.cs like:
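
For example, the registration in Configure might look like this sketch:

    public void Configure(IApplicationBuilder app)
    {
        app.UseMyMiddleware();

        // ... other middleware registrations (static files, MVC, etc.) follow here.
    }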

Within the OWIN-style pipeline, middleware components are invoked in the order in which they are declared. Therefore, MyMiddleware is invoked after the previous middleware, and it then passes the HttpContext instance to the next middleware for further processing. Here's the clue: if we want to implement the middleware as an HTTP handler, we simply don't call this._next.Invoke(context), so that all HTTP requests complete processing at this point. Let's make a working example.

All the code used in this post can be found at: https://github.com/devkimchi/ASP.NET-Core-HTTP-Request-Handler-Sample

HTTP Request Header Handler Middleware

We don't need an MVC controller for our ASP.NET Core application any more, because the middleware takes all AJAX requests, puts some additional headers on them and passes the requests to the API server. Therefore, we can simply remove the MVC middleware from Startup.cs and add HttpRequestHeaderHandlerMiddleware instead.

Then, implement HttpRequestHeaderHandlerMiddleware like:
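
The original implementation isn't reproduced here, but a sketch of the idea might look like the code below: it adds the sensitive headers server-side, forwards the request to a backend API (the base URI and header values are placeholders) and writes the response straight back, never calling the next delegate:

    using System.Net.Http;
    using System.Threading.Tasks;

    using Microsoft.AspNetCore.Http;

    // Works as an HTTP handler: the pipeline ends here because _next is never invoked.
    public class HttpRequestHeaderHandlerMiddleware
    {
        private const string ApiBaseUri = "https://api.example.com/";   // hypothetical backend API

        private static readonly HttpClient client = new HttpClient();

        public HttpRequestHeaderHandlerMiddleware(RequestDelegate next)
        {
            // 'next' is intentionally unused: this middleware terminates the pipeline.
        }

        public async Task Invoke(HttpContext context)
        {
            var request = new HttpRequestMessage(
                new HttpMethod(context.Request.Method),
                ApiBaseUri + context.Request.Path.Value.TrimStart('/') + context.Request.QueryString);

            // Hard-coded custom headers - this is what the Options Pattern will fix later.
            request.Headers.TryAddWithoutValidation("Authorization", "Bearer AUTH_KEY_VALUE");
            request.Headers.TryAddWithoutValidation("x-api-key", "API_KEY_VALUE");

            var response = await client.SendAsync(request);

            context.Response.StatusCode = (int)response.StatusCode;
            await context.Response.WriteAsync(await response.Content.ReadAsStringAsync());
        }
    }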

As we can see above, it processes the request and writes the response back through HttpContext. This works perfectly as an API proxy, without needing controllers. Let's go further.

Middleware with Options Pattern

The potential problem with the HttpRequestHeaderHandlerMiddleware written above is that all custom headers are hard-coded. If we add, update or delete header values, we have to update the code. Fortunately, ASP.NET Core provides the Options Pattern. This is basically dependency injection for configuration values declared in appsettings.json or environment variables. We can use this pattern for our middleware implementation with a few additional extension methods.

We have HttpRequestHeaderHandlerMiddlewareOptions, and this is injected into HttpRequestHeaderHandlerMiddleware by calling the Options.Create(options) method, which creates an IOptions instance. Here's another extension method to declare options with lambda expressions.

Once it’s done, the HttpRequestHeaderHandlerMiddleware needs to be updated to accept the options instance as a parameter.
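
Putting the pieces from the last two paragraphs together, a sketch might look like this: a strongly-typed options class, extension methods that wrap the options with Options.Create(), and the middleware constructor now taking IOptions<T>. This replaces the hard-coded version above; the names are illustrative:

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.Options;

    public class HttpRequestHeaderHandlerMiddlewareOptions
    {
        public string ApiBaseUri { get; set; }

        public Dictionary<string, string> Headers { get; set; } = new Dictionary<string, string>();
    }

    public static class HttpRequestHeaderHandlerMiddlewareExtensions
    {
        // Declare options with a lambda expression...
        public static IApplicationBuilder UseHttpRequestHeaderHandler(
            this IApplicationBuilder app, Action<HttpRequestHeaderHandlerMiddlewareOptions> configure)
        {
            var options = new HttpRequestHeaderHandlerMiddlewareOptions();
            configure?.Invoke(options);
            return app.UseHttpRequestHeaderHandler(options);
        }

        // ...or pass a pre-populated options instance.
        public static IApplicationBuilder UseHttpRequestHeaderHandler(
            this IApplicationBuilder app, HttpRequestHeaderHandlerMiddlewareOptions options)
        {
            // Options.Create() wraps the POCO in an IOptions<T>, which the middleware constructor expects.
            return app.UseMiddleware<HttpRequestHeaderHandlerMiddleware>(Options.Create(options));
        }
    }

    public class HttpRequestHeaderHandlerMiddleware
    {
        private static readonly HttpClient client = new HttpClient();

        private readonly HttpRequestHeaderHandlerMiddlewareOptions _options;

        public HttpRequestHeaderHandlerMiddleware(RequestDelegate next, IOptions<HttpRequestHeaderHandlerMiddlewareOptions> options)
        {
            this._options = options.Value;
        }

        public async Task Invoke(HttpContext context)
        {
            var request = new HttpRequestMessage(
                new HttpMethod(context.Request.Method),
                this._options.ApiBaseUri + context.Request.Path.Value.TrimStart('/') + context.Request.QueryString);

            // Headers now come from configuration instead of being hard-coded.
            foreach (var header in this._options.Headers)
            {
                request.Headers.TryAddWithoutValidation(header.Key, header.Value);
            }

            var response = await client.SendAsync(request);

            context.Response.StatusCode = (int)response.StatusCode;
            await context.Response.WriteAsync(await response.Content.ReadAsStringAsync());
        }
    }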

Now we can use the options within our middleware. Those options can be defined within the appsettings.json file, populated as a strongly-typed instance and injected into the middleware like:
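
For instance, assuming a "HttpRequestHeaderHandler" section in appsettings.json (the section name is illustrative) and the usual Configuration property built in the Startup constructor, the wiring might look like:

    public void Configure(IApplicationBuilder app)
    {
        // Populate the strongly-typed options from configuration...
        var options = new HttpRequestHeaderHandlerMiddlewareOptions();
        Configuration.GetSection("HttpRequestHeaderHandler").Bind(options);

        // ...and hand them to the middleware.
        app.UseHttpRequestHeaderHandler(options);
    }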

So far, we have implemented a middleware as an API proxy that captures all HTTP requests, puts extra header information into them and processes them. With this approach, our SPA backed by ASP.NET Core becomes much simpler and more lightweight.

Managing Dependencies in Azure Functions

Just before the Connect(); event, Azure Functions became generally available (GA). Now more and more developers and IT pros are interested in it. One of the main attractions of Azure Functions is that, as developers writing C#, we can directly import existing private assemblies into functions. In other words, we can easily migrate our existing applications to Azure Functions with minimal changes!

However, as we all know, migration is not that easy, especially if a huge architectural change is required. One of the biggest challenges of the migration is probably managing dependencies. In this post, we are going to have a look at which approach would be suitable for migrating applications smoothly.

We can find the source code used in this post here: https://github.com/devkimchi/Azure-Functions-Dependency-Injections-Sample

Azure Function Structure

Let's see the basic function structure. Here's the function code we get when we create a manual trigger function:
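
The generated code looks roughly like this (the template has changed slightly over time):

    using System;

    public static void Run(string input, TraceWriter log)
    {
        log.Info($"C# manually triggered function called with input: {input}");
    }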

What can we see here? That's right: it's a static method! That simply means our existing IoC container won't fit here, if we use an IoC container for dependency management. So, what can we do? We need to consider a service locator pattern.

Autofac and CommonServiceLocator

If we use Autofac for our application’s dependency management, we need to create a service locator object using Autofac.Extras.CommonServiceLocator. Let’s see some code bits:
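
A sketch of the wiring, assuming the Autofac.Extras.CommonServiceLocator and CommonServiceLocator packages (IValueService and ValueService are illustrative types):

    using Autofac;
    using Autofac.Extras.CommonServiceLocator;

    using Microsoft.Practices.ServiceLocation;

    public interface IValueService
    {
        string GetValue();
    }

    public class ValueService : IValueService
    {
        public string GetValue() => "Hello from Autofac";
    }

    public static class ContainerConfig
    {
        public static IServiceLocator CreateServiceLocator()
        {
            var builder = new ContainerBuilder();

            // Register dependencies exactly as we would in any Autofac-based app.
            builder.RegisterType<ValueService>().As<IValueService>();

            // Wrap the Autofac container with the CommonServiceLocator adapter.
            var locator = new AutofacServiceLocator(builder.Build());
            ServiceLocator.SetLocatorProvider(() => locator);

            return locator;
        }
    }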

First of all, just as we normally do with Autofac, we add dependencies using builder.RegisterType(). Then, we create an AutofacServiceLocator instance and pass it to the service locator. That's it! Let's create a function.

Azure Functions with Service Locator

We don't need to follow every step, but as a best practice that the Azure Functions team suggests, we put our private assemblies into the Shared folder using KUDU.

Then we create a .csx file within the Shared folder to create the service locator instance.
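
For example, a Shared/locator.csx along these lines (the file name, the #r'd assembly and its types are illustrative; the Autofac packages would come from the function's project.json):

    // Shared/locator.csx
    #r "MyApp.Services.dll"   // the private assembly uploaded to the Shared folder

    using Autofac;
    using Autofac.Extras.CommonServiceLocator;

    using Microsoft.Practices.ServiceLocation;

    using MyApp.Services;

    private static IServiceLocator GetServiceLocator()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<ValueService>().As<IValueService>();

        var locator = new AutofacServiceLocator(builder.Build());
        ServiceLocator.SetLocatorProvider(() => locator);

        return locator;
    }

    // A single, static locator instance shared by all functions that #load this file.
    public static IServiceLocator Locator = GetServiceLocator();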

Now we have a static locator instance. We're all set. Let's create a function using this instance.
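
A function sketch consuming the shared locator via #load (again, IValueService and its GetValue method are placeholders for whatever lives in the private assembly):

    // GetValue/run.csx
    #load "..\Shared\locator.csx"

    using System.Net;

    using MyApp.Services;

    public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
    {
        // Resolve the dependency from the shared service locator instance.
        var service = Locator.GetInstance<IValueService>();

        return req.CreateResponse(HttpStatusCode.OK, service.GetValue());
    }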

If we run this function, we get the result as expected.

This also works in an async function:
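
An async variant of the same sketch, assuming the service also exposes a Task-returning GetValueAsync() method:

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        var service = Locator.GetInstance<IValueService>();

        var value = await service.GetValueAsync();

        return req.CreateResponse(HttpStatusCode.OK, value);
    }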

So far, we have had a brief look at how the service locator pattern works for Azure Functions to manage dependencies. Migration to Azure Functions from an existing application wouldn't be that easy, of course. However, we can still use our existing private assemblies as long as we introduce the service locator pattern for dependency management. The service locator pattern might be old-fashioned, but it surely works. There might be a better solution for managing dependencies; if you find anything better, let's discuss it here!

Deserialising .NET Core Configuration Settings

With System.Configuration.dll, we can add custom configSections into either App.config or Web.config files so that we can utilise strongly-typed configuration objects. On the other hand, .NET Core applications have replaced App.config and Web.config with appsettings.json, which is more convenient for developers to use, especially for dependency injection (DI). In this post, we are going to walk through how we can deserialise appsettings.json for DI purposes.

Basics

When we create an ASP.NET Core web application, we have a basic appsettings.json file that looks like:

This is loaded up and built as an IConfiguration instance in Startup.cs. If we want to access values, we simply make calls like:
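
For example (the key names below are illustrative and depend on what the appsettings.json actually contains):

    // Configuration is the IConfigurationRoot instance built in the Startup constructor.
    var defaultLogLevel = Configuration["Logging:LogLevel:Default"];
    var apiKey = Configuration["Authentication:ApiKey"];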

This is good for fast application prototyping, in general. However, if we need strongly-typed objects for those configurations, this is not ideal. How can we get strongly-typed objects then? Let's have a look.

.NET Core RC1/RC2 Applications

Microsoft.Extensions.Configuration used to provide an extension method called Get(string key). With this method, we can easily deserialise an appsettings.json section to an object of type T. Let's say we have an AuthenticationSettings class representing the authentication section in appsettings.json. This is a sample code:
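
Based on the Get(string key) extension described above, the code might have looked like this (the AuthenticationSettings properties are illustrative):

    // Maps to the "Authentication" section of appsettings.json.
    public class AuthenticationSettings
    {
        public string ApiKey { get; set; }

        public string BaseUri { get; set; }
    }

    // In Startup.cs (RC1/RC2):
    var auth = Configuration.Get<AuthenticationSettings>("Authentication");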

Now we have the auth instance to be injected wherever we want by:
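
For instance, registering the instance with the built-in DI container might be as simple as this one-liner (a sketch, placed in ConfigureServices):

    services.AddSingleton(auth);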

.NET Core Applications

In .NET Core applications, unfortunately, the Get(string key) extension method has been removed. Instead, Microsoft.Extensions.Configuration.Binder provides another extension method called Bind(object instance). This binds an instance to the configuration values. Therefore, in order to have the same development experience as in RC1/RC2, we can implement an extension method ourselves, like:
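
A minimal sketch of such an extension method built on top of Bind():

    using Microsoft.Extensions.Configuration;

    public static class ConfigurationExtensions
    {
        // Restores the RC1/RC2-style Get<T>(key) experience on top of Bind().
        public static T Get<T>(this IConfiguration configuration, string key) where T : new()
        {
            var instance = new T();
            configuration.GetSection(key).Bind(instance);
            return instance;
        }
    }

With this in place, Configuration.Get<AuthenticationSettings>("Authentication") works just as it did before.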

By implementing this extension method, we can take the same approach as we used to in RC1/RC2, as shown above.

So far, we have briefly looked at how we can deserialise appsettings.json in .NET Core applications for injection purposes. This should be useful for robust application development.


Testing Azure Functions in Emulated Environment with ScriptCs

In the previous post, Azure Functions Deployment Strategies, we briefly looked at several ways to deploy Azure Functions code. In this post, I'm going to walk through how we can test Azure Functions code on a developer's local machine.

Mocking and Asserting on ScriptCs

We need to know how to run test code scripts using ScriptCs. We're not going into too much detail about how to install ScriptCs here; instead, we assume that we have ScriptCs installed on our local machine. In order to run test code, mocking and asserting are crucial. There are Script Pack NuGet packages for mocking and asserting, called ScriptCs.Moq and ScriptCs.FluentAssertions. If we use those packages, we can easily set up unit testing code with a similar development experience. Let's have a look.

First of all, we need to install the NuGet packages. Note that ScriptCs doesn't yet support NuGet API version 3 at the time of this writing, but it will sooner or later based on the roadmap. In other words, we should stay on NuGet API version 2. This is possible by creating a scriptcs_nuget.config file in our script folder like:

Then run the command below in our Command Prompt console to install ScriptCs.Moq.
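
If memory serves, a Script Pack is installed with ScriptCs' -install switch, so the command would be something like the following (the same applies to ScriptCs.FluentAssertions later on):

    scriptcs -install ScriptCs.Moq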

Now, create a run.csx file like:

Run the following command and we'll be able to see the mocked result as expected.

This time, install ScriptCs.FluentAssertions for assertions.

This is the script for it:

We expected world but the value was hello, so we see a message like:

Now we know how to run test code on our local machine using ScriptCs. Let's move on.

Emulating Azure Functions on ScriptCs

There is no official tooling for running Azure Functions on a developer's local machine yet. However, there is a script pack called ScriptCs.AzureFunctions that runs on ScriptCs. With this, we can easily emulate the Azure Functions environment on our local dev machine. Therefore, as long as we have working Azure Functions code, ScriptCs.AzureFunctions wraps the code and runs it within the emulated local environment.

Azure Functions Code with Service Locator Pattern

This is a sample Azure Functions script, run.csx.

Looking at the code above, you might notice that we use the Service Locator Pattern. This is very important for managing dependencies in Azure Functions. Without a service locator, we can't properly test Azure Functions code. In addition, Azure Functions only supports static methods, so the service locator instance is instantiated through a static accessor.

Here are the remaining bits of the poor man's service locator pattern.

Emulation Code for Azure Functions

Now, we need to write the test code, test.csx. Let's have a look; it is relatively simple:

  1. Import run.csx by using the #load directive.
  2. Arrange parameters, variables and mocked objects.
  3. Run the Azure Functions code.
  4. Check the expected result.

If we use the ScriptCs.FluentAssertions package here, the last assertion part becomes much nicer.

Now run the test code:

Now, we have the expected result in the emulated environment.

So far, we've briefly walked through how we can emulate Azure Functions in our development environment. For testing, the Service Locator Pattern should be considered. With ScriptCs.AzureFunctions, testing can easily be performed in the emulated environment. Of course, this is not a perfect solution, but it certainly works, so it should be useful until official tooling is released. If our build server allows a ScriptCs environment, this emulation can easily be integrated so that we can perform testing within the build and test pipeline.

Azure Functions Deployment Strategies

As a serverless architecture, Azure Functions gives us a great benefit: we don't have to worry about server maintenance. However, we still need to manage our code and set up a proper strategy for deployment. In this post, I am going to describe a list of deployment strategies for Azure Functions.

Git Integration

Azure Functions provides git integration as an out-of-the-box feature. We can simply integrate a local git repository, GitHub, Bitbucket or Visual Studio Team Services. For more details, see the other post, Code Management in Serverless Computing – AWS Lambda and Azure Functions.

Git integration is the easiest and simplest way for Azure Functions deployment. However, there are two significant downsides of using git integration.

We can’t modify any code directly on the web editor console

Once deployed, if anything goes wrong, we MUST fix it in the developer's local environment and push it to the git repository to be deployed to Azure Functions. This is OK, as long as we all agree to update any function code in our local environment first.

We need to have a separate git repository from main repository

Many application development environments don't only contain Azure Functions code; they also contain other bits and pieces of code in the same repository. In this case, if we use one single repository, pushing changes that are not related to Azure Functions still triggers a deployment, which might cause unforeseen side effects.

Then, how do we overcome this issue? We still want to use git as a version control system but also want to be free from those two restrictions. There are two other ways for us.

KUDU REST API

Azure Functions is integrated with KUDU as its backend, and KUDU provides us with a bunch of REST APIs. With these, we can easily upload a zip file that contains our code for deployment. Let's have a look.

We've got several Azure Functions and they are zipped into ase-dev-fn-demo.zip in the picture above. Once this zip file is uploaded through the KUDU REST API, all the code in the zip file will be deployed. For now, we are using PowerShell. Here's the script.

Change the [USERNAME], [PASSWORD], [FUNCTION_SITE] and $filePath values and run it. It will upload the ase-dev-fn-demo.zip file to KUDU, and KUDU will unzip and deploy it automatically. Once the deployment is done, we can see that all the functions are properly deployed.

Of course we can use environments other than PowerShell, as long as they support REST API calls with file uploads.

With this approach, we:

  • Use our own repository structure,
  • Deploy Azure Functions by creating a zip file, and
  • Are able to use the web editor in case we need it.

MSdeploy.exe / WAWSDeploy.exe

Another one is using the classical msdeploy.exe utility. If we prefer, we can alternatively use WAWSDeploy to utilise Azure web app publish settings.

As we can see above, we have the WAWSDeploy utility and the ase-dev-fn-demo.PublishSettings file downloaded from the Azure web app. Simply run the following command in our Command Prompt console:

With this, it deploys directly from your local machine (or build server) to the Azure web app. Once it is done, we will be able to see the same result:

This also enables us to:

  • Use our own repository structure,
  • Deploy Azure Functions by creating a zip file, and
  • Use the web editor in case we need it.

So far, I have briefly described what kind of strategy we can take for Azure Functions deployment. Each approach has its own benefits and drawbacks. Therefore, based on our business requirements, we can wisely choose one of them.

Streaming HoloLens Video to Your Web Browser

I am lucky enough to have a play around with a HoloLens at my office. One of the interesting questions around HoloLens is how to share what the user wearing the HoloLens is actually viewing. In this post, I am going to briefly describe how to set up video streaming from HoloLens to your web browser.

Activating Developer Mode and Device Portal

First of all, Developer Mode in HoloLens must be activated. This is the actual screen the HoloLens user can see. Go to Settings.

Select the Updates & Security menu.

There is a menu, For Developers, at the bottom-left corner. Air-tap the menu.

Now, if the Developer Mode is not activated, turn it on.

Scroll down a bit and you will see the menu, Device Portal. This also needs to be activated. Once we complete this step, we will be able to access HoloLens from our web browser.

Confirming IP Address

Now, we have HoloLens set up for web browser access. We need to identify which internal IP address HoloLens is currently using. Go to the Settings screen again and select the Network & Internet menu.

Air tap Advanced Options on the Wi-Fi screen.

Now, we can identify the IP address used by HoloLens. In this screen, the internal IP address is 192.168.1.6.

HoloLens Web Portal

We know the IP address for HoloLens, 192.168.1.6. Open a web browser and access the web portal. We will see a certificate error in the browser when accessing https://192.168.1.6, but don't panic. Just proceed for now.

Now, we see the web portal screen. We need to register a user for streaming. Go to the Security menu at the top of the page. This will guide us to the user registration page.

If you click the Request PIN button, a PIN will pop up in the HoloLens view. Get the PIN and enter it in the web browser, along with a username and password.

Note: The username and password don't have to be the same as our Microsoft account or Office 365 account.

User registration completed.

This Official HoloLens Document will give us more details about setting up Device Portal and User Registration.

Streaming HoloLens Video to Web Browser

Let’s try streaming. HoloLens bundles Mixed Reality Capture (MRC) tools that enable streaming through web browsers.

Click the Mixed Reality Capture menu on the left-hand side, followed by the Live preview button in the middle of the screen. We'll be able to see a small live preview pane just underneath the button. Open the developer tools in your web browser and grab the live streaming URL. The URL might look like:

api/holographic/stream/live_high.mp4?holo=true&pv=true&mic=true&loopback=true

By combining the username, password, IP address and the streaming URL, we can access the live stream directly, without relying on the web portal. We've got an IP address of 192.168.1.6 in this post, so the direct streaming URL might look like:

https://[USERNAME]:[PASSWORD]@192.168.1.6/api/holographic/stream/live_high.mp4?holo=true&pv=true&mic=true&loopback=true

Enter the URL in the location bar of our web browser and we are now able to watch the live stream.

Broadcasting HoloLens over the Internet

So far, we have had a brief overview of how to stream live HoloLens video. It only works within a private network. However, if we share our browser screen through Skype, Hangouts or OBS, we can easily broadcast it.

Keynote & Demo Video

Here is a keynote and demo video for HoloLens that I used at a meetup.

Passing Parameters to Linked ARM Templates

Recently, my workmate Vic wrote some great posts regarding Azure Linked Templates. This is a supplementary post to his, showing how to share parameters across the linked templates.

Scripts and templates used in this post can be found at: https://github.com/devkimchi/Linked-ARM-Templates-Sample

parametersLink and parameters Properties

We have a master template, master-deployment.json, and it looks like:

Each nested template has a parameter called environment that has the same value as the one in the master template. Each template also has a corresponding parameter file. Therefore, as above, we use both the parametersLink and parameters properties to handle them. However, life is not easy.

Oops! We can't use both parametersLink and parameters at the same time. We have to use one or the other. This is by design at the time of this writing. Because the environment parameter is a common denominator across all the nested templates, it is natural to think that the parameter could be passed from the master template to the nested ones.

How can we work this out then? There are several workarounds.

  1. Add the common parameters to each parameter JSON file and upload it to the Azure Storage Account programmatically.
  2. Create a set of parameter files for all possible combinations.
  3. Use something to refer to during the template deployment, whose value changes over time during the deployment but whose reference point remains the same.

Update Parameters Programmatically

The easiest way is the first option. It can be simply achieved by running a separate PowerShell script before the deployment. Let’s have a look.

This is merely to upload the nested template files to the Azure Storage Account. It excludes the parameter files because they need to be updated before being uploaded. Let's have a look at the following PowerShell script.

The core part of the script is:

  • To read parameter file as a JSON object,
  • To add the environment property to the JSON object, and
  • To overwrite the parameter file.

Then the updated parameter files are uploaded to the Azure Storage Account. Now our Storage Account has all the nested templates and their parameter files. Let's run the master template again, without the parameters property.

Tada~! It all works fine!

So far, we've taken a look at how to work around the restriction that we can't use both the parametersLink and parameters properties at the same time. We obviously need the help of another script to update the parameters and run the linked templates. It's definitely not an ideal scenario, but it certainly works. Doing this update by hand is tedious and should be avoided; in CI/CD pipelines, however, it isn't an issue because we can automate it.