API Mocking for Developers

APIs are the most common way to exchange messages in a microservices architecture. There are two different approaches to API development: one is called Model First and the other Design First. Usually the latter, also known as Spec-Driven Development (SDD), is preferred over the former.

When is the Model First approach useful? Running legacy API applications is a good example. If those systems are reasonably well structured, API documents can easily be extracted with tools based on Swagger, which has since been renamed to OpenAPI. There are many Swagger implementations, Swashbuckle for example, so extracting an API spec document with those tools is straightforward.

What if we are developing a new API application? Should we develop the application first and extract its API spec document as above? We certainly can, and there's nothing wrong with that. But what happens when the API spec is updated? The application has to be updated first and the updated API spec extracted afterwards, which can be expensive. In this case, the Design First approach is useful because we can settle the entire API spec before the actual implementation, which reduces time and cost and improves productivity. Here's a rough outline of the SDD workflow:

  1. Design API spec based on the requirements.
  2. Simulate the API spec.
  3. Gather feedback from API consumers.
  4. Validate the API spec against the requirements.
  5. Publish the API spec.

There is an interesting point here: how can we run or simulate the API without an actual implementation? This is where API mocking comes in. By mocking APIs, the front-end developers, mobile developers or other back-end developers who consume the APIs can simulate the expected results, regardless of whether the APIs have actually been implemented yet. Mocked APIs also make it much easier for those consumers to provide feedback.

In this post, we will have a look at how API mocking works in different API management tools – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discuss the pros and cons of each.

MuleSoft API Manager + RAML

MuleSoft API Manager is a strong supporter of the RAML (RESTful API Modeling Language) spec. RAML 0.8 is still the most widely used version, but version 1.0 has recently been released. RAML follows the YAML format, which makes specs more readable for API developers. Here's a simple RAML-based API spec document.
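A minimal sketch of such a document is below; the title, base URI, field names and example payloads are illustrative only, while the endpoint/method layout follows the description that comes next.

```yaml
#%RAML 1.0
title: Product API
version: v1
baseUri: https://api.example.com/{version}
mediaType: application/json

/products:
  get:
    responses:
      200:
        body:
          application/json:
            example: |
              [{ "productId": 1, "name": "Surface Book", "price": 2499.00 }]
  post:
    responses:
      201:
        body:
          application/json:
            example: |
              { "productId": 1, "name": "Surface Book", "price": 2499.00 }
  /{productId}:
    get:
      responses:
        200:
          body:
            application/json:
              example: |
                { "productId": 1, "name": "Surface Book", "price": 2499.00 }
    patch:
      responses:
        200:
          body:
            application/json:
              example: |
                { "productId": 1, "name": "Surface Book", "price": 2399.00 }
    delete:
      responses:
        204:
```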

According to the spec document, the API has two endpoints, /products and /products/{productId}, which define GET, POST, PATCH and DELETE methods respectively. The keen-eyed may already have spotted which part of the spec document is used for mocking: each response body has its own example node. These nodes are what MuleSoft API Manager uses for mocking. Let's see how we can mock both API endpoints.

First of all, login to Anypoint website. You can create a free trial account, if you want.

Navigate to API Manager by clicking either the button or icon.

Click the Add new API button to add a new API and fill the form fields.

After creating an API, click the Edit in API designer link to define APIs.

We have already got the API spec in RAML, so just import it.

The designer screen consists of three sections – left, centre and right. The centre section shows the RAML document itself, while the right section displays how the RAML is visualised as an API document. Now, click the Mocking Service button at the top-right corner to activate mocking. Once mocking is enabled, the original URL is commented out and a new mocking URL is added, like below:

Of course, when mocking is deactivated, the mocking URL disappears and the original URL is uncommented. With this mocking URL, we can send requests through Postman for simulation. In addition, developers consuming this API don't have to wait until it's implemented – they can just start using it, and the API developer can work in parallel with the front-end/mobile developers. This is a screenshot of using Postman to send API requests.

As defined in the example nodes of the RAML spec document, the pre-populated result comes back. In the same way, other endpoints and/or methods return their mocked results.

So far, we have looked at how MuleSoft API Manager handles RAML and API mocking. It's very easy to use: we just imported a RAML spec document and switched on the mocking feature, and that's it. Mocking can't get easier than this. It also supports the Swagger 2.0 spec: when we import a Swagger document, it is automatically converted to a RAML 1.0 document. However, in the API designer we still have to work in RAML. It would be great if MuleSoft API Manager supported editing Swagger spec documents out of the box sooner rather than later, as Swagger is the de-facto standard nowadays.

There are a couple of downsides to using API Manager. We can't have precise control over individual endpoints and methods:

  • “I want to mock only the GET method on this specific endpoint!”
  • “Nah, it’s not possible here. Try another service.”

Also, the API designer changes the base URL of the API when we activate the mocking feature. This can be critical for the developers consuming the API during their development, because they have to change the API URL once the API is actually implemented.

Now, let’s move onto the example supporting Swagger natively.

Azure API Management + Swagger

Swagger has now become OpenAPI, and spec version 2.0 is the most popular; version 3.0 has recently been released for preview. Swagger is versatile – it supports both YAML and JSON formats. Therefore, we can design the spec in YAML and save it as a JSON file so that Azure API Management can import the spec document natively. Here's the same API written in YAML based on the Swagger spec.
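A minimal sketch of that Swagger 2.0 document follows; the title, field names and example values are illustrative only.

```yaml
swagger: "2.0"
info:
  title: Product API
  version: "1.0"
produces:
  - application/json
paths:
  /products:
    get:
      responses:
        "200":
          description: List of products
          examples:
            application/json:
              - productId: 1
                name: Surface Book
                price: 2499.00
  /products/{productId}:
    get:
      parameters:
        - name: productId
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: A single product
          examples:
            application/json:
              productId: 1
              name: Surface Book
              price: 2499.00
```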

It looks very similar to RAML, including an examples node in each endpoint and/or method. Therefore, we can easily activate the mocking feature. Here's how.

First of all, we need to create an API Management instance in the Azure Portal. Provisioning takes about half an hour. Once the instance is ready, click the APIs - PREVIEW blade on the left-hand side to see the list of APIs.

We can also see tiles for API registration. Click the Open API specification tile to register API.

Upload a Swagger definition file in JSON format, say swagger.json, and enter an appropriate name and suffix. We just use shop as the suffix for now.

So, that's it! We just uploaded the swagger.json file, and that completes the API definition in API Management. Now we need to mock an endpoint. Unlike MuleSoft API Manager, Azure API Management can handle mocking on endpoints and methods individually. Mocking is set in the Inbound processing tile, as it intercepts the request before it hits the back-end. Mocking can also be set at a global level rather than per operation. For now, we simply set up mocking on the /products endpoint with the GET method only.

Select /products - GET at the left-hand side and click the pencil icon at the top-right corner of the Inbound Processing tile. Then we’re able to see the screen below:

Click the Mocking tab, select the Static responses option on the Mocking behavior item, and choose the 200 OK option of the Sample or schema responses item, followed by Save. We can expect the content defined under the examples node. Once saved, the screen will display something like this:

In order to call APIs through API Management, we have to send a subscription key header in every request. Go to the Products - PREVIEW blade and add the API that we defined above.

Go to the Users - PREVIEW blade to get the Subscription Key.

We’re ready now. In Postman, we can see the result like:

The subscription key is sent through the header with the key Ocp-Apim-Subscription-Key, and the result is what we expected – exactly what was defined in the Swagger definition.
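For reference, the same request can be made in code. Here's a minimal C# sketch; the API Management instance name, the shop suffix and the subscription key are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The subscription key comes from the Users blade of the API Management instance.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<subscription-key>");

            // 'shop' is the suffix we chose when registering the API.
            var response = await client.GetAsync("https://<apim-instance>.azure-api.net/shop/products");

            Console.WriteLine((int)response.StatusCode);                    // 200 – static mocked response
            Console.WriteLine(await response.Content.ReadAsStringAsync());  // body from the examples node
        }
    }
}
```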

So far, we have used Azure API Management with Swagger definition for mocking. The main differences between MuleSoft API Manager and Azure API Management are:

  • Azure API Management doesn't change the endpoint URL when mocking, while MuleSoft API Manager does. That's actually very important for front-end/mobile developers, because they aren't bothered by a changing base URL – the same URL returns either the mocked or the actual result, depending on the setting.
  • Azure API Management can mock endpoints and methods individually, so we only mock the necessary ones.

However, the downside of using Azure API Management is the cost. It costs more than $60/month on the Developer pricing tier, which is the cheapest. MuleSoft API Manager is literally free to use from the mocking perspective (of course, we have to pay for other MuleSoft services).

What other services can we use together with Swagger? There is, of course, Amazon Web Services. Let's have a look.

Amazon API Gateway + Swagger

Amazon API Gateway also uses Swagger for its service. Log in to the API Gateway Console and import the Swagger definition shown above.

Once imported, we can see all the API endpoints. We choose the /products endpoint with the GET method. Select the Mock option for the Integration type item, and click Save.

Now we're at the Method Execution screen. Looking at the right-hand side of the screen, it says Mock Endpoint, which is where this API will hit. Click the Integration Request tile.

This confirms it is a mocked endpoint. Right below, select the When there are no templates defined (recommended) option for the Request body passthrough item.

Go back to the Method Execution screen and click the Integration Response tile.

There's already a definition for the HTTP Status Code 200 in Swagger, which is automatically shown on the screen. Click the triangle icon on the left-hand side.

Open the Body Mapping Templates section and click the application/json item. This reveals the sample data input field, where we need to enter a JSON object as the response example by hand.
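For the GET method on /products, for example, we might paste a static array like the following; the field names and values are illustrative and should match your own Swagger definition:

```json
[
  { "productId": 1, "name": "Surface Book", "price": 2499.00 },
  { "productId": 2, "name": "Surface Pro 4", "price": 1299.00 }
]
```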

I couldn't find any other way to populate this sample data automatically. In particular, an array-type response object can't be auto-generated. This is questionable – the Swagger definition already contains a perfectly valid example object, so why doesn't Amazon API Gateway use it? If we want to avoid this situation, we have to update our Swagger definition in a vendor-dependent way.

Once the sample data is updated, save it and go back to the Method Execution screen again. We are ready to use the mocking feature. Wait. One more step. We need to publish for public access.

Uh oh… we found another issue here. In order to publish (or deploy) the API, we have to set up the Integration Type of every endpoint and method INDIVIDUALLY!!! We can't skip a single endpoint/method, and we can't set up a global mocking feature either. This is a simple example with only four endpoints/methods, but in a real-life scenario there can be hundreds of API endpoints/methods in one application. How would we set up each of those individually? There's no good way to do it here.

Anyway, once deployment completes, API Gateway gives us a URL for accessing the API endpoints. Use Postman to see the result:

So far, we have looked at Amazon API Gateway for mocking. Let's wrap up this post.

  • Global API Mocking: MuleSoft API Manager provides a one-click button for this, while Amazon API Gateway doesn't provide such a feature.
  • Individual API Mocking: Both Azure API Management and Amazon API Gateway handle individual endpoints/methods well, while MuleSoft API Manager can't do this. However, Amazon API Gateway doesn't return mocked data well, especially for array-type responses, whereas Azure API Management supports this perfectly.
  • Automation of Uploading API Definitions: with Amazon API Gateway we have to manually update several things after uploading a Swagger definition, such as the examples used as mocked data. On the other hand, both Azure API Management and MuleSoft API Manager handle this perfectly; there's no manual work after uploading the definition.
  • Cost of API Mocking: Azure API Management is horrible from the cost perspective. MuleSoft provides a virtually free account for mocking, and Amazon API Gateway offers a free tier for the first 12 months.

We have briefly looked at how we can mock our APIs using spec documents. As we saw above, we don't need to write any code for this; mocking relies purely on the spec document. We also looked at how mocking is done on each platform – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discussed the merits and demerits of each service from the mocking perspective.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, budgets shifted towards hosting space. Nowadays, in cloud environments, organisations mostly spend their funds purchasing computing power. Here's a simple diagram of the cloud computing evolution: from left to right, expenditure shifts from infrastructure to computing power.

In a cloud environment, when we need resources we just create and use them, and when we don't need them any longer we just delete them. But let's think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in the dev or test environments is likely to be overlooked unless carefully monitored. In that case, your organisation might receive a massive invoice – something that has to be avoided. In this post, we're going to have a look at the Azure Billing API, released in preview, and build a simple application to monitor costs in an effective way.

The sample codes used for this post can be found here.

Azure Billing API Structure

There are two distinct APIs for Azure Billing – the Usage API and the Rate Card API. By combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to calculate how many resources we used in a specified period. Here are the parameters we can use in these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer.
  • Details: Either true or false. This determines whether usage is split down to the instance level. If false, usage for instances of the same type is aggregated.

Here's an interesting point about the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives: the resources were actually used at the specified date/time, or the resource usage events were reported to the billing system at the specified date/time. The distinction exists because Azure is a distributed system scattered all around the world, and depending on the data centre where the resources sit, the actual usage can be reported to the billing system with some delay. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you open a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called Offer Durable ID and, based on this, different rates on resources apply. Please refer to this page to see more details about various types of offers. In order to send requests for this, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code of the region where you purchased this offer. eg) AU

Therefore, in order to calculate the actual spending, we need to combine these two API responses. Fortunately, there’s a good NuGet library called CodeHollow.AzureBillingApi. So we just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants access to the company's subscription without restriction so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there's a high chance that those resources won't be destroyed in a timely manner, which results in unnecessary spend. Therefore, the management team has decided to control cost by resource group: 1) assigning resource group owners, 2) setting a total spend limit, and 3) setting a daily spend limit, using tags. With these tags in place, resource group owners are notified via email when the cost approaches 90% of the total spend limit and again when it reaches the total spend limit. They are also notified if the cost exceeds the daily spend limit, so they can take appropriate action on their resource groups.

Sounds simple, right? Let’s code it!

Once the application is written, it should run daily to aggregate all costs, store them in a database, and send notifications to the owners of resource groups that meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts. Firstly, they call the Azure Billing API and aggregate the data by date and resource group. Secondly, they store the aggregated data in a database. Finally, they send notifications to the owners of resource groups that exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi takes away a huge amount of the API-calling work: a few lines of code are enough to fetch both the usage and rate card data for a subscription.

Once all resource usage/cost data has been fetched, it needs to be grouped by date and resource group.
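The grouping might look like the following sketch; the cost item type and its property names are hypothetical placeholders for whatever shape the billing library returns:

```csharp
// costItems: the usage/cost records fetched from the Billing API (shape assumed).
var dailyCostsByResourceGroup = costItems
    .GroupBy(c => new
    {
        Date = c.UsageStartTime.Date,
        // Resource IDs look like /subscriptions/{id}/resourceGroups/{name}/providers/...,
        // so the resource group name is the fifth segment when split by '/'.
        ResourceGroup = c.ResourceId.Split('/')[4].ToLowerInvariant()
    })
    .Select(g => new
    {
        g.Key.Date,
        g.Key.ResourceGroup,
        Cost = g.Sum(c => c.Cost)
    })
    .OrderBy(x => x.Date)
    .ThenBy(x => x.ResourceGroup)
    .ToList();
```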

We now have all cost-related data per resource group. Next, we need to fetch the tag values from the resource groups using another API call and merge them with the data populated previously.

We can look up all resource groups in a given subscription and merge the result with the cost data we found previously.

Data Storage

This is the simplest part. Just use Entity Framework and store data into the database.

We've now implemented the data aggregation part.

Notification

First of all, we need to fetch resource groups that meet conditions, which is not that hard to write.
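The filtering logic might look like the following sketch; the property names for the aggregated costs and tag-based limits are assumptions:

```csharp
var alerts = resourceGroupCosts
    .Where(rg =>
        rg.TotalCost >= rg.TotalSpendLimit * 0.9m ||   // 1) approaching the total spend limit
        rg.TotalCost >= rg.TotalSpendLimit ||          // 2) total spend limit exceeded
        rg.DailyCost >= rg.DailySpendLimit)            // 3) daily spend limit exceeded
    .ToList();
```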

The code above is self-explanatory: it only returns resource groups that 1) are approaching the total spend limit, 2) have exceeded the total spend limit, or 3) have exceeded the daily spend limit. It works well, even though it looks a bit smelly.

The following code bits show how to send notifications to the resource group owners.
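A minimal sketch might simply loop over the offending resource groups; the property names are assumptions:

```csharp
foreach (var alert in alerts)
{
    // Replace Console.WriteLine with SendGrid (email) or Twilio (SMS) in a real implementation.
    Console.WriteLine(
        $"[{alert.ResourceGroup}] owner {alert.Owner}: cost {alert.TotalCost:C} has hit a configured spend limit.");
}
```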

It only writes alarms to the screen, but this is where we could plug in SendGrid for email notifications or Twilio for SMS alerts.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest way to do this. Once the console app is built, it can be deployed to an Azure WebJob straight away.
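A minimal sketch of the console application might look like this; the service class names are assumptions:

```csharp
public static class Program
{
    public static void Main(string[] args)
    {
        // Collects usage/cost data from the Billing API, aggregates it and stores it.
        var aggregator = new AggregatorService();
        aggregator.ProcessAsync().GetAwaiter().GetResult();

        // Sends alerts to resource group owners that have hit a spend limit.
        var reminder = new ReminderService();
        reminder.ProcessAsync().GetAwaiter().GetResult();
    }
}
```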

The Aggregator service collects and stores data, and the Reminder service sends alerts to resource group owners. In order to deploy this to an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: This contains the CRON expression for the scheduled job. For example, if this WebJob should run every night at 00:20, the schedule might look like the first sketch after this list.

  • run.cmd: When this WebJob runs, it always looks for run.cmd first, which is a simple batch command file. Therefore, if necessary, we can enter the actual executable command with appropriate arguments into this file (see the second sketch below).
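Here are minimal sketches of both files, assuming the console app builds to ResourceUsageMonitor.exe (a hypothetical name). settings.job uses a six-field CRON expression with seconds first, so 00:20 every night becomes:

```json
{
  "schedule": "0 20 0 * * *"
}
```

And run.cmd simply invokes the executable:

```
ResourceUsageMonitor.exe
```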

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. In this case, however, we HAVE TO make sure of one thing:

Azure Functions instance MUST be with App Service Plan, NOT Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest and 30-40 minutes at the longest. Such an execution time is not a good fit for the Consumption Plan, which charges based on execution time and limits how long a single function can run. On the other hand, as we have already paid for the App Service Plan, we don't pay anything extra for the Functions instance if we create it under that App Service Plan.

A Timer Trigger function suits our purpose, and the precompiled Azure Functions approach makes it even more convenient.
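A minimal sketch of such a precompiled Timer Trigger function might look like this; the class and service names are assumptions:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class CostMonitoringFunction
{
    public static void Run(TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"Cost monitoring started at {DateTime.UtcNow:O}");

        // Same services as the WebJob version: aggregate first, then notify.
        var aggregator = new AggregatorService();
        aggregator.ProcessAsync().GetAwaiter().GetResult();

        var reminder = new ReminderService();
        reminder.ProcessAsync().GetAwaiter().GetResult();
    }
}
```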

The function.json for this Timer Trigger defines the schedule and the precompiled entry point.
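A minimal sketch, where the schedule, assembly name and entry point are assumptions:

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "name": "myTimer",
      "schedule": "0 20 0 * * *",
      "direction": "in"
    }
  ],
  "scriptFile": "..\\bin\\ResourceUsageMonitor.dll",
  "entryPoint": "ResourceUsageMonitor.CostMonitoringFunction.Run"
}
```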

Here we have shown how to quickly write a simple application for cost monitoring using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flip side is, of course, that we have to be careful not to be wasteful. Implementing a monitoring application like this helps prevent unwanted cost leaks.

Precompiled Azure Functions Revisited

Since Microsoft released a preview version of Visual Studio Tools for Azure Functions in December 2016, it has been reported as very buggy. The current roadmap says the tooling won't be finalised until .NET Standard 2.0 is released. In the meantime, the Azure App Service team published an article on using ASP.NET Web Application projects instead. Even though the approach requires quite a bit of manual setup, it is currently the only practical way to work with Azure Functions in Visual Studio without the tooling.

In this post, as a complement to that article, we are going to build a simple Web API using HTTP Triggers and implement a simple CQRS pattern using Queue Triggers, following the precompiled approach. I hope this post offers some more detail.

The sample code used for this post can be found here.

SIDE NOTES:

  • We use VS2015 in this post. If you want to use VS2017, that would be fine.
  • We only use .NET Framework version 4.6 as Azure Functions uses this version.

Benefits of Precompiled Azure Functions

There are several benefits when we use Azure Functions that come as precompiled .dll files:

  1. We can use full features on Visual Studio, including IntelliSense.
  2. We can easily write unit test codes.
  3. We can easily attach function codes to existing CI/CD pipelines.
  4. We can easily migrate an existing codebase with barely any modification.
  5. We don’t need project.json for NuGet package management.
  6. We can reduce cold start time by removing on-the-fly compilation when requests hit the Functions.

Let’s find out a scenario we’re using in this post.

Web APIs to Create or View Product Details

We're creating two Web API endpoints – one to add/update product details, and the other to view product details. We'll also add another function using an Azure Storage Queue to implement a simple CQRS pattern.

Entity Framework Code-First Approach for Product Details

First of all, we’re creating a class library to take care of database transaction using Entity Framework. Here are a simple product class and database context.
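A minimal sketch of both classes; the names and properties are assumptions:

```csharp
using System.Data.Entity;

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductDbContext : DbContext
{
    // The connection string name is resolved from configuration at runtime.
    public ProductDbContext() : base("Name=ProductDbContext") { }

    public DbSet<Product> Products { get; set; }
}
```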

We can find this at the PrecompiledSample.EntityModels project.

API Request/Response Object

Next, we create a simple DTO, ProductModel.cs, so that we don't directly expose the database entity.

We can find this at the PrecompiledSample.Models project.

Service Layer

Lastly, when we develop Web API applications, we usually implement service layers that are called by controllers. As there are no controllers in Azure Functions, those service layers are instantiated within the Run method of the Azure Functions code instead.

We can find this at the PrecompiledSample.Services project.

So far, have you found out any difference? Probably not. Now, it’s time to write Azure Functions code.

Creating Web Application Project for Azure Functions

As Azure Functions basically run on top of an Azure Web App instance, we can start with an empty ASP.NET Web Application project, targeting .NET Framework 4.6, within Visual Studio.

Of course there’s nothing in the project, except packages.config and Web.config.

Install the NuGet packages required to run Azure Functions within our Web Application project.

For our simple Web API we also need a few more NuGet packages for the database transactions.

We’ve completed our basic setup for Azure Functions code. Let’s move on.

Writing Azure Functions

Let’s think about the Azure Functions app structure. Each folder of the Web Application project works as an individual function. Therefore, we can simply create folders and put function.json in each of them. Let’s have a look.

  • AddProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes POST requests, sends the message body to an Azure Storage Queue and immediately returns HTTP Status Code 202 (Accepted).
  • AddProductQueueTrigger: This is a Queue Trigger that handles the database transaction. It watches the Azure Storage Queue, takes messages from it and processes them into the database.
  • GetProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes GET requests and returns product details with HTTP Status Code 200 (OK).

Here’s a high-level diagram how POST request is processed through Functions.

We mentioned function.json just above. What do we put inside it, then? The easiest way to check its content is simply to create a function on an Azure Functions instance and look at the generated file. Let's have a look.

AddProductHttpTrigger

If we create an HTTP Trigger function in the Azure Portal, we can see its function.json there.

There are two bindings defined – one for the input binding and the other for the output binding. The input binding has a type of httpTrigger, while the output binding has a type of http. Copy it all and paste it into the function.json in our Visual Studio project, then add another output binding with the type of queue. We also need to provide the precompiled .dll file's information, like below:
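The result might look like the following sketch; the queue name, assembly name and entry point are assumptions:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "direction": "in",
      "authLevel": "function",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "name": "res",
      "direction": "out"
    },
    {
      "type": "queue",
      "name": "queue",
      "queueName": "products",
      "connection": "AzureWebJobsStorage",
      "direction": "out"
    }
  ],
  "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
  "entryPoint": "PrecompiledSample.Functions.AddProductHttpTrigger.Run"
}
```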

As defined above, we need AddProductHttpTrigger.cs.

Copy the code from Run.csx of Azure Portal and paste it into the file, then modify it like below:
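A minimal sketch of the result; the namespace, class name and queue binding name are assumptions:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

namespace PrecompiledSample.Functions
{
    public static class AddProductHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(
            HttpRequestMessage req, IAsyncCollector<string> queue, TraceWriter log)
        {
            log.Info("AddProductHttpTrigger invoked");

            // Forward the raw request body straight onto the queue (the C of CQRS).
            var payload = await req.Content.ReadAsStringAsync();
            await queue.AddAsync(payload);

            // Return 202 Accepted – processing happens asynchronously in the queue trigger.
            return req.CreateResponse(HttpStatusCode.Accepted);
        }
    }
}
```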

What are the differences between this code and Run.csx?

  1. There’s no #r directive. This is because the function code is no longer a script.
  2. There are namespace and class name definitions. The previous script-style function code didn’t allow those definitions around the Run method.
  3. The message body from the request is sent directly to the queue.
  4. At the same time, it returns HTTP Status Code 202 (Accepted). This is more semantically correct than returning 200 (OK), as the function is non-blocking and asynchronous.

We’ve now created the API endpoint to create a resource, which is corresponding to the C of the CQRS pattern.

AddProductQueueTrigger

When we create a sample Queue Trigger function code, we can see the function.json. Copy and paste it into our function.json in Visual Studio and define the entry point like:
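The result might look like this sketch; the queue name, assembly name and entry point are assumptions:

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "name": "message",
      "queueName": "products",
      "connection": "AzureWebJobsStorage",
      "direction": "in"
    }
  ],
  "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
  "entryPoint": "PrecompiledSample.Functions.AddProductQueueTrigger.Run"
}
```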

Create an AddProductQueueTrigger.cs file, copy the code from Run.csx into it, and modify it like below:
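A minimal sketch; the model, service and connection string names are assumptions:

```csharp
using System.Configuration;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;

using PrecompiledSample.Models;    // ProductModel
using PrecompiledSample.Services;  // ProductService

namespace PrecompiledSample.Functions
{
    public static class AddProductQueueTrigger
    {
        public static async Task Run(string message, TraceWriter log)
        {
            log.Info("AddProductQueueTrigger invoked");

            // Read from the App Settings blade in Azure (or appsettings.json locally), not Web.config.
            var connectionString =
                ConfigurationManager.ConnectionStrings["ProductDbContext"].ConnectionString;

            var product = JsonConvert.DeserializeObject<ProductModel>(message);

            using (var service = new ProductService(connectionString))
            {
                await service.AddProductAsync(product);
            }
        }
    }
}
```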

This function actually does database transactions. Let’s have a look:

  1. While normal web applications read their database connection strings from Web.config, Azure Functions is not designed to read from it. Instead, connection strings are defined in the App Settings blade of the Azure Functions portal. In spite of this, we can still write code the same way: ConfigurationManager.ConnectionStrings["NAME"].ConnectionString.
  2. If we want additional configuration values, use ConfigurationManager.AppSettings["KEY"], defined in the App Settings blade of the Azure Functions portal. We can’t use a custom configuration section for this purpose. Alternatively, if you really need custom configuration settings, create a JSON file, mysettings.json for example, and deserialise it using Json.NET.
  3. It uses the service layer instance that is written in another project, PrecompiledSample.Services. If you are concerned about dependencies, consider the service locator pattern. Testing Precompiled Azure Functions explains how to apply the service locator pattern in Azure Functions.

So far, we have written function codes for resource creation.

GetProductHttpTrigger

Copy the function.json from the HTTP Trigger function created earlier in the portal, paste it into the function.json in Visual Studio, and modify it like:
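The result might look like this sketch; the assembly name and entry point are assumptions:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "direction": "in",
      "authLevel": "function",
      "methods": [ "get" ]
    },
    {
      "type": "http",
      "name": "res",
      "direction": "out"
    }
  ],
  "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
  "entryPoint": "PrecompiledSample.Functions.GetProductHttpTrigger.Run"
}
```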

Then, create the GetProductHttpTrigger.cs file.

Here’s the code for it:
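A minimal sketch; the service and connection string names are assumptions:

```csharp
using System;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

using PrecompiledSample.Services;  // ProductService

namespace PrecompiledSample.Functions
{
    public static class GetProductHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            log.Info("GetProductHttpTrigger invoked");

            // Read the product id from the querystring, eg ?id=1 (the Q of CQRS).
            var id = req.GetQueryNameValuePairs()
                        .FirstOrDefault(q => q.Key.Equals("id", StringComparison.OrdinalIgnoreCase))
                        .Value;

            var connectionString =
                ConfigurationManager.ConnectionStrings["ProductDbContext"].ConnectionString;

            using (var service = new ProductService(connectionString))
            {
                var product = await service.GetProductAsync(Convert.ToInt32(id));
                return req.CreateResponse(HttpStatusCode.OK, product);
            }
        }
    }
}
```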

Let’s have a look at the code.

  1. This function code takes GET requests, so it looks up the id query parameter from the querystring and uses it as the ProductId value.
  2. It uses the service layer instance that is written in another project, PrecompiledSample.Services.
  3. It returns the resource details with HTTP Status Code of 200 (OK).

So far, we have created the resource lookup API, corresponding to the Q of the CQRS pattern, and with that we've completed all the necessary function code. If we integrate this Web Application project with the Azure Functions CLI, we can easily test and debug in our local environment with the same development experience that Visual Studio offers.

Setting Up Debugging Environment for Azure Functions within Visual Studio

We need two more tools for our local debugging experience within Visual Studio.

In order to install the Azure Functions CLI, we can simply run npm install --global azure-functions-cli. Note that these tools only work on Windows at the time of writing. Once both are installed, open the project property window like below:

Move to the Web tab and enter the necessary information.

When we integrate Azure Functions CLI with the Web Application project, there are a few points that we need to make sure.

  • The installed location of node.js might be different:
    • If it is downloaded from https://nodejs.org, the CLI location would be C:\Users\[USERNAME]\AppData\Roaming\npm\node_modules\azure-functions-cli\bin\func.exe.
    • If it is installed through NVM, the CLI would be located at C:\Program Files\nodejs\node_modules\azure-functions-cli\bin\func.exe.
  • For Command line arguments, the value is host start, which runs the WebJobs host on our local machine.
  • Working directory needs to have the absolute path of the Web Application project where Azure Functions codes reside.

Unfortunately, we can’t use % environment variables here.

Finally, we need to add two .json files – appsettings.json and host.json. host.json can stay empty unless we need specific settings, but we do need to put some details into appsettings.json, like below:
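A minimal sketch of appsettings.json; the storage values follow the Functions CLI conventions, while the connection string section name and value are assumptions:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true"
  },
  "ConnectionStrings": {
    "ProductDbContext": "Server=(localdb)\\MSSQLLocalDB;Database=PrecompiledSample;Integrated Security=True;"
  }
}
```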

As we're using the Azure Storage Emulator, the value UseDevelopmentStorage=true is used. The database connection string is also defined in this file.

Now, it’s time for debugging! Set the Web Application project as a startup project and punch the F5 key, then we’ll be able to see the command prompt window that is running Azure Functions CLI.

Let's send a POST request through a REST API testing tool like Postman. As we can see in the screenshot above, the endpoint URL for resource creation is http://localhost:7071/api/AddProductHttpTrigger, so send a POST request like below:
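For example, something like the following request; the JSON body is illustrative and should match the ProductModel shape:

```
POST http://localhost:7071/api/AddProductHttpTrigger HTTP/1.1
Content-Type: application/json

{
  "productId": 1,
  "name": "Surface Book",
  "price": 2499.00
}
```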

Then, the code stops at the breakpoint we set in Visual Studio.

How does that feel? We get the same development experience for Azure Functions development. Now, it's time to deploy to an actual Azure Functions instance.

Deploying Azure Functions

We have developed Azure Function codes within a Web Application project. That means we will have the same deployment experience.

When we choose the publish menu like above, we can select either Azure App Service option

or import publish profile settings file downloaded from the Azure Function instance.

Once we complete deployment, we can confirm in the Azure Functions portal that all function code has been successfully deployed. Please note that we don't need a separate web application project for each function; putting everything in one web application project is sufficient.

Once deployed, let's send a POST request to the endpoint for the AddProductHttpTrigger function. The request body will flow through the pipeline shown in the diagram earlier.

Once data has been processed, let’s check the result on the Azure Function side:

And here’s the database query result.

So far, we have built a sample Azure Functions app using a Web Application project in Visual Studio, using .dll files instead of .csx files for the function code. With these precompiled .dll libraries, we have performed debugging and deployment as well. What do you think? Is it easier to use? Does it give you the same development experience? It may not seem easy at first glance, but because this is the same approach we use to develop a web application, it's easy to get used to.

Hopefully this post helps you write Azure Functions code with full Visual Studio support.

MuleSoft Anypoint Studio in High DPI Mode

MuleSoft's development platform, Anypoint Studio, is a great tool for service integration. However, if we're using an OS that supports High DPI mode, like Windows 10, the user experience is not great.

Icons are barely recognisable! By the way, this is NOT an issue on the Anypoint Studio side. Rather, it's a well-known issue that Eclipse has had since at least 2013. Anypoint Studio is built on top of Eclipse, so the same issue appears here.

The version of Anypoint Studio is 6.2.3 at the time of writing.

So, how to fix this? Here’s the magic.

First of all, create a manifest file in the same location as AnypointStudio.exe and name it AnypointStudio.exe.manifest. Its content will look like:
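A minimal sketch of the manifest is below; the full manifest extracted from AnypointStudio.exe contains more elements, but the DPI-awareness setting is the part that matters here:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings xmlns:ws="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
      <!-- Tell Windows the application is NOT DPI-aware so it gets scaled instead -->
      <ws:dpiAware>false</ws:dpiAware>
    </windowsSettings>
  </application>
</assembly>
```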

The main point of the manifest file is the dpiAware element, with its value set to false. With this manifest file in place, Anypoint Studio runs with the High DPI settings overridden.

Thanks to Roberto, one of my colleagues at Kloud Solutions, for extracting this manifest content by decompiling AnypointStudio.exe.

Next, we need to let the application know there is an external manifest file to read at runtime, by modifying the Windows registry. Open Registry Editor by running regedit, then follow the steps below:

  1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\
  2. Add new DWORD (32-bit) Value
  3. Set the name to PreferExternalManifest
  4. Set the value to 1

All good to go! Now run our Anypoint Studio again and see how it looks different from the previous screen.

Can we see the differences? Of course this black magic contains a couple of downsides:

  • The icons are a bit blurry, and
  • The workspace becomes smaller

But that's fine; it's not too bad as long as we can recognise all those icons. Hopefully a future version of Eclipse will fix this issue.

Performing OCR with Azure Cognitive Services and HTML5 Media Capture API

There are a few ways to access the camera on mobile devices during application development. In our previous post, we used the getUserMedia API for camera access. Unfortunately, as of this writing, not all browsers support this API, so we should provide a fallback approach. On the other hand, the HTML5 Media Capture API is backed by almost all modern browsers, which we can utilise with ease. In this post, we're going to use Vue.js, TypeScript and ASP.NET Core to build an SPA that performs OCR using Azure Cognitive Services and the HTML Media Capture API.

The sample code used in this post can be found here.

Vision API – Azure Cognitive Services

Cognitive Services is an intelligence service provided by Azure, built on machine learning. The Vision API is the part of Cognitive Services that analyses pictures and videos: it can detect expressions, estimate ages and so on for people or objects in pictures or videos, and it can even extract text from them, which is the OCR feature. It was previously known as Project Oxford and was renamed to Cognitive Services when it came out for public preview, which is why the NuGet package still carries the ProjectOxford name.

HTML Media Capture API

The Media Capture API is one of the HTML5 features. It enables us to access the camera or microphone on our mobile devices. According to the W3C document, it is just an extension of the existing input tag with the type="file" attribute. Hence, by adding both the accept="image/*" and capture="camera" attributes to the input tag, we can use the Media Capture API straight away on our mobile devices.
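In other words, something as simple as this is enough:

```html
<!-- On mobile browsers this opens the camera directly; on desktop it falls back to a file picker -->
<input type="file" accept="image/*" capture="camera" />
```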

Of course, this doesn’t interrupt on existing user experiences for desktop browsers. In fact, this link confirms how the Media Capture API works on either desktop or mobile browsers.

ASP.NET Core Web API

The image file passed from the front-end side is handled by the IFormFile interface in ASP.NET Core.

Well, theory is enough. Let’s make it!

Prerequisites

  • ASP.NET Core application from the previous post
  • Computer, tablet or smart phone having camera

Implementing Vue Component – Ocr.vue

First of all, we need a Vue component for OCR. The component simply contains an input element, a button element, an img element and a textarea element.

If we put the ref attribute on each element above, the Vue component can directly handle it. The button element binds the onclick event with the event handler, getText. Ocr.ts contains the actual logic to pass image data to the back-end server.

Like this previous post, in order to use dependency injection (DI), we create a Symbols instance and use it. axios is injected from the top-most component, App.vue, which will be touched later in this post.

We also create a FormData instance to pass the image file extracted from the input element through an AJAX request. This image data will then be analysed by Azure Cognitive Services.
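The core of that logic might look like the following sketch; the API route (/api/ocr) and the response shape are assumptions:

```typescript
import axios from "axios";

export async function getText(input: HTMLInputElement): Promise<string> {
  if (!input.files || input.files.length === 0) {
    return "";
  }

  // Wrap the captured image in a FormData instance for the AJAX request.
  const form = new FormData();
  form.append("file", input.files[0]);

  // The back-end passes the image on to the Cognitive Services Vision API for OCR.
  const response = await axios.post("/api/ocr", form);
  return response.data.text as string;
}
```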

Updating Vue Component – Hello.vue

Ocr.vue is now combined with Hello.vue as a child component.

Dependency Injection – App.vue

The axios instance is provided at the top-most component, App.vue, and is consumed by its child components. Let's see how it's implemented.

We use the symbol instance as a key and provide it as a dependency.

Everything on the front-end side is done. Let’s move onto the back-end side.

Subscribing Azure Cognitive Service

First of all, we need to subscribe to Azure Cognitive Services. This can be done through the Azure Portal:

At the time of this writing, Azure Cognitive Services is in public preview, so we can only choose the West US region. Choose Computer Vision API (preview) as the API Type and F0 (free) as the Pricing Tier. Note that we can only have ONE F0 tier per API type in ONE subscription.

It takes about 10 minutes to activate the subscription key. In the meantime, let’s develop the actual logic.

Developing Web API – ProjectOxford Vision API

This is relatively easy. We could use the HttpClient class to call the REST API directly, but the ProjectOxford – Vision API NuGet package makes calling the Vision API even easier.
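A minimal sketch of the controller is below; the route and model names are assumptions, and the client calls follow the ProjectOxford Vision package as far as I recall, so treat them as illustrative:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.ProjectOxford.Vision;

[Route("api/[controller]")]
public class OcrController : Controller
{
    private const string ApiKey = "<cognitive-services-subscription-key>";

    [HttpPost]
    public async Task<IActionResult> Post(IFormFile file)
    {
        // The bound parameter can arrive as null, so fall back to Request.Form.Files.
        var image = file ?? Request.Form.Files.FirstOrDefault();
        if (image == null)
        {
            return BadRequest();
        }

        using (var stream = image.OpenReadStream())
        {
            var client = new VisionServiceClient(ApiKey);

            // Run OCR over the uploaded image and return the analysis as JSON.
            var result = await client.RecognizeTextAsync(stream);
            return Ok(result);
        }
    }
}
```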

The IFormFile instance takes the data passed from the front-end through the FormData instance. For some reason the IFormFile instance can arrive as null, in which case the same data sitting in Request.Form.Files needs to be checked as well. Put in the API key to access the Vision API, and the VisionServiceClient returns the image analysis result, which is included in the JSON response.

We’ve completed development on both front-end and back-end sides. Let’s run this app and access it from our mobile device. The following video clip shows how iPhone takes a photo, sends it to the app, and gets the result.

So far, we've briefly looked at the Azure Cognitive Services Vision API for OCR. In practice, the analysis quality varies depending on the original source images. In the video clip above, the result is very accurate; however, if there are outlines around the text, or the contrast between the text and its background is very low, the quality drops significantly. In addition, CAPTCHA-like images don't return satisfactory results. Once Cognitive Services has learned from a substantial number of sources, the quality will improve – it's just a matter of time.

Dependency Injection in Vue.js App with TypeScript

Dependency management is one of the critical aspects of application development. In the back-end world, there are many IoC container libraries we can use, like Autofac, Ninject, etc. Similarly, many modern front-end frameworks also provide DI features. However, those features work quite differently from their back-end counterparts. In this post, we're going to use TypeScript and Vue.js and apply an IoC container library called InversifyJS, which offers a development experience very similar to back-end application development.

The code samples used in this post can be found here.

provide/inject Pair in VueJs

According to the official documentation, vue@2.2.0 supports a DI feature using the provide/inject pair. Here's how DI works in VueJs. First of all, declare a dependency, MyDependency, in the parent component like:
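A minimal sketch of the parent, where MyDependency is a placeholder class:

```typescript
import Vue from "vue";
import { MyDependency } from "./my-dependency";

export default Vue.extend({
  name: "ParentComponent",
  provide: {
    // The key used here must match the key the child injects.
    myDependency: new MyDependency()
  }
});
```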

Then its child component consumes the dependency like:
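And the child might consume it like this sketch:

```typescript
import Vue from "vue";

export default Vue.extend({
  name: "ChildComponent",
  // Injects the instance provided by the parent (or any ancestor) component.
  inject: ["myDependency"],
  mounted() {
    console.log((this as any).myDependency);
  }
});
```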

Developers coming from a back-end background may have spotted a catch: child components can only consume dependencies declared by an ancestor component. In other words, for all components to be able to consume all dependencies, the declaration MUST be done at the top-level component of the hierarchy. That's the main difference between VueJs and back-end IoC containers. There's another limitation: VueJs doesn't resolve dependencies between the dependencies themselves, so that has to be solved by a third-party library. But that's fine – we're going to use TypeScript anyway, and there's a library that solves this.

DI in VueJs and TypeScript

Evan You, the creator of VueJs, has recently left a comment about his design philosophy on VueJs framework.

While using a class-based API by default may make it more “friendly” to devs used to classes, it also makes it more hostile to a large group of users who use Vue without build tools or transpilers. When you are advocating your preference, you might be missing some nuance we have to take into account as a framework.

This is why we offer the object-based API as the baseline and the class-based API as an opt-in. This allows us to cater to both groups of users.

Therefore, we need to decide between using the provide/inject pair and another approach, ie. the service locator pattern. In order to use the provide/inject pair, as we found above, we need to put an IoC container instance at the top-level component. Alternatively, we can simply use the container as a service locator. Before applying either approach, let's implement the IoC container.

Building IoC Container using InversifyJS

InversifyJS is a TypeScript IoC container library heavily influenced by Ninject, so the syntax is very similar. The interface and class samples used here are merely adapted from both libraries' conventions – yeah, the ninja stuff!

Defining Interfaces

Let’s define Weapon and Warrior interfaces like below:
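A minimal sketch, following the ninja-themed conventions:

```typescript
export interface Weapon {
  hit(): string;
}

export interface Warrior {
  fight(): string;
}
```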

Defining Models

InversifyJS uses Symbols to resolve instances. Here's a sample that defines multiple symbols in one object – one each for Warrior, Weapon and the Container itself.
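For example:

```typescript
const SERVICE_IDENTIFIER = {
  WARRIOR: Symbol("Warrior"),
  WEAPON: Symbol("Weapon"),
  CONTAINER: Symbol("Container")
};

export default SERVICE_IDENTIFIER;
```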

The @injectable decorator provided by InversifyJS defines classes that are bound into an IoC container.

The @inject decorator goes to constructor parameters. Make sure that those parameters require the Symbol objects defined earlier.

Make sure we use the same Symbol object defined earlier. If we simply create a new Symbol("Weapon") here, it won't work, as every Symbol is unique and a newly created one won't match the registered binding.
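Putting this together, the model classes might look like the following sketch; the module paths are assumptions:

```typescript
import { inject, injectable } from "inversify";
import "reflect-metadata";

import { Warrior, Weapon } from "./interfaces";
import SERVICE_IDENTIFIER from "./symbols";

@injectable()
class Katana implements Weapon {
  public hit(): string {
    return "cut!";
  }
}

@injectable()
class Ninja implements Warrior {
  private _weapon: Weapon;

  // Resolved by the symbol defined earlier – not by a freshly created Symbol("Weapon").
  public constructor(@inject(SERVICE_IDENTIFIER.WEAPON) weapon: Weapon) {
    this._weapon = weapon;
  }

  public fight(): string {
    return this._weapon.hit();
  }
}

export { Katana, Ninja };
```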

Implementing IoC Container

Let’s implement the IoC container using the interfaces and models above.
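A minimal sketch of the container set-up; the module paths are assumptions:

```typescript
import { Container } from "inversify";
import "reflect-metadata";

import { Warrior, Weapon } from "./interfaces";
import { Katana, Ninja } from "./models";
import SERVICE_IDENTIFIER from "./symbols";

const container = new Container();

container.bind<Warrior>(SERVICE_IDENTIFIER.WARRIOR).to(Ninja);
container.bind<Weapon>(SERVICE_IDENTIFIER.WEAPON).to(Katana);

export default container;
```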

The last part of the code snippet above, container.bind(...).to(...), is very similar to how IoC container works in C#. Now we’re ready for use of this container.

Attaching Child Component

Unlike the previous posts, we're adding a new child Vue component, Ninja.vue, to Hello.vue for dependency injection.

Hello.vue has got the Ninja.vue component as its child. Let’s have a look at the Ninja.vue component.

Now, let’s apply both service locator and provide/inject pair.

Applying Service Locator

We’re updating the Ninja.vue to use service locator:
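A minimal sketch of the updated component; the module paths are assumptions:

```typescript
import Vue from "vue";
import Component from "vue-class-component";

import container from "./ioc/container";
import SERVICE_IDENTIFIER from "./ioc/symbols";
import { Warrior } from "./ioc/interfaces";

@Component
export default class Ninja extends Vue {
  public message: string = "";

  public mounted(): void {
    // The container is used directly as a service locator.
    const warrior = container.get<Warrior>(SERVICE_IDENTIFIER.WARRIOR);
    this.message = warrior.fight();
  }
}
```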

As we can see above, the IoC container instance, container, is consumed directly within the Ninja.vue component. When we run the application, the result might look like:

As some of us might be uncomfortable using the service locator pattern, let's now apply the built-in provide/inject pair.

Applying provide/inject Pair

As we identified above, in order to consume all dependencies in all Vue components, we should provide the IoC container as a dependency at the top-level component, ie) App.vue.

We can see that the container instance is provided with the symbol, SERVICE_IDENTIFIER.CONTAINER defined earlier. Now let’s modify the Ninja.vue component:

The @Inject decorator takes care of injecting the container instance from the App.vue component. Make sure that the same symbol, SERVICE_IDENTIFIER.CONTAINER is used. All good! Now we can see the same result like the picture above.

So far, we've had an overview of how to use DI in a VueJs & TypeScript app with two different approaches – the service locator and the provide/inject pair. Which one to choose? It's all up to you.

Accessing to Geolocation on Mobile Devices from ASP.NET Core Application in Vue.js and TypeScript

In the previous post, we used the HTML5 getUserMedia() API to access the camera on our mobile devices. In this post, we're using geolocation data from our mobile devices.

The code samples used for this post can be found here.

navigator.geolocation API

Unlike the getUserMedia() API, the geolocation API is supported by almost all browsers.

Therefore, with a simple TypeScript code, we can easily use the geolocation data.

NOTE: In order to use the geolocation API, the device must be connected to the Internet. Also, each browser vendor uses its own mechanism to get geolocation data, which can produce different results even on the same device. This article gives more details.

Prerequisites

  • ASP.NET Core App from the previous post
  • Computer or mobile devices that can access to the Internet through Wi-Fi or mobile network

NOTE 1: We use vue@2.2.2 and typescript@2.2.1 in this post. There are breaking changes on VueJs for TypeScript, so it’s always a good idea to check out the official guideline.

NOTE 2: Code samples used in this post were from the MDN document that was altered to fit in TypeScript.

Updating Hello.vue

In order to display latitude, longitude and altitude retrieved from the geolocation API, we need to update the Hello.vue file:

That's pretty much self-descriptive – clicking or tapping the Get Location button will display the geolocation data. Let's move on to the logic side.

Updating Hello.ts

The Get Location button is bound with the getLocation() event, which needs to be implemented like below:
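A minimal sketch of the component, adapted from the MDN examples; the property names and option values are assumptions:

```typescript
import Vue from "vue";
import Component from "vue-class-component";

@Component
export default class Hello extends Vue {
  public latitude: number = 0;
  public longitude: number = 0;
  public altitude: number | null = null;

  public getLocation(): void {
    // Check whether the browser supports the geolocation API.
    if (!navigator.geolocation) {
      return;
    }

    const success = (position: Position): any => {
      // Bind the co-ordinates from the position instance to the component.
      this.latitude = position.coords.latitude;
      this.longitude = position.coords.longitude;
      this.altitude = position.coords.altitude;
      return null;
    };

    const error = (err: PositionError): any => {
      console.warn(`ERROR(${err.code}): ${err.message}`);
      return null;
    };

    // An implementation of the PositionOptions interface.
    const options: PositionOptions = {
      enableHighAccuracy: true,
      timeout: 5000,
      maximumAge: 0
    };

    navigator.geolocation.getCurrentPosition(success, error, options);
  }
}
```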

First of all, we need to declare properties for latitude, longitude and altitude, followed by the getLocation() method. Let’s dig into it.

  • First of all, we check the navigator.geolocation instance to see whether the web browser supports the geolocation API or not.
  • We call the getCurrentPosition() method to get the current position. This method takes two callback functions and an options instance as its parameters.
  • The success() callback receives the position instance containing the current position details and binds the co-ordinates to the browser.
  • The error() callback handles errors.
  • The options instance provides options for the geolocation API.

NOTE: Each callback method has a return type according to the type definition, even though a return value isn't actually needed, so we just return null.

The options instance used above is an implementation of the PositionOptions interface, as shown in the sketch above.

We completed the TypeScript part. Let’s run the app!

Results

When we use a web browser on our dev machine, it first asks for permission to use our location data:

Click Allow and we’ll see the result.

This time, let’s do it on a mobile browser. This is taken from Chrome for iPhone. It also asks us a permission to use geolocation data.

Once tapping the OK button, we can see the result.

So far, we've briefly looked at the geolocation API to get the current location. That wasn't too hard, was it?

If we have a more complex scenario, need more accurate location details, or need constant access to location data even when we're not using the app, then a native app might have to be considered. Here's a good discussion of these concerns. But the HTML5 geolocation API is enough in the majority of cases.

Accessing to Camera on Mobile Devices from ASP.NET Core Application in Vue.js and TypeScript

In the previous post, we built an ASP.NET Core application using Vue.js and TypeScript. As a working example, we're building a mobile web application. Many modern web browsers supporting HTML5 can access multimedia devices on users' computers, smartphones or tablets, such as the camera and microphone; the Navigator.getUserMedia() API enables us to access those resources. In this post, we're going to implement camera access on our computers and mobile devices by writing code in VueJs and TypeScript.

The code samples used for this post can be found here.

getUserMedia() API

Most modern web browsers support the getUserMedia() API, as long as they support HTML5. There are two different APIs around this method – Navigator.getUserMedia(), which uses callback functions, and the newer MediaDevices.getUserMedia(), which returns a Promise so that we can avoid callback hell. However, not all browsers support MediaDevices.getUserMedia(), so we need to support both anyway. For more details on getUserMedia(), there are practical samples in this MDN document.

Prerequisites

  • ASP.NET Core application from the previous post
  • Computer, tablet or smartphone having camera

NOTE 1: This post uses VueJs 2.2.1 and TypeScript 2.2.1. VueJs 2.2.1 introduced some breaking changes how it interacts with TypeScript. Please have a look at the official guide document.

NOTE 2: vue-webcam written by @smronju was referenced for camera access, and modified to fit in the TypeScript format.

Update Hello.vue

We need a placeholder for camera access and video streaming. Add the following HTML to the template section of Hello.vue.
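A minimal sketch of that markup; the bound property names are assumptions, explained by the list below:

```html
<div>
  <video ref="video" :src="source" :width="width" :height="height" autoplay></video>
  <button @click="takePhoto">Take Photo</button>
  <img :src="photo" :width="width" :height="height" />
</div>
```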

  • video accepts the camera input. src, width, height and autoplay are bound with the component in Hello.ts. Additionally, we add the ref attribute for the component to recognise the video tag.
  • img is where the camera input is rendered. The photo field is used for data binding.
  • button raises the mouse click or finger tap event by invoking the takePhoto function.

The HTML bits are done. Let’s move on for TypeScript part.

Update Hello.ts

The existing Hello.ts was simple; this time it has grown to handle the camera API. Here are the bits:
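A minimal sketch of the component, loosely following the vue-webcam approach; the field names and constraints are assumptions:

```typescript
import Vue from "vue";
import Component from "vue-class-component";

@Component
export default class Hello extends Vue {
  public source: string = "";
  public photo: string = "";
  public width: number = 640;
  public height: number = 480;

  public mounted(): void {
    const nav = navigator as any;
    const getUserMedia =
      nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia;

    if (!getUserMedia) {
      return;
    }

    // Bind the camera stream to the video element via the bound 'source' property.
    getUserMedia.call(
      navigator,
      { video: true },
      (stream: MediaStream) => {
        this.source = window.URL.createObjectURL(stream);
      },
      (err: any) => console.warn(err));
  }

  public takePhoto(): void {
    const video = (this.$refs as any).video as HTMLVideoElement;

    // Draw the current video frame onto a virtual canvas and render it in the img tag.
    const canvas = document.createElement("canvas");
    canvas.width = this.width;
    canvas.height = this.height;
    const context = canvas.getContext("2d");
    if (context) {
      context.drawImage(video, 0, 0, this.width, this.height);
      this.photo = canvas.toDataURL("image/png");
    }
  }
}
```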

We can see many extra data fields for two-way data binding between user input and the application. Some of them come with default values so that we don't have to worry too much about their initialisation.

  • The takePhoto() function creates a virtual canvas DOM element, converts the current frame from the video into an image, and sends it to the img tag to display the snapshot.

  • The mounted() event function is invoked when this component, Hello.ts, is mounted to its parent. It uses the getUserMedia() API to bind the streaming source to the video tag.
  • The video tag accessed through this.$refs.video is the HTML element that has the ref attribute in Hello.vue. Without the ref attribute, VueJs cannot know which tag to access.

NOTE: The original type of the this.$refs instance is { [key: string]: Vue | Element | Vue[] | Element[] }, while we cast it to any. This is to avoid a build failure caused by the linting error that occurs when we use the original type and access the element via this.$refs.video. If we don't want to cast it to any, we can use this.$refs["video"] instead.

We've now completed the coding part. Let's build it, run a local IIS Express, and access the web app through http://localhost:port. It works fine.

This time, instead of localhost, use the IP address. If we want to remotely access to our local dev website, this post would help.

It says we can't use the camera because the access is insecure. In order to use the getUserMedia() API, we should use an HTTPS connection to prevent private data exposure. This only happens in Google Chrome, not Firefox or Edge. So, just change the connection to HTTPS.

Now we can use IP address for camera access. Once we allow it we can immediately see our face directly on the web like below (yeah, it’s me! lol).

Let’s try this from our mobile devices. The first one is taken from Android phone, followed by the one taken from Windows Phone, then the ones from iPhone. Thanks Boris for help take those pictures!

Errr… what happened on iPhone? The camera is not accessible from either Safari for iOS or Chrome for iOS!!

This is because not all mobile web browsers support the getUserMedia() API.

getUserMedia Browser Compatibility

Here’s the data sheet from http://mobilehtml5.org/.

Unfortunately, we can't use the getUserMedia API on iOS for now, so for iOS users we have to provide an alternative. There's another API called HTML Media Capture that is supported by all mobile web browsers. It uses the traditional input type="file" tag, and with it we can access the camera on our mobile devices.

In the next post, we’re going to figure out how to provide a fallback option, if getUserMedia() API is not available.

Remote Access to Local ASP.NET Core Applications from Mobile Devices

One of the most popular tools for ASP.NET or ASP.NET Core application development is IIS Express. We can't deny it. Unless we have specific requirements, IIS Express is the de-facto web server for debugging on developers' local machines. With IIS Express, we can easily access our local web applications with no problem while debugging.

There are, however, always cases where we need to access our locally running website from other browsers, such as those on mobile devices. As the picture above shows, localhost is a loopback address, so we can't use it outside our dev box, and simply replacing the loopback address with a physical IP address doesn't work either. We need to adjust our dev box to allow this traffic. In this post, we're going to solve this issue using two different approaches.

At the time of writing this post, we’re using Visual Studio (VS) 2015, as VS 2017 will be launched on March 7, 2017.

Network Sharing Options and Windows Firewall

Note that all screenshots in this section are taken from Windows 10. Open the currently connected network (either wireless or wired).

Make sure that the “Make this PC discoverable” option is turned on.

This option enables our network in “Private” mode on Windows Firewall:

WARNING!!!: If our PC is currently connected to a public network, for better security we need to turn off the private network settings; otherwise our PC will be vulnerable to malicious attacks.

Update Windows Firewall Settings

In this example, the locally running web app uses the port number of 7314. Therefore, we need to register a new inbound firewall rule to allow access through the port number. Open “Windows Firewall with Advanced Security” through Control Panel and create a new rule with options below:

  • Rule Type: Port
  • Protocol: TCP
  • Port Number: 7314
  • Action: Allow the Connection
  • Profile: Private (Domain can also be selected if our PC is bound with domain controllers)
  • Name: Self-descriptive name of anything! eg) IIS Express Port Opener

All traffic through this port number is now allowed. So far, we've completed the basic environment settings, including the firewall. Let's move on to the first option, using IIS Express itself.

1. Updating IIS Express Configurations Directly

When we install VS, IIS Express is installed at the same time. Its global configuration file lives elsewhere, but each solution that VS 2015 creates has its own settings that override the defaults, stored in the .vs folder like:

Open applicationhost.config for update.

Add another binding with the local IP address.
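
For illustration, assuming the port number 7314 and the IP address 192.168.1.3 used in this example (the site name here is hypothetical), the bindings section could look like this:

```xml
<!-- applicationhost.config in the solution's .vs folder (excerpt) -->
<site name="MyWebApp" id="2">
  <bindings>
    <binding protocol="http" bindingInformation="*:7314:localhost" />
    <!-- extra binding so that other devices on the network can reach the site -->
    <binding protocol="http" bindingInformation="*:7314:192.168.1.3" />
  </bindings>
</site>
```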

We can easily find our local IP address by running the ipconfig command. We’re using 192.168.1.3 for now.

IIS Express has now been set. Let’s try our mobile web browser to access the local dev website by IP address.

All good! It's working now. However, if we run more web applications in our dev environment, every time we create a new web application project we have to register the port number allocated by IIS Express with Windows Firewall. No good. Too repetitive. Is there a more convenient way? Of course there is.

2. Conveyor – Visual Studio Extension

Conveyor can sort out this hassle. At the time of writing, its version is 1.3.2. After installing this extension, start debugging by pressing the F5 key again and we will see a new window like this:

The Remote URL is what we're going to use. In general, the IP address looks like 192.168.xxx.xxx if we're on a small network (at home, for example), or something different if we're on a corporate network. This is the IP address that mobile devices will use. Another important point is that Conveyor uses port numbers starting from 45455. Whatever port number IIS Express assigns to the web application project, Conveyor forwards it to 45455. If 45455 is already taken, it keeps incrementing by one until a free port number is found. Thanks to this behaviour, we can easily predict the port number range, instead of relying on the random port numbers of IIS Express. Therefore, we can register a port number range with Windows Firewall starting from 45455 up to whatever we want, 45500 for example.

Now, we can access our local dev website by using a port from this range:

If we're developing a web application over an HTTPS connection, that isn't an issue either. If no self-signed certificate is installed on our local dev machine, Conveyor installs one for us, and that's it. Visiting the website again over HTTPS will show the usual certificate warning first and then load the page.

We've so far discussed how to remotely access our local dev website using either the IIS Express configuration or Conveyor. Conveyor gets rid of the repetitive firewall registration, so it's worth installing for our web app development.

Writing Vue.js Applications in TypeScript on ASP.NET Core

In the previous post, we briefly walked through how to build a Vue.js application on ASP.NET Core. Like other modern JavaScript frameworks, VueJs supports TypeScript out-of-the-box. If we can get the full benefit of TypeScript while building a VueJs app, that would be awesome! There are many resources covering the combination of VueJs and TypeScript; however, they don't use the basic template that VueJs provides, which can be confusing for developers who have just started using VueJs. Even worse, due to the recent upgrade of Webpack to 2.x, we need a new tutorial for building a VueJs application with TypeScript. In this post, our goals will be:

  • To use the basic template provided by VueJs,
  • To use Webpack version 2.x, and
  • To run the app on ASP.NET Core.

The sample code used in this post can be found here.

Prerequisites

We have already built a VueJs application running on ASP.NET Core in the previous post. So we’re going to re-use that.

Update on March 6th, 2017: We updated the TypeScript version to 2.2.1 for this post.

Installing npm Packages

TypeScript

TypeScript can be installed locally, just for this application, or globally. If it's installed globally, we should link it into the application.
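
The npm commands for each option would look something like this (exact flags may vary with the npm version):

```
# install TypeScript locally, as a dev dependency of this application
npm install typescript --save-dev

# or install it globally
npm install -g typescript

# if TypeScript is installed globally, link it into this application
npm link typescript
```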

ts-loader

ts-loader lets webpack compile .ts files to .js on the fly, without a separate build step during development.

vue-class-component & vue-property-decorator

If we want to use .ts in our VueJs development, as the official document recommends, we should install the vue-class-component library for class decorators.

We may also need to install vue-property-decorator, which extends vue-class-component, although it's not directly relevant to this post.

vue-typescript-import-dts

TypeScript needs type definitions. vue-typescript-import-dts helps TypeScript recognise .vue files as .ts modules.
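
For reference, the packages above could be installed with commands along these lines (whether each one is a runtime or a dev dependency is a judgement call; this is just one reasonable split):

```
# class-style component decorators, used by the components at runtime
npm install vue-class-component vue-property-decorator --save

# TypeScript loader for webpack and type definitions for .vue files, build-time only
npm install ts-loader vue-typescript-import-dts --save-dev
```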

All necessary npm packages are installed. Let’s move on.

Configurations for TypeScript

tsconfig.json

In order to use .ts, we first need a tsconfig.json. In this post we just use the bare minimum settings needed to work. Further details about tsconfig.json can be found here.
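
A bare-minimum tsconfig.json, reconstructed from the bullet points below (the include paths in particular are an assumption), could look like this:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "lib": ["dom", "es2015", "es2015.promise"],
    "types": ["vue-typescript-import-dts"],
    "experimentalDecorators": true
  },
  "include": ["src/**/*.ts"]
}
```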

Let me explain the configuration in-depth.

  • VueJs supports ECMAScript 5, so we target TypeScript to es5. This also means module should be commonjs, and lib should include dom, es2015 and es2015.promise.
  • types declares custom type definitions. As we’ve installed vue-typescript-import-dts, include it here so that the application can recognise .vue files as .ts files.
  • In order to use class decorators, we’ve installed vue-class-component. But this is not enough. We need to enable it by setting the experimentalDecorators value to be true.
  • Within the include property, we declare which directories contain the .ts files to compile.

Update on March 6th, 2017: Due to the version update of VueJs to 2.2.x, tsconfig.json also needs to be updated. This is the recommended configuration from the official guide.

Also, please make sure that we create the template from vue-cli by running vue init webpack. It installs vue@2.2.1 and vue-router@2.2.0. If those versions are different, please update them.


.eslintignore

While developing in TypeScript, .js files are automatically compiled and generated, and there's no guarantee that those generated files pass the linting process. To avoid ESLint errors from them, we simply exclude them by adding a line, src/**/*.js, to .eslintignore.

We've just completed the basic configuration for TypeScript compilation. Let's move on.

Converting JavaScript to TypeScript

It's time to convert the existing .js files to .ts ones. We only need to look at the build and src directories.

build/webpack.base.conf.js

As webpack is the only consumer of this file, it's not necessary to change it to .ts. But we do need to modify it.

First of all, the entry point should be changed from main.js to main.ts.

Then we need to replace the babel-loader rule with a ts-loader one; a combined sketch of both changes follows after the next paragraph.

Every .ts file is handled by this loader. There's an interesting option here, appendTsSuffixTo. With this option, .vue files can be treated as .ts ones as well. VueJs uses the Single File Component approach – the HTML, JavaScript, and CSS sections are all put into a single .vue file. Therefore the JavaScript section in particular needs to be handled as TypeScript.
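
Putting the two changes together, the relevant parts of build/webpack.base.conf.js might look like the excerpt below; the rest of the generated file stays as-is, and the exclude pattern is an assumption:

```js
// build/webpack.base.conf.js (excerpt)
module.exports = {
  entry: {
    app: './src/main.ts'            // changed from './src/main.js'
  },
  module: {
    rules: [
      // the babel-loader rule for .js files is replaced by this ts-loader rule
      {
        test: /\.ts$/,
        loader: 'ts-loader',
        exclude: /node_modules/,
        options: {
          // treat the <script> block of .vue files as TypeScript as well
          appendTsSuffixTo: [/\.vue$/]
        }
      }
    ]
  }
}
```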

We've completed the webpack configuration to enable TypeScript handling. Now let's actually convert the JavaScript files to TypeScript ones.

src/main.js → src/main.ts

Change the existing JavaScript syntax to its TypeScript equivalent.
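
A minimal sketch of what main.ts can look like with the default template (the exact import syntax depends on the VueJs and TypeScript versions, as the update note below mentions):

```ts
// src/main.ts – a sketch only
import Vue from 'vue'
import App from './App.vue'
import router from './router'

new Vue({
  el: '#app',
  router,
  // render function instead of the template/components options
  render: h => h(App)
})
```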

Note the new Vue({ ... }) part. Instead of template and components, the render function is used. Everything has already been compiled before reaching this point, and each component takes control of itself, so we just use the render function. For more details about the render function, please refer to the official document.


Updated on March 6th, 2017: Due to the version update of VueJs to 2.2.x, the import statements also need to be updated for the new version.


src/router/index.js → src/router/index.ts

We don't have to change much here; just be careful with the import statements.
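
For illustration, the converted router might look something like this sketch (paths and import style are assumptions based on the default template):

```ts
// src/router/index.ts – a sketch only
import Vue from 'vue'
import Router from 'vue-router'
import Hello from '../components/Hello.vue'

Vue.use(Router)

export default new Router({
  routes: [
    {
      path: '/',
      name: 'Hello',
      component: Hello
    }
  ]
})
```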


Updated on March 6th, 2017: Due to the version update of VueJs to 2.2.x, the import statements also need to be updated for the new version.


src/App.vue → src/App.ts

Instead of keeping everything in one single .vue file, we're separating the TypeScript part out of each .vue. Why are we doing this, by the way? We could indeed still use .vue alone, but for better maintainability we'd better create a separate .ts file. Let's have a look at how we can implement App.ts, extracted from App.vue.
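
A sketch of what the extracted App.ts could contain (the component name is an assumption):

```ts
// src/App.ts – a sketch only
import Vue from 'vue'
import Component from 'vue-class-component'

// the name declaration is what lets the router recognise the component
@Component({
  name: 'app'
})
export default class App extends Vue {
}
```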

The @Component decorator contains the name declaration so that the router can easily recognise the component. The script part in the original App.vue is then altered to point at the extracted file.
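
One common way to wire this up (an assumption, not necessarily the original approach) is to point the script tag at the extracted file:

```html
<!-- src/App.vue – the template and style sections stay as they are -->
<script lang="ts" src="./App.ts"></script>
```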

Make sure to include lang="ts" as an attribute of the script tag.


Updated on March 6th, 2017: Due to the version update of VueJs to 2.2.x, the import statements also need to be updated for the new version.


src/components/Hello.vue → src/components/Hello.ts

Now, we’re going to extract the script section from Hello.vue to Hello.ts. Let’s have a look.
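
A sketch of what Hello.ts could look like (the field, method and endpoint names are purely illustrative):

```ts
// src/components/Hello.ts – a sketch only
import Vue from 'vue'
import Component from 'vue-class-component'
import axios from 'axios'

@Component({
  name: 'hello'
})
export default class Hello extends Vue {
  // fields that used to live in the data() function become class properties
  msg: string = 'Welcome to Your Vue.js App'

  // functions from the methods section become class methods
  getMessage(): void {
    axios.get('/api/values').then(response => {   // hypothetical endpoint
      this.msg = response.data
    })
  }
}
```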

Likewise, @Component contains the name declaration. Previously, all two-way binding fields were defined within the data function; declaring them as class properties is more class-friendly. Functions become methods.

Someone may notice a small change compared to the previous post. For AJAX requests and responses we previously used vue-resource; it has been changed to axios. According to the official VueJs blog post, vue-resource is no longer maintained as an official VueJs extension, and axios is recommended instead because of its richer features. In addition, axios provides TypeScript definitions, so there's no reason not to use it. Its usage is almost identical to vue-resource.

Once Hello.ts is extracted, the original Hello.vue should simply reference it from its script tag, in the same way as App.vue above.


Updated on March 6th, 2017: Due to the version update of VueJs to 2.2.x, the import statements also need to be updated for the new version.


All done for the conversion from JavaScript to TypeScript! It seems like a fairly massive change, but the basic template is optimised for JavaScript, so what we've done so far is mostly mechanical conversion work. From now on, we can write all our logic in TypeScript!

Press the F5 key in Visual Studio to run the application and see the result.

The right-hand side of the window on the picture above is Vue.js devtools, which is a Chrome extension. When we install it, we can use it right away through Chrome’s Developer Tools.

One More Thing …

So far, we've converted the VueJs app to TypeScript. As this only covers the local development environment, we need one last modification for deployment. Here's the overall process of building the application for deployment:

  1. To compile .ts files and generate corresponding .js ones.
  2. To modularise and build bundles through webpack.
  3. To build ASP.NET Core libraries.
  4. To generate an artifact for deployment to Azure or IIS.
  5. To deploy.

By updating package.json and project.json we can easily achieve this goal.

package.json

Within package.json, the scripts section originally contained only the commands generated by the vue-cli template.

We need to add another script for TypeScript compilation and chain it into the existing build; a sketch of the updated scripts section follows the list below.

  • build:ts compiles the .ts files.
  • build:main is responsible for the existing build.
  • build is changed to call both build:ts and build:main consecutively.
  • The --no-deprecation flag deserves attention. When compiling, ts-loader throws a deprecation warning. It's harmless, but Visual Studio treats it as an error, so build/deploy fails. Providing this flag lets build/deploy run through Visual Studio successfully.
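
A sketch of what the updated scripts section could look like (the dev and build:main commands are assumed to be the defaults from the vue-cli webpack template, and placing --no-deprecation on the node command is an assumption):

```json
"scripts": {
  "dev": "node build/dev-server.js",
  "build:ts": "tsc",
  "build:main": "node --no-deprecation build/build.js",
  "build": "npm run build:ts && npm run build:main"
}
```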

project.json

Finally, open project.json to confirm the prepublish section.
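
A minimal sketch of the relevant part, assuming the npm build script defined above (the exact layout of project.json varies between templates):

```json
{
  "scripts": {
    "prepublish": [ "npm install", "npm run build" ]
  }
}
```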

All good now! After the deployment to Azure Web App, we can see the following screen:

Of course, if CI/CD is preferred, we can simply use the dotnet publish feature.

We've so far had a quick look at writing a VueJs application in TypeScript, bundling it on ASP.NET Core and deploying it to Azure. As mentioned earlier, the very first part is a bit complicated, but it's not that different from normal TypeScript development. Let's build a real-world application using VueJs and TypeScript!!