Use Azure Hybrid Connections to get on-premises SQL data into SharePoint Online

Azure Hybrid Connections are a simple, low-friction way to connect cloud applications with on-premises SQL data. This opens up great extensibility options for SharePoint Online, such as:

  1. Provider Hosted Apps hosted in Azure
  2. Business Data Connectivity using WCF services hosted in Azure
  3. SharePoint Hosted Apps using BCS external sources

In this blog, I will illustrate the steps to configure Azure Hybrid Connections. In a nutshell, the diagram below outlines the data flow in Hybrid connections.

[Diagram: data flow in Hybrid Connections]

Firstly, on the on-premises SQL Server: if you have a named instance, assign a static port to it and expose that port through the firewall. If SQL is installed on the default instance, make sure port 1433 is open on the firewall.

Next, log into the Azure Portal, create a Resource Group, add an Azure Web App, and then add a Hybrid Connection from the Networking section
(Azure Web App -> Networking -> Configure Hybrid Connection).

[Screenshot: Configure Hybrid Connection from the Networking section]

Note: Hybrid Connections can also be added by other resources, such as Azure Functions or any other app that can be tied to an App Service Plan.
Note: The number of Hybrid Connections is limited by the type of App Service Plan; the Free plan doesn't allow any. The limits are shown in the table below.

Pricing Plan   Hybrid Connections usable in the plan
Basic          5
Standard       25
Premium        200
Isolated       200

Next, add a New Hybrid Connection. In the Endpoint Host field, enter the fully qualified name of your SQL Server, including the domain. In the port field, as in the screenshot below, enter the port the SQL instance is exposed on.

Note: There is no need to qualify the instance as server\instance in the Endpoint Host field; the application code specifies the instance in its connection string, so the Hybrid Connection only needs to know the endpoint. A sample connection string is sketched below.
Note: You can also select existing Hybrid Connections from other resource groups.
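
For illustration, a hedged sketch of what the app's connection string might look like (the server, database and account names are placeholders; SQL authentication is typically used, since the Web App isn't domain-joined):

// Hypothetical connection string: the Hybrid Connection relays the host and port,
// while the database and credentials come from the connection string itself.
var connectionString =
    "Data Source=sqlserver01.contoso.local,1433;" + // endpoint host,port as configured in the Hybrid Connection
    "Initial Catalog=StaffDb;" +                    // placeholder database name
    "User ID=svc_spo;Password=<secret>;";           // placeholder SQL auth credentials
using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
{
    conn.Open(); // traffic flows through the Hybrid Connection listener on-premises
}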

[Screenshot: New Hybrid Connection endpoint details]

After the Hybrid Connection is created, it shows up in the Azure Portal as in the screenshot below.

[Screenshot: Hybrid Connection listed in the Azure Portal]

Next, download the Connection Manager using Download Connection Manager. This installer comes pre-configured with your Azure subscription details and, when installed in an on-premises environment (preferably in the same data centre as the SQL Server), acts as a listener for Azure Web App requests.

After installing the Hybrid Connection Manager UI, connect to your Azure subscription account to find the available Hybrid Connections. After selecting the connection, if the listener can connect to SQL, it will show Connection Successful.

[Screenshot: Hybrid Connection Manager showing Connection Successful]

After the connection succeeds, the Azure Portal shows the number of listeners as 1 and the connection status as Connected.

In this blog, we saw how to create an Azure Hybrid Connection to connect an on-premises SQL Server with an Azure App Service. In the next blog, we will discuss the steps to consume this connection and connect SharePoint Online with the SQL data sources.

Azure Logic App – Evaluating IF condition with the help of JSON expression by passing null

Introduction

Yes, you read the title right: this blog is about evaluating an IF condition. You might be wondering what there is to say about IF; even a novice developer knows how it works.

Allow me to explain a specific scenario that helps us understand its behaviour in Logic Apps; it might blow your mind.

Many of us come from years of development experience and, when we skill up in other technologies, we carry over the mindset and programming habits gained along the way. When client requirements are approached from that background, we expect the code to follow a certain flow, and that is exactly where the rules break down when using the IF condition in Azure Logic Apps.

Understanding JSON expression

The json() expression converts a string to a JSON object, using the syntax shown below:

json({"Person":{"Name": "Simpson"}})  evaluates to var name = Person.Name as Simpson

But json(null) throws an error (important!), so avoid it where possible.

Understanding IF condition

IF needs no special introduction; we know how it works. It has two code blocks, and the condition determines which one is taken. In Logic Apps, the syntax is:

@if("condition","true","false")

To understand IF better, let's also look at @equals(), a simple expression that returns true or false by comparing the given input with the provided value.

Example 1

Below is just an example; please excuse the trivial equality condition.
@if(equals(1,1),"true1","false1")
Result: true1

Example 2

@if(equals(1,2),"true1","false1")
Result: false1

Now, let us take our person JSON and understand it.
@if(equals(1,1),"Merge",json({"Person":{"Name": "Homer"}}) ['Name'])
Result: Merge

and similarly when the comparison is not equal

@if(equals(1,2),"Merge",json({"Person":{"Name": "Homer"}}) ['Name'])
Result: Homer

Now, recall that a conventional IF evaluates only the code block it falls into. Azure Logic Apps, however, evaluates both code blocks and then returns the result of the one the condition selects.

Here is the proof

For example, the expression below should simply return "Merge", but it actually throws an error. This is the current behaviour of Logic Apps.

@if(equals(1,1),"Merge",json(null) ['Name'])
Result: error

And similarly when not equal

@if(equals(1,2),"Merge",json(null) ['Name'])
Result: error

The above examples confirm that a Logic App evaluates both code blocks and returns one.

The actual error thrown by a real Logic App is below:

InvalidTemplate. Unable to process template language expressions in action ‘Compose’ inputs at line ‘1’ and column ‘1525’: ‘The template language function ‘json’ expects its parameter to be a string or an XML. The provided value is of type ‘Null’. Please see https://aka.ms/logicexpressions#json for usage details.’.
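
One possible guard (my suggestion, not from the original post) is to make sure json() never receives null, for example by wrapping the value in coalesce() so that a safe default string is parsed instead:

@if(equals(1,2),"Merge",json(coalesce(null,'{"Person":{"Name": "Homer"}}'))['Person']['Name'])
Result: Homer

Because both branches are evaluated, the coalesce() keeps the false branch from throwing even when the condition selects the true branch.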

Be SOLID: Uncle Bob

We have discussed STUPID issues in programming. Shared modules and tight coupling lead to dependency issues in design. The SOLID principles address those dependency issues in OOP.

The SOLID acronym was popularised by Robert Martin for a set of generic, common-sense design principles in OOP. They mainly address dependencies and tight coupling. We will discuss the SOLID principles one by one, relating each of them to the underlying problems and how it solves them.

S – Single Responsibility Principle – SRP

“There should not be more than one reason for something to exist.”


As the name suggests, a module, class, etc. should not have more than one responsibility in a system. The more a piece of code is doing, or trying to do, the more fragile, rigid, and difficult to (re)use it gets. Have a look at the code below:

 

class EmployeeService
{
    // constructor(s)

    public void Add(Employee emp)
    {
        // .....
        using (var db = new SomeDatabase()) // or some SINGLETON or factory call, Database.Get()
        {
            try
            {
                db.Insert(emp);
                db.Commit();
                // more code
            }
            catch (Exception)
            {
                db.Rollback();
            }
        }
        // ....
    }
}

 

All looks good, yes? Yet there are genuine issues with this code. EmployeeService has too many responsibilities; database handling should not be one of them. Because of the baked-in database handling details, EmployeeService has become rigid and harder to reuse or extend, for multiple databases for example. It's like a Swiss Army knife: it looks handy, but it is rigid and inextensible.

Let’s KISS (keep it simple, stupid) it a bit.

 

// ...
Database db = null;

public EmployeeService()
{
    // ...
    db = Database.Get(); // or a constructor etc.
    // ..
}

public void Add(Employee emp)
{
    // ...
    db.Add<Employee>(emp);
    // ..
}

 

We have removed the database handling details from the EmployeeService class. This makes the code cleaner and more maintainable, and it ensures that everything does its job and its job only. Now the class cares less about how the database is handled and more about Employee, its true purpose.

Also note that SRP does not mean a structure or class will only have a single function or property. It means a piece of code should only have one business responsibility: an entity service should only be concerned with handling entities, and not with anything else like database handling, logging, or manipulating sub-entities directly (such as saving an employee's address explicitly).

SRP might increase the total number of classes in a module, but it also increases their simplicity and reusability. This means that in the long run the codebase remains flexible and adaptive to change. Singletons are often regarded as the opposite of SRP because they quickly become God objects doing too many things (the Swiss Army knife again) and introduce too many hidden dependencies into a system.

O – Open/Closed Principle – OCP

“Once done, don’t change it, extend it”

[Image: motorcycle with a sidecar]

A class in a system must not be open to any changes, except bug fixes. That means we should not modify a class to add new features or functionality to it. This does not sound practical, because every class evolves with the business it represents. OCP says that to add new features, classes must be extended (open) instead of modified (closed). This introduces abstractions as part of the business need to add new features to classes, instead of just a fancy nice-to-have.

Developing our classes in the form of abstractions (interfaces/abstract classes) provides flexibility for multiple implementations and greater reusability. It also ensures that once a piece of code is tested, it does not go through another cycle of code changes and retesting for new features. Have a look at the EmployeeService class again.

 

class EmployeeService
{
    public void Add(Employee emp)
    {
        // ..
        db.Add<Employee>(emp);
        // ...
    }
    // ...
}

 

Now suppose a new requirement arrives: an email must be sent to the Finance department whenever the newly added employee is a contractor. We would have to change this class. Let's redo the service for the new feature.

 

public void Add(Employee emp)
{
    // ..
    db.Add<Employee>(emp);

    if (emp.Type == EmployeeType.Contractor)
    {
        // ... send email to Finance
    }
    // ...
}

 

The above, though it seems straightforward and a lot easier, is a code smell. It introduces rigid, hardwired conditioning into a class, demanding that all existing use cases related to EmployeeService be retested on top of the new ones. It also makes the code cluttered and harder to manage and reuse as the requirements evolve over time. Instead, we could be closed to modification and open to extension.

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

 

And then:

 

class EmployeeService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // .. add the employee
    }
}

class ContractorService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // add the employee
        // send email to Finance
    }
}

 

Of course, we could instead have an abstract EmployeeService base class with a virtual Add method implementing the add-the-employee functionality; that would be DRY.

Now, instead of a single EmployeeService class, we have separate classes that are extensions of the EmployeeService abstraction. This way we can keep adding new features to the service without needing to retest existing ones. It also removes the unnecessary clutter and rigidity from the code and makes it more reusable.

L – Liskov Substitution Principle – LSP

“If your duck needs batteries, it’s not a duck”

So Liskov worded the principle as:

               If for each object obj1 of type S, there is an object obj2 of type T, such that for all programs P defined in terms of T, the behaviour of P is unchanged when obj1 is substituted for obj2 then S is a subtype of T

Sounds too complex? I know. Let us say that in English instead.

If we have a piece of code using an object of class Parent, then the code should not have any issues if we replace the Parent object with an object of its Child, where Child inherits from Parent.


Take the same Employee service code and try to add a new feature to it: getting the leaves for an employee.

 

interface IEmployeeService
{
    void Add(Employee employee);
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // .. add the employee
    }

    public int GetLeaves(int employeeId)
    {
        // calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // add the employee
        // send email to Finance
    }

    public int GetLeaves(int employeeId)
    {
        // throw some exception
    }
}

 

Since ContractorService has no business need to calculate leaves, its GetLeaves method just throws a meaningful exception. Makes sense, right? Now let's see the client code using these classes, with IEmployeeService as the parent and EmployeeService and ContractorService as its children.

 

IEmployeeService employeeService = new EmployeeService();
IEmployeeService contractorService = new ContractorService();

employeeService.GetLeaves(<id>);
contractorService.GetLeaves(<id2>);

 

The second GetLeaves call will throw an exception at RUNTIME. At this level, it does not mean much: so what? Just don't invoke GetLeaves if it's a ContractorService. OK, let's modify the client code a little to highlight the problem even more.

 

List<IEmployeeService> employeeServices = new List<IEmployeeService>();
employeeServices.Add(new EmployeeService());
employeeServices.Add(new ContractorService());
CalculateMonthlySalary(employeeServices);
// ..

void CalculateMonthlySalary(IEnumerable<IEmployeeService> employeeServices)
{
    foreach (IEmployeeService eService in employeeServices)
    {
        int leaves = eService.GetLeaves(<id>); // this will break on the second iteration
        // ... bla bla
    }
}

 

The above code will break the moment it invokes GetLeaves on the second iteration of that loop. CalculateMonthlySalary knows nothing about ContractorService and only understands IEmployeeService, as it should. But its behaviour changes (it breaks) at runtime when a child of IEmployeeService (ContractorService) is used. Let's solve this:

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

interface ILeaveService
{
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService, ILeaveService
{
    public void Add(Employee employee)
    {
        // .. add the employee
    }

    public int GetLeaves(int employeeId)
    {
        // calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    public void Add(Employee employee)
    {
        // add the employee
        // send email to Finance
    }
}

 

Now the client code to calculate leaves will be

 

void CalculateMonthlySalary(IEnumerable<ILeaveService> leaveServices)

 

and voilà, the code is as smooth as it gets. The moment we try to do:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new ContractorService()); // compile-time error
CalculateMonthlySalary(leaveServices);

 

it gives us a compile-time error, because CalculateMonthlySalary now expects an IEnumerable of ILeaveService and ContractorService does not implement ILeaveService. The valid client code becomes:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new EmployeeService());
CalculateMonthlySalary(leaveServices);

 

LSP helps fine-grain the business requirements and the operational boundaries of the code. It also helps identify the responsibilities of a piece of code and the kind of resources it needs to do its job. This strengthens SRP, enhances decoupling, and removes useless dependencies (CalculateMonthlySalary no longer cares about the whole IEmployeeService; it only depends upon ILeaveService).

Breaking down responsibilities can be hard with complex business requirements, and LSP also tends to increase the number of isolated code units (classes, interfaces, etc.). But its value becomes apparent in simple, carefully designed structures where tight coupling and duplication are avoided.

I – Interface Segregation Principle – ISP

“Don’t give me something I don’t need”


In LSP, we saw that the method CalculateMonthlySalary had no use for the complete IEmployeeService; it only needed a subset of it, the GetLeaves method. That is, in its basic form, the ISP. It asks us to identify the resources a piece of code needs to do its job, and then provide only those resources, nothing more. ISP finds the real dependencies in code and eliminates unwanted ones. This greatly helps in decoupling code, recognising code dependencies, and ensuring code isolation and security (CalculateMonthlySalary no longer has any access to the Add method).

ISP advocates module customization based on OCP; identify requirements and isolate code by creating smaller abstractions, instead of modifications. Also, when we fine-grain pieces of code using ISP, the individual components become smaller. This increases their testability, manageability and reusability. Have a look:

 

class Employee
{
    // ...
    string Id;
    string Name;
    string Address;
    string Email;
    // ...
}

void SendEmail(Employee employee)
{
    // .. uses Name and Email properties only
}

 

The above is a violation of ISP. The SendEmail method has no use for the whole Employee class; it only needs a name and an email address to send out emails, yet it depends on the Employee class definition. This introduces an unnecessary dependency into the system, small as it may seem at first. Now SendEmail can only be used with Employees and nothing else: no reusability. It also has access to all the other features of Employee without requiring them: poor security and isolation. Let's rewrite it.

 

void SendEmail(string name, string email)
{
    // .. sends the email to whoever
}

 

Now the method does not care about any changes in the Employee class; dependency identified and isolated. It can be reused and is testable with anything, instead of just Employee. In short, don’t be misguided by the word Interface in ISP. It has its usage everywhere.

D – Dependency Inversion Principle – DIP

“To exist, I did not depend upon my sister, and my sister not upon me. We both depended upon our parents”

Remember the example we discussed in SRP, where we introduced the Database class into EmployeeService:

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService()
    {
        // ...
        db = Database.Get(); // or a constructor etc.; EmployeeService is dependent upon Database
        // ..
    }

    public void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ..
    }
}

 

The DIP dictates that:

               No high-level modules (EmployeeService) should be dependent upon any low-level modules (Database), instead both should depend upon abstractions. And abstractions should not depend upon details, details should depend upon abstractions.


EmployeeService here is a high-level module that uses, and depends upon, a low-level module, Database. This introduces a hidden dependency on the Database class and increases the coupling between EmployeeService and Database. The client code using EmployeeService must now have access to the Database class definition, even though it never uses Database directly and may not even know it exists.

Also note that it does not matter whether we use a singleton, a factory, or a constructor to get a Database instance. Inverting a dependency does not mean replacing its constructor with a service/factory/singleton call, because then the dependency is just transformed into another class or interface while remaining hidden. One change in Database.Get, for example, could have unforeseen implications on client code that uses EmployeeService without knowing it. This makes the code rigid, tightly coupled to details, difficult to test, and almost impossible to reuse.

Let’s change it a bit.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService(Database database)
    {
        // ...
        db = database;
        // ..
    }

    public void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ..
    }
    // ...
}

 

We have moved the acquisition of the Database to a constructor argument (the db variable is class level). Now EmployeeService is not dependent upon the details of Database instantiation. This solves one problem, but EmployeeService is still dependent upon a low-level module (Database). Let's change that:

 

IDatabase db = null;

public EmployeeService(IDatabase database)
{
    // ...
    db = database;
    // ..
}

 

We have replaced Database with an abstraction (IDatabase). EmployeeService no longer depends upon, or cares about, any details of Database; it only cares about the abstraction IDatabase. The Database class, in turn, implements the IDatabase abstraction (details depend upon abstractions).

Now the actual database implementation can be replaced anytime (testing) with a mock or with any other database details, as per the requirements and the Service will not be affected.
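
As a minimal sketch of that wiring (the IDatabase members and the concrete class names here are assumptions for illustration, not from the original examples):

interface IDatabase
{
    void Add<T>(T entity);
}

// Hypothetical concrete implementation: the detail depends upon the abstraction.
class SqlDatabase : IDatabase
{
    public void Add<T>(T entity) { /* write to SQL */ }
}

// Hypothetical test double: swap it in for testing and EmployeeService is unaffected.
class InMemoryDatabase : IDatabase
{
    public List<object> Added = new List<object>();
    public void Add<T>(T entity) { Added.Add(entity); }
}

// The composition root decides which detail to use:
var service     = new EmployeeService(new SqlDatabase());
var testService = new EmployeeService(new InMemoryDatabase());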

We have covered, in some detail, the SOLID design principles, with a few examples to understand the underlying problems and how these principles solve them. As can be seen, most of the time a SOLID principle is just common sense: not making STUPID mistakes in design and giving the business requirements some thought.

 

Adding Bot to Microsoft Teams

If you have been following my previous blog posts about Bots and integrating LUIS with them, you are almost done building bots and have already had some fun with them. Now it's time to bring them to life and let internal or external users interact with the Bot via a front-end channel accessible to them. If you haven't read my previous posts on the subject yet, please give them a read at Creating a Bot and Creating a LUIS app before reading further.

In this blog post, we will integrate our previously created intelligent Bot into a Microsoft Teams channel. Following the step-by-step process below, you can add your bot to an MS Teams channel.

Bringing Bot to Life

  1. As a first step, you need to create a package as outlined here and build a manifest as per the schema listed here. This will include your Bot logos and a manifest file; a hedged sample manifest is sketched after this list.

  2. Once the manifest file is created, zip it along with the logos to make a package (*.zip).
  3. Open the Microsoft Teams interface, select the team you want to add the Bot to, and go to the Manage team section as highlighted below.

  4. Click on the Bots tab, then select Sideload a bot as highlighted, and upload your previously created zip file.

  5. Once successful, it will show the bot you have just added to your selected team, as shown below.

  6. If everything went well, your Bot is now ready and available in the team's conversation window to interact with. While addressing the Bot, start with @BotName to direct messages to it, as shown below.

  7. Based on the configuration you provided in the manifest file, your command list will be available against your Bot name.

  8. Now you can ask your Bot the questions you have trained your LUIS app with, and it will respond as programmed.

  9. You just need to ensure your Bot is programmed to respond to the possible questions your end users may ask.

  10. You can program the bot to acknowledge the user first and then respond in detail to the question. If the response contains multiple records, you can present them using cards, as shown below.

  11. Or, if a response requires additional actions, you can have a link or a button to launch a URL directly from your team conversation.

  12. Besides adding a Bot to a team, you can also add tabs, which can show any SPA (single page application) or even a dashboard built to your needs. Below is just an example of what can be achieved using tabs inside MS Teams.
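
As mentioned in step 1, the package contains a manifest file. The original isn't reproduced here, so below is a hedged sketch against the v1.0 Teams manifest schema at the time of writing; every value is a placeholder to replace with your own details, and the field names should be verified against the current schema.

{
  "$schema": "https://statics.teams.microsoft.com/sdk/v1.0/manifest/MicrosoftTeams.schema.json",
  "manifestVersion": "1.0",
  "version": "1.0.0",
  "id": "<app GUID>",
  "packageName": "com.example.teamsbot",
  "developer": {
    "name": "<your name>",
    "websiteUrl": "https://example.com",
    "privacyUrl": "https://example.com/privacy",
    "termsOfUseUrl": "https://example.com/terms"
  },
  "name": { "short": "MyBot" },
  "description": { "short": "A demo bot", "full": "A demo bot for Microsoft Teams" },
  "icons": { "outline": "bot-outline.png", "color": "bot-color.png" },
  "accentColor": "#3F487F",
  "bots": [
    {
      "botId": "<Microsoft App ID of your bot>",
      "scopes": [ "team" ]
    }
  ]
}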

As MS Teams evolves as group chat software, it can be leveraged to build useful integrations as a front face to many of an organisation's needs, capitalising on Bots as one example.

Using a Bot Framework to build LUIS enabled Bots

History

In this post, we are going to build a bot using the Microsoft Bot Framework and add intelligence to it, extracting meaning from conversations with users via the Microsoft cognitive service named LUIS. The last post discussed LUIS in detail; give it a read before you continue. This post assumes you have a basic understanding of the Language Understanding Intelligent Service (LUIS) and the Bot Framework; further details can be read at LUIS and Bot Framework.

Pre-requisites

You need to download a few items to start your quick bot development; please get all of them before you jump to the next section.

  • Bot template is available at URL (this will help you in scaffolding your solution)
  • Bot SDK is available at NuGet (this is mandatory to build a Bot)
  • Bot emulator is available at GitHub (this helps you in testing your bot during development)

Building a Bot

  1. Create an empty solution in Visual Studio and add a Bot template project to it.
  2. Your solution directory should look like the one below:

  3. Replace the parameters $safeprojectname$ and $guid1$ with a meaningful name for your project and a unique GUID.
  4. The next step is to restore and update the NuGet packages and ensure all dependencies are resolved.

  5. Run the application from Visual Studio and you should see the bot application up and running.

  6. Now open the Bot emulator and connect to your Bot as follows:

  7. Once connected, you can send a test text message to see if the Bot is responding.

  8. At this point your bot is up and running, and in this step you will add a LUIS dialog to it. Add a new class named RootLuisDialog under the Dialogs folder and add a method, as shown below, for each intent you have defined in your LUIS app. Ensure you have your LUIS app ID and key to decorate the class; a hedged sketch of such a dialog follows this list.

  9. Let's implement a basic response from LUIS against the intent 'boot', as shown in the code below.

  10. Open up the emulator and try any utterance we have trained our LUIS application with. A sample bot response should be received, as implemented in the code above; LUIS will identify the intent 'boot' from the user message as shown below.

  11. Now we will implement a slightly more advanced response from LUIS against our intent 'status', as shown in the code below.

  12. You can now send a more complex message to your bot; it will pass the message to LUIS to extract the entity and intent from the utterance, and respond to the user as per your implementation.
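
The original code screenshots aren't reproduced here, so below is a minimal sketch of such a dialog using the Bot Builder SDK v3 LuisDialog. The 'boot' and 'status' intents and the 'service-request' entity come from this series; the app ID, key and reply texts are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<LUIS app ID>", "<LUIS subscription key>")]
[Serializable]
public class RootLuisDialog : LuisDialog<object>
{
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I did not understand that.");
        context.Wait(MessageReceived);
    }

    [LuisIntent("boot")]
    public async Task Boot(IDialogContext context, LuisResult result)
    {
        // Basic response against the 'boot' intent.
        await context.PostAsync("Booting the requested service now...");
        context.Wait(MessageReceived);
    }

    [LuisIntent("status")]
    public async Task Status(IDialogContext context, LuisResult result)
    {
        // A more advanced response: pull an entity out of the utterance.
        EntityRecommendation entity;
        if (result.TryFindEntity("service-request", out entity))
        {
            await context.PostAsync($"Checking the status of '{entity.Entity}'...");
        }
        else
        {
            await context.PostAsync("Which service would you like the status of?");
        }
        context.Wait(MessageReceived);
    }
}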

And the list of intent implementations goes on and on; you can customise behaviour to your needs now that the LUIS framework is ready to rock and roll within your bot, and users can take advantage of it to issue specific commands or enquire about entities. Happy Botting 🙂

Enabling and using Managed Service Identity to access an Azure Key Vault with Azure PowerShell Functions

Introduction

At the end of last week (14 Sept 2017) Microsoft announced a new Azure Active Directory feature – Managed Service Identity. Managed Service Identity helps solve the chicken-and-egg bootstrap problem of needing credentials to connect to the Azure Key Vault in order to retrieve credentials. Until now, using Virtual Machines, Web Apps and Azure Functions meant implementing methods to obfuscate the credentials stored within them. I touched on one method I've used a lot in this post here, whereby I encrypt the credential and store it in the Application Settings, but it still required a keyfile to allow reversing of the encryption as part of the automation process. Thankfully those days are finally behind us.

I strongly recommend you read the Managed Service Identity announcement to understand more about what MSI is.

This post details using Managed Service Identity in PowerShell Azure Function Apps.

Enabling Managed Service Identity on your Azure Function App

In the Azure Portal, navigate to your Azure Function App. Select it, then from the main pane select the Platform features tab, then select Managed service identity.

[Screenshot: Platform Features]

Turn the toggle to On for Register with Azure Active Directory, then select Save.

[Screenshot: Managed Service Identity]

Back in Platform features, under General Settings, select Application Settings.

[Screenshot: General Settings]

Under Application Settings you will see a subset of the environment variables/settings for your Function App. In my environment I don't see the Managed Service Identity variables there, so let's keep digging.

[Screenshot: App Settings]

Under Platform Features select Console.

[Screenshot: Development Tools]

When the Console loads, type Set. Scroll down and you should see MSI_ENDPOINT and MSI_SECRET.

NOTE: These variables weren't immediately available in my environment; the next morning they were present. So I'm assuming there is a back-end process that populates them once you have enabled Managed Service Identity, and it takes more than a couple of hours.

[Screenshot: Endpoint]

Creating a New Azure Function App that uses Managed Service Identity

We will now create a new PowerShell Function App that will use Managed Service Identity to retrieve credentials from an Azure Key Vault.

From your Azure Function App, next to Functions, select the + to create a new Function. I'm using an HttpTrigger PowerShell Function. Give it a name and select Create.

[Screenshot: New Function]

Put the following lines into the top of your function and select Save and Run.

# MSI Variables via Function Application Settings Variables
# Endpoint and Password
$endpoint = $env:MSI_ENDPOINT
$endpoint
$secret = $env:MSI_SECRET
$secret

You will see in the output the values of these two variables.

[Screenshot: Vars]

Key Vault

Now that we know we have Managed Service Identity all ready to go, we need to allow our Function App to access our Key Vault. If you don’t have a Key Vault already then read this post where I detail how to quickly get started with the Key Vault.

Go to your Key Vault and select Access Policies from the left menu.

[Screenshot: Vault]

Select Add new, select Principal, locate your Function App and click Select.

[Screenshot: Access Policy 1]

As my vault contains multiple credential types, I enabled the policy with Get for all types. Select OK, then select Save.

[Screenshot: Policy – GET]

We now have our Function App enabled to access the Key Vault.

[Screenshot: Access Policy 2]

Finally, in your Key Vault, select a secret you want to retrieve via your Function App and copy the Secret Identifier from its Properties.

[Screenshot: Vault Secret URI]

Function App Script

Here is my sample PowerShell Function App script that connects to the Key Vault and retrieves credentials. The Key Vault Secret Identifier is the only value you need to update for the secret you want to retrieve. Ensure you keep the API version on the end (it isn't in the URI you copy from the Key Vault): /?api-version=2015-06-01.
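
The original embedded script isn't reproduced here; below is a minimal sketch of the same flow under the App Service MSI REST protocol (the vault and secret names are placeholders):

# Get an access token for Key Vault from the local MSI endpoint,
# authenticating with the MSI_SECRET header.
$apiVersion = "2017-09-01"
$resourceURI = "https://vault.azure.net"
$tokenAuthURI = $env:MSI_ENDPOINT + "?resource=$resourceURI&api-version=$apiVersion"
$tokenResponse = Invoke-RestMethod -Method Get -Headers @{"Secret"="$env:MSI_SECRET"} -Uri $tokenAuthURI
$accessToken = $tokenResponse.access_token

# The only line to update: your Secret Identifier, with the API version appended.
$secretURI = "https://<yourVault>.vault.azure.net/secrets/<yourSecret>/?api-version=2015-06-01"
$secret = Invoke-RestMethod -Method Get -Headers @{Authorization="Bearer $accessToken"} -Uri $secretURI

# The retrieved credential.
$secret.value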

When run, if you have everything correct, the output will look like the below.

[Screenshot: KeyVault Creds Output]

Summary

We now have the basis of a script we can use in our Azure Functions to connect to an Azure Key Vault via Managed Service Identity and retrieve credentials. We've limited the Function App's access to the Key Vault to only GET the credential. The only piece of information we had to put in our Function App was the URI for the credential we want to retrieve. Brilliant.

Azure Function Proxies for API Mocking

In my previous posts, Is Your Serverless Application Testable? – Azure Logic Apps and API Mocking for Developers, we looked at how to mock APIs with various approaches, including Azure API Management, AWS API Gateway, MuleSoft and Azure Functions. Quite recently, the Azure Functions team released a new mocking feature in Azure Function Proxies. With this feature, API mocking couldn't be easier. In this post, I'm going to show how to use it in several ways.

Enable API Mocking via Azure Portal

This is pretty straightforward. Within the Azure Portal, simply enable the Azure Function Proxies feature:

If you have already enabled this feature, make sure its runtime version is 0.2. Then click the plus sign next to Proxies (preview) and enter details like:

At this stage, we may notice something interesting: the proxy doesn't have to link to an existing HTTP endpoint. In other words, we don't need a real API for mocking; we just create a mocked HTTP endpoint and use it. Once we save this, we'll have an endpoint like:

Use Postman to hit this mocked URL:

We receive the mocked HTTP status code and response body. How easy!

But if we have an existing Azure Function app and want to integrate this mocking feature via a CI/CD pipeline, what can we do? Let's move on.

Enable Azure Function Proxy Option via ARM Template

An Azure Functions app instance is basically the same as a web app instance, so we can use the same ARM template definition for app settings configuration. To enable Azure Function Proxies, we simply add a ROUTING_EXTENSION_VERSION key with a value of ~0.2. Here's a sample ARM template definition:
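
The original template isn't reproduced here; a minimal sketch of the relevant app settings resource might look like this (the function app name parameter is a placeholder):

{
  "apiVersion": "2016-08-01",
  "type": "Microsoft.Web/sites/config",
  "name": "[concat(parameters('functionAppName'), '/appsettings')]",
  "properties": {
    "ROUTING_EXTENSION_VERSION": "~0.2"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('functionAppName'))]"
  ]
}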

Once we update our ARM template, we can simply deploy this template to Azure.

Enable Azure Function Proxy Option via PowerShell

If an ARM template is not feasible for enabling this proxy option, we can use a PowerShell script (or the Azure CLI equivalent). Here's a sample script:
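
The original script isn't reproduced here; a sketch using the AzureRM module might look like the below (resource group and app names are placeholders).

param(
    [string]$ResourceGroupName = "my-resource-group",
    [string]$FunctionAppName = "my-function-app"
)

# Read the existing app settings, add the proxies flag, and write them all back.
$webApp = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName

$appSettings = @{}
foreach ($setting in $webApp.SiteConfig.AppSettings) {
    $appSettings[$setting.Name] = $setting.Value
}
$appSettings["ROUTING_EXTENSION_VERSION"] = "~0.2"

Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName -AppSettings $appSettings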

This PowerShell script can then be invoked with parameters like:

Once either approach, ARM template or PowerShell, has been executed, the function app instance will have a value like:

Now, we have proxies feature enabled through code! Let’s move on.

Deploy Azure Function Proxy with Azure Function App

Defining Azure Function Proxies is just a matter of adding another JSON file, proxies.json, to the app instance. The JSON schema definition can be found at http://json.schemastore.org/proxies. Basically, proxies.json looks like the below.
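
The original file isn't reproduced here; based on that schema, a proxies.json for the mocked API described below might look like this:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "mock-hello-world": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/mock/hello-world"
      },
      "responseOverrides": {
        "response.statusCode": "200",
        "response.headers.Content-Type": "application/json",
        "response.body": { "message": "Hello World" }
      }
    }
  }
}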

As defined above, the mocked API has a URL of /mock/hello-world with response code of 200 and body { "message": "Hello World" }. As long as we follow this structure, we can add as many mocked APIs as we like.

Once we add this proxies.json to our Azure Functions project and deploy it to Azure, we'll be able to see the list of mocked APIs.

So far, we have looked at several ways to add mocked APIs to an Azure Functions instance. Depending on your preference, any approach is fine, and this is arguably the easiest and cheapest way to mock API endpoints.

Mobile-first development with Xamarin

 

Modern application users have high expectations of applications, even in-house enterprise apps. IT leaders realising this have adopted a mobile-first development approach, which ensures a great user experience while reducing overall development and maintenance cost. In this post I will provide an example of a mobile-first development project for an enterprise application.

Business case

A retailer is using Windows CE devices to perform daily tasks in its stores and is planning to move to the latest Android/iOS devices. Each task is performed using a single independent application, developed with different technologies (.NET Compact, C++), and each application communicates with many different back-end services and systems.

Apart from maintaining applications on various technologies, the company also spent heavily on training employees to use these applications.

The road ahead is to develop these applications for Android-powered devices.

Road ahead

The application was initially to be developed only for Android, with a future scope of developing a similar iOS application.

The company relied a lot on the daily activities performed through these applications, which meant that these should be powerful and simple enough for end users, minimising training and update costs.

Xamarin was chosen as the development platform because it would prove beneficial when porting later to iOS: a single C# code base (fully matured async patterns, better type-safety, front- and back-end teams working more collaboratively) that can still realise the full potential of the native platforms (native UI, geo-location, notifications).

Mobile-first approach

A mobile-first approach in the enterprise means allowing users to perform complex tasks with ease. This means an interactive and intuitive UI, built around the use case of solving a problem from the user's perspective. It also means developing new infrastructure and services that cater for the specific use cases of the application.

Consider the case where the store manager of a hardware store wishes to order new stock of an item. This previously involved gathering the following information:

  • Current store stock
  • Stock availability for delivery
  • Back store capacity
  • Shelf capacity
  • Upcoming sales
  • Pending deliveries
  • Pending customer orders
  • Wasted / discarded stock
  • Understanding of stores’ sale patterns (daily / seasonal / monthly)

 

Some of these services interact with the in-store server, some require the central server or a manual review of previous orders, and some of the tasks depend entirely on a person's experience. The final step is to manually calculate the stock to be ordered.

With the new system, a scheduled task interacts with all of the above services, uses machine learning (items bought together, customer orders, quantities, etc.) to find sale patterns, caches results daily, and, in case an order needs to be placed, pushes a notification to the user who has the authority to submit orders.

On the application side, the user can interact with the notification, review the order, and submit or cancel it with just three clicks.

Thus, focusing on a user-first mobile app made a complex task very easy and efficient for the user.

Additional use case: an item in a store is to be recalled, or is towards the end of its life and needs to be discarded. Since the new web service caches all of the data every day, a new service can be added that identifies such products and utilises the existing notification module to notify the user.

Mobile-first development can turn a 'pull' experience into a 'push' experience, making use of mobile platform features like push notifications. With powerful services in place, new functionality is easily added, greatly improving store operations.

Code-sharing

The application was designed using the Model View Presenter (MVP) architecture. In this pattern the presenter acts as the supervising controller for the view. The view itself is quite passive; it is responsible for displaying the data provided to it by the presenter and for routing UI interactions back to the presenter. This allows most of the logic to live in the presentation layer, which is shared across platforms, reducing platform-specific code. Test coverage is also greatly improved, as most of the presentation code is shared. A minimal sketch of the pattern follows.
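
As a hedged illustration of the pattern (the names below are illustrative, not from the actual project):

using System.Threading.Tasks;

// The platform-agnostic contract each platform's passive view implements.
public interface IStockView
{
    void ShowStock(int quantity);
}

// A shared back-end service abstraction (assumed for this sketch).
public interface IStockService
{
    Task<int> GetCurrentStockAsync(string itemId);
}

// The presenter holds the presentation logic and is shared across Android and iOS.
public class StockPresenter
{
    private readonly IStockView _view;
    private readonly IStockService _stockService;

    public StockPresenter(IStockView view, IStockService stockService)
    {
        _view = view;
        _stockService = stockService;
    }

    public async Task LoadStockAsync(string itemId)
    {
        int quantity = await _stockService.GetCurrentStockAsync(itemId);
        _view.ShowStock(quantity); // route the result back to the passive view
    }
}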

The use of Xamarin combined with the MVP architecture allowed more than 95% of the code to be shared across platforms, with an emphasis on unit and automation testing. This is evident from the fact that the whole project was delivered in under nine months; the iOS application was started in the seventh month yet delivered alongside the Android app.

The components of the application were broken into individual packages so that they could be reused in future enterprise or customer-facing applications.

An excerpt from the OrderService shows how a sync component (LiveDataService), written to work with a live database, could be used for a consumer application as well. In the case below, it notifies the user of current promotional stock.

It is also worth noting that the use of Xamarin allowed components to be shared between mobile and back-end systems (as separate NuGet packages). This was the case with the custom synchronisation framework, model classes, ORM components, utility libraries, etc.

 

How LUIS can help BOTs in understanding natural language

As bots evolve, you need a mechanism to better understand what a user wants from their language, and to take action or respond to queries appropriately. In these days of increasing automation, bots can certainly help, provided they are backed by tools that understand user language both naturally and contextually.

Azure Cognitive Services has an API that can identify what a user wants and extract concepts and entities from a sentence (user input): the Language Understanding Intelligent Service (LUIS). It can process natural language using custom-trained language models and can incorporate active learning, depending on how it is trained.

In this blog post, we will build a LUIS app that can be utilised by a Bot, or any other client application, to respond to the user in a more meaningful way.

Create a LUIS app

  1. Go to https://www.luis.ai/ and sign up.
  2. You need to create a LUIS app by clicking ‘New App’ – this is the app you will be using in Bot Framework
  3. Fill out a form and give your app a unique name
  4. Your app will be created and you can see its details as below (you will be redirected to the Overview page).
  5. You need to create entities to identify the concepts in utterances (input from a user), a very important part of them. Let's create a few simple entities using the form below.
  6. You can also reuse pre-built entities like email, URL, date, etc.
  7. The next step is to build intents, each representing a task or action from an utterance (input from a user). By default, you will have None, which is for utterances irrelevant to your LUIS app.
  8. Once you have defined the series of intents, you need to add possible utterances against each intent, which forms the basis of active learning. Make sure to include varied terminology and different phrases to help LUIS identify them. You can build a Phrase List to include words that must be treated similarly, like company names or phone models.
  9. As you write utterances, you need to identify or tag entities, as we selected $service-request in our utterance. Remember: you are identifying possible phrases to help LUIS extract intents and entities from utterances.
  10. The next step is to train your LUIS app to help it identify entities and intents from utterances. Ensure you click Train Application when you are done with enough training (you can also train on a per-entity or per-intent basis).
  11. You can repeat step 10 as many times as you like to ensure the LUIS app is trained well enough on your language model.
  12. Publish the app once you have identified all possible entities, intents and utterances and have trained LUIS well to extract them from user input.
  13. Keep a note of the Programmatic API key from the My Keys section and the Application ID from the Settings menu of your LUIS app; you will need these two values when integrating LUIS with your client application.

Now you are ready to use your LUIS app in your Bot, or any other client application, to process natural language in a meaningful manner – Cheers!

Quickly creating and using an Azure Key Vault with PowerShell

Introduction

A couple of weeks back I was messing around with the Azure Key Vault, looking to centralise a bunch of credentials for my ever-growing list of Azure Functions that are automating numerous tasks. What I found was that getting an Azure Key Vault set up and getting credentials in and out was a little more cumbersome than I thought it should be. At that same time this tweet appeared in my Twitter timeline via a retweet. I'm not too sure why, but maybe because I've been migrating to VSCode myself, I checked out Axel's project.

[Image: tweet]

Axel's PowerShell module simplifies creating and integrating with the Azure Key Vault. After messing with it, and suggesting a couple of enhancements that Axel graciously entertained, I'm now creating vaults and adding and removing credentials in the simplified way I'd wanted.

This quickstart guide to using this module will get you started too.

Create an Azure Key Vault

This is one of the beauties of Axel's module: if the Resource Group and/or Storage Account you want associated with your Key Vault doesn't exist, it creates them.

Update the following script with the location (line 8) and the name (line 10) that will be given to your Storage Account, Resource Group and Vault. Modify it if you want to use different names for each.

Done, Key Vault created.

[Script: Create Azure KeyVault]

[Screenshot: Key Vault Created]

Connect to the Azure Key Vault

This script assumes you're now in a new session and want to connect to the Key Vault. Again, it's a simplified version whereby the Storage Account, Resource Group and Key Vault names are all the same. Update it for your location and Key Vault name.

Connected.

[Screenshot: Connect to Azure Key Vault]

Add a Certificate to the Azure Key Vault

To add a certificate to our new Key Vault, use the command below. It will prompt you for your certificate password and add the cert to the Key Vault.

[Script: Add Cert to Vault]

Certificate added to Key Vault.

[Screenshot: Cert Added to Vault]

Retrieve a Certificate from the Azure Key Vault

Retrieving a certificate from the Key Vault is just as simple.

$VaultCert = Get-AzureCertificate -Name "AADAppCert" -ResourceGroupName $name -StorageAccountName $name -VaultName $name

[Screenshot: Retrieve a Cert]

Add Credentials to the Azure Key Vault

Adding username/password or clientID/clientSecret to the Key Vault is just as easy.

# Store credentials into the Azure Key Vault
Set-AzureCredential -UserName "serviceAccount" -Password ($pwd = Read-Host -AsSecureString) -VaultName $name -StorageAccountName $name -Verbose

Credentials added to vault.

[Script: Add Creds to Key Vault]

[Screenshot: Creds Added to Vault]

Retrieve Credentials from the Azure Key Vault

Retrieving credentials is just as easy.

# Get credentials from the Azure Key Vault
$AzVaultCreds = Get-AzureCredential -UserName "serviceAccount" -VaultName $name -StorageAccountName $name -Verbose

Credentials retrieved.

[Screenshot: Retrieve Account Creds]

Remove Credentials from the Azure Key Vault

Removing credentials is also a simple cmdlet.

# Remove credentials from the Azure Key Vault
Remove-AzureCredential -UserName "serviceAccount" -VaultName $name -StorageAccountName $name -Verbose

Credentials removed.

[Screenshot: Remove Credential]

Summary

Hopefully this gets you started quickly with the Azure Key Vault. Credit to Axel for creating the module; it's now part of my toolkit, and I'm using it a lot.