Use Azure Hybrid Connections to get on-premises data from SQL to SharePoint Online

Azure Hybrid Connections are a simple, low-friction way to connect cloud applications with on-premises SQL data. This opens up great extensibility options for SharePoint Online, such as:

  1. Provider Hosted Apps hosted in Azure
  2. Business Data Connectivity using WCF services hosted in Azure
  3. SharePoint Hosted Apps using BCS external sources.

In this blog, I will illustrate the steps to configure Azure Hybrid Connections. In a nutshell, the diagram below outlines the data flow in Hybrid connections.

AzureHybridConnection1_Asish

Firstly, on the on-premises SQL server, if you have a named instance, assign a static port to it and expose that port through the firewall. If SQL is installed on the default instance, make sure port 1433 is open on the firewall.
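For example, the inbound rule could be created with PowerShell (a sketch, assuming Windows Server with the built-in NetSecurity module and the default port 1433):

# Allow inbound TCP connections to the SQL Server port
New-NetFirewallRule -DisplayName "SQL Server (Hybrid Connection)" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow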

Next, log into the Azure Portal, create a Resource Group, add an Azure Web App, and then add a Hybrid Connection from the Networking section
(Azure Web App -> Networking -> Configure Hybrid Connection)

AzureHybridConnection2_Asish

Note: Hybrid Connections can also be added by other resources, such as Azure Functions or other apps that can be tied to an App Service Plan.
Note: The number of Hybrid Connections is limited by the type of App Service Plan; the allowed connections per plan are shown in the table below. It is important to note that the Free App Service Plan doesn't include any Hybrid Connections.
Pricing Plan | Number of Hybrid Connections usable in the plan
Basic | 5
Standard | 25
Premium | 200
Isolated | 200

Next, add a New Hybrid Connection. In Endpoint Host, enter the fully qualified name of your SQL server, including the domain. In the Port field (shown in the screenshot below), enter the port the SQL instance is exposed on.

Note: There is no need to qualify the instance as server\instance in the Endpoint Host field, as the application code will specify the full connection details; the Hybrid Connection only needs to know the endpoint.
Note: You can also select existing Hybrid Connections from other resource groups.

AzureHybridConnection3_Asish
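For illustration, the consuming application's connection string might look like the sketch below (the server, database and account names are hypothetical placeholders):

// Hypothetical connection string for an app consuming the Hybrid Connection.
// The host and port must match the Hybrid Connection endpoint; the database
// (and instance, if any) is specified here, not in the Hybrid Connection.
var connectionString = "Data Source=sqlserver01.corp.contoso.com,1433;" +
                       "Initial Catalog=SalesDb;" +
                       "User ID=svc-sp-app;Password=<secret>;";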

After the Hybrid Connection is created, it will show up in the Azure Portal as in the screenshot below

AzureHybridConnection4_Asish

Next, download the Connection Manager using Download Connection Manager. It is an installer pre-configured with your Azure subscription details; when installed in an on-premises environment (preferably in the same data centre as the SQL Server), it acts as a listener for Azure Web App requests.

After installing the Hybrid Connection UI Manager, connect to your Azure subscription account to find the available Hybrid Connections. After selecting the connection, if the listener can connect to SQL, it will show Connection Successful.

AzureHybridConnection5_Asish

After the connection is successful, the Azure Portal will show the number of listeners as 1 and the connection status as Connected.

In this blog, we saw how to create Azure Hybrid Connections to connect an on-premises SQL server with an Azure App Service. In the next blog, we will discuss the steps to consume this connection and connect SharePoint Online (SPO) with the SQL data sources.

Azure Logic App – Evaluating IF condition with the help of JSON expression by passing null

Introduction

Yes, you read the title right, this blog is about evaluating an IF condition. You might be wondering what there is to say about IF; even a novice developer with no experience knows how it works.

Allow me to explain a specific scenario that helps us understand its behaviour in Logic Apps; it might blow your mind.

Some of us come from years of development experience in other technologies, which leaves us with a mindset and programming habits gained over those years. When we approach client requirements from that background, we expect the code to follow a certain flow, and this is where the rules break down when using the IF condition in Azure Logic Apps.

Understanding JSON expression

The json() expression converts a string into a JSON object, using the syntax shown below

json('{"Person":{"Name":"Simpson"}}') parses the string into a JSON object, so json('{"Person":{"Name":"Simpson"}}')['Person']['Name'] evaluates to Simpson

But the same call with a null input, json(null), throws an error (important), so avoid it where possible.

Understanding IF condition

IF doesn't need any special introduction; we know how it works. It has two code blocks, and based on the condition evaluation it falls into one of the blocks. The same applies to Logic Apps; below is the syntax for it.

@if(condition, valueIfTrue, valueIfFalse)

To understand IF better, let's also look at @equals(): a simple expression that returns true or false based on comparing the given input with the provided value.

Example 1

Below is just an example; please ignore the trivial equality condition.
@if(equals(1,1), 'true1', 'false1')
Result: true1

Example 2

@if(equals(1,2), 'true1', 'false1')
Result: false1

Now, let us take our person JSON and understand it.
@if(equals(1,1), 'Merge', json('{"Person":{"Name":"Homer"}}')['Person']['Name'])
Result: Merge

and similarly when the comparison is not equal

@if(equals(1,2), 'Merge', json('{"Person":{"Name":"Homer"}}')['Person']['Name'])
Result: Homer

Now, recall that a normal IF falls into only one of the code blocks and returns. But in Azure Logic Apps, IF evaluates both code blocks and then returns the result of the one the condition selects.

Here is the proof

For example, if I do something like the expression below, it should simply return "Merge", but it actually throws an error. This is the current behaviour of Logic Apps.

@if(equals(1,1), 'Merge', json(null)['Name'])
Result: error

And similarly when not equal

@if(equals(1,2), 'Merge', json(null)['Name'])
Result: error

The above examples imply that Logic Apps evaluates both code blocks and returns one.

The actual error thrown by a real Logic App is below:

InvalidTemplate. Unable to process template language expressions in action ‘Compose’ inputs at line ‘1’ and column ‘1525’: ‘The template language function ‘json’ expects its parameter to be a string or an XML. The provided value is of type ‘Null’. Please see https://aka.ms/logicexpressions#json for usage details.’.
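One way to guard against this (a sketch, assuming the possibly-null value arrives in the trigger body under a hypothetical person property) is to wrap the value in coalesce() so that json() always receives a valid string, even in the branch that is not selected:

@if(equals(1,2), 'Merge', json(coalesce(triggerBody()?['person'], '{"Person":{"Name":"Unknown"}}'))['Person']['Name'])

Because both branches are evaluated, the coalesce() fallback is what prevents json(null) from ever being called.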

Adding Bot to Microsoft Teams

If you are following my previous blog posts about Bots and integrating LUIS with them, you are almost done building bots and have already had some fun with them. Now it's time to bring them to life and let internal or external users interact with the Bot via a front-end channel accessible to them. If you haven't read my previous posts on the subject yet, please give them a read at Creating a Bot and Creating a LUIS app before reading further.

In this blog post, we will be integrating our previously created intelligent Bot into a Microsoft Teams channel. Following the step-by-step process below, you can add your bot to an MS Teams channel.

Bringing Bot to Life

  1. As a first step, you need to create a package as outlined here and build a manifest as per the schema listed here. This will include your Bot logos and a manifest file as shown below (a sample manifest sketch appears after this list).

  2. Once the manifest file is created, you need to zip it along with the logos, as shown above, to make it a package (*.zip)
  3. Open the Microsoft Teams interface, select the team you want to add the Bot to, and go to the Manage team section as highlighted below.

  4. Click on the Bots tab, then select Sideload a bot as highlighted and upload your previously created zip file

  5. Once successful, it will show the bot that you have just added to your selected team as shown below.

  6. If everything went well, your Bot is now ready and available in the team's conversation window to interact with. While addressing the Bot, you need to start with @BotName to direct messages to the Bot, as shown below.

  7. Based on the configuration you have done as part of the manifest file, your command list will be available against your Bot name.

  8. Now you can ask your Bot the questions you have trained your LUIS app with, and it will respond as programmed.

  9. You just need to ensure your Bot is programmed to respond to the possible questions your end users may ask.

  10. You can program a bot to acknowledge the user first and then respond in detail to the user's question. If the response contains multiple records, you can present it using cards, as shown below.

  11. Or if a response requires some additional actions, you can have a link or a button to launch a URL directly from your team conversation.

  12. Besides adding a Bot to a team, you can add tabs to a team as well, which can show any SPA (single page application) or even a dashboard built to your needs. Below is just an example of what can be achieved using tabs inside MS Teams.
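For reference, a minimal bot manifest might look like the sketch below. The IDs, package name and developer details are hypothetical placeholders, and the exact fields depend on the manifest schema version you target, so treat this purely as an illustration:

{
  "$schema": "https://statics.teams.microsoft.com/sdk/v1.0/manifest/MicrosoftTeams.schema.json",
  "manifestVersion": "1.0",
  "version": "1.0.0",
  "id": "00000000-0000-0000-0000-000000000000",
  "packageName": "com.contoso.helpdeskbot",
  "developer": {
    "name": "Contoso",
    "websiteUrl": "https://www.contoso.com",
    "privacyUrl": "https://www.contoso.com/privacy",
    "termsOfUseUrl": "https://www.contoso.com/terms"
  },
  "name": "HelpdeskBot",
  "description": {
    "short": "Helpdesk bot",
    "full": "A LUIS-enabled bot that answers helpdesk queries."
  },
  "icons": {
    "outline": "outline.png",
    "color": "color.png"
  },
  "bots": [
    {
      "botId": "00000000-0000-0000-0000-000000000000",
      "scopes": [ "team", "personal" ]
    }
  ]
}

Here botId would be the Microsoft App ID of your registered bot.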

As MS Teams evolves as group chat software, it can be leveraged to build useful integrations as a front face to many of an organisation's needs, capitalising on Bots as one example.

Using a Bot Framework to build LUIS enabled Bots

History

In this post, we are going to build a bot using the Microsoft Bot Framework and add intelligence to it to extract meaning from conversations with users, utilising the Microsoft cognitive service named LUIS. The last post discussed LUIS in detail; give it a read before you continue. This post assumes you have a basic understanding of Language Understanding Intelligent Service (LUIS) and the Bot Framework; further details can be read about them at LUIS and Bot Framework.

Pre-requisites

You need to download a few items to start your quick bot development; please get all of them before you jump to the next section.

  • The Bot template is available at URL (this will help you scaffold your solution)
  • The Bot SDK is available on NuGet (this is mandatory to build a Bot)
  • The Bot emulator is available on GitHub (this helps you test your bot during development)

Building a Bot

  1. Create an empty solution in Visual Studio and add the Bot template project to it as an existing project.
  2. Your solution directory should look like the one below:

  3. Replace the parameters $safeprojectname$ and $guid1$ with a meaningful name for your project and a unique GUID
  4. The next step is to restore and update the NuGet packages and ensure all dependencies are resolved.

  5. Run the application from Visual Studio and you should see the bot application up and running

  6. Now open the Bot emulator and connect to your Bot as follows:

  7. Once connected, you can send a test text message to see if the Bot is responding

  8. At this point, your bot is up and running, and in this step you will add a LUIS dialog to it. Add a new class named RootLuisDialog under the Dialogs folder and add a method against each intent that you have defined in your LUIS app. Ensure you have your LUIS app ID and key to decorate your class as shown below (a sketch of such a class appears after this list):

  9. Let’s implement a basic response from LUIS against intent ‘boot’ as shown in the code below.

  10. Open up an emulator, and try to use any utterance we have trained our LUIS application with. A sample bot response should be received as we have implemented in the code above. LUIS will identify intent ‘boot’ from a user message as shown below.

  11. And now we will implement a slightly more advanced response from LUIS against our intent 'status', as shown in the code below.

  12. And now you can send a more complex message to your bot; it will send the message to LUIS to extract the entity and intent from the utterance, and respond to the user according to your implementation.
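The original post shows this code as screenshots; a minimal sketch of what such a RootLuisDialog might look like with Bot Framework v3 is below, assuming intents named 'boot' and 'status' and an entity named 'service-request' (all defined in your LUIS app):

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<your-luis-app-id>", "<your-luis-subscription-key>")]
[Serializable]
public class RootLuisDialog : LuisDialog<object>
{
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        // Fallback for utterances LUIS cannot match to an intent
        await context.PostAsync("Sorry, I didn't understand that.");
        context.Wait(MessageReceived);
    }

    [LuisIntent("boot")]
    public async Task Boot(IDialogContext context, LuisResult result)
    {
        // Basic response against the 'boot' intent
        await context.PostAsync("Sure, I will boot that machine for you.");
        context.Wait(MessageReceived);
    }

    [LuisIntent("status")]
    public async Task Status(IDialogContext context, LuisResult result)
    {
        // Pull the entity (if any) that LUIS extracted from the utterance
        EntityRecommendation entity;
        if (result.TryFindEntity("service-request", out entity))
        {
            await context.PostAsync($"Checking the status of: {entity.Entity}");
        }
        else
        {
            await context.PostAsync("Which service would you like the status of?");
        }
        context.Wait(MessageReceived);
    }
}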

And the list of intent implementations goes on; you can customise behaviour to your needs, as LUIS is now ready to rock and roll within your bot, and users can take advantage of it to issue specific commands or enquire about entities through your Bot. Happy Botting 🙂

How LUIS can help BOTs in understanding natural language

As bots evolve, you need a mechanism to better understand what a user wants from their language, and to take actions or respond to user queries appropriately. In these days of increasing automation, bots can certainly help, provided they are backed by tools that understand user language both naturally and contextually.

Azure Cognitive Services has an API that can help identify what a user wants and extract concepts and entities from a sentence (user input): the intelligent service named Language Understanding Intelligent Service (LUIS). It processes natural language using custom-trained language models and can incorporate Active Learning, depending on how it is trained.

In this blog post, we will be building a LUIS app that can be utilised in a Bot or any other client application to respond to the user in a more meaningful way.

Create a LUIS app

  1. Go to https://www.luis.ai/ and sign up.
  2. You need to create a LUIS app by clicking ‘New App’ – this is the app you will be using in Bot Framework
  3. Fill out a form and give your app a unique name
  4. Your app will be created, and you can see details as below (page will be redirected to Overview)
  5. You need to create entities to identify concepts, which are a very important part of utterances (input from a user). Let's create a few simple entities using the form below
  6. You can also reuse pre-built entities like email, URL, date, etc.
  7. The next step is to build intents, which represent a task or an action from an utterance (input from a user). By default, you will have None, which is for utterances irrelevant to your LUIS app.
  8. Once you have defined the series of intents, you need to add possible utterances against each intent, which forms the basis of Active Learning. Make sure to include varied terminology and different phrases to help LUIS identify them. You can build a Phrase List to include words that must be treated similarly, like company names or phone models.
  9. As you write utterances, you need to identify or tag entities, as we did with $service-request in the utterance. Remember: you are identifying possible phrases to help LUIS extract intents and entities from utterances.
  10. The next step is to train your LUIS app to help it identify entities and intents from utterances. Ensure you click Train Application when you are done with enough training (you can also train on a per-entity or per-intent basis)
  11. You can repeat step 10 as many times as you like to ensure the LUIS app is trained well enough on your language model.
  12. Publish the app once you have identified all possible entities, intents, utterances and have trained LUIS well to extract them from user input.
  13. Keep a note of the Programmatic API key from the My Keys section and the Application ID from the Settings menu of your LUIS app; you will need these two keys when integrating LUIS with your client application.

Now you are ready to go ahead and use your LUIS app in your Bot or any other client application to process natural language in a meaningful manner – Cheers!

Quickly creating and using an Azure Key Vault with PowerShell

Introduction

A couple of weeks back I was messing around with Azure Key Vault, looking to centralise a bunch of credentials for my ever-growing list of Azure Functions that are automating numerous tasks. What I found was that getting an Azure Key Vault set up, and getting credentials in and out, was a little more cumbersome than I thought it should be. At that same point, a tweet appeared in my Twitter timeline via a retweet. I'm not too sure why, but maybe because I've been migrating to VSCode myself, I checked out Axel's project.

Tweet

Axel’s PowerShell Module simplifies creating and integrating with the Azure Key Vault. After messing with it and suggesting a couple of enhancements that Axel graciously entertained, I’m creating vaults, adding and removing credentials in the simplified way I’d wanted.

This quickstart guide to using this module will get you started too.

Create an Azure Key Vault

This is one of the beauties of Axel's module. If the Resource Group and/or Storage Account you want associated with your Key Vault doesn't exist, it creates them.

Update the following script for the location (line 8) and the name (line 10) that will be given to your Storage Account, Resource Group and Vault. Modify if you want to use different names for each.

Done, Key Vault created.

Create Azure KeyVault

Key Vault Created

Connect to the Azure Key Vault

This script assumes you're now in a new session and want to connect to the Key Vault. Again, this is a simplified version whereby the Storage Account, Resource Group and Key Vault names are all the same. Update it for your location and Key Vault name.

Connected.

Connect to Azure Key Vault

Add a Certificate to the Azure Key Vault

To add a certificate to our new Key Vault use the command below. It will prompt you for your certificate password and add the cert to the key vault.

Add Cert to Vault

Certificate added to Key Vault.

Cert Added to Vault

Retrieve a Certificate from the Azure Key Vault

Retrieving a certificate from the Key Vault is just as simple.

$VaultCert = Get-AzureCertificate -Name "AADAppCert" -ResourceGroupName $name -StorageAccountName $name -VaultName $name

Retrieve a Cert

Add Credentials to the Azure Key Vault

Adding username/password or clientID/clientSecret to the Key Vault is just as easy.

# Store credentials into the Azure Key Vault
Set-AzureCredential -UserName "serviceAccount" -Password ($pwd = Read-Host -AsSecureString) -VaultName $name -StorageAccountName $name -Verbose

Credentials added to vault

Add Creds to Key Vault

Creds Added to Vault

Retrieve Credentials from the Azure Key Vault

Retrieving credentials is just as easy.

# Get credentials from the Azure Key Vault
$AzVaultCreds = Get-AzureCredential -UserName "serviceAccount" -VaultName $name -StorageAccountName $name -Verbose

Credentials retrieved.

Retrieve Account Creds
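Assuming the module hands back a standard PSCredential object, the retrieved credentials can be passed straight to other cmdlets, for example:

# Use the retrieved credential (assumes a standard PSCredential) to sign in to Azure
Login-AzureRmAccount -Credential $AzVaultCreds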

Remove Credentials from the Azure Key Vault

Removing credentials is also a simple cmdlet.

# Remove credentials from the Azure Key Vault
Remove-AzureCredential -UserName "serviceAccount" -VaultName $name -StorageAccountName $name -Verbose

Credentials removed.

Remove Credential

Summary

Hopefully this gets you started quickly with the Azure Key Vault. Credit to Axel for creating the module. It’s now part of my toolkit that I’m using a lot.

Ok Google Email me the status of all vms – Part 2

First published at https://nivleshc.wordpress.com

In my last blog, we configured the backend systems necessary for accomplishing the task of asking Google Home “OK Google Email me the status of all vms” and it sending us an email to that effect. If you haven’t finished doing that, please refer back to my last blog and get that done before continuing.

In this blog, we will configure Google Home.

Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.

For our purposes, we will be using the platform If This Then That, or IFTTT for short. IFTTT is a very powerful platform, as it lets you create actions based on triggers. A combination of a trigger and an action is called a recipe.

Ok, let's dig in and create our IFTTT recipe to accomplish our task.

1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)

1.2   Login to IFTTT and click on My Applets menu from the top

IFTTT_MyApplets_Menu

1.3   Next, click on New Applet (top right hand corner)

1.4   A new recipe template will be displayed. Click on the blue +this to choose a service

IFTTT_Reicipe_Step1

1.5   Under Choose a Service type “Google Assistant”

IFTTT_ChooseService

1.6   Google Assistant will be displayed in the results. Click on it

1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, login with the Google account that is associated with your Google Home and then approve IFTTT to access it.

IFTTT_ConnectGA

1.8   The next step is to choose a trigger. Click on Say a simple phrase

IFTTT_ChooseTrigger

1.9   Now we will put in the phrases that Google Home should trigger on.

IFTTT_CompleteTrigger

For

  • What do you want to say? enter “email me the status of all vms”
  • What do you want the Assistant to say in response? enter “no worries, I will send you the email right away”

All the other sections are optional; however, you can fill them in if you prefer to do so

Click Create trigger

1.10   You will be returned to the recipe editor. To choose the action service, click on +that

IFTTT_That

1.11  Under Choose action service, type webhooks. From the results, click on Webhooks

IFTTT_ActionService

1.12   Then for Choose action click on Make a web request

IFTTT_Action_Choose

1.13   Next the Complete action fields screen is shown.

For

  • URL – paste the webhook url of the runbook that you had copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields

Click Create action
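Incidentally, you can sanity-check the webhook itself before involving IFTTT. A quick test from PowerShell (assuming $webhookUrl holds the webhook URL you copied in the previous blog) might look like:

# Trigger the runbook directly by POSTing to its webhook URL
Invoke-RestMethod -Method Post -Uri $webhookUrl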

1.14   In the next screen, click Finish

IFTTT_Review


Woo hoo. Everything is now complete. Let's do some testing.

Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries. I will send you the email right away”.

I have noticed some delays in receiving the email; however, the most I have had to wait is 5 minutes. If this is unacceptable, modify the Send-MailMessage command in the runbook script by adding the parameter -Priority High. This sends all emails with high priority, which should make things faster. Also, the runbook currently runs in Azure; better performance might be achieved by using Hybrid Runbook Workers.

To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.

Automation_JobsLog

There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.

If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.

Ok Google Email me the status of all vms – Part 1

First published at https://nivleshc.wordpress.com

Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!

One of the upcoming areas in computing science is Artificial Intelligence. What seemed like science fiction in the days of Isaac Asimov, when he penned I, Robot, seems closer to reality now.

Lately the market has been filling up with virtual assistants from the likes of Apple, Amazon and Google. These are "bots" that use Artificial Intelligence to help us with our daily lives: telling us about the weather, reminding us about our shopping lists, or letting us know when our next train will arrive. I still remember my first virtual assistant, Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.

I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions: it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well, maybe in the next version (Google, here's a tip for your next version 😉 )

Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.

In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines; however, I would caution against doing this in your production environment, in case you turn off a machine that is running critical workloads.

In this first blog post, we will set up the backend systems to achieve the task, and in the next blog post, we will connect it all to Google Home.

The diagram below shows how we will achieve what we have set out to do.

Google Home Workflow

Below is a list of tasks that will happen

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service
  3. IFTTT will then trigger the webhooks service to call a webhook url attached to an Azure Automation Runbook
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run, and obtain a status of all vms.
  6. The output will be emailed to the designated recipient

Ok, enough talking 😉 let's start cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides permissions to access the Azure resources. For our purposes, we will be using a service principal account.

Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to login to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where

DisplayName is the display name for your AD Application eg “Google Home Automation”

HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)

IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation

Password is the password you will give the service principal account

Now, let's create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure tenant. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is its ApplicationId. To get the ApplicationId, run the following, providing the IdentifierUri that was supplied when creating the application above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let's check to ensure we can log in to Azure using the newly created service principal account and the password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where Tenantid is your Azure Tenant’s ID
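If you don't have your tenant ID handy, one way to retrieve it (while still logged in with your own account) is:

# Retrieve the tenant ID of the current subscription
(Get-AzureRmSubscription | Select-Object -First 1).TenantId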

If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.

2.1   Login to the Azure Portal and then click New

AzureMarketPlace_New

2.2   Then type Automation and click search. From the results click the following.

AzureMarketPlace_ResultsAutomation

2.3   In the next screen, click Create

2.4   Next, fill in the appropriate details and click Create

AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately, Azure doesn't provide relay servers that scripts can use to send email. Instead, you have to use either EOP (Exchange Online Protection) servers or SendGrid. SendGrid is an Email Delivery Service available through Azure, and you need to create an account to use it. For our purposes, we will use the free tier, which allows the delivery of 2500 emails per month; plenty for us.

3.1   In the Azure Portal, click New

AzureMarketPlace_New

3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create

AzureMarketPlace_ResultsSendGrid

3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.

SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will be creating a Runbook that will contain our PowerShell script that will do all the work. The script will be using the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside our PowerShell script.

4.1   Go into the Automation Account that you had created.

4.2   Under Shared Resources click Credentials

AutomationAccount_Credentials

4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create

Credentials_Details

4.4   Repeat step 4.3 above to add the SendGrid account

4.5   Now that the Credentials have been stored, under Process Automation click Runbooks

Automation_Runbooks

Then click Add a runbook and in the next screen click Create a new runbook

4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create

Runbook_Details

4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'

Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}

$outputFile = $env:TEMP+ "\AzureVmStatus.html"
$vmarray = @()

#Get a list of all vms 
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"

$index = 0
# Add info about VM's to the array
foreach ($vm in $vms){ 
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status
 $vmstatus = Get-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
# Write the sorted list to the job output stream (visible in the job logs)
$vmarray | Sort-Object PowerState,OSType -Desc

Write-Output "Converting Output to HTML" 
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"

$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"

Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal Account’ and ‘SendGrid Account’ are the names of the credentials that were created in the Automation Account (include the ‘ ‘ around the name)
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address the same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of vms

4.8   Next, we will create a Webhook. A webhook is a special URL that will allow us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password since whoever possesses the webhook can execute the runbook without needing to provide any credentials.

Open the runbook that was just created and from the top menu click on Webhook

Webhook_menu

4.9   In the next screen click Create new webhook

4.10  A security message will be displayed, informing you that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.

Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.

Once the webhook has expired, you can't use it to trigger the runbook; however, before it expires, you can change the expiry date. For security reasons, it is recommended that you don't keep the webhook alive for a long period of time.

Webhook_details

That's it folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.

Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉

I will see you in Part 2 of this blog. Ciao 😉

Monitoring Azure Storage Queues with Application Insights and Azure Monitor

Azure Queues provides an easy queuing system for cloud-based applications. Queues allow for loose coupling between application components, and applications that use queues can take advantage of features like peek-locking and multiple retry attempts to enable application resiliency and high availability. Additionally, when Azure Queues are used with Azure Functions or Azure WebJobs, the built-in poison queue support allows for messages that repeatedly fail processing attempts to be moved to a dedicated queue for later inspection.

An important part of operating a queue-based application is monitoring the length of queues. This can tell you whether the back-end parts of the application are responding, whether they are keeping up with the amount of work they are being given, and whether there are messages that are causing problems. Most applications will have messages being added to and removed from queues as part of their regular operation. Over time, an operations team will begin to understand the normal range for each queue’s length. When a queue goes out of this range, it’s important to be alerted so that corrective action can be taken.

Azure Queues don’t have a built-in queue length monitoring system. Azure Application Insights allows for the collection of large volumes of data from an application, but it does not support monitoring queue lengths with its built-in functionality. In this post, we will create a serverless integration between Azure Queues and Application Insights using an Azure Function. This will allow us to use Application Insights to monitor queue lengths and set up Azure Monitor alert emails if the queue length exceeds a given threshold.

Solution Architecture

There are several ways that Application Insights could be integrated with Azure Queues. In this post we will use Azure Functions. Azure Functions is a serverless platform, allowing for blocks of code to be executed on demand or at regular intervals. We will write an Azure Function to poll the length of a set of queues, and publish these values to Application Insights. Then we will use Application Insights’ built-in analytics and alerting tools to monitor the queue lengths.

Base Application Setup

For this sample, we will use the Azure Portal to create the resources we need. You don’t even need Visual Studio to follow along. I will assume some basic familiarity with Azure.

First, we’ll need an Azure Storage account for our queues. In our sample application, we already have a storage account with two queues to monitor:

  • processorders: this is a queue that an API publishes to, and a back-end WebJob reads from the queue and processes its items. The queue contains orders that need to be processed.
  • processorders-poison: this is a queue that WebJobs has created automatically. Any messages that cannot be processed by the WebJob (by default after five attempts) will be moved into this queue for manual handling.

Next, we will create an Azure Functions app. When we create this through the Azure Portal, the portal helpfully asks if we want to create an Azure Storage account to store diagnostic logs and other metadata. We will choose to use our existing storage account, but if you prefer, you can have a different storage account than the one your queues are in. Additionally, the portal offers to create an Application Insights account. We will accept this, but you can create it separately later if you want.

1-FunctionsApp

Once all of these components have been deployed, we are ready to write our function.

Azure Function

Now we can write an Azure Function to connect to the queues and check their length.

Open the Azure Functions account and click the + button next to the Functions menu. Select a Timer trigger. We will use C# for this example. Click the Create this function button.

2-Function

By default, the function will run every five minutes. That might be sufficient for many applications. If you need to run the function on a different frequency, you can edit the schedule element in the function.json file and specify a cron expression.
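For example, to run the check every minute, the timer binding in function.json might look like the sketch below (Azure Functions uses a six-field cron expression that includes seconds):

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 * * * * *"
    }
  ],
  "disabled": false
}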

Next, paste the following code over the top of the existing function:
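The original post embeds this code; a reconstruction consistent with the key parts discussed below might look like this (the queue names match our sample application):

#r "Microsoft.WindowsAzure.Storage"
#r "System.Configuration"

using System;
using Microsoft.WindowsAzure.Storage;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    // The queues we want to monitor
    var queueNames = new [] { "processorders", "processorders-poison" };

    // Connect to the storage account used by the function app
    var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];
    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var queueClient = storageAccount.CreateCloudQueueClient();

    foreach (var queueName in queueNames)
    {
        // Fetch the queue's attributes so we can read its approximate length
        var queue = queueClient.GetQueueReference(queueName);
        queue.FetchAttributes();
        var length = queue.ApproximateMessageCount;
        log.Info($"{queueName}: {length}");
    }
}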

This code connects to an Azure Storage account and retrieves the length of each queue specified. The key parts here are:

var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];

Azure Functions has an application setting called AzureWebJobsStorage. By default this refers to the storage account created when we provisioned the functions app. If you wanted to monitor a queue in another account, you could reference the storage account connection string here.

var queue = queueClient.GetQueueReference(queueName);
queue.FetchAttributes();
var length = queue.ApproximateMessageCount;

When you obtain a reference to a queue, you must explicitly fetch the queue attributes in order to read the ApproximateMessageCount. As the name suggests, this count may not be completely accurate, especially in situations where messages are being added and removed at a high rate. For our purposes, an approximate message count is enough for us to monitor.

log.Info($"{queueName}: {length}");

For now, this line will let us view the length of the queues within the Azure Functions log window. Later, we will switch this out to log to Application Insights instead.

Click Save and run. You should see something like the following appear in the log output window below the code editor:

2017-09-07T00:35:00.028 Function started (Id=57547b15-4c3e-42e7-a1de-1240fdf57b36)
2017-09-07T00:35:00.028 C# Timer trigger function executed at: 9/7/2017 12:35:00 AM
2017-09-07T00:35:00.028 processorders: 1
2017-09-07T00:35:00.028 processorders-poison: 0
2017-09-07T00:35:00.028 Function completed (Success, Id=57547b15-4c3e-42e7-a1de-1240fdf57b36, Duration=9ms)

Now we have our function polling the queue lengths. The next step is to publish these into Application Insights.

Integrating into Azure Functions

Azure Functions has integration with Application Insights for logging each function execution. In this case, we want to save our own custom metrics, which is not currently supported by the built-in integration. Thankfully, integrating the full Application Insights SDK into our function is very easy.

First, we need to add a project.json file to our function app. To do this, click the View files tab on the right pane of the function app. Then click the + Add button, and name your new file project.json. Paste in the following:
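The original post embeds the file contents; for a v1 (.csx) function app, project.json looks like the sketch below (the package version here is an assumption; use a current one):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.ApplicationInsights": "2.4.0"
      }
    }
  }
}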

This adds a NuGet reference to the Microsoft.ApplicationInsights package, which allows us to use the full SDK from our function.

Next, we need to update our function so that it writes the queue length to Application Insights. Click on the run.csx file on the right-hand pane, and replace the current function code with the following:
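Again, the original code is embedded in the post; a sketch of the updated function, consistent with the fragments below, might look like this:

#r "Microsoft.WindowsAzure.Storage"
#r "System.Configuration"

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.WindowsAzure.Storage;

// Set up a telemetry client using the function app's instrumentation key
private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
private static TelemetryClient telemetry = new TelemetryClient() { InstrumentationKey = key };

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    var queueNames = new [] { "processorders", "processorders-poison" };
    var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];
    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var queueClient = storageAccount.CreateCloudQueueClient();

    foreach (var queueName in queueNames)
    {
        var queue = queueClient.GetQueueReference(queueName);
        queue.FetchAttributes();
        var length = queue.ApproximateMessageCount ?? 0;
        log.Info($"{queueName}: {length}");
        // Publish the queue length as a custom metric to Application Insights
        telemetry.TrackMetric($"Queue length - {queueName}", (double)length);
    }
}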

The key new parts that we have just added are:

private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
private static TelemetryClient telemetry = new TelemetryClient() { InstrumentationKey = key };

This sets up an Application Insights TelemetryClient instance that we can use to publish our metrics. Note that in order for Application Insights to route the metrics to the right place, it needs an instrumentation key. Azure Functions’ built-in integration with Application Insights means that we can simply reference the instrumentation key it has set up for us. If you did not set up Application Insights at the same time as the function app, you can configure this separately and set your instrumentation key here.

Now we can publish our custom metrics into Application Insights. Note that Application Insights has several different types of custom diagnostics that can be tracked. In this case, we use metrics since they provide the ability to track numerical values over time, and set up alerts as appropriate. We have added the following line to our foreach loop, which publishes the queue length into Application Insights.

telemetry.TrackMetric($"Queue length - {queueName}", (double)length);

Click Save and run. Once the function executes successfully, wait a few minutes before continuing with the next step – Application Insights takes a little time (usually less than five minutes) to ingest new data.

Exploring Metrics in Application Insights

In the Azure Portal, open your Application Insights account and click Metrics Explorer in the menu. Then click the + Add chart button, and expand the Custom metric collection. You should see the new queue length metrics listed.

4-AppInsights

Select the metrics, and as you do, notice that a new chart is added to the main panel showing the queue length count. In our case, we can see the processorders queue count fluctuate between one and five messages, while the processorders-poison queue stays empty. Set the Aggregation property to Max to better see how the queue fluctuates.

5-AppInsightsChart

You may also want to click the Time range button in the main panel, and set it to Last 30 minutes, to fully see how the queue length changes during your testing.

Setting up Alerts

Now that we can see our metrics appearing in Application Insights, we can set up alerts to notify us whenever a queue exceeds some length we specify. For our processorders queue, this might be 10 messages – we know from our operations history that if we have more than ten messages waiting to be processed, our WebJob isn’t processing them fast enough. For our processorders-poison queue, though, we never want to have any messages appear in this queue, so we can have an alert if more than zero messages are on the queue. Your thresholds may differ for your application, depending on your requirements.

Click the Alert rules button in the main panel. Azure Monitor opens. Click the + Add metric alert button. (You may need to select the Application Insights account in the Resource drop-down list if this is disabled.)

On the Add rule pane, set the values as follows:

  • Name: Use a descriptive name here, such as `QueueProcessOrdersLength`.
  • Metric: Select the appropriate `Queue length – queuename` metric.
  • Condition: Set this to the value and time period you require. In our case, I have set the rule to `Greater than 10 over the last 5 minutes`.
  • Notify via: Specify how you want to be notified of the alert. Azure Monitor can send emails, call a webhook URL, or even start a Logic App. In our case, I have opted to receive an email.

Click OK to save the rule.

6-Alert.PNG

If the queue count exceeds your specified limit, you will receive an email shortly afterwards with details:

7-Alert

Summary

Monitoring the length of your queues is a critical part of operating a modern application, and getting alerted when a queue is becoming excessively long can help to identify application failures early, and thereby avoid downtime and SLA violations. Even though Azure Storage doesn’t provide a built-in monitoring mechanism, one can easily be created using Azure Functions, Application Insights, and Azure Monitor.

Building websites with Ionic Framework, Angular and Microsoft Azure App Services

The Ionic Framework (https://ionicframework.com/) is an Angular 4-based framework designed to quickly and easily build beautiful applications that can target native platforms as well as Progressive Web Apps (PWAs). In this blog post, I'll walk through the steps to start your own Ionic PWA hosted on Azure App Services, which will then serve your application.

What is Microsoft Azure App Services?

Microsoft Azure is a cloud platform that lets server workloads you'd previously host locally, in a data centre or on a server somewhere, run in an environment where massive scale and availability are available at an hourly rate. This is great for this application, because it only needs to serve static HTML and assets, which is very low on CPU overhead; we'll host it as a free site in Azure and it will be fine. Azure App Services are Platform-as-a-Service web applications: your code, hosted on Azure's infrastructure. You don't worry about the operating system or the web server, only the code you want to write.

What is Angular?

Angular is a browser-based framework that allows complex single-page applications to be built and run using a common set of libraries and structure. Angular 4 is the current version of the framework, and uses TypeScript as the language you write programming components in, along with SCSS for CSS files, and HTML.

What is Ionic Framework?

Ionic Framework takes the Angular framework and powers it with Cordova to allow web application developers to develop native apps. This means you have one common language and framework set that everyone can use to develop apps, native or web-based. It also recently added support for building applications as PWAs, or Progressive Web Apps, which are web-based applications that behave very similarly to native apps. It's this capability that we will take advantage of in this tutorial.

Prerequisites

You’ll need a Microsoft Azure account to create the Web App. You should have some knowledge of git, typescript and web application development, but most of this we’ll step through. You also need to install node and npm (https://nodejs.org/en/download/) which will enable you to develop the application. You can check that node and npm are working correctly by opening terminal or a command prompt and typing “npm -v” which will show you the current version of npm.

Steps to take

  1. First you need to install the ionic framework and cordova on your machine. You do this by opening a command prompt or terminal window and running: “npm install -g ionic cordova”
  2. Once you've done this, you'll be able to run the command "ionic" and you should see the following:
  3. Once this is done, you will need to create a directory and create your Ionic app. From your newly created directory, run "ionic start myApp blank". You'll be asked if you want to integrate your new app with Cordova; this means you would like to make a native application. In this instance we don't (we're creating a PWA), so type "n" and press enter. This will download and install the required code for your application. Wait patiently, this will take a few minutes.
  4. Once you’ve seen the success message, then your app is ready to serve locally. Change directory to “./myApp” and then run “ionic serve” and you should see your browser open with your app running. If you get an error saying “Sorry! ionic serve can only be run in an ionic project directory” you aren’t in the right folder.
  5. Now that your application is built, it's ready for you to develop. All you do is go and edit the code in your src folder and build what you need to. There are great generators and assistants that you can use to structure your app.
  6. At this point we need to ready our code for production. This means we need to minify, AoT-compile and tree-shake any wasted code from the TypeScript, and remove our debug maps, to reduce the size of the application delivered to our users. To do this we run "ionic build --prod", which produces our production-ready output.
  7. It's worth noting the "--prod" in the above build. This does the magic of reducing your code size. If you don't do this, the app will be megabytes in size (as you will take all of Angular and Ionic and their dependencies, which you won't need). Try checking the size of the "www" folder both ways. Mine went down from 11.1Mb to 2.96Mb.
  8. Our code is ready to commit to git. In fact, Ionic has already initialised a repository for you; there are only a few other items to check in, so run "git add ." and "git commit -m "initial build"" and you'll be all good.
  9. The next step is to create your web app in Azure. Go to portal.azure.com and click Add -> Web App, then enter the details and choose your plan (note you will need to force the "free" plan in the settings).
  10. Once you've deployed, your app will be able to be viewed at https://{yourwebappnamehere}.azurewebsites.net/. In this case https://bradleytest.azurewebsites.net/.
  11. Now it's just time to get our running Ionic code from our local machine (or build server, if you use continuous integration/delivery) to our application. I've got a remote GitHub repository I'm going to push this code to (https://github.com/bsmithb2/ionicdemo), so I'll run "git remote add origin https://github.com/bsmithb2/ionicdemo.git" and then "git push origin master".

Connecting git to Microsoft Azure

In this part, we'll connect git to Microsoft Azure and continuously build and deploy using Visual Studio Team Services.

  1. Go to your web application you created in the Microsoft Azure portal, and then choose the “Continuous Delivery (preview)” menu option.
  2. Choose your source control repository (GitHub in my case) in the first stage.
  3. Now select Build, then configure Continuous Delivery. This will set up your build in Visual Studio Team Services. You'll need to select nodejs as your build type. It will take a few minutes to set up the build and perform the first build and deploy. At this stage your app won't work, but don't worry, we'll fix that next.
  4. Once your build is set up, click on "Build Definition". We need to make a change in the build definition, as the build isn't yet running npm, and the folder you wish to package and deploy is actually the "www" subdirectory.
  5. In the build process, add a new task and choose npm. Change the Command to "Custom" and then add "npm build --prod" to the arguments. This matches the build you did with "ionic build --prod" in step 6.
  6. Once done, click the "Archive files" task and add "/www" to the Root folder (or files) to archive. This tells VSTS to only package our output directory, which is all we need.
  7. Save the build. You can queue a build now if you like, or wait and queue one once we’ve tweaked the release.
  8. Go to releases, then choose your release (if you are unsure which one, there is a "Release Definition" link in the Azure Portal near the Build Definition one).
  9. Turn off the web.config creation in File Transforms & Variable Substitution Options.
  10. Turn on the “Remove Additional Files” setting in Additional Deployment Options.
  11. Save the Release Definition.
  12. At this point, you can trigger a build. It should take a few minutes. You can do this in VSTS, or alternatively change a file in your local git repository and push to github.
  13. Once the build has completed, open your web application again, and you’ll see your Ionic application!

finished

Conclusion

Ionic is a great solution to build cross-platform, responsive, mobile applications. Serving these applications is incredibly easy to do using Visual Studio Team Services and Microsoft Azure. We get great benefits in separation of concerns, while our scalability, security and cost management processes are simple as we’ve only deployed our consumer side code to this service, and its secured and managed infrastructure saves us time and risk. In this tutorial, we’ve built our first Ionic Application, pushed it to github and then set up our continuous delivery system in a few easy steps.