Azure AD Domain Services

I recently had what I thought was a rather unique requirement from a customer.

The requirement was to build Azure IaaS virtual machines and have them joined to a managed domain, while also being able to authenticate to the virtual machines using Azure AD credentials.

The answer is Azure AD Domain Services!

Azure AD Domain Services provides managed domain services such as domain join, Group Policy and Kerberos/NTLM authentication, without the need for you to deploy and manage domain controllers in the cloud. For more information see https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-overview

It is not without its limitations though; the main things to call out are that configuring domain trusts and applying schema extensions are not possible with Azure AD Domain Services. For a full list of limitations see: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-comparison

Unfortunately, at this point in time you cannot use ARM templates to configure Azure AD Domain Services, so you are limited to the Azure Portal or PowerShell. I am not going to bore you with the details of the deployment steps, as the process is quite simple and you can easily follow the steps supplied in the Microsoft documentation: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-enable-using-powershell
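
If you do go the PowerShell route, a minimal sketch looks like the following. Treat it as illustrative only: the resource group, domain name, location, API version and subnet ID are all placeholders, and the linked documentation covers the full procedure (service principal, the AAD DC Administrators group and virtual network setup).

#Illustrative only - register the Microsoft.AAD resource provider, then create the managed domain
Login-AzureRmAccount
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.AAD
New-AzureRmResourceGroup -Name "aadds-rg" -Location "westus"
New-AzureRmResource -ResourceGroupName "aadds-rg" `
    -ResourceName "contoso.com" `
    -ResourceType "Microsoft.AAD/DomainServices" `
    -Location "westus" `
    -ApiVersion "2017-06-01" `
    -Properties @{ domainName = "contoso.com"; subnetId = "<subnet resource ID>" } `
    -Force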

What I would like to do is point out the following learnings that I discovered during my deployment.

  1. In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
  2. If there are any cloud-only user accounts, all users who need to use Azure AD Domain Services must change their passwords after Azure AD Domain Services is provisioned. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
  3. Once a cloud-only user account has changed their password, you will need to wait a minimum of 20 minutes before you will be able to use Azure AD Domain Services (this caught me out as I was impatient).
  4. Speaking of patience, the provisioning process for Azure AD Domain Services takes about an hour.
  5. Have a dedicated subnet for Azure AD Domain Services to avoid any connectivity issues that may occur with NSGs/firewalls.
  6. You can only have one managed domain connected to your Azure Active Directory.

That’s it, hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.

Seamless Multi-identity Browsing for Cloud Consultants

If you’re a technical consultant working with cloud services like Office 365 or Azure on behalf of various clients, you have to deal with many different logins and passwords for the same URLs. This is painful, as your default browser instance doesn’t handle multiple accounts, and you generally have to resort to InPrivate (IE) or Incognito (Chrome) modes, which means a lot of copying and pasting of usernames and passwords to do your job. If this is how you operate today: stop. There is an easier way.

Two tools for seamless logins

OK, the first one is technically a feature. The most important part of removing the login bottleneck is Chrome Profiles. This essential feature of Chrome lets you maintain completely separate profiles for Chrome, including saved passwords, browser cache, bookmarks, plugins, etc. Fantastic.

Set one up for each customer that you have a dedicated account for. After you log in the first time, the credentials will be cached and you’ll be able to pass through seamlessly.

This is obviously a great improvement, but only half of the puzzle. It’s when Profiles are combined with another tool that the magic happens…

SlickRun your Chrome sessions

If you haven’t heard of the venerable SlickRun (which must be pushing 20 years if it’s a day), download it right now. It gives you the godlike power of being able to launch any application or browse to any URL almost instantaneously. Just hit ALT-Q and input the “magic word” (which autocompletes nicely) that corresponds to the command you want to execute and Bob’s your Mother’s Brother! I tend to hide the SlickRun prompt by default, so it only shows up when I use the global ALT-Q hotkey.

First we have to set up our magic word. If you simply put a URL into the ‘Filename or URL’ box, SlickRun will open it using your default browser. We don’t want that. Instead, put ‘chrome.exe’ in the box and use the ‘--profile-directory’ command-line switch to target the profile you want, followed by the URL to browse to.

N.B. You don’t seem to be able to reference the profiles by name. Instead you have to put “Profile n” (where n is the number of the profile in the order you created it).
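
For example, a magic word that opens the Azure portal under your second Chrome profile would use a command line equivalent to the following (the profile number and URL are illustrative; substitute your own):

chrome.exe --profile-directory="Profile 2" https://portal.azure.com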

SlickRun-MagicWord

That’s all there is to it. Once you’ve set up your magic words for the key web apps you need to access for each client (I go with a naming convention of ‘clientappname’ and extend that further if I have multiple test accounts I need to log in as, etc.), you can get to any of them in seconds, usually as seamlessly as single sign-on would provide.

This is hands-down my favourite productivity trick, and yet I’ve never seen anyone else do it, nor seen a better solution to the multiple-logins problem. Hence this post! Hope you find it as awesome a shortcut as I do…

Till next time!

Azure Application Security Groups

Azure Application Security Groups (ASGs) are a new feature, currently in Preview, that allows for configuring network security using an application-centric approach within Network Security Groups (NSGs). This approach allows Virtual Machines to be grouped logically, irrespective of their IP address or subnet assignment within a VNet.

They work by assigning the network interfaces of virtual machines as members of an ASG. The ASG is then used within an NSG as either the source or destination of a rule, which provides additional options and flexibility for controlling the network flows of resources within a subnet.

The following requirements apply to the creation and use of ASGs:

  • All network interfaces used in an ASG must be within the same VNet
  • If ASGs are used in the source and destination, they must be within the same VNet

The following scenario demonstrates a use case where ASGs may be useful. In the below diagram, there are 2 sets of VMs within a single subnet. The blue set of VMs require outbound connectivity on TCP port 443, while the green set of VMs require outbound connectivity on TCP port 1433.

As each VM is within the same subnet, to achieve this with traditional NSG rules would require that each IP address be added to a relevant rule that allows the required connectivity. For example:
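
To illustrate, an outbound rule for the blue VMs might look like the sketch below, with every VM's IP address listed explicitly (the addresses are illustrative, and this assumes the augmented rules that accept multiple address prefixes; older API versions would need one rule per prefix):

#One rule carrying every blue VM's IP - this list has to be maintained by hand
$blueRule = New-AzureRmNetworkSecurityRuleConfig -Name "AllowHttpsOutbound" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 100 `
    -SourceAddressPrefix "10.0.0.4","10.0.0.5","10.0.0.6" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 443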


NSG1

As virtual machines are added, removed or updated, the management overhead required to maintain the NSG may become quite considerable. This is where ASGs come into play to simplify NSG rule creation and the continued maintenance of the rule. Instead of defining IP prefixes, you create an ASG and use it within the NSG rule. The Azure platform takes care of the rest by determining the IPs that are covered within the ASG.

As network interfaces of VMs are added to the ASG, the effective network security rules are applied without the need to update the NSG rule itself.


NSG2

The following steps will demonstrate this process using 2 virtual machines.

Enable Preview Feature

ASGs are currently in preview and the feature must be enabled. At present, ASGs are only available in the West Central US region.

Check the status of the registration, and wait for the RegistrationState to change to Registered.
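
A sketch of the registration and the status check (the feature name is as I recall it from the preview announcement; verify against the current documentation):

#Register the preview feature, re-register the provider, then poll for RegistrationState = Registered
Register-AzureRmProviderFeature -FeatureName AllowApplicationSecurityGroups -ProviderNamespace Microsoft.Network
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network
Get-AzureRmProviderFeature -FeatureName AllowApplicationSecurityGroups -ProviderNamespace Microsoft.Network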


Create Application Security Groups

We will create 2 application security groups, as sketched below:

  • WebAsg
  • SqlAsg
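
A sketch using the preview cmdlets (the resource group and location are placeholders):

$webAsg = New-AzureRmApplicationSecurityGroup -ResourceGroupName "asg-rg" -Name "WebAsg" -Location "westcentralus"
$sqlAsg = New-AzureRmApplicationSecurityGroup -ResourceGroupName "asg-rg" -Name "SqlAsg" -Location "westcentralus"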

Create security rules

In this example, we create rules that use the application security groups created in the previous step as the source.
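
A sketch, assuming the preview -SourceApplicationSecurityGroup parameter on New-AzureRmNetworkSecurityRuleConfig (priorities and names are placeholders):

#Blue (web) VMs: outbound TCP 443; green (SQL) VMs: outbound TCP 1433
$webRule = New-AzureRmNetworkSecurityRuleConfig -Name "AllowHttpsOutbound" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 100 `
    -SourceApplicationSecurityGroup $webAsg -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 443
$sqlRule = New-AzureRmNetworkSecurityRuleConfig -Name "AllowSqlOutbound" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 110 `
    -SourceApplicationSecurityGroup $sqlAsg -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 1433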

Create Network Security Group

Now that the ASGs are created and the relevant rules scoped to use the ASG as the source, we can create an NSG that uses these rules.
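
For example (names and location are placeholders):

$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName "asg-rg" -Name "AsgDemoNsg" `
    -Location "westcentralus" -SecurityRules $webRule,$sqlRule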

You can verify the rule from PowerShell, using Get-AzureRmNetworkSecurityGroup, and view the SecurityRules section. In there we can see that the reference to the ASG exists in SourceApplicationSecurityGroups:

Assign the NSG to a subnet:
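
A sketch (the VNet, subnet name and address prefix are placeholders):

$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "asg-rg" -Name "AsgDemoVnet"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Workloads" `
    -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet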

Add network interfaces to ASG

The final step is to add the network interfaces of the VMs to the Application Security Group. The following example updates existing network interfaces to belong to the application security group. As network interfaces are added and removed, the traffic flows will be controlled by the security rules applied to the NSG through the use of the ASGs, without any further requirement to update the NSG.
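
A sketch for one VM's NIC, assuming the preview module exposes an ApplicationSecurityGroups property on the NIC's IP configuration (resource and NIC names are placeholders):

$nic = Get-AzureRmNetworkInterface -ResourceGroupName "asg-rg" -Name "WebVm1-nic"
#Make the NIC's primary IP configuration a member of the web ASG
$nic.IpConfigurations[0].ApplicationSecurityGroups = @($webAsg)
$nic | Set-AzureRmNetworkInterface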

You can verify this by viewing the network interface with Get-AzureRmNetworkInterface and checking the IpConfigurations properties. In there we can see the reference to the ASG memberships in ApplicationSecurityGroups.

Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement to integrate some Office 365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export methodology; however, through trial & error (more error than trial, if I’m being honest), and after realising that this method still required some manual interaction, I decided to embark on finding a fully automated solution.

The final solution comprises the below components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure Automation account)

I’m not going to run through the creation of the automation account or the required credentials, as these had already been created; however, there is a great guide to configuring the solution I have used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

What the PowerShell script we are using will achieve is the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear existing PS sessions
Get-PSSession | Remove-PSSession | Out-Null

#Create a split function for the mailbox array
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.Count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.Count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.Count) { $end = $inArray.Count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return ,$outArray
}

function Connect-ExchangeOnline {
    param(
        $Creds
    )
    #Connect to Exchange Online and import only the cmdlets we need
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}

#Create variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from HTTP Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'

#Connect to Azure using the Run As account
Add-AzureRmAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials

#Collect mailbox data from the Exchange Online environment
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Get the current date & time and convert it to the Australia/Brisbane time zone
$time = Get-Date
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')

#Add a Time column to the output
$mailboxes = $mailboxes | Select-Object @{Expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Split the mailbox array into parts for faster processing
$recipients = Split-Array -inArray $mailboxes -parts 5

#Create JSON objects and HTTP POST them to the Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Allow self-signed certificates in the test environment
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Build the JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #POST the event to the Splunk HTTP Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Headers $header
    }
}
Get-PSSession | Remove-PSSession | Out-Null

The final output that can be seen in Splunk looks like the following:

11/13/17
12:28:22.000 PM
{
    AddressBookPolicy:
    DisplayName: Shane Fisher
    ForwardingSmtpAddress:
    GrantSendOnBehalfTo:
    IsMailboxEnabled: True
    PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
    ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
    Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

Azure Logic App – Evaluating IF condition with the help of JSON expression by passing null

Introduction

Yes, you read the title right: this blog is about evaluating an IF condition. You might be wondering what is special about IF; even a novice developer with no experience knows about it.

Allow me to explain a specific scenario that helps us understand its behaviour in Logic Apps; it might blow your mind.

Some of us come from years of development experience, and at times we like to skill up in various other technologies, which leaves us with a mindset based on our past development experience and the programming habits we gained over the years. When client requirements are approached from these backgrounds, we expect the code to work with a certain flow, and this is where the rules are broken while using the IF condition in Azure Logic Apps.

Understanding JSON expression

The json() expression converts a string into a JSON object, using the syntax shown below:

json('{"Person":{"Name":"Simpson"}}')['Person']['Name']  evaluates to Simpson

But the same expression with a null input, json(null), throws an error (important); avoid it where possible.

Understanding IF condition

IF doesn’t need any special introduction; we all know how it works. It has two code blocks, and the evaluation falls through to one of the blocks based on the condition. The same applies in Logic Apps, and below is the syntax for it.

@if(condition, valueIfTrue, valueIfFalse)

To understand IF better, let’s also look at @equals(). It is a simple expression that returns true or false based on the given input and the value it is compared against.

Example 1

Below is just an example; please ignore the simple equality condition.
@if(equals(1,1),"true1","false1")
Result: true1

Example 2

@if(equals(1,2),"true1","false1")
Result: false1

Now, let us take our person JSON and understand it.
@if(equals(1,1),"Merge",json({"Person":{"Name": "Homer"}}) ['Name'])
Result: Merge

and similarly when the comparison is not equal

@if(equals(1,2),"Merge",json({"Person":{"Name": "Homer"}}) ['Name'])
Result: Homer

Now, recall that IF falls through to one of the code blocks and returns. But in the case of Azure Logic Apps, it evaluates both code blocks and then returns the result of the block the condition falls into.

Here is the proof

For example, if I do something like the below, it should simply return “Merge”, but it actually throws an error. This is the current behaviour of Logic Apps.

@if(equals(1,1),"Merge",json(null) ['Name'])
Result: error

And similarly when not equal

@if(equals(1,2),"Merge",json(null) ['Name'])
Result: error

The above examples imply that a Logic App evaluates both code blocks and returns one.

The actual error thrown by a real Logic App is as below:

InvalidTemplate. Unable to process template language expressions in action 'Compose' inputs at line '1' and column '1525': 'The template language function 'json' expects its parameter to be a string or an XML. The provided value is of type 'Null'. Please see https://aka.ms/logicexpressions#json for usage details.'.
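
One way to protect an expression like this, given that both branches are evaluated, is to make sure json() never receives a null, for example by supplying a fallback string via the coalesce() function. A sketch (the fallback JSON here is illustrative):

@if(equals(1,2),'Merge',json(coalesce(null,'{"Person":{"Name":"Homer"}}'))['Person']['Name'])
Result: Homer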

Adding Bot to Microsoft Teams

If you have been following my previous blog posts about Bots and integrating LUIS with them, you are almost done building bots and have already had some fun with them. Now it’s time to bring them to life and let internal or external users interact with the Bot via a front-end channel they can access. If you haven’t read my previous posts on the subject yet, please give them a read at Creating a Bot and Creating a LUIS app before reading further.

In this blog post, we will be integrating our previously created intelligent Bot into a Microsoft Teams channel. Following the step-by-step process below, you can add your bot to an MS Teams channel.

Bringing Bot to Life

  1. As a first step, you need to create a package as outlined here and build a manifest as per the schema listed here. This will include your Bot logos and a manifest file as shown below.

  2. Once the manifest file is created, you need to zip it along with the logos, as shown above, to make a package (*.zip).
  3. Open the Microsoft Teams interface, select the particular team you want to add the Bot to, and go to the Manage team section as highlighted below.

  4. Click on the Bots tab, then select Sideload a bot as highlighted, and upload your previously created zip file.

  5. Once successful, it will show the bot that you have just added to your selected team as shown below.

  6. If everything went well, your Bot is now ready and available in the team’s conversation window to interact with. When addressing the Bot, you need to start with @BotName to direct messages to it, as shown below.

  7. Based on the configuration you have done as part of the manifest file, your command list will be available against your Bot name.

  8. Now you can ask your Bot the questions you have trained your LUIS app with, and it will respond as programmed.

  9. You just need to ensure your Bot is programmed to respond to the possible questions your end users may ask it.

  10. You can program a bot to acknowledge the user first and then respond in detail to the user’s question. If the response contains multiple records, you can present them using cards as shown below.

  11. Or if a response requires some additional actions, you can have a link or a button to launch a URL directly from your team conversation.

  12. Besides adding a Bot to a team, you can add tabs to a team as well, which can show any SPA (single page application) or even a dashboard built to your needs. Below is just an example of what can be achieved using tabs inside MS Teams.

As MS Teams evolves as group chat software, it can be leveraged to build useful integrations as a front face to many of an organisation’s needs, capitalising on Bots as an example.

Using a Bot Framework to build LUIS enabled Bots

History

In this post, we are going to build a bot using the Microsoft Bot Framework and add intelligence to it to extract meaning from conversations with users, utilising the Microsoft cognitive service named LUIS. The last post discussed LUIS in detail; give it a read before you continue. This post assumes you have a basic understanding of the Language Understanding Intelligent Service (LUIS) and the Bot Framework; further details can be read at LUIS and Bot Framework.

Pre-requisites

You need to download a few items to start your quick bot development; please get all of them before you jump to the next section.

  • Bot template is available at URL (this will help you in scaffolding your solution)
  • Bot SDK is available at NuGet (this is mandatory to build a Bot)
  • Bot emulator is available at GitHub (this helps you in testing your bot during development)

Building a Bot

  1. Create an empty solution in Visual Studio and add the Bot template project as an existing project.
  2. Your solution directory should look like the one below:

  3. Replace the parameters $safeprojectname$ and $guid1$ with a meaningful name for your project and a unique GUID.
  4. The next step is to restore and update the NuGet packages and ensure all dependencies are resolved.

  5. Run the application from Visual Studio and you should see the bot application up and running.

  6. Now open the Bot emulator and connect to your Bot as follows:

  7. Once connected, you can send a test text message to see if the Bot is responding.

  8. At this point your bot is up and running, and in this step you will add a LUIS dialog to it. Add a new class named RootLuisDialog under the Dialogs folder and add methods, as shown below, for each intent that you have defined in your LUIS app. Ensure you have your LUIS app ID and key to decorate your class as shown below:

  9. Let’s implement a basic response from LUIS for the intent ‘boot’, as shown in the code below.

  10. Open up the emulator and try any utterance we have trained our LUIS application with. A sample bot response should be received, as implemented in the code above. LUIS will identify the intent ‘boot’ from the user’s message as shown below.

  11. Now we will implement a slightly more advanced response from LUIS for our intent ‘status’, as shown in the code below.

  12. Now you can send a more complex message to your bot; it will send the message to LUIS to extract the entity and intent from the utterance, and respond to the user according to your implementation.

And the list of intent implementations goes on and on; you can customise behaviour to your needs, as your LUIS framework is now ready to rock and roll within your bot, and users can take advantage of it to issue specific commands or inquire about entities using your Bot. Happy Botting 🙂

How LUIS can help BOTs in understanding natural language

Since bots are evolving, you need a mechanism to better understand what a user wants from their language, and to take action or respond to user queries appropriately. In these days of increasing automation, bots can certainly help, provided they are backed by tools that understand user language both naturally and contextually.

Azure Cognitive Services has an API that can help identify what a user wants and extract concepts and entities from a sentence (user input): an intelligent service named Language Understanding Intelligent Service (LUIS). It can process natural language using custom-trained language models, and it incorporates the concept of Active Learning based on how it is trained.

In this blog post, we will be building a LUIS app that can be utilised in a Bot or any other client application to respond to the user in a more meaningful way.

Create a LUIS app

  1. Go to https://www.luis.ai/ and sign up.
  2. You need to create a LUIS app by clicking ‘New App’ – this is the app you will be using in the Bot Framework.
  3. Fill out the form and give your app a unique name.
  4. Your app will be created, and you can see its details as below (the page will be redirected to Overview).
  5. You need to create entities to identify concepts, which are a very important part of utterances (input from a user). Let’s create a few simple entities using the form below.
  6. You can also reuse prebuilt entities like email, URL, date, etc.
  7. The next step is to build intents, which represent a task or an action from an utterance (input from a user). By default, you will have None, which is for utterances irrelevant to your LUIS app.
  8. Once you have defined the series of intents, you need to add possible utterances against each intent, which form the basis of Active Learning. Make sure to include varied terminology and different phrases to help LUIS identify them. You can build a Phrase list to include words that must be treated similarly, like company names or phone models.
  9. As you write utterances, you need to identify or tag entities, as we did by selecting $service-request in the utterance. Remember: you are identifying possible phrases to help LUIS extract intents and entities from utterances.
  10. The next step is to train your LUIS app to help it identify entities and intents from utterances. Ensure you click Train Application when you are done with enough training (you can also train on a per-entity or per-intent basis).
  11. You can repeat step 10 as many times as you like to ensure the LUIS app is trained well enough on your language model.
  12. Publish the app once you have identified all possible entities, intents and utterances, and have trained LUIS well enough to extract them from user input.
  13. Keep a note of the Programmatic API key from the MyKey section and the Application ID from the Settings menu of your LUIS app; you will need these two keys when integrating LUIS with your client application.
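
Once published, you can sanity-check the app from any HTTP client before wiring it into a bot. Below is a hedged PowerShell sketch of calling the LUIS endpoint; the region, keys and utterance are placeholders, and the endpoint shape is the v2.0 API as I understand it, so verify it against your app's Publish page.

#Placeholders - take these from your LUIS app's Settings and MyKey pages
$luisAppId = "<Application ID from the Settings menu>"
$luisKey = "<Programmatic API key from the MyKey section>"
$utterance = "I want to raise a service request"
#Query the published LUIS app and inspect the top scoring intent and entities
$uri = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/$luisAppId" + "?subscription-key=$luisKey&q=" + [uri]::EscapeDataString($utterance)
$result = Invoke-RestMethod -Method Get -Uri $uri
$result.topScoringIntent
$result.entities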

Now you are ready to go ahead and use your LUIS app in your Bot or any other client application to process natural language in a meaningful manner – Cheers!

Quickly creating and using an Azure Key Vault with PowerShell

Introduction

A couple of weeks back I was messing around with Azure Key Vault, looking to centralise a bunch of credentials for my ever-growing list of Azure Functions that are automating numerous tasks. What I found was that getting an Azure Key Vault set up, and getting credentials in and out, was a little more cumbersome than I thought it should be. At that same point, this tweet appeared in my Twitter timeline via a retweet. I’m not too sure why, but maybe because I’ve been migrating to VSCode myself, I checked out Axel’s project.

Tweet

Axel’s PowerShell module simplifies creating and integrating with the Azure Key Vault. After messing with it and suggesting a couple of enhancements, which Axel graciously entertained, I’m now creating vaults and adding and removing credentials in the simplified way I’d wanted.

This quickstart guide to using this module will get you started too.

Create an Azure Key Vault

This is one of the beauties of Axel’s module. If the Resource Group and/or Storage Account you want associated with your Key Vault doesn’t exist, then it creates them.

Update the following script for the location (line 8) and the name (line 10) that will be given to your Storage Account, Resource Group and Vault. Modify if you want to use different names for each.

Done, Key Vault created.

Create Azure KeyVault

Key Vault Created

Connect to the Azure Key Vault

This script assumes you’re now in a new session and want to connect to the Key Vault. Again, this is a simplified version whereby the SG, RG and KV names are all the same. Update it for your location and Key Vault name.

Connected.

Connect to Azure Key Vault

Add a Certificate to the Azure Key Vault

To add a certificate to our new Key Vault, use the command below. It will prompt you for your certificate password and add the cert to the Key Vault.

Add Cert to Vault

Certificate added to Key Vault.

Cert Added to Vault

Retrieve a Certificate from the Azure Key Vault

Retrieving a certificate from the Key Vault is just as simple.

$VaultCert = Get-AzureCertificate -Name "AADAppCert" -ResourceGroupName $name -StorageAccountName $name -VaultName $name

Retrieve a Cert

Add Credentials to the Azure Key Vault

Adding username/password or clientID/clientSecret to the Key Vault is just as easy.

# Store credentials into the Azure Key Vault
Set-AzureCredential -UserName "serviceAccount" -Password ($pwd = Read-Host -AsSecureString) -VaultName $name -StorageAccountName $name -Verbose

Credentials added to vault

Add Creds to Key Vault

Creds Added to Vault

Retrieve Credentials from the Azure Key Vault

Retrieving credentials is just as easy.

# Get credentials from the Azure Key Vault
$AzVaultCreds = Get-AzureCredential -UserName "serviceAccount" -VaultName $name -StorageAccountName $name -Verbose

Credentials retrieved.

Retrieve Account Creds

Remove Credentials from the Azure Key Vault

Removing credentials is also a simple cmdlet.

# Remove credentials from the Azure Key Vault
Remove-AzureCredential -UserName "serviceAccount" -VaultName $name -StorageAccountName $name -Verbose

Credentials removed.

Remove Credential

Summary

Hopefully this gets you started quickly with the Azure Key Vault. Credit to Axel for creating the module. It’s now part of my toolkit that I’m using a lot.

Building websites with Ionic Framework, Angular and Microsoft Azure App Services

The Ionic Framework (https://ionicframework.com/) is an Angular 4 based framework designed to build beautiful applications quickly and easily, targeting native platforms as well as Progressive Web Apps (PWAs). In this blog post, I’ll walk through the steps to start your own Ionic PWA hosted on Azure App Services, which will then serve your application.

What is Microsoft Azure App Services?

Microsoft Azure is a cloud platform that allows server workloads you’d previously host locally, in a data centre or on a server somewhere, to be hosted in an environment where massive scale and availability become available at an hourly rate. This is great for this application, because it only needs to serve static HTML and assets, which carries very little CPU overhead. We’ll host this as a free site in Azure and it will be fine. Azure App Services are Platform-as-a-Service web applications: your code, hosted on Azure’s infrastructure. You don’t worry about the operating system or the web server, only the code you want to write.

What is Angular?

Angular is a browser-based framework that allows complex single-page applications to be built and run using a common set of libraries and structure. Angular 4 is the current version of the framework, and it uses TypeScript as the language you write programming components in, along with SCSS for CSS files, and HTML.

What is Ionic Framework?

The Ionic Framework takes the Angular framework and powers it with Cordova to allow web application developers to develop native apps. This means you have one common language and framework set that everyone can use to develop apps, native or web-based. It also recently added support for building applications as PWAs, or Progressive Web Apps, which are web-based applications that behave very similarly to native apps. It’s this capability that we will take advantage of in this tutorial.

Prerequisites

You’ll need a Microsoft Azure account to create the Web App. You should have some knowledge of git, TypeScript and web application development, but we’ll step through most of this. You also need to install node and npm (https://nodejs.org/en/download/), which will enable you to develop the application. You can check that node and npm are working correctly by opening a terminal or command prompt and typing “npm -v”, which will show you the current version of npm.

Steps to take

  1. First you need to install the Ionic framework and Cordova on your machine. You do this by opening a command prompt or terminal window and running: “npm install -g ionic cordova”
  2. Once you’ve done this, you’ll be able to run the command “ionic” and you should see the Ionic CLI help output.
  3. Once this is done, you will need to create a directory and create your Ionic app. From your newly created directory, run “ionic start myApp blank”. You’ll be asked if you want to integrate your new app with Cordova, which means you would like to make a native application. In this instance we don’t (we’re creating a PWA), so type “n” and enter. This will download and install the required code for your application. Wait patiently – this will take a few minutes.
  4. Once you’ve seen the success message, your app is ready to serve locally. Change directory to “./myApp” and then run “ionic serve” and you should see your browser open with your app running. If you get an error saying “Sorry! ionic serve can only be run in an ionic project directory”, you aren’t in the right folder.
  5. Now that your application is built, it’s ready for you to develop. All you do is go and edit the code in your src folder and build what you need to. There are great generators and assistants you can use to structure your app.
  6. At this point we need to ready our code for production – this means we need to minify, AoT-compile and tree-shake any wasted code from the TypeScript, and remove our debug maps to reduce the size of the delivered application. To do this we run “ionic build --prod”, which produces our production-ready output.
  7. It’s worth noting the “--prod” in the above build. This does the magic of reducing your code size. If you don’t do this, the app will be megabytes in size (as you will take all of Angular and Ionic and their dependencies, which you won’t need). Try checking the size of the “www” folder using both steps. Mine went down from 11.1Mb to 2.96Mb.
  8. Our code is ready to commit to git. In fact, Ionic has already set that up for you; there are only a few other items to check in – so run “git add .” and “git commit -m “initial build”” and you’ll be all good.
  9. The next step is to create your web app in Azure. Go to portal.azure.com and click Add -> Web App, then enter the details and choose your plan (note you will need to force the “free” plan in the settings).
  10. Once you’ve deployed, your app will be able to be viewed at https://{yourwebappnamehere}.azurewebsites.net/. In this case https://bradleytest.azurewebsites.net/.
  11. Now it’s just time to get our running Ionic code from our local machine (or build server if you use continuous integration/delivery) to our application. I’ve got a remote GitHub repository I’m going to push this code to (https://github.com/bsmithb2/ionicdemo), so I’ll run “git remote add origin https://github.com/bsmithb2/ionicdemo.git” and then “git push origin master”.
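
For reference, the full local command sequence from the steps above is:

npm install -g ionic cordova
ionic start myApp blank
cd myApp
ionic serve
ionic build --prod
git add .
git commit -m "initial build"
git remote add origin https://github.com/bsmithb2/ionicdemo.git
git push origin master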

Connecting git to Microsoft Azure

In this part, we’ll use git to connect to Microsoft Azure and continuously build and deploy using Visual Studio Team Services.

  1. Go to your web application you created in the Microsoft Azure portal, and then choose the “Continuous Delivery (preview)” menu option.
  2. Choose your source control repository (GitHub in my case) in the first stage.
  3. Now select Build, then configure Continuous Delivery. This will set up your build in Visual Studio Team Services. You’ll need to select nodejs as your build type. It will take a few minutes to set up the build and perform the first build and deploy. At this stage your app won’t work, but don’t worry – we’ll fix that next.
  4. Once your build is set up, click on “Build Definition”. We need to make a change in the build definition, as the build isn’t yet running npm, and the folder you wish to package and deploy is actually the “www” subdirectory.
  5. In the build process, add a new task and choose npm. Change the Command to “Custom” and then add “npm build --prod” to the arguments. This matches the build you did with “ionic build --prod” in step 6.
  6. Once done, click the “Archive files” task, and add “/www” to the “Root folder (or files) to archive” setting. This tells VSTS to only package our output directory, which is all we need.
  7. Save the build. You can queue a build now if you like, or wait and queue one once we’ve tweaked the release.
  8. Go to Releases, then choose your release (if you are unsure which, there is a “Release Definition” link in the Azure Portal near the Build Definition one).
  9. Turn off the web.config creation in File Transforms & Variable Substitution Options.
  10. Turn on the “Remove Additional Files” setting in Additional Deployment Options.
  11. Save the Release Definition.
  12. At this point, you can trigger a build. It should take a few minutes. You can do this in VSTS, or alternatively change a file in your local git repository and push to GitHub.
  13. Once the build has completed, open your web application again, and you’ll see your Ionic application!

finished

Conclusion

Ionic is a great solution for building cross-platform, responsive, mobile applications. Serving these applications is incredibly easy using Visual Studio Team Services and Microsoft Azure. We get great benefits in separation of concerns, while our scalability, security and cost management processes stay simple, as we’ve only deployed our consumer-side code to this service, and its secured and managed infrastructure saves us time and risk. In this tutorial, we’ve built our first Ionic application, pushed it to GitHub and then set up our continuous delivery system in a few easy steps.