Deploying Azure Functions with ARM Templates

There are many different ways in which an Azure Function can be deployed. In a future blog post I plan to go through the whole list. There is one deployment method that isn’t commonly known though, and it’s of particular interest to those of us who use ARM templates to deploy our Azure infrastructure. Before I describe it, I’ll quickly recap ARM templates.

ARM Templates

Azure Resource Manager (ARM) templates are JSON files that describe the state of a resource group. They typically declare the full set of resources that need to be provisioned or updated. ARM templates are idempotent, so a common pattern is to run the template deployment regularly—often as part of a continuous deployment process—which will ensure that the resource group stays in sync with the description within the template.

In general, the role of ARM templates is typically to deploy the infrastructure required for an application, while the deployment of the actual application logic happens separately. However, Azure Functions’ ARM integration has a feature whereby an ARM template can be used to deploy the files required to make the function run.

How to Deploy Functions in an ARM Template

In order to deploy a function through an ARM template, we need to declare a resource of type Microsoft.Web/sites/functions, like this:
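(What follows is a minimal sketch rather than a full template; the function name, bindings and the functionFileContents parameter are illustrative.)

{
  "apiVersion": "2015-08-01",
  "type": "Microsoft.Web/sites/functions",
  "name": "[concat(parameters('functionAppName'), '/MyHttpFunction')]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('functionAppName'))]"
  ],
  "properties": {
    "config": {
      "bindings": [
        {
          "name": "req",
          "type": "httpTrigger",
          "direction": "in",
          "authLevel": "function"
        },
        {
          "name": "res",
          "type": "http",
          "direction": "out"
        }
      ],
      "disabled": false
    },
    "files": {
      "run.csx": "[parameters('functionFileContents')]"
    }
  }
}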

There are two important parts to this.

First, the config property is essentially the contents of the function.json file. It includes the list of bindings for the function, and in the example above it also includes the disabled property.

Second, the files property is an object that contains key-value pairs representing each file to deploy. The key represents the filename, and the value represents the full contents of the file. This only really works for text files, so this deployment method is probably not the right choice for precompiled functions and other binary files. Also, each file needs to be inlined within the template, which may quickly get unwieldy for larger function files; even for smaller files, the contents need to be escaped as a JSON string. This can be done using an online tool, or you could use a script to do the escaping and pass the file contents as a parameter into the template deployment.

Importantly, in my testing I found that using this method to deploy over an existing function will remove any files that are not declared in the files list, so be careful when testing this approach if you’ve modified the function or added any files through the portal or elsewhere.

Examples

There are many different ways you can insert your function file into the template, but one of the ways I tend to use is a PowerShell script. Inside the script, we can read the contents of the file into a string, and create a HashTable for the ARM template deployment parameters:
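(A sketch; the file path is illustrative, and the parameter names assume the template from the example above.)

# Read the function file into a single string, preserving line breaks
$fileContents = Get-Content -Path '.\run.csx' -Raw

# Build the parameter set for the ARM template deployment
$templateParameters = @{
    functionAppName      = 'my-function-app'
    functionFileContents = $fileContents
}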

Then we can use the New-AzureRmResourceGroupDeployment cmdlet to execute the deployment, passing in $templateParameters to the -TemplateParameterObject argument.
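For example (the deployment, resource group and template file names are placeholders):

New-AzureRmResourceGroupDeployment -Name 'function-deployment' `
    -ResourceGroupName 'my-resource-group' `
    -TemplateFile '.\template.json' `
    -TemplateParameterObject $templateParameters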

You can see the full example here.

Of course, if you have a function that doesn’t change often then you could instead manually convert the file into a JSON-encoded string using a tool like this one, and paste the function right into the ARM template. To see a full example of how this can be used, check out this example ARM template from a previous blog article I wrote.

When to Use It

Deploying a function through an ARM template can make sense when you have a very simple function comprising one, or just a few, files to be deployed. In particular, if you already deploy the function app itself through the ARM template, then this might be a natural extension of what you’re doing.

This type of deployment can also make sense if you want to quickly deploy and test a function and don’t need some of the more complex deployment-related features, like control over the handling of locked files. It’s also a useful technique to have available for situations where a full deployment script might be too heavyweight.

However, for precompiled functions, functions that have binary files, and for complex deployments, it’s probably better to use another deployment mechanism. Nevertheless, I think it’s useful to know that this is a tool in your Azure Functions toolbox.

Hub-Spoke communication using vNet Peering and User Defined Routes

Introduction

Recently, I was working on a solution for a customer who wanted to implement a Hub-Spoke virtual network topology in which the HUB communicated with its SPOKE networks via vNet Peering. They also required the SPOKE networks to be able to communicate with each other, but peering between them was not allowed.

[Diagram: Hub-Spoke topology]

As we know, vNet peering is non-transitive – which means that even though SPOKE 1 is peered with the HUB network and the HUB is peered with SPOKE 2, this does not enable automatic communication between SPOKE 1 and SPOKE 2 unless they are explicitly peered, which in our case we were not allowed to do.

So, let’s explore a couple of options on how we can enable communication between the Spoke networks without peering.

Solutions

There are several ways to implement Spoke-to-Spoke communication, but in this blog I’d like to provide details of the two feasible options that worked for us.

Option 1 is to place a Network Virtual Appliance (NVA), essentially a virtual machine with a configured firewall/router, within the HUB and configure it to forward traffic to and from the SPOKE networks.

If you search the Azure Marketplace with the keywords “Network Virtual Appliance“, you will be presented with several licensed products that you could install and configure in the HUB network to establish this communication. Configuration of these virtual appliances varies, and installation instructions can easily be found on their product websites.

Option 2 is to attach a Virtual Network Gateway to the HUB network and make use of User Defined Routes to enable communication between the SPOKES.

The above information was sourced from this very helpful blog post.

The rest of this blog is a detailed step-by-step guide, including the testing performed, for implementing the approach mentioned in Option 2.

Implementation

1.) Create 3 Virtual Networks with non-overlapping IP address spaces

  • Log on to the Azure Portal and create the Hub Virtual Network as follows

[Screenshot: Hub Virtual Network settings]

  • Create the 2 additional virtual networks as the SPOKES with the following settings:

[Screenshot: Spoke1 Virtual Network settings (10.5.0.0/16)]

[Screenshot: Spoke2 Virtual Network settings (10.6.0.0/16)]

2.) Now that we have the 3 Virtual Networks provisioned, let’s start peering them as follows:

a.) HubNetwork <> Spoke1Network

b.) HubNetwork <> Spoke2Network

  • Navigate to the Hub Virtual Network and create a new peering with the following settings:

[Screenshot: peering settings]

Select the “Allow gateway transit” option.

  • Repeat the above step to create a peering with Spoke2Network as well (a PowerShell equivalent of this step is sketched below).
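If you prefer scripting, a rough PowerShell equivalent of the hub-side peering using the AzureRM module is below; the resource group name is an assumption for this example. The spoke-to-hub peerings created in the next step would use the -UseRemoteGateways switch instead (once the gateway exists).

# Hub-to-spoke peering with gateway transit enabled
$hub    = Get-AzureRmVirtualNetwork -Name 'HubNetwork'    -ResourceGroupName 'hub-spoke-rg'
$spoke1 = Get-AzureRmVirtualNetwork -Name 'Spoke1Network' -ResourceGroupName 'hub-spoke-rg'

Add-AzureRmVirtualNetworkPeering -Name 'HubToSpoke1' `
    -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke1.Id `
    -AllowGatewayTransit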

3.) To establish a successful connection, we will have to create a peering to the HUB Virtual Network from each of the SPOKE Networks too

  • Navigate to Spoke1Network and create a new Peering

[Screenshot: Spoke1ToHub peering settings]

Notice that when we select the “Use remote gateways” option, we get an error, as we haven’t yet attached a Virtual Network Gateway to the HUB network. Once a Gateway has been attached, we will come back to re-configure this.

For now, do not select this option; just click Create.

  • Repeat the above step for Spoke2 Virtual Network

4.) Let’s now provision a Virtual Network Gateway

  • Before provisioning a gateway, a Gateway Subnet is required within the Hub Virtual Network. To create this, click on the “Subnets” option in the blade of the Hub Virtual Network and then click on “Gateway subnet”.

[Screenshot: adding the Gateway subnet]

For the purpose of this demo, we will create a Gateway Subnet with the smallest possible address space, a /29 CIDR, which provides us with 8 addresses; the first and last are reserved for protocol conformance, and x.x.x.1 – x.x.x.3 for Azure services. For production environments, a Gateway Subnet with at least a /27 address space is advised.

Let’s assume for now that when we provision the Virtual Network Gateway, the internal IP address it gets assigned will be the 4th address onwards, which in our case would be 10.4.1.4.

  • Provision the Virtual Network Gateway

Create a new Virtual Network Gateway with the following settings:

[Screenshot: Virtual Network Gateway settings]

Ensure that you select the Hub Virtual Network in the Virtual network field, which is where we want the Gateway to be attached. Click Create.

  • The Gateway provisioning process may take a while to complete, and you will need to wait for the Updating status to disappear. It can take anywhere between 30-45 minutes.

[Screenshot: gateway provisioning status]

5.) Once the Gateway has been provisioned, let’s now go back to the Peering section of each of the SPOKE Networks and configure the “Use remote gateways” option

[Screenshot: peering with “Use remote gateways” enabled]

  • Repeat the above step for Spoke2ToHub peering

6.) We will now create the Route Tables and define user routes needed for the SPOKE to SPOKE communication

  • Create 2 new Route tables in the portal with the following settings:

[Screenshot: Spoke1RouteTable settings]

[Screenshot: Spoke2RouteTable settings]

  • Define the User Routes as follows:

[Screenshot: route settings]

In the Address Prefix field, insert the CIDR subnet address of the Spoke2 Virtual Network, which in our case is 10.6.0.0/16.

Select the Next hop type as Virtual appliance, and the Next hop address as the internal address of the Virtual Network Gateway. In our case, we are going to set this to 10.4.1.4, as mentioned earlier.

  • Repeat this step to create a new Route in the Spoke2RouteTable as well, inserting the subnet CIDR address of the Spoke1 Virtual Network (see the PowerShell sketch below).
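For reference, here is a PowerShell sketch of creating the Spoke1 route table and its route using the AzureRM module; the resource group and location are assumptions for this example.

# Route Spoke1-to-Spoke2 traffic via the hub gateway's internal IP (10.4.1.4)
$route = New-AzureRmRouteConfig -Name 'ToSpoke2' `
    -AddressPrefix '10.6.0.0/16' `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress '10.4.1.4'

New-AzureRmRouteTable -Name 'Spoke1RouteTable' `
    -ResourceGroupName 'hub-spoke-rg' `
    -Location 'australiaeast' `
    -Route $route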

7.) Let’s now associate these Route tables with our Virtual Networks

  • Navigate to the Spoke1Network and in the “Subnets” section of the blade, select the default subnet

[Screenshot: Spoke1 subnet settings]

In the Route table field, select Spoke1RouteTable and click Save.

[Screenshot: subnet route table association]

  • Repeat the above step to associate Spoke2RouteTable with the Spoke2 Virtual Network

We have now completed the required steps to ensure that both SPOKE Virtual Networks are able to communicate with each other via the HUB.

Testing

  • In order to test our configuration, let’s provision a virtual machine in each of the Spoke networks and conduct a simple ping test.

1.) Provision a basic Virtual Machine in each of the Spoke networks

2.) Run the following PowerShell command in each VM to allow ICMP ping through the Windows firewall, as this traffic is blocked by default:

New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4

3.) In my testing the VMs had the following internal IPs:

The VM running in Spoke 1 network: 10.5.0.4

The VM running in Spoke 2 network: 10.6.0.4

[Screenshot: ping test output]

Pinging 10.6.0.4 from 10.5.0.4 returns a successful response!

Deploying Blob Containers with ARM Templates

ARM templates are a great way to programmatically deploy your Azure resources. They act as declarative descriptions of the desired state of an Azure resource group, and while they can be frustrating to work with, overall the ability to use templates to deploy your Azure resources provides a lot of value.

One common frustration with ARM templates is that certain resource types simply can’t be deployed with them. Until recently, one such resource type was a blob container. ARM templates could deploy Azure Storage accounts, but not blob containers, queues, or tables within them.

That has now changed, and it’s possible to deploy a blob container through an ARM template. Here’s an example template that deploys a container called logs within a storage account:
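(The template below is a minimal sketch; the storage account settings are illustrative.)

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2018-02-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "kind": "StorageV2",
      "sku": {
        "name": "Standard_LRS"
      }
    },
    {
      "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
      "apiVersion": "2018-02-01",
      "name": "[concat(parameters('storageAccountName'), '/default/logs')]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
      ]
    }
  ]
}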

Queues and tables still can’t be deployed this way, though – hopefully that’s coming soon.

Azure ExpressRoute Public and Microsoft peering changes, notes from the field

I’ve been trying to piece all this together into a single, concise blog post that covers all bases around the changes that have happened, and are going to happen, to Microsoft ExpressRoute peering. That’s been a bit of a challenge because (I hope I don’t harp on this too much) communication from the product group could be better. With that said, it’s no secret for those that use ExpressRoute that Microsoft is looking to simplify its configuration. Good news, I guess?

The main change that I’m going to delve into here comes by way of merging Microsoft peering and Public peering into a single Microsoft peering. Microsoft announced this at the Ignite 2017 conference:

“To simplify ExpressRoute management and configuration we merged public and Microsoft peering”.

Fast forward from September 2017, and there’s not been much communication around this shift in ExpressRoute config. I’ve been scouring the interwebs for publicly available confirmation, and all I could find is a blog post that highlighted that:

“As of April 1, 2018, you cannot configure Public peering on new ExpressRoute circuits.”

Searching the Twitterverse for the hashtag #PublicPeering turned up confirmation only a few days later, on April 5th.

So, we have confirmation that this change to ExpressRoute Public peering is happening, followed by confirmation that any new ExpressRoute circuits provisioned on or after April 1st, 2018 (no, this wasn’t a joke) cannot have Public peering. Still, given the breadth of Microsoft, communication is in a grey area. Apart from that Japanese TechNet blog post, there are really only suggestions and recommendations nudging customers toward Microsoft peering. Here are two examples:

  1. Microsoft peering is the preferred way to access all services hosted on Azure. (Source)
  2. All Azure PaaS services are also accessible through Microsoft peering. We recommend you to create Microsoft peering and connect to Azure PaaS services over Microsoft peering. (Source)

I know I’m banging on about this for too long, but for me this is a grey area and better communication is required!

Migration

If you’re currently using Public peering and need to move to Microsoft peering, there’s some pretty good guidance from Microsoft on how to migrate – available here.

NOTE: Microsoft peering of ExpressRoute circuits that were configured prior to August 1, 2017 will have all service prefixes advertised through Microsoft peering, even if route filters are not defined. Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 will not have any prefixes advertised until a route filter is attached to the circuit. (Source)

For many customers, including a customer I’ve recently been working with, ExpressRoute has been in place for several years. This change has culminated in some interesting circumstances. For this customer’s migration process, they were also upgrading to a faster network carriage and a faster ExpressRoute circuit. This meant we could stand up the new environment in parallel with the legacy one, and in configuring peering on the new service we simply configured it as Microsoft peering only; no more Public peering.

This is all well and good; but with a legacy ExpressRoute circuit that was configured in ASM/Classic, there’s now also the consideration of Route Filters. In the legacy or Classic ExpressRoute deployment, BGP Communities were not used: routes were advertised as soon as the peering came online, ARP was done and the eBGP session was established between Azure and the customer.

In the ARM ExpressRoute deployment model, Azure Route Filters are a requirement for Microsoft peering (only). Note that this is an Azure-side config, not a customer-side one, which can confuse people when talking about BGP route filters. Similar concept, similar name, plenty of confusion.

With ExpressRoute Microsoft peering, out of the box no routes are advertised from Azure to the customer until a Route Filter is associated with the peering. Inside that Route Filter, BGP Community tags for the relevant services also need to be defined.

Again, just to highlight: Route Filters are only required for Microsoft peering, not for Private peering.
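As an illustration, here is a hedged PowerShell sketch of creating a Route Filter with a BGP community rule using the AzureRM module; the names, location and community value are placeholders (check the Microsoft docs for the current list of BGP community tags). The filter is then associated with the circuit’s Microsoft peering, via the portal or Set-AzureRmExpressRouteCircuitPeeringConfig.

# Build a rule that allows routes tagged with a given BGP community
$rule = New-AzureRmRouteFilterRuleConfig -Name 'AllowSelectedServices' `
    -Access Allow `
    -RouteFilterRuleType Community `
    -CommunityList '12076:5010'

# Create the Route Filter containing that rule
New-AzureRmRouteFilter -Name 'MyRouteFilter' `
    -ResourceGroupName 'my-expressroute-rg' `
    -Location 'australiaeast' `
    -Rule $rule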


Changes to Azure AD

Recently Microsoft gave everyone that used ExpressRoute Public peering about 45 days’ notice that, from August 2018, Azure AD authentication and authorisation traffic will no longer be routable via Public peering. This functionality is still available if you use Office 365 over ExpressRoute: simply create a Route Filter and assign the BGP Community “Other Office 365 Services”.

To get access to that BGP Community, it’s much like any Office 365 service being accessed via ExpressRoute: you’ll need your Microsoft TAM to approve the request, as the Microsoft stance on using ExpressRoute for Office 365 traffic has swung again to “you should really use the internet for Office 365, unless maybe Skype for Business/Teams latency is a problem”. Again, this is my experience.

Summary

  • ExpressRoute Public peering has been on the radar to be deprecated for some time now
  • If you create new ExpressRoute circuits in parallel to your legacy ones, don’t expect the new ones to work the same as the legacy ones
    • I’ve even had the Azure product group “restore service” on an ASM/Classic ExpressRoute circuit that had Public peering, which did not restore service at all
    • We essentially spun up Microsoft peering and added the relevant Azure Route Filter
  • ARM ExpressRoute
    • Microsoft peering has merged with Public peering, so Microsoft peering does everything it did before plus Public peering
    • Microsoft peering requires Route Filters to be applied to advertise routes from Azure to the customer
      • BGP Community tags are used inside Route Filters
    • As of August 1st 2018, ExpressRoute Public peering will no longer advertise Azure AD routes
      • These can be accessed via Microsoft peering, using a Route Filter and the BGP Community tag of “Other Office 365 services”
    • No changes to Private peering at all – woohoo! (as of the date of writing this blog)
  • ASM/Classic ExpressRoute
    • You can’t provision a Classic ExpressRoute circuit anymore
    • If you have one, you’ve likely been bumped up to ARM, given the ASM portal is deprecated
    • Legacy ExpressRoute circuits that have been in place since prior to August 1 2017 – enjoy it while it lasts!
      • Any changes that you might need may be difficult to arrange; you’ll likely need to change the service to comply with current standards

Enjoy!

Azure AD Connect: How to run the custom Sync Scheduler with multiple on-premises AD connectors

Hello All,

I was recently involved in a project where I wrote some PowerShell scripts to remotely connect to an Azure AD (AAD) Connect server and run custom manual synchronization cycles (Delta Import & Delta Sync) using AAD Connect’s Custom Scheduler component.

The primary reason we had to do this was an AD migration of users from one AD forest to another. Users from both these AD forests were being synchronized (using a single AAD Connect server in the target AD forest) to a common Azure AD tenant. Post AD migration via the ADMT tool, each migrated AD user merges with its corresponding pre-existing synced identity in Azure AD (due to ms-DS-SourceAnchor being the ImmutableID). Hence this avoids a new user being created in Azure AD post AD migration.

This post details the instructions for the following tasks:

  1. How to run the Azure AD Connect Sync Scheduler remotely for a specific on-premises AD connector?
  2. How to run Delta Import/Delta Sync schedule actions remotely?

SCENARIO:

The AAD Connect server has multiple on-premises AD connectors configured for 2 Active Directory forests (abc.net & xyz.com), synchronizing user accounts from both these AD forests to a common Office 365 tenant (Skynet.com) as shown below.

[Diagram: AAD Connect synchronization topology]

So here are the instructions to manually run the AAD Connect Custom Scheduler for a Delta Import & Delta Sync operation on the “ABC.NET” on-premises AD connector, remotely.

1.  Stop the AutoSyncScheduler on the AADConnect01 server. (By default, a Delta Sync runs on all configured connectors every 30 minutes on an Azure AD Connect server.)

 
Import-Module -Name ActiveDirectory

$AADComputer = "AADCONNECT01.ABC.NET"
$Session = New-PSSession -ComputerName $AADComputer

Invoke-Command -Session $Session -ScriptBlock {Import-Module -Name 'ADSync'}
Invoke-Command -Session $Session -ScriptBlock {Set-ADSyncScheduler -SyncCycleEnabled $false}
Invoke-Command -Session $Session -ScriptBlock {Get-ADSyncScheduler}

Confirm that SyncCycleEnabled is now set to “False” as shown below. This confirms that the AutoSyncScheduler will not run every 30 minutes.

[Screenshot: Get-ADSyncScheduler output]

2.  Run the following PowerShell command to perform a Delta Import on the “ABC.NET” (on-premises) AD connector remotely from a management server.

 
Invoke-Command -Session $Session -ScriptBlock {Invoke-ADSyncRunProfile -ConnectorName "abc.net" -RunProfileName "Delta Import"}

3.  Run the following PowerShell command to perform a Delta Sync on the “ABC.NET” (on-premises) AD connector.

 
Invoke-Command -Session $Session -ScriptBlock {Invoke-ADSyncRunProfile -ConnectorName "abc.net" -RunProfileName "Delta Synchronization"}

4.  Run the following PowerShell command to monitor the sync engine and see if it’s busy due to the Delta Import command issued in the previous step.

 
Invoke-Command -Session $Session -ScriptBlock {Get-ADSyncConnectorRunStatus}

[Screenshot: Get-ADSyncConnectorRunStatus output]

A “RunState” status of “Busy” means that a Delta Synchronization is currently running, as shown above.

The cmdlet returns an empty result if the sync engine is idle and is not running a Connector as shown below.

[Screenshot: empty Get-ADSyncConnectorRunStatus result]
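If you are scripting a sequence of run profiles, you can use this behaviour to wait for the sync engine to go idle before starting the next step. A small sketch:

# Poll until Get-ADSyncConnectorRunStatus returns nothing (i.e. the sync engine is idle)
Invoke-Command -Session $Session -ScriptBlock {
    while (Get-ADSyncConnectorRunStatus) {
        Start-Sleep -Seconds 10
    }
}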

5.  Run the following PowerShell command to “Export” (commit) all the changes to the Azure AD connector “SKYNET.COM - AAD”.

 
Invoke-Command -Session $Session -ScriptBlock {Invoke-ADSyncRunProfile -ConnectorName "skynet.com - AAD" -RunProfileName "Export"}

6.  Finally, do not forget to turn the “SyncCycle” back to its previous default by running the PowerShell command below.

 
Invoke-Command -Session $Session -ScriptBlock {Set-ADSyncScheduler -SyncCycleEnabled $true}

~Cheers

HemantA

IaaS Application Migration Principles and Process – Considerations

What is IaaS Application Migration

Application migration is the process of moving an application program, or a set of applications, from one environment to another. This includes migration from an on-premises enterprise server to a cloud provider’s environment, or from one cloud environment to another; in this case, Infrastructure as a Service (IaaS) application migration.

It is important to consider some migration principles to guide your application migration and allow you to complete the transition successfully. At the same time, having too many principles can impact the overall delivery of the transition, so striking the right balance is important. Hopefully the example principles below will help you structure and customise your organisation’s application migration principles.

Application Migration Principles

[Image: application migration principles]

Server Migration Principles

[Image: application server migration principles]

Database Migration Principles

[Image: database migration principles]

Testing Principles

[Image: testing principles]

Application Migration Process

Application Migration Process – High Level Process Flow

[Image: application migration process flow]

Summary

I hope you found these useful. These are examples, and they will vary according to organisational strategy, priorities and internal processes.

 

 

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well, why would you want to do that?

  • On-demand compute:
    • You may not have access to a physical server, or your computer isn’t powerful enough to handle huge data processing. Or you definitely do not want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute:
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it’s super cheap.
  • It’s super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details around that. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying a SQL database, whether it’s in Azure or not, isn’t that complex. In fact, this part of the post is just to get us started. For this part we’re going to do something simple, because if you want to get things done, you need the fastest way of doing it. And that is what we are going to do. But here’s a quick summary of the ways I thought of doing it:

    1. Using ‘invoke-sqlcmd2‘ – this part of the blog. It’s super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object for complex SQL querying scenarios. [[ This is where the magic kicks in… Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL Database called ‘myDB‘, inside an Azure SQL Server ‘mytestAzureSQL.database.windows.net‘.
  • It’s a simple database containing a single table ‘test_table’. This table has three columns (Id, Name, Age) and contains only two records.
  • We’ve set up ‘Allow Azure Services‘ access on this database in the firewall rules. Here’s how to do that, just in case:
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON‘.
  • We have an Azure Automation account set up. We’ll be using that to test our code.

Now let’s get this up and running

Start by creating two variables, one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL login username and password. You need this because you definitely should not be storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.
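If you would rather script the setup of these assets than click through the portal, a sketch using the AzureRM Automation cmdlets is below; the resource group and Automation account names are placeholders, while the variable and credential names match the ones used in the runbook code later in this post.

# One-off setup, run from your own machine
$params = @{
    ResourceGroupName     = 'my-automation-rg'
    AutomationAccountName = 'my-automation-account'
}

# The two Automation variables for the SQL server and database names
New-AzureRmAutomationVariable @params -Name 'AzureSQL_ServerName' `
    -Value 'mytestAzureSQL.database.windows.net' -Encrypted $false
New-AzureRmAutomationVariable @params -Name 'AzureSQL_DBname' `
    -Value 'myDB' -Encrypted $false

# The Automation credential holding the SQL login
$cred = Get-Credential   # prompts for the SQL username and password
New-AzureRmAutomationCredential @params -Name 'mySqllogin' -Value $cred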

Now you need to import the ‘invoke-sqlcmd2‘ module into the Automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now, from the main menu of the Automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘. Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook; to do that, click on the pencil icon in the top menu to open the editing pane.

Inside the pane, paste the following code (I’ll go through the details, don’t worry).

# Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

# Import the SQL Server name from the Automation variable
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

# Import the SQL DB name from the Automation variable
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet, ‘Get-AutomationPSCredential‘, retrieves the Automation credential object we just created.
    • The next two cmdlets, ‘Get-AutomationVariable‘, retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let’s query our database. To do that, paste the code below after the section above.

# Query to execute
$Query = "select * from Test_Table"

"----- Test Result BEGIN "

# Invoke against the Azure SQL DB
invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

"`n ----- Test Result END "

So what did we do up there?

    • We created a simple variable that contains our query. I know the query is too simple, but you can put whatever you want in there.
    • We executed the ‘invoke-sqlcmd2‘ cmdlet. If you noticed, we didn’t have to import the module we just installed; Azure Automation takes care of that upon every execution.
    • In the cmdlet’s parameter set, we specified the SQL server (retrieved from the Automation variable) and the database name (also an Automation variable). We then used the credential object we imported from Azure Automation and, finally, the query variable we created. The optional switch parameter ‘-Encrypt’ can be used to encrypt the connection to the SQL server.

Let’s run the code and look at the output!

From the editing pane, click on ‘Test pane‘ in the menu above. Click ‘Start‘ to begin testing the code, and observe the output.

Initially the code goes through the following stages of execution:

  • Queuing
  • Starting
  • Running
  • Completed

Now, what’s the final result? Look at the output box and you should see something like this:

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet, right? The output we’re getting here is an object of type ‘DataRow‘. If you wrap the query result in a variable, you can start to do some cool stuff with it, like:

$Result.Count
$Result.Age
$Result | Where-Object -FilterScript {$PSItem.Age -gt 10}

Now imagine doing much more complex things with this.

Quick Hint:

If you include the ‘-Debug’ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

 

Securing your Web front-end with Azure Application Gateway Part 2

In part one of this post we looked at configuring an Azure Application Gateway to secure your web application front-end; it is available here.

In part two we will look at some additional post-configuration tasks, and at how to investigate whether the WAF is blocking any of our application traffic.

First up, we will look at configuring some NSG (Network Security Group) inbound and outbound rules for the subnet that the Application Gateway is deployed within; a scripted sketch of a few of these rules follows the two lists below.

The inbound rules that you will require are below.

  • Allow HTTPS Port 443 from any source.
  • Allow HTTP Port 80 from any source (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 from your Web Server subnet.
  • Allow Application Gateway Health API ports 65503-65534. These are required for correct operation of your Application Gateway.
  • Deny all other traffic; set this rule with a high priority value, i.e. 4000-4096.

The outbound rules that you will require are below.

  • Allow HTTPS Port 443 from your Application Gateway to any destination.
  • Allow HTTP Port 80 from your Application Gateway to any destination (only if required for your application).
  • Allow HTTP/HTTPS Port 80/443 to your Web Server subnet.
  • Allow Internet traffic using the Internet traffic tag.
  • Deny all other traffic; set this rule with a high priority value, i.e. 4000-4096.
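Something like the following could build a few of the inbound rules with the AzureRM module; the NSG name, location and priorities are assumptions, and the remaining inbound and outbound rules would be added the same way.

# Inbound: allow HTTPS from anywhere
$allowHttps = New-AzureRmNetworkSecurityRuleConfig -Name 'Allow-HTTPS-In' `
    -Direction Inbound -Access Allow -Protocol Tcp -Priority 100 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 443

# Inbound: allow the Application Gateway health API ports
$allowHealth = New-AzureRmNetworkSecurityRuleConfig -Name 'Allow-AppGw-Health-In' `
    -Direction Inbound -Access Allow -Protocol Tcp -Priority 110 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '65503-65534'

# Inbound: deny everything else at a high priority value
$denyAll = New-AzureRmNetworkSecurityRuleConfig -Name 'Deny-All-In' `
    -Direction Inbound -Access Deny -Protocol '*' -Priority 4000 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '*'

New-AzureRmNetworkSecurityGroup -Name 'AppGw-Subnet-NSG' `
    -ResourceGroupName 'PRD-RG' -Location 'australiaeast' `
    -SecurityRules $allowHttps, $allowHealth, $denyAll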

Now we need to configure the Application Gateway to write Diagnostic Logs to a Storage Account. To do this open the Application Gateway from within the Azure Portal, find the Monitoring section and click on Diagnostic Logs.

Click on Add diagnostic setting, browse to the Storage Account you wish to write logs to, select all log types and save the changes.

[Screenshot: diagnostic settings]
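This can also be scripted; a sketch (the storage account name is a placeholder):

# Enable Application Gateway diagnostic logs to a storage account
$appGwId   = (Get-AzureRmApplicationGateway -Name 'PRD-APPGW-WAF' -ResourceGroupName 'PRD-RG').Id
$storageId = (Get-AzureRmStorageAccount -ResourceGroupName 'PRD-RG' -Name 'mydiaglogs').Id

Set-AzureRmDiagnosticSetting -ResourceId $appGwId -StorageAccountId $storageId -Enabled $true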

Now that the Application Gateway is configured to store diagnostic logs (we need the ApplicationGatewayFirewallLog), you can start testing your web front-end. To do this, first set the WAF to “Detection” mode, which will log any traffic that would have been blocked. This setting is only recommended for testing purposes and should not be a permanent state.

To change this setting open your Application Gateway from within the Azure Portal and click Web Application Firewall under settings.

Change the Firewall mode to “Detection” for testing purposes and save the changes.
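The same change can be made in PowerShell; a sketch using the gateway names from part one of this series:

$AppGw = Get-AzureRmApplicationGateway -Name 'PRD-APPGW-WAF' -ResourceGroupName 'PRD-RG'

# Switch the WAF to Detection mode for testing
Set-AzureRmApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $AppGw `
    -Enabled $true -FirewallMode Detection -RuleSetType OWASP -RuleSetVersion '3.0'

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw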

Now you can start your web front-end testing. Any traffic that would be blocked will now be allowed, however it will still create a log entry showing you the details for the traffic that would be blocked.

Once testing is completed, open your Storage Account from within the Azure Portal and browse to the insights-logs-applicationgatewayfirewalllog container. Continue opening the folder structure and find the date and time of the testing. The log file is named PT1H.json; download it to your local computer.

Open the PT1H.json file. Any entries for traffic that would be blocked will look similar to the below.

{
"resourceId": "/SUBSCRIPTIONS/....",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T03:30:59Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_1",
  "clientIp": "202.141.210.52",
  "clientPort": "0",
  "requestUri": "/uri",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "942430",
  "message": "Restricted SQL Character Anomaly Detection (args): # of special characters exceeded (12)",
  "action": "Detected",
  "site": "Global",
  "details": {
    "message": "Warning. Pattern match \",
    "file": "rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf",
    "line": "1002"
  },
  "hostname": "website.com.au"
}
}

This will give you useful information to either fix your application or disable a rule that is blocking traffic; the “ruleId” field of the log shows the rule that requires action. Rules should only be disabled temporarily while you remediate your application. They can be disabled/enabled from the Web Application Firewall tab within Application Gateway; just make sure the “Advanced Rule Configuration” box is ticked so that you can see them.

This process of testing and fixing code/disabling rules should continue until you can complete a test without any errors showing in the logs. Once no errors occur, you can change the Web Application Firewall mode back to “Prevention” mode, which will make the WAF actively block traffic that does not pass the rule sets.

Something important to note is the log entry type below, with a ruleId of “0”. This error needs to be resolved by remediating the code, as the default rules cannot be changed within the WAF. Microsoft is working on changing this, however at the time of writing it cannot be done, as the default data length cannot be changed. Sometimes this will occur with a part of the application that cannot be remediated; if this is the case, you would need to look at another WAF product such as a Barracuda etc.

{
"resourceId": "/SUBSCRIPTIONS/...",
"operationName": "ApplicationGatewayFirewall",
"time": "2018-07-03T01:21:44Z",
"category": "ApplicationGatewayFirewallLog",
"properties": {
  "instanceId": "ApplicationGatewayRole_IN_0",
  "clientIp": "1.136.111.168",
  "clientPort": "0",
  "requestUri": "/..../api/document/adddocument",
  "ruleSetType": "OWASP",
  "ruleSetVersion": "3.0",
  "ruleId": "0",
  "message": "",
  "action": "Blocked",
  "site": "Global",
  "details": {
    "message": "Request body no files data length is larger than the configured limit (131072).. Deny with code (413)",
    "data": "",
    "file": "",
    "line": ""
  },
  "hostname": "website.com.au"
}
}

 

In this post we looked at some post-configuration tasks for Application Gateway, such as configuring NSG rules to further protect the network, configuring diagnostic logging, and checking the Web Application Firewall logs for application traffic that would be blocked by the WAF. The Application Gateway can be a good alternative to dedicated appliances as it is easier to configure and manage. However, in cases where more control over WAF rule sets is required, a dedicated WAF appliance may be a better fit.

Hopefully this two-part series helps you with your decision making when it comes to securing your web front-end applications.

Securing your Web front-end with Azure Application Gateway Part 1

I have just completed a project with a customer who was using Azure Application Gateway to secure their web front-end, and thought it would be good to post some findings.

This is part one of a two-part post looking at how to secure a web front-end using Azure Application Gateway with the WAF component enabled. In this post I will explain the process for configuring the Application Gateway once deployed. You can deploy the Application Gateway from an ARM template, Azure PowerShell or the portal. To be able to enable the WAF component, you must use a Medium or Large instance size for the Application Gateway.

Using Application Gateway allows you to remove the need for your web front-end to have a public endpoint assigned to it, for instance if it is a Virtual Machine then you no longer need a Public IP address assigned to it. You can deploy Application Gateway in front of Virtual Machines (IaaS) or Web Apps (PaaS).

An overview of how this will look is shown below. The Application Gateway requires its own subnet which no other resources can be deployed to. The web server (Virtual Machine) can be assigned to a separate subnet, if using a web app no subnet is required.

[Diagram: Application Gateway in front of a web server]

 

The benefits we will receive from using Application Gateway are:

  • Remove the need for a public endpoint from our web server.
  • End-to-end SSL encryption.
  • Automatic HTTPS to HTTPS redirection.
  • Multi-site hosting, though in this example we will configure a single site.
  • In-built WAF solution utilising OWASP core rule sets 3.0 or 2.2.9.

To follow along you will require Azure PowerShell module version 3.6 or later. You can install or upgrade by following this link.

Before starting you need to make sure that an Application Gateway with an instance size of Medium or Large has been deployed with the WAF component enabled and that the web server or web app has been deployed and configured.

Now open PowerShell ISE and login to your Azure account using the below command.


Login-AzureRmAccount

Now we need to set the variables to work with. These variables are your Application Gateway name, the resource group where your Application Gateway is deployed, your Backend Pool name and IP, your HTTP and HTTPS Listener names, your host name (website name), the HTTP and HTTPS rule names, your front-end (Private) and back-end (Public) SSL names, and your Private certificate password.

NOTE: The Private certificate needs to be in PFX format and your Public certificate in CER format.

Change these to suit your environment and copy both your pfx and cer certificate files to C:\Temp\Certs on your computer.

# Application Gateway name.
[string]$ProdAppGw = "PRD-APPGW-WAF"
# The resource group where the Application Gateway is deployed.
[string]$resourceGroup = "PRD-RG"
# The name of your Backend Pool.
[string]$BEPoolName = "BackEndPool"
# The IP address of your web server or URL of web app.
[string]$BEPoolIP = "10.0.1.10"
# The name of the HTTP Listener.
[string]$HttpListener = "HTTPListener"
# The name of the HTTPS Listener.
[string]$HttpsListener = "HTTPSListener"
# Your website hostname/URL.
[string]$HostName = "website.com.au"
# The HTTP Rule name.
[string]$HTTPRuleName = "HTTPRule"
# The HTTPS Rule name.
[string]$HTTPSRuleName = "HTTPSRule"
# SSL certificate name for your front-end (Private cert pfx).
[string]$FrontEndSSLName = "Private_SSL"
# SSL certificate name for your back-end (Public cert cer).
[string]$BackEndSSLName = "Public_SSL"
# Password for front-end SSL (Private cert pfx).
[string]$sslPassword = "<Enter your Private Certificate pfx password here.>"

Our first step is to configure the Front and Back end HTTPS settings on the Application Gateway.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Front-end (Private) SSL certificate. If you have any issues with this step you can upload the certificate from within the Azure Portal by creating a new Listener.

Add-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGw `
-Name $FrontEndSSLName -CertificateFile "C:\Temp\Certs\PrivateCert.pfx" `
-Password $sslPassword

Save the certificate as a variable.

$AGFECert = Get-AzureRmApplicationGatewaySslCertificate -ApplicationGateway $AppGW `
            -Name $FrontEndSSLName

Configure the front-end port for SSL.

Add-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
-Name "appGatewayFrontendPort443" `
-Port 443

Add the back-end (Public) SSL certificate.

Add-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
-Name $BackEndSSLName `
-CertificateFile "C:\Temp\Certs\PublicCert.cer"

Save the back-end (Public) SSL as a variable.

$AGBECert = Get-AzureRmApplicationGatewayAuthenticationCertificate -ApplicationGateway $AppGW `
            -Name $BackEndSSLName

Configure back-end HTTPS settings.

Add-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
-Name "appGatewayBackendHttpsSettings" `
-Port 443 `
-Protocol Https `
-CookieBasedAffinity Enabled `
-AuthenticationCertificates $AGBECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next stage is to configure the back-end pool to connect to your Virtual Machine or Web App. This example is using the IP address of the NIC attached to the web server VM. If using a web app as your front-end you can configure it to accept traffic only from the Application Gateway by setting an IP restriction on the web app to the Application Gateway IP address.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Add the Backend Pool Virtual Machine or Web App. This can be a URL or an IP address.

Add-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGw `
-Name $BEPoolName `
-BackendIPAddresses $BEPoolIP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The next steps are to configure the HTTP and HTTPS Listeners.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the front-end port as a variable – port 80.

$AGFEPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
            -Name "appGatewayFrontendPort"

Save the front-end IP configuration as a variable.

$AGFEIPConfig = Get-AzureRmApplicationGatewayFrontendIPConfig -ApplicationGateway $AppGw `
                -Name "appGatewayFrontendIP"

Add the HTTP Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HttpListener `
-Protocol Http `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFEPort `
-HostName $HostName

Save the HTTP Listener for your website as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HTTPListener

Save the front-end SSL port as a variable – port 443.

$AGFESSLPort = Get-AzureRmApplicationGatewayFrontendPort -ApplicationGateway $AppGw `
               -Name "appGatewayFrontendPort443"

Add the HTTPS Listener for your website.

Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
-Name $HTTPSListener `
-Protocol Https `
-FrontendIPConfiguration $AGFEIPConfig `
-FrontendPort $AGFESSLPort `
-HostName $HostName `
-RequireServerNameIndication true `
-SslCertificate $AGFECert

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

The final part of the configuration is to configure the HTTP and HTTPS rules and the HTTP to HTTPS redirection.

First configure the HTTPS rule.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the Backend Pool as a variable.

$BEP = Get-AzureRmApplicationGatewayBackendAddressPool -ApplicationGateway $AppGW `
       -Name $BEPoolName

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Save the back-end HTTPS settings as a variable.

$AGHTTPS = Get-AzureRmApplicationGatewayBackendHttpSettings -ApplicationGateway $AppGW `
           -Name "appGatewayBackendHttpsSettings"

Add the HTTPS rule.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPSRuleName `
-RuleType Basic `
-BackendHttpSettings $AGHTTPS `
-HttpListener $AGSSLListener `
-BackendAddressPool $BEP

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Now configure the HTTP to HTTPS redirection and the HTTP rule with the redirection applied.

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the HTTPS Listener as a variable.

$AGSSLListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
                 -Name $HttpsListener

Add the HTTP to HTTPS redirection.

Add-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
-RedirectType Permanent `
-TargetListener $AGSSLListener `
-IncludePath $true `
-IncludeQueryString $true `
-ApplicationGateway $AppGw

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

Save the Application Gateway as a variable.

$AppGw = Get-AzureRmApplicationGateway -Name $ProdAppGw `
         -ResourceGroupName $resourceGroup

Save the redirect as a variable.

$Redirect = Get-AzureRmApplicationGatewayRedirectConfiguration -Name ProdHttpToHttps `
            -ApplicationGateway $AppGw

Save the HTTP Listener as a variable.

$AGListener = Get-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGW `
              -Name $HttpListener

Add the HTTP rule with redirection to HTTPS.

Add-AzureRmApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
-Name $HTTPRuleName `
-RuleType Basic `
-HttpListener $AGListener `
-RedirectConfiguration $Redirect

Apply the settings to the Application Gateway.

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

 
In this post we covered how to configure Azure Application Gateway to secure a web front-end whether running on Virtual Machines or Web Apps. We have configured the Gateway for end-to-end SSL encryption and automatic HTTP to HTTPS redirection removing this overhead from the web server.

In part two we will look at some additional post configuration tasks and how to make sure your web application works with the WAF component.

Building a Breakfast Ordering Skill for Amazon Alexa – Part 1

First published at https://nivleshc.wordpress.com

Introduction

At the AWS Summit Sydney this year, Telstra decided to host a breakfast session for some of their VIP clients. This was more of a networking session, to get to know the clients much better. However, instead of having a “normal” breakfast session, we decided to take it up one level 😉

Breakfast ordering is quite “boring” if you ask me 😉 The waitress comes to the table, gives you a menu and asks what you would like to order. She then takes the order, and after some time your meal arrives.

As it was AWS Summit, we decided to sprinkle a bit of technical fairy dust on the ordering process. Instead of having the waitress take the breakfast orders, we contemplated the idea of using Amazon Alexa instead 😉

I decided to give Alexa skill development a go. However, not having any prior Alexa skill development experience, I anticipated an uphill battle, having to first learn the product and then develop for it. To my amazement, the learning curve wasn’t too steep, and over a weekend, spending just 12 hours in total, I had a working proof-of-concept breakfast ordering skill ready!

Here is a link to the proof of concept skill: https://youtu.be/Z5Prr31ya10

I then spent a week polishing the Alexa skill, giving it more “personality” and adding a more “human” experience.

All the work paid off when I got told that my Alexa skill would be used at the Telstra breakfast session! I was over the moon!

For the final product, to make things even more interesting, I created a business intelligence chart using Amazon QuickSight, showing the popularity of each of the food and drink items on the menu. The popularity was based on the orders that were being received.

[Image: QuickSight chart beside the Amazon Echo Dot]

Using a laptop, I displayed the chart near the Amazon Echo Dot. This was to help people choose what food or drink they wanted to order (a neat marketing trick 😉). If you would like to know more about Amazon QuickSight, you can read about it at Amazon QuickSight – An elegant and easy to use business analytics tool.

Just as a teaser, you can watch one of the ordering scenarios for the finished breakfast ordering skill at https://youtu.be/T5PU9Q8g8ys

In this blog, I will introduce the architecture behind Amazon Alexa and prepare you for creating an Amazon Alexa Skill. In the next blog, we will get our hands dirty with creating the breakfast ordering Alexa skill.

How does Amazon Alexa actually work?

I have heard a lot of people use the name “Alexa” interchangeably with the Amazon Echo devices. As good as that is for Amazon’s marketing team, unfortunately I have to set the record straight. Amazon Echo devices are the physical devices that Amazon sells that interface with the Alexa Cloud. You can see the whole range at https://www.amazon.com/Amazon-Echo-And-Alexa-Devices/b?ie=UTF8&node=9818047011. These devices don’t have any smarts in them. They sit in the background listening for the “wake” word, and then start streaming the audio to the Alexa Cloud.

The Alexa Cloud is where all the smarts are located. Using speech recognition, machine learning and natural language processing, the Alexa Cloud converts the audio to text. It identifies the skill name that the user requested, the intent, and any slot values it finds (these will be explained further in the next blog). The intent and slot values (if any) are passed to the identified skill. The skill processes the input using some form of compute (AWS Lambda in my case) and passes the output back to the Alexa Cloud. The Alexa Cloud converts the skill output to Speech Synthesis Markup Language (SSML) and sends it to the Amazon Echo device. The device then converts the SSML to audio and plays it to the user.

Below is an overview of the process.

[Diagram: Alexa Skills Kit request flow]

Diagram is from https://developer.amazon.com/blogs/alexa/post/1c9f0651-6f67-415d-baa2-542ebc0a84cc/build-engaging-skills-what-s-inside-the-alexa-json-request
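To make that flow a little more concrete, below is a trimmed, hypothetical example of the kind of JSON request the Alexa Cloud sends to a skill’s endpoint once it has resolved the intent and slots. The skill, intent and slot names here are made up:

{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "requestId": "amzn1.echo-api.request.example",
    "locale": "en-AU",
    "intent": {
      "name": "OrderBreakfastIntent",
      "slots": {
        "FoodItem": {
          "name": "FoodItem",
          "value": "pancakes"
        }
      }
    }
  }
}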

Getting things ready

Getting an Alexa enabled device

The first thing to get is an Alexa enabled device. Amazon has released quite a few different varieties of Alexa enabled devices. You can checkout the whole family here.

If you are keen to try a side project, you can build your own Alexa device using a Raspberry Pi. A good guide can be found at https://www.lifehacker.com.au/2016/10/how-to-build-your-own-amazon-echo-with-a-raspberry-pi/

You can also try out EchoSim (Amazon Echo Simulator), a browser-based interface to Amazon Alexa. Please ensure you read the limits of EchoSim on their website; for instance, it cannot stream music.

For developing the breakfast ordering skill, I decided to purchase an Amazon Echo Dot. It’s a nice compact device which doesn’t cost much and can run off any USB power source. For the Telstra breakfast session, I actually ran it off my portable battery pack 😉

Create an Amazon Account

Now that you have got yourself an Alexa enabled device, you will need an Amazon account to register it with. You can use one that you already have or create a new one. If you don’t have an Amazon account, you can either create one beforehand by going to https://www.amazon.com, or create it straight from the Alexa app (the Alexa app is used to register the Amazon Echo device).

Setup your Amazon Echo Device

Use the Alexa app to setup your Amazon Echo device. When you login to the app, you will be asked for the Amazon Account credentials. As stated above, if you don’t have an Amazon account, you can create it from within the app.

Create an Alexa Developer Account

To create skills for Alexa, you need a developer account. If you don’t have one already, you can create one by going to https://developer.amazon.com/alexa. There are no costs associated with creating an Alexa developer account.

Just make sure that the username you choose for your Alexa developer account matches the username of the Amazon account to which your Amazon Echo is registered. This will enable you to test your Alexa skills on your Amazon Echo device without having to publish them on the Alexa Skills Store (the skills will show under Your Skills in the Alexa app).

Create an AWS Free Tier Account

In order to process any of the requests sent to the breakfast ordering Alexa skill, we will make use of AWS Lambda. AWS Lambda provides a cheap and cost-effective way to run code, because you are only charged for the time the code runs. There are no costs for any idle time.

If you already have an AWS account you can use that; otherwise, you can sign up for an AWS Free Tier account by going to https://aws.amazon.com. AWS provides a lot of services free for the first 12 months under the Free Tier, with some services continuing the free allowance even beyond the 12 months (AWS Lambda is one such service). For a full list of Free Tier services, visit https://aws.amazon.com/free/

High Level Architecture for the Breakfast Ordering Skill

Below is the architectural overview for the Breakfast Ordering Skill that I built. I will introduce you to the various components over the next few blogs.

[Diagram: Breakfast Ordering System high-level architecture]

In the next blog, I will take you through the Alexa Developer Console, where we will use the Alexa Skills Kit (ASK) to start creating our breakfast ordering skill. We will define the invocation name, intents and slot names for our Alexa skill. Not familiar with these terms? Don’t worry, I will explain them in the next blog. I hope to see you there.

See you soon.