Microsoft Azure Consumption Insights with Power BI

In a recent engagement I was tasked with assessing the Azure consumption for a customer who had been exceeding their forecasted budget for the last several months. In a short timeframe I had to make sense of the recon files provided via the Azure Enterprise Portal and present them in a decision-making format that the business could understand and easily consume. To understand and optimise the cost, it is important to identify where the cost originated (i.e. which resources and resource groups) and to build the spend pattern by analysing the trends.

We all struggle at some point to find intelligent visuals for analysing the consumption data produced by Azure. Earlier, our options were to play with Excel and be a chart artist, rely on the Microsoft-provided graphs in the Azure Portal, or ingest the data into a database and run analysis tools. To make life easier, we ingested the present and historic data into Microsoft Power BI. This blog discusses the visual capability provided by Microsoft Power BI that can be used for periodic review of Azure billing data, alongside the Azure-provided Advisor tool.

What is Microsoft Power BI?

Power BI is a free application, available as a desktop install or as a web version, that lets you connect to, transform, and visualise your data. With Power BI Desktop, you can connect to multiple different sources of data and combine them (often called modelling) into a data model that lets you build visuals, and collections of visuals you can share as reports. One of the available data sources is Microsoft Azure Consumption Insights, currently in beta.

Step 1: Download Microsoft Power BI Desktop (Figure 1) or use the web version (Figure 2); in this blog we use the web version of Power BI.

Figure 1

Figure 2

Step 2: Ensure that you have a minimum of reader permission on the Azure Enterprise Portal. An existing Enrolment Administrator who has full control can grant you appropriate permissions.

Figure 3

Step 3: Log in to the EA portal and go to the 'Reporting' section on the left pane. This will get you to the screen below. Go to 'API Access Key' and copy the access key; if you do not see an access key, speak to the enrolment administrator to generate one. Once you have the key, make a note of it and keep it in a secure location, as you will need it in the following steps.

Figure 4

Step 4: While you are logged into the EA portal, make a note of the 'Enrolment Number'; it is available under 'Manage', in the Enrolment Details section. You will need this in the following steps.

Step 5: Launch the Power BI Desktop app you downloaded or open https://app.powerbi.com/groups/me/getdata/welcome. Click on 'Get Data' to connect to the EA portal APIs; in the desktop app it is located in the 'Home' menu ribbon, and in the web version it is located at the bottom of the left pane. Once you have launched Get Data, you will see the window below. Go to 'Online Services' in the left list, select 'Microsoft Azure Consumption Insights (Beta)' in the right list and hit 'Connect'.

Figure 5

Step 6: Once you connect, a new window will ask you to provide the 'Enrolment Number' you captured in Step 4. Input that number and hit 'OK'.

Figure 6

Step 7: Now input the Account Key (the API Access Key you captured in Step 3) and hit 'Connect'. In the next screen you'll be presented with multiple options to select from; choose 'Usage'. Once you have made your choice, it could take a few to several minutes to ingest the data from the portal.

Figure 7

Step 8: When the data source is connected you will see the consumption data, with a dashboard for your enterprise. Default data is provided in the following tabs:

  • Usage Trend by Account and Subscriptions: this graph provides a summary view of enterprise consumption by subscription and the various accounts in the EA portal.
  • Usage Trend by Services.
  • Top 5 Usage Drivers: this graph provides a visual for each of the resources by 'Meter Name'. Meter Name is the identifier used by Microsoft Azure to monitor consumption by resource type for billing purposes. The highlighted period indicates unusual consumption.
  • Usage Summary. When compared with the previous graph (Top 5 Usage Drivers) and the period highlighted in the graph below, analysis confirmed unusual consumption. We drilled into the usage detail for this period for further analysis, which revealed high usage of premium managed disks, and discussed the change during this period with the customer.
  • Market Place Charges shows any purchases from the Azure Marketplace. In this case, the customer did not have any third-party usage.
  • Usage dashboards based on Tags, Resource Groups and Location continued to point to unusual consumption of premium disks. Usage by tags data is only available if you have tagged your resources in Azure.

Conclusion

After careful observation and detailed analysis of the data ingested from the EA portal, it was concluded that premium disk usage (as discussed earlier in this blog) over a period of several months (highlighted in the earlier figures) had done the major damage to their financials; this was due to an application design that consumed premium storage. Fixing the application configuration optimised the disk usage and dropped the cost significantly in the following months (2018-06 to 2018-08).

Power BI provides the capability to download the analysis in PowerPoint format, which helps you present and reuse the visuals in your reports, similar to the graphs discussed in this blog.

Cloud services have revolutionised the way Information Technology (IT) departments support their organisations to be more productive without investing huge capital upfront on building servers, procuring software and so on. The key to effective use of cloud services is planning, and one of the key areas is to forecast consumption effectively to prevent unexpected charges. This section highlights some best practices to make spend more predictable:

  • To get a better idea of your spend, the Azure pricing calculator can provide an estimate of costs before you provision an Azure resource.
  • The Account Admin for an Azure subscription can use the Azure Billing Alert Service to create customised billing alerts that help monitor and manage billing activity for your Azure accounts.
  • Apply tags to your Azure resources to logically organise them by categories. Each tag consists of a name and a value. After you apply tags, you can retrieve all the resources in your subscription with that tag name and value (see the PowerShell sketch after this list).
  • Regularly check the cost breakdown and burn rate in the Azure portal to see your current spend.
  • Consider enabling cost-cutting features like auto-shutdown for VMs where possible. This can be enabled either in the virtual machine configuration or via runbook(s) through an Automation account.
  • Periodic review of Azure Advisor recommendations helps you reduce costs by identifying resources with low usage and possible cost optimisation.
  • Pre-pay for one year or three years of virtual machine or SQL Database compute capacity via Azure reservations for virtual machines or SQL databases that run for long periods of time. Pre-paying gives you a discount on the resources you use and can reduce virtual machine or SQL Database compute costs by up to 72 percent.
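The tagging tip above can also be scripted. Here is a minimal, hedged PowerShell sketch using the AzureRM module; the resource ID, tag names and values are placeholders:

# Tag an existing resource, then retrieve everything carrying that tag
Set-AzureRmResource -ResourceId "/subscriptions/<sub-id>/resourceGroups/rg-production/providers/Microsoft.Compute/virtualMachines/myvm01" `
    -Tag @{ CostCentre = "Finance"; Environment = "Prod" } -Force
Find-AzureRmResource -TagName "CostCentre" -TagValue "Finance" |
    Select-Object Name, ResourceGroupName, ResourceType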

Processing Azure Event Grid events across Azure subscriptions

Consider a scenario where you need to listen to Azure resource events happening in one Azure subscription from another Azure subscription. A use case for such a scenario can be when you are developing a solution where you listen to events happening in your customers’ Azure subscriptions, and then you need to handle those events from an Azure Function or Logic App running in your subscription.

A solution for such a scenario could be:
1. Create an Azure Function in your subscription that will handle Azure resource events received from Azure Event Grid.
2. Handle event validation in the above function, which is required to perform a handshake with Event Grid.
3. Create an Azure Event Grid subscription in the customers’ Azure subscriptions.

Before I go into the details, let's have a brief overview of Azure Event Grid.

Azure Event Grid is a routing service based on a publish/subscribe model, which is used for developing event-based applications. Event sources publish events, and event handlers can subscribe to these events via Event Grid subscriptions.

event-grids

Figure 1. Azure event grid publishers and handlers

Azure Event Grid subscriptions can be used to subscribe to system topics as well as custom topics. Various Azure services automatically send events to Event Grid. The system-level event sources that currently send events to Event Grid are Azure subscriptions, resource groups, Event Hubs, IoT Hubs, Azure Media Services, Service Bus, and Blob storage.

You can listen to these events by creating an event handler. Azure Event Grid supports several Azure services and custom webhooks as event handlers. There are a number of Azure services that can be used as event handlers, including Azure Functions, Logic Apps, Event Hubs, Azure Automation, Hybrid Connections, and storage queues.

In this post I'll focus on using Azure Functions as the event handler, to which an Event Grid subscription will send events whenever an event occurs at the whole Azure subscription level. You can also create an Event Grid subscription at a resource group level to be notified only for resources belonging to a particular resource group. Figure 1 above shows the various event sources that can publish events and the supported event handlers; for our solution, Azure subscriptions (the event source) and Azure Functions (the event handler) are the relevant ones.

Create an Azure Function in your subscription and handle the validation event from Event Grid

If our Event Grid subscription and function were in the same subscription, then we could have simply created an Event Grid-triggered Azure Function and specified it as the endpoint in the Event Grid subscription. However, in our case this cannot be done, as we need to have the Event Grid subscription in the customer's subscription and the Azure Function in our subscription. Therefore, we will simply create an HTTP-triggered function or a webhook function.

Because we're not selecting an Event Grid-triggered function, we need to do an extra validation step. At the time of creating a new Azure Event Grid subscription, Event Grid requires the endpoint to prove ownership of the webhook, so that Event Grid can deliver the events to that endpoint. For built-in event handlers such as Logic Apps, Azure Automation, and Event Grid-triggered functions, this validation process is not necessary. However, in our scenario, where we are using an HTTP-triggered function, we need to handle the validation handshake ourselves.

When an Event Grid subscription is created, it sends a subscription validation event in a POST request to the endpoint. All we need to do is to handle this event, read the request body, read the validationCode property in the data object in the request, and send it back in the response. Once Event Grid receives the same validation code back it knows that endpoint is validated, and it will start delivering events to our function. Following is an example of a POST request that Event Grid sends to the endpoint for validation.
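The payload below is a representative sketch of that validation request body; the id, topic, timestamp and validation code are placeholders:

[{
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/<subscription-id>",
  "subject": "",
  "data": {
    "validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
  },
  "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
  "eventTime": "2018-01-25T22:12:19.4556811Z",
  "metadataVersion": "1",
  "dataVersion": "1"
}]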

Our function can check whether the eventType is Microsoft.EventGrid.SubscriptionValidationEvent, which indicates it is meant for validation, and send back the value in data.validationCode. In all other scenarios, eventType will be based on the resource on which the event occurred, and the function can process those events accordingly. The validation event also contains a header, aeg-event-type, with the value SubscriptionValidation; you should validate this header as well.

In our case, a small Node.js function handles the validation event and sends back the validation code, completing the validation handshake.

Processing Resource Events

To process the resource events, you can filter them on the resourceProvider or operationName properties. For example, the operationName property for a VM create event is set to Microsoft.Compute/virtualMachines/write. The event payload follows a fixed schema as described here. An event for a virtual machine creation looks roughly like the example below.
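This is a hedged, trimmed illustration of such an event; identifiers are placeholders and some properties are omitted:

[{
  "subject": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>",
  "eventType": "Microsoft.Resources.ResourceWriteSuccess",
  "eventTime": "2018-04-02T01:00:00.0000000Z",
  "topic": "/subscriptions/<subscription-id>",
  "data": {
    "resourceProvider": "Microsoft.Compute",
    "resourceUri": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>",
    "operationName": "Microsoft.Compute/virtualMachines/write",
    "status": "Succeeded",
    "subscriptionId": "<subscription-id>",
    "tenantId": "<tenant-id>"
  }
}]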

Authentication

While creating the Event Grid subscription (detailed in the next section), it should be created with the endpoint URL pointing to the function URL, including the function key. The event validation done for the handshake also acts as another means of authentication. To add an extra layer of authentication, you can generate your own access token and append it to your function URL when specifying the endpoint for the Event Grid subscription. Your function can then also validate this access token before further processing.

Create an Azure Event Grid Subscription in customer’s subscription

A subscription owner/administrator should be able to run an Azure CLI or PowerShell command to create the Event Grid subscription in the customer's subscription.

Important: This step must be done after the Azure Function above has been created. Otherwise, when you try to create the Event Grid subscription and it raises the subscription validation event, Event Grid will not get a valid response back, and the creation of the Event Grid subscription will fail.

You can add filters to your Event Grid subscription to filter the events by subject. Currently, events can only be filtered with text comparison of the subject property value starting with or ending with some text. The subject filter doesn’t support a wildcard or regex search.

Azure CLI or PowerShell

An example command to create an Event Grid subscription that receives all events occurring at the subscription level is shown below. Here https://myhttptriggerfunction.azurewebsites.net/api/f1?code= is the URL of the function app, including the function key.
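A minimal PowerShell sketch using the AzureRM.EventGrid module, assuming the function URL above with its key appended, would be:

New-AzureRmEventGridSubscription -EventSubscriptionName "eg-subscription-test" `
    -Endpoint "https://myhttptriggerfunction.azurewebsites.net/api/f1?code=<function-key>"

Run without any resource or resource group parameters, the subscription should be created at the scope of the Azure subscription selected in the current context.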

Azure REST API

Instead of asking the customer to run a CLI or PowerShell script to create the Event Grid subscription, you can automate this process by writing another Azure Function that calls the Azure REST API. The API call can be invoked using a service principal with rights on the customer's subscription.

To create an Event Grid subscription for the customer’s Azure Subscription, you submit the following PUT request:

PUT https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.EventGrid/eventSubscriptions/eg-subscription-test?api-version=2018-01-01

Request Body:

{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "https://myhttptriggerfunction.azurewebsites.net/api/f1?code="
      }
    },
    "filter": {
      "isSubjectCaseSensitive": false
    }
  }
}

 

Hub-Spoke communication using vNet Peering and User Defined Routes

Introduction

Recently, I was working on a solution for a customer where they wanted to implement a Hub-Spoke virtual network topology that enabled the HUB to communicate with its Spoke networks via vNet Peering. They also required the SPOKE networks to be able to communicate with each other but peering between them was NOT allowed.

Drawing1

As we know, vNet peering is non-transitive, which means that even though SPOKE 1 is peered with the HUB network and the HUB is peered with SPOKE 2, this does not enable automatic communication between SPOKE 1 and SPOKE 2 unless they are explicitly peered, which in our case we were not allowed to do.

So, let’s explore a couple of options on how we can enable communication between the Spoke networks without peering.

Solutions

There are several ways to implement Spoke to Spoke communication, but in this blog I’d like to provide details of the 2 feasible options that worked for us.

Option 1 is to place a Network Virtual Appliance (NVA), basically a virtual machine with a configured firewall/router, within the HUB and configure it to forward traffic to and from the SPOKE networks.

If you search the Azure Market Place with the keywords “Network Virtual Appliance“, you will be presented with several licensed products that you could install and configure in the HUB network to establish this communication. Configuration of these virtual appliances varies and installation instructions can easily be found on their product websites.

Option 2 is to attach a Virtual Network Gateway to the HUB network and make use of User Defined Routes to enable communication between the SPOKEs.

The above information was sourced from this very helpful blog post.

The rest of this blog is a detailed step by step guide and the testing performed for implementing the approach mentioned in Option 2.

Implementation

1.) Create 3 Virtual Networks with non-overlapping IP address spaces

  • Log on to the Azure Portal and create the Hub Virtual Network as follows

1

  • Create the 2 additional virtual networks as the SPOKES with the following settings:

2

3
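If you prefer scripting, a rough PowerShell (AzureRM) equivalent for the three networks is sketched below. The resource group, region and default subnet prefixes are assumptions, chosen to be consistent with the addresses used later in this post:

$rg  = "rg-hubspoke"       # assumed resource group name
$loc = "australiaeast"     # assumed region
New-AzureRmVirtualNetwork -Name "HubNetwork"    -ResourceGroupName $rg -Location $loc -AddressPrefix "10.4.0.0/16" `
    -Subnet (New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.4.0.0/24")
New-AzureRmVirtualNetwork -Name "Spoke1Network" -ResourceGroupName $rg -Location $loc -AddressPrefix "10.5.0.0/16" `
    -Subnet (New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.5.0.0/24")
New-AzureRmVirtualNetwork -Name "Spoke2Network" -ResourceGroupName $rg -Location $loc -AddressPrefix "10.6.0.0/16" `
    -Subnet (New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.6.0.0/24")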

2.) Now that we have the 3 Virtual Networks provisioned, let’s start Peering them as follows:

a.) HubNetwork <> Spoke1Network

b.) HubNetwork <> Spoke2Network

  • Navigate to the Hub Virtual Network and create a new peering with the following settings:

4

Select the “Allow gateway transit” option.

  • Repeat the above step to create a peering with Spoke2Network as well.
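For reference, a hedged PowerShell sketch of the two hub-side peerings (network and resource group names are the assumptions from the earlier sketch):

$rg     = "rg-hubspoke"
$hub    = Get-AzureRmVirtualNetwork -Name "HubNetwork"    -ResourceGroupName $rg
$spoke1 = Get-AzureRmVirtualNetwork -Name "Spoke1Network" -ResourceGroupName $rg
$spoke2 = Get-AzureRmVirtualNetwork -Name "Spoke2Network" -ResourceGroupName $rg
Add-AzureRmVirtualNetworkPeering -Name "HubToSpoke1" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke1.Id -AllowGatewayTransit
Add-AzureRmVirtualNetworkPeering -Name "HubToSpoke2" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke2.Id -AllowGatewayTransit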

3.) To establish a successful connection, we also have to create a peering to the HUB virtual network from each of the SPOKE networks.

  • Navigate to Spoke1Network and create a new Peering

6

Notice that when we select the "Use remote gateways" option, we get an error, as we haven't yet attached a Virtual Network Gateway to the HUB network. Once a gateway has been attached, we will come back and re-configure this.

For now, do not select this option; just click Create.

  • Repeat the above step for Spoke2 Virtual Network

4.) Let’s now provision a Virtual Network Gateway

  • Before provisioning a gateway, a Gateway Subnet is required within the Hub virtual network. To create this, click on the "Subnets" option in the blade of the Hub virtual network and then click on "Gateway subnet".

7

For the purpose of this demo, we will create a Gateway Subnet with the smallest possible address space, a /29 CIDR, which provides 8 addresses; the first and last IP are reserved for protocol conformance, and x.x.x.1 – x.x.x.3 are reserved for Azure services. For production environments, a Gateway Subnet with at least a /27 address space is advised.

Let's assume for now that when we provision the Virtual Network Gateway, the internal IP address it is assigned will be from the 4th address onwards, which in our case would be 10.4.1.4.

  • Provision the Virtual Network Gateway

Create a new Virtual Network Gateway with the following settings:

8

Ensure that you select the Hub Virtual Network in the Virtual network field which is where we want the Gateway to be attached. Click Create.

  • The Gateway provisioning process may take a while to complete and you will need to wait for the Updating status to disappear. It can take anywhere between 30-45 mins.

9
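For reference, a hedged PowerShell sketch of the same gateway provisioning (names, region and the 10.4.1.0/29 gateway subnet prefix are assumptions consistent with the figures above):

$rg  = "rg-hubspoke"       # assumed resource group from the earlier sketches
$loc = "australiaeast"     # assumed region
$hub = Get-AzureRmVirtualNetwork -Name "HubNetwork" -ResourceGroupName $rg
Add-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.4.1.0/29" -VirtualNetwork $hub | Set-AzureRmVirtualNetwork
$hub      = Get-AzureRmVirtualNetwork -Name "HubNetwork" -ResourceGroupName $rg
$gwSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $hub
$gwPip    = New-AzureRmPublicIpAddress -Name "HubGateway-pip" -ResourceGroupName $rg -Location $loc -AllocationMethod Dynamic
$gwIpCfg  = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwIpConfig" -SubnetId $gwSubnet.Id -PublicIpAddressId $gwPip.Id
New-AzureRmVirtualNetworkGateway -Name "HubGateway" -ResourceGroupName $rg -Location $loc `
    -IpConfigurations $gwIpCfg -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1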

5.) Once the gateway has been provisioned, let's now go back to the Peering section of each of the SPOKE networks and enable the "Use remote gateways" option

10

  • Repeat the above step for Spoke2ToHub peering

6.) We will now create the Route Tables and define user routes needed for the SPOKE to SPOKE communication

  • Create 2 new Route tables in the portal with the following settings:

11

12

  • Define the User Routes as follows:

13

In the Address Prefix field, insert the CIDR Subnet address of the Spoke2 Virtual Network which in our case is 10.6.0.0/16

Select Next hop type as Virtual appliance and the Next hop address as the internal address of the Virtual Network Gateway. In our case, we are going to have this set as 10.4.1.4 as mentioned earlier.

  • Repeat this step to create a new Route in the Spoke2RouteTable as well by inserting the Subnet CIDR address of Spoke1 Virtual Network
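A hedged PowerShell sketch of both route tables and their user defined routes (route table names are from this post; the resource group, region and the 10.4.1.4 gateway IP are the assumptions discussed above):

$rg  = "rg-hubspoke"; $loc = "australiaeast"
New-AzureRmRouteTable -Name "Spoke1RouteTable" -ResourceGroupName $rg -Location $loc
New-AzureRmRouteTable -Name "Spoke2RouteTable" -ResourceGroupName $rg -Location $loc
Get-AzureRmRouteTable -Name "Spoke1RouteTable" -ResourceGroupName $rg |
    Add-AzureRmRouteConfig -Name "ToSpoke2" -AddressPrefix "10.6.0.0/16" -NextHopType VirtualAppliance -NextHopIpAddress "10.4.1.4" |
    Set-AzureRmRouteTable
Get-AzureRmRouteTable -Name "Spoke2RouteTable" -ResourceGroupName $rg |
    Add-AzureRmRouteConfig -Name "ToSpoke1" -AddressPrefix "10.5.0.0/16" -NextHopType VirtualAppliance -NextHopIpAddress "10.4.1.4" |
    Set-AzureRmRouteTable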

7.) Let’s now associate these Route tables with our Virtual Networks

  • Navigate to the Spoke1Network and in the “Subnets” section of the blade, select the default subnet

14

In the Route table field, select Spoke1RouteTable and click Save.

15

  • Repeat the above step to associate Spoke2RouteTable with the Spoke2 Virtual Network

We have now completed the required steps to ensure that both SPOKE virtual networks are able to communicate with each other via the HUB.

Testing

  • In order to test our configurations, let’s provision a virtual machine in each of the Spoke networks and conduct a simple ping test

1.) Provision a basic Virtual Machine in each of the Spoke networks

2.) Run the following PowerShell command in each VM to allow ICMP ping through the Windows firewall, as it is blocked by default:

New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 -Direction Inbound -Action Allow

3.) In my testing the VMs had the following internal IP addresses:

The VM running in Spoke 1 network: 10.5.0.4

The VM running in Spoke 2 network: 10.6.0.4

16

Pinging 10.6.0.4 from 10.5.0.4 returns a successful response!

Why is the Azure Load Balancer NOT working?

Context

For most workloads that I've deployed in Azure that required load balancing, the Azure Load Balancer (ALB) used in those architectures was left with the out-of-the-box experience, or default configuration. The load balancer service is great like that, whereby for the majority of scenarios it just works out of the box. I'm sure this isn't an Azure-only experience either. The other public cloud providers have a great out-of-the-box load balancing service that would work with just about any service without in-depth configuration.

You can see that I’ve been repetitive on the point around out of the box experience. This is where I think I’ve become complacent in thinking that this out of the box experience should work in the majority of circumstances.

My problem, as outlined in this blog post, is that I’ve experienced both the Azure Service Manager (ASM) load balancer and the Azure Resource Manager (ARM) load balancer not working as intended…

UPDATE 2018-07-13 – The circumstances in both of the examples given in this blog post assume that the workloads in the backend pool are configured correctly AND that the Azure Load Balancer is also configured correctly as per Microsoft recommendations. The specific solution that I’ve outlined came about after making sure that all the settings were checked, checked again and also had Microsoft Premier Support validate the config.

Azure Load Balancer

From what I've been told by Microsoft Premier Support, the Azure Load Balancer has had a 5-tuple distribution algorithm, based on source IP, source port, destination IP, destination port and protocol type, since its inception. However, that was certainly not the whole story, as I'll explain below. While this 5-tuple mode should in theory work well with just about any scenario, because at the end of the day the distribution is still round robin between endpoints, the stickiness of sessions to those endpoints comes into play, and that can cause some issues.

In a blog post from way back when, Microsoft outlines that to accommodate RDS Gateway, the distribution mode options for the ALB have been updated. There are a total of 3 distribution modes: 5-tuple (mentioned before), 3-tuple (source IP, destination IP and protocol) and 2-tuple (source IP and destination IP).

Now that we have established the configuration options and roughly when they came about, let's get stuck into the impact in two scenarios, months apart and in both ASM (Classic) and ARM deployment modes…

 

The problem

Earlier this year at a customer, we ran into a problem where a number of Azure workloads were hard reset. This was caused either by an outage in the region or by some scheduled or unscheduled maintenance that had to occur. Nothing too serious sounding, until we found that the Network Device Enrolment Server (NDES) was not able to accept traffic from the Web Application Proxy (WAP) server that was inline and "north" of the server. The WAP itself was in a Cloud Service (so an ASM/Classic environment here) where there were multiple WAP servers that leveraged Load Balanced Sets (or the Azure Internal Load Balancer) as part of the Cloud Service.

The odd thing that happened was that after the outage/scheduled/unscheduled maintenance, inbound NDES traffic via the WAP suddenly became erratic. Certificates that were requested via NDES (from Intune in this circumstance) were for the most part not being completed. So ensued a long and enjoyable Microsoft Premier case that involved the Azure Product Group (sarcasm intended).

The solution

I’ll keep this short and sweet as I would rather save you, dear reader, the time of not having to relive that incident. The outcome was as follows:

It was determined that the ASM Cloud Service Load Balanced Set (or Azure Load Balancer / Azure Internal Load Balancer) configuration was set to the out-of-the-box default of 5-tuple distribution. While this implementation of the WAP + NDES solution had been in production for at least 2-3 years, working without fault or issue, it was not the correct configuration. It was determined that the correct configuration for this setup was to leverage either 2-tuple (source and destination IPs) distribution or 3-tuple (source IP, destination IP and protocol) distribution. We went with the more specific 3-tuple and that resolved the connectivity issues.

The PowerShell to execute this solution (setting the ASM Cloud Service LBSet to 3-tuple, source IP+destinationIP+protocol) is as follows:

Set-AzureLoadBalancedEndpoint -ServiceName "[CloudServiceX]" -LBSetName "[LBSetX]" -Protocol tcp -LoadBalancerDistribution "sourceIPProtocol"

The specific parameter (-LoadBalancerDistribution), which sets the load balancer distribution algorithm, has the following valid values:

  • sourceIP. A two tuple affinity: Source IP, Destination IP
  • sourceIPProtocol. A three tuple affinity: Source IP, Destination IP, Protocol
  • none. A five tuple affinity: Source IP, Source Port, Destination IP, Destination Port, Protocol

Round 2: The problem happened again

Recently, in another work stream, we ran into the same issue. However, the circumstances were slightly different. The problem parameters this time were:

  • An Azure Resource Manager (ARM) environment, not ASM or Classic
    • So, we were using the individual resource of the Azure Internal Load Balancer
    • Much more configuration available here
    • Again though- 5-tuple is the default “Source IP affinity mode” (note: no longer called simply the distribution mode in ARM)
  • The workload WAS NOT NDES
  • The workload WAS deployed on Windows Server 2016 again, same as before
    • I'm not sure if that is a coincidence or not

Round 2: Solution

Having gone through the load balancing distribution mode issue only a few months earlier, I had it fresh in my mind and suggested we investigate that. After the parameter was changed, in this second instance, we were able to resolve the issue again and get the workload working as intended via the Azure Load Balancer.

With Azure Resource Manager, there's a couple of ways you can go about the configuration change. The most common way would be to change the JSON template, which is quick and easy. Below is an example of the section around load balancing rules, which has the specific "loadDistribution" parameter that would need to be changed. The ARM load balancer has basically the same configuration options as its ASM counterpart for this setting: sourceIP and sourceIPProtocol. The only difference is that there is a "Default" option, which is the ARM equivalent of the ASM "none" (default = 5-tuple).

"loadBalancingRules": [
{
"name": "[concat(parameters('loadBalancers_EXAMPLE_name')]",
"etag": "W/\"[XXXXXXXXXXXXXXXXXXXXXXXXXXX]\"",
"properties": {
"provisioningState": "Succeeded",
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_EXAMPLE_id')]"
},
"frontendPort": XX,
"backendPort": XX,
"enableFloatingIP": false,
"idleTimeoutInMinutes": X,
"protocol": "TCP",
"loadDistribution": "SourceIP",
"backendAddressPool": {
"id": "[parameters('loadBalancers_EXAMPLE_id_1')]"
},
"probe": {
"id": "[parameters('loadBalancers_EXAMPLE_id_2')]"
}
}
}
],

The alternative option would be to just use PowerShell. To do that, you can execute the following (the rule name placeholder needs to match your existing load balancing rule):

# The rule config change must be pushed back with Set-AzureRmLoadBalancer to take effect
Get-AzureRmLoadBalancer -Name [LBName] -ResourceGroupName [RGName] |
    Set-AzureRmLoadBalancerRuleConfig -Name [RuleName] -LoadDistribution "[Parameter]" |
    Set-AzureRmLoadBalancer

 

Conclusion and final thoughts

For the most part I would usually go with the default for any configuration. Through this exercise, though, I have come to question the load balancing requirements, specifically around the distribution mode, to avoid any possible fault. Certainly, this is a practice that should extend to every aspect of Azure. The only challenge is balancing questioning every configuration against simply going with the defaults. Happy balancing! (Pun intended).


This was originally posted on Lucian.Blog by Lucian.

Follow Lucian on Twitter @LucianFrango.

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well why would you want to do that?

  • On-demand compute:
    • You may not have access to a physical server, or your computer isn't powerful enough to handle huge data processing, or you definitely do not want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute:
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it's super cheap.
  • It's super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details around that. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying a SQL database, whether it's in Azure or not, isn't that complex. In fact, this part of the post is just to get us started. For this part, we're going to do something simple, because if you want to get things done, you need the fastest way of doing it. And that is what we are going to do. But here's a quick summary of the ways I thought of doing it:

    1. Using 'invoke-sqlcmd2' (this part of the blog). It's super quick and easy to set up and it helps get things done quickly.
    2. Creating your own SQL connection object to support complex SQL querying scenarios. [[ This is where the magic kicks in.. Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL database called 'myDB', inside an Azure SQL server 'mytestAzureSQL.database.windows.net'.
  • It's a simple database containing a single table 'test_table'. This table has three columns (Id, Name, Age) and contains only two records.
  • We've set up 'Allow Azure Services' access on this database in the firewall rules. Here's how to do that, just in case:
    • Search for your database resource.
    • Click on 'Set firewall rules' from the top menu.
    • Ensure the option 'Allow Azure Services' is set to 'ON'.
  • We have an Azure Automation account set up. We'll be using that to test our code.

Now let's get this up and running.

Start by creating two variables, one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL login username and password. You need this, as you definitely should not be thinking of storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.
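If you prefer to script that setup rather than click through the portal, here is a hedged sketch using the AzureRM.Automation module. The Automation account and resource group names are placeholders; the asset names match the ones used in the runbook below:

$aa = "MyAutomationAccount"; $rg = "MyResourceGroup"   # placeholder names
New-AzureRmAutomationVariable -AutomationAccountName $aa -ResourceGroupName $rg -Name "AzureSQL_ServerName" -Value "mytestAzureSQL.database.windows.net" -Encrypted $false
New-AzureRmAutomationVariable -AutomationAccountName $aa -ResourceGroupName $rg -Name "AzureSQL_DBname" -Value "myDB" -Encrypted $false
$sqlCred = Get-Credential   # prompts for the SQL login username and password
New-AzureRmAutomationCredential -AutomationAccountName $aa -ResourceGroupName $rg -Name "mySqllogin" -Value $sqlCred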

Now you need to import the ‘invoke-sqlcmd2‘ module in the automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now from the main menu of the Automation account, click on the 'Runbooks' tab and then 'Add a Runbook'. Give it a name and use 'PowerShell' as the type. Now you need to edit the runbook; to do that, click on the pencil icon in the top menu to get into the editing pane.

Inside the pane, paste the following code. (I’ll go through the details don’t worry).

#Import your Credential object from the Automation Account
 
 $SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin" #Imports your Credential object from the Automation Account
 
 #Import the SQL Server Name from the Automation variable.
 
 $SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName" #Imports the SQL Server Name from the Automation variable.
 
 #Import the SQL DB from the Automation variable.
 
 $SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet ‘Get-AutomationPSCredential‘ is to retrieve the automation credential object we just created.
    • The next two cmdlets ‘Get-AutomationVariable‘ are to retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now lets query our database. To do that, paste the below code after the section above.

#Query to execute
 
 $Query = "select * from Test_Table"
 
 "----- Test Result BEGIN "
 
 # Invoke to Azure SQL DB
 
 invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt
 
 "`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is too simple but you can put in there whatever you want.
    • We’ve executed the cmdlet ‘invoke-sqlcmd2‘. Now if you noticed, we didn’t have to import the module we’ve just installed, Azure automation takes care of that upon every execution.
    • In the cmdlet parameter set, we specified the SQL server (retrieved from the Automation variable) and the database name (also an Automation variable). We then used the credential object we imported from Azure Automation and, finally, the query variable we created. An optional switch parameter '-Encrypt' can be used to encrypt the connection to the SQL server.

Lets run the code and look at the output!

From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.

Initially the code goes through the following stages for execution

  • Queuing
  • Starting
  • Running
  • Completed

Now what’s the final result? Look at the black box and you should see something like this.

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet right? Now the output we’re getting here is an object of type ‘Datarow‘. If you wrap this query into a variable, you can start to do some cool stuff with it like

$Result.count or

$Result.Age or even

$Result | where-object -Filterscript {$PSItem.Age -gt 10}
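For example, a quick hedged sketch of capturing the result first:

$Result = invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt
$Result.Count
$Result | Where-Object -FilterScript { $PSItem.Age -gt 10 }   # both sample rows match, since their ages are 18 and 25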

Now imagine if you could do so much more complex things with this.

Quick Hint:

If you include a ‘-debug’ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with debugging option ON 🙂

Stay tuned for Part 2!!

 

PowerShell gotcha when connecting ASM Classic VNETs to ARM ExpressRoute

Recently I was working on an Azure ExpressRoute configuration change that required an uplift from a 1Gb circuit to a 10Gb circuit. Now that's nothing interesting, but of note was using some PowerShell to execute a cmdlet.

A bit of a back story to set the scene here; and I promise it will be brief.

You can no longer provision Azure ExpressRoute circuits in the Classic or ASM deployment model. All ExpressRoute circuits that are provisioned now are Azure Resource Manager (ARM) deployments. So there is a very grey area around which cmdlets to run on which PowerShell module.

The problem

The environment I was working with had a mixture of ASM and ARM deployments. The ExpressRoute circuit was re-created in ARM. When attempting to follow the Microsoft documentation to connect a VNET to that new circuit (available on this docs.microsoft.com page), I ran the following commands:

Get-AzureDedicatedCircuitLink 
New-AzureDedicatedCircuitLink -ServiceKey "[XXXX-XXXX-XXXX-XXXX-XXXX]" -VNetName "[MyVNet]"

PowerShell threw out the following error (even after doing the end-to-end process in a new PS ISE session):

Get-AzureDedicatedCircuit : Object reference not set to an instance of an object. 
At line:1 char:1 + 
Get-AzureDedicatedCircuit 
+ ~~~~~~~~~~~~~~~~~~~~~~~~~     
+ CategoryInfo          : NotSpecified: (:) [Get-AzureDedicatedCircuit], NullReferenceException     
+ FullyQualifiedErrorId : System.NullReferenceException,Microsoft.WindowsAzure.Commands.ExpressRoute.GetAzureDedicatedCircuitCommand

Solution

This is actually quite a tricky one, as there's little to no documentation that outlines this specifically. It's not too difficult though, as there are only two things you need to do:

  1. Ensure you have version 5.1.1 of the Azure PowerShell modules
    • Ensure you also have the Azure ExpressRoute PowerShell module
  2. Execute all the Import-Module commands, in the right order, as per below

Note: If you have a newer version of the Azure PowerShell modules, like I did (version 5.2.0), import the older module as per below.

Import-Module "C:\Program Files\WindowsPowerShell\Modules\Azure\5.1.1\Azure.psd1"
Import-Module "C:\Program Files\WindowsPowerShell\Modules\Azure\5.1.1\ExpressRoute\ExpressRoute.psd1

Then you can execute the New-AzureDedicatedCircuitLink command and connect your ASM VNET to ARM ExpressRoute.

Cheers!


Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.

Set up a Microsoft Graph App for Office 365 and SharePoint Online management to use in Azure Functions, Flow, .Net solutions and much more

The Microsoft Graph API can be used to connect to and manage the Office 365 SaaS platforms such as SharePoint Online, Office 365 Groups, OneDrive, OneNote, Azure AD, Teams (in beta) and much more.

A Graph app is an Azure AD app that has privileges (with provided permissions) to authenticate and then execute operations when using PowerShell, Azure Functions, Flow, Office Online CSOM, SharePoint Online and many other tools.

It is quite easy to set up a Graph app; below is a brief overview of the process.

1. Go to the following link in your tenancy: https://apps.dev.microsoft.com/ and create an app. Below is a brief screenshot of the App registration page.

GraphAppRegistrationScreen1

2. Then, first create a password and make sure to copy it, because it will be shown at that time only. Also copy the App ID for any future use.

GraphAppRegistrationScreen2

3. Then select the platforms that will be used to call the Graph app. For web calls use Web, and for PowerShell scripts use Native as the platform. You can leave the fields in the apps blank unless there is a specific endpoint that you would like to refer to. More information is provided at this link: https://developer.microsoft.com/en-us/graph/docs/concepts/auth_register_app_v2

Note: Turn off “Allow Implicit Flow” for web calls

4. Add the owners that could manage the App in Azure AD.
5. Next, select the proper application permissions that the app will need to run the actions. These settings are very important for your app to make the right calls, so try to set the appropriate permissions. In some cases, it might be necessary to trial various app permission levels until you get it correct.

Note: For admin programs or scripts, it will be necessary to get admin consent via the URL below.

https://login.microsoftonline.com/[tenant]/adminconsent?client_id=[clientid]&state=[something]

GraphAppRegistrationScreen7

6. Leave the other fields as is and create the app. We can turn off 'Live SDK support' if it's not needed.

GraphAppRegistrationScreen6

7. The App will show up in the Apps home page once created

GraphAppMyApplications

After the Graph app is created, we can use it to perform various operations on Office 365 platforms. More details of the various operations are here: https://developer.microsoft.com/en-us/graph/docs/concepts/overview.
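As an example, here is a hedged PowerShell sketch of the client credentials flow against the v2.0 endpoint. The tenant ID, App ID and password are placeholders, and the Graph call assumes the app has been granted an application permission such as Group.Read.All with admin consent:

$body = @{
    grant_type    = "client_credentials"
    client_id     = "<App ID>"
    client_secret = "<App password>"
    scope         = "https://graph.microsoft.com/.default"
}
$token = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" -Body $body
# Call Microsoft Graph with the issued access token
Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/groups" -Headers @{ Authorization = "Bearer $($token.access_token)" }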

There is also the beta endpoint (https://graph.microsoft.com/beta/), which has more upcoming Graph API features.

Conclusion:

In the above blog, we saw how we can create a Graph app that will allow us to connect to the Graph API and perform operations with it.

Global Azure Bootcamp 2018 – Creating the Internet of YOUR Things

Today is the 6th Global Azure Bootcamp and I presented at the Sydney Microsoft office on Creating the Internet of YOUR Things.

In my session I gave an overview on where IoT is going and some of the amazing things we can look forward to (maybe). I then covered a number of IoT devices that you can buy now that can enrich your life.

I then moved on to building IoT devices and leveraging Azure, the focus of my presentation: how to get started quickly with devices, integration and automation. I provided a working example based on my previous posts Integrating Azure IoT Devices with MongooseOS MQTT and PowerShell, Building a Teenager Notification Service using Azure IoT, an Azure Function, Microsoft Flow, Mongoose OS and a Micro Controller, and Adding a Display to the Teenager Notification Service Azure IoT Device.

I provided enough information and hopefully inspiration to get you started.

Here is my presentation.

 

Demystifying Managed Service Identities on Azure

Managed service identities (MSIs) are a great feature of Azure that are being gradually enabled on a number of different resource types. But when I’m talking to developers, operations engineers, and other Azure customers, I often find that there is some confusion and uncertainty about what they do. In this post I will explain what MSIs are and are not, where they make sense to use, and give some general advice on how to work with them.

What Do Managed Service Identities Do?

A managed service identity allows an Azure resource to identify itself to Azure Active Directory without needing to present any explicit credentials. Let’s explain that a little more.

In many situations, you may have Azure resources that need to securely communicate with other resources. For example, you may have an application running on Azure App Service that needs to retrieve some secrets from a Key Vault. Before MSIs existed, you would need to create an identity for the application in Azure AD, set up credentials for that application (also known as creating a service principal), configure the application to know these credentials, and then communicate with Azure AD to exchange the credentials for a short-lived token that Key Vault will accept. This requires quite a lot of upfront setup, and can be difficult to achieve within a fully automated deployment pipeline. Additionally, to maintain a high level of security, the credentials should be changed (rotated) regularly, and this requires even more manual effort.

With an MSI, in contrast, the App Service automatically gets its own identity in Azure AD, and there is a built-in way that the app can use its identity to retrieve a token. We don’t need to maintain any AD applications, create any credentials, or handle the rotation of these credentials ourselves. Azure takes care of it for us.

It can do this because Azure can identify the resource – it already knows where a given App Service or virtual machine ‘lives’ inside the Azure environment, so it can use this information to allow the application to identify itself to Azure AD without the need for exchanging credentials.

What Do Managed Service Identities Not Do?

Inbound requests: One of the biggest points of confusion about MSIs is whether they are used for inbound requests to the resource or for outbound requests from the resource. MSIs are for the latter – when a resource needs to make an outbound request, it can identify itself with an MSI and pass its identity along to the resource it’s requesting access to.

MSIs pair nicely with other features of Azure resources that allow for Azure AD tokens to be used for their own inbound requests. For example, Azure Key Vault accepts requests with an Azure AD token attached, and it evaluates which parts of Key Vault can be accessed based on the identity of the caller. An MSI can be used in conjunction with this feature to allow an Azure resource to directly access a Key Vault-managed secret.

Authorization: Another important point is that MSIs are only directly involved in authentication, and not in authorization. In other words, an MSI allows Azure AD to determine what the resource or application is, but that by itself says nothing about what the resource can do. For some Azure resources, that is managed by Azure's own Identity and Access Management (IAM) system. Key Vault is one exception: it maintains its own access control system, and is managed outside of Azure's IAM. For non-Azure resources, we could communicate with any authorisation system that understands Azure AD tokens; an MSI will then just be another way of getting a valid token that an authorisation system can accept.

Another important point to be aware of is that the target resource doesn’t need to run within the same Azure subscription, or even within Azure at all. Any service that understands Azure Active Directory tokens should work with tokens for MSIs.

How to Use MSIs

Now that we know what MSIs can do, let’s have a look at how to use them. Generally there will be three main parts to working with an MSI: enabling the MSI; granting it rights to a target resource; and using it.

  1. Enabling an MSI on a resource. Before a resource can identify itself to Azure AD, it needs to be configured to expose an MSI. The way that you do this will depend on the specific resource type you're enabling the MSI on. In App Services, an MSI can be enabled through the Azure Portal, through an ARM template, or through the Azure CLI, as documented here. For virtual machines, an MSI can be enabled through the Azure Portal or through an ARM template. Other MSI-enabled services have their own ways of doing this.

  2. Granting rights to the target resource. Once the resource has an MSI enabled, we can grant it rights to do something. The way that we do this is different depending on the type of target resource. For example, Key Vault requires that you configure its Access Policies, while to use the Event Hubs or the Azure Resource Manager APIs you need to use Azure’s IAM system. Other target resource types will have their own way of handling access control.

  3. Using the MSI to issue tokens. Finally, now that the resource’s MSI is enabled and has been granted rights to a target resource, it can be used to actually issue tokens so that a target resource request can be issued. Once again, the approach will be different depending on the resource type. For App Services, there is an HTTP endpoint within the App Service’s private environment that can be used to get a token, and there is also a .NET library that will handle the API calls if you’re using a supported platform. For virtual machines, there is also an HTTP endpoint that can similarly be used to obtain a token. Of course, you don’t need to specify any credentials when you call these endpoints – they’re only available within that App Service or virtual machine, and Azure handles all of the credentials for you.
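As an illustration of step 3, here is a hedged PowerShell sketch of the App Service token endpoint (run from inside the App Service, for example in an Azure Functions PowerShell script or the Kudu console); the target resource here is Key Vault:

# MSI_ENDPOINT and MSI_SECRET are injected into the App Service environment when the MSI is enabled
$tokenUri = "$($env:MSI_ENDPOINT)?resource=https://vault.azure.net&api-version=2017-09-01"
$token    = Invoke-RestMethod -Method Get -Uri $tokenUri -Headers @{ Secret = $env:MSI_SECRET }
$token.access_token   # bearer token to present to Key Vault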

Finding an MSI’s Details and Listing MSIs

There may be situations where we need to find our MSI’s details, such as the principal ID used to represent the application in Azure AD. For example, we may need to manually configure an external service to authorise our application to access it. As of April 2018, the Azure Portal shows MSIs when adding role assignments, but the Azure AD blade doesn’t seem to provide any way to view a list of MSIs. They are effectively hidden from the list of Azure AD applications. However, there are a couple of other ways we can find an MSI.

If we want to find a specific resource’s MSI details then we can go to the Azure Resource Explorer and find our resource. The JSON details for the resource will generally include an identity property, which in turn includes a principalId:

Screenshot 1

That principalId is the object ID of the service principal, and can be used for role assignments.

Another way to find and list MSIs is to use the Azure AD PowerShell cmdlets. The Get-AzureRmADServicePrincipal cmdlet will return back a complete list of service principals in your Azure AD directory, including any MSIs. MSIs have service principal names starting with https://identity.azure.net, and the ApplicationId is the client ID of the service principal:

Screenshot 2
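For example, a small hedged filter over that output:

# List only the MSI service principals and their client IDs
Get-AzureRmADServicePrincipal |
    Where-Object { $_.ServicePrincipalNames -match "^https://identity\.azure\.net" } |
    Select-Object DisplayName, ApplicationId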

Now that we’ve seen how to work with an MSI, let’s look at which Azure resources actually support creating and using them.

Resource Types with MSI and AAD Support

As of April 2018, there are only a small number of Azure services with support for creating MSIs, and of these, currently all of them are in preview. Additionally, while it’s not yet listed on that page, Azure API Management also supports MSIs – this is primarily for handling Key Vault integration for SSL certificates.

One important note is that for App Services, MSIs are currently incompatible with deployment slots – only the production slot gets assigned an MSI. Hopefully this will be resolved before MSIs become fully available and supported.

As I mentioned above, MSIs are really just a feature that allows a resource to assume an identity that Azure AD will accept. However, in order to actually use MSIs within Azure, it’s also helpful to look at which resource types support receiving requests with Azure AD authentication, and therefore support receiving MSIs on incoming requests. Microsoft maintain a list of these resource types here.

Example Scenarios

Now that we understand what MSIs are and how they can be used with AAD-enabled services, let’s look at a few example real-world scenarios where they can be used.

Virtual Machines and Key Vault

Azure Key Vault is a secure data store for secrets, keys, and certificates. Key Vault requires that every request is authenticated with Azure AD. As an example of how this might be used with an MSI, imagine we have an application running on a virtual machine that needs to retrieve a database connection string from Key Vault. Once the VM is configured with an MSI and the MSI is granted Key Vault access rights, the application can request a token and can then get the connection string without needing to maintain any credentials to access Key Vault.
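A hedged sketch of that token request from inside the VM, using the instance metadata service identity endpoint (exactly how the token endpoint is surfaced depends on how the VM's MSI has been set up):

Invoke-RestMethod -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net"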

API Management and Key Vault

Another great example of an MSI being used with Key Vault is Azure API Management. API Management creates a public domain name for the API gateway, to which we can assign a custom domain name and SSL certificate. We can store the SSL certificate inside Key Vault, and then give Azure API Management an MSI and access to that Key Vault secret. Once it has this, API Management can automatically retrieve the SSL certificate for the custom domain name straight from Key Vault, simplifying the certificate installation process and improving security by ensuring that the certificate is not directly passed around.

Azure Functions and Azure Resource Manager

Azure Resource Manager (ARM) is the deployment and resource management system used by Azure. ARM itself supports AAD authentication. Imagine we have an Azure Function that needs to scan our Azure subscription to find resources that have recently been created. In order to do this, the function needs to log into ARM and get a list of resources. Our Azure Functions app can expose an MSI, and so once that MSI has been granted reader rights on the resource group, the function can get a token to make ARM requests and get the list without needing to maintain any credentials.

App Services and Event Hubs/Service Bus

Event Hubs is a managed event stream. Communication to both publish onto, and subscribe to events from, the stream can be secured using Azure AD. An example scenario where MSIs would help here is when an application running on Azure App Service needs to publish events to an Event Hub. Once the App Service has been configured with an MSI, and Event Hubs has been configured to grant that MSI publishing permissions, the application can retrieve an Azure AD token and use it to post messages without having to maintain keys.

Service Bus provides a number of features related to messaging and queuing, including queues and topics (similar to queues but with multiple subscribers). As with Event Hubs, an application could use its MSI to post messages to a queue or to read messages from a topic subscription, without having to maintain keys.

App Services and Azure SQL

Azure SQL is a managed relational database, and it supports Azure AD authentication for incoming connections. A database can be configured to allow Azure AD users and applications to read or write specific types of data, to execute stored procedures, and to manage the database itself. When coupled with an App Service with an MSI, Azure SQL’s AAD support is very powerful – it reduces the need to provision and manage database credentials, and ensures that only a given application can log into a database with a given user account. Tomas Restrepo has written a great blog post explaining how to use Azure SQL with App Services and MSIs.

Summary

In this post we’ve looked into the details of managed service identities (MSIs) in Azure. MSIs provide some great security and management benefits for applications and systems hosted on Azure, and enable high levels of automation in our deployments. While they aren’t particularly complicated to understand, there are a few subtleties to be aware of. As long as you understand that MSIs are for authentication of a resource making an outbound request, and that authorisation is a separate thing that needs to be managed independently, you will be able to take advantage of MSIs with the services that already support them, as well as the services that may soon get MSI and AAD support.

Deploy active/active FortiGate NGFW in Azure

I recently was tasked with deploying two Fortinet FortiGate firewalls in Azure in a highly available active/active model. I quickly discovered that there are currently only two deployment types available in the Azure Marketplace: a single VM deployment and a high availability deployment (which is an active/passive model and wasn't what I was after).

FG NGFW Marketplace Options

I did some digging around on the Fortinet support sites and discovered that you can achieve an active/active model in Azure using dual load balancers (a public and an internal Azure load balancer), as indicated in this Fortinet document: https://www.fortinet.com/content/dam/fortinet/assets/deployment-guides/dg-fortigate-high-availability-azure.pdf.

Deployment

To achieve an active/active model you must deploy two separate FortiGates using the single VM deployment option and then deploy the Azure load balancers separately.

I will not be going through how to deploy the FortiGates and the required VNets, subnets, route tables, etc. as that information can be found on Fortinet's support site: http://cookbook.fortinet.com/deploying-fortigate-azure/.

NOTE: When deploying each FortiGate, ensure they are deployed into different frontend and backend subnets, otherwise the route tables will end up routing all traffic to one FortiGate.

Once you have two FortiGates, a public load balancer and an internal load balancer deployed in Azure, you are ready to configure the FortiGates.

Configuration

NOTE: Before proceeding, ensure you have configured static routes for all your Azure subnets on each FortiGate, otherwise the FortiGates will not be able to route Azure traffic correctly.

Outbound traffic

Directing all internet traffic from Azure via the FortiGates requires some configuration on the Azure internal load balancer and a user defined route.

  1. Create a load balance rule with:
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Health probe: Health probe port (e.g. port 22)
    • Session Persistence: Client IP
    • Floating IP: Enabled
  2. Repeat step 1 for port 80 and any other ports you require
  3. Create an Azure route table with a default route to the Azure internal load balancer IP address
  4. Assign the route table to the required Azure subnets

IMPORTANT: In order for the load balance rules to work, you must add a static route on each FortiGate for the IP address 168.63.129.16. This is required for the Azure health probe to communicate with the FortiGates and perform health checks.

FG Azure Health Probe Cfg

Once complete the outbound internet traffic flow will be as follows:

FG Internet Traffic Flow

Inbound traffic

Publishing something like a web server to the internet using the FortiGates requires some configuration on the Azure public load balancer.

Let’s say I have a web server that resides on my Azure DMZ subnet that hosts a simple website on HTTPS/443. For this example the web server has IP address: 172.1.2.3.

  1. Add an additional public IP address to the Azure public load balancer (for this example let’s say the public IP address is: 40.1.2.3)
  2. Create a load balance rule with:
    • Frontend IP address: 40.1.2.3
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Session Persistence: Client IP
  3. On each FortiGate create a VIP address with:
    • External IP Address: 40.1.2.3
    • Mapped IP Address: 172.1.2.3
    • Port Forwarding: Enabled
    • External Port: 443
    • Mapped Port: 443

FG WebServer VIP Cfg

You can now create a policy on each FortiGate to allow HTTPS to the VIP you just created; HTTPS traffic will then be allowed through to your web server.

For details on how to create policies/VIPs on FortiGates, refer to the Fortinet support website: http://cookbook.fortinet.com.

Once complete the traffic flow to the web server will be as follows:

FG Web Traffic Flow