How to create and auto update route tables in Azure for your local Azure datacentre with Azure Automation, bypassing firewall appliances

Originally posted on Lucian’s blog, at clouduccino.com. Follow Lucian on Twitter @LucianFrango.


When deploying an “edge” or “perimeter” network in Azure, by way of a peered edge VNET or an edge subnet, you’ll likely want to deploy virtual firewall appliances of some kind to manage and control that ingress and egress traffic. This comes at a cost though. The cost is that Azure services are generally accessed via public IP addresses or hosts, even from within Azure. The most common of these, and one that has come up recently, is Azure Blob storage.

If you have ExpressRoute, you can get around this by implementing Public Peering. This essentially sends all traffic destined for Azure services to your ER gateway. A bottleneck? Perhaps.

The problem in detail

Recently I ran into a roadblock on a customer’s site around the use of Blob storage. I designed an edge network that met certain traffic monitoring requirements. Azure NSGs were not able to meet all requirements, so something considerably more complex and time consuming was implemented. It’s IT; isn’t that what always happens, you may ask?

Here are some reference blog posts:

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

Lessons learned from deploying Cisco Firepower Threat Defence firewall virtual appliances in Azure, a brain dump

We deployed Cisco Firepower Threat Defence virtual appliance firewalls in an edge VNET. Our subnets had route tables with a default route of 0.0.0.0/0 directed to the “tag” “VirtualAppliance”. So all traffic to a host or network not known by Azure is directed to the firewall(s). How that can be achieved is another blog post.

When implementing this solution, Azure services that are accessed via an external or public IP address or host name, most commonly Blob storage (accessed via blah.blob.core.windows.net), also get directed to the Cisco FTDs. Not a big problem: create some rules to allow the traffic flow and we’re cooking with gas.

Not exactly the case, as the FTDvs have a NIC throughput of 2 GiB per second. That’s plenty fast, but when you have a lot of workloads, a lot of user traffic and a lot of writes to Blob storage, bottlenecks can occur.

The solution

As I mentioned earlier, this can be tackled quickly through a number of methods. The methods discarded in this situation are as follows:

  • Implement ExpressRoute
    • Through ExpressRoute enable public peering
    • All traffic to Azure infrastructure is directed to the gateway
    • This is a “single device” that I have heard whispers is a virtual Cisco appliance similar to a common enterprise router
    • Quick to implement and, for most cases, the throughput bottleneck isn’t reached and you’re fine
  • Implement firewall rules
    • Allow traffic to Microsoft IP ranges
    • Manually enter those IP ranges into the firewall
      • These are subject to change, so a maintenance or managed services runbook should be implemented to do this on a regular basis
    • Or enable URL filtering and basically allow traffic to certain URIs or URLs

Like I said, both above will work.

The solution in this blog post is a little bit more technical but does away with the above. Rather than any manual work, let’s automate this process through Azure Automation. Thankfully, this isn’t something new, but it isn’t something that is used often. Through the use of pre-configured Azure Automation modules and PowerShell scripts, a scheduled weekly or monthly (or whatever you like) runbook downloads the publicly available Microsoft .xml file that lists all subnets and IP addresses used in Azure, then uses that file to update a route table (which the script creates) with routes to the Azure subnets and IPs in a specified region.

This process does away with any manual intervention and works to the ethos “work smarter, not harder”. I like that very much, and no, that is not being lazy. It’s being smart.

The high five

I’m certainly not trying to take the credit for this, except for the minor tweak to the runbook code, so cheers to James Bannan (@JamesBannan) who wrote this great blog post (available here) on the solution. He’s put together the PowerShell script that expands on a PowerShell module written by Kieran Jacobson (@kjacobsen). Check out their Twitter feeds and blogs for all-round awesome content. Thank you and high five to both!

The process

I’m going to speed through this as it’s really straightforward and there’s nothing too complicated here. The only tricky part is the order of doing things. Stick to the order and you’re guaranteed to succeed:

  • Create a new automation user account
  • Create a new runbook
    • Quick create a new Powershell runbook
  • Go back to the automation account
  • Update some config:
    • Update the modules in the automation account – do this FIRST as there are dependencies on up-to-date modules (specifically, AzureRM.Network depends on an up-to-date AzureRM.Profile module)
    • By default you have these runbook modules:

    • Go to Browse Gallery
    • Select the following two modules, one at a time, and add to the automation user account
      • AzureRM.Network
      • AzurePublicIPAddress
        • This is the module created by Kieran Jacobson 
    • Once all are in, for the purposes of being borderline OCD, select Update Azure Modules
      • This should update all modules to the latest version, in case some are lagging a little behind
    • Let’s create some variables
      • Select Variables from the menu blade in the automation user account
      • The script will need the following variables for YOUR ENVIRONMENT
        • azureDatacenterRegions
        • VirtualNetworkName
        • VirtualNetworkRGLocation
        • VirtualNetworkRGName
      • For my sample script, I have resources in the Australia East (AustraliaEast) region
      • Enter in the variables that apply to you here (your RGs, VNET etc)
  • Let’s add the PowerShell to the runbook
    • Select the runbook
    • Select EDIT from the properties of the runbook (top horizontal menu)
    • Enter the following PowerShell:
      • this is my slightly modified version
$VerbosePreference = 'Continue'

### Authenticate with Azure Automation account

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred

"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

### Populate script variables from Azure Automation assets

$resourceGroupName = Get-AutomationVariable -Name 'virtualNetworkRGName'
$resourceLocation = Get-AutomationVariable -Name 'virtualNetworkRGLocation'
$vNetName = Get-AutomationVariable -Name 'virtualNetworkName'
$azureRegion = Get-AutomationVariable -Name 'azureDatacenterRegions'
$azureRegionSearch = '*' + $azureRegion + '*'

[array]$locations = Get-AzureRmLocation | Where-Object {$_.Location -like $azureRegionSearch}

### Retrieve the nominated virtual network and subnets (excluding the gateway subnet)

$vNet = Get-AzureRmVirtualNetwork `
 -ResourceGroupName $resourceGroupName `
 -Name $vNetName

[array]$subnets = $vnet.Subnets | Where-Object {$_.Name -ne 'GatewaySubnet'} | Select-Object Name

### Create and populate a new array with the IP ranges of each datacenter in the specified location

$ipRanges = @()

foreach($location in $locations){
 $ipRanges += Get-MicrosoftAzureDatacenterIPRange -AzureRegion $location.DisplayName
}

$ipRanges = $ipRanges | Sort-Object

### Iterate through each subnet in the virtual network
foreach($subnet in $subnets){

$RouteTableName = $subnet.Name + '-RouteTable'

$vNet = Get-AzureRmVirtualNetwork `
 -ResourceGroupName $resourceGroupName `
 -Name $vNetName

### Create a new route table if one does not already exist
 if ((Get-AzureRmRouteTable -Name $RouteTableName -ResourceGroupName $resourceGroupName) -eq $null){
 $RouteTable = New-AzureRmRouteTable `
 -Name $RouteTableName `
 -ResourceGroupName $resourceGroupName `
 -Location $resourceLocation
 }

### If the route table exists, save as a variable and remove all routing configurations
 else {
 $RouteTable = Get-AzureRmRouteTable `
 -Name $RouteTableName `
 -ResourceGroupName $resourceGroupName
 $routeConfigs = Get-AzureRmRouteConfig -RouteTable $RouteTable
 foreach($config in $routeConfigs){
 Remove-AzureRmRouteConfig -RouteTable $RouteTable -Name $config.Name | Out-Null
 }
 }

### Create a routing configuration for each IP range and give each a descriptive name
 foreach($ipRange in $ipRanges){
 $routeName = ($ipRange.Region.Replace(' ','').ToLower()) + '-' + $ipRange.Subnet.Replace('/','-')
 Add-AzureRmRouteConfig `
 -Name $routeName `
 -AddressPrefix $ipRange.Subnet `
 -NextHopType Internet `
 -RouteTable $RouteTable | Out-Null
 }

### Add default route for Edge Firewalls
 Add-AzureRmRouteConfig `
 -Name 'DefaultRoute' `
 -AddressPrefix 0.0.0.0/0 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable
 
### Include a routing configuration to give direct access to Microsoft's KMS servers for Windows activation
 Add-AzureRmRouteConfig `
 -Name 'AzureKMS' `
 -AddressPrefix 23.102.135.246/32 `
 -NextHopType Internet `
 -RouteTable $RouteTable

### Apply the route table to the subnet
 Set-AzureRmRouteTable -RouteTable $RouteTable

$forcedTunnelVNet = $vNet.Subnets | Where-Object Name -eq $subnet.Name
 $forcedTunnelVNet.RouteTable = $RouteTable

### Update the virtual network with the new subnet configuration
 Set-AzureRmVirtualNetwork -VirtualNetwork $vnet -Verbose

}

How is this different from James’s?

I’ve made two changes to the original script. These changes are as follows:

I changed the authentication to use an Azure Automation account. This streamlined the deployment process so I could reuse the script across a number of subscriptions. This change was the following:

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred

"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

Secondly, I added an additional static route. This was for the default route (0.0.0.0/0) which is used to forward to our edge firewalls. This change was the following:

### Add default route for Edge Firewalls
 Add-AzureRmRouteConfig `
 -Name 'DefaultRoute' `
 -AddressPrefix 0.0.0.0/0 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable

You can re-use this section to add further custom static routes

Tying it all together

  • Hit the SAVE button and job done
    • Well, almost…
  • Next you should test the script
    • Select the TEST PANE from the top horizontal menu
    • A word of warning- this will go off and create the route tables and associate them with the subnets in your selected VNET!!!
    • Without testing though, you can’t confirm it works correctly
  • Should the test work out nicely, hit the publish button in the top hand menu
    • This gets the runbook ready to be used
  • Now go off and create a schedule
    • Azure public IP addresses can update often
    • It’s best to be ahead of the game and keep your route tables up to date
    • A regular schedule is recommended – I do once a week as the script only takes about 10-15min to run 
    • From the runbook top horizontal menu, select schedule
    • Create the schedule as desired
    • Save
  • JOB DONE! No really, that’s it!

Enjoy!

Best,

Lucian

Message retry patterns in Azure Functions

Azure Functions provide ServiceBus based trigger bindings that allow us to process messages dropped onto a SB queue or delivered to a SB subscription. In this post we’ll walk through creating an Azure Function using a ServiceBus trigger that implements a configurable message retry pattern.

Note: This post is not an introduction to Azure Functions nor an introduction to ServiceBus. For those not familiar with these Azure services, take a look at the Azure Documentation Centre.

Let’s start by creating a simple C# function that listens for messages delivered to a SB subscription.

[Screenshot: create azure function]

Azure Functions provide a number of ways we can receive the message into our functions, but for the purpose of this post we’ll use the BrokeredMessage type as we will want access to the message properties. See the above link for further options for receiving messages into Azure Functions via the ServiceBus trigger binding.

To use BrokeredMessage we’ll need to import the Microsoft.ServiceBus assembly and change the input type to BrokeredMessage.
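A minimal sketch of what the resulting run.csx might look like, assuming the Service Bus topic trigger binding is already configured in function.json (the C# script host auto-imports the WebJobs namespaces used here):

#r "Microsoft.ServiceBus"

using System;
using Microsoft.ServiceBus.Messaging;

// Fires once for each message delivered to the Service Bus subscription.
public static void Run(BrokeredMessage message, TraceWriter log)
{
    log.Info($"Received message: {message.MessageId}");
}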

If we did nothing else, our function would receive the message from SB, log its message ID and remove it from the queue. Actually, the SB trigger peeks the message off the queue, acquiring a peek lock in the process. If the function executes successfully, the message is removed from the queue. So what happens when things go pear shaped? Let’s add a pear and observe what happens.

Note: To send messages to the SB topic, I use Paolo Salvatori’s ServiceBus Explorer. This tool allows us to view queue and message properties mentioned in this post.

[Screenshot: retry by default]

Notice the function being triggered multiple times. This will continue until the SB queue’s MaxDeliveryCount is exceeded. By default, SB queues and topics have a MaxDeliveryCount of 10. Let’s output the delivery count in our function using a message property on the BrokeredMessage class so we can observe this in action.

[Screenshot: outputting delivery count]
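A sketch of that change: DeliveryCount is read straight off the BrokeredMessage, and the throw stands in for our “pear”:

public static void Run(BrokeredMessage message, TraceWriter log)
{
    log.Info($"Message {message.MessageId}, delivery count: {message.DeliveryCount}");

    // Simulate a processing failure so Service Bus redelivers the message.
    throw new Exception("Pear!");
}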

From the logs we see the message was retried 10 times until the maximum number of deliveries was reached and ServiceBus expired the message, sending it to the dead letter queue. “Ah hah!”, I hear you say. Implement message retries by configuring the MaxDeliveryCount property on the SB queue or subscription. Well, that will work as a simple, static retry policy but quite often we need a more configurable, dynamic approach. One based on the message context or type of exception caught by the processing logic.

Typical use cases include handling business errors (e.g. message validation errors, downstream processing errors etc.) vs transport errors (e.g. downstream service unavailable, request timeouts etc.). When handling business errors we may elect not to retry the failed message and instead move it to the dead letter queue. When handling transport errors we may wish to treat transient failures (e.g. database connections) and protocol errors (e.g. 503 service unavailable) differently. We may wish to retry transient failures a few times over a short period, whereas for protocol errors we might want to keep trying the service over an extended period.

To implement this capability we’ll create a shared function that determines the appropriate retry policy based on the context of the exception thrown. It will then check the number of retry attempts against the maximum defined by the policy. If the retry attempts have been exceeded, the message is moved to the dead letter queue; otherwise the function waits for the duration defined by the policy before throwing the original exception.
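Here’s a rough sketch of such a handler. The exception types, helper names and policy values are hypothetical and only illustrate the approach described above:

public class BusinessRuleException : Exception { public BusinessRuleException(string m) : base(m) { } }
public class TransientException : Exception { public TransientException(string m) : base(m) { } }
public class DownstreamProtocolException : Exception { public DownstreamProtocolException(string m) : base(m) { } }

public class RetryPolicy
{
    public int MaxRetries { get; set; }
    public TimeSpan RetryInterval { get; set; }
}

// Pick a policy based on the type of exception caught by the processing logic.
private static RetryPolicy GetRetryPolicy(Exception ex)
{
    if (ex is BusinessRuleException) return new RetryPolicy { MaxRetries = 0 };                                           // no retry, dead letter immediately
    if (ex is TransientException)    return new RetryPolicy { MaxRetries = 3, RetryInterval = TimeSpan.FromSeconds(10) }; // a few quick retries
    return new RetryPolicy { MaxRetries = 5, RetryInterval = TimeSpan.FromSeconds(3) };                                   // protocol errors
}

public static void HandleRetry(BrokeredMessage message, Exception ex, TraceWriter log)
{
    var policy = GetRetryPolicy(ex);

    if (message.DeliveryCount > policy.MaxRetries)
    {
        // Retries exhausted (or not permitted) - move the message to the dead letter queue.
        log.Info($"Dead-lettering message {message.MessageId} after {message.DeliveryCount} attempt(s).");
        message.DeadLetter("RetriesExceeded", ex.Message);
        return;
    }

    // Wait for the policy-defined interval, then rethrow so the lock is abandoned and the message is redelivered.
    System.Threading.Thread.Sleep(policy.RetryInterval);
    throw ex;
}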

Let’s change our function to throw mock exceptions and call our retry handler function to implement message retry policy.
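A sketch of the revised function; the “FailureType” custom message property is just an illustrative way of choosing which mock exception to throw from the test messages:

public static void Run(BrokeredMessage message, TraceWriter log)
{
    try
    {
        var failureType = message.Properties.ContainsKey("FailureType")
            ? (string)message.Properties["FailureType"]
            : string.Empty;

        if (failureType == "Business") throw new BusinessRuleException("Message failed validation.");
        if (failureType == "Protocol") throw new DownstreamProtocolException("503 Service Unavailable.");

        log.Info($"Message {message.MessageId} processed successfully.");
    }
    catch (Exception ex)
    {
        HandleRetry(message, ex, log);
    }
}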

Now our function implements some basic exception handling and checks for the appropriate retry policy to use. Let’s test that our retry policies work by throwing the different exceptions our policy supports.

Testing throwing our mock business rules exception…

[Screenshot: test mock business exception]

…we observe that the message is moved straight to the dead letter queue as per our defined policy.

Testing throwing our mock protocol exception…

[Screenshot: test mock protocol exception]

…we observe that we retry the message a total of 5 times, waiting 3 seconds between retries as per the defined policy for protocol errors.

Considerations  

  • Ensure your SB queues and subscriptions are defined with a MaxDeliveryCount greater than your maximum number of retries.
  • Ensure your SB queues and subscriptions are defined with a TTL period greater than your maximum retry interval.
  • Be aware that Consumption based service plans have a maximum execution duration of 5 minutes. App Service Plans don’t have this constraint. However, ensure the functionTimeout setting in the host.json file is greater than your maximum retry interval.
  • Also be aware that if you are using Consumption based plans you will still be charged for time spent waiting for the retry interval (thread sleep).

Conclusion

In this post we have explored the behaviour of the ServiceBus trigger binding within Azure Functions and how we can implement a dynamic message retry policy. As long as you are willing to manage the deserialization of message content yourself (rather than have Azure Functions do it for you) you can gain access to the BrokeredMessage class and implement feature rich messaging solutions on the Azure platform.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, they spent their budgets on hosting space. Nowadays, in cloud environments, they mostly spend their funds on purchasing computing power. Picture the cloud computing evolution as a simple diagram: from left to right, expenditure shifts from infrastructure to computing power.

In the cloud environment, when we need resources, we just create and use them, and when we don’t need them any longer, we just delete them. But let’s think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in dev or test is likely to be overlooked unless carefully monitored. In this case, your organisation might receive an invoice with a massive amount of cost! That has to be avoided. In this post, we are going to have a look at the Azure Billing API that was released in preview and build a simple application to monitor costs in an effective way.

The sample codes used for this post can be found here.

Azure Billing API Structure

There are two distinct APIs for Azure Billing – one is the Usage API and the other is the Rate Card API. By combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to calculate how much resources we used in a specified period. Here are the parameters we can use for these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer to retrieve.
  • Details: Either true or false. This determines whether usage is split down to the instance level. If false is selected, usage for instances of the same type is aggregated.

Here’s an interesting point on the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives. The term USE might mean that the resources were actually used at the specified date/time, or that the resource-usage events were reported to the billing system at the specified date/time. This happens because Azure is basically a distributed system scattered all around the world, and depending on the data centre where the resources are situated, the actual usage can be reported to the billing system with a delay. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you open a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called Offer Durable ID and, based on this, different rates on resources apply. Please refer to this page to see more details about various types of offers. In order to send requests for this, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code that you purchased this offer. eg) AU

Therefore, in order to calculate the actual spending, we need to combine these two API responses. Fortunately, there’s a good NuGet library called CodeHollow.AzureBillingApi. So we just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants access to the company’s subscription without restriction so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there’s a high chance that those resources are not destroyed in a timely manner, which brings about unnecessary spending. Therefore, the management team has made a decision to perform cost control on resource groups by 1) assigning resource group owners, 2) setting a total spend limit, and 3) setting a daily spend limit, using tags. By virtue of these tags, resource group owners are notified via email when cost approaches 90% of the total spend limit, and when it reaches the total spend limit. They also get notified if the cost exceeds the daily spend limit so they can take appropriate actions for their resource groups.

Sounds simple, right? Let’s code it!

When the application is written, it should be run daily to aggregate all costs, store them in a database, and send notifications to the resource group owners whose resource groups meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts. Firstly, we call the Azure Billing API and aggregate the data by date and resource group. Secondly, we store that aggregated data in the database. Finally, we send notifications to owners of resource groups that exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi can reduce a huge amount of API-calling work, so fetching the raw usage and rate card data becomes a few library calls.

First of all we need to fetch all resource usage/cost data; then that data needs to be grouped by date and resource group, as sketched below.
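As a sketch, with a hypothetical CostRecord type standing in for the library’s return types:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical flattened cost record built from the Usage and Rate Card responses.
public class CostRecord
{
    public DateTime Date { get; set; }
    public string ResourceGroup { get; set; }
    public decimal Cost { get; set; }
}

public static class CostAggregator
{
    // Group the raw records by date and resource group, summing the cost of each bucket.
    public static List<CostRecord> ByDateAndResourceGroup(IEnumerable<CostRecord> records)
    {
        return records
            .GroupBy(r => new { r.Date.Date, r.ResourceGroup })
            .Select(g => new CostRecord
            {
                Date = g.Key.Date,
                ResourceGroup = g.Key.ResourceGroup,
                Cost = g.Sum(r => r.Cost)
            })
            .ToList();
    }
}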

We now have all cost related data per resource group. We then need to fetch tag values from resource groups using another API call and merge it with the data previously populated.

We can look up all resource groups in a given subscription with another API call, and then merge that result with the cost data we previously found, along the following lines.
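A sketch of that merge, continuing the types and usings from the previous snippet; ResourceGroupTags and ResourceGroupCostSummary are hypothetical shapes for the tag values and the combined result:

// Tag values read from each resource group (owner plus the two spend-limit tags).
public class ResourceGroupTags
{
    public string ResourceGroup { get; set; }
    public string Owner { get; set; }
    public decimal TotalSpendLimit { get; set; }
    public decimal DailySpendLimit { get; set; }
}

public class ResourceGroupCostSummary
{
    public string ResourceGroup { get; set; }
    public string Owner { get; set; }
    public decimal TotalSpendLimit { get; set; }
    public decimal DailySpendLimit { get; set; }
    public decimal TotalSpend { get; set; }
    public decimal TodaySpend { get; set; }
}

public static class CostMerger
{
    // Join the aggregated costs onto the tag data so each summary knows its owner and limits.
    public static List<ResourceGroupCostSummary> Merge(List<CostRecord> costs, List<ResourceGroupTags> tags)
    {
        return tags
            .Select(t => new ResourceGroupCostSummary
            {
                ResourceGroup   = t.ResourceGroup,
                Owner           = t.Owner,
                TotalSpendLimit = t.TotalSpendLimit,
                DailySpendLimit = t.DailySpendLimit,
                TotalSpend      = costs.Where(c => c.ResourceGroup == t.ResourceGroup).Sum(c => c.Cost),
                TodaySpend      = costs.Where(c => c.ResourceGroup == t.ResourceGroup
                                               && c.Date.Date == DateTime.UtcNow.Date).Sum(c => c.Cost)
            })
            .ToList();
    }
}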

Data Storage

This is the simplest part. Just use Entity Framework and store the data in the database.
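A minimal Entity Framework 6 sketch; the entity, context and table names are placeholders:

using System;
using System.Collections.Generic;
using System.Data.Entity;

// One row per day and resource group.
public class DailyCost
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public string ResourceGroup { get; set; }
    public decimal Cost { get; set; }
}

public class BillingDbContext : DbContext
{
    public DbSet<DailyCost> DailyCosts { get; set; }
}

public static class CostStore
{
    public static void Save(IEnumerable<DailyCost> records)
    {
        using (var db = new BillingDbContext())
        {
            db.DailyCosts.AddRange(records);
            db.SaveChanges();
        }
    }
}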

We’ve now implemented the data aggregation part.

Notification

First of all, we need to fetch the resource groups that meet the conditions, which is not hard to write.
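A sketch of that check, reusing the hypothetical ResourceGroupCostSummary type from the merge snippet above:

public static class SpendChecker
{
    // Returns the resource groups that are approaching or exceeding their tagged limits.
    public static List<ResourceGroupCostSummary> GetGroupsToNotify(IEnumerable<ResourceGroupCostSummary> groups)
    {
        return groups
            .Where(g =>
                g.TotalSpend >= g.TotalSpendLimit * 0.9m ||   // 1) approaching the total spend limit (90%)
                g.TotalSpend >= g.TotalSpendLimit ||          // 2) total spend limit exceeded
                g.TodaySpend >= g.DailySpendLimit)            // 3) daily spend limit exceeded
            .ToList();
    }
}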

The code above is self-explanatory: it only returns resource groups that 1) approach the total spend limit or 2) exceed the total spend limit, or 3) exceed the daily spend limit. It works well, even though it looks smelly.

The following code bits show how to send notifications to the resource group owners.
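As a sketch, with console output standing in for a real email or SMS integration:

public static class Notifier
{
    public static void Notify(IEnumerable<ResourceGroupCostSummary> groups)
    {
        foreach (var group in groups)
        {
            // Swap Console.WriteLine for a SendGrid (email) or Twilio (SMS) call in a real implementation.
            Console.WriteLine(
                $"ALERT for {group.Owner}: resource group '{group.ResourceGroup}' " +
                $"has spent {group.TotalSpend:C} in total and {group.TodaySpend:C} today.");
        }
    }
}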

It only writes alarms onto the screen, but we could implement SendGrid for email notifications or Twilio for SMS alerts here.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest way for this purpose. Once the console app is built, it can be deployed to an Azure WebJob straight away. Here’s the simple console application code.
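A sketch of what that console app might boil down to; AggregatorService and ReminderService are hypothetical wrappers around the aggregation and notification logic sketched earlier:

public class AggregatorService
{
    public void Run()
    {
        // Fetch usage and rate card data, aggregate by date/resource group and save it (see earlier sketches).
    }
}

public class ReminderService
{
    public void Run()
    {
        // Load the stored costs, find resource groups over their limits and notify the owners.
    }
}

public static class Program
{
    public static void Main(string[] args)
    {
        new AggregatorService().Run();   // collect and store today's cost data
        new ReminderService().Run();     // alert resource group owners who hit their limits
    }
}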

The Aggregator service collects and stores the data, and the Reminder service sends alerts to resource group owners. In order to deploy this to an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: It contains the CRON expression for the scheduled job. For example, if this WebJob runs every night at 00:20, the JSON object might look like {"schedule": "0 20 0 * * *"} (a six-field expression: second, minute, hour, day, month, day of week).

  • run.cmd: When this WebJob is run, it always looks up run.cmd first, which is a simple batch command file. Therefore, if necessary, we can enter the actual executable command with appropriate arguments into this file.

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. But in this case we HAVE TO make sure:

Azure Functions instance MUST be with App Service Plan, NOT Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest or 30-40 minutes at the longest. That execution profile is not a good fit for the Consumption Plan, which charges based on execution time. On the other hand, as we have already paid for an App Service Plan, we don’t need to pay extra for the Function instance if we create it under the App Service Plan.

A Timer Trigger function suits our purpose. Using the precompiled Azure Functions approach would also be helpful, and the function code might look like the sketch below.
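A sketch assuming the attribute-based WebJobs SDK programming model, reusing the hypothetical AggregatorService and ReminderService from the WebJob sketch above; the schedule matches the 00:20 run used earlier:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class CostMonitorFunction
{
    // CRON format: {second} {minute} {hour} {day} {month} {day-of-week} - runs at 00:20 every night.
    [FunctionName("CostMonitor")]
    public static void Run([TimerTrigger("0 20 0 * * *")] TimerInfo timer, TraceWriter log)
    {
        log.Info($"Cost monitor started at {DateTime.UtcNow:u}");

        new AggregatorService().Run();   // aggregate and store the day's costs
        new ReminderService().Run();     // notify owners whose resource groups are over their limits
    }
}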

The function.json for this Timer Trigger simply declares the timer binding and its schedule (with the attribute-based approach above it can be generated at build time).

Here we have shown how to quickly write a simple application for cost monitoring, using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flipside of it is, of course, that we have to be very careful not to be wasteful. Therefore, implementing a monitoring application would help in preventing unwanted cost leak.

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

First published on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter: @LucianFrango or connect via LinkedIn: Lucian Franghiu.


In the real world there are numerous lessons learned, experiences, opinions and vendor recommendations that dictate what constitutes “best practice” when it comes to internet edge security. It’s a can of worms that I don’t want to open, as I am not claiming to be an expert in that regard. I can say that I do have enough experience to know that not having any security is a really bad idea and having bank-level security for regular enterprise customers can be excessive.

I’ve been working with an enterprise customer that falls pretty much in the middle of that dichotomy. They are a regular large enterprise organisation that is concerned about internet security and has little experience with Azure. That said, the built-in tools and software-defined networking principles of Azure don’t meet the requirements they’ve set. So, to accommodate those requirements, moving from Azure NSGs and WAFs and all the goodness that Azure provides to dedicated virtual appliances was not difficult, but it did require a lot of thinking and working with various team members and 3rd parties to get the result.

Cisco Firepower Threat Defence Virtual for Microsoft Azure

From what I understand, Cisco’s next generation firewall has been in the Azure marketplace for about 4 months now, maybe a little longer. Timelines are not that much of a concern; rather, they are a consideration in that they relate to the maturity of the product. Unlike competitors, there is indeed a lag in some features.

The firewalls themselves, Cisco Firepower Threat Defence Virtual for Microsoft Azure, are Azure-specific, Azure Marketplace-available images of the virtual appliances Cisco has made for some time. The background, again, is not that important. It’s just the foundational knowledge for the following:

Cisco FTDv supports 4 x network interfaces in Azure. These interfaces include:

  • A management interface (Nic0) – cannot route traffic over this
  • A diagnostics interface (Nic1) – again, cannot route traffic over this. I found this out the hard way…
  • An external / untrusted interface (Nic2)
  • An internal / trusted interface (Nic3)

So we have a firewall that essentially is an upgraded Cisco ASA (Cisco Adaptive Security Appliance) with expanded feature sets unlocked through licensing. An already robust product with new features.

The design

Availability is key in the cloud. Scale-out dominates scale-up methodologies and, as the old maxim goes, two is better than one. For a customer, I put together the following design to leverage Azure availability sets (to guarantee uptime of at least one instance in the set, and to guarantee physical separation of these resources on the underlying Azure infrastructure) and to have a level of availability higher than a single instance. NOTE: Cisco FTDv does not support high availability (out of the box) and is not a stateful appliance in Azure.

Implementation

To deploy a Cisco FTDv in Azure, the quick and easy way is to use the Azure Marketplace and deploy through the portal. It’s a quick and pretty much painless process. To note though, here are some important pieces of information when deploying these virtual appliances from the Azure marketplace:

  • There is essentially only one deployment option for the size of instances – Standard_D3 or Standard_D3v2 – the difference being SSD vs HDD (with other differences between v1 and v2 series coming by way of more available RAM in certain service plans)
  • YOU MUST deploy the firewall in a resource group WITH NO OTHER RESOURCES in that group (from the portal > Marketplace)
  • Each interface MUST be on a SUBNET – so when deploying your VNET, you need to have 4 subnets available
    • ALSO, each interface MUST be on a UNIQUE subnet – again, 4 subnets, can’t double up – even with the management and diagnostic interfaces
  • Deploying the instance in an Azure availability set is NOT AVAILABLE (from the portal > Marketplace)

Going through the wizard is relatively painless and straightforward, and within 15-20 min you can have a firewall provisioned and ready to connect to your on-premises management server. Yes, another thing to note is that the appliance is managed from Firepower Management Centre (FMC). The FMC, from what I have read, cannot be deployed in Azure at this time. However, I’ve not looked into that tidbit too much, so I may be wrong there.

The problem

In my design I have a requirement for two appliances. These appliances would be in a farm, which is supported in the FMC, and the two appliances can have common configuration applied to both devices – stuff like allow/deny rules. In Azure, without an availability set, there is a small chance, however a chance nonetheless, that both devices could somehow be automagically provisioned in the same rack, on the same physical server infrastructure in the Australia East region (my local region).

Availability is a rather large requirement, and ensuring it is maintained across all workloads (upwards of 500+ instances for the customer I was working with) was a tricky proposition. Here’s how I worked around the problem at hand, as Cisco do not officially state that they “do not support availability sets”.

The solution

Pretty much all resources when working with the Azure Portal have a very handy tab under their properties. I use this tab a lot. It’s the Automation Script section of the properties blade of a resource.

Automation script

After I provisioned a single firewall, I reviewed the Automation Script blade of the instance. There is plenty of good information there. What is particularly handy to know is the following:

 },
 "storageProfile": {
 "imageReference": {
 "publisher": "cisco",
 "offer": "cisco-ftdv",
 "sku": "ftdv-azure-byol",
 "version": "620362.0.0"
 },
 "osDisk": {
 "osType": "Linux",
 "name": "[concat(parameters('virtualMachines_FW1_name'),'-disk')]",
 "createOption": "FromImage",

So with that, we have all the key information to leverage ARM templates to deploy the firewalls. In practice though, I copied the entire Automation Script 850 line JSON file and put it into Atom. Then I did the following:

  • Reviewed the JSON file and cleaned it up to match the naming standard for the customer
    • This applied to various resources that were provisioned: NIC, NSGs, route tables etc
  • I copied and added to the file an addition for adding the VM to availability set
    • The code for that will be below
  • I then removed the firewall instance and all of the instance-specific resources it created from Azure
    • I manually removed the NICs
    • I manually removed the DISKs from Blob storage
  • I then used Visual Studio to create a new project
  • I copied the JSON file into the azuredeploy.json file
  • Updated my parameters file (azuredeploy.parameters.json)
  • Finally, proceeded to deploy from template

Lo and behold, the firewall instance provisioned just fine and indeed there was an availability set associated with it. Additionally, when I provisioned the second appliance, I followed the same process and both are now in the same availability set. This makes using the Azure Load Balancer nice and easy! Happy days!

For your reference, here’s the availability set JSON I added in my file:

"parameters": [
 {
"availabilitySetName": {
 "defaultValue": "FW-AS",
 "type": "string"
 }

Then you need to add the following under “resources”:

"resources": [
 {
 "type": "Microsoft.Compute/availabilitySets",
 "name": "[parameters('availabilitySetName')]",
 "apiVersion": "2015-06-15",
 "location": "[resourceGroup().location]",
 "properties": {
 "platformfaultdomaincount": "2",
 "platformupdatedomaincount": "2"
 }
 },

Then, in the resource of “type”: “Microsoft.Compute/virtualMachines”, you’ll need to reference the availability set inside “properties” and add a matching “dependsOn” entry at the resource level:

 "properties": {
 "availabilitySet": {
 "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
 },
 ...
 },
 "dependsOn": [
 "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]",
 ...
 ],

Those are really the only things that need to be added to the ARM template. It’s quick and easy!

BUT WAIT, THERE’S MORE!

No, I’m not talking about throwing in a set of steak knives with that, but, there is a little more to this that you dear reader need to be aware of.

Once you deploy the firewall and the creation process finalises, with its state now running, there is an additional challenge. When deploying via the Marketplace, the firewall enters Advanced User mode and is able to be connected to the FMC. I’m sure you can guess where this is going… When deploying the firewall via an ARM template, the same mode is not entered. You get the following error message:

User [admin] is not allowed to execute /bin/su/ as root on deviceIDhere

After much time digging through Cisco documentation, which I am sorry to say is not up to standard, Cisco TAC were able to help. The following command needs to be run in order to get into the correct mode:

~$ su admin
~$ [password goes here] which is Admin123 (the default admin password, not the password you set)

Once you have entered the correct mode, you can add the device to the FMC with the following:

~$ configure manager add [IP address of FMC] [key - one time use to add the FW, just a single word]

The summary

I appreciate that speciality network vendors provide really good quality products to manage network security. Due to limitations in the Azure fabric, not all of them work 100% as expected. From a purist’s point of view, NSGs and the Azure-provided software-defined networking solutions, with the wealth of features provided, work amazingly well out of the box.

The cloud is still new to a lot of people. That trust that network admins place in tried and true vendors and products is just not there yet with BSOD Microsoft. In time I feel it will be. For now though, deploying virtual appliances can be a little tricky to work with.

Happy networking!

Lucian

Migrating VirtualBox VDI Virtual Machines to Azure

Overview

Over the years I’ve transitioned through a number of laptops and for whatever reason they never fully get put out to pasture. Two specific laptops are used semi-regularly for functions associated with a few virtual machines they hold. Over the last 10 years or so, I’ve been a big proponent of VirtualBox. Its footprint and functionality aligned with my needs. The downside these days is needing to sometimes carry two laptops just to use an application or two contained inside a virtual machine on VirtualBox.

It’s 2017 and time to get with the times. So I dedicated an evening to working through the process of migrating those VMs.

DISCLAIMER and CONSIDERATIONS: Keep in mind that if you are migrating legacy operating systems, you’ll need a method to remote into them once they are in Azure. Check their configuration before you convert and migrate them. Do they have firewalls? Is the network interface on the VM configured for dynamic or static addressing? Do the VMs have remote access configured (VNC, RDP, SSH)? As they are also likely to be less secure, my process below includes a Network Security Group as part of the Azure Resource Group with no rules specified. You’ll need to add some inbound rules for the method you’ll be using to remote into your virtual machine. And I STRONGLY suggest locking those rules down to a single host or home subnet.

The VM Conversion Process

This blog post covers the migration of a Windows Virtual Machine in VDI format from VirtualBox on SUSE Linux to Azure.

  • With the VM started, uninstall the VirtualBox Guest Additions from the virtual machine

[Screenshot: RemoveExtensions2]

  • Shutdown the VM
  • In VirtualBox Manager select the VM and Settings
    1. Select Storage. If the VBoxGuestAdditions CD/DVD is attached then remove it.
    2. Take note of the VM’s disk(s) location (WinXPv2.vdi in my case) and naming. Mine just had a single hard disk. You’ll need the path for the conversion utility.

[Screenshot: RemoveVBoxAdditions]

[Screenshot: RemoveAdditionsDVD]

  • VirtualBox includes a utility named vboxmanage. We can use that to convert the VM virtual hard disk from VDI to VHD format. Simply run vboxmanage clonehd <source>.vdi <target>.vhd --format VHD --variant Fixed
    • You will need to make sure you have enough space on your laptop hard disk for the VHD which will be about the same size as your VDI Hard Disk
      • If you don’t on Linux you’ll get a slightly cryptic message like
        • Could not create the clone medium (VERR_EOF)
        • VBOX_E_FILE_ERROR (0x80bb0004)
      • the --variant Fixed switch is not shown in the virtual disk conversion screenshot (three images further down the page). One of my other VMs was Dynamic. The size needs to be Fixed for the VHD to be associated with a VM in Azure
      • Below shows determining an existing disk that is Dynamic and needs to be converted to Fixed

[Screenshot: DynamicDisk]

  • Below shows determining an existing disk that is Fixed and doesn’t need to be converted

[Screenshot: FixedDisk]

  • Converting the VDI virtual disk to VHD

[Screenshot: Convert60Percent]

Preparing our Azure Environment for our new Virtual Machine

  • Whilst the conversion was taking place I logged into the Azure Portal and created a new Resource Group for my VM to go into. I also created a new Storage Account in that Resource Group to put the VM’s VHD into. Basically I’m keeping these specific individual VM’s that serve a very specific purpose in their own little compartment.

[Screenshot: RGandSG]

  • Using the fantastic Azure Storage Explorer, which works on Linux, Mac and Windows, I created a blob container named vhds in my newly created Storage Account.

[Screenshot: CreateBlobContainer]

Upload the Converted Virtual Hard Disk

  • By now my virtual disk had converted. Using the Azure Storage Explorer I uploaded my converted virtual disk. NOTE: Make sure you have ‘upload vhd/vhdx files as Page Blobs’ selected.

[Screenshot: UploadVHD]

For a couple of other VMs I wrote a little PowerShell script to upload the VHDs to blob storage.

Create the Azure VM

The following script follows on from the Resource Group, Storage Account and the virtual machine virtual disk we created and uploaded to Azure, and creates the VM attached to the virtual disk.

All the variables are up front: we create the Network Security Group, subnet and virtual network, then the public IP and network interface. Finally we define the details for the VM with the networking and the uploaded VHD before creating the VM.

And we’re done. VM created and started.

[Screenshot: VMCreated]
Happy days and good bye to a number of old laptops.

Send mail to Office 365 via an Exchange Server hosted in Azure

Those of you who have attempted to send mail to Office 365 from Azure know that sending outbound mail directly from an email server hosted in Azure is not supported due to the elastic nature of public cloud service IPs and the potential for abuse. Therefore, the Azure IP address blocks are added to public block lists with no exceptions to this policy.

To be able to send mail from an Azure hosted email server to Office 365 you need to send mail via an SMTP relay. There are a number of different SMTP relays you can utilise, including Exchange Online Protection; more information can be found here: https://blogs.msdn.microsoft.com/mast/2016/04/04/sending-e-mail-from-azure-compute-resource-to-external-domains

To configure Exchange Server 2016 hosted in Azure to send mail to Office 365 via SMTP relay to Exchange Online protection you need to do the following;

  1. Create a connector in your Office 365 tenant
  2. Configure accepted domains on your Exchange Server in Azure
  3. Create a send connector on your Exchange Server in Azure that relays to Exchange Online Protection

Create a connector in your Office 365 tenant

  1. Login to Exchange Online Admin Center
  2. Click mail flow | connector
  3. Click +
  4. Select from: “Your organisation’s email server” to: “Office 365”
  5. Enter a Name for the Connector | Click Next
  6. Select “By verifying that the IP address of the sending server matches one of these IP addresses that belong to your organization”
  7. Add the public IP address of your Exchange Server in Azure

Configure accepted domains on your Exchange Server in Azure

  1. Open Exchange Management Shell
  2. Execute the following PowerShell command for each domain you want to send mail to in Office 365;
  3. New-AcceptedDomain -DomainName Contoso.com -DomainType InternalRelay -Name Contoso

Create a send connector on your Exchange Server in Azure that relays to Exchange Online Protection

  1. Execute the following PowerShell command;
  2. New-SendConnector -Name “My company to Office 365” -AddressSpaces * -CloudServicesMailEnabled $true -RequireTLS $true -SmartHosts yourdomain-com.mail.protection.outlook.com -TlsAuthLevel CertificateValidation

Exchange Server 2016 in Azure

I recently worked on a project where I had to install Exchange Server 2016 on an Azure VM, and I chose a D2 sized Azure VM (2 cores, 7GB RAM) thinking that would suffice. Well, that was a big mistake.

The installation made it to the last step before a warning appeared informing me that the server is low on memory resources and eventually terminated the installation, leaving it incomplete.

Let this be a warning to the rest of you, choose a D3 or above sized Azure VM to save yourself a whole lot of agony.

To try and salvage the Exchange install I attempted to re-run the installation, as it detects an incomplete installation and tries to pick up where it failed previously; this did not work.

I then tried to uninstall Exchange completely by running the command: “Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms”. This also did not work as it was trying to uninstall an Exchange role that never got installed, which left me one option: manually remove Exchange from Active Directory and rebuild the Azure VM.

To remove the Exchange organisation from Active Directory I had to complete the following steps;

  1. On a Domain Controller | Open ADSI Edit
  2. Connect to the Configuration naming context
  3. Expand Services
  4. Delete CN=Microsoft Exchange and CN=Microsoft Exchange Autodiscover
  5. Connect to the Default naming context
  6. Under the root OU delete OU=Microsoft Exchange Security Groups and CN=Microsoft Exchange System Objects
  7. Open Active Directory Users and Computers
  8. Select the Users OU
  9. Delete the following:
    • DiscoverySearchMailbox{GUID}
    • Exchange Online-ApplicationAccount
    • Migration.GUID
    • SystemMailbox{GUID}

After Exchange was completely removed from Active Directory and my Azure VM was rebuilt with a D3 size I could successfully install Exchange Server 2016.

Exchange Server 2016 install error: “Active Directory could not be contacted”

I recently worked on a project where I had to install Exchange Server 2016 on an Azure VM and received error “Active Directory could not be contacted”.

To resolve the issue, I had to complete the following steps;

  1. Remove the Azure VM public IP address
  2. Disable IPv6 on the NIC
  3. Set the IPv4 DNS suffix to point to your domain. If a public address is being used it will be set to reddog.microsoft.com by default.

Once done the installation could proceed and Active Directory was contactable.

A [brief] intro to Azure Resource Visualiser (ARMVIZ.io)

Another week, another Azure tool that I’ve come by and thought I’d share with the masses. Though this one isn’t a major revelation or something that I’ve added to my Chrome work profile bookmarks bar like I did with the Azure Resource Explorer (as yet, though, I may well add it in the very near future), I certainly have it bookmarked in my Azure folder in Chrome bookmarks.

When working with Azure Resource Manager templates, you’re dealing with long JSON files. These files can get to be pretty big in size and span hundreds of lines. Here’s an example of one:

(word of warning: scroll to the bottom as there is more content after this ~250+ line ARM template sample)

{
 "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json",
 "contentVersion": "1.0.0.0",
 "parameters": {
 "storageAccountName": {
 "type": "string",
 "metadata": {
 "description": "Name of storage account"
 }
 },
 "adminUsername": {
 "type": "string",
 "metadata": {
 "description": "Admin username"
 }
 },
 "adminPassword": {
 "type": "securestring",
 "metadata": {
 "description": "Admin password"
 }
 },
 "dnsNameforLBIP": {
 "type": "string",
 "metadata": {
 "description": "DNS for Load Balancer IP"
 }
 },
 "vmSize": {
 "type": "string",
 "defaultValue": "Standard_D2",
 "metadata": {
 "description": "Size of the VM"
 }
 }
 },
 "variables": {
 "storageAccountType": "Standard_LRS",
 "addressPrefix": "10.0.0.0/16",
 "subnetName": "Subnet-1",
 "subnetPrefix": "10.0.0.0/24",
 "publicIPAddressType": "Dynamic",
 "nic1NamePrefix": "nic1",
 "nic2NamePrefix": "nic2",
 "imagePublisher": "MicrosoftWindowsServer",
 "imageOffer": "WindowsServer",
 "imageSKU": "2012-R2-Datacenter",
 "vnetName": "myVNET",
 "publicIPAddressName": "myPublicIP",
 "lbName": "myLB",
 "vmNamePrefix": "myVM",
 "vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('vnetName'))]",
 "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]",
 "publicIPAddressID": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]",
 "lbID": "[resourceId('Microsoft.Network/loadBalancers',variables('lbName'))]",
 "frontEndIPConfigID": "[concat(variables('lbID'),'/frontendIPConfigurations/LoadBalancerFrontEnd')]",
 "lbPoolID": "[concat(variables('lbID'),'/backendAddressPools/BackendPool1')]"
 },
 "resources": [
 {
 "type": "Microsoft.Storage/storageAccounts",
 "name": "[parameters('storageAccountName')]",
 "apiVersion": "2015-05-01-preview",
 "location": "[resourceGroup().location]",
 "properties": {
 "accountType": "[variables('storageAccountType')]"
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/publicIPAddresses",
 "name": "[variables('publicIPAddressName')]",
 "location": "[resourceGroup().location]",
 "properties": {
 "publicIPAllocationMethod": "[variables('publicIPAddressType')]",
 "dnsSettings": {
 "domainNameLabel": "[parameters('dnsNameforLBIP')]"
 }
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/virtualNetworks",
 "name": "[variables('vnetName')]",
 "location": "[resourceGroup().location]",
 "properties": {
 "addressSpace": {
 "addressPrefixes": [
 "[variables('addressPrefix')]"
 ]
 },
 "subnets": [
 {
 "name": "[variables('subnetName')]",
 "properties": {
 "addressPrefix": "[variables('subnetPrefix')]"
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/networkInterfaces",
 "name": "[variables('nic1NamePrefix')]",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]",
 "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]"
 ],
 "properties": {
 "ipConfigurations": [
 {
 "name": "ipconfig1",
 "properties": {
 "privateIPAllocationMethod": "Dynamic",
 "subnet": {
 "id": "[variables('subnetRef')]"
 },
 "loadBalancerBackendAddressPools": [
 {
 "id": "[concat(variables('lbID'), '/backendAddressPools/BackendPool1')]"
 }
 ],
 "loadBalancerInboundNatRules": [
 {
 "id": "[concat(variables('lbID'),'/inboundNatRules/RDP-VM0')]"
 }
 ]
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "type": "Microsoft.Network/networkInterfaces",
 "name": "[variables('nic2NamePrefix')]",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))]"
 ],
 "properties": {
 "ipConfigurations": [
 {
 "name": "ipconfig1",
 "properties": {
 "privateIPAllocationMethod": "Dynamic",
 "subnet": {
 "id": "[variables('subnetRef')]"
 }
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-05-01-preview",
 "name": "[variables('lbName')]",
 "type": "Microsoft.Network/loadBalancers",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]"
 ],
 "properties": {
 "frontendIPConfigurations": [
 {
 "name": "LoadBalancerFrontEnd",
 "properties": {
 "publicIPAddress": {
 "id": "[variables('publicIPAddressID')]"
 }
 }
 }
 ],
 "backendAddressPools": [
 {
 "name": "BackendPool1"
 }
 ],
 "inboundNatRules": [
 {
 "name": "RDP-VM0",
 "properties": {
 "frontendIPConfiguration": {
 "id": "[variables('frontEndIPConfigID')]"
 },
 "protocol": "tcp",
 "frontendPort": 50001,
 "backendPort": 3389,
 "enableFloatingIP": false
 }
 }
 ]
 }
 },
 {
 "apiVersion": "2015-06-15",
 "type": "Microsoft.Compute/virtualMachines",
 "name": "[variables('vmNamePrefix')]",
 "location": "[resourceGroup().location]",
 "dependsOn": [
 "[concat('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
 "[concat('Microsoft.Network/networkInterfaces/', variables('nic1NamePrefix'))]",
 "[concat('Microsoft.Network/networkInterfaces/', variables('nic2NamePrefix'))]"
 ],
 "properties": {
 "hardwareProfile": {
 "vmSize": "[parameters('vmSize')]"
 },
 "osProfile": {
 "computername": "[variables('vmNamePrefix')]",
 "adminUsername": "[parameters('adminUsername')]",
 "adminPassword": "[parameters('adminPassword')]"
 },
 "storageProfile": {
 "imageReference": {
 "publisher": "[variables('imagePublisher')]",
 "offer": "[variables('imageOffer')]",
 "sku": "[variables('imageSKU')]",
 "version": "latest"
 },
 "osDisk": {
 "name": "osdisk",
 "vhd": {
 "uri": "[concat('http://',parameters('storageAccountName'),'.blob.core.windows.net/vhds/','osdisk', '.vhd')]"
 },
 "caching": "ReadWrite",
 "createOption": "FromImage"
 }
 },
 "networkProfile": {
 "networkInterfaces": [
 {
 "properties": {
 "primary": true
 },
 "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nic1NamePrefix'))]"
 },
 {
 "properties": {
 "primary": false
 },
 "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nic2NamePrefix'))]"
 }
 ]
 },
 "diagnosticsProfile": {
 "bootDiagnostics": {
 "enabled": "true",
 "storageUri": "[concat('http://',parameters('StorageAccountName'),'.blob.core.windows.net')]"
 }
 }
 }
 }
 ]
} 


So what is this Azure Resource Visualiser?

For those following at home who are cluey enough to pick up on what the ARM Visualiser is, it’s a means to help visualise the components of an ARM template and the relationships between those components. It does this in a very user-friendly and easy-to-use way.

What you can do is as follows:

  • Play around with ARMVIZ with a pre-build ARM template
  • Import your own ARM template and visualise the characteristics of your components and their relationships
    • This helps validate the ARM template and make sure there are no errors or bugs
  • You can have quick access to the ARM Quickstart templates library in Github
  • You can view your JSON ARM template online
  • You can create a new ARM template from scratch or via copying one of the quick start templates ONLINE in your browser of choice
  • You can freely edit that template and go from JSON view to the Visualiser view quickly
  • You can then download a copy of your ARM template that is built, tested, visualised and working
  • You can provide helpful feedback to the team from Azure working on this service
    • Possibly leading to this being rolled up into the Azure Portal at some point in the future
  • All of the components are able to be moved around the screen as well
  • If you double click on any of the components, for example the Load Balancer, you’ll be taken to the line in the ARM template to view or amend the config

What does ARMVIZ.io look like?

Here’s a sample ARM template visualised

[Screenshot: Azure Resource Visualiser]

This is a screen grab of the Azure Resource Visualiser site. It shows the default, sample template that can be manipulated and played with.

Final words

If you want a quick, fun ARM template editor and your favourite ISE is just behaving like it did every other day, then spruce up and pimp up your day with the Azure Resource Visualiser. In the coming weeks I’m going to be doing quite a few ARM templates. I can assure you that those will be run through the Azure Resource Visualiser to validate and check for any errors.

Pro-tip: don’t upload an ARM template to Microsoft that might have sensitive information. I know it’s common sense, but, it happens to the best of us. I thought I’d just quickly mention that as a reminder more than anything else.

Hope you enjoy it,

Best,

Lucian

 

 

***

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango, or, connect via LinkedIn.

Why are you not using Azure Resource Explorer (Preview)?

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango. Connect on LinkedIn.

***

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and, clicking away, stumbled upon the Azure Resource Explorer, available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac if you were wondering) and finally BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell”. Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer

This is a [new to me] website, running the Bootstrap HTML, CSS and JavaScript framework (an older version, but then so is yours truly here on clouduccino), that provides streamlined and rather well laid out access to various REST API details/calls for any given Azure subscription. You can log in and view some nice management REST APIs, and make changes to Azure infrastructure in your subscription via REST calls/actions like get, put, post, delete and create.

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this of any use across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions, mixed in with the REST APIs.

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but, I could easily create one!

Use cases

I’m sure that all looks “interesting”; with an example above with nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start playing with and tinkering with the Resource Explorer portal yourselves (go ahead and do that, but, after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need to do is check one bit of information, say a route table for a VNET. PowerShell can also be time-consuming: doing all that typing and memorising cmdlets and stuff (so 2016…).

A lot of the information you need can be grabbed at a glance from the ARE portal though, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET, but if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you are not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below is a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, with all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 disks attached if you were interested).

The vmSizes view for all currently available VM sizes in Azure. Handy to grab the programmatic name to then use in PowerShell scripting?!

Use case #3 – PowerShell script examples

Mixing the REST API programmatic cmdlets with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example: get, set, delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure utility belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best,

Lucian