Why is the Azure Load Balancer NOT working?

Context

For most load-balanced workloads that I’ve deployed in Azure, the Azure Load Balancer (ALB) was used with its out of the box, default configuration. The load balancer service is great like that: for the majority of scenarios it just works out of the box. I’m sure this isn’t an Azure-only experience either; the other public cloud providers have similarly capable out of the box load balancing services that work with just about any workload without in-depth configuration.

You can see that I’ve been repetitive on the point about the out of the box experience. This is where I think I’d become complacent, assuming that the out of the box experience should work in the majority of circumstances.

My problem, as outlined in this blog post, is that I’ve experienced both the Azure Service Manager (ASM) load balancer and the Azure Resource Manager (ARM) load balancer not working as intended…

UPDATE 2018-07-13 – The circumstances in both of the examples given in this blog post assume that the workloads in the backend pool are configured correctly AND that the Azure Load Balancer is also configured correctly, as per Microsoft recommendations. The specific solution that I’ve outlined came about after all the settings were checked, checked again, and validated by Microsoft Premier Support.

Azure Load Balancer

From what I’ve been told by Microsoft Premier Support, the Azure Load Balancer has had a 5-tuple distribution algorithm – based on source IP, source port, destination IP, destination port and protocol type – since its inception. However, as I’ll explain shortly, that default is certainly not right for every scenario. While this 5-tuple mode should in theory work well with just about any scenario – at the end of the day the distribution is still round robin between endpoints – the stickiness of sessions to those endpoints comes into play, and that can cause some issues.

In a blog post from way back when, Microsoft outlined that, to accommodate RDS Gateway, the distribution mode options for the ALB were updated. There are a total of 3 distribution modes: 5-tuple (mentioned before), 3-tuple (source IP, destination IP and protocol) and 2-tuple (source IP and destination IP).

Now that we have established the configuration options and roughly when they came about, let’s get stuck into the impact in two scenarios, months apart, in both ASM (Classic) and ARM deployment modes…

 

The problem

Earlier this year at a customer, we ran into a problem where a number of Azure workloads were hard reset. This was caused by either an outage in the region or some scheduled or unscheduled maintenance that had to occur. Nothing too serious sounding, until we found that the Network Device Enrolment Service (NDES) was not able to accept traffic from the Web Application Proxy (WAP) server that was inline and “north” of it. The WAP itself was in a Cloud Service (an ASM/Classic environment), where multiple WAP servers leveraged Load Balanced Sets (the Azure Internal Load Balancer) as part of the Cloud Service.

The odd thing was that after the outage/scheduled/unscheduled maintenance, inbound NDES traffic via the WAP suddenly became erratic. Certificates requested via NDES (from Intune in this circumstance) were, for the most part, not being completed. So ensued a long and enjoyable Microsoft Premier case that involved the Azure Product Group (sarcasm intended).

The solution

I’ll keep this short and sweet, as I would rather save you, dear reader, from having to relive that incident. The outcome was as follows:

It was determined that the ASM Cloud Service Load Balanced Set (or Azure Load Balancer, Azure Internal Load Balancer) was set to the out of the box default of 5-tuple distribution. While this implementation of the WAP + NDES solution had been in production for at least 2-3 years, working without fault or issue, that was not the correct configuration. The correct configuration for this setup was to leverage either 2-tuple (source and destination IP) distribution or 3-tuple (source IP, destination IP and protocol). We went with the more specific 3-tuple, and that resolved the connectivity issues.

The PowerShell to execute this solution (setting the ASM Cloud Service LB Set to 3-tuple: source IP + destination IP + protocol) is as follows:

Set-AzureLoadBalancedEndpoint -ServiceName "[CloudServiceX]" -LBSetName "[LBSetX]" -Protocol tcp -LoadBalancerDistribution "sourceIPProtocol"

The specific parameter (-LoadBalancerDistribution), which sets the load balancer distribution algorithm, accepts the following valid values:

  • sourceIP. A two tuple affinity: Source IP, Destination IP
  • sourceIPProtocol. A three tuple affinity: Source IP, Destination IP, Protocol
  • none. A five tuple affinity: Source IP, Source Port, Destination IP, Destination Port, Protocol

Round 2: The problem happened again

Recently, in another work stream, we ran into the same issue. However, the circumstances were slightly different. The problem parameters this time were:

  • An Azure Resource Manager (ARM) environment, not ASM or Classic
    • So, we were using the individual resource of the Azure Internal Load Balancer
    • Much more configuration available here
    • Again though, 5-tuple is the default “Source IP affinity mode” (note: no longer called simply the distribution mode in ARM)
  • The workload WAS NOT NDES
  • The workload WAS deployed on Windows Server 2016 again – same as before
    • I’m not sure if that is a coincidence or not

Round 2: Solution

Having gone through the load balancing distribution mode issue only a few months earlier, it was fresh in my mind, so I suggested investigating it. After the parameter was changed, we were again able to resolve the issue and get the workload working as intended via the Azure Load Balancer.

With Azure Resource Manager, there’s a couple of ways you can go about the configuration change. The most common way would be to change the JSON template, which is quick and easy. Below is an example of the load balancing rules section, which has the specific “loadDistribution” parameter that would need to be changed. The ARM load balancer has basically the same configuration options as its ASM counterpart for this setting: sourceIP and sourceIPProtocol. The only difference is that there is a “Default” option, which is the ARM equivalent of the ASM “None” (default = 5-tuple).

"loadBalancingRules": [
{
"name": "[concat(parameters('loadBalancers_EXAMPLE_name')]",
"etag": "W/\"[XXXXXXXXXXXXXXXXXXXXXXXXXXX]\"",
"properties": {
"provisioningState": "Succeeded",
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_EXAMPLE_id')]"
},
"frontendPort": XX,
"backendPort": XX,
"enableFloatingIP": false,
"idleTimeoutInMinutes": X,
"protocol": "TCP",
"loadDistribution": "SourceIP",
"backendAddressPool": {
"id": "[parameters('loadBalancers_EXAMPLE_id_1')]"
},
"probe": {
"id": "[parameters('loadBalancers_EXAMPLE_id_2')]"
}
}
}
],

The alternative option would be to just use PowerShell. To do that, you can execute the following (note that you need to name the rule to update, and the change isn’t committed until it’s piped back through Set-AzureRmLoadBalancer):

Get-AzureRmLoadBalancer -Name [LBName] -ResourceGroupName [RGName] | Set-AzureRmLoadBalancerRuleConfig -Name "[RuleName]" -LoadDistribution "[Parameter]" | Set-AzureRmLoadBalancer
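
To confirm the change took effect, you can read the setting back off the load balancer’s rules (each rule exposes a LoadDistribution property):

# Quick check - list each rule and its current distribution mode
(Get-AzureRmLoadBalancer -Name [LBName] -ResourceGroupName [RGName]).LoadBalancingRules |
    Select-Object Name, LoadDistribution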

 

Conclusion and final thoughts

For the most part I would usually go with the default for any configuration. Through this exercise, though, I have come to question the load balancing requirements of each workload and to be specific about the distribution mode, to avoid any possible fault. Certainly, this is a practice that should extend to every aspect of Azure. The only challenge is striking a balance between questioning every configuration and simply going with the defaults. Happy balancing! (Pun intended).


This was originally posted on Lucian.Blog by Lucian.

Follow Lucian on Twitter @LucianFrango.

Deploy VM via ARM template: Purchase eligibility failed

I recently tried to deploy a VM using an ARM template executed via PowerShell and I encountered the purchase eligibility failed error as seen below.

PurchaseEligibilityFailedError

As I have encountered this before, I ensured I had accepted the marketplace terms for the VM image in question using the PowerShell commands:

Get-AzureRmMarketplaceTerms -Publisher PublisherName -Product ProductName -Name Name | Set-AzureRmMarketplaceTerms -Accept
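
If you’re unsure of the publisher, product and name (offer/SKU) values to pass in, they can be discovered first with the VM image cmdlets. A quick sketch, with placeholder location and names:

# Placeholder values - substitute your region and the image you're after
Get-AzureRmVMImagePublisher -Location "australiaeast" | Where-Object { $_.PublisherName -like "*[Publisher]*" }
Get-AzureRmVMImageOffer -Location "australiaeast" -PublisherName "[PublisherName]"
Get-AzureRmVMImageSku -Location "australiaeast" -PublisherName "[PublisherName]" -Offer "[OfferName]"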

I then reattempted to deploy my VM using my ARM template and still got the same error. I even waited 24 hours and tried again, with no luck.

I then discovered that from the Azure portal you can create a new resource using the “Template deployment” option and deploy your ARM template via the Azure portal.

TemplateDeploymentOption

After I uploaded and executed my ARM template using this method it deployed my VM successfully with no purchase eligibility errors.

Any subsequent VM deployments via PowerShell using the exact same ARM template now worked as expected with no purchase eligibility errors.

Creating SharePoint Modern Team sites using Site Scripts, Flow and Azure Function

With Site Scripts and Site Designs, it is possible to invoke custom PnP provisioning for Modern Team Sites from a Site Script. In the previous blog, we saw how we can provision simple modern sites using Site Script JSON. However, there are some scenarios where we would need a custom provisioning template or process, such as those listed below:

  • Auto deploy custom web components such as SPFx extension apps
  • Complex site templates which can’t be configured through the OOB site script actions
  • Complex document libraries and content types that aren’t provided by the JSON schema. For an idea of the items supported by the OOB schema, please check here.

Hence, in this blog, we will see how we can use Flow and Azure Functions to apply more complex templates and customisations to SharePoint Modern Sites.

Software Prerequisites:

  • Azure Subscription
  • Office 365 subscription or MS Flow subscription
  • PowerShell 3.0 or above
  • SharePoint Online Management Shell
  • PnP PowerShell
  • Azure Storage Emulator*
  • Postman*

* Optional, helpful for Dev and Testing

High Level Overview Steps:

1. Create an Azure Queue Storage Container
2. Create a Microsoft Flow with Request Trigger
3. Put an item into Azure Queue from Flow
4. Create an Azure Function to trigger from the Queue
5. Use the Azure Function to apply the PnP Provisioning template

Detailed Steps:

This can get quite elaborate, so hold on!!

Azure

1. Create an Azure Queue Store.

Note: For dev and testing, you can use the Azure Storage Emulator to emulate the queue requirements. For more details on configuring the Azure Storage Emulator on your system, please check here.
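
If you’d rather script the queue creation than click through the portal, here is a minimal sketch using the Azure Storage PowerShell cmdlets (the account name, key and queue name are placeholders):

# Placeholder storage account details and queue name
$ctx = New-AzureStorageContext -StorageAccountName "[StorageAccountName]" -StorageAccountKey "[StorageAccountKey]"
New-AzureStorageQueue -Name "sitedesignqueue" -Context $ctx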

Microsoft Flow

2. Create a Microsoft Flow with a Request trigger and then add the below JSON.

Note: If you have an Office 365 Enterprise E3 license, you get a Flow free subscription; otherwise you can register for a trial here.
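
A minimal Request trigger body schema that matches the test payload used later in this post (a single webUrl string property) would look something like this:

{
    "type": "object",
    "properties": {
        "webUrl": {
            "type": "string"
        }
    }
}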

3. Enter a message into the Queue in the Flow using the “Add message to Azure Queue” action.

FlowSiteDesignAzureQueue

Note: The flow trigger URL has an access key which allows it to be called from any tenant. For security reasons, please don’t share it with any third parties unless needed.

Custom SharePoint Site Template (PowerShell)

4. Next, create a template site for provisioning and make all the configuration changes that you will need for the initial implementation. Then create the template using PnP PowerShell, with the PnP provisioning command shown below.

Get-PnPProvisioningTemplate -Out .\TestCustomTeamTemplate.xml -ExcludeHandlers Navigation, ApplicationLifecycleManagement -IncludeNativePublishingFiles

Note: The ExcludeHandlers option depends on your requirements, but the configuration in the above command will save you a lot of issues that you could potentially encounter while applying the template later. So, use the above as a starting template.

Note: Another quick tip – if you have a custom theme applied on the template site, the provisioning template doesn’t carry it over. You might have to apply the theme again!

5. Export and save the PowerShell PnP Module to a local drive location. We will use it later in the Azure Function.

Save-Module -Name SharePointPnPPowerShellOnline -Path "[Location on your system or shared drive]"

SharePoint
6. Register an App key and App Secret in https://yourtenant.sharepoint.com/_layouts/appregnew.aspx and provide the below settings.
7. Copy the App Id and Secret, which we will use later in steps 9 and 10. Below is a screenshot of the App registration page.
8. Trust the app at https://yourtenant-admin.sharepoint.com/_layouts/appinv.aspx by providing the below XML. Fill in the App Id to get the details of the app.
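
A commonly used permission XML for app-only PnP provisioning grants tenant-scoped FullControl; a sketch is below – trim the scope and right down if your requirements allow:

<AppPermissionRequests AllowAppOnlyPolicy="true">
    <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
</AppPermissionRequests>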

Azure Function

9. Create a Queue Trigger PowerShell Azure function
10. After the function is created, go to the Advanced Editor (Kudu) and create a sub-folder “SharePointPnPPowerShellOnline” under site -> wwwroot -> [function_name] -> modules. Upload all the files from the PowerShell module folder saved in the step above into this folder.
11. Add the below PowerShell to the Azure Function
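
A minimal sketch of what the function body could look like, assuming a v1 PowerShell queue-triggered function with the queue binding named triggerInput, and the App Id/Secret from steps 6-8 stored as app settings (all names and paths below are placeholders):

# Sketch only - binding, app setting and path names are assumptions.
# In a v1 PowerShell function the queue payload arrives as a file whose path
# is held in a variable named after the binding ($triggerInput here).
$queueItem = Get-Content $triggerInput | ConvertFrom-Json
$webUrl = $queueItem.webUrl

# Modules uploaded to the function's "modules" folder (step 10) load automatically.
# Connect app-only with the App Id/Secret registered and trusted in steps 6-8.
Connect-PnPOnline -Url $webUrl -AppId $env:PnPAppId -AppSecret $env:PnPAppSecret

# Apply the provisioning template exported in step 4
Apply-PnPProvisioningTemplate -Path "D:\home\site\wwwroot\[FunctionName]\TestCustomTeamTemplate.xml"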

12. Test the Function with the below input in PowerShell:

$uri = "[the URI you copied in step 14 when creating the flow]"
$body = "{webUrl:'somesiteurl'}"
Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/json" -Body $body

PowerShell and JSON

13. Create a Site Script with the below JSON and add it to a Site Design. For more detailed steps, please check the link here.
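
At its core, the site script only needs the triggerFlow action pointing at the Flow from step 2; a sketch follows (the url, name and parameters are illustrative):

{
    "$schema": "schema.json",
    "actions": [
        {
            "verb": "triggerFlow",
            "url": "[Flow Request trigger URI from step 2]",
            "name": "Apply custom team site template",
            "parameters": {
                "event": "site creation",
                "product": "SharePoint Online"
            }
        }
    ],
    "bindata": {},
    "version": 1
}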

14. After the above, you are finally ready to run the provisioning process. Yay!!

But before we finish off, one quick tip: when you click manual refresh, the changes are not immediately updated on the site. It may take a while, but they will apply.

Conclusion:

In the above blog, we saw how we can provision modern team sites with a custom provisioning template, using SharePoint Site Scripts and Site Designs together with Flow and an Azure Function.

Intro to Site Scripts and Site Designs with a Simple SharePoint Modern Site provisioning

Microsoft announced Site Scripts and Site Designs in late 2017; they became available for Targeted Release in January 2018 and were released for general use recently. They are a quick way to allow users to create custom modern sites without using any scripting hacks. Hence, in this blog we will go through the steps of using Site Scripts and Site Designs for simple SharePoint Modern Site creation.

Before we get into detailed steps, let’s get a brief overview of Site Designs and Site Scripts.

Site designs: Site designs are like a template. They can be used each time a new site is created to apply a consistent set of actions. You create site designs and register them in SharePoint to one of the modern template sites: the team site, or communication site. A site design can run multiple scripts. The script IDs are passed in an array, and they will run in the order listed.

Site Scripts: A site script is a custom JSON-based script that runs when a site design is selected. Site scripts detail the provisioning items, such as creating new lists or applying a theme. When the actions in the scripts are completed, SharePoint displays detailed results of those actions in a progress pane. They can even call a Flow trigger, which is essential for custom site provisioning.

Software Prerequisites:

  1. PowerShell 3.0 or above
  2. SharePoint Online Management Shell
  3. Notepad or any notes editor for JSON creation – I prefer Notepad++
  4. Windows System to run PowerShell
  5. And a must – a SharePoint tenant 🙂

Provisioning Process Overview:

The provisioning process is divided into 5 steps:

1. Create a Site Script using JSON template to call actions to be run after a Modern Site is created.
2. Add the Script to your tenant
3. Create a Site Design and add the Site Script to the design. Associate the Site Design with the modern site templates – the Team Site template is 64 and the Communication Site template is 68
4. Create a Modern Site from SharePoint landing page
5. Wait for the Site Script to apply changes and refresh your site

Quick and easy, right!? Now let’s get to the “how to”.

Steps

1. JSON Template: We will need to create a JSON template with the declarations of the site resources to be provisioned after the site is created. For more details, here is a link to the Microsoft docs. The brief schema is below.

{
    "$schema": "schema.json",
    "actions": [
        ... [one or more verb actions] ...
    ],
    "bindata": { },
    "version": 1
}

For our blog here, we will use the below schema where we are creating a custom list with few columns.
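
A sketch of such a schema (the list name and fields below are illustrative, not prescriptive):

{
    "$schema": "schema.json",
    "actions": [
        {
            "verb": "createSPList",
            "listName": "Customer Tracking",
            "templateType": 100,
            "subactions": [
                {
                    "verb": "setDescription",
                    "description": "List of customers"
                },
                {
                    "verb": "addSPField",
                    "fieldType": "Text",
                    "displayName": "Customer Name",
                    "isRequired": false,
                    "addToDefaultView": true
                },
                {
                    "verb": "addSPField",
                    "fieldType": "DateTime",
                    "displayName": "Next Meeting",
                    "isRequired": false,
                    "addToDefaultView": true
                }
            ]
        }
    ],
    "bindata": {},
    "version": 1
}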

2. Site Script: Add the above site script to your tenant using PowerShell (connect to your tenant admin site with Connect-SPOService first). The below code will return the Site Script GUID; copy that GUID, as it will be used later.

Get-Content '[ JSON Script location ]' -Raw | Add-SPOSiteScript -Title "[ Title of the script ]"

3. Site Design: After the Site Script is added, create the Site Design from the Site Script; the design will then appear in the dropdown menu options when creating a site.

Add-SPOSiteDesign -Title "[ Site design title ]" -WebTemplate "64" -SiteScripts "[ script GUID from above step ]" -Description "[ Description ]"

4. Create a Modern Site: After the Site Design is registered, you can see the design while creating a site, as shown below

ModernTeamSiteWIthcustom

5. Click on the Manual Refresh button as per screenshot after the site upgrade process is complete.

SiteScriptFinish

When ready, the final Team site will look like the screenshot below after provisioning is complete.

CustomTeamSiteWithScriptResult

In this blog, we learned about Site Scripts and Site Designs and how to use them to provision modern team sites.

Easy Filtering of IoT Data Streams with Azure Stream Analytics and JSON reference data

siliconvalve

I am currently working on a next-gen widget dispenser solution that is gradually being rolled out to trial sites across Australia. The dispenser hardware is a modern platform that provides telemetry data that can be used for various purposes by the locations at which the dispenser is deployed, and potentially by other third parties.

In addition to these next-gen dispensers we already have existing dispenser hardware at the locations that emits telemetry that we already use for other purposes in our solution. To our benefit both the new and existing hardware emits the same format telemetry data 🙂

A sample telemetry entry is shown below.

We take all of the telemetry data from new and old hardware at all our sites and feed it into an Azure Event Hub, which allows us to perform multiple actions, such as archival of the data to Blob Storage using Azure Event Hub Capture.


Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

First published on Lucian’s blog at Lucian.Blog. Follow Lucian on Twitter: @LucianFrango or connect via LinkedIn: Lucian Franghiu.


In the real world there are numerous lessons learned, experiences, opinions and vendor recommendations that dictate what constitutes “best practice” when it comes to internet edge security. It’s a can of worms that I don’t want to open, as I am not claiming to be an expert in that regard. I can say that I do have enough experience to know that not having any security is a really bad idea, and that having bank-level security for regular enterprise customers can be excessive.

I’ve been working with an enterprise customer that falls pretty much in the middle of that dichotomy. They are a regular large enterprise organisation that is concerned about internet security and has little experience with Azure. That said, the built-in tools and software defined networking principles of Azure don’t meet the requirements they’ve set. So, to accommodate those requirements, moving from Azure NSGs and WAFs and all the goodness that Azure provides to dedicated virtual appliances was not difficult, but it did require a lot of thinking and working with various team members and third parties to get the result.

Cisco Firepower Threat Defense Virtual for Microsoft Azure

From what I understand, Cisco’s next generation firewall has been in the Azure Marketplace for about 4 months now, maybe a little longer. Timelines are not that much of a concern; rather, they are a consideration in that they relate to the maturity of the product. Compared with competitors, there is indeed a lag behind in some features.

The firewalls themselves, Cisco Firepower Threat Defense Virtual for Microsoft Azure, are Azure-specific, Azure Marketplace-available images of the virtual appliances Cisco has made for some time. The background, again, is not that important; it’s just the foundational knowledge for the following:

Cisco FTDv supports 4 x network interfaces in Azure. These interfaces include:

  • A management interface (Nic0) – cannot route traffic over this
  • A diagnostics interface (Nic1) – again, cannot route traffic over this. I found this out the hard way…
  • An external / untrusted interface (Nic2)
  • An internal / trusted interface (Nic3)

So we have a firewall that essentially is an upgraded Cisco ASA (Cisco Adaptive Security Appliance) with expanded feature sets unlocked through licensing. An already robust product with new features.

The design

Availability is key in the cloud. Scale-out dominates scale-up methodologies, and as the old maxim goes: two is better than one. For the customer, I put together the following design to leverage Azure availability sets (to guarantee uptime of at least one instance in the set, and to guarantee underlying Azure physical separation of these resources) and so achieve a level of availability higher than a single instance. NOTE: Cisco FTDv does not support high availability (out of the box) and is not a stateful appliance in Azure.

Implementation

To deploy a Cisco FTDv in Azure, the quick and easy way is to use the Azure Marketplace and deploy through the portal. It’s a quick and pretty much painless process. To note though, here are some important pieces of information when deploying these virtual appliances from the Azure marketplace:

  • There is essentially only one deployment option for the size of instances – Standard_D3 or Standard_D3v2 – the difference being SSD vs HDD (with other differences between v1 and v2 series coming by way of more available RAM in certain service plans)
  • YOU MUST deploy the firewall in a resource group WITH NO OTHER RESOURCES in that group (from the portal > Marketplace)
  • Each interface MUST be on a SUBNET – so when deploying your VNET, you need to have 4 subnets available
    • ALSO, each interface MUST be on a UNIQUE subnet – again, 4 subnets, can’t double up – even with the management and diagnostic interfaces
  • Deploying the instance in an Azure availability set is NOT AVAILABLE (from the portal > Marketplace)

Going through the wizard is relatively painless and straightforward, and within 15-20 min you can have a firewall provisioned and ready to connect to your on-premises management server. Yes, another thing to note is that the appliance is managed from Firepower Management Centre (FMC). The FMC, from what I have read, cannot be deployed in Azure at this time. However, I’ve not looked into that tidbit too much, so I may be wrong there.

The problem

In my design I have a requirement for two appliances. These appliances would be in a farm, which is supported in the FMC, and the two appliances can have common configuration applied to both devices – stuff like allow/deny rules. In Azure, without an availability set, there is a small chance, however a chance nonetheless, that both devices could somehow be automagically provisioned in the same rack, on the same physical server infrastructure in the Australia East region (my local region).

Availability was a rather large requirement, and ensuring it was maintained across upwards of 500+ instances for the customer I was working with was a tricky proposition. Here’s how I worked around the problem at hand, given that Cisco do not officially state that they “do not support availability sets”.

The solution

Pretty much all resources, when working with the Azure Portal, have a very handy tab under their properties. I use this tab a lot: it’s the Automation Script section of the properties blade of a resource.

Automation script

After I provisioned a single firewall, I reviewed the Automation Script blade of the instance. There is plenty of good information there. What was particularly handy to know is the following:

},
"storageProfile": {
    "imageReference": {
        "publisher": "cisco",
        "offer": "cisco-ftdv",
        "sku": "ftdv-azure-byol",
        "version": "620362.0.0"
    },
    "osDisk": {
        "osType": "Linux",
        "name": "[concat(parameters('virtualMachines_FW1_name'),'-disk')]",
        "createOption": "FromImage",

So with that, we have all the key information needed to leverage ARM templates to deploy the firewalls. In practice, I copied the entire 850-line Automation Script JSON file and put it into Atom. Then I did the following:

  • Reviewed the JSON file and cleaned it up to match the naming standard for the customer
    • This applied to the various resources that were provisioned: NICs, NSGs, route tables etc.
  • Copied and added to the file an addition for adding the VM to an availability set
    • The code for that is below
  • Removed the firewall instance and all of the instance-specific resources it created from Azure
    • I manually removed the NICs
    • I manually removed the DISKs from Blob storage
  • Used Visual Studio to create a new project
  • Copied the JSON file into the azuredeploy.json file
  • Updated my parameters file (azuredeploy.parameters.json)
  • Finally, proceeded to deploy from the template (the deployment cmdlet is sketched below)
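
For reference, that final deploy-from-template step amounts to something like this (deployment, resource group and file names are placeholders):

# Deploy the edited template with its parameters file
New-AzureRmResourceGroupDeployment -Name "[DeploymentName]" `
    -ResourceGroupName "[RGName]" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"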

Lo and behold, the firewall instance provisioned just fine, and indeed there was an availability set associated with it. Additionally, when I provisioned the second appliance, I followed the same process, and both are now in the same availability set. This makes using the Azure Load Balancer nice and easy! Happy days!

For your reference, here’s the availability set JSON I added in my file:

"parameters": [
 {
"availabilitySetName": {
 "defaultValue": "FW-AS",
 "type": "string"
 }

Then you need to add the following under “resources”:

"resources": [
 {
 "type": "Microsoft.Compute/availabilitySets",
 "name": "[parameters('availabilitySetName')]",
 "apiVersion": "2015-06-15",
 "location": "[resourceGroup().location]",
 "properties": {
 "platformfaultdomaincount": "2",
 "platformupdatedomaincount": "2"
 }
 },

Then, in the "Microsoft.Compute/virtualMachines" resource, you’ll also need to reference the availability set under "properties" and add a "dependsOn" entry (note that "dependsOn" sits at the resource level, alongside "properties", not inside it):

"properties": {
    "availabilitySet": {
        "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
    },
    ...
},
"dependsOn": [
    "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
]

Those are really the only things that need to be added to the ARM template. It’s quick and easy!

BUT WAIT, THERES MORE!

No, I’m not talking about throwing in a set of steak knives with that, but there is a little more to this that you, dear reader, need to be aware of.

Once you deploy the firewall, and the creation process finalises and its state is running, there is an additional challenge. When deploying via the Marketplace, the firewall enters Advanced User mode and is able to be connected to the FMC. I’m sure you can guess where this is going… When deploying the firewall via an ARM template, the same mode is not entered. You get the following error message:

User [admin] is not allowed to execute /bin/su/ as root on deviceIDhere

After much time digging through Cisco documentation, which I am sorry to say is not up to standard, Cisco TAC were able to help. The following commands need to be run in order to get into the correct mode:

~$ su admin
~$ [password goes here] which is Admin123 (the default admin password, not the password you set)

Once you have entered the correct mode, you can add the device to the FMC with the following:

~$ configure manager add [IP address of FMC] [key - one time use to add the FW, just a single word]

The summary

I appreciate that speciality network vendors provide really good quality products to manage network security. Due to limitations in the Azure fabric, not all of them work 100% as expected. From a purist’s point of view, NSGs, the Azure-provided software defined networking solutions and the wealth of features they provide work amazingly well out of the box.

The cloud is still new to a lot of people. That trust that network admins place in tried and true vendors and products is just not there yet with BSOD Microsoft. In time I feel it will be. For now though, deploying virtual appliances can be a little tricky to work with.

Happy networking!

-Lucian



Why are you not using Azure Resource Explorer (Preview)?

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and, clicking away, stumbled upon the Azure Resource Explorer, available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac if you were wondering) and finally BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell.” Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer

This is a [new to me] website, running the Bootstrap HTML, CSS and JavaScript framework (an older version, but, like yours truly here on clouduccino), that provides streamlined and rather well laid out access to various REST API details/calls for any given Azure subscription. You can log in and view some nice management REST APIs, and make changes to Azure infrastructure in your subscription via REST calls/actions like get, put, post, delete and create.

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this useful across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions mixed in with the REST APIs.

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but, I could easily create one!

Use cases

I’m sure that all looks “interesting”, given the example above has nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start playing and tinkering with the Resource Explorer portal yourselves (go ahead and do that, but only after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need is to check one bit of information – say, a route table for a VNET. PowerShell can also be time consuming – all that typing and memorising cmdlets and stuff (so 2016...).

A lot of the information you need can be grabbed at a glance from the ARE portal, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET, but, if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you are not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below is a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, with all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 attached disks, if you were interested).

The vmSizes view for all currently available VM sizes in Azure. Handy to grab the programmatic name to then use in PowerShell scripting?!
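
If you do end up back in PowerShell, the same programmatic names can be pulled with a quick one-liner (region is a placeholder):

# List every VM size available in a region, programmatic name first
Get-AzureRmVMSize -Location "australiaeast" | Select-Object Name, NumberOfCores, MemoryInMB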

Use case #3 – PowerShell script examples

Mixing the REST API calls with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example: get, set, delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.
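
Roughly, the generated scripts follow the Get-AzureRmResource / Set-AzureRmResource pattern; a sketch of the style for a VNET (resource names and API version are placeholders):

# GET the VNET resource
$vnet = Get-AzureRmResource -ResourceGroupName "[RGName]" `
    -ResourceType "Microsoft.Network/virtualNetworks" `
    -ResourceName "[VNETName]" `
    -ApiVersion "2017-10-01"

# SET it back after modifying properties
Set-AzureRmResource -ResourceId $vnet.ResourceId -Properties $vnet.Properties -ApiVersion "2017-10-01" -Force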

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure utility belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best

[Updated] Yammer group and user export via Yammer API to JSON, then converted to CSV

 

Update: awesome pro-tip

Big shout out to Scott Hoag (@ciphertxt on Twitter) for this pro-tip, which will save you having to read this blog post. Yes, you don’t know what you don’t know.

As part of a Yammer network merge (which I am writing a blog post about.. WIP), you lose data: posts, files etc., as those can’t come across. You can, however, do an export of all that data, which, depending on how much there is to export, is usually a large .zip file. This is where Scott showed me the light. In that export there are also two .csv files: the first contains all the user info, and the second all the group info. Knowing this, run that export process and you probably don’t need to read the rest of this blog post. #FacePalm.

HOWEVER, and that is a big however for a reason: the network export process does not export the members of groups in that groups.csv file. So if you want to export Yammer groups and all their members, the below blog post is one way of doing that process, just a longer way…


Yammer network merges are not pretty. I’m not taking a stab at you (Yammer developers and Microsoft Office 365 developers), but, I’m taking a stab.

There should be an option to allow at least group and group member data to be brought across when there is a network merge. Fair enough not bringing all content across, as that can certainly be a headache with the vast amount of posts, photos, files and various content that consumes a Yammer network.

However, it would be considerably much less painful for customers if at least the groups and all their members could be merged. It would also make my life a little easier not having to do it.

Let me set the stage here and paint you a word picture. I’m no developer; putting that out there from the start. I am good at problem solving though, and I’m a black belt at finding information online (surfing the interwebs). So, after some deliberation, I found the following that might help with gathering group and user data, to be used for Yammer network merges.
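
As a taste of what’s involved, the group data comes from Yammer’s REST API; a minimal sketch (the token is a placeholder, and real use needs to page through results):

# Yammer REST call for groups - token is a placeholder
$headers = @{ Authorization = "Bearer [YammerApiToken]" }
$groups = Invoke-RestMethod -Uri "https://www.yammer.com/api/v1/groups.json?page=1" -Headers $headers
$groups | Select-Object id, full_name | ConvertTo-Csv -NoTypeInformation | Out-File .\YammerGroups.csv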

Read More

How to parse JSON data in Nintex Workflow for Office 365

A workflow is usually described as a series of tasks that produce an outcome. In the context of Microsoft SharePoint Products and Technologies, a workflow is defined more precisely as the automated movement of documents or items through a specific sequence of actions or tasks that are related to a business process. SharePoint workflows can be used to consistently manage common business processes within an organisation by allowing the attachment of business logic, that is, a set of instructions, to documents or items in a SharePoint list or library.

Nintex Workflow is one of the most popular third-party workflow products; it adds a drag-and-drop workflow designer, advanced connectivity, and rich workflow features to give customers more power and flexibility. The Nintex Workflow product line includes Nintex Workflow for SharePoint and Nintex for Office 365. Nintex for SharePoint targets SharePoint on-premises, while Nintex Workflow for Office 365 has seamless integration with Office 365, so you can use a browser-based drag-and-drop workflow designer, though with limited workflow activities compared to the on-premises product.

Prior to SharePoint 2013, many organisations streamlined business processes by building an InfoPath form and a SharePoint workflow. With InfoPath soon being deprecated by Microsoft, developers need to move away from that technology for their basic forms needs. A great InfoPath alternative is an HTML form that stores its fields as a single JSON object in one field of the SharePoint list.

Nintex Workflow for Office 365 hasn’t released a workflow activity that can parse JSON, and there is no easy way to get a JSON value out except using the Regular Expression activity, which is hard to maintain or update if the JSON object structure changes. In the July 2015 product release, Nintex Workflow gained a new feature – the Collection Variable – which is extremely powerful and can speed up workflow design dramatically.

Below are the actions that are available in the cloud:

  • Add Item to Collection
  • Check if Item Exists in Collection
  • Clear Collection
  • Count Items in Collection
  • Get Item from Collection
  • Join Items in Collection
  • Remove Duplicates from Collection
  • Remove Item from Collection
  • Remove Last Item from Collection
  • Remove Value from Collection
  • Sort Items in Collection

I will walk you through a scenario showing how we can read JSON data in a Nintex Workflow in the cloud. Below is a simple HTML form which contains basic employee information; it saves the form data as JSON to a field of a SharePoint list.

Employee details form

This is what the JSON data stored in the SharePoint list field “Form Data” looks like:

{
   "employeeDetails":
   {
      "dateChangeEffectiveFrom":"2015-09-22T21:49:15.866Z",
      "dateChangeEffectiveTo":"2015-12-25T00:00:00.000Z",
      "employee":
      {
         "employeeId":"2318",
         "employeeFullName":"Ken Zheng",
         "positionTitle":"Developer",
         "departmentDescription":"IT"
       }
   }
}

If you put it into a JSON parser, it will give you a better understanding of the structure of the data. You can see it contains an “employeeDetails” property, which has “dateChangeEffectiveFrom“, “dateChangeEffectiveTo“ and “employee“. And “employee“ contains the “employeeId“, “employeeFullName“, “positionTitle“ and “departmentDescription“ properties. In this example, we are going to get the value of “employeeFullName“ from the JSON data.

JSON with colourised layout

Step 1: Create two Collection Variables: ‘FormData’ and ’employee’, a Dictionary variable for ’employeeDetails’ and a text variable named ‘employeeFullName’ in the Nintex Workflow Designer

Step 1

Step 2: Set the “FormData” variable to the value of the “Form Data” field of the list:

Step 2

Step 3: Use the “Get an Item From A Dictionary” activity to set the “employeeDetails” variable value. The item name or path must be the child element name:

Step 3

Step 4: Then add another “Get an Item From A Dictionary” activity to get the employee collection. Because “employee” is a child of “employeeDetails“, you need to select “employeeDetails” as the source dictionary:

Step 4

Step 5: Then use another “Get an Item From A Dictionary” activity to get the employee full name (Text variable):

Step 5

The completed workflow diagram will look like this

Completed workflow

Now, if you log the variable “employeeFullName“, you should get the value “Ken Zheng”.

With this Nintex Workflow Collection Variable feature, we can now parse and extract information from a JSON data field in a SharePoint list back into a workflow, without creating separate list fields for each property. And it can easily handle looping properties as well.

Automate your Cloud Operations Part 2: AWS CloudFormation

Stacking the AWS CloudFormation

Part 1 of the Automate your Cloud Operations series gave us a basic understanding of how to automate an AWS stack using CloudFormation.

This post will show how to layer a new stack on top of an existing AWS CloudFormation stack, using AWS CloudFormation instead of modifying the base template. AWS resources can be added into the existing VPC using the outputs detailing the resources from the main VPC stack, instead of having to modify the main template.

This allows us to compartmentalise and separate out components of the AWS infrastructure, and to version the infrastructure code for each component independently.

Note: The template I will use in this post is for educational purposes only and may not be suitable for production workloads :).

The diagram below helps illustrate the concept:

CloudFormation3

Bastion Stack

Previously (Part 1), we created the initial stack, which provides us with the base VPC. Next, we will provision the bastion stack, which will create a bastion host on top of our base VPC. Below are the components of the bastion stack:

  • Create an IAM user that can find out information about the stack and has permissions to create KeyPairs and perform related actions
  • Create the bastion host instance with an AWS Security Group that enables SSH access via port 22
  • Use CloudFormation Init to install packages, create files and run commands on the bastion host instance; also take the creds created for the IAM user and set them up to be used by the scripts
  • Use the EC2 UserData to run the cfn-init command that actually does the above via a bash script
  • The condition handling: the completion of the instance is dependent on the scripts running properly; if the scripts fail, the CloudFormation stack will error out and fail

Below is the CloudFormation template to build the bastion stack:
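
An illustrative sketch only (not a production-ready template) of how those components hang together – the VPC and subnet values would come from the outputs of the base VPC stack:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Bastion stack sketch - illustrative only, layered on the base VPC",
  "Parameters": {
    "VpcId":    { "Type": "AWS::EC2::VPC::Id" },
    "SubnetId": { "Type": "AWS::EC2::Subnet::Id" },
    "KeyName":  { "Type": "AWS::EC2::KeyPair::KeyName" }
  },
  "Resources": {
    "BastionSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow SSH to the bastion host",
        "VpcId": { "Ref": "VpcId" },
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "22", "ToPort": "22", "CidrIp": "0.0.0.0/0" }
        ]
      }
    },
    "BastionInstance": {
      "Type": "AWS::EC2::Instance",
      "CreationPolicy": { "ResourceSignal": { "Timeout": "PT15M" } },
      "Properties": {
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "t2.micro",
        "KeyName": { "Ref": "KeyName" },
        "SubnetId": { "Ref": "SubnetId" },
        "SecurityGroupIds": [ { "Ref": "BastionSecurityGroup" } ],
        "UserData": { "Fn::Base64": { "Fn::Join": [ "", [
          "#!/bin/bash -xe\n",
          "/opt/aws/bin/cfn-init -v --stack ", { "Ref": "AWS::StackName" },
          " --resource BastionInstance --region ", { "Ref": "AWS::Region" }, "\n",
          "/opt/aws/bin/cfn-signal -e $? --stack ", { "Ref": "AWS::StackName" },
          " --resource BastionInstance --region ", { "Ref": "AWS::Region" }, "\n"
        ] ] } }
      }
    }
  }
}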

Following are the high level steps to layer the bastion stack on top of the initial stack:

I put together the following video on how to use the template:

NAT Stack

It is important to design the VPC with security in mind. I recommend designing your security zones and network segregation; I have written a blog post regarding how to secure an Azure network, and the same approach can also be implemented in an AWS environment using VPCs, subnets and security groups. At the very minimum we will segregate the private subnet and public subnet in our VPC.

A NAT instance will be added to our initial VPC’s “public” subnets so that future private instances can use the NAT instance for communication outside the initial VPC. We will use the exact same method as we did for the bastion stack.

The diagram below helps illustrate the concept:

CloudFormation4

The components of the NAT stack:

  • An Elastic IP address (EIP) for the NAT instance
  • A Security Group for the NAT instance: allowing ingress TCP and UDP traffic on ports 0-65535 from the internal subnet; allowing egress TCP traffic on ports 22, 80, 443 and 9418 to anywhere, egress UDP traffic on port 123 to the Internet, and egress traffic on ports 0-65535 to the internal subnet
  • The NAT instance
  • A private route table
  • A private route using the NAT instance as the default route for all traffic

Following is the CloudFormation template to build the stack:

Similar to the previous steps, layer the NAT stack on top of the initial stack:

Hopefully, after reading Part 1 and Part 2 of this series, readers will have gained a basic understanding of how to automate AWS cloud operations using AWS CloudFormation.

Please contact Kloud Solutions if you need help with automating your AWS production environment.

http://www.wasita.net