Getting the Azure 99.95% SLA for Cisco FTD virtual appliances via availability sets and ARM templates

First published on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter: @LucianFrango or connect via LinkedIn: Lucian Franghiu.


In the real world there are numerous lessons learned, experiences, opinions and vendor recommendations that dictate what constitutes “best practice” when it comes to internet edge security. It’s a can of worms that I don’t want to open, as I am not claiming to be an expert in that regard. I can say that I have enough experience to know that not having any security is a really bad idea, and that bank-level security for regular enterprise customers can be excessive.

I’ve been working with an enterprise customer that falls pretty much in the middle of that dichotomy. They are a regular large enterprise organisation that is concerned about internet security and has little experience with Azure. That said, the built-in tools and software-defined networking principles of Azure don’t meet the requirements they’ve set. So, to accommodate those requirements, moving from Azure NSGs, WAFs and all the goodness that Azure provides to dedicated virtual appliances was not difficult, but it did require a lot of thinking and working with various team members and third parties to get the result.

Cisco Firepower Threat Defence Virtual for Microsoft Azure

From what I understand, Cisco’s next-generation firewall has been in the Azure Marketplace for about four months now, maybe a little longer. The timeline is not that much of a concern in itself; rather, it is a consideration in that it relates to the maturity of the product, which does lag behind competitors in some features.

The firewalls themselves, Cisco Firepower Threat Defence Virtual for Microsoft Azure, are Azure-specific, Azure Marketplace-available images of the virtual appliances Cisco has made for some time. Again, the background is not that important; it’s just the foundational knowledge for the following:

Cisco FTDv supports 4 x network interfaces in Azure. These interfaces include:

  • A management interface (Nic0) – cannot route traffic over this
  • A diagnostics interface (Nic1) – again, cannot route traffic over this. I found this out the hard way…
  • An external / untrusted interface (Nic2)
  • An internal / trusted interface (Nic3)

So we have a firewall that is essentially an upgraded Cisco ASA (Cisco Adaptive Security Appliance) with expanded feature sets unlocked through licensing – an already robust product with new features.

The design

Availability is key in the cloud. Scale-out dominates scale-up methodologies and, as the old maxim goes, two is better than one. For the customer, I put together the following design to leverage Azure availability sets (to guarantee uptime of at least one instance in the set, and to guarantee that these resources are physically separated on the underlying Azure infrastructure) and so achieve a level of availability higher than a single instance can provide. NOTE: Cisco FTDv does not support high availability out of the box and is not a stateful appliance in Azure.

Implementation

To deploy a Cisco FTDv in Azure, the quick and easy way is to use the Azure Marketplace and deploy through the portal. It’s a quick and pretty much painless process. To note, though, here are some important pieces of information when deploying these virtual appliances from the Azure Marketplace:

  • There is essentially only one deployment option for the size of the instances – Standard_D3 or Standard_D3_v2 – the difference being SSD vs HDD (with other differences between the v1 and v2 series coming by way of more available RAM in certain service plans)
  • YOU MUST deploy the firewall in a resource group WITH NO OTHER RESOURCES in it (from the portal > Marketplace)
  • Each interface MUST be on a subnet – so when deploying your VNET, you need to have 4 subnets available
    • ALSO, each interface MUST be on a UNIQUE subnet – again, 4 subnets, you can’t double up – even for the management and diagnostics interfaces
  • Deploying the instance into an Azure availability set is NOT AVAILABLE (from the portal > Marketplace)

Going through the wizard is relatively painless and straightforward, and within 15-20 minutes you can have a firewall provisioned and ready to connect to your on-premises management server. Yes, another thing to note is that the appliance is managed from Firepower Management Centre (FMC). The FMC, from what I have read, cannot be deployed in Azure at this time. However, I’ve not looked into that tidbit too much, so I may be wrong there.

The problem

In my design I have a requirement for two appliances. These appliances would be in a farm, which is supported in the FMC, and the two appliances can have common configuration applied to both devices – things like allow/deny rules. In Azure, without an availability set, there is a small chance, but a chance nonetheless, that both devices could somehow be automagically provisioned in the same rack, on the same physical server infrastructure, in the Australia East region (my local region).

Availability is a rather large requirement, and ensuring it was maintained across upwards of 500 instances for the customer I was working with was a tricky proposition. Here’s how I worked around the problem at hand, since Cisco do not officially state that availability sets are unsupported.

The solution

Pretty much all resources when working with the Azure Portal have a very handy tab under their properties. I use this tab a lot. It’s the Automation Script section of the properties blade of a resource.

Automation script

After I provisioned a single firewall, I reviewed the Automation Script blade of the instance. There is plenty of good information there. What was particularly handy to know was the following:

 },
 "storageProfile": {
   "imageReference": {
     "publisher": "cisco",
     "offer": "cisco-ftdv",
     "sku": "ftdv-azure-byol",
     "version": "620362.0.0"
   },
   "osDisk": {
     "osType": "Linux",
     "name": "[concat(parameters('virtualMachines_FW1_name'),'-disk')]",
     "createOption": "FromImage",

So with that, we have all the key information needed to leverage ARM templates to deploy the firewalls. In practice, though, I copied the entire 850-line Automation Script JSON file and put it into Atom. Then I did the following:

  • Reviewed the JSON file and cleaned it up to match the customer’s naming standard
    • This applied to the various resources that were provisioned: NICs, NSGs, route tables etc.
  • Copied into the file an addition for adding the VM to an availability set
    • The code for that is below
  • Removed the firewall instance, and all of the instance-specific resources it created, from Azure
    • I manually removed the NICs
    • I manually removed the disks from Blob storage
  • Used Visual Studio to create a new project
  • Copied the JSON file into the azuredeploy.json file
  • Updated my parameters file (azuredeploy.parameters.json)
  • Finally, proceeded to deploy from the template (a sample deployment command is sketched below)
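
For reference, kicking off that last step from PowerShell looks something like the sketch below. This is not the exact command from the engagement – the resource group, deployment name and file paths are placeholders – just the standard AzureRM cmdlet for deploying an ARM template with a parameters file.

# Sign in and select the target subscription
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "Customer-Subscription"

# Deploy the cleaned-up template and parameters file into the firewall resource group
New-AzureRmResourceGroupDeployment `
    -Name "ftdv-fw1-deployment" `
    -ResourceGroupName "RG-FW" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" `
    -Verbose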

Lo and behold, the firewall instance provisioned just fine, and indeed there was an availability set associated with it. Additionally, when I provisioned the second appliance, I followed the same process and both are now in the same availability set. This makes using the Azure Load Balancer nice and easy! Happy days!

For your reference, here’s the availability set JSON I added in my file:

"parameters": [
 {
"availabilitySetName": {
 "defaultValue": "FW-AS",
 "type": "string"
 }

Then you need to add the following under “resources”:

"resources": [
 {
 "type": "Microsoft.Compute/availabilitySets",
 "name": "[parameters('availabilitySetName')]",
 "apiVersion": "2015-06-15",
 "location": "[resourceGroup().location]",
 "properties": {
 "platformfaultdomaincount": "2",
 "platformupdatedomaincount": "2"
 }
 },

Then, in the resource with “type”: “Microsoft.Compute/virtualMachines”, you’ll also need to add the following:

 "properties": {
 "availabilitySet": {
 "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
 },
  "dependsOn": [
 "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]",

Those are really the only things that need to be added to the ARM template. It’s quick and easy!

BUT WAIT, THERE’S MORE!

No, I’m not talking about throwing in a set of steak knives with that, but there is a little more to this that you, dear reader, need to be aware of.

Once you deploy the firewall, the creation process finalises and its state is Running, there is an additional challenge. When deploying via the Marketplace, the firewall enters Advanced User mode and can be connected to the FMC. I’m sure you can guess where this is going… When deploying the firewall via an ARM template, the same mode is not entered. You get the following error message:

User [admin] is not allowed to execute /bin/su/ as root on deviceIDhere

After much time digging through Cisco documentation, which I am sorry to say is not up to standard, Cisco TAC were able to help. The following command needs to be run in order to get into the correct mode:

~$ su admin
Password: [Admin123 – the default admin password, not the password you set]

Once you have entered the correct mode, you can add the device to the FMC with the following:

~$ configure manager add [IP address of FMC] [key - one time use to add the FW, just a single word]

The summary

I appreciate that speciality network vendors provide really good quality products to manage network security. Due to limitations in the Azure fabric, though, not all of them work 100% as expected. From a purist’s point of view, the NSGs and Azure-provided software-defined networking solutions, with the wealth of features provided, work amazingly well out of the box.

The cloud is still new to a lot of people. That trust that network admins place in tried and true vendors and products is just not there yet with BSOD Microsoft. In time I feel it will be. For now though, deploying virtual appliances can be a little tricky to work with.

Happy networking!

Lucian

Why are you not using Azure Resource Explorer (Preview)?

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango. Connect on LinkedIn.

***

For almost two years the Azure Resource Explorer has been in preview. For almost two years barely anyone has used it. This stops today!

I’ve been playing around with the Azure Portal (ARM) and, clicking away, stumbled upon the Azure Resource Explorer, available via https://resources.azure.com. Before you go any further, click on that or open the URI in a new tab in your favourite browser (I’m using Chrome 56.x for Mac if you were wondering) and finally BOOKMARK IT!

Okay, let’s pump the brakes and slow down now. I know what you’re probably thinking: “I’m not bookmarking a URI to some Azure service because some blogger dude told me to. This could be some additional service that won’t add any benefit since I have the Azure portal and PowerShell; love PowerShell”. Well, that is a fair point. However, let me twist your arm with the following blog post, full of fun facts and information about what Azure Resource Explorer is, what it does, how to use it and more!

What is Azure Resource Explorer?

This is a [new to me] website, running the Bootstrap HTML, CSS and JavaScript framework (an older version, but, like yours truly here on clouduccino), that provides streamlined and rather well laid out access to various REST API details/calls for any given Azure subscription. You can log in and view some nice management REST APIs, and make changes to the Azure infrastructure in your subscription via REST calls/actions like get, put, post, delete and create.

There are some awesome resources around documentation for the different APIs, although Microsoft is lagging in actually making this of any use across the board (probably should not have mentioned that). Finally, what I find handy are the pre-built PowerShell scripts that outline how to complete certain actions mixed in with the REST APIs.

Here’s an example of an application gateway in the ARE portal. Not that there is much to see, since there are no appGateways, but, I could easily create one!

Use cases

I’m sure that all looks “interesting”, with an example above with nothing to show for it. Well, here is where I get into a little more detail. I can’t show you all the best features straight away, otherwise you’ll probably go off and start playing and tinkering with the Resource Explorer portal yourselves (go ahead and do that, but only after reading the remainder of this blog!).

Use case #1 – Quick access to information

Drilling through numerous blades in the Azure Portal, while it works well, can sometimes take longer than you want when all you need is to check one bit of information – say, a route table for a VNET. PowerShell can also be time consuming – doing all that typing and memorising cmdlets and stuff (so 2016…).

A lot of the information you need can be grabbed at a glance from the ARE portal, which in turn saves you time. Here’s a quick screenshot of the route table (or lack thereof) from a test VNET I have in the Kloud sandbox Azure subscription that most Kloudies use on the regular.

I know, I know. There are no routes. In this case it’s a pretty basic VNET, but if I introduced peering and other goodness in Azure, it would all be visible at a glance here!

Use case #2 – Ahh… quick access to information?

Here’s another example where getting access to configuration information is really easy with ARE. If you’re working on a PowerShell script to provision some VM instances and you are not sure of the instance size you need, or the programmatic name for that instance size, you can easily grab that information from the ARE portal. Below is a quick view of all the VM instance sizes available. There are 532 lines in that JSON output, with all instance sizes from Standard_A0 to Standard_F16s (which offers 16 cores, 32GB of RAM and up to 32 attached disks, if you were interested).

vmSizes view for all currently available VM sizes in Azure. Handy for grabbing the programmatic name to then use in PowerShell scripting!
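
On that note, the same list can be pulled without leaving PowerShell using the AzureRM module – a quick sketch, with the region being just an example:

# List every VM size available in a region, with cores, memory and max data disks
Get-AzureRmVMSize -Location "Australia East" |
    Sort-Object Name |
    Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount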

Use case #3 – PowerShell script examples

Mixing the REST API calls with PowerShell is easy. The ARE portal outlines the ways you can mix the two to execute quick cmdlets for various actions, for example: get, set, delete. Below is an example set of PowerShell cmdlets from the ARE portal for VNET management.
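
As a rough equivalent of what those generated snippets do, a call with the generic AzureRM resource cmdlet looks something like this (the resource group and VNET names are placeholders, and the exact shape the portal generates may differ):

# GET a virtual network via the generic ARM resource cmdlet
Get-AzureRmResource `
    -ResourceGroupName "RG-Test" `
    -ResourceType "Microsoft.Network/virtualNetworks" `
    -ResourceName "VNET-Test" `
    -ApiVersion "2015-06-15"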

Final words

Hopefully you get a chance to try out the Azure Resource Explorer today. It’s another handy tool to keep in your Azure utility belt. It’s definitely something I’m going to use, probably more often than I realise.

#HappyFriday

Best,

Lucian


[Updated] Yammer group and user export via Yammer API to JSON, then converted to CSV

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.


Update: awesome pro-tip

Big shout out to Scott Hoag (@ciphertxt on Twitter) for this pro-tip, which will save you having to read this blog post. Yes, you don’t know what you don’t know.

As part of a Yammer network merge (which I am writing a blog post about… WIP), you lose data – posts, files etc. – as that content can’t come across. You can, however, do an export of all that data, which, depending on how much there is to export, usually produces a large .zip file. This is where Scott showed me the light. In that export there are also two .csv files: the first contains all the user info, and the second all the group info. Knowing this, run that export process and you probably don’t need to read the rest of this blog post. #FacePalm.

HOWEVER, and that is a big however for a reason: the network export process does not export the members of each group in that groups.csv file. So if you want to export Yammer groups and all their members, the blog post below is one way of doing that, just a longer way…


Yammer network merges are not pretty. I’m not taking a stab at you (Yammer developers and Microsoft Office 365 developers), but, I’m taking a stab.

There should be an option to allow at least group and group member data to be brought across when there is a network merge. Fair enough not bringing any other data across, as that can certainly be a headache with the vast amount of posts, photos, files and various content that makes up a Yammer network.

However, it would be considerably much less painful for customers if at least the groups and all their members could be merged. It would also make my life a little easier not having to do it.

Let me set the stage here and paint you a word picture. I’m no developer – putting that out there from the start. I am good at problem solving, though, and I’m a black belt at finding information online (surfing the interwebs). So, after some deliberation, I found the following, which might help with gathering group and user data to be used for Yammer network merges.


How to parse JSON data in Nintex Workflow for Office 365

A workflow is usually described as a series of tasks that produce an outcome. In the context of Microsoft SharePoint Products and Technologies, a workflow is defined more precisely as the automated movement of documents or items through a specific sequence of actions or tasks that are related to a business process. SharePoint workflows can be used to consistently manage common business processes within an organisation by allowing business logic – a set of instructions – to be attached to documents or items in a SharePoint list or library.

Nintex Workflow is one of the most popular third-party workflow products; it adds a drag-and-drop workflow designer, advanced connectivity, and rich workflow features to give customers more power and flexibility. The Nintex Workflow product line includes Nintex Workflow for SharePoint and Nintex Workflow for Office 365. Nintex Workflow for SharePoint targets SharePoint on-premises, while Nintex Workflow for Office 365 integrates seamlessly with Office 365, so you can use a browser-based drag-and-drop workflow designer, but it has limited workflow activities compared to the on-premises product.

Prior to SharePoint 2013, many organisations streamlined business processes by building an InfoPath form and a SharePoint workflow. With InfoPath soon to be deprecated by Microsoft, developers need to move away from that technology for their basic forms needs. A great InfoPath alternative is an HTML form whose fields are stored as a JSON object in a single field in the SharePoint list.

Nintex Workflow for Office 365 hasn’t released a workflow activity that can parse JSON, and there is no easy way to get a JSON value out except using the Regular Expression activity, which is hard to maintain or update if the JSON object structure changes. In the July 2015 product release, Nintex Workflow introduced a new feature – the Collection Variable – which is extremely powerful and can speed up workflow design dramatically.

Below are the actions that are available in the cloud:

  • Add Item to Collection
  • Check if Item Exists in Collection
  • Clear Collection
  • Count Items in Collection
  • Get Item from Collection
  • Join Items in Collection
  • Remove Duplicates from Collection
  • Remove Item from Collection
  • Remove Last Item from Collection
  • Remove Value from Collection
  • Sort Items in Collections.

I will walk you through a scenario showing how we can read JSON data from a Nintex workflow in the cloud. Below is a simple HTML form which contains basic employee information and saves the form data as JSON to a field of a SharePoint list.

Employee details form

This is what the JSON data stored in the SharePoint list field “Form Data” looks like:

{
   "employeeDetails":
   {
      "dateChangeEffectiveFrom":"2015-09-22T21:49:15.866Z",
      "dateChangeEffectiveTo":"2015-12-25T00:00:00.000Z",
      "employee":
      {
         "employeeId":"2318",
         "employeeFullName":"Ken Zheng",
         "positionTitle":"Developer",
         "departmentDescription":"IT"
       }
   }
}

If you put it into a JSON parser it will give you a better understanding of the structure of the data. You can see it contains an “employeeDetails” property which has “dateChangeEffectiveFrom”, “dateChangeEffectiveTo” and “employee”; and “employee” contains the “employeeId”, “employeeFullName”, “positionTitle” and “departmentDescription” properties. In this example, we are going to get the value of “employeeFullName” from the JSON data.

JSON with colourised layout
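
If you want to sanity-check that traversal outside of Nintex, the same walk is easy in PowerShell – a minimal sketch using a trimmed copy of the sample payload above (the variable is hypothetical, not read from SharePoint):

# Trimmed copy of the "Form Data" payload
$formDataJson = @'
{ "employeeDetails": { "employee": { "employeeId": "2318", "employeeFullName": "Ken Zheng" } } }
'@

# ConvertFrom-Json turns the text into an object graph we can walk, mirroring the workflow steps
$formData = $formDataJson | ConvertFrom-Json
$formData.employeeDetails.employee.employeeFullName   # "Ken Zheng"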

Step 1: Create two Collection variables, ‘FormData’ and ‘employee’, a Dictionary variable for ‘employeeDetails’, and a text variable named ‘employeeFullName’ in the Nintex Workflow designer.

Step 1

Step 2: Set “FormData” variable to the value of “Form Data” field of the list:

Step 2

Step 3: Use the “Get an Item From A Dictionary” activity to set the “employeeDetails” variable value; the item name or path must be the child element name.

Step 3

Step 4: Then add another “Get an Item From A Dictionary” activity to get the employee collection. Because “employee” is a child of “employeeDetails”, you need to select “employeeDetails” as the source Dictionary:

Step 4

Step 5: Then another “Get an Item From A Dictionary” activity to get Employee Full name (Text variable)

Step 5

The completed workflow diagram will look like this

Completed workflow

Now if you log the variable “employeeFullName”, you should get the value “Ken Zheng”.

With this Nintex Workflow Collection Variable feature we can now parse and extract information from a JSON data field in a SharePoint list back into a workflow without creating separate list fields for each property. And it can easily handle looping properties as well.

Automate your Cloud Operations Part 2: AWS CloudFormation

Stacking the AWS CloudFormation

The Automate your Cloud Operations blog post Part 1 gave us a basic understanding of how to automate an AWS stack using CloudFormation.

This post will show how to layer a new stack on top of an existing AWS CloudFormation stack, using AWS CloudFormation, instead of modifying the base template. AWS resources can be added into an existing VPC by using the outputs detailing the resources from the main VPC stack, rather than having to modify the main template.

This allows us to compartmentalise and separate out the components of the AWS infrastructure, and again to version all the different AWS infrastructure code for every component.

Note: the template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

The diagram below will help to illustrate the concept:

CloudFormation3

Bastion Stack

Previously (Part 1), we created the initial stack, which provides us with the base VPC. Next, we will provision the bastion stack, which creates a bastion host on top of our base VPC. Below are the components of the bastion stack:

  • Create an IAM user that can find out information about the stack and has permissions to create KeyPairs and perform related actions
  • Create the bastion host instance with an AWS Security Group enabling SSH access via port 22
  • Use CloudFormation Init to install packages, create files and run commands on the bastion host instance, and also take the credentials created for the IAM user and set them up to be used by the scripts
  • Use the EC2 UserData to run the cfn-init command that actually does the above via a bash script
  • The wait condition handle: completion of the instance is dependent on the scripts running properly; if the scripts fail, the CloudFormation stack will error out and fail (a rough sketch of this pattern follows the list)
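
As a rough illustration of that UserData / cfn-init / wait condition pattern – this is not the actual template, and the logical names (BastionHost, BastionAmiId, the wait handle) are made up for the sketch – the relevant resources look something like this:

 "BastionHost" : {
   "Type" : "AWS::EC2::Instance",
   "Metadata" : {
     "AWS::CloudFormation::Init" : {
       "config" : {
         "packages" : { "yum" : { "htop" : [] } }
       }
     }
   },
   "Properties" : {
     "InstanceType" : "t2.micro",
     "ImageId" : { "Ref" : "BastionAmiId" },
     "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
       "#!/bin/bash\n",
       "/opt/aws/bin/cfn-init -s ", { "Ref" : "AWS::StackName" },
       " -r BastionHost --region ", { "Ref" : "AWS::Region" }, "\n",
       "/opt/aws/bin/cfn-signal -e $? '", { "Ref" : "BastionWaitHandle" }, "'\n"
     ] ] } }
   }
 },
 "BastionWaitHandle" : { "Type" : "AWS::CloudFormation::WaitConditionHandle" },
 "BastionWaitCondition" : {
   "Type" : "AWS::CloudFormation::WaitCondition",
   "DependsOn" : "BastionHost",
   "Properties" : { "Handle" : { "Ref" : "BastionWaitHandle" }, "Timeout" : "900" }
 }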

Below is the CloudFormation template to build the bastion stack:

Following are the high level steps to layer the bastion stack on top of the initial stack:

I put together the following video on how to use the template:

NAT Stack

It is important to design the VPC with security in mind. I recommend designing your security zones and network segregation; I have written a blog post on how to secure an Azure network, and this approach can also be implemented in an AWS environment using VPCs, subnets and security groups. At the very minimum we will segregate the private subnets and public subnets in our VPC.

A NAT instance will be added to our initial VPC’s “public” subnets so that future private instances can use the NAT instance for communication outside the initial VPC. We will use the exact same method as we did for the bastion stack.

The diagram below will help to illustrate the concept:

CloudFormation4

The components of the NAT stack:

  • An Elastic IP address (EIP) for the NAT instance
  • A Security Group for the NAT instance: allowing ingress TCP and UDP traffic on ports 0-65535 from the internal subnet; allowing egress TCP traffic on ports 22, 80, 443 and 9418 to anywhere, egress UDP traffic on port 123 to the Internet, and egress traffic on ports 0-65535 to the internal subnet
  • The NAT instance
  • A private route table
  • A private route using the NAT instance as the default route for all traffic (a trimmed sketch of these resources follows the list)
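
As with the bastion stack, I won’t paste the whole thing here, but the EIP and private routing pieces of the NAT stack look roughly like the sketch below (logical names such as NatInstance and VpcId are illustrative only, not taken from the actual template):

 "NatEip" : {
   "Type" : "AWS::EC2::EIP",
   "Properties" : { "Domain" : "vpc", "InstanceId" : { "Ref" : "NatInstance" } }
 },
 "PrivateRouteTable" : {
   "Type" : "AWS::EC2::RouteTable",
   "Properties" : { "VpcId" : { "Ref" : "VpcId" } }
 },
 "PrivateDefaultRoute" : {
   "Type" : "AWS::EC2::Route",
   "Properties" : {
     "RouteTableId" : { "Ref" : "PrivateRouteTable" },
     "DestinationCidrBlock" : "0.0.0.0/0",
     "InstanceId" : { "Ref" : "NatInstance" }
   }
 }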

Following is the CloudFormation template to build the stack:

The steps to layer this stack on top of the initial stack are similar to the previous ones:

Hopefully, after reading Part 1 and Part 2 of these blog posts, readers will have gained a basic understanding of how to automate AWS cloud operations using AWS CloudFormation.

Please contact Kloud Solutions if you need help with automating your AWS production environment.

http://www.wasita.net

Amazon Web Services (AWS) networking: public IP address and subnet list

Originally posted on Lucian’s blog over at clouduccino.com

Amazon Web Services (AWS) has many data centres on many continents and in many countries all over the world. AWS has two key methods of grouping these data centres: regions and availability zones.

It can be very handy to reference the IP address or subnet of a particular service in, say, a proxy server to streamline connectivity. This is a good practice to avoid unnecessary latency from proxy authentication requests. Below is an output of Amazon Web Services IP address and subnet details, split into the key categories as listed by AWS via their publishing of information through the IP address JSON file available here.
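
At the time of writing AWS publishes that file at https://ip-ranges.amazonaws.com/ip-ranges.json, so the list below can be regenerated at any point. A quick PowerShell sketch for pulling out, say, the EC2 ranges for the Sydney region:

# Download AWS's published IP ranges and filter to a region/service of interest
$ranges = Invoke-RestMethod -Uri "https://ip-ranges.amazonaws.com/ip-ranges.json"

$ranges.prefixes |
    Where-Object { $_.region -eq "ap-southeast-2" -and $_.service -eq "EC2" } |
    Select-Object -ExpandProperty ip_prefix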

Sidebar: click here to read up more on regions and availability zones, or click here or click here. Included in these references is also information about the DNS endpoints for services that are therefore IP address agnostic. Also, if you’d like more details about the JSON file, click here.


Automate your Cloud Operations Part 1: AWS CloudFormation

Operations

What is Operations?

In the IT world, Operations refers to a team or department within IT which is responsible for the running of a business’ IT systems and infrastructure.

So what kind of activities does this team perform on a day-to-day basis?

Building, modifying, provisioning and updating systems, software and infrastructure to keep them available, performing and secure, which ensures that users can be as productive as possible.

When moving to public cloud platforms the areas of focus for Operations are:

  • Cost reduction: if we design it properly and apply good practices when managing it (scale down / switch off)
  • Smarter operation: Use of Automation and APIs
  • Agility: faster provisioning of infrastructure or environments by automating everything
  • Better Uptime: Plan for failover, and design effective DR solutions more cost effectively.

If Cloud is the new normal then Automation is the new normal.

For this blog post we will focus on automation using AWS CloudFormation. The template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

AWS CloudFormation

AWS CloudFormation provides developers and system administrators (DevOps) an easy way to create and manage a collection of related AWS resources, including provisioning and updating them in an orderly and predictable fashion. AWS provides various CloudFormation templates, snippets and reference implementations.

Let’s talk about versioning before diving deeper into CloudFormation. It is extremely important to version your AWS infrastructure in the same way as you version your software. Versioning will help you to track changes within your infrastructure by identifying:

  • What changed?
  • Who changed it?
  • When was it changed?
  • Why was it changed?

You can tie this version to a service management or project delivery tools if you wish.

You should also put your templates into source control. Personally I am using Github to version my infrastructure code, but any system such as Team Foundation Server (TFS) will do.

AWS Infrastructure

The below diagram illustrates the basic AWS infrastructure we will build and automate for this blog post:

CloudFormation1

Initial Stack

Firstly we will create the initial stack. Below are the components for the initial stack:

  • A VPC with a CIDR block of 192.168.0.0/16 : 65,536 IPs
  • Three public subnets across 3 Availability Zones: 192.168.10.0/24, 192.168.11.0/24, 192.168.12.0/24
  • An Internet Gateway attached to the VPC to allow public Internet access. This is a routing construct for the VPC and not an EC2 instance
  • Routes and route tables for the three public subnets so EC2 instances in those public subnets can communicate
  • Default Network ACLs to allow all communication inside of the VPC.

Below is the CloudFormation template to build the initial stack.

The template can be downloaded here: https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template
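
As a taste of what’s inside that template (a heavily trimmed sketch only – the logical names are illustrative, so grab the full template from the link above for the real thing), the VPC, one of the public subnets and the Internet Gateway attachment look like this:

 "VPC" : {
   "Type" : "AWS::EC2::VPC",
   "Properties" : {
     "CidrBlock" : "192.168.0.0/16",
     "EnableDnsSupport" : "true",
     "EnableDnsHostnames" : "true"
   }
 },
 "PublicSubnetA" : {
   "Type" : "AWS::EC2::Subnet",
   "Properties" : {
     "VpcId" : { "Ref" : "VPC" },
     "CidrBlock" : "192.168.10.0/24",
     "AvailabilityZone" : { "Fn::Select" : [ "0", { "Fn::GetAZs" : "" } ] }
   }
 },
 "InternetGateway" : { "Type" : "AWS::EC2::InternetGateway" },
 "AttachGateway" : {
   "Type" : "AWS::EC2::VPCGatewayAttachment",
   "Properties" : { "VpcId" : { "Ref" : "VPC" }, "InternetGatewayId" : { "Ref" : "InternetGateway" } }
 }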

I put together the following video on how to use the template:

Understanding a CloudFormation template

AWS CloudFormation is pretty neat and FREE. You only need to pay for the AWS resources provisioned by the CloudFormation template.

The next bit is understanding the structure of the template. Typically a CloudFormation template will have five sections:

  • Headers
  • Parameters
  • Mappings
  • Resources
  • Outputs

Headers: Example:

Parameters: values specified at provision time, like command-line options. Example:

Mappings: conditional, case-statement-like lookups. Example:

Resources: All resources to be provisioned. Example:

Outputs: Example:
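
As a bare-bones illustration of how these five sections hang together (the resource, mapping and AMI values below are made up for the example):

 {
   "AWSTemplateFormatVersion" : "2010-09-09",
   "Description" : "Headers: the format version plus a human-readable description",

   "Parameters" : {
     "InstanceType" : { "Type" : "String", "Default" : "t2.micro" }
   },

   "Mappings" : {
     "RegionToAmi" : { "ap-southeast-2" : { "AMI" : "ami-12345678" } }
   },

   "Resources" : {
     "DemoInstance" : {
       "Type" : "AWS::EC2::Instance",
       "Properties" : {
         "InstanceType" : { "Ref" : "InstanceType" },
         "ImageId" : { "Fn::FindInMap" : [ "RegionToAmi", { "Ref" : "AWS::Region" }, "AMI" ] }
       }
     }
   },

   "Outputs" : {
     "DemoInstanceId" : { "Value" : { "Ref" : "DemoInstance" } }
   }
 }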

Note: not all AWS resources can be provisioned using AWS CloudFormation, and it has some limitations.

In Part 2 we will dive deeper into AWS CloudFormation and automate the EC2 instances, including the configuration of the NAT and bastion host instances.

http://www.wasita.net