Key Vault Secrets and ARM Templates

What is Azure Key Vault

Azure Key Vault helps safeguard cryptographic keys and secrets used by cloud applications and services. By using Key Vault, you can encrypt keys and secrets (such as authentication keys, storage account keys, data encryption keys, .PFX files, and passwords) using keys protected by hardware security modules (HSMs).

Key Vault streamlines the key management process and enables you to maintain control of keys that access and encrypt your data. Developers can create keys for development and testing in minutes, and then seamlessly migrate them to production keys. Security administrators can grant (and revoke) permission to keys, as needed.

Anybody with an Azure subscription can create and use key vaults. Although Key Vault benefits developers and security administrators, it could be implemented and managed by an organization’s administrator who manages other Azure services for an organization. For example, this administrator would sign in with an Azure subscription, create a vault for the organization in which to store keys, and then be responsible for operational tasks, such as:

  • Create or import a key or secret
  • Revoke or delete a key or secret
  • Authorize users or applications to access the key vault, so they can then manage or use its keys and secrets
  • Configure key usage (for example, sign or encrypt)
  • Monitor key usage

This administrator would then provide developers with URIs to call from their applications, and provide their security administrator with key usage logging information.

(Ref: https://docs.microsoft.com/en-us/azure/key-vault/key-vault-whatis)

 

Current Scenario for the Key Vault

In the current scenario, we are utilizing Key Vault for the provisioning of ARM resources. Instead of containing usernames and passwords, the template contains only references to these values stored in Azure Key Vault.

These secrets are retrieved while the resources are being deployed, using the template and the parameter file together.

Utilizing the Key Vault

The following tasks are involved in utilizing the Key Vault:

  • Creating the Key Vault
  • Adding keys and secrets to the vault
  • Securing the Key Vault
  • Referencing secrets

Creating a Key Vault

Step 1: Log in to the Azure Portal, click on All Services and select Key Vault

1.png

Step 2 : Click on Add and enter the following details and click on Create

  • Key Vault Name
  • Subscription
  • Resource Group
  • Pricing Tier

2.PNG

Step 3: Select the Key Vault you created

Step 4: Select Secrets

Step 5: Click on Generate/Import

3

Step 6: Select Manual in the Upload options

Step 7: Enter the following information:

  • Name of the secret (for example, MyPassword)
  • Value of the secret (for example, P@ssword1)
  • Activation date (if required)
  • Expiration date (if required)

Step 8: Set the Enabled option to Yes

Step 9: Click on Create

4
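
For completeness, the same vault and secret can also be created with the AzureRM PowerShell module. The snippet below is a rough equivalent of the portal steps above; the vault name, resource group and location are placeholders.

# Create the Key Vault (defaults to the Standard pricing tier)
New-AzureRmKeyVault -VaultName 'MyKeyVault' -ResourceGroupName 'MyResourceGroup' -Location 'Australia East'

# Add the secret - the value must be supplied as a SecureString
$secretValue = ConvertTo-SecureString 'P@ssword1' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'MyKeyVault' -Name 'MyPassword' -SecretValue $secretValue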

Securing the Key Vault

Step 1: Select the newly created Key Vault
Step 2: Select Access Policies
Step 3: Select Click to show advanced access policies

5

Step 4: Select the checkboxes as shown in the snapshot below.

Step 5: Click on Add New

6

Step 6: Select Secret Management in the Configure from template option.
Step 7: Select the Principal (the name of the resource which needs access to the secret).
Step 8: Select the permissions required under Secret Permissions.
Step 9: Select OK.

8
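
These access policies can also be granted from PowerShell with Set-AzureRmKeyVaultAccessPolicy. If the vault is going to be used by ARM deployments, as in the next section, it also needs to be enabled for template deployment. A rough sketch, with placeholder names:

# Grant a user (or service principal) the secret permissions it needs
Set-AzureRmKeyVaultAccessPolicy -VaultName 'MyKeyVault' `
    -UserPrincipalName 'admin@contoso.com' -PermissionsToSecrets Get,List,Set

# Allow Azure Resource Manager to retrieve secrets during template deployment
Set-AzureRmKeyVaultAccessPolicy -VaultName 'MyKeyVault' -EnabledForTemplateDeployment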

 

Referencing the Secrets

Currently we reference the secrets stored in the Key Vault from the ARM templates.
A parameter of type “securestring” is declared in the parameters section of the ARM template file armtemplate.json.

kvsecret

We add the parameter to the template's parameters file, armtemplate.parameters.json, with the following details:

  • ID of the Key Vault (the resource ID of the Key Vault, found under its Properties section)
  • Name of the secret to retrieve (MyPassword)

kvsecretparamters.PNG
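
In the parameters file the secure parameter is not given a literal value; instead it carries a reference made up of the Key Vault's resource ID and the name of the secret to retrieve. If you want to look up that resource ID and run the deployment from PowerShell rather than the portal, something like the following works (the vault, resource group and file names are placeholders):

# Resource ID of the vault - this is the value the Key Vault reference in
# armtemplate.parameters.json points at
(Get-AzureRmKeyVault -VaultName 'MyKeyVault' -ResourceGroupName 'MyResourceGroup').ResourceId

# Deploy the template; the secret is pulled from Key Vault at deployment time and
# never appears in the template or parameter files
New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyResourceGroup' `
    -TemplateFile .\armtemplate.json `
    -TemplateParameterFile .\armtemplate.parameters.json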

 

 

Summary:

Based on the above example, we achieved the following:

  • Secure information (usernames and passwords) is not stored anywhere in the template or the parameter files
  • The secure values are acquired only while the template is being deployed
  • The values are only accessible to those who have access to the Key Vault

 

Deploying a SailPoint IdentityNow Virtual Appliance in Azure

Introduction

The CentOS image that SailPoint provide for the IdentityNow Virtual Appliance that performs integration between ‘Sources’ and IdentityNow is VMWare based. I don’t have any VMWare Infrastructure to run it on and really didn’t want to run up any VMWare environments for this component. All my other infrastructure is in Azure. I’d love to run my VA(s) in Azure too.

In discussions with SailPoint I understand it is simply a case that they haven’t certified their CentOS image on Azure. So I figured I’d convert the VM, get it into Azure and see if it works from my Sandpit environment. This blog post details how I got it working.

Disclaimer: If you use this for more than a Sandpit/Test environment, let your SailPoint CSM know. This isn’t an approved process or a supported configuration. That said, it works for me.

Overview

This is the high-level process I threw together that worked for me.

  1. Obtain the CentOS Image from the IdentityNow Virtual Appliance Setup
  2. Convert the VMWare VMDK image to Hyper-V VHD format using VirtualBox vboxmanage (free)
  3. From the Azure MarketPlace create a Seed VM based on CentOS (with new Resource Group, Storage Account, Virtual Network etc)
  4. Upload the VHD to the Azure Storage Account (associated with VM from Step 3) using Azure Storage Explorer
  5. Create a new VM based off the VM from Step 3 to use the disk from Step 4 as the Operating System disk
  6. Log in and configure the Virtual Appliance

Convert VMWare VM to Hyper-V.png

Prerequisites

  1. Virtual Box (for the disk image converter). You could probably do it with other tools but I’ve used this before and it just works.
  2. Enough hard disk space for the VA image and the converted image. The base image is ~2.8 GB and when converted to a fixed disk image it becomes ~128 GB (which can compress to ~3 GB for the initial upload).
  3. Azure Storage Explorer. We’ll need this to upload the converted virtual disk to Azure.

SailPoint Virtual Appliance CentOS VMWare Image

To download the CentOS VMWare image, log in to the Admin section of your IdentityNow tenant. Under Admin => Connections => Virtual Appliances, create a New Cluster. Select that Cluster, then Virtual Appliances => New.

Download the Appliance Package.

Create New VA.PNG

Converting the CentOS VMWare Virtual Disk to a Fixed Hyper-V Virtual Disk

I already had Virtual Box installed on my computer. I had to give the full path to VBoxManage (as shown below) and call it with the switches to convert the image:

vboxmanage clonehd <source>.vmdk <target>.vhd --format VHD --variant Fixed

The --variant Fixed switch takes the dynamic image and converts it to Fixed, as this is a requirement in Azure.

ConvertVADisk 1.PNG

The image conversion started and completed in under ten minutes.

Converted Fixed.PNG

Creating an Azure CentOS VM

In the Azure Portal I created a New Resource and chose CoreOS.

NewCoreOS 1

I gave it a name, chose HDD as the disk type and gave it a Username and Password.

NewCoreOS 2

I chose sizing in line with the recommendations for a Virtual Appliance.

NewCoreOS 3

And kept everything else simple (for my sandpit environment).

NewCoreOS 4

After the VM had deployed I had a Resource Group with the necessary Virtual Network, Storage Account etc.

Resource Group.PNG

Upload the Converted Disk to Azure Storage

I created a vhd container (in the Storage Account associated with the VM I just created) to hold the new VHD. Using Azure Storage Explorer I then uploaded the converted image, selecting Page Blob for the blob type.

Upload VHD

You’ll want a decent internet connection to do this. I converted the SailPoint image on an Azure VM (to which I added a 256 GB data disk). I then uploaded the new 128 GB VHD disk image from within Azure to the target Resource Group in about 75 minutes.

Upload VHD 2

Below I show the SailPoint Virtual Appliance CentOS OS converted disk image uploaded to Azure Storage Account Blob Storage.

Upload VHD 3.PNG

Generate SAS Token / Get Blob URI

We won’t use a SAS Token, but this just gives easy access to the Storage Blob URL. Right click on the VHD Blob and select Generate Shared Access Signature. Select Create.

Right Click - Get Shared Access Signature

Copy the URL. We’ll need parts of this for the script to create a new CentOS VM with our VA Disk Image.

Get VHD and BLOB Details

Create the new VM for our Virtual Appliance

Update the script below for:

  • The Resource Group you created the Seed VM in (line 2)
  • The Seed VM Name (line 4)
  • The Seed VM Subnet Name (line 6)

Each of those is easily obtained from the Seed VM Summary as highlighted below.

  • Update the Disk Blob details in lines 8 and 10 with the values copied earlier

After stepping through the script to create the new VM, and happy with the new name etc, I executed the New-AzureRMVM command.

Create New VM
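
The original script isn’t reproduced here, but the general shape of creating a VM from an existing unmanaged OS disk with the AzureRM module looks roughly like the following. Every name, size and URI below is a placeholder rather than a value from the original script.

# Gather the network pieces created alongside the seed VM
$rg     = 'SailPointVA-RG'
$vnet   = Get-AzureRmVirtualNetwork -Name 'SailPointVA-vnet' -ResourceGroupName $rg
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq 'default' }
$nic    = New-AzureRmNetworkInterface -Name 'sailpointva-nic' -ResourceGroupName $rg `
            -Location 'Australia East' -SubnetId $subnet.Id

# Build the VM configuration and attach the uploaded VHD as the OS disk
$vm = New-AzureRmVMConfig -VMName 'sailpoint-va' -VMSize 'Standard_D2_v2'
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -Name 'sailpointva-osdisk' -CreateOption Attach -Linux `
        -VhdUri 'https://<storageaccount>.blob.core.windows.net/vhd/sailpointva.vhd'

New-AzureRmVM -ResourceGroupName $rg -Location 'Australia East' -VM $vm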

And the VM was created in a couple of minutes.

Create VM Initiated

Accessing the new VM

Getting the IP address from the new VM Summary I SSH’d into it.

VM Started

And logged in with the default credentials from SailPoint. (Windows Subsystem for Linux is awesome).

SSH In to VA

Next Steps

  1. Change the password on your Virtual Appliance (passwrd)
  2. Create a DNS Name, update the configuration as per SailPoint VA Configuration tasks
  3. Create the VA and Test the Connection from the IdentityNow Portal
  4. Delete your original SeedVM as it is no longer required
  5. Add an NSG to the new VM
  6. Create another VM in a different location for High Availability and configure it in IdentityNow

Below shows my Azure based Virtual Appliance connected and all setup.

Cluster Up and Running.PNG

Summary

Whilst not officially supported it is possible to convert the SailPoint Virtual Appliance VMWare based image to an Azure compatible Hyper-V image and assign it as the Operating System disk on an Azure Linux (CoreOS) Virtual Machine. If you need to do something similar I hope my approach gives you some ideas.

If you then need to create another Virtual Appliance in Azure you have a Data Disk you can assign to a VM and upload to wherever it needs to be for creation of another Virtual Appliance VM.

Demystifying Managed Service Identities on Azure

Managed service identities (MSIs) are a great feature of Azure that are being gradually enabled on a number of different resource types. But when I’m talking to developers, operations engineers, and other Azure customers, I often find that there is some confusion and uncertainty about what they do. In this post I will explain what MSIs are and are not, where they make sense to use, and give some general advice on how to work with them.

What Do Managed Service Identities Do?

A managed service identity allows an Azure resource to identify itself to Azure Active Directory without needing to present any explicit credentials. Let’s explain that a little more.

In many situations, you may have Azure resources that need to securely communicate with other resources. For example, you may have an application running on Azure App Service that needs to retrieve some secrets from a Key Vault. Before MSIs existed, you would need to create an identity for the application in Azure AD, set up credentials for that application (also known as creating a service principal), configure the application to know these credentials, and then communicate with Azure AD to exchange the credentials for a short-lived token that Key Vault will accept. This requires quite a lot of upfront setup, and can be difficult to achieve within a fully automated deployment pipeline. Additionally, to maintain a high level of security, the credentials should be changed (rotated) regularly, and this requires even more manual effort.

With an MSI, in contrast, the App Service automatically gets its own identity in Azure AD, and there is a built-in way that the app can use its identity to retrieve a token. We don’t need to maintain any AD applications, create any credentials, or handle the rotation of these credentials ourselves. Azure takes care of it for us.

It can do this because Azure can identify the resource – it already knows where a given App Service or virtual machine ‘lives’ inside the Azure environment, so it can use this information to allow the application to identify itself to Azure AD without the need for exchanging credentials.

What Do Managed Service Identities Not Do?

Inbound requests: One of the biggest points of confusion about MSIs is whether they are used for inbound requests to the resource or for outbound requests from the resource. MSIs are for the latter – when a resource needs to make an outbound request, it can identify itself with an MSI and pass its identity along to the resource it’s requesting access to.

MSIs pair nicely with other features of Azure resources that allow for Azure AD tokens to be used for their own inbound requests. For example, Azure Key Vault accepts requests with an Azure AD token attached, and it evaluates which parts of Key Vault can be accessed based on the identity of the caller. An MSI can be used in conjunction with this feature to allow an Azure resource to directly access a Key Vault-managed secret.

Authorization: Another important point is that MSIs are only directly involved in authentication, and not in authorization. In other words, an MSI allows Azure AD to determine what the resource or application is, but that by itself says nothing about what the resource can do. For many Azure resources, authorisation is handled by Azure’s own Identity and Access Management system (IAM). Key Vault is one exception – it maintains its own access control system, and is managed outside of Azure’s IAM. For non-Azure resources, we could communicate with any authorisation system that understands Azure AD tokens; an MSI will then just be another way of getting a valid token that an authorisation system can accept.

Another important point to be aware of is that the target resource doesn’t need to run within the same Azure subscription, or even within Azure at all. Any service that understands Azure Active Directory tokens should work with tokens for MSIs.

How to Use MSIs

Now that we know what MSIs can do, let’s have a look at how to use them. Generally there will be three main parts to working with an MSI: enabling the MSI; granting it rights to a target resource; and using it.

  1. Enabling an MSI on a resource. Before a resource can identify itself to Azure AD, it needs to be configured to expose an MSI. The way that you do this will depend on the specific resource type you’re enabling the MSI on. In App Services, an MSI can be enabled through the Azure Portal, through an ARM template, or through the Azure CLI, as documented here. For virtual machines, an MSI can be enabled through the Azure Portal or through an ARM template. Other MSI-enabled services have their own ways of doing this.

  2. Granting rights to the target resource. Once the resource has an MSI enabled, we can grant it rights to do something. The way that we do this is different depending on the type of target resource. For example, Key Vault requires that you configure its Access Policies, while to use the Event Hubs or the Azure Resource Manager APIs you need to use Azure’s IAM system. Other target resource types will have their own way of handling access control.

  3. Using the MSI to issue tokens. Finally, now that the resource’s MSI is enabled and has been granted rights to a target resource, it can be used to obtain tokens so that requests can be made to the target resource. Once again, the approach will be different depending on the resource type. For App Services, there is an HTTP endpoint within the App Service’s private environment that can be used to get a token, and there is also a .NET library that will handle the API calls if you’re using a supported platform. For virtual machines, there is also an HTTP endpoint that can similarly be used to obtain a token. Of course, you don’t need to specify any credentials when you call these endpoints – they’re only available within that App Service or virtual machine, and Azure handles all of the credentials for you.
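
As an illustration of that last step, here’s a rough sketch of obtaining a token from inside a virtual machine with an MSI enabled, using the instance metadata endpoint. The resource URI shown is Azure Resource Manager; swap it for whichever service you’re targeting.

# Runs inside an Azure VM with an MSI enabled; the metadata endpoint is only
# reachable from within the VM, so no credentials are required
$response = Invoke-RestMethod -Method Get -Headers @{ Metadata = 'true' } `
    -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F'
$token = $response.access_token   # present this as a bearer token to the target resource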

Finding an MSI’s Details and Listing MSIs

There may be situations where we need to find our MSI’s details, such as the principal ID used to represent the application in Azure AD. For example, we may need to manually configure an external service to authorise our application to access it. As of April 2018, the Azure Portal shows MSIs when adding role assignments, but the Azure AD blade doesn’t seem to provide any way to view a list of MSIs. They are effectively hidden from the list of Azure AD applications. However, there are a couple of other ways we can find an MSI.

If we want to find a specific resource’s MSI details then we can go to the Azure Resource Explorer and find our resource. The JSON details for the resource will generally include an identity property, which in turn includes a principalId:

Screenshot 1

That principalId is the object ID of the MSI’s service principal in Azure AD, and it is the value used when creating role assignments.

Another way to find and list MSIs is to use the Azure AD PowerShell cmdlets. The Get-AzureRmADServicePrincipal cmdlet will return back a complete list of service principals in your Azure AD directory, including any MSIs. MSIs have service principal names starting with https://identity.azure.net, and the ApplicationId is the client ID of the service principal:

Screenshot 2
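
For example, a quick way to pull out just the MSIs from that list, relying on the service principal name prefix mentioned above:

# List service principals whose names indicate they are managed service identities
Get-AzureRmADServicePrincipal |
    Where-Object { $_.ServicePrincipalNames -match '^https://identity\.azure\.net' } |
    Select-Object DisplayName, ApplicationId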

Now that we’ve seen how to work with an MSI, let’s look at which Azure resources actually support creating and using them.

Resource Types with MSI and AAD Support

As of April 2018, there are only a small number of Azure services with support for creating MSIs, and of these, currently all of them are in preview. Additionally, while it’s not yet listed on that page, Azure API Management also supports MSIs – this is primarily for handling Key Vault integration for SSL certificates.

One important note is that for App Services, MSIs are currently incompatible with deployment slots – only the production slot gets assigned an MSI. Hopefully this will be resolved before MSIs become fully available and supported.

As I mentioned above, MSIs are really just a feature that allows a resource to assume an identity that Azure AD will accept. However, in order to actually use MSIs within Azure, it’s also helpful to look at which resource types support receiving requests with Azure AD authentication, and therefore support receiving MSIs on incoming requests. Microsoft maintain a list of these resource types here.

Example Scenarios

Now that we understand what MSIs are and how they can be used with AAD-enabled services, let’s look at a few example real-world scenarios where they can be used.

Virtual Machines and Key Vault

Azure Key Vault is a secure data store for secrets, keys, and certificates. Key Vault requires that every request is authenticated with Azure AD. As an example of how this might be used with an MSI, imagine we have an application running on a virtual machine that needs to retrieve a database connection string from Key Vault. Once the VM is configured with an MSI and the MSI is granted Key Vault access rights, the application can request a token and can then get the connection string without needing to maintain any credentials to access Key Vault.
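
A rough sketch of what that looks like from inside the VM, assuming a vault called myvault and a secret called ConnectionString (both placeholders):

# Get a token scoped to Key Vault from the VM's MSI
$kvToken = (Invoke-RestMethod -Method Get -Headers @{ Metadata = 'true' } `
    -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net').access_token

# Use the token to read the secret - no stored credentials involved
$secret = Invoke-RestMethod -Headers @{ Authorization = "Bearer $kvToken" } `
    -Uri 'https://myvault.vault.azure.net/secrets/ConnectionString?api-version=2016-10-01'
$secret.value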

API Management and Key Vault

Another great example of an MSI being used with Key Vault is Azure API Management. API Management creates a public domain name for the API gateway, to which we can assign a custom domain name and SSL certificate. We can store the SSL certificate inside Key Vault, and then give Azure API Management an MSI and access to that Key Vault secret. Once it has this, API Management can automatically retrieve the SSL certificate for the custom domain name straight from Key Vault, simplifying the certificate installation process and improving security by ensuring that the certificate is not directly passed around.

Azure Functions and Azure Resource Manager

Azure Resource Manager (ARM) is the deployment and resource management system used by Azure. ARM itself supports AAD authentication. Imagine we have an Azure Function that needs to scan our Azure subscription to find resources that have recently been created. In order to do this, the function needs to log into ARM and get a list of resources. Our Azure Functions app can expose an MSI, and so once that MSI has been granted reader rights on the resource group, the function can get a token to make ARM requests and get the list without needing to maintain any credentials.

App Services and Event Hubs/Service Bus

Event Hubs is a managed event stream. Communication to both publish onto, and subscribe to events from, the stream can be secured using Azure AD. An example scenario where MSIs would help here is when an application running on Azure App Service needs to publish events to an Event Hub. Once the App Service has been configured with an MSI, and Event Hubs has been configured to grant that MSI publishing permissions, the application can retrieve an Azure AD token and use it to post messages without having to maintain keys.

Service Bus provides a number of features related to messaging and queuing, including queues and topics (similar to queues but with multiple subscribers). As with Event Hubs, an application could use its MSI to post messages to a queue or to read messages from a topic subscription, without having to maintain keys.

App Services and Azure SQL

Azure SQL is a managed relational database, and it supports Azure AD authentication for incoming connections. A database can be configured to allow Azure AD users and applications to read or write specific types of data, to execute stored procedures, and to manage the database itself. When coupled with an App Service with an MSI, Azure SQL’s AAD support is very powerful – it reduces the need to provision and manage database credentials, and ensures that only a given application can log into a database with a given user account. Tomas Restrepo has written a great blog post explaining how to use Azure SQL with App Services and MSIs.

Summary

In this post we’ve looked into the details of managed service identities (MSIs) in Azure. MSIs provide some great security and management benefits for applications and systems hosted on Azure, and enable high levels of automation in our deployments. While they aren’t particularly complicated to understand, there are a few subtleties to be aware of. As long as you understand that MSIs are for authentication of a resource making an outbound request, and that authorisation is a separate thing that needs to be managed independently, you will be able to take advantage of MSIs with the services that already support them, as well as the services that may soon get MSI and AAD support.

Deploy active/active FortiGate NGFW in Azure

I recently was tasked with deploying two Fortinet FortiGate firewalls in Azure in a highly available active/active model. I quickly discovered that there are currently only two deployment types available in the Azure marketplace: a single VM deployment and a high availability deployment (which is an active/passive model and wasn’t what I was after).

FG NGFW Marketplace Options

I did some digging around on the Fortinet support sites and discovered that you can achieve an active/active model in Azure using dual load balancers (a public and an internal Azure load balancer), as indicated in this Fortinet document: https://www.fortinet.com/content/dam/fortinet/assets/deployment-guides/dg-fortigate-high-availability-azure.pdf.

Deployment

To achieve an active/active model you must deploy two separate FortiGates using the single VM deployment option and then deploy the Azure load balancers separately.

I will not be going through how to deploy the FortiGates and the required VNets, subnets, route tables, etc. as that information can be found here on Fortinet’s support site: http://cookbook.fortinet.com/deploying-fortigate-azure/.

NOTE: When deploying each FortiGate, ensure they are deployed into different frontend and backend subnets, otherwise the route tables will end up routing all traffic to one FortiGate.

Once you have two FortiGates, a public load balancer and an internal load balancer deployed in Azure, you are ready to configure the FortiGates.

Configuration

NOTE: Before proceeding, ensure you have configured static routes for all your Azure subnets on each FortiGate, otherwise the FortiGates will not be able to route Azure traffic correctly.

Outbound traffic

Directing all internet traffic from Azure via the FortiGates requires some configuration on the Azure internal load balancer and a user defined route.

  1. Create a load balance rule with:
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Health probe: Health probe port (e.g. port 22)
    • Session Persistence: Client IP
    • Floating IP: Enabled
  2. Repeat step 1 for port 80 and any other ports you require
  3. Create an Azure route table with a default route to the Azure internal load balancer IP address
  4. Assign the route table to the required Azure subnets

IMPORTANT: In order for the load balance rules to work you must add a static route on each FortiGate for IP address 168.63.129.16. This is required for the Azure health probe to communicate with the FortiGates and perform health checks.

FG Azure Health Probe Cfg
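
For reference, the internal load balancer rule, health probe and user defined route described above can also be configured with the AzureRM PowerShell module. A rough sketch, with placeholder resource names and IP addresses:

# Internal load balancer: health probe plus a floating-IP rule for port 443
$ilb = Get-AzureRmLoadBalancer -Name 'fortigate-ilb' -ResourceGroupName 'fw-rg'
$ilb = $ilb | Add-AzureRmLoadBalancerProbeConfig -Name 'probe-22' -Protocol Tcp -Port 22 `
        -IntervalInSeconds 5 -ProbeCount 2
$ilb = $ilb | Add-AzureRmLoadBalancerRuleConfig -Name 'outbound-443' -Protocol Tcp `
        -FrontendPort 443 -BackendPort 443 -LoadDistribution SourceIP -EnableFloatingIP `
        -FrontendIpConfiguration $ilb.FrontendIpConfigurations[0] `
        -BackendAddressPool $ilb.BackendAddressPools[0] `
        -Probe ($ilb.Probes | Where-Object Name -eq 'probe-22')
$ilb | Set-AzureRmLoadBalancer

# Route table sending all outbound traffic to the internal load balancer's IP
$rt = New-AzureRmRouteTable -Name 'rt-via-fortigate' -ResourceGroupName 'fw-rg' -Location 'Australia East'
$rt | Add-AzureRmRouteConfig -Name 'default-route' -AddressPrefix '0.0.0.0/0' `
        -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.10' | Set-AzureRmRouteTable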

Once complete the outbound internet traffic flow will be as follows:

FG Internet Traffic Flow

Inbound traffic

Publishing something like a web server to the internet through the FortiGates requires some configuration on the Azure public load balancer.

Let’s say I have a web server that resides on my Azure DMZ subnet that hosts a simple website on HTTPS/443. For this example the web server has IP address: 172.1.2.3.

  1. Add an additional public IP address to the Azure public load balancer (for this example let’s say the public IP address is: 40.1.2.3)
  2. Create a load balance rule with:
    • Frontend IP address: 40.1.2.3
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Session Persistence: Client IP
  3. On each FortiGate create a VIP address with:
    • External IP Address: 40.1.2.3
    • Mapped IP Address: 172.1.2.3
    • Port Forwarding: Enabled
    • External Port: 443
    • Mapped Port: 443

FG WebServer VIP Cfg
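
The Azure side of this (steps 1 and 2 above) can be scripted in much the same way as the outbound configuration. A sketch with placeholder names, assuming the public load balancer already has a health probe:

# Add the additional public IP and a matching frontend configuration
$pip = New-AzureRmPublicIpAddress -Name 'web-pip' -ResourceGroupName 'fw-rg' `
         -Location 'Australia East' -AllocationMethod Static
$elb = Get-AzureRmLoadBalancer -Name 'fortigate-elb' -ResourceGroupName 'fw-rg'
$elb = $elb | Add-AzureRmLoadBalancerFrontendIpConfig -Name 'web-frontend' -PublicIpAddress $pip

# Load balance 443 on the new frontend to the FortiGate backend pool
$elb = $elb | Add-AzureRmLoadBalancerRuleConfig -Name 'web-443' -Protocol Tcp `
        -FrontendPort 443 -BackendPort 443 -LoadDistribution SourceIP `
        -FrontendIpConfiguration ($elb.FrontendIpConfigurations | Where-Object Name -eq 'web-frontend') `
        -BackendAddressPool $elb.BackendAddressPools[0] -Probe $elb.Probes[0]
$elb | Set-AzureRmLoadBalancer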

You can now create a policy on each FortiGate to allow HTTPS to the VIP you just created; HTTPS traffic will then be allowed through to your web server.

For details on how to create policies/VIPs on FortiGates, refer to the Fortinet support website: http://cookbook.fortinet.com.

Once complete the traffic flow to the web server will be as follows:

FG Web Traffic Flow

Amazon QuickSight – An elegant and easy to use business analytics tool

First published at https://nivleshc.wordpress.com

Introduction

Recently, I had a requirement for a tool to visualise some data I had collected. My requirements were very simple. I didn’t want something that would cost me a lot, and at the same time I wanted the reports to be elegant and informative. Most of all, I didn’t want to have to go through pages and pages of documentation to learn how to use it.

As my data was within Amazon Web Services (AWS), I thought to check if AWS had any such offerings. Guess what, there was indeed a tool just for what I wanted, and after using it, I was amazed at how simple and elegant it is.

In this blog, I will show how you can easily get started with Amazon QuickSight. I will take you through the steps to import your data into Amazon QuickSight and then create some informative visualisations.

Some background on Amazon QuickSight

Pricing

Amazon QuickSight is very inexpensive; in fact, if you don’t have too much data, you won’t have to pay anything!

For standard edition use, Amazon QuickSight provides 1GB of SPICE for the first user free per month. SPICE is an acronym for Super-fast, Parallel, In-memory, Calculation Engine and it uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, machine code generation, and data compression to allow users to run interactive queries on large datasets and get rapid responses.  SPICE is the calculation engine that Amazon QuickSight uses.

Any additional SPICE is priced at $USD0.25 per GB/month. For the latest pricing, please refer to https://aws.amazon.com/quicksight/#Pricing

Data Sources

Currently Amazon QuickSight supports the following data sources

  • Relational Data Sources
    • Amazon Athena
    • Amazon Aurora
    • Amazon Redshift
    • Amazon Redshift Spectrum
    • Amazon S3
    • Amazon S3 Analytics
    • Apache Spark 2.0 or later
    • Microsoft SQL Server 2012 or later
    • MySQL 5.1 or later
    • PostgreSQL 9.3.1 or later
    • Presto 0.167 or later
    • Snowflake
    • Teradata 14.0 or later
  • File Data Sources
    • CSV/TSV – (comma separated, tab separated value text files)
    • ELF/CLF – Extended and common log format files
    • JSON – Flat or semi-structured data files
    • XLSX – Microsoft Excel files

Unfortunately, currently Amazon DynamoDB is not supported as a native data source. Since my data is in Amazon DynamoDB, I had to write some custom lambda functions to export it to a csv file, so that it could be imported into Amazon QuickSight.

Ok, time for that walk-through I promised earlier.  For this blog, I will be using an S3 bucket as my data source. It will contain the CSV files that I will use for analysis in Amazon QuickSight.

Step 1 – Create S3 buckets

If you haven’t already done so, create an S3 bucket that will contain the csv files. The S3 bucket does not have to be publicly accessible. Once created, upload the csv files into the S3 bucket.

In my case, the csv file is called orders.csv and its location is https://s3.amazonaws.com/sample/orders.csv (to get the URL to your S3 file, login to the S3 console and navigate to the S3 bucket that contains the file. Click the S3 bucket to open it, then click the file name to open its properties. Under Overview you will see Link. This is the URL to the file)

Step 2 – Create an Amazon QuickSight Account

Before you start using Amazon QuickSight, you must create an account. Unfortunately, I couldn’t find a way for creating an Amazon QuickSight account without creating an Amazon AWS account. If you don’t have an existing Amazon AWS account, you can create an AWS Free Tier account. Once you have got an AWS account, go ahead and create an Amazon QuickSight account at https://aws.amazon.com/quicksight/.

While creating your Amazon QuickSight account, you will be asked if you would like Amazon QuickSight to auto-discover your Amazon S3 buckets. Enable this and then click to Choose S3 buckets. Choose the S3 bucket that you created in Step 1 above. This will give Amazon QuickSight read-only access to the S3 bucket, so that it can read the data for analysis.

Step 3 – Create a manifest file

A manifest file is a JSON file that provides the location and format of the data files to Amazon QuickSight. This is required when creating a data set for S3 data sources. Please refer to https://docs.aws.amazon.com/quicksight/latest/user/supported-manifest-file-format.html if you would like more information about manifest files.

Below is my manifest file, which I have affectionately named ordersmanifest.json.

{
   "fileLocations": [
      {
         "URIs": [
            "https://s3.amazonaws.com/sample/orders.csv"
         ]
      }
   ],
   "globalUploadSettings": {
      "format": "CSV",
      "delimiter": ",",
      "textqualifier": "'",
      "containsHeader": "true"
   }
}

Once created, upload the manifest file into the same S3 bucket where the csv file is stored.

Step 4 – Create a data set

  • Login to your Amazon QuickSight account. From the top right, click on Manage data
  • In the next screen, click on New data set
  • In the next screen, for Create a Data Set FROM NEW DATA SOURCES, click on S3
  • In the next screen
    • provide a name for the data source
    • for Upload a manifest file ensure URL is clicked and enter the URL to the manifest file (you can get the url by logging into the S3 console, and then clicking on the manifest file to reveal its properties. Under the Overview tab, you will see Link. This is the URL to the manifest file).NewS3DataSource
    • Click Connect
    • Amazon QuickSight will now read the manifest file and then import the csv file to SPICE. You will see the following screenFinishDataSetCreation
    • Click on Edit/Preview data.
    • In the next screen, you will see the contents of the data file that was imported, along with the Fields name on the left. If you want to exclude any columns from the analysis, simply untick them (I unticked orderTime (S) since I didn’t need it) EditPreviewDataSet
    • By default, the data is called Group 1. To customise the name, replace Group 1 with a text of your choice (I have renamed my data to Orders Data)RenameGroup1Label
    • Click Save & visualize from the top menu

Step 5 – Create Visualisations

Now that you have imported the data into SPICE, you can start analysing it and creating visualisations.

After step 4, you should be in the Analysis section.

  • Depending on which visualisation you want, you can select the respective type under Visual types from the bottom left hand side of the screen. For my visualisations, I chose Pie Chart (side note – you will notice that orderTime (S)  isn’t listed under Fields list. This is because we had unticked it in the previous screen)OrdersDataAnalysis-01
  • I want to create two Pie Charts, one to show me analysis about what is the most popular foodName and another to find out what is the most popular drinkName. For the first Pie Chart, drag foodName (S) from the Fields list to the Value – Add a measure here box  in the top of the screen. Then drag foodName (S) from the Fields list to the Group/Color – Add a dimension here box in the top of the screen. You will see the followingOrdersDataAnalysis-02
  • You can customise the visualisation title Count of Foodname (S) by Foodname (S) by clicking it and then changing the text (I have changed the title to Popularity of Food Types)FoodNamePopularity
  • If you look closely, the legend on the right hand side doesn’t serve much purpose since the pie slices are already labelled quite well. You can also get rid of the legend and get more space for your visual. To do this, click on the down arrow above FoodName (S) on the right and then select Hide legend FoodNameHideLengend
  • Next, lets create a Pie Chart visualisation for drinkName. From the top menu, click on Add and then Add visual drinkNameAddVisual
  • You will now have another Canvas at the bottom of the first Pie Chart. Click this new canvas area to select it (a blue border will appear to show that it is selected). From Visual types at the bottom left hand side, click on the Pie Chart visual. Then from the top, click on Field wells to expose the Value and Group/Color boxes for the second canvas drinkNameCanvas
  • From the Field list on the left, drag drinkName (S) to the Value – Add a measure here box  in the top of the screen. Then drag drinkName (S) from the Fields list to the Group/Color – Add a dimension here box in the top of the screen. You will now see the following foodanddrinkvisual
  • We are almost done. I actually want the two Pie Charts to sit side by side, instead of one on top of the other. To do this, I will show you a neat trick. In each of the visuals, at the bottom right border, you will see two diagonal lines. If you move your mouse pointer over them, they change to a resizing cursor. Use this to resize the visual’s canvas area. Also, in the middle of the top border of the visual, you will see two rows of gray dots. Click your mouse pointer on this and drag to the location you want to move the visual to.VisualResizeandMove
  • I have hidden the legend for the second visual, customised the title and resized both the visuals and moved them side by side. Voila! Below is what I get. Not bad aye!BothVisualsSidebySide

Step 6 – Create a dashboard

Now that the visuals have been created, they can be shared with others. This can be done by creating a dashboard. A dashboard is a read-only snapshot of the analysis. When you share the dashboard with others, they can view and filter the dashboard data; however, any filters applied to a dashboard visual exist only while the user is viewing the dashboard, and aren’t saved once it is closed.

One thing to note about sharing dashboards – you can only share dashboards with users who have an Amazon QuickSight account.

Creating a dashboard is very easy.

  • In the Analysis screen, on the top right corner, click on Share and then select Create dashboardCreateDashboard
  • You can either replace an existing dashboard or create a new one. In our case, since we are creating a new dashboard, select Create a new dashboard as and enter a name for the dashboard. Once finished, click Create dashboardCreateDashboard-Name
  • You will then be asked to enter the username or email address of those you want to share the dashboard with. Enter this and click on Share ShareDashboard
  • That’s it, your dashboard is now created. To access it, go to the Amazon QuickSight home screen (click on the Amazon QuickSight icon on the top left hand side of the screen) and then click on All dashboards. Those that you have shared the dashboard with will also be able to see it once they login to their Amazon QuickSight account.AllDashboards

Step 7 – Refreshing the Data Set

If your data set continually changes, your visualisations/dashboards will not show the updated information until the data set is refreshed. Refreshing imports the new data into SPICE, which then automatically updates the analyses/visualisations and dashboards.

Note: you will have to manually reload the webpage to see the updated visualisations and dashboard

There are two ways of refreshing data sets. One is to do it manually while the other is to use a schedule. The scheduled data refresh allows for the data to be automatically refreshed at a certain time daily, weekly or monthly. A maximum of five scheduled refreshes can be configured.

The steps below show how you can manually refresh the data or create schedules to refresh the data

  • From the Amazon QuickSight main screen, click on Manage data from the top left of the screen ManageData
  • In the next screen, you will see all your currently configured data sets. Click the Orders Data dataset (this is the one we had created previously).
  • In the next screen, you will see Refresh Now and Schedule refreshManualScheduleDataRefresh
  • Clicking on Refresh Now will manually refresh the data. Clicking on Schedule refresh will bring up the screen where you can configure a schedule for refreshing the data automatically.

 

That’s it folks! Wasn’t that simple? If you already have an Amazon AWS account, I would strongly recommend giving Amazon QuickSight a try for all your analytics needs. Even if you don’t have an Amazon AWS account, I would still suggest getting an AWS free tier account to try it out.

Enjoy 😉

 

Azure Update Management

How do you patch/update your infrastructure in Azure, AWS, On-Premises? There are many ways, of course, including manually, built-in scheduled update, Group Policy, locally scripted, ConfigMgr, custom Azure Automation, WSUS, and so on.

Somewhat recently, another option “Azure Update Management” has become available, and it is FREE*. This is an expanded offering of what used to be OMS Update Management, integrated into the main Azure Portal and visible on each VM under the “Update Management” node.

Rather than regurgitate the existing documentation and tutorial, I want to highlight some of the finer points:

  • Yes, supports Windows and Linux
  • Requires ‘supported’ versions of Windows or Linux
  • Does not support ‘client’ versions e.g. Windows 7/8/8.1/10
  • Requires .NET Framework 4.5 and Windows Management Framework 5.0 or later on Windows 2008R2 SP1
  • Windows Server 2008 or 2008 R2 without SP1 won’t apply updates, just scan/assess
  • Update targets must have access to an update repository
    • WSUS, ConfigMgr SUP, or Microsoft Update
    • Linux package repository, either locally managed, or the OS default
  • Integration with ConfigMgr requires current branch 1606 or newer

Caveats

  • If an update reports that it requires a reboot, the VM will reboot. Currently there appears to be no way to avoid/defer a reboot
  • Windows VMs only scan for updates every 12 hours
  • Linux VMs scan for updates every 3 hours

I was about to build a WSUS server in an Azure subscription to address a number of manually updated or otherwise unmanaged Azure VMs, which was going to cost a minimum of about AUD $200 per month. This appears to be a nearly ideal solution to me and very attractive to the client at the ‘nearly free’ price point.

This is my new ‘go-to’ for update management; I hope it looks as good to you, and simplifies an important part of your environment.

FREE*

  • Requires a Log Analytics storage account, which does cost a small amount for ingestion and storage for 31 days (https://azure.microsoft.com/en-us/pricing/details/log-analytics/). The first 5 GB ingestion per month is free, which should be good for 50-100 VMs, leaving about 20c/GB/Month for storage costs – so maybe $1 per month for up to 100 VMs!
  • No further costs involved for Azure VMs
  • Non-Azure VMs/Computers could incur further charges, depending on your environment, use of extra Azure Automation/Configuration Management features, etc

Sample Update Status (Server name column removed)

AzureUpdateManagement

Deploy VM via ARM template: Purchase eligibility failed

I recently tried to deploy a VM using an ARM template executed via PowerShell and I encountered the purchase eligibility failed error as seen below.

PurchaseEligibilityFailedError

As I had encountered this before, I ensured I had accepted the marketplace terms for the VM image in question using the PowerShell commands:

Get-AzureRmMarketplaceTerms -Publisher PublisherName -Product ProductName -Name Name | Set-AzureRmMarketplaceTerms -Accept
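
If you’re not sure which values to pass for Publisher, Product and Name, they map to the image’s publisher, offer and SKU, which you can list with the AzureRM module (the location and placeholder names below are illustrative):

# Walk down from publisher to offer to SKU to find the right values
Get-AzureRmVMImagePublisher -Location 'Australia East'
Get-AzureRmVMImageOffer -Location 'Australia East' -PublisherName 'PublisherName'
Get-AzureRmVMImageSku -Location 'Australia East' -PublisherName 'PublisherName' -Offer 'ProductName'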

I then reattempted to deploy my VM using my ARM template and still got the same error. I even waited 24 hours and tried again, with no luck.

I then discovered that from the Azure portal you can create a new resource using the “Template deployment” option and deploy your ARM template via the Azure portal.

TemplateDeploymentOption

After I uploaded and executed my ARM template using this method it deployed my VM successfully with no purchase eligibility errors.

Any subsequent VM deployments via PowerShell using the exact same ARM template now worked as expected with no purchase eligibility errors.

Azure ARM architecture pattern: a DMZ design with a firewall appliance

I’m in the process of putting together a new Azure design for a client. As always in Azure, the network components form the core of the design. There were a couple of key requirements that needed to be addressed, which the existing environment had outgrown: a lack of any layer 7 heightened edge security controls and a lack of a DMZ.

I was going through some designs that I’ve previously done and was checking the Microsoft literature on what some fresh design patterns might look like, in case anything’s changed in recent times. There is still only a single reference in the Microsoft Azure documentation, and it still references ASM and not ARM.

For me then, it seems that the existing pattern I’ve used is still valid. Therefore, I thought I’d share what that architecture would look like via this quick blog post.

My DMZ with a firewall appliance design

Here’s an overview of some key points on the design:

  • Firewall appliances will have multiple interfaces, but, typically will have 2 that we are mostly concerned about: an internal interface and an external interface
  • Network interfaces in ARM are now independent objects from compute resources
    • As such, an interface needs to be associated with a subnet
    • Both the internal and external interfaces could in theory be associated with the same subnet, but that’s a design for another blog post some day
  • My DMZ design features two subnets in two zones
    • Zone 1 = “Untrusted”
    • Zone 2 = “Semi-trusted”
    • Zone 3 = “Trusted” – this is for reference purposes only so you get the overall picture
  • Simple subnet design
    • Subnet 1 = “External DMZ”
    • Subnet 2 = “Internal DMZ”
    • Subnet 3 = Trusted
  • Network Security Groups (NSGs) are also used to encapsulate the DMZ subnets and to isolate traffic from the VNET
  • Through this topology there are effectively three layers of firewall between the untrusted zone and the trusted zone
    • External DMZ NSG, firewall appliance and Internal DMZ NSG
  • With two DMZ subnets (external DMZ and internal DMZ), there are two scenarios for deploying DMZ workloads
    • External DMZ = workloads that do not require heightened security controls by way of the firewall
      • Workload example: proxy server, jump host, additional firewall appliance management or monitoring interfaces
      • Alternatively, this subnet does not need to be used for anything other than the firewall edge interface
    • Internal DMZ = workloads that require heightened security controls and also (by way of best practice) shouldn’t be deployed in the trusted zone
      • Workload example: front end web server, Windows WAP server, non domain joined workloads
  • Using the firewall as an edge device requires route tables to force all traffic leaving a subnet to be directed to the firewall internal interface
    • This applies to the internal DMZ subnet and the trusted subnet
    • The External DMZ subnet does not have a route table configured so that the firewall is able to route out to Azure internet and the greater internet
  • Through VNET peering, other VNETs and subnets could also route out to Azure internet (and the greater WWW) via this firewall appliance – again, leverage route tables
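
As a rough PowerShell sketch of the routing piece (every name and IP address below is a placeholder; the next hop IP is the firewall appliance’s internal interface):

# Route table forcing traffic leaving a subnet through the firewall's internal interface
$rt = New-AzureRmRouteTable -Name 'rt-via-firewall' -ResourceGroupName 'network-rg' -Location 'Australia East'
$rt | Add-AzureRmRouteConfig -Name 'default-via-fw' -AddressPrefix '0.0.0.0/0' `
       -NextHopType VirtualAppliance -NextHopIpAddress '10.0.2.4' | Set-AzureRmRouteTable

# Associate the route table (and the internal DMZ NSG) with the internal DMZ subnet
$nsg  = Get-AzureRmNetworkSecurityGroup -Name 'internal-dmz-nsg' -ResourceGroupName 'network-rg'
$vnet = Get-AzureRmVirtualNetwork -Name 'core-vnet' -ResourceGroupName 'network-rg'
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'internal-dmz' `
    -AddressPrefix '10.0.3.0/24' -RouteTable $rt -NetworkSecurityGroup $nsg
$vnet | Set-AzureRmVirtualNetwork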

Cheers 🙂

Azure ARM architecture pattern: the correct way to deploy a DMZ with NSGs

Isolating any subnet in Azure can effectively create a DMZ. Doing this correctly, though, is certainly super easy, but it is also something that can easily be done incorrectly.

Firstly, all that is required is an NSG and associating it with any given subnet (caveat: remember that NSGs are not compatible with the GatewaySubnet). Doing this will deny most traffic to and from that subnet, mostly traffic relating to the “Internet” tag. What is easily missed is applying a deny all rule set in both the inbound and outbound rules of the NSG itself.

I’ve seen some clients that have put an NSG on a subnet and assumed that subnet was protected. Unfortunately, that’s not correct.

I’ve seen some clients that have put a deny all inbound from the internet, and vice versa a deny all outbound to the internet, and assumed that the subnet was protected and isolated. Unfortunately, that’s also not correct.

How to correctly isolate a subnet to create a DMZ

Azure has 3 default rules that apply to an NSG.

These rules are:

  • AllowVnetInBound (priority 65000) – allows traffic from the VirtualNetwork tag, which covers every other subnet in your VNET as well as peered VNETs
  • AllowAzureLoadBalancerInBound (priority 65001) – allows the Azure load balancer (including its health probes) to reach the subnet
  • DenyAllInBound (priority 65500) – denies everything else

The outbound side has the matching AllowVnetOutBound, AllowInternetOutBound and DenyAllOutBound rules. To view these default rules, at the top of the NSG inbound or outbound rules, select the Default rules button.

Of these rules, the two higher-precedence allow rules are the ones that can trip people up. The main culprit, rule 65000, means that any other subnet in your VNET, and any other VNET that is peered with your VNET, is allowed to communicate with your given subnet.

To correctly isolate a subnet in a VNET, we need to create a new rule for both inbound and outbound (I always use 4096, the lowest-precedence priority number available for custom rules) that denies all (*) ports for all (*) IPs or subnets, which will override those default rules. Azure NSGs work by way of precedence: the lower the rule priority number, the higher the priority when processing the rules (much like any other firewall or network appliance vendor ACL), so any custom rule wins over the 65000-range defaults. This 4096 deny rule denies all protocols, all ports and all source and destination addresses, in both directions.
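
A rough PowerShell equivalent of those two rules (assuming an existing NSG called 'dmz-nsg'; the rule and resource group names are placeholders):

# Add the catch-all deny rules at the lowest custom precedence (4096) in both directions
$nsg = Get-AzureRmNetworkSecurityGroup -Name 'dmz-nsg' -ResourceGroupName 'network-rg'

$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name 'DenyAll-Inbound' -Priority 4096 `
        -Direction Inbound -Access Deny -Protocol '*' -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange '*' | Out-Null

$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name 'DenyAll-Outbound' -Priority 4096 `
        -Direction Outbound -Access Deny -Protocol '*' -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange '*' | Out-Null

$nsg | Set-AzureRmNetworkSecurityGroup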

You’ll also notice two warnings when you attempt to save the inbound rule in the portal, which explain more succinctly what happens:

  • Warning #1 – This rule denies traffic from AzureLoadBalancer and may affect virtual machine connectivity. To allow access, add an inbound rule with higher priority to allow AzureLoadBalancer to VirtualNetwork.
  • Warning #2 – This rule denies virtual network access. If you wish to allow access to your virtual network, add an inbound rule with higher priority to Allow VirtualNetwork to VirtualNetwork.

With that, traffic from the Azure load balancer and VNET traffic from other subnets within the same VNET (or from other VNETs through peering) will be denied. Thus, we have a truly isolated subnet and one that can be set up as a DMZ.

Cheers!

BONUS

Before I go, I just wanted to quickly mention that in Azure, NSGs can also be applied to a single network interface.

If you flip the methodology there, it’s quite possible to have no NSGs on any subnets, but rather apply NSGs to every interface associated with servers and instances in a VNET. There is a key drawback with this approach though: administrative overhead.

With an NSG associated with a server instance, again following the deny all rule mentioned earlier, a single NIC or a single server instance can be isolated in a pseudo DMZ. The challenge then is, if this process is repeated across 100 servers, keeping all those NSGs up to date and replicating rules when servers need to communicate on various ports or protocols. Administrative overhead indeed!


Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.

Implementing a Break Glass Process with AWS Systems Manager

Modern day organisations rely on systems to perform critical, sometimes lifesaving tasks. As a result, a common requirement for many organisations is a break-glass process, providing the ability to bypass normal access control procedures when existing authentication mechanisms fail. The implementation of a break glass system often involves considerable effort to ensure the process is not open to malicious use and is auditable, yet simple and efficient. The good news is AWS Systems Manager (SSM) with AWS Key Management Service (KMS) can be leveraged to allow administrative users the ability to recover access to systems on-demand, without having to bake in privileged users with predefined passwords on systems.

How the AWS Systems Manager Break Glass solution works

Before we get into the configuration details, let’s walk through how this all works.

  1. The break-glass process is initiated when an administrative user invokes SSM Run Command against a target system using a custom SSM document for Windows or Linux.
  2. The commands in the SSM document are invoked and the root/admin password is set to a random string of characters. The string is then encrypted using KMS and stored in the SSM Parameter store.
  3. CloudWatch events detects that SSM Run Command has completed successfully and initiates a Lambda function to clean up the reset password.
  4. The Lambda function waits for 60 seconds, then removes the password from the parameter store.

As you can see, there’s minimal password management required for this solution without having to compromise security. Now that we have an understanding of how the solution hangs together, let’s take a look at how to set it up.

Creating the Customer Master Key

To begin, we need to create a key that will be used to encrypt passwords written to SSM parameter store. You can use the IAM section of the AWS Management Console to create a Customer Master Key by performing the following:

  1. Open the Encryption Keys section of the Identity and Access Management (IAM) console.
  2. For Region, choose the appropriate AWS region.
  3. Choose Create key.
  4. Type an alias for the CMK. Choose Next Step.
  5. Select which IAM users and roles can administer the CMK. Choose Next Step.
  6. Select which IAM users can use the CMK to encrypt and decrypt data. These users will be able to perform the break glass process. Choose Next Step.
  7. Choose Finish to create the CMK.

Creating the EC2 Policy

Great, so we’ve got a key set up. We now need to provide our instances access to encrypt the password and store it in the SSM parameter store. To do this, we need to create a custom IAM policy by performing the following:

  1. Open the IAM console.
  2. In the navigation column on the left, choose Policies.
  3. At the top of the page, choose Create Policy.
  4. On the Create Policy page choose Select on Create Your Own Policy.
  5. For Policy Name, type a unique name.
  6. The policy document you’ll want to use is defined below. Note that the key ARN defined here is the CMK created in the previous step.
  7. When you are done, choose Create Policy to save your completed policy.

Creating the EC2 Role

We now need to assign the policy to our EC2 instances. Additionally, we need to allow our instances access to communicate with the SSM endpoint. To do this, we’ll need to create an appropriate EC2 role:

  1. Open the IAM console.
  2. In the navigation pane, choose Roles, Create new role.
  3. On the Select role type page, choose Select next to Amazon EC2.
  4. On the Attach Policy page, select AmazonEC2RoleforSSM and the policy you created in the previous step.
  5. On the Set role name and review page, type a name for the role and choose Create role.

Attaching the Role to the EC2 Instance

After creating the EC2 role, we then need to attach it to the target instance(s).

  1. Navigate to the EC2 console.
  2. Choose Instances in the navigation pane.
  3. Select the target instance you intend to test the break-glass process on.
  4. Choose Actions, choose Instance Settings and then Attach/Replace IAM role from the drop-down list.
  5. On the Attach/Replace IAM role page, choose the role created in the previous step from the drop-down list.
  6. After choosing the IAM role, proceed to the next step by choosing Apply.

Creating the Password Reset SSM Document

An AWS Systems Manager Document defines the actions that are performed on the target instance(s). We need to create a multi-step cross-platform document that can reset Linux or Windows passwords based on the target platform. To do this, perform the following:

  1. Open the Amazon EC2 console.
  2. In the navigation pane, choose Documents.
  3. Choose Create Document.
  4. Type a descriptive name for the document.
  5. In the Document Type list, choose Command.
  6. Delete the brackets in the Content field, and then paste the document below containing scripts for Windows and Linux. Remember to replace the CMKs and region in both scripts.
  7. Choose Create Document to save the document.

Congratulations! So far, you’ve set up the password reset functionality. Technically, you could stop here and you’d have a working break-glass capability, however we’re going to go one step further and add a clean-up process to remove the password from the parameter store for added security, as described below.

Creating the Lambda Function Policy

Our password clean-up process will use a Lambda function to delete the password from the parameter store. We’ll need to create an IAM policy to allow the Lambda function to do this.

  1. Open the IAM console.
  2. In the navigation column on the left, choose Policies.
  3. At the top of the page, choose Create Policy.
  4. On the Create Policy page choose Select on Create Your Own Policy.
  5. For Policy Name, type a unique name.
  6. The policy document to use is defined below.
  7. When you are done, choose Create Policy to save your completed policy.

Creating the Lambda Function Role

We now need to attach the policy to a role that will be used by our lambda function.

  1. Open the IAM console.
  2. In the navigation pane, choose Roles, then Create new role.
  3. On the Select role type page, choose Select next to AWS Lambda.
  4. On the Attach Policy page, select CloudWatchLogsFullAccess (for logging purposes) and the policy you created in the previous step.
  5. On the Set role name and review page, type a name for the role and choose Create role.

Creating the Lambda Function

We now need to create the Lambda Function that will delete the password, and attach the role created in the previous step.

  1. Open the AWS Lambda console.
  2. Choose Create Function.
  3. Choose Author from scratch.
  4. On the triggers page, click Next.
  5. Under Basic Information enter a name for your function and select the Python 2.7 runtime.
  6. Under Lambda function code, enter the code below.
  7. Under Lambda Function handler and role, choose the role you created in the previous step.
  8. Expand advanced settings and extend the timeout to 90 seconds.
  9. Choose Next and review the summary page then click Create Function.

Creating the CloudWatch Event

Almost there! The last step is to capture a successful execution of SSM Run Command, then trigger the previously created Lambda function. We can capture this using CloudWatch events:

  1. Open the CloudWatch console.
  2. In the navigation pane, choose Events.
  3. Choose Create rule.
  4. For Event Source, choose Event Pattern, then choose Build custom event pattern from the dropdown box.
  5. Enter the following into the text box, replacing the document name with the SSM document that was created earlier.
  6. For Targets, choose Add target and then choose Lambda function.
  7. For Function, select the Lambda function that was created in the previous step.
  8. Choose Configure details.
  9. For Rule definition, type an appropriate name.
  10. Choose Create rule.

That’s it! All that’s left is taking the process for a test drive. Let’s give it a shot.

Testing the Process

Assuming you’ve logged into the console with a user that has decrypt access for the CMK used, the following process can be used to access the password:

  1. Open the Amazon EC2 console.
  2. In the navigation pane under Systems Manager Services, choose Run Command.
  3. Choose Run a command.
  4. For Command document, choose the SSM Document created earlier.
  5. For Target instances, choose an instance that has the previously created EC2 role attached. If you do not see the instance in this list, it might not have the correct role attached, or may not be able to access the SSM endpoint.
  6. Choose Run, and then choose View results.
  7. In the commands list, choose the command you just executed. If the command is still in progress, click the refresh icon in the top right corner of the console.
  8. When the Status column shows Success, click the Output tab.
  9. The output tab will display successful execution of both plugins described in our SSM document. If we click View Output on both we’ll notice that one didn’t execute due to not meeting the platform precondition we set. The other plugin should show that the password reset executed successfully.

  1. In the navigation pane under Systems Manager Shared Resources, choose Parameter Store.
  2. Assuming 60 seconds hasn’t elapsed (because our clean-up function will kick in and delete the password) there should be a parameter called pwd-<instance-ID>. After selecting the parameter, the Description tab below will show a SecureString.
  3. Click on Show to view the password.

You can now use this password to access the administrator/root account of your instance. Assuming the password clean-up script is configured correctly, the password should disappear from the parameter store within 60 seconds of the Run Command completing.
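
If you prefer to drive the same test from a command line rather than the console, the equivalent operations are exposed through the AWS Tools for PowerShell. A rough sketch (the document name and instance ID are placeholders, and the parameter has to be read within the 60 second clean-up window):

# Invoke the break-glass document against the target instance
$instanceId = 'i-0123456789abcdef0'
Send-SSMCommand -DocumentName 'BreakGlassPasswordReset' -InstanceId $instanceId

# Once Run Command reports Success, retrieve and decrypt the temporary password
(Get-SSMParameterValue -Name "pwd-$instanceId" -WithDecryption $true).Parameters[0].Value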

Conclusion

The above process provides a simple and secure method for emergency access to both Windows and Linux systems, without the complex process and inherent risk of a traditional break-glass system. Additionally, this method has no running systems, providing a break-glass capability at nearly no cost.