Preparing your Docker container for Azure App Services

Like other cloud platforms, Azure is increasingly leveraging containers to provide flexible managed environments for running applications. App Service on Linux is one such case: it allows us to bring our own home-baked Docker images containing all the tools we need to make our apps work.

This service is still in preview and obviously has a few limitations:

  • Only one container per service instance, in contrast to Azure Container Instances.
  • No VNET integration.
  • An SSH server is required to attach to the container.
  • Single port configuration.
  • No ability to limit the container’s memory or processor.

Having said this, we do get a good 50% discount for the time being, which is not a bad thing.

The basics

In this post I will cover how to set up an SSH server in our Docker images so that we can inspect and debug our containers hosted in the Azure App Service for Linux.

It is important to note that running SSH inside containers is a widely discouraged practice and should be avoided in most cases. Azure App Services mitigates the risk by granting SSH port access only to the Kudu infrastructure, which we tunnel through. However, we don’t need SSH at all when we are not running in the App Services engine, so we can protect ourselves by enabling SSH only when a flag such as an ENABLE_SSH environment variable is present.

Running an SSH daemon alongside our app also means that we will have more than one process per container. For cases like these, Docker lets us enable an init manager per container that makes sure no orphaned child processes are left behind on container exit. Since this feature requires passing a flag to docker run, and App Services does not grant that level of control for security reasons, we must package and configure the init binary ourselves when building the Docker image.

Building our Docker image

TLDR: docker pull xynova/appservice-ssh

The basic structure looks like the following:
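Reconstructed from the files referenced throughout this post (the http-echo source directory is an assumption on my part), the build context might be laid out like this:

docker-ssh/
├── Dockerfile
├── entrypoint.sh
├── http-echo/            (source for the sample web server, assumed)
└── ssh-config/
    └── sshd_config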

The SSH configuration

The /ssh-config/sshd_config specifies the SSH server configuration required by App Services to establish connectivity with the container:

  • The daemon needs to listen on port 2222.
  • Password authentication must be enabled.
  • The root user must be able to login.
  • Ciphers and MACs security settings must be set as shown below.
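A sketch of such an sshd_config, based on the requirements above and the settings Azure documents for custom images (treat the exact cipher and MAC lists as indicative):

Port                   2222
ListenAddress          0.0.0.0
LoginGraceTime         180
X11Forwarding          no
Ciphers                aes128-cbc,3des-cbc,aes256-cbc
MACs                   hmac-sha1,hmac-sha1-96
StrictModes            yes
SyslogFacility         DAEMON
PasswordAuthentication yes
PermitEmptyPasswords   no
PermitRootLogin        yes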

The container startup script

The entrypoint.sh script manages the application startup:

If the ENABLE_SSH environment variable equals true, the setup_ssh() function does the following:

  • Changes the root user’s password to Docker! (required by App Services).
  • Generates the SSH host keys that SSH clients require to authenticate the server.
  • Starts the SSH daemon in the background.

App Services requires the container to have an application listening on the configurable public service port (80 by default). Without this listener, the container is flagged as unhealthy and restarted indefinitely. The start_app() function, as its name implies, runs a web server (http-echo) that listens on port 80 and simply prints all incoming request headers back out in the response.
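Putting the pieces together, a hedged sketch of entrypoint.sh (the helper names follow the post; the http-echo location and its behaviour on port 80 are assumptions):

#!/bin/sh
# sketch only: reconstructed from the post's description

setup_ssh() {
    # App Services requires the root password to be "Docker!"
    echo "root:Docker!" | chpasswd
    # generate the SSH host keys that clients use to authenticate the server
    ssh-keygen -A
    # start the SSH daemon in the background
    /usr/sbin/sshd
}

start_app() {
    # App Services needs a listener on the public service port (80 by default);
    # http-echo is assumed to listen on :80 and echo request headers back
    exec /usr/local/bin/http-echo
}

if [ "$ENABLE_SSH" = "true" ]; then
    setup_ssh
fi

start_app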

The Dockerfile

There is nothing too fancy about the Dockerfile either. I use the multi-stage build feature to compile the http-echo server and then copy it across to an Alpine image in the PACKAGING STAGE. This second stage also installs openssh and tini, and sets up additional configs.

Note that the init process manager is started through the ENTRYPOINT ["/sbin/tini","--"] clause, which in turn receives the monitored entrypoint.sh script as an argument.
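A hedged sketch of such a Dockerfile (the http-echo build step and file paths are assumptions; the real one lives in the repo referenced by the TLDR above):

# BUILD STAGE: compile the http-echo server from source in the build context
FROM golang:alpine AS build
COPY http-echo/ /go/src/http-echo/
RUN cd /go/src/http-echo && go build -o /http-echo .

# PACKAGING STAGE: alpine runtime with openssh and tini
FROM alpine
RUN apk add --no-cache openssh tini
COPY --from=build /http-echo /usr/local/bin/http-echo
COPY ssh-config/sshd_config /etc/ssh/sshd_config
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 80 2222
ENTRYPOINT ["/sbin/tini","--"]
CMD ["/entrypoint.sh"]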

Let us build the container image by executing docker build -t xynova/appservice-ssh docker-ssh. You are free to tag the container and push it to your own Docker repository.

Trying it out

First we create our App Service on Linux instance and set the custom Docker container it will use (xynova/appservice-ssh if you want to use mine). Then we set the ENABLE_SSH=true environment variable to activate the SSH server on container startup.

Now we can make a GET request to the App Service URL to trigger a container download and activation. If everything works, you should see something like the following:

One thing to notice here is the X-Arr-Ssl header. This header is passed down by the Azure App Service internal load balancer when the app is being browsed over SSL. You can check for this header if you want to trigger HTTP to HTTPS redirections.

Moving on, we jump into the Kudu dashboard as follows:

Select the SSH option from the Debug console (the Bash option will take you to the Kudu container instead).

Done! We are now inside the container.

Happy hacking!

Swashbuckle Pro Tips for ASP.NET Web API – XML

In the previous post, we implemented Swashbuckle’s IOperationFilter to handle example objects in combination with AutoFixture. The Swagger spec doesn’t only cover JSON payloads; it also copes with XML payloads. In this post, we’re going to use the Swashbuckle library again to handle XML payloads properly.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Built-In XML Support of Web API

ASP.NET Web API provides the HttpConfiguration instance out-of-the-box, and it handles serialisation/deserialisation of both JSON and XML payloads. Therefore, without any extra effort, we can see Swagger UI pages showing both JSON and XML payloads.


As Swashbuckle is a JSON-biased library, camelCasing in JSON payloads is well supported, while XML payloads are not: all XML nodes are PascalCased, which is inconsistent. How can we support camel-casing in both JSON and XML? Here’s the tip:
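A sketch of the kind of change involved, assuming a typical Web API bootstrap (the exact placement may differ from the original sample):

using Newtonsoft.Json.Serialization;
using System.Web.Http;

public static class FormatterConfig
{
    public static void Register(HttpConfiguration config)
    {
        // camel-case JSON property names; Swashbuckle reuses these serialiser
        // settings when it renders schema examples, including the XML ones
        config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
            new CamelCasePropertyNamesContractResolver();
    }
}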

It’s a little bit strange that, in order for XML to support camel-casing, we have to change the JSON serialisation settings, by the way. Go back to the Swagger UI screen and we’ll find that all XML nodes except the root one are now camel-cased. The root node will be dealt with later on.

Let’s send a REST API request to the endpoint using an XML payload.

However, the model instance is null! The Web API action can’t recognise the XML payload.

This is because the HttpConfiguration instance uses the DataContractSerializer instance for payload serialisation/deserialisation by default, and this doesn’t work as expected here. What if we change the serialiser from DataContractSerializer to XmlSerializer? That would work. Add the following line of code:
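A sketch, assuming the change is applied wherever the HttpConfiguration is set up:

// use XmlSerializer instead of the default DataContractSerializer
config.Formatters.XmlFormatter.UseXmlSerializer = true;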

Go back to the screen again and we’ll confirm the same screen as before.

What was the change? Let’s send an XML request again with some value modifications.

Now, the Web API action recognises the XML payload, but something weird has happened. We sent lorem ipsum and 123, but they weren’t passed through. What’s wrong this time? There might still be a casing issue. So, instead of the camel-cased payload, use a Pascal-cased XML payload and see how it goes.

Here we go! Now the API action method can get the XML payload properly!

In other words, the Swagger UI screen showed the proper camel-cased payload, but that wasn’t actually the payload in play. What could be the cause? Previously we set the UseXmlSerializer property value to true, but to no effect so far: for it to work, all payload models should have XmlRoot and/or XmlElement decorators. Let’s update the payload models like so:
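An assumed model shape (the property names here are illustrative; ValueRequestModel and the request root name come from later in this post):

using System.Xml.Serialization;

// XmlRoot/XmlElement decorators drive camel-cased XML via XmlSerializer
[XmlRoot(ElementName = "request")]
public class ValueRequestModel
{
    [XmlElement(ElementName = "text")]
    public string Text { get; set; }

    [XmlElement(ElementName = "number")]
    public int Number { get; set; }
}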

Send the XML request again and we’ll get the payload!


However, it’s still not quite right. Let’s have a look at the actual payload and the example payload.

The example payload defines ValueRequestModel as its root node. However, the actual payload uses request, as declared in the XmlRoot decorator. Where should we change this? The Swagger spec defines the xml object for this case, and it can easily be handled by implementing Swashbuckle’s ISchemaFilter interface.

Defining xml in Schema Object

Here’s the sample code that defines the request root node by implementing the ISchemaFilter interface.

It applies the Visitor Pattern to filter out designated types only.
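Since the original sample isn’t reproduced here, below is a simplified sketch (plain type checks standing in for the visitor; type names come from this post):

using System;
using Swashbuckle.Swagger;

public class XmlRootNameSchemaFilter : ISchemaFilter
{
    public void Apply(Schema schema, SchemaRegistry schemaRegistry, Type type)
    {
        // touch only the designated types and set the xml object's root name
        if (type == typeof(ValueRequestModel))
        {
            schema.xml = new Xml { name = "request" };
        }
        else if (type == typeof(ValueResponseModel))
        {
            schema.xml = new Xml { name = "response" };
        }
    }
}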

Once implemented, the Swagger configuration needs to include the filter and register both ValueRequestModel and ValueResponseModel, so that the request model has request as its root node and the response model has response.
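The registration might look like this, assuming the SwaggerConfig bootstrap that the Swashbuckle NuGet package generates (the API title is illustrative):

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Sample Web API");
        // register the filter so request/response models get their xml root names
        c.SchemaFilter<XmlRootNameSchemaFilter>();
    })
    .EnableSwaggerUi();

Now try the Swagger UI screen again.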

Now the XML payload looks great and works great.

So far, we’ve briefly walked through how Swagger document deals with XML payloads, using Swashbuckle.

Static Security Analysis of Container Images with CoreOS Clair

Container security is (or should be) a concern to anyone running software on Docker Containers. Gone are the days when running random Images found on the internet was commonplace. Security guides for Containers are common now: examples from Microsoft and others can be found easily online.

The two leading Container Orchestrators also offer their own security guides: Kubernetes Security Best Practices and Docker security.

Container Image Origin

One of the biggest factors in Container security is the origin of container Images:

  1. It is recommended to run your own private Registry to distribute Images.
  2. It is recommended to scan these Images against known vulnerabilities.

Running a private Registry is easy these days (Azure Container Registry for instance).

I will concentrate on the scanning of Images in the remainder of this post and show how to look for common vulnerabilities using CoreOS Clair. Clair is probably the most advanced non-commercial scanning solution for Containers at the moment, though it requires some elbow grease to run this way. It’s important to note that the GUI and Enterprise features are not free and are sold under the Quay brand.

As security scanning is recommended as part of the build process through your favorite CI/CD pipeline, we will see how to configure Visual Studio Team Services (VSTS) to leverage Clair.

Installing CoreOS Clair

First we are going to run Clair in minikube for the sake of experimenting. I have used Fedora 26 and Ubuntu. I will assume you have minikube, kubectl and docker installed (follow the respective links to install each piece of software) and that you have initiated a minikube cluster by issuing the “minikube start” command.

Clair is distributed through a docker image or you can also compile it yourself by cloning the following Github repository: https://github.com/coreos/clair.

In any case, we will run the following commands to clone the repository, and make sure we are on the release 2.0 branch to benefit from the latest features (tested on Fedora 26):

~/Documents/github|⇒ git clone https://github.com/coreos/clair
Cloning into 'clair'...
remote: Counting objects: 8445, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 8445 (delta 0), reused 2 (delta 0), pack-reused 8440
Receiving objects: 100% (8445/8445), 20.25 MiB | 2.96 MiB/s, done.
Resolving deltas: 100% (3206/3206), done.
Checking out files: 100% (2630/2630), done.

rafb@overdrive:~/Documents/github|⇒ cd clair
rafb@overdrive:~/Documents/github/clair|master
⇒ git fetch
rafb@overdrive:~/Documents/github/clair|master
⇒ git checkout -b release-2.0 origin/release-2.0
Branch release-2.0 set up to track remote branch release-2.0 from origin.
Switched to a new branch 'release-2.0'

The Clair repository comes with a Kubernetes deployment found in the contrib/k8s subdirectory as shown below. It’s the only thing we are after in the repository as we will run the Container Image distributed by Quay:

rafb@overdrive:~/Documents/github/clair|release-2.0
⇒ ls -l contrib/k8s
total 8
-rw-r--r-- 1 rafb staff 1443 Aug 15 14:18 clair-kubernetes.yaml
-rw-r--r-- 1 rafb staff 2781 Aug 15 14:18 config.yaml

We will modify these two files slightly to run Clair version 2.0 (for some reason the github master branch carries an older version of the configuration file syntax – as highlighted in this github issue).

In the config.yaml, we will change the postgresql source from:

source: postgres://postgres:password@postgres:5432/postgres?sslmode=disable

to

source: host=postgres port=5432 user=postgres password=password sslmode=disable

In clair-kubernetes.yaml, we will change the version of the Clair image from latest to 2.0.1:

From:

image: quay.io/coreos/clair

to

image: quay.io/coreos/clair:v2.0.1

Once these changes have been made, we can deploy Clair to our minikube cluster by running those two commands back to back:

kubectl create secret generic clairsecret --from-file=./config.yaml 
kubectl create -f clair-kubernetes.yaml

By looking at the startup logs for Clair, we can see it fetches a vulnerability list at startup time:

[rbigeard@flanger ~]$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
clair-postgres-l3vmn   1/1       Running   1          7d
clair-snmp2            1/1       Running   4          7d
[rbigeard@flanger ~]$ kubectl logs clair-snmp2
...
{"Event":"fetching vulnerability updates","Level":"info","Location":"updater.go:213","Time":"2017-08-14 06:37:33.069829"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:88","Time":"2017-08-14 06:37:33.069960","package":"Ubuntu"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"oracle.go:119","Time":"2017-08-14 06:37:33.092898","package":"Oracle Linux"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"rhel.go:92","Time":"2017-08-14 06:37:33.094731","package":"RHEL"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"alpine.go:52","Time":"2017-08-14 06:37:33.097375","package":"Alpine"}

Scanning Images through Clair Integrations

Clair is just a backend and we therefore need a frontend to “feed” Images to it. There are a number of frontends listed on this page. They range from full Enterprise-ready GUI frontends to simple command line utilities.

I have chosen to use “klar” for this post. It is a simple command line tool that can be easily integrated into a CI/CD pipeline (more on this in the next section). To install klar, you can compile it yourself or download a release.

Once installed, it’s very easy to use, and parameters are passed using environment variables. In the following example, CLAIR_OUTPUT is set to “High” so that we only see the most dangerous vulnerabilities. CLAIR_ADDR is the address of Clair running on my minikube cluster.

Note that since I am pulling an image from an Azure Container Registry instance, I have also specified DOCKER_USER and DOCKER_PASSWORD variables in my environment.

# CLAIR_OUTPUT=High CLAIR_ADDR=http://192.168.99.100:30060 \
klar-1.4.1-linux-amd64 romainregistry1.azurecr.io/samples/nginx

Analysing 3 layers 
Found 26 vulnerabilities 
CVE-2017-8804: [High]  
The xdr_bytes and xdr_string functions in the GNU C Library (aka glibc or libc6) 2.25 mishandle failures of buffer deserialization, which allows remote attackers to cause a denial of service (virtual memory allocation, or memory consumption if an overcommit setting is not used) via a crafted UDP packet to port 111, a related issue to CVE-2017-8779. 
https://security-tracker.debian.org/tracker/CVE-2017-8804 
----------------------------------------- 
CVE-2017-10685: [High]  
In ncurses 6.0, there is a format string vulnerability in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
https://security-tracker.debian.org/tracker/CVE-2017-10685 
----------------------------------------- 
CVE-2017-10684: [High]  
In ncurses 6.0, there is a stack-based buffer overflow in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
https://security-tracker.debian.org/tracker/CVE-2017-10684 
----------------------------------------- 
CVE-2016-2779: [High]  
runuser in util-linux allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer. 
https://security-tracker.debian.org/tracker/CVE-2016-2779 
----------------------------------------- 
Unknown: 2 
Negligible: 15 
Low: 1 
Medium: 4 
High: 4

So Clair is showing us the four “High” level common vulnerabilities found in the nginx image that I pulled from Docker Hub. At the time of writing, this is consistent with the details listed on Docker Hub. It’s not necessarily a deal breaker, as those vulnerabilities are only potentially exploitable by local users on the Container host, which means we would need to protect the VMs that are running Containers well!

Automating the Scanning of Images in Azure using a CI/CD pipeline

As a proof-of-concept, I created a “vulnerability scanning” Task in a build pipeline in VSTS.

Conceptually, the chain is as follows:

Container image scanning VSTS pipeline

I created an Ubuntu Linux VM and built my own VSTS agent following published instructions, after which I installed klar.

I then built a Kubernetes cluster in Azure Container Service (ACS) (see my previous post on the subject which includes a script to automate the deployment of Kubernetes on ACS), and deployed Clair to it, as shown in the previous section.

Little gotcha here: my Linux VSTS agent and my Kubernetes cluster in ACS ran in two different VNets, so I had to enable VNet peering between them.

Once those elements are in place, we can create a git repo with a shell script that calls klar and a build process in VSTS with a task that will execute the script in question:

Scanning Task in a VSTS Build

The content of scan.sh is very simple (This would have to be improved for a production environment obviously, but you get the idea):

#!/bin/bash
CLAIR_ADDR=http://X.X.X.X:30060 klar ubuntu

Once we run this task in VSTS, we get the list of vulnerabilities in the output, which can be used to “break” the build when certain vulnerabilities are discovered.
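For instance, a hedged sketch of a gate based on klar’s summary output shown earlier (the threshold logic and file name are assumptions, not part of klar itself):

#!/bin/bash
# fail the build if klar reports any High severity vulnerabilities
CLAIR_ADDR=http://X.X.X.X:30060 CLAIR_OUTPUT=High klar ubuntu | tee klar-report.txt
if grep -Eq "^High: [1-9]" klar-report.txt; then
  echo "High severity vulnerabilities found - failing the build"
  exit 1
fi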

Build output view in VSTS

Summary

Hopefully you have picked up some ideas about how you can ensure the Container Images you run in your environments are secure, or at least know what potential issues you have to mitigate. A build task similar to the one described here could very well be part of a broader build process you use to build Container Images from scratch.

Moving from Azure VMs to Azure VM Scale Sets – VM Image Build

siliconvalve

I have previously blogged about using Visual Studio Team Services (VSTS) to securely build and deploy solutions to Virtual Machines running in Azure.

In this and following posts I am going to take the existing build process I have and modify it so I can make use of VM Scale Sets to host my API solution. This switch is to allow the API to scale under load.

My current setup is very much fit for purpose for the limited trial it’s been used in, but I know I’ll see at least 150 times the traffic when I am running at full scale in production, and while my trial environment barely scratches the surface in terms of consumed resources, I don’t want to have to capacity plan to the nth degree for production.

Shifting to VM Scale Sets with autoscale enabled will help me greatly in this respect!

Current State…


Set your eyes on the Target!


So in my previous posts I’ve discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms this could be your local Active Directory (Authentication Domain), but it could be anything, and with the adoption of cloud services, these target systems are often what drives the need for robust IAM services.

Something that we are often asked as IAM consultants is why: why should corporate applications be integrated with any IAM Service? It’s a valid question. Sometimes, depending on what the system is and what it does, integrating with an IAM system isn’t a practical solution, but more often than not there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Central authentication services (where supported)

Requirements

With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system will have individual requirements. Some could be as simple as just needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be driven largely by the application vendor, and to a lesser extent the application owners and business requirements.

IAM systems are for the most part extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.

This could be as simple as requiring additional attributes that are not standard within both the IAM system and your source systems, or it could be the way in which you want the IAM system to interact with the application, i.e. utilising web services and building custom Management Agents to connect and synchronise data sets between the two.

At the root of all this is that, when using an IAM system, you have a constant flow of data that is all stored within the “Vault”. This helps ensure that any changes to a user are flowed to all systems, not just the phone book; it also ensures that any changes are tracked through governance processes that have been established and implemented as part of the IAM system. Changes made to a user’s identity information within a target application can be easily identified, to the point of saying a change was made on this date/time because a change to this person’s data occurred within the HR system at that time.

Integration

Most IAM systems will have management agents or connectors (the terms vary depending on the vendor you use) built for the typical “out of box” systems, and these will for the most part satisfy the requirements of many, so you don’t tend to have to worry so much about that. But if you have “bespoke” systems that have been developed and built up over the years for your business, this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM Service, custom management agents would be built using an Extensible Connectivity Management Agent (ECMA). How you would build and develop management agents for FIM or MIM is quite an extensive discussion and something better left for a separate post.

One of the “sticky” points here is that, most of the time, in order to integrate applications you need elevated access to the application’s back end to be able to push data to and pull data from the application. The way this is done through any IAM system is via specific service accounts that are restricted to only perform the functions the application requires.

Authentication and SSO

Application integration tightens the security of data, with access to applications controlled through various mechanisms; authentication plays a large part in the IAM process.

During the provisioning process, passwords are usually set when an account is created, either by using random password generators (preferred) or by setting a specific temporary password. When doing this, though, it’s always done with the intent of the user resetting their password when they first log on. The Self Service functionality that can be introduced to do this enables the user to reset their password without ever having to know what the initial password was.

Depending on the application, separate passwords might be created that need to be managed. In most cases IAM consultants/architects will try to minimise this to the point of not being required at all, but this isn’t always possible. In these situations, the IAM system has methods to manage this as well. In the Microsoft space this can be controlled through password synchronisation using the Password Change Notification Service (PCNS). This basically means that if a user changes their main password, that change can be propagated to all the systems that have separate passwords.

Most applications today use standard LDAP authentication to provide access to their application services, which makes the password management process much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials which you control. Examples of these standard protocols are the likes of SAML, OAuth, WS-Fed/WS-Trust and many more.

There is a growing shift in the industry towards these being cloud services, the likes of Microsoft Azure Active Directory or any number of other services available today.
The obvious benefit of SSO is that you have a single username and password to remember. It also greatly reduces your business’s security risk: from an auditing and compliance perspective, having a single authentication directory can help reduce your overall exposure to compromise from external or internal threats.

Well, that about wraps it up. IAM is for the most part an enabler: it enables your business to be adequately prepared for the consumption of cloud services and cloud enablement, which can help reduce your overall IT spend over the coming years. But one thing I think I’ve highlighted throughout this particular series is requirements, requirements, requirements… repetitive I know, but for IAM so crucially important.

If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.

 

Creating an AzureAD WebApp using PowerShell to leverage Certificate Based Authentication

Introduction

Previously I’ve posted about using PowerShell to access the Microsoft AzureAD/Graph API in a number of different ways; two such examples are listed below. The first uses a username and password for authentication, whilst the second uses a registered application and therefore a ClientID and Client Secret.

As time has gone on I have built numerous WebApps doing all sorts of automation. However, they all rely on accounts with a username and password, or a ClientID and secret, where the passwords and secrets expire. Granted, the secrets have a couple of years of life and are better than passwords, which, depending on the environment, roll every 30-45 days.

Using certificates, however, allows a script that is part of an automated process to run for much longer than the key lifetime available for WebApps, and definitely longer than passwords. Obviously there is security around the certificate itself to be considered, so do keep that in mind.

Overview

This post is going to detail a couple of simple but versatile scripts:

  1. Using PowerShell we will;
    1.  Configure AzureAD
      1. Create a Self Signed 10yr Certificate
      2. Create an AzureAD WebApp and assign the Certificate to it
      3. Apply permissions to the WebApp (this is manual via the Azure Portal)
      4. Record the key parameters for use in the second script
    2. Connect to AzureAD using our Certificate and new WebApp

Creating the AzureAD WebApp, Self Signed Certificate and Assigning Application Permissions

The script below does everything required. Run it line by line, or in small chunks as you step through the process. You will need the AzureRM and AzureADUtils PowerShell modules installed on the machine you run this script on. A condensed sketch of the process follows the list below.

Change:

  • Lines 3 & 4 if you want a certificate with a time-frame other than 10yrs
  • Line 5 for the password you want associated with the certificate for exporting/importing the private key
  • Line 6 for the certificate subject name and location it’ll be stored
  • Line 8 for a valid location to export it to
  • Line 11 for the same path as provided in Line 8
  • Lines 24 & 25 for an account to automatically connect to AAD with
  • Line 31 for the name of your WebApp
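Since the script itself isn’t reproduced here, below is a condensed, illustrative sketch of the main steps. It does not line up with the line numbers referenced above, and all names, paths, passwords and URIs are assumptions:

# sketch only: create a 10yr self-signed cert and a cert-backed AzureAD WebApp
$certSubject = "CN=AzureADWebAppCert"
$pfxPassword = ConvertTo-SecureString -String "YourStrongPassword!" -Force -AsPlainText
$pfxPath     = "C:\temp\AzureADWebAppCert.pfx"

# create a self-signed certificate valid for 10 years and export it
$cert = New-SelfSignedCertificate -CertStoreLocation "Cert:\CurrentUser\My" `
    -Subject $certSubject -KeySpec KeyExchange -NotAfter (Get-Date).AddYears(10)
Export-PfxCertificate -Cert $cert -FilePath $pfxPath -Password $pfxPassword

# create the AzureAD WebApp with the certificate as its credential
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
Login-AzureRmAccount
$app = New-AzureRmADApplication -DisplayName "MyAutomationWebApp" `
    -IdentifierUris "https://myautomationwebapp" -CertValue $keyValue `
    -StartDate $cert.NotBefore -EndDate $cert.NotAfter
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# record the key parameters needed by the connection script
(Get-AzureRmContext).Tenant.Id
$app | Format-List DisplayName, ApplicationId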

Before running line 37, log in to the Azure Portal and assign permissions to the WebApp, e.g. AzureAD directory permissions. When you then run line 37, a GUI will be presented for AuthN and AuthZ. Sign in as an Admin and accept the OAuth2 permission authorizations for whatever you have requested on the WebApp.

e.g. Graph API Read/Write Permissions

Connecting to AzureAD using our Certificate and new WebApp

Update lines 3, 4, 6 and 7 as you step through lines 40-43 of the configuration script above, which copy the key configuration settings to the clipboard.

The following script then gets our certificate out of the local store, takes the Tenant and WebApp parameters, and passes them to Connect-AzureAD in line 15, which will connect you to AAD and allow you to run AzureAD cmdlets.

If you wish to go direct to the GraphAPI, lines 20 and 23 show leveraging the AzureADUtils Module to connect to AzureAD via the GraphAPI.
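Again, the original script isn’t reproduced here; a minimal sketch of the certificate-based connection (the IDs and certificate subject are placeholders):

# sketch only: connect to AzureAD using the cert-backed WebApp
$tenantId      = "<your-tenant-id>"
$applicationId = "<your-webapp-application-id>"
$certSubject   = "CN=AzureADWebAppCert"

# get the certificate out of the local store
$cert = Get-ChildItem -Path "Cert:\CurrentUser\My" |
    Where-Object { $_.Subject -eq $certSubject }

# connect to AzureAD with the certificate, then run any AzureAD cmdlet
Connect-AzureAD -TenantId $tenantId -ApplicationId $applicationId `
    -CertificateThumbprint $cert.Thumbprint
Get-AzureADDomain | Select-Object -First 1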

Notes on creating your Self-Signed Certificate in PowerShell

I’m using the PowerShell New-SelfSignedCertificate cmdlet to create the self-signed certificate. If you get the error shown below when you run New-SelfSignedCertificate, make sure you have Windows Management Framework 5.1. If you don’t have Visual Studio or the Windows 8.1/10 SDK, get the Windows 8.1 SDK from here and just install the base SDK as shown further below.

Once the install is complete, copy C:\Program Files (x86)\Windows Kits\8.1\bin\x86\makecert.exe to C:\windows\system32

Summary

The two scripts above show how, using PowerShell, we can quickly create a self-signed certificate, create an Azure AD WebApp and grant it some permissions. Then, with a small PowerShell script, we can connect and query AAD/GraphAPI using our certificate and not be concerned about passwords or keys expiring for 10 years (in this example; the timeframe can be whatever you wish).

What’s a DEA?

In my last post I made a reference to a “Data Exchange Agreement” or DEA, and I’ve since been asked a couple of times about this. So I thought it would be worthwhile writing a post about what it is and why it’s of value to you and your business.

So what’s a DEA? Well, in simple terms it’s exactly what the name states: an agreement that defines the parameters in which data is exchanged between Service A and Service B, Service A being the producer of Attributes X and Service B the consumer. I’ve intentionally used a vague example here, as a DEA is used among many services in business and/or government and is not specifically related to IT or IAM services. But if your business adopts a controlled data governance process, it can play a pivotal role in how IAM services are implemented and adopted throughout the entire enterprise.

So what does a DEA look like? Well, in an IAM service it’s quite simple: you specify your “Source” and your “Target” services. An example could be the following:

Source
  • ServiceNow
  • AurionHR
  • PROD Active Directory
  • Microsoft Exchange

Target
  • PROD Active Directory
  • Resource Active Directory Domain
  • Microsoft Online Services (Office 365)
  • ServiceNow

As you can see, this only tells you where the data is coming from and where it’s going to; it doesn’t go into any detail about what data is being transported or in which direction. A separate section in the DEA details this, and an example is provided below:

MIM                  Flow  ServiceNow       Source              User Types   Notes
accountName          –>    useraccountname  MIM                 All
employeeID           –>    employeeid       AurionHR            All
employeeType         –>    employeetype     AurionHR            All
mail                 <–    email            Microsoft Exchange  All
department           –>    department       AurionHR            All
telephoneNumber      –>    phone            PROD AD             All
o365SourceAnchor     –>    ImmutableID      Resource Domain     All
employeeStatus       –>    status           AurionHR            All
**dateOfBirth**      –>    dob              AurionHR            CORP Staff   yyyy-MM-dd
division             –>    region           AurionHR            CORP Staff
firstName            –>    preferredName    AurionHR            CORP Staff
jobTitle             –>    jobtitle         AurionHR            CORP Staff
positionNumber       –>    positionNumber   AurionHR            CORP Staff
**legalGivenNames**  <–    firstname        ServiceNow          Contractors
locationCode         <–    location         ServiceNow          Contractors
ManagerID            <–    manager          ServiceNow          Contractors
personalTitle        <–    title            ServiceNow          Contractors
sn                   <–    sn               ServiceNow          Contractors
department           <–    department       ServiceNow          Contractors
employeeID           <–    employeeid       ServiceNow          Contractors
employeeType         <–    employeetype     ServiceNow          Contractors

This might seem like a lot of detail, but it is actually only a small section of what would be included in a DEA of this type. The whole purpose of the agreement is to define which attributes are managed by which systems and flow to which target systems, and as many IAM consultants can tell you, the real thing would be substantially more than what’s provided in this example. And this is just an example for a single system; this is done for all applications that consume data related to your organisation’s staff members.

One thing you might also notice is that I’ve highlighted 2 attributes in the sample above in bold. Why, you might ask? The point of including them was to highlight data sets that are considered “sensitive”; within the DEA you would classify these as sensitive data with specific conditions around that data set. This is something your business would define and word appropriately, but it could be as simple as a section stating the following:

“Two attributes are classed as sensitive data in this list and cannot be reproduced, presented or distributed under any circumstances”

One challenge often confronted within any business is application owners wanting “ownership” of the data they consume. Utilising a DEA provides clarity over who owns the data and what your applications can do with the data they consume, removing any uncertainty.

To summarise: the point of this post wasn’t to provide you with a template or example DEA to use; it was to help explain what a DEA is, what it’s used for, and what parts of it can look like. No DEA is the same, and providing you with a full example DEA would only make you end up recreating it from scratch anyway. But hopefully this helps you understand what is needed.

As with any of my posts if you have any questions please pop a comment or reach out to me directly.

 

Google Cloud Platform: an entrée

The recent opening of a Google Cloud Platform region in Sydney about two months ago triggered my interest in learning more about the platform and understanding how its offering will affect the local market moving forward.

So far I have concentrated mainly on GCP’s IaaS offering, digging information out of videos and documentation and venturing through the portal and Cloud Shell. I would like to share my first findings and highlight a few features that, in my opinion, make it worth having a closer look.

Virtual Networks are global

Virtual Private Clouds (VPC) are global by default; this means that workloads in any GCP region can be one traceroute hop away from each other in the same private network. Firewall rules can also be applied in a global scope, simplifying preparation activities for regional failover.
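For example, a single rule can cover instances in every region of the VPC (a hedged sketch; the network name and source range are illustrative):

# assumed example: one firewall rule scoped to the whole (global) VPC
gcloud compute firewall-rules create allow-internal-ssh \
    --network my-global-vpc --allow tcp:22 --source-ranges 10.0.0.0/8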

Global HTTP Load Balancing is another feature that allows a single entry-point address to direct traffic to the most appropriate backend around the world. This is a very interesting advantage over DNS-based solutions because Global Load Balancing can react instantaneously.

Subnets and Availability Zones are independent 

Google Cloud Platform subnets cover an entire region. Regions still have multiple Availability Zones but they are not directly bound to a particular subnet. This comes in handy when we want to move a Virtual Machine across AZs but keep the same IP address.

Subnets also enable turning Private Google API access on or off with a simple switch. Private access allows Virtual Machines without Internet access to reach Google APIs and services using their internal IPs.
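The switch in question might be toggled like this (an assumed example; the subnet and region names are illustrative, and the command surface may differ by gcloud release):

gcloud compute networks subnets update my-subnet \
    --region australia-southeast1 --enable-private-ip-google-access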

Live Migration across Availability Zones

GCP supports Live Migration within a region. This feature keeps machines up and running during events like infrastructure maintenance, host and security upgrades, failed hardware, and so on.

A very nice addition to this feature is the ability to migrate a Virtual Machine into a different AZ with a single command:

$ gcloud compute instances move example-instance  \
  --zone <ZONEA> --destination-zone <ZONEB>

Notice the internal IP is preserved:

The Snapshot service is also global

Moving instances across regions is not as straightforward as moving them within Availability Zones. However, since Compute Engine’s Snapshot service is global, the process is still quite simple, as the sketch after the following steps shows:

  1. I create a Snapshot from the VM instance’s disk.
  2. I create a new Disk from the Snapshot, placing it in the target region’s AZ I want to move the VM to.
  3. Then I create a new VM using the Disk.
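A hedged sketch of the three steps (the zones, disk and instance names are illustrative):

# 1. snapshot the source disk
gcloud compute disks snapshot example-disk \
    --zone us-central1-a --snapshot-names example-snapshot

# 2. create a new disk from the snapshot in the target region's AZ
gcloud compute disks create example-disk-copy \
    --source-snapshot example-snapshot --zone europe-west1-b

# 3. create a new VM booting from that disk
gcloud compute instances create example-instance-eu \
    --zone europe-west1-b --disk name=example-disk-copy,boot=yes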

An interesting consequence of Snapshots being global is that we can use them as a data transfer alternative between regions that incurs no ingress/egress charges.

You can attach VMs to multiple VPCs

Although still in beta, GCP allows us to attach multiple NICs to a machine and have each interface connect to a different VPC.

Aside from the usual security benefits of perimeter and DMZ isolation, this feature gives us the ability to share third-party appliances across different projects: for example having all Internet ingress and egress traffic inspected and filtered by a common custom firewall box in the account.

Cloud Shell comes with batteries included

Cloud Shell is just awesome. Apart from outgoing connections being restricted to ports 20, 21, 22, 80, 443, 2375, 2376, 3306, 8080, 9600, and 50051, it is such a handy tool that you can use to quickly put together PoCs.

  • You get your own Debian VM with tmux multi tab support.
  • Docker up and running to build and test containers.
  • Full apt-get capabilities.
  • You can upload files into it directly from your desktop.
  • A brand new integrated code editor if you don’t like using vim, nano and so on.
  • Lastly, it has a web preview feature allowing you to run your own web server on ports 8080 to 8084 to test your PoC from the internet.

SSH is managed for you

GCP SSH key management is one of my favourite features so far. SSH key pairs are created and managed for you whenever you connect to an instance from the browser or with the gcloud command-line tool. User access is controlled by Identity and Access Management (IAM) roles, with GCP creating and applying short-lived SSH key pairs on the fly when necessary.

Custom instances, custom pricing

Although a custom machine type can be viewed as something that covers a very niche use case, it can in fact help us provision just the right amount of RAM and CPU for the job at hand. Having said this, we still get the option to buy plenty of RAM and CPU that we will never need (see below).

Discounts, discounts and more discounts

I wouldn’t put my head in the lion’s mouth about pricing at this time, but there are a large number of Cloud cost analysis reports that categorise GCP as cheaper than the competition. Having said this, I still believe it comes down to having the right implementation and setup: you might not manage the infrastructure directly in the Cloud, but you should definitely manage your costs.

GCP offers sustained-use discounts for instances that run over a given percentage of the overall billing month (25%, 50%, 75% and 100%), and it also recently released 1 and 3 year committed-use discounts which can reach up to 57% off the original instance price. Finally, Preemptible instances (similar to AWS spot instances) can reach up to an 80% discount from list price.

Another very nice feature to help manage cost is Compute sizing recommendations. These recommendations are generated from system metrics and can help identify workloads that could be resized for a more appropriate use of resources.

Interesting times ahead

Google has been making big progress with its platform in the last two years. According to some analyses it still has some ground to cover to reach its competitors’ level, but as we just saw, GCP is coming with some very interesting cards up its sleeve.

One thing is for sure… interesting times lie ahead.

Happy window shopping!

 

Configure SBC to forward calls out from the original SG where the incoming call comes in

Issue Description:

Recently I did a project adding additional Telstra SIP trunks into a Sonus production environment. The customer’s Sonus environment has primary SIP trunks with another SIP provider; the new Telstra SIP trunks were set up for a newly established small office. After the new trunks were set up, I hit one issue: when calls were forwarded from SFB clients to users’ mobile phones, the A-party number didn’t pass through; instead, the number shown on the mobile phones was the pilot number of the primary SIP trunk. A-party number pass-through was not supported by the primary SIP trunk provider, but on the new Telstra SIP trunk this feature is supported. So my question became: how do I configure the SBC to forward calls back out through the Telstra Signalling Group (SG) where the incoming call comes in?

 

Investigation:

I made a test call (A-party rang B-party; B-party was set to forward the call externally to C-party) and captured the logs for the whole call forwarding scenario. I could see the forwarding part of the call has a SIP INVITE message sent from the mediation server. In the SIP header, it contains the numbers of parties A, B and C. Screenshot as below:

I could see the HISTORY-INFO data field contains the B-party number during the call forwarding. What I planned to do was create a transformation rule that compares on the History-Info data field value: if the value contains the Telstra SIP trunk number range, the call should be routed out via the Telstra SIP trunk.

Before creating the new rule, I wanted to verify that A-party number pass-through was working. I created an optional rule to match the calling address/number with my mobile number (A-party). When I called the Telstra number range, calls came in from the Telstra trunk and went out ringing another mobile (C-party) via the same trunk, with the A-party number displayed as the caller number. All good.

I set up the message manipulation rule for the INVITE message based on the following Sonus doc (the first half of the doc): https://support.sonus.net/display/UXDOC61/Using+HISTORY-INFO+to+set+the+FROM+Number.

After that I tried putting in an optional transformation rule matching the Telstra outgoing calls; it didn’t work, and the call still went out from the primary trunk. This rule can’t be mandatory under the current SFB-to-Telstra routing table, as that would disconnect normal outgoing calls on the Telstra trunk.

Next, I created a mandatory rule to compare the history-info value and bound this rule to the Telstra SG. I re-tested the call forwarding scenario, and still the call went out from the primary trunk. :/

When I looked at the Sonus local system log, I couldn’t see any log entries containing the “SG User” variable. This made me realise that assigning the inbound manipulation rule to the Telstra SG was wrong, because the INVITE comes from the SFB server. Once I corrected this setting, it started to work as expected.

Solution Summary:

  1. Create a message manipulation rule “Collect History-Info” on the INVITE message to collect the History-Info value into “SG User Value 1”: Applicable Messages – Selected Messages, Message Selection – Invite, Table Result Type – Mandatory; refer to the screenshot below:

  2. Assign it to the Inbound Message Manipulation of the SFB SG
  3. Create a transformation rule to compare “SG User Value 1” with the Telstra SIP trunk number range:

  4. Create a NEW route matching this transformation rule.

 

After this, I retested the call forwarding scenario: both the inbound part and the forwarding part of the call were routed through the Telstra SIP trunk. The results look correct, and the issue is verified as resolved. 😊