Ubuntu security hardening for the cloud.

Hardening Ubuntu Server Security For Use in the Cloud

The following describes a few simple means of improving Ubuntu Server security for use in the cloud. Many of the optimizations discussed below apply equally to other Linux-based distributions, although the commands and settings will vary somewhat.

Azure cloud specific recommendations

  1. Use key- and certificate-based SSH authentication exclusively and never use passwords.
  2. Never employ common usernames such as root, admin or administrator.
  3. Change the default public SSH port away from 22.
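
The first and third points are both controlled from /etc/ssh/sshd_config. A minimal sketch of the relevant settings (2222 is an arbitrary example port, and any port change must also be reflected in your network security group rules):

```
# /etc/ssh/sshd_config
Port 2222
PasswordAuthentication no
PubkeyAuthentication yes
```

Restart the SSH service (sudo service ssh restart) for the changes to take effect, keeping an existing session open in case of mistakes.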

AWS cloud specific recommendations

AWS makes available a small list of recommendations for securing Linux in their cloud security whitepaper.

Ubuntu / Linux specific recommendations

1. Disable the use of all insecure protocols (FTP, Telnet, RSH and HTTP) and replace them with their encrypted counterparts such as SFTP, SSH, SCP and HTTPS. On Ubuntu the legacy daemons are removed with apt rather than yum (package names may vary slightly between releases):

sudo apt-get purge openbsd-inetd xinetd nis tftpd telnetd rsh-server

2. Uninstall all unnecessary packages

List what is installed, find candidates, then remove them with apt (the original yum command applies to Red Hat based systems):

dpkg --get-selections | grep -v deinstall
dpkg --get-selections | grep postgres
sudo apt-get purge packageName

For more information: http://askubuntu.com/questions/17823/how-to-list-all-installed-packages

3. Run the most recent kernel version available for your distribution

For more information: https://wiki.ubuntu.com/Kernel/LTSEnablementStack

4. Disable root SSH shell access

Open the following file…

sudo vim /etc/ssh/sshd_config

… then change the following setting to no.

PermitRootLogin no

For more information: http://askubuntu.com/questions/27559/how-do-i-disable-remote-ssh-login-as-root-from-a-server

5. Grant shell access to as few users as possible and limit their permissions

Limiting shell access is an important means of securing a system. Shell access is inherently dangerous because of the risk of unlawful privilege escalation, as on any operating system; stolen credentials are a concern too.

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add an AllowUsers entry listing each user to be allowed. Note that the list is space separated, not comma separated.

AllowUsers jim tom sally

For more information: http://www.cyberciti.biz/faq/howto-limit-what-users-can-log-onto-system-via-ssh/

6. Limit or change the IP addresses SSH listens on

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add the following.

ListenAddress <IP ADDRESS>


7. Restrict all forms of access to the host by individual IPs or address ranges

TCP wrapper based access lists can be included in the following files.

/etc/hosts.allow
/etc/hosts.deny

Note: Any changes to your hosts.allow and hosts.deny files take immediate effect; no restarts are needed.

ALL : 123.12.

Would match all hosts in the 123.12.0.0/16 network.

An IP address and subnet mask can also be used in a rule.

ALL : 192.168.0.0/255.255.254.0

sshd : /etc/sshd.deny

If the client list begins with a slash (/), it is treated as a filename. In the above rule, TCP wrappers looks up the file sshd.deny for all SSH connections.

sshd : <IP ADDRESS> : allow
sshd : ALL : deny

This will allow SSH connections only from the machine with the given IP address and block all other connection attempts. You can use the options allow or deny to allow or restrict access on a per client basis in either of the files, as in:

in.telnetd : <IP ADDRESS> : deny
in.telnetd : <IP ADDRESS> : allow

Warning: While restricting system shell access by IP address, be very careful not to lose access to the system by locking the administrative user out!

For more information: https://debian-administration.org/article/87/Keeping_SSH_access_secure

8. Check listening network ports

Check listening ports and uninstall or disable all unessential or insecure protocols and daemons.

netstat -tulpn

9. Install Fail2ban

Fail2ban is a means of dealing with unwanted system access attempts over any protocol against a Linux host. It uses rule sets to automate variable-length IP bans of sources exhibiting configurable activity patterns such as spam, (D)DoS or brute force attacks.

“Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.” – Wikipedia

For more information: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-ubuntu-14-04
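
Fail2ban is available in the standard repositories (sudo apt-get install fail2ban). A minimal sketch of a local jail override, with illustrative values (not recommendations) for the retry count and ban time:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 3600
```

Note the jail name varies between Fail2ban versions ([ssh] on older releases such as Ubuntu 14.04, [sshd] on newer ones); restart the fail2ban service after editing.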

10. Improve the robustness of TCP/IP

Add the following to a sysctl configuration file to harden your networking configuration, such as…

sudo vim /etc/sysctl.d/10-network-security.conf

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

And load the new rules as follows.

sudo service procps start

For more information: https://blog.mattbrock.co.uk/hardening-the-security-on-ubuntu-server-14-04/

11. If you are serving web traffic install mod-security

Web application firewalls can be helpful in warning of and fending off a range of attack vectors including SQL injection, (D)DOS, cross-site scripting (XSS) and many others.

“ModSecurity is an open source, cross-platform web application firewall (WAF) module. Known as the “Swiss Army Knife” of WAFs, it enables web application defenders to gain visibility into HTTP(S) traffic and provides a power rules language and API to implement advanced protections.”

For more information: https://modsecurity.org/

12. Install a firewall such as IPtables

IPtables is a highly configurable and very powerful Linux firewall which has a great deal to offer in terms of bolstering host-based security.

“iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.” – Wikipedia.

For more information: https://help.ubuntu.com/community/IptablesHowTo
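
As a sketch (not a complete policy), the following ruleset in iptables-restore format keeps loopback, established connections and SSH while dropping all other inbound traffic; the file path shown is the one used by the iptables-persistent package:

```
# /etc/iptables/rules.v4
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

It can be loaded with sudo iptables-restore < /etc/iptables/rules.v4. Test from a second session before closing your current one, for the same lock-out reasons noted in step 7.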

13. Keep all packages up to date at all times and install security updates as soon as possible

 sudo apt-get update        # Fetches the list of available updates
 sudo apt-get upgrade       # Strictly upgrades the current packages
 sudo apt-get dist-upgrade  # Installs updates, handling changed dependencies
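
In addition to manual updates, Ubuntu's unattended-upgrades package (sudo apt-get install unattended-upgrades) can apply security updates automatically. A minimal sketch of the enabling configuration:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```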

14. Install multifactor authentication for shell access

Nowadays it’s possible to use multi-factor authentication for shell access thanks to Google Authenticator.

For more information: https://www.digitalocean.com/community/tutorials/how-to-set-up-multi-factor-authentication-for-ssh-on-ubuntu-14-04
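
On Ubuntu this is typically done with the libpam-google-authenticator package (sudo apt-get install libpam-google-authenticator), then running google-authenticator as each user to generate their secret. A sketch of the PAM and SSH configuration involved:

```
# /etc/pam.d/sshd - add at the end
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
```

Restart the SSH service after the change, and keep an open session while testing so you cannot lock yourself out.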

15. Add a second level of authentication behind every web based login page

Stolen passwords are a common problem, whether as a result of a vulnerable web application, an SQL injection, a compromised end user computer or something else altogether. Adding a second layer of protection using .htaccess authentication, with credentials stored on the filesystem rather than in a database, is great added security.

For more information: http://stackoverflow.com/questions/6441578/how-secure-is-htaccess-password-protection
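
As a sketch of what such a second layer looks like with Apache (the paths and realm name are placeholders), credentials are created with htpasswd -c /etc/apache2/.htpasswd username, and the protected directory gets:

```
# .htaccess in the directory containing the login page
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```

Note that AllowOverride AuthConfig must be enabled for the directory for the .htaccess file to take effect.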

Encryption In The Cloud

Is it safe? 

Three simple yet chilling words immortalized by the 1976 movie Marathon Man, starring Laurence Olivier and Dustin Hoffman, in which Olivier’s character tries to discover, by very unpleasant means, whether the location of his stolen diamonds has been exposed.


Marathon Man (1976)

Well, had Sir Laurence’s character encrypted that information, there would have been no need for him to worry, because he would have known that, short of using a weak cipher, a vulnerable algorithm or a weak password, encrypted data has a very strong chance of remaining secret no matter what.

What is encryption and why should your business be using it?

Encryption is a means of algorithmically scrambling data so that it can only be read by someone with the required keys. Most commonly it protects our mobile and internet communications, but it can also restrict access to the data at rest on our computers, in our data centers and in the public cloud. It protects our privacy and provides anonymity. And although encryption can’t stop a malicious actor from stealing data, it will certainly stop them from being able to read it.

Case in point: had Sony Pictures Entertainment been encrypting their data in 2014, it would have been much harder for the perpetrators of the huge data theft against their corporate systems to extract anything from the information they stole. As it was, none of it was encrypted, and much of it was leaked over the internet, including a lot of confidential business information and several unreleased movies.

Several years on and a string of other major data breaches later, figures published in the Ponemon Institute‘s 2016 Cloud Data Security study reveal persisting trends.

Research, conducted by the Ponemon Institute, surveyed 3,476 IT and IT security professionals in the United States, United Kingdom, Australia, Germany, France, Japan, Russian Federation, India and Brazil about the governance policies and security practices their organizations have in place to secure data in cloud environments.

On the importance of encrypting data in the cloud, 72% of respondents said the ability to encrypt data was important and 86% said it would become even more important over the next two years. Yet only 42% of respondents were actively using encryption to secure sensitive data, and only 55% of those stated their organization had control of its keys.

Why is control of your encryption keys important?

While encryption may add a significant layer of control over access to your data, sharing your keys with third parties such as a service provider can still leave room for unwanted access. Whether it’s a disgruntled employee or government agency with legal powers, the end result might still be a serious breach of your privacy. Which is why if you have taken the trouble of encrypting your data, you really should also give serious consideration to your key management policy.

How secure is encryption today?

The relative strengths and weaknesses of the various algorithms available vary, of course. However, AES, one of the more widely used algorithms today, sports three different block ciphers: AES-128, AES-192 and AES-256, each capable of encrypting and decrypting data in blocks of 128 bits using cryptographic keys of 128-bit, 192-bit and 256-bit lengths respectively. It is currently considered uncrackable, with any possible weaknesses deriving from errors made during its implementation in software.

What options are there for encrypting business data in the public cloud?

Disk level encryption is a means of securing virtual machines as well as their data, which allows the user to control their keys while protecting from disk theft, improper disk disposal and unauthorized access to the virtual container.

Software level encryption involves the data being encrypted before it is uploaded into cloud based storage. In this use case only the owner of the data has access to the keys. Tools for managing data in this manner include TrueCrypt and 7-Zip, among others, both of which support strong encryption standards such as AES-256. The type of cloud storage used is often cold or long term archival storage, such as Azure Cool Blob Storage.
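
As an illustration of software level encryption, a file can be encrypted with AES-256 before it ever leaves the machine. A minimal sketch using the openssl command line tool (the filenames and passphrase are placeholders; in practice the passphrase should come from a key management process, not the command line):

```shell
# create a sample file standing in for business data
printf 'confidential business data\n' > report.txt

# encrypt with AES-256 before uploading the .enc file to cloud storage
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in report.txt -out report.txt.enc -pass pass:ExamplePassphrase

# decrypt after downloading it again
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in report.txt.enc -out report.decrypted.txt -pass pass:ExamplePassphrase
```

Only the owner of the passphrase can recover the plaintext; the cloud provider only ever sees report.txt.enc.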

Filesystem level encryption consists of data being automatically encrypted in the cloud. The data is permanently encrypted until downloaded. Encryption, decryption and key management are transparent to end users. The data can be accessed while encrypted. Ownership of the required keys varies in this model but is often shared between the owner and provider. AWS S3 is an example of an object store with support for this kind of encryption.


Thales nShield Connect Hardware Security Module.

Hardware level encryption involves the use of a third party Hardware Security Module (HSM) which provides dedicated and exclusive generation, handling and storage of encryption keys. In this scenario, only the owner has access to the required keys, here again the data can be accessed transparently while encrypted.

So, is it safe?

Well, going back to Sir Laurence’s dilemma: there are no absolutes in computer security and probably never will be. But it seems clear that encrypting any data you want to control, and managing the keys to it carefully, is sure to make it much safer, if not completely safe.

Using Microsoft Azure Table Service REST API to collect data samples

Sometimes we need a simple solution for collecting data from multiple sources. The sources can be IoT devices or systems working on different platforms and in different places. Traditionally, integrators start thinking about implementing a custom centralised REST API with some database repository. Such a solution can take days to implement and test, is expensive, and requires hosting, maintenance and support. However, in many cases it is not needed at all. This post introduces the idea that the out-of-the-box Azure Tables REST API is good enough to start your data collection, research and analysis in no time.

Moreover, the suggested solution offers a very convenient REST API that supports JSON objects and a very flexible NoSQL format. You do not need to write lots of code or hire programmers: anybody who understands how to work with a REST API, create headers and put JSON in the web request body can immediately start working on a project and sending data to very cheap Azure Tables storage. Additional benefits of using Azure Tables include native support in Microsoft Azure Machine Learning, and other statistical packages can also download data from Azure Tables.

Microsoft provides Azure Tables SDKs for various languages and platforms. By all means use these SDKs; your life will be much easier. However, some systems don’t have this luxury and require developers to work with the Azure Tables REST API directly. The only requirement for your system is that it should be able to execute web requests, and you should be able to work with the headers of these requests. Most systems satisfy this requirement. In this post, I explain how to form web requests and work with the latest Azure Tables REST API from your code. I’ve also created reference code to support my findings. It is written in C#, but the technique can be replicated in other languages and platforms.

The full source code is hosted on GitHub here:



You will need an Azure subscription. Create a new storage account there, then create a test table named “mytable” in it. Below is the table I created in Visual Studio 2015.

Picture 1

I’ve created a helper class that has two main methods: RequestResource and InsertEntity.

The full source code of this class is here:

Testing the class is easy. I’ve created a console application, prepared a few data samples and called our Azure Tables helper methods. The source code of this program is below.

The hardest part of calling the Azure Tables web API is creating the signed string used to form the Authorization header. It can be a bit tricky and not very clear for beginners. Please take a look at the detailed documentation that describes how to sign a string for the various Azure Storage services: https://msdn.microsoft.com/en-au/library/dd179428.aspx

To help you with the authorisation code, I’ve written an example of how it can be done in the class AzuretableHelper. Please take a look at the code that creates the strAuthorization variable. First, you will need to form a string that contains your canonical resource name and the current time in a specific format, with newline characters in between. Then, this string has to be signed with the HMAC-SHA256 algorithm. The key for this signature is the SharedKey that you can obtain from your Azure Storage account as shown here:

Picture 2
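
As a hedged illustration of the signing step (the account name, table name and key below are made-up placeholders; AzuretableHelper does the equivalent in C#), the SharedKeyLite signature for the Table service, which signs the date plus the canonicalized resource, can be reproduced with OpenSSL:

```shell
# Placeholder values for illustration; use your real account name and key.
ACCOUNT="mystorageaccount"
KEY_B64="c2VjcmV0a2V5c2VjcmV0a2V5c2VjcmV0a2V5MTI="  # base64 SharedKey from the portal

# RFC 1123 date required by the x-ms-date header
DATE=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')

# decode the base64 key to hex so openssl can use it as the HMAC key
HEXKEY=$(printf '%s' "$KEY_B64" | base64 -d | od -An -tx1 | tr -d ' \n')

# SharedKeyLite for the Table service signs: Date + "\n" + CanonicalizedResource
SIG=$(printf '%s\n%s' "$DATE" "/$ACCOUNT/mytable" \
  | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$HEXKEY" -binary \
  | base64)

echo "x-ms-date: $DATE"
echo "Authorization: SharedKeyLite $ACCOUNT:$SIG"
```

The resulting x-ms-date and Authorization headers are then attached to the web request, and must be regenerated for every call.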

The signed Authorization string has to be re-created for each request you execute. Otherwise, the Azure API will reject your requests with an Unauthorized error message in the response.

The meaning of the other request headers is straightforward. You specify the version, format, and other attributes. To see the full list of methods you can perform on Azure Tables, and to read about all attributes and headers, refer to the MSDN documentation:


Once you’ve mastered this “geeky” method of using the Azure Tables web API, you can send your data straight to your Azure Tables without any intermediate web API facilities. You can also read Azure Tables data to receive configuration parameters or input commands from other applications. A similar approach can be applied to the other Azure Storage APIs, for example Azure Blob storage or Azure Queue storage.





Azure Automation Runbooks with Azure AD Service Principals and Custom RBAC Roles


If you’ve ever worked in any form of systems administrator role then you will be familiar with process automation, even if only for simple tasks like automating backups. You will also be familiar with the pain of configuring and managing identities for these automated processes (has an expired password or a disabled/deleted account ever caused you pain?!)

While the cloud offers us many new ways of working and solving traditional problems, it also maintains many concepts we’d be familiar with from environments of old. As you might have guessed, the idea of specifying user context for automated processes has not gone away (yet).

In this post I am going to look at how, in Azure Automation Runbooks, we can leverage a combination of an Azure Active Directory Service Principal and an Azure RBAC Custom Role to configure a non-user identity with constrained execution scope.

The benefits of this approach are two-fold:

  1. No password expiry…


Windows Server 2012 R2 (ADFS 3.0): Migrating ADFS Configuration Database from WID to SQL

You already have a working ADFS setup which has been configured to use the Windows Internal Database (WID) to store its configuration database. However, things may have changed since you implemented it and you may now have one (or more) of the below requirements which will need an upgrade to SQL server.

  • Need more than five federation servers in the ADFS Farm (supporting more than 10 relying parties)
  • Leverage high availability features of SQL or
  • Enable support for SAML artefact resolution or WS Federation token replay detection.

The below step-by-step procedure should help you with the migration of the ADFS configuration database from WID to SQL with minimal or no downtime (however, plan accordingly such that it has the least impact in case something goes wrong).

The steps also cover configuration of each of the ADFS servers (Primary and Secondary) in the farm to use the SQL Server for its configuration database.

For simplicity, I have used the below scenario comprising:

Proposed Design


  • Two Web Application Proxies (WAP) – wap1 and wap2
  • External load balancer (ELB) in front of the WAPs.

Private / Corporate network

  • Two ADFS Servers – adfs1 and adfs2
  • Internal Load Balancer (ILB) in front of the ADFS Servers
  • SQL Server (Standalone). Additional steps need to be performed (not covered in this blog) when using SQL Server with high availability options such as SQL Always-On or Merge Replication


Ensure you have a complete backup of your ADFS servers. You can use Windows Server Backup or your third-party backup solution to back up the ADFS servers.

Load Balancer Configuration

During the course of this exercise the internal load balancer will be configured multiple times to ensure a smooth migration with minimal impact to end users.

Remove the primary ADFS Server (adfs1) from the internal load balancer configuration such that all traffic is directed to the secondary server (adfs2).

Primary ADFS Server steps

  • Stop the ADFS windows service by issuing “net stop adfssrv” in an elevated command prompt or via the Windows Services Manager.

net stop adfssrv

  • Download and install SQL Server Management Studio (SSMS) (if not already present)
  • Launch SSMS in Administrator mode
  • Connect to your WID using \\.\pipe\MICROSOFT##WID\tsql\query as the server name in SSMS.

SSMS connect dialog

You should be able to see the two ADFS databases (AdfsArtifactStore and AdfsConfiguration) as shown below:

SSMS showing the two ADFS databases

  • To find the physical location of the ADFSConfiguration and ADFSArtifactStore databases in WID, run the below query by starting up a ‘New Query’. The default path is C:\Windows\WID\Data\.
SELECT name, physical_name AS current_file_location FROM sys.master_files

Results showing physical location of DB files

  • Restart WID from SSMS. This is just to ensure that there is no lock on the databases. Right Click on the WID db and select ‘Restart‘.

Restarting the database

Restarting the database

  • Now we need to detach both the databases. Run the below query on the WID using SSMS
USE [master]
EXEC master.dbo.sp_detach_db @dbname = N'AdfsArtifactStore'
EXEC master.dbo.sp_detach_db @dbname = N'AdfsConfiguration'

Running the commands on the WID

  • Now copy the databases identified earlier from the Primary ADFS Server to your SQL Server’s Data directory (for example C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA).


SQL Server – Steps

  • On the SQL Server, bring up the SQL Server Management Studio (SSMS) and connect to the SQL instance (or default instance) where the ADFS databases will be hosted.
  • Create a login with the ADFS windows service account (which was used for the initial ADFS setup and configuration). I used Contoso\svcadfs.

Adding SQL Server user

  • Now attach the databases copied earlier on the SQL server. Run the below using the SQL Server Management Studio. Modify the path as appropriate if the db files were copied to a location other than ‘C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA’

 USE [master]
 CREATE DATABASE [AdfsConfiguration] ON
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration.mdf' ),
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration_log.ldf' )

 CREATE DATABASE [AdfsArtifactStore] ON
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore.mdf' ),
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore_log.ldf' )
 ALTER DATABASE AdfsConfiguration set enable_broker with rollback immediate

  • On successful execution of the above, you should be able to see the two ADFS databases in SSMS (you may need to do a refresh if not displayed automatically)

Two databases shown in SSMS

  • Ensure that the ADFS Service Account has the “db_genevaservice” role membership on both the databases

Grant service account right database role

Firewall Configuration

Ensure that the SQL Server is reachable from the ADFS servers on port 1433. You may need to update network firewalls and / or host firewall configuration on the SQL Server (depending on the type of network setup you may have).

Primary ADFS Server Steps

  • Start the ADFS windows service by issuing “net start adfssrv” from an elevated command prompt or from the Windows Services Manager
  • Launch a PowerShell console in Administrator Mode and execute the below lines in order

 $temp= Get-WmiObject -namespace root/ADFS -class SecurityTokenService
 $temp.ConfigurationdatabaseConnectionstring="data source=[sqlserver\instance];initial catalog=adfsconfiguration;integrated security=true"

Note: replace [sqlserver\instance] with actual server\instance. If not running as an instance, just server. I am using ‘SQLServer’ as it is the hostname of the SQL server being used in this example.

PowerShell Configuration

  • Change the connection string property in “AdfsProperties” by issuing the below command from the PowerShell console

Set-AdfsProperties -ArtifactDbConnection "Data Source=[sqlserver\instance];Initial Catalog=AdfsArtifactStore;Integrated Security=True"

Note: Change [sqlserver\instance]  with the name of your SQL server and instance (as applicable)

PowerShell Configuration

  • Restart the ADFS Service by executing “net stop adfssrv” and “net start adfssrv” from an elevated command prompt or from the Windows Services Manager.

Restarting service

  • To check if the configuration has been successful, run “Get-AdfsProperties” from a PowerShell console. You should see the ADFS properties listed (as below) with the key being Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

This completes the migration of the ADFS configuration database from WID to SQL and also the configuration of the Primary ADFS server to use the SQL Database. Now we need to configure the secondary ADFS server(s) to use the SQL Database.

Load Balancer Configuration

Update the internal load balancer to:

  • Add the Primary ADFS (adfs1) to the load balance configuration and
  • Remove the secondary ADFS (adfs2) server which needs to be reconfigured to point to the SQL Server.

Secondary ADFS Server steps

  • Stop the ADFS Windows service by issuing “net stop adfssrv” in an elevated command prompt
  • To change the configuration database connection string to point to the new SQL ADFS configuration database run the below command lines (in order) from a PowerShell Console
$temp= Get-WmiObject -namespace root/ADFS -class SecurityTokenService
$temp.ConfigurationdatabaseConnectionstring="data source=<SQLServer\SQLInstance>;initial catalog=adfsconfiguration;integrated security=true"

Note: Replace [sqlserver\instance] with the name of your SQL server / instance as used for the primary server configuration.

PowerShell Configuration

  • Start the ADFS Service by executing “net start adfssrv” from an elevated command prompt and verify that the service starts up successfully. (I have had an issue where my ADFS server was, strangely, not able to resolve the NETBIOS name of the SQL Server, hence the service wouldn’t start properly. Also, check that the federation service is running under the service account that was granted the login on the SQL database.)
  • To check if the configuration has been successful, run “Get-AdfsProperties” from a PowerShell console. You should see the ADFS properties listed (as below) with the key being  Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

Repeat the above steps for each of the secondary servers (if you have more than one) and ensure that all ADFS servers are added back to the internal load balancer configuration.

I hope this post has been useful.

Solving self signed certificate blocking by inspecting web requests using Xamarin iOS

In iOS 9, Apple have turned on App Transport Security (ATS) by default. So if your iOS application is connecting to an insecure URL resource then you’re out of luck, as it will be blocked; the most likely culprit is viewing a webpage without SSL encryption.

With the current application I’m working on I ran into trouble whilst loading a dev resource only to find it was using a self signed certificate and being blocked by the OS. I also had a requirement to inspect an Ajax web request and the WebView ShouldStartLoad delegate was not receiving these request callbacks.

First things first, if your applications requests are being blocked you will see something like:

NSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9802)

The best way to resolve this issue is to ensure all resources are securely protected in the correct way, and therefore your application, and users data, is secure!

But this is not always possible!  And if you’re like me then you have no choice but to step around the default ATS setting.

Quick, change the Info.plist!

To resolve a problem with a blocked resource you can turn off ATS for your application with a single entry in your Info.plist file. Whilst this is the simplest and quickest approach, my advice would be to avoid doing this. Yes, you may just be making the change for a quick test or prototype app, but it’s easy to forget you made the change, and before you know it you’ve released an insecure app to your users.

My advice is to turn off ATS only for the resources you trust. In my case, a single domain.

This can be achieved by adding the following into your Info.plist file (the NSAppTransportSecurity keys shown are Apple’s documented per-domain exception keys; substitute your own domain):

			<key>NSAppTransportSecurity</key>
			<dict>
				<key>NSExceptionDomains</key>
				<dict>
					<key>[ENTER YOUR BASE URL HERE]</key>
					<dict>
						<key>NSExceptionAllowsInsecureHTTPLoads</key>
						<true/>
					</dict>
				</dict>
			</dict>

Spying on all requests

iOS is no longer blocking my webpage but we need a way to listen in to all requests our web view is making so we can work our magic with trusting the self signed certificate (remember this part is for dev releases only and not production).

Start by creating a new class and have it inherit from NSUrlProtocol.  Then override the default delegate methods so that we can add our custom logic.

[Export ("canInitWithRequest:")]
public static bool canInitWithRequest (NSUrlRequest request)
{
   Console.WriteLine (String.Format ("URL # {0}. Loading URL: {1}", counter.ToString (), request.Url.ToString ()));

   if (CustomNsUrlProtocol.GetProperty (MY_KEY, request) == myObj)
      return false; // request has already been handled

   return true;
}

Here we get the opportunity to check all requests before they are loaded, and make a decision if we need to do something clever or not.  The default behaviour here is to return ‘true’ which means we need to do something.  So let’s do something…

public override void StartLoading ()
{
   // start loading the URL now we have handled it
   new NSUrlConnection (Request, new CustomNSUrlConnectionDataDelegate (this, new NSUrl (Constants.baseUrl)), true);
}

The clever part is to create a new instance of a NSUrlConnectionDataDelegate and use this delegate to inspect the certificate of each request.

Finally we just need to tell our custom protocol that we have now dealt with this request and we do not want to see it again.

public override NSUrlRequest Request
{
   get {
      NSMutableUrlRequest mutableRequest = (NSMutableUrlRequest)base.Request.MutableCopy ();
      CustomNsUrlProtocol.SetProperty (myObj, MY_KEY, mutableRequest);
      return mutableRequest;
   }
}

We’ve assigned an object to our request.  Now when it attempts to load, via canInitWithRequest, it will recognise that we have already dealt with this request and the browser will continue to load the request as normal.

On to the magic

So far you’ve probably noticed we haven’t actually done anything with our requests other than inspect them and assign an object to say we have done so.

Create a class that inherits from NSUrlConnectionDataDelegate and override CanAuthenticateAgainstProtectionSpace. This method will be called if we receive an authentication challenge for any of the requests we pass in from our NSUrlProtocol.

public override bool CanAuthenticateAgainstProtectionSpace (NSUrlConnection connection, NSUrlProtectionSpace protectionSpace)
{
   #if DEBUG || TEST
      if (protectionSpace.AuthenticationMethod.Equals (Foundation.NSUrlProtectionSpace.AuthenticationMethodServerTrust))
         return true;
   #endif
   return false;
}

The default value we return is false, which means we will not authenticate against any insecure connections. However, if we receive a server trust challenge and we are running in the development or test environment, then we return true, as this is something we want to handle.

Finally, I use the ReceivedAuthenticationChallenge delegate method to trust the request.

public override void ReceivedAuthenticationChallenge (NSUrlConnection connection, NSUrlAuthenticationChallenge challenge)
{
   var baseUrl = new NSUrl (Constants.baseUrl); // this is the base URL we trust

   // check we trust the host - an additional layer of security
   if (challenge.ProtectionSpace.Host.Equals (baseUrl.Host))
      challenge.Sender.UseCredential (new NSUrlCredential (challenge.ProtectionSpace.ServerSecTrust), challenge);
   else
      Console.WriteLine ("Host not trusted: " + challenge.ProtectionSpace.Host);
}

A quick check ensures the URL is from a webpage I trust; if so, I provide my own credentials and reload the connection. Re-run the application and, if all goes well, there will be a heap of URLs written to the debug log as they are ‘taken care of’, and the page will load as normal.

As always, leave a comment below if you have a question.  A sample project with the complete working code can be downloaded from here.

If you’re after more information or want to introduce more control over your request loading then I recommend this page as a starting point: https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/URLLoadingSystem/Articles/AuthenticationChallenges.html

Secure Azure Virtual Network Defense In Depth using Network Security Groups, User Defined Routes and Barracuda NG Firewall

Security Challenge on Azure

There are a few common security-related questions when we start planning a migration to Azure:

  • How can we restrict ingress and egress traffic on Azure?
  • How can we route traffic on Azure?
  • Can we have a firewall, Intrusion Prevention System (IPS), Network Access Control, Application Control and Anti-Malware on an Azure DMZ?

This blog post intends to answer the above questions using the following Azure features combined with a security virtual appliance available on the Azure Marketplace:

  • Azure Virtual Network (VNET)
  • Azure Network Security Groups (NSGs)
  • Azure Network Security Rule
  • Azure Forced Tunnelling
  • Azure Route Table
  • Azure IP Forwarding
  • Barracuda NG Firewall available on Azure Marketplace

One of the most common sources of attack is the script kiddie (also known as a skiddie, script bunny or script kitty). Script kiddie attacks have always been among the most frequent, and they still are. However, attacks have evolved into something more advanced, sophisticated and far more organised. The diagram below illustrates the evolution of attacks:

evolution of attacks


The main target of attacks, from the least sophisticated to the most advanced, is our data. Data loss means financial loss. We work together and share responsibility with our cloud provider to secure our cloud environment. This blog post will focus on the Azure environment.

Defense in Depth

According to the SANS Institute, defense in depth is the concept of protecting a computer network with layered defensive mechanisms. There are various defensive mechanisms and countermeasures available to protect our Azure environment, because there are many possible attack scenarios and attack methods.

In this post we will use a combination of Azure Network Security Groups to establish the Security Zone model discussed in my previous blog post, deploy a network firewall with an Intrusion Prevention System on our Azure network to implement an additional high-security layer, and route traffic through our security appliances. In the Secure Azure Network post we learned how to establish a simple Security Zone on our Azure VNET. The underlying concept behind the zone model is an increasing level of trust from the outside in towards the centre. On the outside is the Internet – zero trust – which is where the script kiddies and other attackers reside.

The diagram below illustrates the simple scenario we will implement on this post:


There are four main configuration tasks we need to complete in order to establish the solution shown in the diagram above:

  • Azure VNET Configuration
  • Azure NSG and Security Rules
  • Azure User Defined Routes and IP Forwarding
  • Barracuda NG Firewall Configuration

In this post we will focus on the last two items. This tutorial will assist readers in creating an Azure VNET, and my previous blog post covers how to establish a Security Zone using Azure NSGs.

Barracuda NG Firewall

The Barracuda NG Firewall fills the functional gaps between cloud infrastructure security and Defense-In-Depth strategy by providing protection where our application and data reside on Azure rather than solely where the connection terminates.

The Barracuda NG Firewall can intercept all Layer 2 through 7 traffic and apply policy-based controls, authentication, filtering and other capabilities. Just like the physical device, the Barracuda NG Firewall running on Azure has traffic management and bandwidth optimisation capabilities.

The main features:

  • PAYG – Pay as you go / BYOL – Bring your own license
  • ExpressRoute Support
  • Network Firewall
  • VPN
  • Application Control
  • IDS – IPS
  • Anti-Malware
  • Network Access Control Management
  • Advanced Threat Detection
  • Centralized Management

The above features are necessary to establish a virtual DMZ in Azure and implement our Defense-In-Depth and Security Zoning strategy.

Choosing the right size of Barracuda NG Firewall will determine the level of support and throughput available to our Azure environment. The datasheet can be found here.

I wrote the handy little script below to deploy a Barracuda NG Firewall Azure VM with two network interfaces:

User Defined Routes in Azure

Azure allows us to redefine the routing in our VNET, which we will use to redirect traffic through our Barracuda NG Firewall. We will enable IP forwarding for the Barracuda NG Firewall virtual appliance, then create and configure a route table for the backend networks so that all traffic is routed through the firewall.

There are some caveats when using the Barracuda NG Firewall on Azure:

  • User-defined routing at the time of writing cannot be used for two Barracuda NG Firewall units in a high availability cluster
  • After the Azure routing table has been applied, the VMs in the backend networks are only reachable via the NG Firewall. This also means that existing Endpoints allowing direct access no longer work

Step 1: Enable IP Forwarding for Barracuda NG Firewall VM

In order to forward traffic, we must enable IP forwarding on the primary network interface and the other network interfaces (Ethernet 1 and Ethernet 2) of the Barracuda NG Firewall VM.

Enable IP Forwarding:

Enable IP Forwarding on Ethernet 1 and Ethernet 2:

On the Azure networking side, our Azure Barracuda NG Firewall VM is now allowed to forward IP packets.

Step 2: Create Azure Routing Table

By creating a routing table in Azure, we will be able to redirect all Internet outbound connectivity from Mid and Backend subnets of the VNET to the Barracuda NG Firewall VM.
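Conceptually, the route table resolves each packet's next hop by longest-prefix match, with the 0.0.0.0/0 default route catching everything Internet-bound while more specific prefixes keep intra-VNET traffic local. The short Python sketch below illustrates that lookup; all addresses here are hypothetical examples, not values from this deployment.

```python
import ipaddress

# Illustrative route table: a default route (0.0.0.0/0) sends all
# Internet-bound traffic to the firewall's primary NIC, while the
# VNET prefix stays on the local virtual network. All addresses are
# hypothetical examples.
routes = {
    "0.0.0.0/0":   "10.0.1.4",   # next hop: firewall primary NIC
    "10.0.0.0/16": "local",      # intra-VNET traffic is delivered directly
}

def next_hop(destination: str) -> str:
    """Resolve a destination the way a route table would:
    the most specific (longest) matching prefix wins."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1]

print(next_hop("8.8.8.8"))    # Internet-bound -> firewall NIC
print(next_hop("10.0.2.10"))  # backend VM -> stays local
```

This is exactly why assigning the route table to the Mid and Backend subnets is enough: only traffic that matches no more specific prefix falls through to the firewall.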

Firstly, create the Azure Routing Table:

Next, we need to add the Route to the Azure Routing Table:

As we can see, the next hop IP address for the default route is the IP address of the primary network interface of the Barracuda NG Firewall. We also have two extra network interfaces that can be used for other routing.

Lastly, we will need to assign the Azure Routing Table we created to our Mid or Backend subnet.

Step 3: Create Access Rules on the Barracuda NG Firewall

By default all outgoing traffic from the mid or backend is blocked by the NG Firewall. Create an access rule to allow access to the Internet.

Download the Barracuda NG Admin to manage our Barracuda NG Firewall running on Azure and login to our Barracuda NG Admin console:



Create a PASS access rule:

  • Source – Enter our mid or backend subnet
  • Service – Select Any
  • Destination – Select Internet
  • Connection – Select Dynamic SNAT
  • Click OK and place the access rule higher than other rules blocking the same type of traffic
  • Click Send Changes and Activate


Our VMs in the Mid or Backend subnet can now access the Internet via the Barracuda NG Firewall. I RDP to my VM sitting on the Mid subnet and browse to Google.com:


Let’s have a quick look at Barracuda NG Admin Logs 🙂


And we are good to go, using the same method to configure the rest and protect our Azure environment:

  • Backend traffic passes through our Barracuda NG Firewall before reaching the Mid subnet, and vice versa
  • Mid traffic passes through our Barracuda NG Firewall before reaching the Frontend subnet, and vice versa

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.




Azure ExpressRoute in Australia via Equinix Cloud Exchange

Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network (NSP) based and Exchange (IXP) based with each allowing us to extend our infrastructure by providing connectivity that is:

  • Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
  • Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active – active router configuration.
  • High speed: speeds differ between NSP and IXP connections – but go from 10Mbps up to 10Gbps. ECX provides three choices of virtual circuit speeds in Australia: 200Mbps, 500Mbps and 1Gbps.

Microsoft provides a handy comparison table of the different types of Azure connectivity in this blog post.

ExpressRoute with Equinix Cloud Exchange

Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers which includes Microsoft Azure. ECX’s main features are:

  • On Demand (once you’re signed up)
  • One physical port supports many Virtual Circuits (VCs)
  • Available Globally
  • Support 1Gbps and 10Gbps fibre-based Ethernet ports. Azure supports virtual circuits of 200Mbps, 500Mbps and 1Gbps
  • Orchestration using API for automation of provisioning which provides almost instant provisioning of a virtual circuit.

We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS DirectConnect. This is supported as long as we use the same tagging mechanism based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the Sell side (Z-side) to connect to ECX.
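To make the difference between the two tagging schemes concrete, here is a toy sketch (plain Python, no networking libraries) of how the VLAN headers stack. The TPID values are the real ones from the 802.1Q and 802.1ad standards; the VLAN IDs are made-up examples.

```python
from dataclasses import dataclass

# TPIDs are the real EtherType values from the standards;
# the VLAN IDs below are made-up examples.
DOT1Q_TPID = 0x8100   # 802.1Q customer tag (C-TAG)
QINQ_TPID  = 0x88A8   # 802.1ad service tag (S-TAG)

@dataclass
class Tag:
    tpid: int
    vlan_id: int

def dot1q_frame(vlan_id: int) -> list:
    """Single 802.1Q tag: one VLAN tag per virtual circuit."""
    return [Tag(DOT1Q_TPID, vlan_id)]

def qinq_frame(s_tag: int, c_tag: int) -> list:
    """802.1ad stacking: the outer S-TAG identifies the virtual
    circuit/chassis, the inner C-TAG identifies the peering type."""
    return [Tag(QINQ_TPID, s_tag), Tag(DOT1Q_TPID, c_tag)]

# e.g. primary VC on S-TAG 1001, carrying private peering on C-TAG 100
print(qinq_frame(1001, 100))
```

With Dot1Q the single tag must do both jobs, which is why each peering needs its own virtual circuit; with QinQ one outer circuit can carry both peerings, distinguished by the inner tag.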

ECX pre-requisites for Azure ExpressRoute

The pre-requisites for connecting to Azure, regardless of the tagging mechanism, are:

  • Two Physical ports on two separate ECX chassis for redundancy.
  • A primary and secondary virtual circuit per Azure peer (public or private).

Buy-side (A-side) Dot1Q and Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:

Dot1Q setup

Tags on the Primary and Secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-Side use one VLAN tag per circuit request. This VLAN tag should be the same VLAN tag used when setting up the Private or Public BGP sessions on Azure using Azure PowerShell.

There are a few things to note when using Dot1Q in this context:

  1. The same Service Key can be used to order separate VCs for private or public peerings on ECX.
  2. Order a dedicated Azure circuit using the Azure PowerShell cmdlet (shown below), obtain the Service Key, and use this to raise virtual circuit requests with Equinix. https://gist.github.com/andreaswasita/77329a14e403d106c8a6

    Get-AzureDedicatedCircuit returns the following output.

    As we can see the status of ServiceProviderProvisioningState is NotProvisioned.

    Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.

  3. Two physical ports need to be provisioned for redundancy on ECX – you will get the notification from Equinix NOC engineers once the physical ports have been provisioned.
  4. Submit one virtual circuit request for each of the private and public peers on the ECX Portal. Each request needs a separate VLAN ID along with the Service Key. Go to the ECX Portal and submit one request for private peering (two VCs – primary and secondary) and one request for public peering (two VCs – primary and secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned.

Next we need to configure BGP for exchanging routes between our on-premises network and Azure, but we will come back to this after a quick look at using QinQ with Azure ExpressRoute.

Buy-side (A-side) QinQ Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:

QinQ setup

C-TAGs identify private or public peering traffic on Azure, and the primary and secondary virtual circuits are set up across separate ECX chassis identified by unique S-TAGs. The A-side buyer (us) can choose either the same or different VLAN IDs to identify the primary and secondary VCs. The same pair of primary and secondary VCs can be used for both private and public peering towards Azure; the inner tags identify whether the session is private or public.

The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:

  1. Submit only one request on the ECX Portal for both private and public peers. The same pair of primary and secondary virtual circuits can be used for both private and public peering in this setup.

Configuring BGP

ExpressRoute uses BGP for routing, and you require four /30 subnets: one each for the primary and secondary routes of both private and public peering. The IP prefixes used for BGP cannot overlap with IP prefixes in either your on-premises or cloud environments. Example routing subnets and VLAN IDs:

  • Primary Private: (VLAN 100)
  • Secondary Private: (VLAN 100)
  • Primary Public: (VLAN 101)
  • Secondary Public: (VLAN 101)

The first available IP address of each subnet will be assigned to the local router and the second will be automatically assigned to the router on the Azure side.
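This addressing rule follows directly from the /30 prefix length, which leaves exactly two usable host addresses per subnet. A quick stdlib check (the prefix below is a hypothetical example, not one of the subnets above):

```python
import ipaddress

# A /30 gives exactly two usable host addresses - one for each end
# of the BGP session. The prefix here is a hypothetical example.
link = ipaddress.ip_network("192.168.15.196/30")
local_router, azure_router = list(link.hosts())

print(local_router)   # first usable address -> our on-premises router
print(azure_router)   # second usable address -> the router on the Azure side
```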

To configure BGP sessions for both private and public peering on Azure use the Azure PowerShell Cmdlets as shown below.

Private peer:

Public peer:

Once we have configured the above we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Skype for Business External Authentication

Microsoft Lync/Skype for Business has revolutionised the way people communicate and collaborate in the workplace. With lightweight and portable form factors coming into their own, devices have enabled businesses to rethink their communication strategy. Lync not only enables users to communicate using great device form factors, but also from wherever they may be located. The sense of a roaming Lync identity brings freedom to how people choose to collaborate, and office spaces, desks and the name tags mounted above them seem like a necessity of the past. The enablement of remote connectivity across these devices is pivotal in a Lync deployment, but sometimes isn’t entirely understood. In some circumstances security is a high concern for all forms of connectivity over the public Internet, but you wouldn’t want to go without remote access. To remove it from users would cripple the UC strategy you were trying to put in place.

When we think about Lync/SfB external authentication, we first must recognise that there is more than one form of authentication a user can attempt, and many device types they can attempt it with. It follows that there is more than one endpoint and port on the edge of the corporate network listening for, and proxying, these forms of authentication. What we need to do is make sure each case is controlled and understood in a way that best suits your deployment.

Question: “So Arran what is secure?”

Answer: “Well the security policy should govern what is and isn’t classified as secure for you.”

The common device(s) attempting authentication are:

  1. Lync Office Client
  2. Mobile/Tablet app
  3. Windows 8 Store App

With these authentication types:

  1. NTLM
  2. TLS-DSK
  3. Passive (ADFS)

We aren’t going to talk about Kerberos because we are concerned with external logins.


NTLM is usually well understood as a simple challenge/response authentication, but in Lync it means that every time a web ticket expires the same challenge authentication must be presented. Usually, to make this simple for the end user, we allow them to cache/save the password on the device for re-authentication on their behalf.
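To see why cached credentials are so central to NTLM's risk profile, here is a toy challenge/response exchange. This is an HMAC-based sketch of the general shape, not the actual NTLM algorithm: the secret never crosses the wire, but the client must keep the password (or a hash of it) around to answer every fresh challenge.

```python
import hashlib
import hmac
import os

# Toy challenge/response - NOT the real NTLM protocol, just its shape:
# the secret never crosses the wire, but the client must retain it to
# answer every new challenge (hence cached credentials on devices).
secret = hashlib.sha256(b"P@ssw0rd").digest()  # stand-in for a stored password hash

def make_challenge() -> bytes:
    return os.urandom(16)  # server-generated nonce

def respond(challenge: bytes, secret: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(challenge, secret)
print(verify(challenge, response, secret))         # True - fresh response
print(verify(make_challenge(), response, secret))  # False - stale response
```

The stale response failing is the point: each web ticket expiry triggers a new challenge, so the device must either prompt the user again or hold the credential, and it is the held credential that worries security teams.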




NTLM will generally get a big ‘NO’ straight away if these conversations have started with a security team, so let’s look at Transport Layer Security Derived Session Key (TLS-DSK) as a certificate-based authentication. Every Lync Front End server issues a Lync user certificate upon initial successful authentication, and once the certificate is saved, the stored AD credentials aren’t needed for the validity period of the certificate, which can range from 8 hours to 365 days (your choice). A key point to make about TLS-DSK is that if I have multiple devices, I will receive a certificate for each, and I will then use the certificate on each device to authenticate to Lync as me. Additionally, the certificate I have stored is only trusted by Lync, not my entire domain or ADFS, and can’t be used across other applications or services.


TLS-DSK allows us to move away from simple challenge authentication and the subsequent re-authentications altogether.

TLS-DSK is great!


By disabling NTLM for external registration (shown in the diagram above with green – internal and blue – external) we can ensure that a client has to obtain its Lync certificate from the internal Front End servers while on-premises, rather than being provisioned through an Edge proxy. The certificate can NOT be issued from external locations, because the authentication process breaks when the client requests a web ticket to start the process. Currently Skype for Business does not do this natively: the client’s NTLM authentication against the web services goes via the Simple URLs, which are controlled by a reverse proxy. There are only a few out-of-the-box solutions for this; Lync Solutions and Skype Shield are worth investigating.

If you go the extra mile and do not allow NTLM authentication from your external network, you are then protected by additional layers of on-premises security:

  • The individual has gained physical access to a site location through a perimeter-locked entrance
  • The individual has gained Layer 2 network connectivity with a device on an access switch
  • The individual has an approved domain-joined computer to gain access to domain services on the appropriate VLAN
  • The individual has supplied correct credentials for a provisioned Lync user account.

Sounds like a job for Tom Cruise hanging from the roof if you ask me. The end result is a Lync user certificate in the user store of the trusted machine. We should now be happy for the user to go out into the world with this device, knowing that both they and the managed device are who we think they are.


What if you would NOT like Lync to do any authentication at all?

“I have a centralised authentication services called Active Directory Federation Services (ADFS) and I would like to use it with Lync”.

Lync can be integrated with ADFS as your Security Token Service (STS), which can also provide a second factor if needed. You can then leverage forms-based authentication or smart cards. There are a few tricks to enabling Lync Server with passive authentication; first of all, to enable passive authentication we must disable all other forms of authentication. As this is a ‘big bang’ approach that effectively disables all other authentication protocols per pool, make sure you plan and test appropriately.


Lync Mobile Clients

Lync is a free download in each of the mobile stores, and we can’t control whether a trusted or untrusted device downloads it. So for external access from the Lync 2013 mobile client we must be comfortable with the authentication, which comes in the three available forms:

  • NTLM
  • Lync Certificate
  • Passive

NTLM once again may be too basic to meet our requirements, so we need to look at what else is on offer; James Frost has a great comment at the bottom of this post about products that can be used. As James points out, the certificate web service URL is exposed to the internet via the reverse proxy you are using. The type of device in use will determine what is possible in terms of validating approved devices or authentication attempts. The reverse proxy breaks the connection from front end to back end, and this is a perfect point at which to inspect traffic if necessary. ADFS can be a big leap just to control remote authentication with the Lync app; instead we can protect our users and devices by wiping the initial NTLM credentials from the device’s disk and memory via a PowerShell mobility in-band provisioning policy:

Set-CsMobilityPolicy -AllowSaveCredentials <$true/$false>

We can also disable EWS requests, which the app uses to make simple NTLM requests to Exchange:

Set-CsMobilityPolicy -AllowExchangeConnectivity <$true/$false>

Where does the authentication happen?

When I stated that there is “more than one endpoint on the edge of the corporate network, waiting and proxying these forms of authentication”, this is because clients behave differently. The Lync Office client directs its authentication requests to the Lync Edge servers as SIP over TLS, while mobile device apps use a reverse proxy with a published URL rule, since all their signalling traffic is SIP over HTTPS while on a 3G/4G data network. So there is more than one ingress point for authentication to monitor. The Edge server inspects all packets, while the reverse proxy can inspect the HTTPS traffic as it splits the traffic from front end to back end.

Lync External Auth Flow


Options are available to change the way Lync can authenticate to your network. A greater understanding of the authentication endpoints, protocols and flows will aid you in being on top of your environment and smoothly rolling out external devices that are secure and trusted.

Let’s Hack It: Securing data on the mobile, the what, why, and how

Our solution to secure the mobile app data

Our solution to secure the mobile app data: it uses open source libraries and took very minimal effort to implement.

Here is the presentation of tonight’s talk. It was great to see so many passionate developers and business people at Melbourne Mobile. I have embedded the slides below and I hope you find it useful.

Talk Summary

This presentation is basically a summary of what I learned and the experience I had on my recent project. In trying to secure users’ data on the mobile device, I came to learn about quite a few common flaws in security implementations, learned more reasons why you need to protect the data in your mobile app, and came to know and use a few useful open source projects. I share my experience from this project hoping it will help you gain some insight and protect your mobile app data easily.

I said in the presentation that you can start securing your app data with a couple of lines of code, and I do mean it. In fact, if you use SQLCipher, you can do it by changing only one line of code :). I have put links to all my references and the libraries I used at the end of the slides.
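For context on what that one line buys you: SQLCipher derives its page-encryption key from the passphrase you supply via PBKDF2. The stdlib sketch below shows just that derivation step; the iteration count and salt handling are illustrative rather than SQLCipher's exact defaults, which have changed between versions.

```python
import hashlib
import os

# Sketch of the passphrase -> encryption-key derivation that SQLCipher
# performs internally when you set a key. The iteration count and salt
# handling here are illustrative, not SQLCipher's exact defaults.
passphrase = b"correct horse battery staple"
salt = os.urandom(16)  # SQLCipher stores its salt in the database header

key = hashlib.pbkdf2_hmac("sha512", passphrase, salt, 256_000, dklen=32)
print(len(key))  # 32-byte (256-bit) key, suitable for AES-256
```

The salt and high iteration count are what make brute-forcing a stolen database file expensive, which is why a single "set the key" line gives you meaningful protection.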

Let's Hack It Presentation

Let’s Hack It Presentation

Finally, just in case you do not like Prezi, here are the slides as a PDF file:
Let’s Hack It, the presentation as PDF