Hardening Ubuntu Server Security For Use in the Cloud

The following describes a few simple ways of improving Ubuntu Server security for use in the cloud. Many of the optimizations discussed below apply equally to other Linux-based distributions, although the commands and settings will vary somewhat.

Azure cloud specific recommendations

  1. Use private key and certificate based SSH authentication exclusively and never use passwords.
  2. Never employ common usernames such as root, admin or administrator.
  3. Change the default public SSH port away from 22.
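Taken together, the first and third recommendations amount to a short sshd_config fragment along these lines (the port number is an arbitrary example; pick your own and remember to update any firewall or NSG rules to match):

```
# /etc/ssh/sshd_config (excerpt)
Port 2222                      # example only; any non-default port
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
```

Reload sshd after editing (for example with sudo service ssh restart) and keep an existing session open while testing, so that a mistake cannot lock you out.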

AWS cloud specific recommendations

AWS makes available a small list of recommendations for securing Linux in their cloud security whitepaper.

Ubuntu / Linux specific recommendations

1. Disable the use of all insecure protocols (FTP, Telnet, RSH and HTTP) and replace them with their encrypted counterparts such as SFTP, SSH, SCP and HTTPS

sudo apt-get remove xinetd telnetd rsh-server tftpd

(On Red Hat family systems the equivalent would be: yum erase inetd xinetd ypserv tftp-server telnet-server rsh-server.)

2. Uninstall all unnecessary packages

dpkg --get-selections | grep -v deinstall    # list all installed packages
dpkg --get-selections | grep postgres        # search for a specific package
sudo apt-get remove packageName

For more information: http://askubuntu.com/questions/17823/how-to-list-all-installed-packages

3. Run the most recent kernel version available for your distribution

For more information: https://wiki.ubuntu.com/Kernel/LTSEnablementStack

4. Disable root SSH shell access

Open the following file…

sudo vim /etc/ssh/sshd_config

… then change the following value to no.

PermitRootLogin yes

For more information: http://askubuntu.com/questions/27559/how-do-i-disable-remote-ssh-login-as-root-from-a-server

5. Grant shell access to as few users as possible and limit their permissions

Limiting shell access is an important means of securing a system. Shell access is inherently dangerous because of the risk of unlawful privilege escalation, as with any operating system; stolen credentials are a concern too.

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add an AllowUsers line naming each user to be allowed (the list is space separated).

AllowUsers jim tom sally

For more information: http://www.cyberciti.biz/faq/howto-limit-what-users-can-log-onto-system-via-ssh/

6. Limit or change the IP addresses SSH listens on

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add the following.

ListenAddress <IP ADDRESS>


7. Restrict all forms of access to the host by individual IPs or address ranges

TCP wrapper based access lists can be included in the following files: /etc/hosts.allow and /etc/hosts.deny.

Note: Any changes to your hosts.allow and hosts.deny files take immediate effect, no restarts are needed.


ALL : 123.12.

Would match all hosts whose IP address begins with 123.12. (a client pattern ending in a dot matches every host in that network prefix).


An IP address and subnet mask can be used in a rule.

sshd : /etc/sshd.deny

If the client list begins with a slash (/), it is treated as a filename. In the above rule, TCP wrappers consults the file /etc/sshd.deny for all SSH connections.


A rule of this kind will allow SSH connections from only the machine with the specified IP address and block all other connection attempts. You can use the options allow or deny to grant or restrict access on a per client basis in either of the files.

in.telnetd : : deny
in.telnetd : : allow
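As a hedged illustration of how the two files are commonly paired (the network range below is an example only), a default-deny policy might look like this:

```
# /etc/hosts.deny: refuse everything not explicitly allowed
ALL : ALL

# /etc/hosts.allow: then permit SSH from a trusted subnet only
sshd : 192.168.1.0/255.255.255.0
```

Because hosts.allow is consulted before hosts.deny, the SSH exception takes precedence over the blanket deny.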

Warning: When restricting system shell access by IP address, be very careful not to lose access to the system by locking the administrative user out!

For more information: https://debian-administration.org/article/87/Keeping_SSH_access_secure

8. Check listening network ports

Check listening ports and uninstall or disable all nonessential or insecure protocols and daemons.

sudo netstat -tulpn

9. Install Fail2ban

Fail2ban is a means of dealing with unwanted system access attempts over any protocol against a Linux host. It uses rule sets to automate the banning of offending source IPs, for variable lengths of time, in response to configurable activity patterns such as spam, (D)DoS or brute force attacks.

“Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.” – Wikipedia

For more information: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-ubuntu-14-04
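Installation is a single package, and a minimal jail might look like the sketch below (the values are examples; note that on older releases the default SSH jail is named [ssh] rather than [sshd]):

```
sudo apt-get install fail2ban

# /etc/fail2ban/jail.local (excerpt)
[sshd]
enabled  = true
port     = ssh
maxretry = 5
bantime  = 3600
```

Restart the fail2ban service after editing for the jail to take effect.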

10. Improve the robustness of TCP/IP

Add the following settings to a sysctl configuration file, such as the one below, to harden your networking configuration.

sudo vim /etc/sysctl.d/10-network-security.conf

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

And load the new rules as follows.

sudo service procps start

For more information: https://blog.mattbrock.co.uk/hardening-the-security-on-ubuntu-server-14-04/

11. If you are serving web traffic install mod-security

Web application firewalls can be helpful in warning of and fending off a range of attack vectors including SQL injection, (D)DOS, cross-site scripting (XSS) and many others.

“ModSecurity is an open source, cross-platform web application firewall (WAF) module. Known as the “Swiss Army Knife” of WAFs, it enables web application defenders to gain visibility into HTTP(S) traffic and provides a power rules language and API to implement advanced protections.”

For more information: https://modsecurity.org/

12. Install a firewall such as IPtables

IPtables is a highly configurable and very powerful Linux firewall which has a great deal to offer in terms of bolstering host-based security.

“iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.” – Wikipedia.

For more information: https://help.ubuntu.com/community/IptablesHowTo
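As an illustrative ruleset only (the policy below assumes SSH on the default port 22 and the iptables-persistent package for loading rules at boot), a minimal default-deny input chain might look like this:

```
# /etc/iptables/rules.v4 (example policy, adjust to your needs)
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

Test rules from a console session before persisting them; as with the TCP wrappers warning above, a mistyped rule can lock you out of a cloud host.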

13. Keep all packages up to date at all times and install security updates as soon as possible

 sudo apt-get update        # Fetch the list of available updates
 sudo apt-get upgrade       # Upgrade the currently installed packages
 sudo apt-get dist-upgrade  # Upgrade, adding or removing packages as dependencies require
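On Ubuntu, security updates can also be applied automatically with the unattended-upgrades package; a sketch of the stock configuration:

```
sudo apt-get install unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

By default only packages from the security pocket are upgraded automatically; the behaviour can be tuned in /etc/apt/apt.conf.d/50unattended-upgrades.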

14. Install multifactor authentication for shell access

Nowadays it’s possible to use multi-factor authentication for shell access thanks to Google Authenticator.

For more information: https://www.digitalocean.com/community/tutorials/how-to-set-up-multi-factor-authentication-for-ssh-on-ubuntu-14-04
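On Ubuntu the setup boils down to installing the PAM module, enrolling each user, and wiring it into SSH; a sketch (back up your PAM configuration first and keep a session open while testing):

```
sudo apt-get install libpam-google-authenticator
google-authenticator            # run as each user to generate ~/.google_authenticator

# /etc/pam.d/sshd: add
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config: ensure
ChallengeResponseAuthentication yes
```

Restart sshd afterwards; users will then be prompted for a verification code in addition to their key or password.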

15. Add a second level of authentication behind every web based login page

Stolen passwords are a common problem, whether as a result of a vulnerable web application, an SQL injection, a compromised end user computer or something else altogether. Adding a second layer of protection using .htaccess authentication, with credentials stored on the filesystem rather than in a database, is a great added layer of security.

For more information: http://stackoverflow.com/questions/6441578/how-secure-is-htaccess-password-protection
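A minimal sketch of the .htaccess approach (the username and file locations are examples, and the directory must be covered by AllowOverride AuthConfig in the Apache configuration):

```
# Create the credentials file outside the web root
sudo htpasswd -c /etc/apache2/.htpasswd alice

# .htaccess in the directory to be protected
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
```

Serve such pages over HTTPS only, since basic authentication credentials are otherwise sent in the clear.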

Are There Sufficient Standards in Cloud Computing Today?

The hybrid cloud may be a hot topic with adoption growing faster than ever but should we be concerned about a lack of established standards?

What is the Hybrid Cloud?

Private clouds, whether owned or leased, generally consist of closed IT infrastructures accessible only to a business, which then makes resources available to its own internal customers. Private clouds are often home to core applications where control is essential to the business. They can also offer economies of scale where companies can afford larger, long term investments and have the ability either to run these environments themselves or to pay for a managed service. Private cloud investments tend to operate on a CAPEX model.

Public clouds are shared platforms for services made available by third parties to their customers on a pay-as-you-go basis. Public cloud environments are well suited to all but the most critical and most expensive-to-run applications. They offer the significant benefit of not requiring large upfront capital investments because they operate on an OPEX model.

Hybrid clouds on the other hand are made up of a mix of both types of resources working together across secured, private network connections. They can offer the benefits of both models but run the risk of additional complexity and can lessen the benefits of working at scale.

Enter the Multi-Cloud

With an ever growing number of businesses seeking to adopt a multi-cloud / multi-vendor strategy, the potential benefits of this new take are clear. It’s an approach which offers increased resiliency and the best in feature sets while minimizing lock-in; albeit at the cost of having to manage more complex infrastructure and billing structures.

However, in the absence of standards, cloud providers and hardware vendors have been building proprietary stacks with little common ground, which stymies the movement of applications and workloads across clouds and represents a challenge for business uptake.

So it seems clear that a gap in cloud computing standards, and insufficient overlap among hardware vendors of private cloud technologies, have been hampering adoption, something which needs to be addressed.

Standards are Coming However

Generally speaking, standards follow market forces, particularly where the pace of innovation is fairly rapid. In a market of this size, however, they will undoubtedly catch up eventually. Case in point: a number of standards are expected to be finalized reasonably soon and to reach the industry within the next couple of years, from organizations such as the IEEE Standards Association, the Cloud Standards Customer Council, the Open Networking User Group and others, which will be a welcome development and a significant asset for the industry.

Some additional information about these organizations.

– IEEE Standards Association

Developing Standards for Cloud Computing

“The IEEE Standards Association (IEEE-SA) is a leading consensus building organization that nurtures, develops and advances global technologies, through IEEE. We bring together a broad range of individuals and organizations from a wide range of technical and geographic points of origin to facilitate standards development and standards related collaboration. With collaborative thought leaders in more than 160 countries, we promote innovation, enable the creation and expansion of international markets and help protect health and public safety. Collectively, our work drives the functionality, capabilities and interoperability of a wide range of products and services that transform the way people live, work and communicate.” – IEEE Standards Association.

– Cloud Standards Customer Council

“The Cloud Standards Customer Council (CSCC) is an end user advocacy group dedicated to accelerating cloud’s successful adoption, and drilling down into the standards, security and interoperability issues surrounding the transition to the cloud. The Council separates the hype from the reality on how to leverage what customers have today and how to use open, standards-based cloud computing to extend their organizations. CSCC provides cloud users with the opportunity to drive client requirements into standards development organizations and deliver materials such as best practices and use cases to assist other enterprises.

Cloud Standards Customer Council founding enterprise members include IBM, Kaavo, CA Technologies, Rackspace & Software AG. More than 500 of the world’s leading organizations have already joined the Council, including Lockheed Martin, Citigroup, Boeing, State Street Bank, Aetna, AARP, AT&T, Ford Motor Company, Lowe’s, and others.” – Cloud Standards Customer Council.


– The Open Networking User Group’s Mission Statement and History

“The ONUG Hybrid Cloud Working Group framework seeks to commoditize infrastructure and increase choice among enterprise buyers of public cloud services. The goal is to have a framework, which identifies a minimum set of common issues and collective requirements that will swing leverage into the hands of enterprise buyers of hybrid cloud services.

The ONUG Mission is to enable greater choice and options for IT business leaders by advocating for open interoperable hardware and software-defined infrastructure solutions that span across the entire IT stack, all in an effort to create business value.”

The Open Networking User Group (ONUG) was created in early 2012 as the result of a discussion between Nick Lippis, of the Lippis Report, and Ernest Lefner, about the need for a smaller, more user-focused open networking conference. From there, the two brought together the founding board of IT leaders from the likes of Bank of America, Fidelity Investments, JPMorgan Chase, UBS, and Gap Inc. Managed by Nick, the board worked together to create the first ONUG event, held on February 13, 2013 at the Fidelity Auditorium in Boston, Massachusetts. – The Open Networking Users group.



Run Chromium OS without having to buy a Chromebook thanks to CloudReady

Thanks to the good folks at Neverware, you can now run Google’s cloud centric OS on a wider range of hardware than Chromebooks alone. To enable this, Neverware have repackaged Google’s Chromium operating system, the OS at the core of Google’s range of branded laptops, and made it available to all.

CloudReady running on different hardware.

The differences

Where Google build and maintain open source versions of Android and Chromium, their real value proposition is to add proprietary features to both before selling them on branded devices. Enter CloudReady, based entirely on the open source core of Chromium, making it a vanilla experience. Given its nature, not all features are available in the first release; examples include Powerwash and Trusted Platform Module support. A full list of differences is available from Neverware of course.

Software updates

Updates to CloudReady are delivered to the OS in a similarly transparent manner as with Chromium, however these updates come from Neverware, not Google. CloudReady is also several major releases behind Chromium for development reasons. It is worth noting that Neverware have somewhat boldly committed to “indefinite” support for the OS.


Neverware are focused on generating revenue through selling devices and OS licenses, as well as support to education and, at a later date, the enterprise. The caveat, however, is that there is currently no official support for the free version; you will have to look to community support through their user forum.

CloudReady recovery media creator.


CloudReady is available for download from Neverware. Installing it is just a matter of creating a USB based installer from which to boot and launch the process. This procedure is supported on Chrome OS, Windows and Mac devices. Having created the source media, you will then need to reboot the target system and have it boot from the relevant USB port by applying the required BIOS settings. Alternatively, CloudReady can also be dual booted alongside other operating systems. Detailed installation instructions are available from their web site.


CloudReady installer.


Neverware have made available a number of useful hardware support lists.

Chromium OS is an open-source project that aims to build an operating system that provides a fast, simple, and more secure computing experience for people who spend most of their time on the web. Here you can review the project’s design docs, obtain the source code, and contribute. – chromium.org

Neverware is a venture-backed technology company that provides a service to make old PCs run like new. In February 2015 the company launched its second product, CloudReady; an operating system built on Google’s open source operating system Chromium.

CloudReady can be installed on older PCs in order to make them perform like a Chromebook. CloudReady machines can even be managed under the Google Admin console, which is a true line of demarcation from just installing Chrome. It was founded by CEO Jonathan Hefter and currently specializes in the education sector. It is headquartered in the Flatiron District of Manhattan. – Wikipedia


How to make a copy of a virtual machine running Windows in Azure


I was called upon recently to help a customer create copies of some of their Windows virtual machines. The idea was to quickly deploy copies of these hosts at any time as opposed to using a system image or point in time copy.

The following PowerShell will therefore allow you to make a copy or clone of a Windows virtual machine, using a copy of its disks, in Azure Resource Manager mode.

Create a new virtual machine from a copy of the disks of another

Having finalized the configuration of the source virtual machine the steps required are as follows.

  1. Stop the source virtual machine, then, using Storage Explorer, copy its disks to a new location and rename them in line with the target name of the new virtual machine.

  2. Run the following in PowerShell making the required configuration changes.

Get-AzureRmSubscription –SubscriptionName "<subscription-name>" | Select-AzureRmSubscription

$location = (get-azurermlocation | out-gridview -passthru).location
$rgName = "<resource-group>"
$vmName = "<vm-name>"
$nicname = "<nic-name>"
$subnetID = "<subnetID>"
$datadisksize = "<sizeinGB>"
$vmsize = (Get-AzureRmVMSize -Location $location | Out-GridView -PassThru).Name
$osDiskUri = "https://<storage-acccount>.blob.core.windows.net/vhds/<os-disk-name.vhd>"
$dataDiskUri = "https://<storage-acccount>.blob.core.windows.net/vhds/<data-disk-name.vhd>"

Notes: The URIs above belong to the copies, not the original disks, and SubnetID refers to the subnet’s resource ID.

$nic = New-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $rgName -Location $location -SubnetId $subnetID
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $vmsize
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
$osDiskName = $vmName + "os-disk"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption attach -Windows
$dataDiskName = $vmName + "data-disk"
$vm = Add-AzureRmVMDataDisk -VM $vm -Name $dataDiskName -VhdUri $dataDiskUri -Lun 0 -Caching 'none' -DiskSizeInGB $datadisksize -CreateOption attach
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

List virtual machines in a resource group.

$vmList = Get-AzureRmVM -ResourceGroupName $rgName

Having run the above, log on to the new host in order to make any required changes.

Enterprise Cloud Take Up Accelerating Rapidly According to New Study By McKinsey

A pair of studies published a few days ago by global management consulting firm McKinsey & Company, entitled IT as a service: From build to consume, show enterprise adoption of Infrastructure as a Service (IaaS) accelerating rapidly over the next two years, into 2018.

Of the two, one examined the on-going migrations of 50 global businesses. The other saw a large number of CIOs, from small businesses up to Fortune 100 companies, interviewed on the progress of their transitions and the results speak for themselves.

1. Compute and storage is shifting massively to cloud service providers.


“The data reveals that a notable shift is under way for enterprise IT vendors, with on-premise shipped server instances and storage capacity facing compound annual growth rates of –5 percent and –3 percent, respectively, from 2015 to 2018.”

With on-premise storage and server sales growth going into negative territory, it’s clear the next couple of years will see the hyperscalers of this world consume an ever increasing share of global infrastructure hardware shipments.

2. Companies of all sizes are shifting to off-premise cloud services.


“A deeper look into cloud adoption by size of enterprise shows a significant shift coming in large enterprises (Exhibit 2). More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

The report also anticipates that the number of large enterprises hosting at least one workload on an IaaS platform will increase by 41% in the three year period to 2018, while small and medium sized businesses will see somewhat less aggressive increases of 12% and 10% respectively.

3. A fundamental shift is underway from a build to consume model for IT workloads.


“The survey showed an overall shift from build to consume, with off-premise environments expected to see considerable growth (Exhibit 1). In particular, enterprises plan to reduce the number of workloads housed in on-premise traditional and virtualized environments, while dedicated private cloud, virtual private cloud, and public infrastructure as a service (IaaS) are expected to see substantially higher rates of adoption.”

Another takeaway is that the share of traditional and virtualized on-premise workloads will shrink significantly, from 77% and 67% in 2015 to 43% and 57% respectively in 2018, while virtual private cloud and IaaS will grow from 34% and 25% in 2015 to 54% and 37% respectively in 2018.

Cloud adoption will have far-reaching effects

The report concludes “McKinsey’s global ITaaS Cloud and Enterprise Cloud Infrastructure surveys found that the shift to the cloud is accelerating, with large enterprises becoming a major driver of growth for cloud environments. This represents a departure from today, and we expect it to translate into greater headwinds for the industry value chain focused on on-premise environments; cloud-service providers, led by hyperscale players and the vendors supplying them, are likely to see significant growth.”

About McKinsey & Company

McKinsey & Company is a worldwide management consulting firm. It conducts qualitative and quantitative analysis in order to evaluate management decisions across the public and private sectors. Widely considered the most prestigious management consultancy, McKinsey’s clientele includes 80% of the world’s largest corporations, and an extensive list of governments and non-profit organizations.

Web site: McKinsey & Company
The full report: IT as a service: From build to consume

Troubleshooting Azure Network Security Groups

Some things I learned recently whilst troubleshooting a customer’s network security group (NSG) configuration.

Default rules

The default configuration of all NSGs includes 3 inbound and 3 outbound rules, which is something to be aware of. You can visualize these rules in the Azure portal or with the following PowerShell. The default rules cannot be disabled but can be overridden by creating rules with a higher priority (read: lower number!).

Choose a resource group

$nsgName = '<NSGNAME>'
$rgName = (Get-AzureRmResourceGroup | Out-GridView -Title 'Select Azure Resource Group:' -PassThru).ResourceGroupName

Display default rules

(Get-AzureRmNetworkSecurityGroup -Name $nsgName -ResourceGroupName $rgName).DefaultSecurityRules | Select-Object * | Out-GridView

Display custom rules

(Get-AzureRmNetworkSecurityGroup -Name $nsgName -ResourceGroupName $rgName).SecurityRules | Select-Object * | Out-GridView
NSG default rule set.


By default no inbound traffic is allowed except for requests from any Azure load balancers which may have been provisioned. Traffic across the subnet is allowed as is some outbound traffic including to the internet. So it’s important to use caution when considering the application of additional DENY ALL rules to the existing configuration.


As discussed previously, there are two operating modes in Azure, Service Management (ASM) and Resource Manager (ARM), with NSG behaviour differing across the two, so it’s important to be aware of the differences here too.

In ASM, NSGs can be applied at the virtual machine level as well as at the network interface and subnet level. In ARM, NSGs can only be applied at the subnet and network interface level. However, diagnostic logging of NSG events is available in ARM but not in ASM.

VPN and Express Route Gateways

Applying Network Security Groups to VPN and ExpressRoute gateways is strongly discouraged.

NSG diagnostic logging

Packet trace functionality is not available for troubleshooting NSGs at this time, but diagnostic logging can be used to better understand the nature of any problems your configuration might be suffering. To enable Network Security Group logging, browse to the NSG in the portal and head for the Diagnostics tab. Options include whether to log to a Storage Account or to an Event Hub. You also have a choice of logging retention period, from 0 (indefinite) to 365 days. Note, however, that existing logs will be lost if you change storage account, and there is a hard requirement for the storage account you choose to be in the same region as the resource in question.

Enabling Network Security Group diagnostics logging.


Log Types

There are three different kinds of logs available for troubleshooting problems with Network Security Groups: Audit, Event and Counter. Audit logs are enabled by default across all subscriptions, do not require a separate storage account, have a 90 day retention period and can be viewed in the portal. Event logs need to be manually enabled per NSG and can be used to view which rules have been enabled and at what level they have been applied. Counter logs also need to be manually enabled per NSG and can be used to see how often a rule was triggered to ALLOW or DENY traffic.

Analysing Logs

The following means are available for analyzing the logs mentioned above:

  1. PowerShell.
  2. The Azure CLI.
  3. REST APIs.
  4. The portal.
  5. Power BI.
  6. Azure Log Analytics.

Image a Windows Virtual Machine In Azure, then Deploy And Join It To A Domain

The following Azure Resource Manager mode PowerShell will allow you to create an image of an existing Windows virtual machine in Azure, deploy it at will and join it to a domain if necessary.

Login to PowerShell

$SubID = "your-subscription-ID"
Select-AzureRmSubscription -SubscriptionId $SubID

Prepare the source virtual machine

Run sysprep on the desired virtual machine in Azure.


When prompted for System Cleanup Action, choose ‘Enter System Out-of-Box Experience (OOBE)’, tick Generalize, and select Shutdown from Shutdown Options.


Deallocate and generalize the virtual machine

Stop-AzureRmVM -ResourceGroupName "resourceGroup" -Name "vmName"
Set-AzureRmVm -ResourceGroupName "resourceGroup" `
-Name "vmName" -Generalized

Create the virtual machine image

Save-AzureRmVMImage -ResourceGroupName YourResourceGroup  `
-VMName YourWindowsVM `
-DestinationContainerName YourImagesContainer `
-VHDNamePrefix YourTemplatePrefix -Path Yourlocalfilepath\Filename.json

Set the path to the image file you’ve just created. The URI is available in the storage account where the image file was created, under the following path.


$imageURI = "https://storageAccountName.blob.core.windows.net/system/Microsoft.Compute/Images/imagesContainer/templatePrefix-osDisk.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd"

Define the required variables

Complete the list of variables below needed to define the deployment.

$rgname = "destination-resource-group-name"
$vmsize = "vm-size"
$vmname = "vm-name"
$locName= "vm-location"
$nicName = "network-interface-name"
$vnetName = "virtual-network-name"
$SubName = "subnet-name"
$DomainName = "domain-name"
$DomainJoinAdminName = $DomainName + "\username"
$DomainJoinPassword = "password"
$osDiskName = "OS-disk-name"
$osDiskVhdUri = "URL-to-your-OS-disk-image-file"

Deploy the virtual machine.

Complete the deployment using the PowerShell below.

$vm = New-AzureRmVMConfig -VMName $vmname -VMSize $vmsize
$vnet=Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName
$subnet = $Vnet.Subnets | Where-Object { $_.Name -eq $SubName}
$pip=New-AzureRmPublicIpAddress -Name $nicName -ResourceGroupName $rgName -Location $locName -AllocationMethod Dynamic
$nic=New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $locName -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -VhdUri $osDiskVhdUri -name $osDiskName -CreateOption attach -Windows
New-AzureRmVM -ResourceGroupName $rgname -Location $locName -VM $vm

Join the domain.

Join the virtual machine to the domain using the PowerShell below.

Set-AzureRMVMExtension -VMName $VMName –ResourceGroupName $rgname -Name "JoinAD" -ExtensionType "JsonADDomainExtension" -Publisher "Microsoft.Compute" -TypeHandlerVersion "1.0" -Location $locName -Settings @{ "Name" = $DomainName; "OUPath" = ""; "User" = $DomainJoinAdminName; "Restart" = "true"; "Options" = 3} -ProtectedSettings @{ "Password" = $DomainJoinPassword}

Encryption In The Cloud

Is it safe? 

Three simple yet chilling words immortalized by the 1976 movie Marathon Man, starring Laurence Olivier and Dustin Hoffman, in which Olivier tries to discover by very unpleasant means whether the location of his stolen diamonds has been exposed.


Marathon Man (1976)

Well, had Sir Laurence encrypted that information, there would have been no need for him to worry, because he would have known that, short of using a weak cipher, a vulnerable algorithm or a weak password, encrypted data has a very strong chance of remaining secret no matter what.

What is encryption and why should your business be using it?

Encryption is a means of algorithmically scrambling data so that it can only be read by someone with the required keys. Most commonly it protects our mobile and internet communications, but it can also restrict access to data at rest on our computers, in our data centers and in the public cloud. It protects our privacy and provides anonymity. And although encryption can’t stop a malicious actor from stealing data, it will certainly stop them from being able to access or read it.

Case in point: had Sony Pictures Entertainment been encrypting their data in 2014, it would have been much harder for the perpetrators of the huge data theft against their corporate systems to extract anything useful from what they stole. As it was, none of it was encrypted, and much of it was leaked over the internet, including a lot of confidential business information and several unreleased movies.

Several years on and a string of other major data breaches later, figures published in the Ponemon Institute‘s 2016 Cloud Data Security study reveal persisting trends.

Research, conducted by the Ponemon Institute, surveyed 3,476 IT and IT security professionals in the United States, United Kingdom, Australia, Germany, France, Japan, Russian Federation, India and Brazil about the governance policies and security practices their organizations have in place to secure data in cloud environments.

On the importance of encrypting data in the cloud, 72% of respondents said the ability to encrypt data was important and 86% said it would become even more important over the next two years. Yet only 42% of respondents were actively using encryption to secure sensitive data, and only 55% of those said their organization retained control of its keys.

Why is control of your encryption keys important?

While encryption may add a significant layer of control over access to your data, sharing your keys with third parties such as a service provider can still leave room for unwanted access. Whether it's a disgruntled employee or a government agency with legal powers, the end result might still be a serious breach of your privacy. So if you have taken the trouble to encrypt your data, you really should also give serious consideration to your key management policy.

How secure is encryption today?

The relative strengths and weaknesses of the various algorithms available vary, of course. However AES, one of the most widely used algorithms today, offers three variants, AES-128, AES-192 and AES-256, each encrypting and decrypting data in blocks of 128 bits using cryptographic keys of 128-bit, 192-bit and 256-bit lengths respectively. It is currently considered uncrackable in practice, with any real-world weaknesses deriving from errors made during its implementation in software.

What options are there for encrypting business data in the public cloud?

Disk level encryption is a means of securing virtual machines as well as their data. It allows the user to control their keys while protecting against disk theft, improper disk disposal and unauthorized access to the virtual container.

Software level encryption involves encrypting the data before it is uploaded into cloud based storage. In this use case only the owner of the data has access to the keys. Tools for managing data in this manner include TrueCrypt and 7-Zip among others, both of which support strong encryption standards such as AES-256. The type of cloud storage used is often cold or long term archival, such as Azure Cool Blob Storage.
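As a minimal sketch of this encrypt-before-upload pattern (the filenames and passphrase below are purely illustrative), a file can be encrypted locally with OpenSSL so that only ciphertext ever reaches the cloud:

```shell
# Encrypt a local file with AES-256-CBC before upload; -pbkdf2
# strengthens passphrase-to-key derivation (requires OpenSSL 1.1.1+)
echo "confidential business data" > report.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in report.txt -out report.txt.enc -pass pass:CorrectHorseBatteryStaple

# Only report.txt.enc would be uploaded; recovering the plaintext
# requires the same passphrase
openssl enc -d -aes-256-cbc -pbkdf2 -in report.txt.enc -out report.decrypted.txt -pass pass:CorrectHorseBatteryStaple
```

Because the passphrase never leaves the owner, the provider only ever stores ciphertext it cannot read.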

Filesystem level encryption consists of data being automatically encrypted in the cloud. The data is permanently encrypted until downloaded. Encryption, decryption and key management are transparent to end users. The data can be accessed while encrypted. Ownership of the required keys varies in this model but is often shared between the owner and provider. AWS S3 is an example of an object store with support for this kind of encryption.


Thales nShield Connect Hardware Security Module.

Hardware level encryption involves the use of a third party Hardware Security Module (HSM), which provides dedicated and exclusive generation, handling and storage of encryption keys. In this scenario only the owner has access to the required keys, and here again the data can be accessed transparently while encrypted.

So, is it safe?

Well, going back to Sir Laurence's dilemma: there are no absolutes in computer security and probably never will be. But it seems clear that encrypting any data you want to control, and managing the keys to it carefully, is sure to make it much safer, if not completely safe.

Create a Cloud Strategy For Your Business

Let's be clear: today's cloud, as a vehicle for robust and flexible enterprise grade IT, is here and it's here to stay. Figures published by IDG Research's 2015 Enterprise Cloud Computing Survey predict that in 2016, 25% of total enterprise IT budgets will be allocated to cloud computing.


Steady increase in cloud utilisation. Source: IDG Enterprise Cloud Computing Survey.

They also reported that the average cloud spend for the enterprises surveyed would reach $2.87M in the following year, and that 72% of enterprises already have at least one application running in the cloud, compared to 57% in 2012. IDC, in the meantime, predicts that public cloud adoption will increase from 22% in 2016 to 32.1% within the next 18 months, which equates to no less than 45.8% growth.


Wide range in cloud investments. Source: 2015 IDG Enterprise Cloud Computing Survey.

Now any organization looking to leverage the cloud needs a governing strategy. Research has shown that businesses with a clear vision tend to be more efficient, more successful and better at keeping costs down than those without one. So let's consider why a well rounded cloud strategy should be a priority for any business that doesn't yet have one.

What are the key benefits of a cloud strategy?

A well thought out strategy will help an organization integrate cloud based IT into its business model in a more structured way, one that gives proper consideration to all of its requirements.

1. Stay in control of your business in the era of on-demand cloud services

With the proliferation of cloud services available today, shadow IT, that is to say the unsanctioned use of cloud services inside an organization, is a growing problem which, left unchecked, runs the risk of becoming a serious security and cost liability. Control it before it controls you.

2. Better prepared infrastructure

A structured approach lets a business fully consider the potential ramifications and benefits in advance. By properly mapping all of the requirements of its infrastructure, network, storage and compute resources, a business will manage its migration with greater efficiency.

3. Increased benefit

Whether it's the change in consumption model from owned to pay-as-you-go, the resulting shift from CAPEX to OPEX, or the greater flexibility, efficiency and choice offered by the cloud over so called traditional IT, it seems obvious that a well thought out strategy will magnify the value of these benefits.

4. Increased opportunity

Having a solid strategy is also likely to create bigger opportunities, as an organization maps out its business and carefully thinks through the possibilities before making any changes.

Strategy formulation

Creating a migration strategy requires mapping out the suitability of all existing applications, weighing up their value against the costs and savings cloud services may offer and choosing which to prioritize.

1. Evaluate your applications

The first step is a business wide evaluation of all existing applications, categorized by two factors: business value and flexibility, with business value equating to the place and importance an asset holds in the organization, and flexibility meaning its suitability for migration. The evaluation should seek to understand how applications are deployed, how critical they are and whether moving them to the cloud will be cost effective.

2. Choose the right cloud model

The second is to determine the right cloud model for your requirements.

Private clouds, whether owned or leased, consist of closed IT infrastructures accessible only to one business, which then makes resources available to its own internal customers. Private clouds are often home to core applications where control is essential to the business. They can also offer economies of scale where companies can afford larger, long term investments and have the ability either to run these environments themselves or to pay for a managed service. Private cloud investments tend to operate on a CAPEX model.

Public clouds are shared platforms for services made available by third parties to their customers on a pay-as-you-go basis. Public cloud environments are well suited to all but the most critical, or most expensive to run, applications. They offer the significant benefit of not requiring large upfront capital investments because they operate on an OPEX model.

Hybrid clouds are made up of a mix of both types of resources working together across secured, private network connections. They can offer the benefits of both models but run the risk of additional complexity and can lessen the benefits of working at scale.

Why is a clear strategy vital for your business today?

The benefits of the cloud are real. Maximizing their value is therefore key to any business looking to leverage them. And the best way to do that is to have a well thought out strategy.

Performance Tuning Ubuntu Server For Use in the Azure Cloud

The following describes how to performance tune Ubuntu Server virtual machines for use in Azure. This article focuses on Ubuntu Server because it's better established in Azure at this time, though it's worth mentioning that Debian offers better performance and stability overall, albeit at the cost of some of the more recent functionality supported in Ubuntu. Regardless, many of the optimizations discussed below apply equally to both, although commands and settings may vary occasionally.

Best practice recommendations from Microsoft.

  1. Don’t use the OS disk for other workloads.
  2. Use a 1TB disk minimum for all data workloads.
  3. Use storage accounts in the same datacenter as your virtual machines.
  4. In need of additional IOPs? Add more, not bigger disks.
  5. Limit the number of disks in a storage account to no more than 40.
  6. Use Premium storage for blobs backed by SSDs where necessary.
  7. Disable ‘barriers’ for all premium disks using ‘Readonly’ or ‘None’ caching.
  8. Storage accounts have a limit of 20K IOPs and 500TB capacity.
  9. Enable ‘Read’ caching for small read datasets only; disable it otherwise.
  10. Don’t store your Linux swapfile on the temporary drive provided by default.
  11. Use EXT4 filesystem.
  12. In Azure IOPs are throttled according to VM size so choose accordingly.

Linux specific optimizations you might also consider.

1. Decrease memory ‘swappiness’ and increase inode caching:

echo vm.swappiness=10 | sudo tee -a /etc/sysctl.conf
echo vm.vfs_cache_pressure=50 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

For more information: http://askubuntu.com/questions/184217/why-most-people-recommend-to-reduce-swappiness-to-10-20

2. Disable CPU scaling / run at maximum frequency all the time:

sudo chmod -x /etc/init.d/ondemand

For more information: http://askubuntu.com/questions/523640/how-i-can-disable-cpu-frequency-scaling-and-set-the-system-to-performance

3. Mount all disks with ‘noatime’ and ‘nobarrier’ (see above) options:

sudo vim /etc/fstab

Add ‘noatime,nobarrier’ to the mount options of all disks.
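A data-disk entry in /etc/fstab might then look like this (the UUID and mount point are illustrative placeholders):

```
UUID=<DISK UUID>   /mnt/data   ext4   defaults,noatime,nobarrier   0 0
```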

For more information: https://wiki.archlinux.org/index.php/fstab

4. Upgrade to a more recent Ubuntu kernel image and remove the old:

sudo aptitude update
sudo aptitude search linux-image
sudo aptitude install -y linux-image-4.4.0-28-generic
sudo aptitude remove -y linux-image-3.19.0-65-generic

In the example above the latest kernel version available is ‘linux-image-4.4.0-28-generic’ and the version currently installed was ‘linux-image-3.19.0-65-generic’, but these will change over time of course.
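Before removing an old image, it's worth confirming which kernel release the system is actually booted into, so the running kernel isn't removed by mistake:

```shell
# Print the currently running kernel release, e.g. 4.4.0-28-generic
uname -r
```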

5. Change IO scheduler to something more suited to SSDs (i.e. deadline):

Edit the grub defaults file.

sudo vim /etc/default/grub

Change the following line from

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

to

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"

Then run

sudo update-grub2

For more information: http://stackoverflow.com/questions/1009577/selecting-a-linux-i-o-scheduler

6. Mount a suitably sized data disk:

First start by creating a new 1TB disk using the Azure CLI.


Partition the new disk and format it in ext4 using the following script.

hdd=/dev/sdc
echo -e "n\np\n1\n\n\nw" | sudo fdisk $hdd
sudo mkfs.ext4 ${hdd}1

Mount the disk.

sudo mkdir /mnt/data/
sudo mount -t ext4 /dev/sdc1 /mnt/data/

Obtain UUID of newly mounted disk.

sudo blkid /dev/sdc1

Add the following to /etc/fstab.

UUID=<NEW DISK UUID>       /mnt/data/        ext4   noatime,defaults,discard        0 0

7. Add a swap file:

sudo dd if=/dev/zero of=/mnt/data/swapfile bs=1G count=32
sudo chmod 600 /mnt/data/swapfile
sudo mkswap /mnt/data/swapfile
sudo swapon /mnt/data/swapfile
echo "/mnt/data/swapfile   none    swap    sw    0   0" | sudo tee -a /etc/fstab
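Once enabled, the new swap space can be confirmed with a couple of read-only commands:

```shell
# List active swap areas and show overall memory/swap usage
swapon -s
free -h
```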

8. Enable Linux Kernel TRIM support for SSD drives:

sudo sed -i 's/exec fstrim-all/exec fstrim-all --no-model-check/g' /etc/cron.weekly/fstrim

For more information: https://www.leaseweb.com/labs/2013/12/ubuntu-14-04-lts-supports-trim-ssd-drives/