Performance Tuning Ubuntu Server for Use in the Azure Cloud

The following describes how to performance-tune Ubuntu Server virtual machines for use in Azure. This article focuses on Ubuntu Server because it’s better established in Azure at this time, although it’s worth mentioning that Debian offers better performance and stability overall, albeit at the cost of some of the more recent functionality support available in Ubuntu. Regardless, many of the optimisations discussed below apply equally to both, although commands and settings may vary occasionally.

Best practice recommendations from Microsoft:

  1. Don’t use the OS disk for other workloads.
  2. Use a 1TB disk minimum for all data workloads.
  3. Use storage accounts in the same datacenter as your virtual machines.
  4. In need of additional IOPS? Add more disks, not bigger ones.
  5. Limit the number of disks in a storage account to no more than 40.
  6. Use Premium storage for blobs backed by SSDs where necessary.
  7. Disable ‘barriers’ for all premium disks using ‘ReadOnly’ or ‘None’ caching.
  8. Storage accounts have a limit of 20,000 IOPS and 500TB capacity.
  9. Enable ‘Read’ caching only for small read datasets; otherwise disable it.
  10. Don’t store your Linux swapfile on the temporary drive provided by default.
  11. Use the ext4 filesystem.
  12. In Azure, IOPS are throttled according to VM size, so choose accordingly.

Linux-specific optimisations you might also consider:

1. Decrease memory ‘swappiness’ and increase inode caching:

echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.conf

For more information: http://askubuntu.com/questions/184217/why-most-people-recommend-to-reduce-swappiness-to-10-20
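Note that the common `sudo echo setting >> /etc/sysctl.conf` pattern fails, because the `>>` redirection is performed by your unprivileged shell rather than by sudo; piping through `sudo tee -a` is the usual fix. A small sketch of the pattern, run against a scratch file so it can be tried safely anywhere:

```shell
# Demonstrate the tee-append pattern against a scratch file; on a real
# server you would pipe into `sudo tee -a /etc/sysctl.conf` instead.
conf=$(mktemp)
echo 'vm.swappiness=10' | tee -a "$conf"
echo 'vm.vfs_cache_pressure=50' | tee -a "$conf"
cat "$conf"
rm -f "$conf"
```

After appending the settings for real, `sudo sysctl -p` applies them without a reboot.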

2. Disable CPU scaling / run at maximum frequency all the time:

sudo chmod -x /etc/init.d/ondemand

For more information: http://askubuntu.com/questions/523640/how-i-can-disable-cpu-frequency-scaling-and-set-the-system-to-performance

3. Mount all disks with ‘noatime’ and ‘nobarrier’ (see above) options:

sudo vim /etc/fstab

Add ‘noatime,nobarrier’ to the mount options of all disks.

For more information: https://wiki.archlinux.org/index.php/fstab
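As a sketch, a data-disk entry in /etc/fstab with these options applied might look like the line printed below; the device, mount point and fsck pass number are placeholders for illustration:

```shell
# Print an example fstab line with noatime and nobarrier in the options
# field; /dev/sdc1 and /mnt/data are illustrative values only.
cat <<'EOF'
/dev/sdc1   /mnt/data   ext4   defaults,noatime,nobarrier   0   2
EOF
```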

4. Upgrade to a more recent Ubuntu kernel image and remove the old:

sudo aptitude update
sudo aptitude search linux-image
sudo aptitude install -y linux-image-4.4.0-28-generic
sudo aptitude remove -y linux-image-3.19.0-65-generic

In the example above, the latest available kernel is ‘linux-image-4.4.0-28-generic’ and the version currently installed was ‘linux-image-3.19.0-65-generic’, but these will change over time of course.
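Before removing an old image, it’s worth confirming which kernel is actually running and which image packages are installed; a quick sketch (output will differ from machine to machine):

```shell
# Show the running kernel version, then list installed kernel image
# packages; the fallback message covers non-Debian-based systems.
uname -r
dpkg -l 'linux-image*' 2>/dev/null | grep '^ii' || echo "no linux-image packages found"
```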

5. Change IO scheduler to something more suited to SSDs (i.e. deadline):

Edit the grub defaults file.

sudo vim /etc/default/grub

Change the following line from

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

to

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"

Then run

sudo update-grub2

For more information: http://stackoverflow.com/questions/1009577/selecting-a-linux-i-o-scheduler
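After updating grub and rebooting, you can confirm the change took effect; the scheduler shown in square brackets is the active one (a sketch assuming the usual Linux /sys layout):

```shell
# Print the active I/O scheduler for every block device; the entry in
# square brackets (e.g. [deadline]) is the scheduler currently in use.
for f in /sys/block/*/queue/scheduler; do
  if [ -r "$f" ]; then
    echo "$f: $(cat "$f")"
  fi
done
```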

6. Mount a suitably sized data disk:

Start by creating a new 1TB disk using the Azure CLI.

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-classic-attach-disk/

Partition the new disk and format it in ext4 using the following script.

#!/bin/sh
# Partition the disk with a single primary partition, then format the new
# partition (not the whole disk) as ext4. The two blank lines in the fdisk
# input accept the default first and last sectors.
hdd="/dev/sdc"
for i in $hdd; do
echo "n
p
1


w
" | fdisk $i
mkfs.ext4 ${i}1
done

Mount the disk.

sudo mkdir /mnt/data/
sudo mount -t ext4 /dev/sdc1 /mnt/data/

Obtain the UUID of the newly created filesystem (note this lives on the partition, not the whole disk).

sudo blkid /dev/sdc1

Add the following to /etc/fstab.

UUID=<NEW DISK UUID>       /mnt/data        ext4   noatime,defaults,discard        0 0
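To avoid transcribing the UUID by hand, the fstab line can be composed from blkid’s output; a minimal sketch, where `make_fstab_entry` is a hypothetical helper and the UUID shown is a made-up placeholder (on a real system, feed it `blkid -s UUID -o value /dev/sdc1` and append the result via `sudo tee -a /etc/fstab`):

```shell
# Compose an fstab entry for the data disk from a filesystem UUID.
make_fstab_entry() {
  printf 'UUID=%s\t/mnt/data\text4\tnoatime,defaults,discard\t0 0\n' "$1"
}
# Placeholder UUID for illustration; use blkid's real output in practice.
make_fstab_entry "12345678-abcd-ef01-2345-6789abcdef01"
```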

7. Add a swap file:

sudo dd if=/dev/zero of=/mnt/data/swapfile bs=1G count=32
sudo chmod 600 /mnt/data/swapfile
sudo mkswap /mnt/data/swapfile
sudo swapon /mnt/data/swapfile
echo "/mnt/data/swapfile   none    swap    sw    0   0" | sudo tee -a /etc/fstab
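Once `swapon` has run, the new swap file can be verified straight from procfs:

```shell
# /proc/swaps lists every active swap device or file, while /proc/meminfo
# reports the total and free swap in kB.
cat /proc/swaps
grep -E 'SwapTotal|SwapFree' /proc/meminfo
```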

8. Enable Linux Kernel TRIM support for SSD drives:

sudo sed -i 's/exec fstrim-all/exec fstrim-all --no-model-check/g' /etc/cron.weekly/fstrim

For more information: https://www.leaseweb.com/labs/2013/12/ubuntu-14-04-lts-supports-trim-ssd-drives/

 

Effective Testing: Demystifying improvement and efficiency

If you have recently finished a system implementation or a large-scale transformation, you will be familiar with the phrase ‘test process efficiency’ and may have wondered what it refers to. Unlike in the old days, simply delivering a test function no longer satisfies the need for quality management. Increasingly, organisations are looking for improved and optimised processes rather than being bound by the red lines of a traditional testing regime.

There are a few key pillars that drive test efficiency in any organisation; these are often called the ‘levers of the testing function’ in a software implementation lifecycle. They have an immense impact on the final outcome and drive the quality of the delivered product or services. There are probably as many levers as there are different solutions. However, at the core, a few fundamental principles remain at the top and drive the rest. I prefer to see it in the following way:

Is it the right time to start?

Early entry: If you plan to make a large, significant contribution to the end result, get involved early. Quality should not be ‘reactive’ in nature; it has to be built in from the very beginning of your software development.

How enough is ‘enough’?

Risk assessment and prioritisation: In an ideal world, even a minor project offers infinite combinations to test and an enormous amount of data and logic to verify, and few organisations have the time or money to cover it all. Importantly, you will have to strike a balance between your goals and your risk appetite. A proper risk assessment will allow you to focus on just the right set of test conditions rather than on areas where the returns are not justified.

When do we know it’s ‘done’?

Acceptance criteria: This is often the most ignored part of any testing function but one of the most important. When you are playing with an infinite set, trying to prioritise and select the optimum figure, it can prove costly if you don’t know where to stop. A set of predefined criteria aligned with the risk appetite and quality goals will prove very useful.

Control minus the bureaucracy

Governance: Most mature testing functions do have some governance mechanism built in, but it is often incomplete. It is quite important to understand that there are a few dimensions to the whole governance mechanism that make it more foolproof and sound.

a. Team and reporting structure

b. Controls and escalation path

c. Check points including entry/exit gates

Independence vs Collaboration

Cross-team collaboration: The success of any testing function relies heavily on the team environment. Unfortunately, testing has often been viewed as an ‘independent function’ and suffers heavily from a lack of information and coordination, resulting in a whole lot of duplication, rework and inefficiency. Some of the tangible outcomes of a close and collaborative team effort are visible in:

a. Increased flow of information

b. A solid and sound release, including build management – this is where things get tricky as you start to discover multiple touch points

c. Defect resolution including allocation and triage

This approach does not come without a word of caution. A cooperative and collaborative environment is favourable as long as it does not impact or destabilise the objectivity and integrity of the testing.

Once you have the right levers to drive a testing function, you can increase the efficiency of one or multiple processes across the board. The next big question you may have is where exactly they can be applied. How can I ensure these efficiency factors become ‘tangible’ and, importantly, how will I measure them? This in itself is a big enough discussion to warrant several posts and is often a matter of great debate. I am going to discuss it in detail in a later post. In a nutshell, this efficiency can be demonstrated and measured across all the ‘test processes’ involved in the various stages of a test cycle.

So what ‘process’ are we talking about?

To understand this better, let’s explore the steps in a typical testing regime. Any large-scale testing programme will consist of the following test stages:

A. Initiation and conceptualisation
B. Test strategy and planning
C. Test design
D. Test execution and implementation
E. Test analysis and reporting
F. Test closure and wrap up

Each of these stages marks a special significance and involves a number of test processes; e.g. the ‘test design’ stage will involve processes like ‘requirement analysis and prioritisation’, ‘building test specification’, ‘creating test scenarios’ and ‘building test cases and data requirements’. Each of these can be analysed, managed objectively, measured and controlled, all of which helps to improve efficiency, which in turn leads to overall productivity.

A classic example is a team following an agile delivery approach, where all of these test processes are measured across each sprint. As you move from one sprint to another, you continue to observe all of the processes and collect relevant metrics during the lifecycle of the project. A quick analysis of the data will tell you where you need to focus to improve your delivery.

To conclude, it is important to understand and reflect on your current process; this is the next big step in your testing regime once you have set up a testing function. Improving a test process not only lifts your team’s performance and motivation, but also continues to reduce costs for your client as the overall process improves.

So, time to go back to the table and ask yourself the fundamental question – is my testing efficient?

Best Practices for Managing Azure Subscriptions in Windows Azure PowerShell Cmdlets

The Windows Azure PowerShell cmdlets make it nice and easy to get started managing your Windows Azure services. Using the Get-AzurePublishSettingsFile cmdlet you can log in to your Windows Azure subscription and fetch the details you need to manage your cloud services. PowerShell even saves these details locally so you can reuse them in the future. This is great for personal accounts and small teams getting to know the Windows Azure PowerShell cmdlets. However, in larger organisations this can quickly lead to management issues and security risks.

Before we get into the recommended practices, let’s take a look at what happens when we use the Get-AzurePublishSettingsFile cmdlet.

Get-AzurePublishSettingsFile

To run this command you must first download and install the Windows Azure PowerShell cmdlets.

The Get-AzurePublishSettingsFile cmdlet opens your browser and prompts you to log in using your Microsoft Account. If you administer multiple subscriptions, you are prompted to select which Windows Azure subscription (or associated directory, to be more correct) to use.



You will then be prompted to download an XML file with a “.publishsettings” extension that contains subscription details (name and subscription id) as well as a *newly* generated management certificate (saved as a base64-encoded string). As part of this process, the management certificate’s public key is also added to the management certificates collection of the selected subscription.


After saving the file to your local machine you need to run the Import-AzurePublishSettingsFile cmdlet to update your Windows Azure PowerShell cmdlet subscription profile.

Import-AzurePublishSettingsFile -PublishSettingsFile "C:\Temp\MyAccountName-date-credentials.publishsettings"

This cmdlet updates the Windows Azure PowerShell cmdlets profile data stored in your Roaming profile folder, %appdata%/Windows Azure Powershell. It also imports the management certificate into your Personal Certificate store so it is accessible by PowerShell.

Once we have imported the publish settings file successfully we can use PowerShell to view, delete and update the profile data saved locally.

A few important things to note here:

  • The file downloaded contains an encoded management certificate that serves as the credentials to administer your Windows Azure subscriptions and services. This needs to be treated accordingly and stored securely. Managing public key infrastructure (PKI) in this way probably doesn’t comply with your organisation’s security policy, or at the very least becomes an unmanageable way to control administrator access to your Windows Azure services.
  • Not only do we have many management certificates drifting around on local machines, we also get a build up of public key certificates in the Windows Azure Portal. At the time of writing we are allowed 100 certificates per subscription. Identifying certificate ownership is only possible via thumbprint matching with the locally saved certificate details. This makes managing access control very challenging in large teams.
  • Not being able to identify which certificate belongs to which administrator then leads to a security auditing and logging issues. Windows Azure Management Services provides operational logs for auditing and troubleshooting purposes. When a PowerShell script performs an operation against a service, an event is logged and the certificate thumbprint used to authenticate against the API is recorded. Once again, if we have no visibility of who owns which certificate, we severely reduce our ability to meet our organisation’s security policy requirements.
  • Previously this process automatically included *all* subscriptions for which you were either a service administrator or co-administrator. This was very problematic for those of us who manage subscriptions on behalf of multiple clients, as it forced you to save details for all of your clients’ subscriptions to the local machine. Not a very good conversation to have with a client when working onsite! Forcing the settings file to be scoped to just one subscription (or Windows Azure directory) is an improvement, but still not the recommended practice.

So, knowing what we do now, what is the recommended practice? How do we better manage these private key certificates? How should we set up our teams effectively and, perhaps more importantly, more securely? Let’s take a look.

Recommended Practice

  • Manage Windows Azure management certificates using your organisation’s PKI infrastructure (e.g. Windows Server Certificate Authority). Here is some guidance, however talk with your CA administrator to check your organisation’s policies:
    • Certificates must be X.509 version 3
    • Ensure the Subject properties associate the cert with the admin user (don’t leave this as a generic description – we want to be able to identify the subject = user)
    • Key length must be 2048 bits
    • Use strong private key passwords
    • Don’t enable private keys to be exported

     

  • Upload service management certificate public key to Portal
    • You may need to export the public key (*.cer) first from your Personal Certificate store
    • Upload the *.cer file to the Windows Azure Portal
  • In your PowerShell scripts, use the Set-AzureSubscription and Select-AzureSubscription cmdlets to set and manage your subscription context instead of relying on the publish settings file
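As an aside, the certificate guidance above (X.509 v3, a 2048-bit key, a Subject that identifies the user) can be illustrated with openssl standing in for an organisational CA; this is a self-signed sketch for illustration only, and the subject name is a placeholder:

```shell
# Generate a self-signed X.509 v3 certificate with a 2048-bit RSA key.
# -nodes leaves the private key unencrypted for this sketch only; real
# keys should use a strong password and be marked non-exportable.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.cer \
  -days 365 -nodes -subj "/CN=first.last@example.com"
# Confirm the Subject identifies the user
openssl x509 -in cert.cer -noout -subject
```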

Using the cmdlets above we can more effectively manage service management access control in the PowerShell scripts we write. It requires a few more lines, but I mitigate this by including the following function in a script template so that it forms the basis of all my Windows Azure management scripts. We could also pop this into a module and import it from our script. Either way, we are not having to rewrite this pattern each time.

<#
.SYNOPSIS
    Sets the specified Windows Azure Subscription as the current context
.DESCRIPTION
    First creates/updates the subscription profile
    Checks the required management certificate is installed
    Sets the subscription context for all WAZ cmdlets used in the session
.EXAMPLE
    Set-AzureSubscriptionContext -SubscriptionName "MySubscription" -SubscriptionId "00000000-0000-0000-0000-000000000000" -CertificateThumbprint "0000000000000000000000000000000000000000"
.OUTPUTS
    None
#>
function Set-AzureSubscriptionContext
{
    param
    (
        # Windows Azure Subscription Name
        [Parameter(Mandatory = $true)]
        [String]
        $SubscriptionName,

        # Windows Azure Subscription Id
        [Parameter(Mandatory = $true)]
        [String]
        $SubscriptionId,

        # Management certificate thumbprint
        [Parameter(Mandatory = $true)]
        [String]
        $CertificateThumbprint
    )

    # Get management certificate from the user's personal store
    $certificate = Get-Item Cert:\CurrentUser\My\$CertificateThumbprint
    if ($certificate -eq $null) {
        throw "Management certificate for $SubscriptionName was not found in the user's personal certificate store. Check thumbprint or install certificate"
    }

    # Set subscription profile
    Set-AzureSubscription -SubscriptionName $SubscriptionName -SubscriptionId $SubscriptionId -Certificate $certificate

    # Select subscription as the current context
    Select-AzureSubscription -SubscriptionName $SubscriptionName
}
 

By now we should have a better understanding of what goes on under the covers and some of the management pitfalls to avoid. Following the tips above we can achieve:

  • Better compliance with your enterprise security policy.
  • Central management of certificate generation and ownership.
  • Tighter controls on script execution (as management certs need to be present on the local machine). Remember this is only achievable if we mark private keys as non-exportable.
  • Better administration event logging and auditing, as event log entries will capture the unique thumbprint of the admin user.

So next time you are setting up your team’s Windows Azure Dev-Ops environment, take some time to consider your approach. Often the getting started guides and training course material take the easy, lowest barrier to entry approach rather than follow best practice.

Good Practices for Managing Microsoft Azure Subscriptions

We’ve published some updated guidance for Service Admin account management based on the new RBAC access control techniques now available in Azure. While the classic non-RBAC portal is still required, though, the content in this post remains very relevant!

Overview

Over the years it has been drilled into me to use “Least Privilege” access whenever and however possible. Least Privilege is all about limiting users, systems and services to only those privileges which are absolutely essential to get the job done. Microsoft Azure should be no different, and some would argue Least Privilege is even more important in the cloud. However, what tends to happen in the cloud space is that business units want to avoid or bypass IT departments and set up, in the case of this article, Microsoft Azure subscriptions without much thought. This can lead to the wrong users having access to production services, and once discovered this is hard to correct, as deployed services can’t easily be moved between subscriptions.

Microsoft Azure Subscription Components:

Enterprise Administrator

Standard Customer – Not Applicable

Enterprise Agreement Customers – The Enterprise Administrator has the ability to add or associate Accounts to the Enrolment, can view usage data across all Accounts, can view the monetary commitment balance associated to the Enrolment, and can provide Account Owner visibility to view charges.

There is no limit to the number of Enterprise Administrators on an Enrolment.

Account Owner

Standard Customer – The Account Owner is the Microsoft Account (formerly Live ID) or Azure Active Directory (AAD) Account that is responsible financially for the Microsoft Azure monetary commitment. Microsoft will send invoices to the email address associated with the Account Owner. The Account Owner can add Subscriptions for their Account, update the Service Administrator and Co-Administrators for an individual Subscription and view usage data for their Account.

Enterprise Agreement Customers – this definition changes slightly. The Account Owner (by default) will not have visibility of the monetary commitment balance unless they also have Enterprise Administrator rights. An Enterprise Administrator can choose to grant the Account Owner the rights to view the monetary commitment.

There is a limit of 1 Account Owner for each Account.

Subscription

A Subscription is a billing container for deployed Microsoft Azure services.

There is a limit of 1 Service Administrator for each Subscription.

There is a limit of 200 Co-Administrators for each Subscription.

Service Administrator

The Service Administrator can perform all functions within a Subscription including add/remove Co-Administrators. By default the Service Administrator will be the same as the Account Owner.

Co-Administrator

A Co-administrator can perform all functions within a Subscription except change the Service Administrator and add/remove other Co-administrators.

Use Multiple Microsoft Azure Subscriptions

Multiple Subscriptions allow a company to easily view billing for each Subscription and limit who can access the Microsoft Azure services associated with that Subscription.

An example of using multiple subscriptions might be:

Subscription 1

  • Name: “Company – Project 1 – Development”
  • Service Administrator: Development Manager
  • Co-Administrators: Developers on Project 1

Note: A developer may not need to be a Co-Administrator but instead only require a Management Certificate as explained below.

Subscription 2

  • Name: “Company – Project 1 – Test”
  • Service Administrator: Test Manager
  • Co-Administrator: Testers on Project 1

Subscription 3

  • Name: “Company – Project 1 – Pre-Production”
  • Service Administrator: IT Manager
  • Co-Administrator: Senior IT Support Team (Level 3)

Subscription 4

  • Name: “Company – Project 1 – Production”
  • Service Administrator: IT Manager
  • Co-Administrator: Senior IT Support Team (Level 3)

Use Descriptive Names for Microsoft Azure Subscriptions

When it comes to naming a Microsoft Azure subscription, it is good practice to use descriptive names.

I typically recommend the following format, but as long as you develop a format that works for your company, go for it.

My Recommendation:

<Company> – <ProjectName> – <Environment>

Explanation of My Recommendation

<Company> is the name of your company. You might be wondering, “Why would I add my company name to my company’s Subscriptions? That seems a little overkill”. The reason I recommend this is that if/when you hire a contractor or outside company to assist you with your Microsoft Azure services, you will most likely need to make them a Co-Administrator on a particular Subscription. When that contractor/company logs into the Microsoft Azure portal they see a list of all the subscriptions they are associated with. So they might see: CustomerASubscription1, CustomerASubscription2, CustomerBSubscription1, etc.

If you named your Subscription “Development” and some other company named their Subscription “Development” then the contractor/company would see “Development” and “Development” in the list. To me that adds a risk where they could accidentally perform some action on the wrong Subscription. The risk is pretty small as they would more than likely also need to know the service name such as storage account “XYZ”, but still a risk is a risk and when possible should be mitigated.

<ProjectName> is the name that your project is known by. You might have one project that is all about your company’s intranet, called “Inside”, and another project that is all about your company’s public web site. In this case you might have one set of services for “Inside”, where you use “Inside” as the project name, and another set of services for your public website, where you might use “PublicWebsite” as the project name.

Now you might be thinking, “That seems like I might end up with a lot of Subscriptions in the end, which is pretty overkill”. Yes, you are correct, you might, but then again I would rather have a lot of Subscriptions that I can control access to than, say, one big Subscription called “Production”. Plus, do I really want the administrator of ProjectA having full access to ProjectB services in Production?

<Environment> is the name I choose to use because in most of the deployment of Microsoft Azure services I have worked with, that really is what it turns out to be. A group of Microsoft Azure services that all form part of “Development”, “Test”, or “Production”.

NOTE: I don’t recommend using an environment name of “Staging” because I feel that it can become confusing when Windows Azure deployments have two slots “Staging” and “Production”.

Use Named User Accounts

A named account is an account that is associated with a single person and typically named in such a way as to identify that person. Example: First.Last@outlook.com instead of SweetGuy88@outlook.com. You might know who SweetGuy88 is today, but in six months’ time I bet you will have forgotten and have to run around trying to figure it out. Save yourself the hassle and use named accounts from day 1 where possible.

Another thing I recommend is that you set up named accounts @ your company name instead of @hotmail, @outlook, etc. When you set up a Microsoft Account, you are required to verify your identity, which means Microsoft sends an email to first.last@company.com that you must click to verify before the account is considered verified. When I use contractors or outside companies I recommend that they set up their accounts to be the same as the work email address I know them by. This helps me as an administrator identify who is inside my organisation and who isn’t.

Establish Guidelines for Microsoft Accounts (formerly Live IDs)

Microsoft Accounts are out of your company’s control for the most part. As such I recommend you establish some guidelines for Microsoft Accounts that you try and push your users to adopt. It would be nice if Microsoft had a way to setup Enterprise Microsoft Accounts that as an Enterprise I could better control. Federation with Microsoft Azure Active Directory helps in this matter but does not solve the problem 100%.

The Guidelines I personally use are:

  • Use a strong password. Example: 16 characters long with mixed case and numbers. The longer and more complicated the better. If this account is associated with your production Windows Azure subscription, then this account can access not only the Microsoft Azure services (Stop/Delete services) but also has access to all the data in your storage accounts within the associated Subscription.
  • Use a named account. Use an account name that is easy to identity. first.last instead of SweetGuy88.
  • Link to your company email. This way the account will have to be verified using a company email address they have access to. Also if the person leaves your organisation you may in some cases be able to reset the password if required.
  • Change Passwords Regularly. This is always a good practice but is becoming harder and harder to do when we have so many passwords to remember. If remembering passwords becomes an issue a tool such as LastPass or KeePass might provide assistance.
  • Add an alternate email address. I also associate another email address with my Microsoft Account so that if I forget my password, I have another way to get back in.

Use Microsoft Azure Affinity Groups

Microsoft’s Definition – Affinity groups allow you to group your Microsoft Azure services to optimize performance. All services within an affinity group will be located in the same data center. An Affinity Group is required in order to create a virtual network.

Why is this a good practice? Originally, when Microsoft Azure datacentres were built, latency between different parts of a datacentre could be high. Today this is less of a reason to use Affinity Groups, as Microsoft datacentres are now built in a way that keeps latency down. Affinity Groups are now used mainly to simplify deployment of services: instead of having to pick both a Region and Subscription, you only need to pick an Affinity Group. Affinity Groups are used in all of Microsoft’s backend management logic for Microsoft Azure. When services are moved, restarted, restored, etc., the Affinity Group is taken into account and its services are treated as a set, kept close together in the same physical datacentre. Once services are deployed you can’t easily move them into an Affinity Group.

Use Management Certificates

Microsoft’s Definition – Management Certificates permit client access to resources in your Microsoft Azure subscription. Management certificates are x.509 v3 certificates that only contain a public key, and are saved as a .cer file.

If a person requires the ability to deploy or change services running in Microsoft Azure but does not require access to the Azure Management Portal, then provide them with only a Management Certificate. This scenario is common with large development teams. A developer who needs to deploy to Microsoft Azure services through a tool such as Visual Studio may only require a Management Certificate.

NOTE: There is a limit of 100 management certificates per Microsoft Azure subscription. There is also a limit of 100 management certificates for all Subscriptions under a specific Service Administrator’s user ID. If the user ID for the Account Administrator has already been used to add 100 management certificates and there is a need for more certificates, you can add a Co-Administrator to add the additional certificates.  Before adding more than 100 certificates, see if you can reuse an existing certificate. Using Co-Administrators adds potentially unneeded complexity to your certificate management process.

Standard Customer – Good Practice Setup


Enterprise Agreement Customers – Good Practice Setup


Summary of Good Practices

  1. Use Multiple Microsoft Azure Subscriptions
  2. Use Descriptive Names for Microsoft Azure Subscriptions
  3. Use Named User Accounts
  4. Establish Guidelines for Microsoft Accounts (formerly Live IDs)
  5. Use Microsoft Azure Affinity Groups
  6. Use Management Certificates