Don’t Make This Cloud Storage Mistake

In recent months a number of high-profile data leaks have occurred, making millions of customers’ personal details easily available to anyone on the internet. Three recent cases – the GOP, Verizon and WWE – involved incorrectly configured Amazon S3 buckets (Amazon itself was not at fault in any way).

Even though you might assume the URLs of public Cloud storage such as Amazon S3 buckets or Azure Storage accounts would be hard to stumble upon, they are surprisingly easy to find using the search engine Shodan, which scours the internet for connected devices and services. This gives hackers – or anyone else – access to an enormous number of internet-connected systems, from Cloud storage to webcams.

A better understanding of the data you wish to store in the Cloud will help you make a more informed decision about how to store it.

Data Classification

Before you even look at storing your company or customer data in the Cloud, you should classify that data in some way. Most companies classify their data according to sensitivity, and the process gives you a better understanding of how your data should be stored.

One possible method is to divide data into several categories, based upon the impact to the business if the data were released without authorisation:

  • Public – intended for release; poses no risk to the business.
  • Low business impact (LBI) – data or information that does not contain Personally Identifiable Information (PII) or cover sensitive topics, but would generally not be intended for public release.
  • Medium business impact (MBI) – information about the company that might not be sensitive on its own, but could provide competitive insights when combined or analysed; also PII that is not of a sensitive nature but should not be released, for privacy protection.
  • High business impact (HBI) – anything covered by regulatory constraints, anything involving reputational matters for the company or individuals, anything that could be used to provide competitive advantage, anything with financial value that could be stolen, or anything that could violate sensitive privacy concerns.

Next, you should set policy requirements for each category of risk. For example, LBI might require no encryption. MBI might require encryption in transit. HBI, in addition to encryption in transit, would require encryption at rest.
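As an illustration only, that policy mapping could be sketched as a simple lookup. The category names follow the classification scheme described above; the script itself is hypothetical and not part of any real tooling:

```shell
#!/bin/sh
# Illustrative sketch only: map a data classification to the minimum
# storage controls suggested by the policy described above.
classification="HBI"   # example input: Public, LBI, MBI or HBI

case "$classification" in
  Public|LBI) controls="no encryption required" ;;
  MBI)        controls="encryption in transit" ;;
  HBI)        controls="encryption in transit and at rest" ;;
  *)          controls="unknown - treat as HBI until classified" ;;
esac

echo "$classification requires: $controls"
```

In practice this mapping would live in a governance policy document, or be enforced by your Cloud provider’s policy tooling, rather than in a script.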

The Mistake – Public vs Private Cloud Storage

When classifying the data to be stored in the Cloud the first and most important question is “Should this data be available to the public, or just to individuals within the company?”

Once you have answered this question you can configure your Cloud storage accordingly, whether you use Amazon S3, Azure Storage accounts or another provider. One of the most important options available when configuring Cloud storage is whether access is set to “Private” or “Public”. This is where the mistake was made in the cases mentioned earlier: in each of them the Amazon S3 buckets were set to “Public”, yet the data stored within them was of a private nature.

The problem here is a misunderstanding of the term “Public” when configuring Cloud storage. Some may think that “Public” means the data is available to all individuals within your company; however, this is not the case. “Public” means that your data is available to anyone who can reach your Cloud storage URL, whether they are within your company or a member of the general public.

This setting is of vital importance. Once you are sure it is correct, you can then turn to other features that may be required, such as encryption in transit and encryption at rest.
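As a hedged example, the access setting can be audited and locked down from the command line as well as the portal. The configuration commands below are a sketch only, assuming the AWS and Azure CLIs are installed and authenticated; the bucket, account and container names are placeholders, not taken from the cases above:

```shell
# Amazon S3: list the bucket's ACL grants, then set the bucket private.
aws s3api get-bucket-acl --bucket my-example-bucket
aws s3api put-bucket-acl --bucket my-example-bucket --acl private

# Azure Storage: disable anonymous (public) access on a blob container.
az storage container set-permission \
    --account-name myexampleaccount \
    --name mycontainer \
    --public-access off
```

Auditing like this regularly, rather than only at creation time, catches buckets that drift back to “Public” later.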

This is a simple error with a big impact: it can cost your company or customers a lot of money and, even more importantly, their reputation.

Azure Build Pipeline using ARM Templates and Visual Studio Team Services


When you need to deploy resources in Azure you can easily log in to the Azure Portal and start deploying them by hand, but with the number of components needed to build a working solution this quickly becomes time consuming. You may also need to deploy the same solution to Development, Test and Production environments, and then make changes to each environment along the way.

There is a lot of talk about DevOps and Infrastructure as Code (IaC) in the IT industry at the moment, and a significant part of DevOps is automation. Let’s see how we can automate our deployments using Infrastructure as Code.

There is a range of different tools available for these tasks. For this example we will use the following.

ARM Template

Our starting point is to create an ARM Template (JSON format) for our environment. The resources being deployed for this example are:

  • VNET and subnet
  • Internal Load Balancer
  • Availability Set
  • Storage Account (for diagnostics)
  • 2 x Virtual Machines (using Managed Disks) assigned to our LB.
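As a rough sketch, the skeleton of such a template might look like the fragment below. This is illustrative only – the real definitions live in VSTSDemoDeploy.json, and each real resource also needs properties such as name, apiVersion and location:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmSize": { "type": "string", "defaultValue": "Standard_A1" }
  },
  "variables": {
    "numberOfInstances": 2
  },
  "resources": [
    { "type": "Microsoft.Network/virtualNetworks", "comments": "VNET and subnet" },
    { "type": "Microsoft.Network/loadBalancers", "comments": "Internal Load Balancer" },
    { "type": "Microsoft.Compute/availabilitySets", "comments": "Availability Set" },
    { "type": "Microsoft.Storage/storageAccounts", "comments": "Diagnostics storage" },
    { "type": "Microsoft.Compute/virtualMachines", "comments": "2 x VMs with Managed Disks, assigned to the LB" }
  ]
}
```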

Information for Managed Disks can be found here –

The ARM Template and parameters file are available here

The two files used are:

  • ARM Template – VSTSDemoDeploy.json
  • Parameters file – VSTSDemoParameters.json

More information for authoring ARM Templates can be found here –

Create our Local Git Repo

Launch a command prompt and change to the root of the C drive, which is where we want to clone our VSTSDemo folder.

Run the following command

git clone


You will now see a VSTSDemo folder in the root of the C drive. Open the folder and delete the .git folder (it may be hidden); this removes the link to the repository it was cloned from, so the folder can be initialised as a fresh Git project.

Our next step is to initiate our local folder as a Git project.

Enter the following Git command from the VSTSDemo folder

git init

Building the Pipeline with VSTS

If you do not already have an account for VSTS then you can sign up for a free account here –

Now we need to create a project in VSTS. If you are not already signed in, sign in.

Click New Team Project.

Give your project a name, fill in the description, set the version control to Git and the Work item process to Agile, click Create.

Once your Project has finished creating, expand “or push an existing repository from command line”.

This gives us the commands that we need to run. Before running them we need to check the status of our local repository, so from the VSTSDemo directory run

git status


We can see that our branch has untracked files, so we need to add them to our repo. To do this, run

git add .

Now we need to commit our changes. To do this, run

git commit -m "Initial check-in."

We can now run the commands supplied by VSTS at our command prompt. First run

git remote add origin

Where xxxxx is your VSTS account name and yyyyy is your VSTS Project name

Now run

git push -u origin --all

Sign in to VSTS when/if prompted

If successful, you will see something like the below when the push completes.

Refresh your VSTS page and you will now see that Code has been committed.

Now we need to create the build definition. Click on Build & Release, then click New definition and choose Empty process.

Check that the sources are correct.

When deploying we will also need to deploy the Resource Group that will contain the resources. To do this, click Add Task, select Azure Resource Group Deployment and click Add.

Click the tick box next to the Azure Resource Group Deployment and fill in the required settings.

  • Azure Subscription – you will need to click the Authorize button
  • Resource Group
  • Location
  • Template – VSTSDemoDeploy.json
  • Template Parameters – VSTSDemoParameters.json
  • Deployment Mode – Incremental
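For context, this task is essentially performing an Azure Resource Manager deployment for us. The commands below are a hedged sketch of the manual equivalent, assuming the Azure CLI is installed and logged in; the resource group name and location are placeholders:

```shell
# Create the resource group, then deploy the template into it,
# using the same Incremental mode chosen in the build task.
az group create --name VSTSDemo-RG --location australiaeast

az group deployment create \
    --resource-group VSTSDemo-RG \
    --template-file VSTSDemoDeploy.json \
    --parameters @VSTSDemoParameters.json \
    --mode Incremental
```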

An important note on the Deployment Mode: read the task’s description and choose carefully! Incremental mode leaves existing resources that are not in the template unchanged, whereas Complete mode deletes any resources in the resource group that are not defined in the template.

Now click on the Triggers tab and enable Continuous Integration

Click Save.

We now have a build pipeline, so let’s use it to deploy our environment. From the Build & Release page, click on the build definition.

Click Queue new build

Click OK on the Queue build page.

You will see the below when the build begins.

Wait for the build to finish.

Let’s log in to the Azure Subscription and take a look at our new resources.

Looks like everything is there.

Make a change – Scale Up

Now let’s make a change by increasing the size of the VMs.

From within VSTS click on the Code tab and edit our VSTSDemoParameters file. Let’s change the virtual machine size to something bigger – Standard_A2 for instance. Click Commit when done.
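The edit might look something like the fragment below. This is a sketch only – the parameter name is an assumption, so check the actual names used in VSTSDemoParameters.json:

```json
{
  "vmSize": {
    "value": "Standard_A2"
  }
}
```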

Add a meaningful comment and click Commit.

We can see that a new build has started. That is our Continuous Integration and deployment working: it will build any changes we make automatically. Your VMs will restart once the build starts, as they are being resized.

Wait for the build to finish.

Let’s check our VMs from the Azure Portal to see the new size.

Our instance sizes are now Standard_A2.

Make another change – Scale Out

Instead of using larger VM sizes, this time we will increase the number of VMs from 2 to 4.

From within VSTS click on the Code tab and edit our VSTSDemoDeploy file. Let’s change the numberOfInstances variable from 2 to 4. Click Commit when finished, which will kick off a new build.
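For context, a variable like numberOfInstances typically drives an ARM copy loop over the virtual machine resource. The fragment below is a hedged sketch of that pattern, not the actual contents of VSTSDemoDeploy.json:

```json
{
  "variables": { "numberOfInstances": 4 },
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[concat('vstsdemo-vm', copyIndex())]",
      "copy": {
        "name": "vmLoop",
        "count": "[variables('numberOfInstances')]"
      }
    }
  ]
}
```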

Once the build finishes, check your Azure Subscription and you should now have 4 VMs instead of 2.

If we check our Availability Set, all VMs are members.

Lastly, we can check our Load Balancer backend pools; all VMs are members there too.


VSTS and ARM Templates can make deployments of your environments a lot quicker and easier, and they also make managing additional deployments over the life cycle of your application a simpler task. This method can be used to deploy any resources that are deployable using ARM Templates, whether IaaS or PaaS.