Don’t Make This Cloud Storage Mistake

In recent months a number of high-profile data leaks have occurred, making millions of customers’ personal details easily available to anyone on the internet. Three recent cases (the GOP, Verizon and WWE) involved incorrectly configured Amazon S3 buckets (Amazon itself was not at fault in any way).

Even though you might assume the URLs of public cloud storage such as Amazon S3 buckets or Azure Storage accounts would never be stumbled upon, they are surprisingly easy to discover using the search engine Shodan, which scours the internet for exposed, internet-connected systems. This gives hackers, or anyone else, access to an enormous number of internet-connected devices, from cloud storage to webcams.

A better understanding of the data you wish to store in the Cloud will help you make a more informed decision about how to store it.

Data Classification

Before you even look at storing your company or customer data in the Cloud, you should classify your data in some way. Most companies classify their data according to sensitivity, and the process gives you a better understanding of how your data should be stored.

One possible method is to divide data into several categories based upon the impact to the business in the event of an unauthorised release:

  • Public: intended for release; poses no risk to the business.
  • Low business impact (LBI): data or information that does not contain Personally Identifiable Information (PII) or cover sensitive topics, but which would generally not be intended for public release.
  • Medium business impact (MBI): information about the company that might not be sensitive on its own but which, when combined or analysed, could provide competitive insights, or PII that is not of a sensitive nature but should still be withheld to protect privacy.
  • High business impact (HBI): anything covered by regulatory constraints, anything involving reputational matters for the company or individuals, anything that could be used to provide competitive advantage, anything with financial value that could be stolen, or anything that could violate sensitive privacy concerns.

Next, you should set policy requirements for each category of risk. For example, LBI might require no encryption. MBI might require encryption in transit. HBI, in addition to encryption in transit, would require encryption at rest.
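
As a minimal sketch, the classification scheme and policy requirements above could be captured as a simple lookup table that provisioning scripts check before storage is created. The category names follow the examples above; the exact controls are illustrative only and should reflect your own policy.

    // Illustrative only: the classification categories and the controls each one requires.
    type Classification = "Public" | "LBI" | "MBI" | "HBI";

    interface StoragePolicy {
      publicAccessAllowed: boolean;
      encryptInTransit: boolean;
      encryptAtRest: boolean;
    }

    const policyMatrix: Record<Classification, StoragePolicy> = {
      Public: { publicAccessAllowed: true,  encryptInTransit: false, encryptAtRest: false },
      LBI:    { publicAccessAllowed: false, encryptInTransit: false, encryptAtRest: false },
      MBI:    { publicAccessAllowed: false, encryptInTransit: true,  encryptAtRest: false },
      HBI:    { publicAccessAllowed: false, encryptInTransit: true,  encryptAtRest: true  },
    };

    // Look up the controls required before provisioning storage for a dataset.
    function requiredControls(classification: Classification): StoragePolicy {
      return policyMatrix[classification];
    }

    console.log(requiredControls("HBI"));
    // { publicAccessAllowed: false, encryptInTransit: true, encryptAtRest: true }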

The Mistake – Public vs Private Cloud Storage

When classifying the data to be stored in the Cloud the first and most important question is “Should this data be available to the public, or just to individuals within the company?”

Once you have answered this question you can configure your Cloud storage, whether that is Amazon S3, Azure Storage accounts or another provider. One of the most important options available when configuring Cloud storage is whether access is set to “Private” or “Public”. This is where the mistake was made in the cases mentioned earlier: in all of them the Amazon S3 buckets were set to “Public”, yet the data stored within them was of a private nature.

The problem here is the understanding of the term “Public” when configuring Cloud storage. Some may think that “Public” means the data is available publicly to all individuals within your company; however, this is not the case. “Public” means your data is available to anyone who can access your Cloud storage URL, whether they are within your company or a member of the general public.

This setting is of vital importance. Once you are sure it is correct, you can then worry about other features that may be required, such as encryption in transit and encryption at rest.
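
As a rough sketch of how the access level could be checked programmatically, the snippet below uses the AWS SDK for JavaScript to inspect a bucket’s ACL and reset it to private if it grants access to the “AllUsers” group. The bucket name is a placeholder and the code assumes credentials with the relevant S3 permissions; note also that a bucket policy can separately grant public access, so an ACL check alone does not tell the whole story.

    import * as AWS from "aws-sdk";

    const s3 = new AWS.S3();
    const bucket = "my-example-bucket"; // placeholder bucket name
    const allUsersGroup = "http://acs.amazonaws.com/groups/global/AllUsers";

    async function ensurePrivate(): Promise<void> {
      // Any ACL grant to the AllUsers group means the bucket is readable by the public.
      const acl = await s3.getBucketAcl({ Bucket: bucket }).promise();
      const isPublic = (acl.Grants || []).some(
        grant => grant.Grantee !== undefined && grant.Grantee.URI === allUsersGroup
      );

      if (isPublic) {
        // Reset the canned ACL so that only the bucket owner has access.
        await s3.putBucketAcl({ Bucket: bucket, ACL: "private" }).promise();
        console.log(`${bucket} was publicly readable; ACL reset to private.`);
      } else {
        console.log(`${bucket} is not publicly readable via its ACL.`);
      }
    }

    ensurePrivate().catch(console.error);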

This is a simple error with a big impact: it can cost your company or customers a lot of money and, even more importantly, their reputation.

Brisbane O365 Saturday

On the weekend I had the pleasure of presenting at the O365 Saturday Brisbane event. The link is below.

http://o365saturdayaustralia.com/

In my presentation I demonstrated a new feature within Azure AD that allows the automatic assignment of licences from any of your Azure subscriptions using Dynamic Groups. So what’s cool about this feature?

Well, if you have a well-established organisational structure within your on-premise AD and you are synchronising the attributes needed to identify that structure, then you can have licences assigned to your users automatically based on their job type, department or even location. The neat thing about this is that it drastically simplifies the management of your licence allocation, which until now has largely been done through complicated scripting, either via an enterprise IAM system or by the service desk when users are set up for the first time.
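
If you want to experiment with the underlying objects yourself (this is separate from the presentation demo), below is a rough sketch of creating such a dynamic group through the Microsoft Graph API from Node/TypeScript. It assumes you already hold an access token with the Group.ReadWrite.All permission and a Node version with the built-in fetch API; the group name, mail nickname and department value are purely illustrative.

    // Create an Azure AD group whose membership is driven by the synchronised department attribute.
    const accessToken = process.env.GRAPH_TOKEN as string; // token acquisition omitted

    async function createSalesLicensingGroup(): Promise<void> {
      const response = await fetch("https://graph.microsoft.com/v1.0/groups", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${accessToken}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          displayName: "Licensing - Sales",
          mailEnabled: false,
          mailNickname: "licensing-sales",
          securityEnabled: true,
          groupTypes: ["DynamicMembership"],
          membershipRule: 'user.department -eq "Sales"',  // the dynamic membership rule
          membershipRuleProcessingState: "On",
        }),
      });

      if (!response.ok) {
        throw new Error(`Graph returned ${response.status}: ${await response.text()}`);
      }
      const group = await response.json();
      console.log("Dynamic group created:", group.id);
    }

    createSalesLicensingGroup().catch(console.error);

Licences can then be assigned to the resulting group using Azure AD group-based licensing, so anyone the rule picks up receives (and loses) the licences automatically as they move around the organisation.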

You can view or download the presentation by clicking on the following link.

O365 Saturday Automate Azure Licencing

Throughout the Kloud Blog there is an enormous amount of material featuring new and innovative ways to use Azure AD to advance your business, and if you would like a more detailed post on this topic then please leave a comment below and I’ll put something together.

 

Deploy a PHP site to Azure Web Apps using Dropbox

siliconvalve

I’ve been having some good fun getting into the nitty gritty of Azure’s Open Source support and keep coming across some amazing things.

If you want to move away from those legacy hosting businesses and want a simple method to deploy static or dynamic websites, then this is worth a look.

The sample PHP site I used for this demonstration can be cloned on Github here: https://github.com/banago/simple-php-website

The video is without sound, but should be easy enough to follow without it.

It’s so simple even your dog could do it.

Dogue

View original post

Cloud Security Research: Cross-Cloud Adversary Analytics

Newly published research from security firm Rapid7 is painting a worrying picture of hackers and malicious actors increasingly looking for new vectors against organizations with resources hosted in public cloud infrastructure environments.

Some highlights of Rapid7’s report:

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate – 86% and 74%, respectively – than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL Injection, PHP webshell injection and credentials attacks against ssh, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).

Findings included nearly a quarter of hosts deployed in IBM’s SoftLayer public cloud having databases publicly accessible over the internet, which should be a privacy and security concern to those organisations and their customers.

Many of Google’s cloud customers leave shell access publicly accessible over protocols such as SSH and, much worse still, Telnet, which is worrying to say the least.

Businesses using the public cloud are being increasingly probed by outsiders looking for well-known vulnerabilities such as OpenSSL Heartbleed (CVE-2014-0160), Stagefright (CVE-2015-1538) and POODLE (CVE-2014-3566), to name but a few.

Digging further into their methodologies to see whether these probes were random or targeted, it appears these actors are honing their skills, tailoring their probes and attacks to specific providers and organisations.

Rapid7’s research was conducted by means of honeypots: hosts and services made available solely for the purpose of capturing untoward activity, with a view to studying how these malicious outsiders do their work. What’s more, the company has partnered with Microsoft, Amazon and others under the auspices of projects Heisenberg and Sonar to leverage big data analytics to mine the results of their findings and scan the internet for trends.

Case in point: project Heisenberg saw the deployment of honeypots in every geography, in partnership with all major public cloud providers, and scanned for compromised digital certificates in those environments, while project Sonar scanned millions of digital certificates on the internet for signs of the same.

However, while the report provides clear evidence that hackers are tailoring their attacks to different providers and organisations, it reads more as an indictment of the poor standard of security deployed by some organisations in the public cloud today than as a statement on the security practices of the major providers.

The 2016 national exposure survey.

Read about the Heisenberg cloud project (slides).

Windows Information Protection – enabling BYO

Windows 7 has entered the extended support phase of its lifecycle.  What does this mean? Microsoft won’t end security updates for your Windows 7 PCs until the 14th of January 2020, so security should be covered.  However, feature updates (bug fixes) and free phone and online support have already ended.  At the same time as Windows 7 leaves extended support, Office 365 connection policies are changing to only allow Office clients in mainstream support to connect (that will be Microsoft Office 2016 or later and Microsoft Office 365 ProPlus)[i].  So, if you are running Windows 7 and/or Office 2013 or earlier, now is the time to look to the future.

As we all know from the press and personal usage, the real successor to Windows 7 is the evergreen, twice-yearly updated Windows 10.  The continual change of Windows 10 (aka Windows as a service), along with the evergreen SaaS apps enterprises are increasingly adopting and an end-user expectation of always updated, current apps (courtesy of smartphones), means the desktop strategies of yesterday (i.e. tightly managed, infrequently updated, limited or no personalisation) no longer look appropriate.

And BYO remains a hot topic for customers and pundits alike.

So how can you manage a continually changing desktop and support BYOD yet maintain the security of your data?

Microsoft have introduced a couple of capabilities to address these problems.  This blog will focus on developments in the ability to protect corporate data on lightly managed corporate or private devices – specifically, data at rest.

Windows Information Protection (WIP) is a new capability that harnesses Azure Rights Management and Intune (also available via System Center Configuration Manager) to protect data on Windows 10 Anniversary Update (build 1607) or later devices.  These are all part of the Azure Information Protection offering that addresses both client and server side protection of data.

WIP is an option under Intune -> Mobile Apps -> App Protection Policies.  As with any other Intune policy WIP can be applied to any Azure AD user group.

WIP has two main considerations regarding data security: data source and data access.

Data Source

In WIP you define network boundaries.   Below is the network boundaries blade in Intune.

Network boundary

A network boundary effectively defines where data does not need to be protected (i.e. within the boundary, say Office 365) and where it does (i.e. accessing outside the boundary such as downloading a file from Office 365, as per the figure below).

Explorer

On-premises applications and file servers could be within another network boundary, as could other SaaS options.  When data is sourced from within a network boundary by a device outside it (a PC on the internet), it should be marked as “work” and encrypted, as shown below.

explorer

Data Access

WIP has the concept of “Allowed Apps”.  These are applications defined within the WIP policy that are allowed to access work data.  Below is the allowed apps blade in Intune.

allowed apps

Microsoft classifies applications into “enlightened apps” and “unenlightened apps”. Enlightened apps can differentiate between corporate and personal data, correctly determining which to protect based on your policies.  Unenlightened apps can’t differentiate between corporate and personal data, so all of their data is considered corporate and is encrypted (by Windows, not the app).  The Microsoft client apps (Office, Edge, Notepad, etc.) are examples of enlightened apps.  Finally, if an application is not defined in Allowed Apps then it can’t read work data, nor can corporate data be cut and pasted from an allowed app into it.  In the scenario where an unenlightened app won’t work with WIP, it can be defined as exempt, in which case the corporate data it handles is not encrypted.

With Windows Information Protection, Windows now includes the functionality necessary to identify personal and business information, determine which apps have access to it, and provide the basic controls necessary to determine what users are able to do with business data (e.g.: Copy and Paste restrictions). Windows Information Protection is designed specifically to work with the Office 365 ProPlus and Azure Rights Management, which can help protect business data when it leaves the device or when it’s shared with others (e.g.: Print restrictions; Email forwarding).[ii]

And this capability is available in all editions of Windows 10 Anniversary Update (build 1607 or later).

Do you need Azure RMS?

WIP is focussed on securing enterprise data on a device; it does not address securing enterprise data in the wild.  Azure RMS provides rights management for data once it has left a device.  Azure RMS works with a fairly limited set of applications (mainly Microsoft Office across most platforms). With WIP alone, a protected file can’t be shared with another user, say by USB or an external drive or even an email attachment; it will be encrypted and inaccessible.  With RMS, data protection can be extended to data that leaves the device, such as an email attachment from an enlightened app (think Word, Excel, PowerPoint, OneNote, etc.) or a file on a USB drive or a cloud drive. With RMS you can also audit and monitor usage of your protected files, even after these files leave your organisation’s boundaries.

Addressing your Information Protection needs (on Windows 10)

WIP is not the definitive be-all and end-all capability for protecting corporate data.  Rather, it is part of a suite of capabilities that Microsoft provides on Windows 10.  BitLocker protects the device, WIP provides data separation and data leakage protection, and AIP provides additional, more complex data leakage protection as well as sharing protection.   These three capabilities combine to protect data at rest, in use and when shared.

So, now enterprise data can be secured on a Windows 10 device rather than the traditional approach of securing the device; suddenly BYOD doesn’t look that scary or impractical.

[i] Taken from Office 365 system requirements changes for Office <https://techcommunity.microsoft.com/t5/Office-365-Blog/Office-365-system-requirements-changes-for-Office-client/ba-p/62327>

[ii] Taken from Introducing Windows Information Protection <https://blogs.technet.microsoft.com/windowsitpro/2016/06/29/introducing-windows-information-protection/>

 

Implementing Bootstrap and Font-awesome in SharePoint Framework solutions using React

Responsive design has been the biggest driving factor for SharePoint Framework (SPFx) solutions. In a recent SPFx project for a customer, we developed a component using React, Bootstrap and Font-awesome icons for a responsive look and feel. While building the UI piece we encountered many issues during the initial set-up, so I am writing this blog with detailed steps for future reference. One of the key fixes mentioned in this post is for the WOFF2 font file type, which is used by both Font-awesome and Bootstrap.

In this blog post I will not be detailing the technical implementation (business logic and functionality), just the UI implementation. The following steps outline how to configure SPFx React client-side web parts to use Bootstrap and Font-awesome CSS styles.

Steps:

  1. Create a SharePoint Framework project using Yeoman. Refer to this link from the Microsoft docs for reference.
  2. Next, install jQuery, Bootstrap and Font-awesome using npm so that they are available within node_modules
    npm install jquery --save
    npm install @types/jquery --save-dev
    npm install bootstrap --save
    npm install @types/bootstrap --save-dev
    npm install font-awesome --save
    

    Check the node_modules folder to make sure they got installed successfully

  3. Now locate the config.json file in the config folder and add the entry below for the third-party JS library references.
    "externals": {
          "jquery": {
          "path": "node_modules/jquery/dist/jquery.min.js",
          "globalName": "jQuery"
        },
        "bootstrap": {
          "path": "node_modules/bootstrap/dist/js/bootstrap.min.js",
          "globalName": "bootstrap"
        }

    Then reference them in the .ts file in the src folder using import

    import * as jQuery from "jquery";
    import * as bootstrap from "bootstrap";
    
  4. For CSS references, we can either load the public CDN links using SPComponentLoader.loadCss() or refer to the local versions as below in the .tsx file.
    Note: Don’t use ‘require’ for the JS scripts as they are already imported in the step above. If included again they will cause a component load error.

    require('../../../../node_modules/bootstrap/dist/css/bootstrap.css');
    require('../../../../node_modules/bootstrap/dist/css/bootstrap.min.css');
    require('../../../../node_modules/bootstrap/dist/css/bootstrap-theme.css');
    require('../../../../node_modules/bootstrap/dist/css/bootstrap-theme.min.css');
    require('../../../../node_modules/font-awesome/css/font-awesome.css');
    require('../../../../node_modules/font-awesome/css/font-awesome.min.css');
    
  5. When using React, copy the HTML to the .tsx file in the components folder. If you want to use the HTML CSS classes as-is and not the SASS way, refer to this blog post. For image references, here is a good post to refer to.
    For anyone new to React like me, here are a few styling tips (see the short sketch after these steps):
    1. Use className instead of the HTML class attribute
    2. To use inline styles, use style={{style attributes}} or define a style object, since everything in JSX is an element
  6. When ready, use gulp serve to launch your solution in the local workbench.
    Important: If you’re using custom icons or fonts from the above CSS libraries, you will receive TypeScript errors saying that a loader module was not found for the WOFF2 font type. You will need to push a custom loader for the WOFF2 font type through gulpfile.js as below. First, install url-loader from npm.

     npm install url-loader --save-dev

    Then modify gulpfile.js at the root directory to load the custom loader.

    // In gulpfile.js, extend the generated webpack configuration with a url-loader rule
    // so that the .woff2 font files referenced by the CSS can be bundled.
    build.configureWebpack.mergeConfig({
      additionalConfiguration: (generatedConfiguration) => {
        generatedConfiguration.module.loaders.push([
          // Inline fonts up to ~10 KB as data URIs; larger files are emitted as separate assets.
          { test: /\.woff2(\?v=[0-9]\.[0-9]\.[0-9])?$/, loader: 'url-loader', query: { limit: 10000, mimetype: 'application/font-woff2'} }
        ]);
        return generatedConfiguration;
      }
    });
    

    Now gulp serve your solution and it should work fine.
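
As mentioned in the styling tips under step 5, here is a minimal .tsx sketch showing className and an inline style object in a React component; the component name, classes and styles are illustrative only.

    import * as React from 'react';

    // Bootstrap/Font-awesome classes go on className (not class); inline styles are a JS object.
    const panelStyle: React.CSSProperties = { padding: '1rem', backgroundColor: '#f5f5f5' };

    export function DemoPanel(): JSX.Element {
      return (
        <div className="panel panel-default" style={panelStyle}>
          <div className="panel-body">
            <i className="fa fa-check" aria-hidden="true" /> Styled with Bootstrap and Font-awesome
          </div>
        </div>
      );
    }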

You might still face CSS issues in the solution where the referenced CSS doesn’t exactly match the HTML/CSS implementation. To resolve any conflicts, use CSS overrides (!important) where necessary.

In this post I have shown how we can configure Bootstrap and Font-awesome third-party CSS files to work with SharePoint Framework solutions while leveraging the React framework.

Cloud PABX

Are you looking at a Skype for Business Cloud solution and somewhat perplexed by the many options available? Choosing between the types of Skype for Business licensing and topologies can be confusing, so you will need to consider the following concepts to help you decide on the migration path to Skype for Business Cloud PABX.

Cloud PBX is your phone system in the cloud, tightly integrated with Office 365. With the E5 license, users get the voice capabilities they need: make, receive and manage calls from anywhere, using any device. You have the choice to use your current telco provider or subscribe to Microsoft PSTN Calling for public call access. Cloud PBX also provides great controls for IT to increase agility and consolidate management compared to traditional phone hardware.

PSTN Calling is an add-on to Cloud PBX that provides domestic and international calling services for businesses directly from Office 365. Instead of contracting with a traditional telephony carrier and using an on-premises IP-PBX, you can purchase Cloud PBX from Microsoft and add on PSTN Calling for a complete enterprise telephony experience for end users. With PSTN Calling, Microsoft is the telco provider. However, the PSTN Calling subscription is still a few months away from release in Australia.

Cloud PABX with Cloud Connector Edition is available now in Australia. So, ask yourself the following common questions to see which option is right for you.

Existing Contracts

Do I have existing long-term contract with PSTN provider?

  • If you are locked into a telco provider you may be receiving a generous discount for all your PSTN, mobile and data services. Microsoft Skype for Business Cloud Connector Edition is a mechanism to extend your telco services to the Cloud through an on-premise gateway. It is possible to maintain existing telco agreements with this topology.

Analogue Devices

Do I have lots of Analogue services and Common Area Phones?

  • One of the biggest challenges with migrating to Cloud PABX is what to do with all your old legacy analogue devices. Ideally you will replace these devices with Skype for Business optimised devices; however, you may encounter a road block that stops you upgrading them to new IP phones. For example, that door phone and release relay: what do you do with that? Or the phones in remote warehouse locations not covered by a data network?
  • The legacy fax service can be migrated to hosted fax services or multifunction devices.

You have two options:

  1. Replace legacy analogue devices with low-end Lync Online IP phones and pay for a basic SfB Online subscription.
  2. Deploy SfB Cloud PABX with Cloud Connector Edition (CCE) and configure the on-premise gateway with the additional analogue services and special routing tables for them. The following diagram shows how legacy infrastructure is integrated.

Summary

Consider the following before implementing a Cloud PBX:

Pros:

  • One portal in Office 365 for email and phone system
  • Easy to set up and maintain
  • Minimal on-premise infrastructure upgrades needed
  • Lowest expertise needed for use and maintenance
  • Lower TCO compared to on-premise Skype for Business implementations
  • Minimal incremental license addition to enable E5
Cons:
  • Expensive when compared to third-party hosting
  • Missing feature parity with on-premise Skype for Business versions
  • Minimal control over upgrades and customization

You have invested significant capital in a legacy PABX network. The PABX vendor is charging you software subscription and upgrade fees to keep your legacy telephone infrastructure current, your maintenance contract costs are high, and all you are getting is the ability to make and receive voice calls. If you subscribe to Office 365 E3 then it is a small incremental cost to update to a Cloud PABX E5 license. The business drivers to migrate the whole network to a cloud-based model may result in substantial cost savings.

So, you have your business driver to migrate some or all of your PABX infrastructure to Skype for Business Cloud PABX, and with the correct integration topology it is possible to perform a staged or full migration. I will describe this in my next blog.

Keith Coutlemanis

Putting SQL to REST with Azure Data Factory

Microsoft’s integration stack has slowly matured over the past years, and we’re on the verge of finally breaking away from BizTalk Server, or are we? In this article I’m going to explore Azure Data Factory (ADF). Rather than showing the usual out of the box demo I’m going to demonstrate a real-world scenario that I recently encountered at one of Kloud’s customers.

ADF is a very easy-to-use and cost-effective solution for simple integration scenarios that are best described as ETL in the ‘old world’. ADF can run at large scale and has a series of connectors to load data from a data source, apply a simple mapping and load the transformed data into a target destination.

ADF is limited in terms of standard connectors and (currently) has no functionality to send data to HTTP/RESTful endpoints. Data can be sourced from HTTP endpoints, but in this case we’re going to read data from a SQL Server database and write it to an HTTP endpoint.

Unfortunately ADF tooling isn’t available in VS2017 yet, but you can download the Microsoft Azure DataFactory Tools for Visual Studio 2015 here. Next we’ll use the extremely useful 3rd party library ‘Azure.DataFactory.LocalEnvironment’ that can be found on GitHub. This library allows you to debug ADF projects locally, and eases deployment by generating ARM templates. The easiest way to get started is to open the sample solution, and modify accordingly.

You’ll also need to set up an Azure Batch account and a storage account according to the Microsoft documentation. Azure Batch runs your execution host engine, which effectively runs your custom activities on one or more VMs in a pool of nodes. The storage account will be used to deploy your custom activity and is also used for ADF logging purposes. We’ll also create an Azure SQL AdventureWorksLT database to read some data from.

Using the VS templates we’ll create the following artefacts:

  • AzureSqlLinkedService (AzureSqlLinkedService1.json)
    This is the linked service that connects the source with the pipeline, and contains the connection string to connect to our AdventureWorksLT database.
  • WebLinkedService (WebLinkedService1.json)
    This is the linked service that connects to the target pipeline. ADF doesn’t support this type as an output service, so we only use it to refer to from our HTTP table so it passes schema validation.
  • AzureSqlTableLocation (AzureSqlTableLocation1.json)
    This contains the table definition of the Azure SQL source table.
  • HttpTableLocation (HttpTableLocation1.json)
    The tooling doesn’t contain a specific template for HTTP tables, but we can manually tweak any table template to represent our target (JSON) structure.

AzureSqlLinkedService

AzureSqlTable

Furthermore, we’ll adjust the DataDownloaderSamplePipeline.json to use the input and output tables that are defined above. We’ll also set our schedule and add a custom property to define a column mapping that allows us to map between input columns and output fields.

The grunt work of the solution is performed in the DataDownloaderActivity class, where custom .NET code ‘wires together’ the input and output data sources and performs the actual copying of data. The class uses a SqlDataReader to read records and copies them in chunks as JSON to our target HTTP service. For demonstration purposes I am using the Request Bin service to verify that the output data made its way to the target destination.
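
The activity itself is written in C# against ADF’s custom activity interfaces (see the gists below). Purely as an illustration of the same read-in-chunks-and-POST pattern outside ADF, here is a rough standalone Node/TypeScript sketch using the mssql package and the built-in fetch API (Node 18+); the connection string, query, target URL and chunk size are all placeholders.

    import * as sql from "mssql";

    const connectionString = process.env.SQL_CONNECTION_STRING as string;
    const targetUrl = "https://example-request-bin.invalid/endpoint"; // hypothetical endpoint
    const chunkSize = 100;

    async function copySqlToHttp(): Promise<void> {
      const pool = await sql.connect(connectionString);
      try {
        // Read the source rows (AdventureWorksLT customers, as used in the post).
        const result = await pool
          .request()
          .query("SELECT CustomerID, FirstName, LastName FROM SalesLT.Customer");

        // POST the records to the HTTP endpoint in chunks, mirroring the batch copy behaviour.
        const rows = result.recordset;
        for (let i = 0; i < rows.length; i += chunkSize) {
          const chunk = rows.slice(i, i + chunkSize);
          const response = await fetch(targetUrl, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(chunk),
          });
          if (!response.ok) {
            throw new Error(`POST failed with status ${response.status}`);
          }
        }
      } finally {
        await pool.close();
      }
    }

    copySqlToHttp().catch(console.error);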

We can deploy our solution via PowerShell, or the Visual Studio 2015 tooling if preferred:

NewADF

After deployment we can see the data factory appearing in the portal, and use the monitoring feature to see our copy tasks spinning up according to the defined schedule:

ADF Output

In the Request Bin that I created I can see the output batches appearing one at a time:

RequestBinOutput

As you might notice, it’s not all that straightforward to compose and deploy a custom activity, and having to rely on Azure Batch can incur significant cost unless you adopt the right auto-scaling strategy. Although the solution requires us to write code and implement our connectivity logic ourselves, we are able to leverage some nice platform features such as a reliable execution host, retry logic, scaling, logging and monitoring, all accessible through the Azure portal.

The complete source code can be found here. The below gists show the various ADF artefacts and the custom .NET activity.

The custom activity C# code:

A Year with Azure Functions

siliconvalve

“The best journeys answer questions that in the beginning you didn’t even think to ask.” – Jeff Johnson – 180° South

I thought with the announcement at Build 2017 of the preview of the Functions tooling for Visual Studio 2017 that I would take a look back on the journey I’ve been on with Functions for the past 12 months. This post is a chance to cover what’s changed and capture some of the lessons I learned along the way, and that hopefully you can take something away from.

In the beginning

I joined the early stages of a project team at a customer and was provided with access to their Microsoft Azure cloud environment as a way to stand up rapid prototypes and deliver ongoing solutions.

I’ve been keen for a while to do away with as much traditional infrastructure as possible in my solutions, primarily as a way…

View original post 1,383 more words

Kloud Managed Services team responds to the global (WannaCry) ransomware attack

It’s been a busy week for our Managed Services team, who have collaborated across our globally distributed team to provide a customer-first security response to this most recent ransomware attack.

From the outset our teams acted swiftly to ensure the several hundred servers, platforms & desktop infrastructure under our management were secure.

Based on our deep knowledge of our customers, their environments and landscapes, Kloud Managed Services was able to swiftly develop individual security approaches to remediate discovered security weak points.

In cases where customers have opted for Kloud’s robust approach to patch management, the process ensured customer environments were proactively protected against the threat. For these customers no action was required other than to confirm the relevant patches were in place for all infrastructure Kloud manages.

Communication is key in responding to these types of threats and the Managed Services team’s ability to collaborate and share knowledge quickly internally and then externally with our customers has ensured we remain on top of this threat and are ever vigilant for the next one.