Provide non-admin users with read-only access to Service Endpoints in VSTS


I am currently transitioning some work to another team in our business. Part of this transition has been to pre-configure various Service Endpoints in Visual Studio Team Services (VSTS) to provide a way for the new team to deploy into target Azure environments without the team necessarily having direct or privileged access into those Azure environments.

In this post I am going to look at how you can grant users access to these Service Endpoints without them being able to modify them. This post will also be useful if you’ve configured Service Endpoints (as an admin) and then others on the team (who are non-admins) are unable to see them.

Note that this advice applies to any Service Endpoint – not just Azure!

By default only users who are members of the following groups can see Service Endpoints:

– Project Admins
– Endpoint Admins
– Endpoint Creators.

It’s unlikely that…


EU GDPR – is it relevant to Australian companies?

The new General Data Protection Regulation (GDPR) from the European Union (EU) imposes new rules on organisations that offer goods and services to people in the EU, or that collect and analyse data tied to EU residents, no matter where the organisation or the data processing is located. GDPR comes into force in May 2018.

If your customers reside in the EU, whether you have a presence in the EU or not, then GDPR applies to you. The internet lets you interact with customers wherever they are, and GDPR applies to anyone that deals with people in the EU, wherever those people are.

And the term personal data covers everything from IP addresses, to cookie data, to submitted forms, to CCTV footage, and even to a photo of a landscape that can be tied to an identity. Then there is sensitive personal data, such as ethnicity, sexual orientation and genetic data, which has enhanced protections.

And for the first time there are very strong penalties for non-compliance: the maximum fine for a GDPR breach is €20 million or 4% of worldwide annual turnover, whichever is greater. The maximum fine can be imposed for the most serious infringements, e.g. not having sufficient customer consent to process data or violating the core of Privacy by Design concepts.

Essentially GDPR states that organisations must:

  • provide clear notice of data collection
  • outline the purpose the data is being used for
  • collect only the data needed for that purpose
  • ensure that the data is kept only as long as required for that processing
  • disclose whether the data will be shared within or outside the EU
  • protect personal data using appropriate security
  • respect individuals’ rights to access, correct and erase their personal data, and to stop an organisation processing their data
  • notify authorities of personal data breaches.

Specific criteria for companies required to comply are:

  • A presence in an EU country
  • No presence in the EU, but it processes personal data of European residents
  • More than 250 employees
  • Fewer than 250 employees, but the processing it carries out is likely to result in a risk to the rights and freedoms of data subjects, is not occasional, or includes certain types of sensitive personal data. That effectively means almost all companies.

What does this mean in real terms for some well-known large companies? Well…

  • Apple turned over about USD$230B in 2017, so the maximum fine applicable to Apple would be USD$9.2B
  • CBA turned over AUD$26B in 2017 and so their maximum fine would “only” be AUD$1B
  • Telstra turned over AUD$28.2B in 2017, the maximum fine would be AUD$1.1B.
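These figures follow directly from the fine formula: the greater of €20M and 4% of annual turnover. A minimal sketch of the arithmetic (ignoring currency conversion between the examples above, and using integer maths for exactness):

```python
def max_gdpr_fine(annual_turnover: int, flat_cap: int = 20_000_000) -> int:
    """Maximum GDPR fine: the greater of the flat cap (20M) and
    4% of worldwide annual turnover (currency conversion ignored)."""
    return max(flat_cap, annual_turnover * 4 // 100)

# Apple, ~USD 230B turnover in 2017:
print(max_gdpr_fine(230_000_000_000))  # 9200000000, i.e. USD 9.2B
```

For small turnovers the flat cap dominates: a company turning over $100M still faces a €20M maximum.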


The GDPR legislation won’t impact Australian businesses, will it? What if an EU resident gets a Telstra phone or a CBA credit/travel card whilst on holiday in Australia? Or what if your organisation has local regulatory data retention requirements that appear, on the surface at least, to be at odds with GDPR obligations?

I would get legal advice if your organisation provides services that may be used by EU nationals.

In a recent PwC pulse survey, “US Companies ramping up General Data Protection Regulation (GDPR) budgets”, 92% of respondents said that GDPR is one of their top priorities.

Technology alone cannot make an organisation GDPR compliant; there must be policy, process and people changes to support GDPR. But technology can greatly assist organisations that need to comply.

Microsoft has invested in providing assistance to organisations impacted by GDPR.

Office 365 Advanced Data Governance enables you to intelligently manage your organisation’s data with classifications. The classifications can be applied automatically, for example, if there is GDPR German PII data present in the document the document can be marked as confidential when saved. With the document marked the data can be protected, whether that is to encrypt the file or assign permissions based on user IDs, or add watermarks indicating sensitivity.

An organisation can choose to encrypt its data at rest in Office 365, Dynamics 365 or Azure with its own encryption keys; alternatively, a Microsoft-generated key can be used. Using customer keys sounds like a no-brainer, but the customer must have an HSM (Hardware Security Module) and a proven key management capability.

Azure Information Protection enables an organisation to track and control marked data. Distribution of data can be monitored, and access and access attempts logged. This information can allow an organisation to revoke access from an employee or partner if data is being shared without authorisation.

Azure Active Directory (AD) can provide risk-based conditional access controls. Signals such as whether the user’s credentials have been found in public data breaches, whether the device is unmanaged, whether the user is trying to access a sensitive app, whether they are a privileged user, or whether they have just completed an impossible trip (logged in five minutes ago from Australia, with the current attempt coming from somewhere a 12-hour flight away) are used to assess the risk of the user and of the session. Based on that risk, access can be granted, multi-factor authentication (MFA) can be requested, or access can be limited or denied.

Microsoft Enterprise Mobility + Security (EMS) can protect your cloud and on-premises resources. Advanced behavioural analytics are the basis for identifying threats before data is compromised. Advanced Threat Analytics (ATA) detects abnormal behaviour and provides advanced threat detection for on-premises resources. Azure AD provides protection from identity-based attacks and cloud-based threat detection, and Cloud App Security detects anomalies for cloud apps. Cloud App Security can detect what cloud apps are being used, control access, and support compliance efforts with regulatory mandates such as the Payment Card Industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), Sarbanes-Oxley (SOX), the General Data Protection Regulation (GDPR) and others. Cloud App Security can apply policies to apps from Microsoft or other vendors, such as Box, Dropbox, Salesforce, and more.

Microsoft provides a set of compliance and security tools to help organisations meet their regulatory obligations. To reiterate: policy, process and people changes are required to support GDPR.

Please discuss your legal obligations with a legal professional to clarify any obligations that the EU GDPR may place on your organisation. Remember May 2018 is only a few months away.

Validating a Yubico YubiKey’s One Time Password (OTP) using Single Factor Authentication and PowerShell

Multi-factor authentication comes in many different formats. Physical tokens have historically been very common and, with the FIDO v2 standards, will likely continue to be so for many security scenarios where soft tokens (think authenticator apps on mobile devices) aren’t possible.

Yubico YubiKeys are physical tokens that have a number of properties that make them desirable. They don’t use a battery (so aren’t limited to a battery’s life), they come in many differing formats (NFC, USB-3, USB-C), can hold multiple sets of credentials, and support open standards for multi-factor authentication. You can check out Yubico’s range of tokens here.

YubiKeys ship pre-configured with a credential that allows them to be validated against YubiCloud. Before we configure them for a user, I wanted a quick way to validate that a YubiKey was valid. You can do this using Yubico’s demo webpage here, but for other reasons I needed to write my own. There weren’t any PowerShell examples anywhere, so now that I’ve worked it out, I’m posting it here.


You will need a YubiKey. You will also need to register and obtain a Yubico API key (using your YubiKey) from here.

Validation Script

Update the following script, changing line 2 to the ClientID that you received after registering for the Yubico API above.
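The embedded script itself isn’t reproduced in this excerpt. As a rough illustration of the same flow (not the original PowerShell), here’s a Python sketch that calls YubiCloud’s verify endpoint and parses its key=value response; the function names are my own, and verification of the response’s HMAC signature is omitted for brevity.

```python
import secrets
import urllib.parse
import urllib.request

VERIFY_URL = "https://api.yubico.com/wsapi/2.0/verify"

def parse_response(body: str) -> dict:
    """YubiCloud answers with key=value pairs, one per line."""
    lines = (line for line in body.strip().splitlines() if "=" in line)
    return dict(line.split("=", 1) for line in lines)

def verify_otp(client_id: str, otp: str) -> str:
    """Submit an OTP and return the status field (OK, REPLAYED_REQUEST, ...)."""
    nonce = secrets.token_hex(16)  # random 16-40 char nonce, echoed back by the API
    query = urllib.parse.urlencode({"id": client_id, "otp": otp, "nonce": nonce})
    with urllib.request.urlopen(f"{VERIFY_URL}?{query}") as resp:
        return parse_response(resp.read().decode()).get("status", "UNKNOWN")
```

A status of OK means the OTP is valid; submitting the same OTP a second time should return REPLAYED_REQUEST.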

Running the script validates that the key is valid.

YubiKey Validation.PNG

Re-running the submission of the same key (i.e. I didn’t generate a new OTP) gets the expected response that the request is replayed.

YubiKey Validation Failed.PNG


Using PowerShell, we can negate the need for any Yubico client libraries and validate a YubiKey against YubiCloud.


Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager v2, k-Anonymity and Have I Been Pwned


In August 2017 Troy Hunt released a sizeable list of Pwned Passwords. 320 million, in fact.

I subsequently wrote this post on Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager, which called the API and set a boolean attribute in the MIM Service that could be used with business logic to force users with compromised passwords to change their password on next logon.

Whilst that was a proof of concept/discussion point of sorts, and I had a disclaimer about sending passwords across the internet to a third-party service, there was a lot of momentum around the HIBP API, so I developed a solution and wrote this update to check the passwords locally.

Today Troy has released v2 of that list and updated the API with new features and functionality. If you’re playing catch-up I encourage you to read Troy’s post from August last year, and my two posts about checking Active Directory passwords against that list.

Leveraging V2 (with k-Anonymity) of the Have I Been Pwned API

With v2 of the HIBP password list and API, the number of leaked credentials in the list has grown to half a billion. 501,636,842 Pwned Passwords, to be exact.

For the v2 list, and in conjunction with Junade Ali from Cloudflare, the API has been updated so it can be queried with a level of anonymity. Instead of sending a SHA-1 hash of the password to check whether the password is on the list, you can now send a truncated version of the SHA-1 hash, and you will be returned a set of matching hashes from the HIBP v2 API. This is done using a concept called k-anonymity, detailed brilliantly here by Junade Ali.

v2 of the API also returns a count for each password in the list: how many times the password has previously been seen in leaked credential lists. Brilliant.
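To make the k-anonymity lookup concrete, here’s a minimal Python sketch of the flow (the range endpoint is the documented HIBP v2 API; the function names are mine): SHA-1 the password, send only the first five hex characters, then search the returned suffixes locally, so the full hash never leaves your machine.

```python
import hashlib
import urllib.request

def sha1_split(password: str) -> tuple:
    """Return the 5-char prefix sent to HIBP and the 35-char suffix kept local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """How many times the password appears in the HIBP v2 list (0 if absent)."""
    prefix, suffix = sha1_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Each line is "SUFFIX:COUNT"; match our suffix locally.
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

Because only five characters of the hash are transmitted, the service learns nothing more than that your password hash is one of the few hundred sharing that prefix.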

Updated Pwned PowerShell Management Agent for Pwned Password Lookup

Below is an updated Password.ps1 script for my Pwned Password Management Agent for Microsoft Identity Manager (previously written against the earlier API version). It functions by:

  • taking the new password received from PCNS
  • hashing the password to SHA-1 format
  • looking up the v2 HIBP API using part of the SHA-1 hash
  • updating the MIM Service with the Pwned Password status

Check out the original post, with all the rest of the details, here.


Of course, you can also download (recommended via torrent) the Pwned Password dataset. Keep in mind that the compressed dataset is 8.75 GB, and uncompressed it is 29.4 GB. Convert that into on-premises SQL table(s), as I did in the post linked at the beginning of this post, and you’ll be well in excess of that.

Awesome work from Troy and Junade.


Using Intune and AAD to protect against Spectre and Meltdown

Kieran Jacobsen is a Melbourne based IT professional specialising in Microsoft infrastructure, automation and security. Kieran is Head of Information Technology for Microsoft partner, Readify.

I’m a big fan of Intune’s device compliance policies and Azure Active Directory’s (AAD) conditional access rules. They’re one piece of the puzzle in moving to a BeyondCorp model, which I believe is the future of enterprise networks.

Compliance policies allow us to define what it takes for a device (typically a client) to be considered secure. The rules could include the use of a password, encryption, OS version, or even whether a device has been jailbroken or rooted. In Intune we can define policies for Windows 8.1 and 10, Windows Phone, macOS, iOS and Android.

One critical thing to highlight is that compliance policies don’t enforce settings and don’t make changes to a device. They’re simply a decision-making tool that allows Intune (and AAD) to determine the status of the device. If we want to make changes to a device, we need to use Intune configuration policies. It’s up to the admin or the user to make a non-compliant device compliant.

A common misconception with compliance policies is that the verification process occurs in real time, that is, that the device’s compliance status is checked when a user tries to log in. In fact, the check occurs on an hourly basis, though users and admins can trigger a check manually.

The next piece of the puzzle is conditional access policies. These are policies that allow us to target different sign-in experiences for different applications, devices and user accounts. A user on a compliant device may receive a different sign-in experience from someone using a web browser on some random, unknown device.

How compliance policies and conditional access work together

To understand how compliance policies and conditional access work together, let’s look at a user story.

Fred works in the Accounting department at Capital Systems. Fred has a work PC issued by Capital’s IT Team, and a home PC that he bought from a local computer store.

The IT team has defined two Conditional Access policies:

  • For Office 365: a user can connect from a compliant device, or needs to pass an MFA check.
  • For the finance system: the user can only connect from a compliant device and must pass an MFA check.

How does this work in practice?

When Fred tries to access his email from his work device, perhaps through a browser, AAD will check his device’s compliance status during login. As Fred’s work PC is compliant, it will allow access to his email.

On the train home, Fred remembers he forgot to reply to an important email. When he gets home, he starts his home PC and navigates to the Office 365 portal. This time, AAD doesn’t know the device, so it will treat the device as non-compliant, and Fred will be prompted to complete MFA before he can access his email.

Things are different for Fred when he tries to access Capital’s finance system. Fred will be able to access this system from his work PC, as it’s compliant, assuming he completes an MFA request. He won’t be able to access the finance system from his home PC, as that device isn’t compliant.

These rules allow Capital System’s IT team to govern who can access an application, from which devices, and whether they need to complete MFA.

Ensuring Spectre and Meltdown Patches are installed

We can use compliance policies to check whether a device’s OS version contains the Spectre and Meltdown patches. When Intune checks the device’s compliance, if it isn’t running at the expected patch level, it will be marked as non-compliant.

What does this mean for the user? In Fred’s case, if his work PC lacks those updates, he may receive extra MFA prompts and lose access to the finance system until he installs the right patches.

The Intune portal and Power BI can be used to generate reports on device compliance and identify devices that need attention. You can also configure Intune to email a user when their device becomes non-compliant. This email can be customised; I recommend that you include a link to a remediation guide or to your support system.

Configuring Intune Compliance Policies

Compliance policies can be created and modified in the Azure Portal via the Intune panel. Simply navigate to Device Compliance and then Policies. You’ll need to create a separate policy for each OS whose compliance you want to manage.

Within a compliance policy, we specify an OS version using a dotted “major.minor.build” formatted string.

The major versions numbers are:

  • Windows 10 – 10.0 (note that the .0 is important)
  • Windows 8.1 – 6.3
  • macOS – 10

We can express releases like the Windows 10 Fall Creators Update or macOS High Sierra using the minor version number.

  • Windows 10 Fall Creators Update – 10.0.16299
  • macOS High Sierra – 10.13

Finally, we can narrow down to a specific release or patch by using the build version number. For instance, the January updates for each platform are:

  • Windows 10 Fall Creators Update – 10.0.16299.192
  • macOS High Sierra – 10.13.2
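Under the hood this is a dotted-version comparison, not a plain string comparison ("10.0.9999" must sort below "10.0.16299"). A small sketch of the logic, purely illustrative rather than Intune’s actual implementation:

```python
def version_tuple(version: str) -> tuple:
    """Parse a dotted OS version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum(device_version: str, minimum: str) -> bool:
    """True when the device version is at or above the policy minimum."""
    return version_tuple(device_version) >= version_tuple(minimum)

print(meets_minimum("10.0.16299.192", "10.0.16299.192"))  # True
print(meets_minimum("10.0.9999", "10.0.16299"))           # False (not a string compare)
```

Note that a version without the build component, such as "10.0.16299", compares lower than "10.0.16299.192", which is why specifying the full build string matters when targeting a specific patch.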

You can specify the minimum and maximum OS version by navigating to Properties, Settings and then Device Properties.



Setting the minimum Windows 10 version in a compliance policy.


Setting the minimum macOS version in a compliance policy.

Once you have made this change, devices that don’t meet the minimum version will be marked as non-compliant during their next compliance evaluation.

Kieran Jacobsen

Checking and patching your Microsoft Windows computer for Meltdown and Spectre


In mid-2017 a Google team named Project Zero identified vulnerabilities in many Intel, AMD and ARM CPUs that allow speculative execution of code to be abused. Speculative execution exists because it aids performance. However, when used maliciously it can allow an attacker, for example using JavaScript in a webpage, to access memory that could contain information present in a user’s environment, such as keystrokes, passwords and other sensitive personal information.

A very good overview on the how (and a little of the why) is summarised in a series of tweets by Graham Sutherland here.


In the January security updates Microsoft have provided updates to protect its operating systems (Windows 7 SP1 and later); more on this below. They have also provided a PowerShell module to inspect and report on the status of a Windows operating system.

What you are going to need to do is patch your Windows Operating System and update your computers firmware (BIOS).

Using an administrative PowerShell session on a Windows workstation with Windows Management Framework 5.x installed, the following three lines will download and install the PowerShell module, import it, and execute the check to report on the status.

Install-Module SpeculationControl
Import-Module SpeculationControl
Get-SpeculationControlSettings

The output below shows that the operating system does not contain the updates for the vulnerability.

PowerShell Check.PNG

Obtaining the Windows Security Updates

Microsoft included updates for its operating systems (Windows 7 SP1 and newer) in the January update released on 3 January 2018, as shown below. They can be obtained from the Microsoft Security Portal here; search for CVE-2017-5715 to get the details.


Go to the Microsoft Update Catalog to obtain the update individually.

The quickest and easiest though is to press your Windows Key, select the Gear (settings) icon, Update & Security, Windows Update.

Update & Security.PNG

Check status, install the updates, and restart your Windows computer.

Windows Update.PNG

Speculation Control Status

After installing the updates and restarting the computer, we can run the check again. It now shows we are partially protected: protected against Meltdown, but only partially protected against Spectre. A BIOS update is required to complete the mitigation for Spectre.

Rerun Powershell Check.PNG

I obtained the latest BIOS for my laptop from the manufacturer’s support website. If you are also on a Lenovo Yoga 910, that is here. However, the latest Lenovo firmware doesn’t yet include updates for this vulnerability, and my particular model of laptop isn’t listed as being affected. I’ll keep checking to see if that changes.


In Microsoft environments, your patching strategy will get you most of the way with the Microsoft January security updates. BIOS updates to your fleet will take additional planning and effort to complete.


Another day, another data breach


Make no mistake: the Equifax data breach of about 143 million records (approximately 44% of the US population) is one of the largest, and will likely be the most expensive, data breach in history. Equifax is one of the four largest American credit agencies, alongside Experian, TransUnion and Innovis.

The data breach notification by Equifax should remind us that data breaches are inevitable and these breaches will continue to happen and make headlines.

However, the key message of this breach is that the reporting firm took over five weeks to publicly disclose the data breach, which means that the personal information of 143 million people was exposed for over two months before those affected were made aware of the compromise. (Please note: the breach occurred sometime in May and was not even detected until late July 2017.)

And to no surprise, the stock market didn’t react well. By Monday 11/09, the company had lost about $2 billion in market cap since the news broke on Friday, its shares tumbling nearly 14%. (This figure will surely go up.)

A proposed class action, seeking as much as USD$70 billion in damages nationally on behalf of 143 million consumers, alleges that Equifax didn’t spend enough on protecting data (or, should we say, did not take reasonable steps to protect information).

With this treasure trove in hand what is the most likely move of the hackers?

  1. They would already be selling this information on the dark web, or negotiating with Equifax for a ransom; or
  2. Data mining to see if they can use this data for identity theft. Imagine hackers creating countless new identities out of thin air. You’re not the only one who thinks this is a terrifying scenario!

Whatever the reason for this breach or its attack vector, organisations that hold more personal data than they need always carry more risk for themselves and their consumers.

In 2016, Microsoft framed digital growth with its estimate that by 2020 four billion people will be online, twice the number online at the time. It predicts that 50 billion devices will be connected to the Internet by 2020 and that data volumes online will be 50 times greater than today, with cybercrime damage costs hitting $6 trillion annually by 2021.

This is the real impact when the corporations and individuals in charge of cybersecurity do not understand the fundamentals of cybersecurity and the difference between IT security and information security.

The lesson from this breach is simple: a breach in cybersecurity can cost a company both financially and in damage to its reputation, so it’s imperative to invest in cybersecurity commensurate with the classification of the data you hold.

A good starting point is to identify how information/data is handled across its entire life cycle: in transit, in use, at rest and at disposal.

If you need any help with how to protect your data, have a chat to us today and find out how Kloud/Telstra can help you overcome your specific data security requirements.

170 Days to Go…

Notifiable Data Breach Scheme starts on 22nd February 2018 — How well are you prepared?


The focus on cyber security is rapidly increasing partly due to recent high-profile security breaches within major organisations and businesses. Evolving levels of sophistication, stealth, and reach of organised cyber-attacks requires more attention than ever before. Coupling cyber concerns with threats organisations face internally, cyber security now resides high on many corporate risk registers as a top concern for executives and business owners.

In response to the increase in cyber threats and activity, organisations require greater visibility of, and insight into, their current level of maturity. This in turn leads to a process of strengthening the organisation’s controls to a more mature state that lends itself to cyber risk reduction.

In February 2017, the Commonwealth government passed the Privacy Amendment (Notifiable Data Breaches) Act 2017, which amended the Privacy Act 1988, making it mandatory for companies and organisations (Government/ Non-Government) to report “eligible data breaches” to the ‘Office of the Australian Information Commissioner’ (OAIC) and any affected, ‘at-risk’ individuals.

The Privacy Act 1988 has been amended to encourage entities to uplift their current security posture, to ensure personally identifiable information is protected across its entire ‘data life cycle’ and securely deleted when no longer required.


The ‘Notifiable Data Breaches’ (NDB) scheme applies to most Australian and Norfolk Island Government agencies, all private sector and not-for-profit organisations with an annual turnover of more than $3 million, all private health service providers, and some small businesses (collectively called ‘APP entities’). To see whether it applies to your organisation, please refer to the Privacy Act 1988.

The above entities must take reasonable steps to protect the personally identifiable information they hold. This includes, but is not limited to, protection against malicious actions such as theft or ‘hacking’, as well as against internal errors or failures to follow information handling policies that cause accidental loss or disclosure.

In general, if there is a real risk of serious harm as a result of a data breach, the affected individuals and the OAIC should be notified. Some of the key facts from the ‘2017 Cost of Data Breach Study from Ponemon Institute’ and ‘Mandiant’ indicate:

  • It took businesses an average of 191 days to identify the data breach and an average of 66 days to contain the breach [1];
  • Data breaches cost companies an average of $139 per compromised record [2]; and
  • Only 31% of organisations globally discovered IT security compromises through their own resources last year, according to Mandiant.

[1], [2] Ponemon Institute © 2017 Cost of Data Breach Study – Australia

High profile security breaches in Australia

When it comes to data security breaches, last year saw 1792 data breaches, which led to almost 1.4 billion data records lost or stolen from organisations globally according to the Gemalto Breach Level Index.

In Australia, we saw a combined total of 15,899 cyber security incidents reported based on the Australian Cyber Security Centre (ACSC) Threat Report 2016 which included:

  • Threats to Government – Between 1 January 2015 and 30 June 2016, ASD, as part of the ACSC, responded to 1095 cyber security incidents on government systems, which were considered serious enough to warrant operational responses; and
  • Threats to Private Sector – Between July 2015 and June 2016, CERT Australia responded to 14,804 cyber security incidents affecting Australian businesses, 418 of which involved systems of national interest (SNI) and critical infrastructure (CI).

Some of the known ‘publicised’ high profile security incidents in Australia include:

  • Red Cross – 1.28 million blood donor records from 2010 published to a publicly facing website in Oct 2016;
  • Menulog – 1.1 million customer records compromised including names, Phone Numbers, Addresses and Order Histories in 2016;
  • NAB – 60,000 customer records were sent to the wrong website last December;
  • Big W – Personal details of Big W customers leaked online in Nov 2016;
  • David Jones & Kmart – An inherent vulnerability within the online portals of David Jones and Kmart was used to compromise customer records in late 2015; and
  • Telstra – Pacnet, an Asian subsidiary of Telstra, was compromised in 2015 in an attack affecting thousands of customers, including federal government departments/agencies.

For more information on how prepared Australian organisations (Government/ Private) are to meet the ever-growing cyber security threat, please look at the ACSC Cyber Security Survey 2016.

Does this apply to you & what is personal information?

This scheme applies to entities that have an obligation under Australian Privacy Principle 11 (APP 11) of the Privacy Act to protect the Personally Identifiable Information (PII) they hold.

[(s 26W(1) (a)) – ‘Personal information’ (PII) is defined in s 6(1) of the Privacy Act to include information or an opinion about an identified individual, or an individual who is reasonably identifiable, whether the information or opinion is true or not, and whether the information or opinion is recorded in a material form or not].

The term ‘personal information’ encompasses a broad range of information. A number of different types of information are explicitly recognised as constituting personal information under the Privacy Act. The following are all types of personal information:

  • ‘Sensitive information’; (includes information or opinion about an individual’s racial or ethnic origin, political opinion, religious beliefs, sexual orientation or criminal record, provided the information or opinion otherwise meets the definition of personal information)
  • ‘Health information’; (which is also ‘sensitive information’)
  • ‘Credit information’ (financial information);
  • ‘Employee record’ information; (subject to exemptions) and
  • ‘Tax file number information’.

Information that is not explicitly recognised as personal information under the Privacy Act may still be explicitly recognised as personal information under other legislation.

Further, the definition of personal information is not limited to information about an individual’s private or family life, but extends to any information or opinion that is about the individual, from which they are reasonably identifiable. This can include information about an individual’s business or work activities.

  • Example 1: A customer’s name, phone number and email address are collected by a business or government agency to create a customer contact file. The customer contact file constitutes personal information, as the customer is the subject of the record.
  • Example 2: Information that a person was born with foetal alcohol syndrome reveals that their biological mother consumed alcohol during her pregnancy. This information may therefore be personal information about the person’s mother as well as about the person themselves.

For detailed information on what constitutes personal information, please click here.

Entities covered by the NDB scheme

Australian Government agencies (and the Norfolk Island administration) and all businesses and not-for-profit organisations with an annual turnover of more than $3 million have responsibilities under the Privacy Act, subject to some exceptions.

Some small business operators (organisations with a turnover of $3 million or less) are also covered by the Privacy Act including:

  • Private sector health service providers. Organisations providing a health service include:
    • Traditional health service providers, such as private hospitals, day surgeries, medical practitioners, pharmacists and allied health professionals;
    • Complementary therapists, such as naturopaths and chiropractors; and
    • Gyms and weight loss clinics.
  • Childcare centres, private schools and private tertiary educational institutions;
  • Businesses that sell or purchase personal information; and
  • Credit reporting bodies.

For more information about your responsibilities under the Privacy Act click here

Steps Entities Can Take

The reasonable steps entities should take to ensure the security of personal information will depend on their circumstances, including the following:

  • The nature of the entity holding the personal information;
  • The amount and sensitivity of the personal information held;
  • The possible adverse consequences for an individual;
  • The information handling practices of the entity holding the information;
  • The practicability of implementing the security measure, including the time and cost involved; and
  • Whether a security measure is itself privacy invasive.

The circumstances outlined above will influence the reasonable steps an organisation should take to destroy, de-identify or classify personal information.

It is important that entities take reasonable steps to protect the information they hold, as a data breach could have a very significant impact on their reputation and ongoing business operations.

The OAIC provides guidance on reasonable steps organisations can undertake here.

Where to begin

A good starting point is the Privacy Management Framework (Framework), which sets out the steps the OAIC expects you to take to meet your ongoing compliance obligations under APP 1.2.

Below are some of the steps entities can take to improve their security posture and comply with the Australian Privacy Principles (APP 11):

  • Step 1: Entities can embed a culture of privacy and compliance by:
    • Treating personal information as valuable (a first step is to classify unstructured data);
    • Assigning accountability to an individual to manage privacy;
    • Adopting privacy-by-design principles in all projects and decisions;
    • Developing and implementing privacy management plans that align with business objectives and privacy obligations; and
    • Implementing a reporting structure and capturing non-compliance incidents.
  • Step 2: Entities can establish robust and effective privacy practices, procedures and systems by:
    • Keeping information up to date irrespective of its physical location or whether it is held by third parties;
    • Developing and maintaining a clearly articulated, up-to-date privacy policy that aligns with your privacy obligations (this extends to your information security policies, which are largely interlinked, so it is important to keep the whole set current);
    • Developing and maintaining processes to ensure you handle personal information in accordance with your privacy obligations across its entire life-cycle: in transit, in use and at rest;
    • Performing a risk assessment to identify, assess and manage privacy risks across the business;
    • Undertaking Privacy Impact Assessments (PIAs) to make sure you are compliant with the privacy laws; and
    • Developing a data breach response process, or Security Incident Response Plan (SIRP), to guide and assist you in responding effectively to a data breach.
  • Step 3: Evaluate your privacy practices, procedures and systems (assurance)
    As security is a continuous improvement process, it is important to ensure all your practices, procedures, processes and systems are working effectively. Assurance activities should include:
    • Monitoring, review and measurement of all your privacy and compliance obligations against your privacy management framework; and
    • Risk assessments of third-party service providers and contractors.
  • Step 4: Enhance your response to privacy issues and concerns
    • Use the results from Step 3 to update and uplift your security and privacy risk profile across your people, process and technology areas; and
    • Monitor and address new security risks and threats by implementing good system hygiene. A good starting point is to implement the recommendations from the Australian Signals Directorate, the Australian Cyber Security Centre and CERT Australia, which provide mitigation strategies to help organisations mitigate cyber security incidents.
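Classifying unstructured data (the first bullet in Step 1 above) is often bootstrapped with simple pattern matching before a dedicated discovery tool is introduced. A minimal sketch, with purely illustrative patterns and labels:

```python
import re

# Illustrative patterns only; real PII discovery covers many more
# identifier types (names, addresses, Medicare numbers, TFNs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone_au": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
}

def classify(text: str) -> set:
    """Return the set of PII categories detected in a piece of free text."""
    return {label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

record = "Contact Jane on 0412 345 678 or jane.doe@example.com"
print(sorted(classify(record)))  # ['email', 'phone_au']
```

Records that come back with a non-empty category set can then be routed into a governed, access-controlled store rather than left on shared drives.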

Whilst the introduction of the new legislation is a good opportunity to evaluate and measure your organisation's compliance with the Privacy Act provisions, it is also a good prompt for all organisations to continually assess their risk against the ever-changing cyber threat landscape, for example by developing a maturity model that can drive further cyber risk reduction.

If you need any help with the above recommendations, have a chat with us today and find out how Kloud/Telstra can help you meet your specific NDB security and privacy obligations.

Static Security Analysis of Container Images with CoreOS Clair

Container security is (or should be) a concern to anyone running software on Docker Containers. Gone are the days when running random Images found on the internet was commonplace. Security guides for Containers are common now: examples from Microsoft and others can be found easily online.

The two leading Container Orchestrators also offer their own security guides: Kubernetes Security Best Practices and Docker security.

Container Image Origin

One of the single biggest factors in Container security is determined by the origin of container Images:

  1. It is recommended to run your own private Registry to distribute Images
  2. It is recommended to scan these Images against known vulnerabilities.

Running a private Registry is easy these days (Azure Container Registry for instance).

I will concentrate on the scanning of Images in the remainder of this post and show how to look for common vulnerabilities using CoreOS Clair. Clair is probably the most advanced non-commercial scanning solution for Containers at the moment, though it requires some elbow grease to run in this form. It's important to note that the GUI and Enterprise features are not free; they are sold under the Quay brand.

As security scanning is recommended as part of the build process through your favorite CI/CD pipeline, we will see how to configure Visual Studio Team Services (VSTS) to leverage Clair.

Installing CoreOS Clair

First we are going to run Clair in minikube for the sake of experimenting. I have used Fedora 26 and Ubuntu. I will assume you have minikube, kubectl and docker installed (follow the respective links to install each piece of software) and that you have initiated a minikube cluster by issuing the “minikube start” command.

Clair is distributed through a docker image or you can also compile it yourself by cloning the following Github repository:

In any case, we will run the following commands to clone the repository, and make sure we are on the release 2.0 branch to benefit from the latest features (tested on Fedora 26):

~/Documents/github|⇒ git clone
Cloning into 'clair'...
remote: Counting objects: 8445, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 8445 (delta 0), reused 2 (delta 0), pack-reused 8440
Receiving objects: 100% (8445/8445), 20.25 MiB | 2.96 MiB/s, done.
Resolving deltas: 100% (3206/3206), done.
Checking out files: 100% (2630/2630), done.

rafb@overdrive:~/Documents/github|⇒ cd clair
⇒ git fetch
⇒ git checkout -b release-2.0 origin/release-2.0
Branch release-2.0 set up to track remote branch release-2.0 from origin.
Switched to a new branch 'release-2.0'

The Clair repository comes with a Kubernetes deployment found in the contrib/k8s subdirectory as shown below. It’s the only thing we are after in the repository as we will run the Container Image distributed by Quay:

⇒ ls -l contrib/k8s
total 8
-rw-r--r-- 1 rafb staff 1443 Aug 15 14:18 clair-kubernetes.yaml
-rw-r--r-- 1 rafb staff 2781 Aug 15 14:18 config.yaml

We will modify these two files slightly to run Clair version 2.0 (for some reason the GitHub master branch carries an older version of the configuration file syntax, as highlighted in this GitHub issue).

In config.yaml, we will change the PostgreSQL source from:

source: postgres://postgres:password@postgres:5432/postgres?sslmode=disable

to:

source: host=postgres port=5432 user=postgres password=password sslmode=disable

In clair-kubernetes.yaml, we will change the version of the Clair image from latest to 2.0.1:
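In practice this amounts to pinning the image tag in the Deployment spec; the relevant fragment would look something like the following (the exact tag is illustrative, so check the Quay repository for the current 2.0.x release):

```yaml
containers:
  - name: clair
    # pin a specific release instead of the mutable :latest tag
    image: quay.io/coreos/clair:v2.0.1
```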





Once these changes have been made, we can deploy Clair to our minikube cluster by running those two commands back to back:

kubectl create secret generic clairsecret --from-file=./config.yaml 
kubectl create -f clair-kubernetes.yaml

By looking at the startup logs for Clair, we can see it fetches a vulnerability list at startup time:

[rbigeard@flanger ~]$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
clair-postgres-l3vmn   1/1       Running   1          7d
clair-snmp2            1/1       Running   4          7d
[rbigeard@flanger ~]$ kubectl logs clair-snmp2
{"Event":"fetching vulnerability updates","Level":"info","Location":"updater.go:213","Time":"2017-08-14 06:37:33.069829"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:88","Time":"2017-08-14 06:37:33.069960","package":"Ubuntu"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"oracle.go:119","Time":"2017-08-14 06:37:33.092898","package":"Oracle Linux"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"rhel.go:92","Time":"2017-08-14 06:37:33.094731","package":"RHEL"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"alpine.go:52","Time":"2017-08-14 06:37:33.097375","package":"Alpine"}

Scanning Images through Clair Integrations

Clair is just a backend and we therefore need a frontend to “feed” Images to it. There are a number of frontends listed on this page. They range from full Enterprise-ready GUI frontends to simple command line utilities.

I have chosen to use “klar” for this post. It is a simple command line tool that can be easily integrated into a CI/CD pipeline (more on this in the next section). To install klar, you can compile it yourself or download a release.

Once installed, it's very easy to use, and parameters are passed via environment variables. In the following example, CLAIR_OUTPUT is set to "High" so that we only see the most dangerous vulnerabilities, and CLAIR_ADDR is the address of Clair running on my minikube cluster.

Note that since I am pulling an image from an Azure Container Registry instance, I have also specified DOCKER_USER and DOCKER_PASSWORD variables in my environment.


Analysing 3 layers 
Found 26 vulnerabilities 
CVE-2017-8804: [High]  
The xdr_bytes and xdr_string functions in the GNU C Library (aka glibc or libc6) 2.25 mishandle failures of buffer deserialization, which allows remote attackers to cause a denial of service (virtual memory allocation, or memory consumption if an overcommit setting is not used) via a crafted UDP packet to port 111, a related issue to CVE-2017-8779. 
CVE-2017-10685: [High]  
In ncurses 6.0, there is a format string vulnerability in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
CVE-2017-10684: [High]  
In ncurses 6.0, there is a stack-based buffer overflow in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
CVE-2016-2779: [High]  
runuser in util-linux allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer. 
Unknown: 2 
Negligible: 15 
Low: 1 
Medium: 4 
High: 4

So Clair is showing us the four "High" level common vulnerabilities found in the nginx image that I pulled from Docker Hub. At the time of writing, this is consistent with the details listed on Docker Hub. It's not necessarily a deal breaker, as those vulnerabilities are only potentially exploitable by local users on the Container host, which means we need to protect the VMs that are running the Containers well!

Automating the Scanning of Images in Azure using a CI/CD pipeline

As a proof-of-concept, I created a “vulnerability scanning” Task in a build pipeline in VSTS.

Conceptually, the chain is as follows:

Container image scanning VSTS pipeline

I created an Ubuntu Linux VM and built my own VSTS agent following the published instructions, after which I installed klar.

I then built a Kubernetes cluster in Azure Container Service (ACS) (see my previous post on the subject which includes a script to automate the deployment of Kubernetes on ACS), and deployed Clair to it, as shown in the previous section.

Little gotcha here: my Linux VSTS agent and my Kubernetes cluster in ACS ran in two different VNets, so I had to enable VNet peering between them.

Once those elements are in place, we can create a git repo with a shell script that calls klar and a build process in VSTS with a task that will execute the script in question:

Scanning Task in a VSTS Build

The content of the script is very simple (this would obviously have to be improved for a production environment, but you get the idea):

CLAIR_ADDR=http://X.X.X.X:30060 klar Ubuntu

Once we run this task in VSTS, we get the list of vulnerabilities in the output which can be used to “break” the build based on certain vulnerabilities being discovered.
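For example, the "break the build" logic can be a small script that parses the summary counts from klar's output and fails when the High count exceeds a threshold. A sketch, assuming the summary format shown in the klar output earlier (adjust the parsing if your klar version prints differently):

```python
import re

def high_count(klar_output: str) -> int:
    """Extract N from the 'High: N' summary line of klar's output."""
    match = re.search(r"^High:\s*(\d+)", klar_output, re.MULTILINE)
    return int(match.group(1)) if match else 0

def should_fail(klar_output: str, threshold: int = 0) -> bool:
    """Fail the build when more than `threshold` High vulnerabilities exist."""
    return high_count(klar_output) > threshold

sample = """\
CVE-2016-2779: [High]
Unknown: 2
Negligible: 15
Low: 1
Medium: 4
High: 4
"""
print(should_fail(sample))  # True
```

In the VSTS task you would pipe klar's stdout into a script like this and exit non-zero when `should_fail` returns True, which fails the build step.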

Build output view in VSTS


Hopefully you have picked up some ideas on how to ensure the Container Images you run in your environments are secure, or at least know what potential issues you have to mitigate. A build task similar to the one described here could very well be part of a broader build process you use to build Container Images from scratch.

Set your eyes on the Target!


So in my previous posts I've discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms this could be your local Active Directory (authentication domain), but it could be anything, and with the adoption of cloud services these target systems are often what drives the need for robust IAM services.

Something we are often asked as IAM consultants is why. Why should corporate applications be integrated with an IAM service? It's a valid question. Sometimes, depending on what the system is and what it does, integrating with an IAM system isn't a practical solution, but more often there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Centralised authentication services (where supported)


With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system has its own. Some could be as simple as needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be driven largely by the application vendor, and to a lesser extent by the application owners and business requirements.

IAM systems are, for the most part, extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.

This could be as simple as requiring additional attributes that are not standard in either the IAM system or your source systems, or it could be the way in which you want the IAM system to interact with the application, i.e. utilising web services and building custom management agents to connect and synchronise data sets between the two.

At the root of all this, when using an IAM system you have a constant flow of data that is all stored within the "vault". This helps ensure that any changes to a user flow to all systems, not just the phone book; it also ensures that changes are tracked through the governance processes established and implemented as part of the IAM system. Changes made to a user's identity information within a target application can be easily traced, to the point of saying a change was made on this date and time because a change to this person's data occurred within the HR system at that time.


Most IAM systems have management agents or connectors (the terms vary depending on the vendor) built for the typical "out of the box" systems, and these will for the most part satisfy the requirements of many, so you don't tend to have to worry too much about them. But if you have "bespoke" systems that have been developed and built up over the years for your business, this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM service, custom management agents are built using the Extensible Connectivity Management Agent (ECMA) framework. How you build and develop management agents for FIM or MIM is an extensive discussion better suited to a separate post.

One of the "sticky" points here is that, most of the time, integrating an application requires elevated access to the application's back end in order to push data to and pull data from it. The way this is done through any IAM system, however, is through specific service accounts that are restricted to performing only the functions the integration requires.

Authentication and SSO

Application integration tightens the security of, and access to, application data through various mechanisms, and authentication plays a large part in the IAM process.

During the provisioning process, passwords are usually set when an account is created. This is done either by using random password generators (preferred) or by setting a specific temporary password. Either way, it's always done with the intent that the user resets their password when they first log on. Self-service functionality can let the user reset their password without ever having to know what the initial password was.
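A random initial password of the kind described above can be produced in a few lines; this sketch uses Python's secrets module (the length and alphabet are illustrative and should be aligned with your own password policy):

```python
import secrets
import string

def generate_initial_password(length: int = 16) -> str:
    """Generate a random temporary password for account provisioning.

    Loops until the candidate contains a lower-case letter, an upper-case
    letter and a digit, so it passes typical complexity rules.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)):
            return password

print(generate_initial_password())
```

Using the secrets module (rather than random) matters here, as it draws from a cryptographically secure source.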

Depending on the application, separate passwords might be created that need to be managed. In most cases IAM consultants and architects will try to minimise this to the point of it not being required at all, but that isn't always possible. In these situations, the IAM system has methods to manage this as well. In the Microsoft space this can be controlled through password synchronisation using the Password Change Notification Service (PCNS): if a user changes their main password, that change can be propagated to all the systems that hold separate passwords.
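Conceptually, a password change notification flow behaves like the toy sketch below. To be clear, PCNS itself is a Windows service hooked into Active Directory, not Python, and every name here is illustrative:

```python
import hashlib

class PasswordSyncService:
    """Toy model: propagate one password change to all connected systems."""

    def __init__(self):
        self.targets = {}  # target system name -> last pushed credential

    def register_target(self, name: str):
        self.targets[name] = None

    def on_password_change(self, new_password: str):
        # A real service pushes over a secure channel using each target's
        # own credential API; a bare hash is used here purely to illustrate
        # that every registered system receives the same change.
        digest = hashlib.sha256(new_password.encode()).hexdigest()
        for name in self.targets:
            self.targets[name] = digest

sync = PasswordSyncService()
sync.register_target("hr-app")
sync.register_target("crm")
sync.on_password_change("N3w-Secret!")
print(all(value is not None for value in sync.targets.values()))  # True
```

The point of the model is that the user changes one password and the service fans the change out, so no target system drifts out of sync.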

Most applications today use standard LDAP authentication to provide access to their application services, which makes the password management process much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials that you control. Examples of these standard protocols include SAML, OAuth and WS-Fed/WS-Trust, among others.

There is a growing shift in the industry towards cloud services for this, such as Microsoft Azure Active Directory, or any number of other services available today.
The obvious benefit of SSO is that you have a single username and password to remember. It also greatly reduces your business's security risk: from an auditing and compliance perspective, having a single authentication directory helps reduce the overall exposure your business has to compromise from external or internal threats.

Well, that about wraps it up. IAM is, for the most part, an enabler: it prepares your business for the consumption of cloud services, which can help reduce your overall IT spend over the coming years. But one thing I've highlighted throughout this series is requirements, requirements, requirements... repetitive, I know, but for IAM so crucially important.

If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.