Static Security Analysis of Container Images with CoreOS Clair

Container security is (or should be) a concern to anyone running software on Docker Containers. Gone are the days when running random Images found on the internet was commonplace. Security guides for Containers are common now: examples from Microsoft and others can be found easily online.

The two leading Container Orchestrators also offer their own security guides: Kubernetes Security Best Practices and Docker security.

Container Image Origin

One of the single biggest factors in Container security is the origin of container Images:

  1. It is recommended to run your own private Registry to distribute Images
  2. It is recommended to scan these Images against known vulnerabilities.

Running a private Registry is easy these days (Azure Container Registry for instance).

I will concentrate on the scanning of Images in the remainder of this post and show how to look for common vulnerabilities using CoreOS Clair. Clair is probably the most advanced non-commercial scanning solution for Containers at the moment, though it requires some elbow grease to run. It’s important to note that the GUI and Enterprise features are not free and are sold under the Quay brand.

As security scanning is recommended as part of the build process through your favorite CI/CD pipeline, we will see how to configure Visual Studio Team Services (VSTS) to leverage Clair.

Installing CoreOS Clair

First we are going to run Clair in minikube for the sake of experimenting. I have used Fedora 26 and Ubuntu. I will assume you have minikube, kubectl and docker installed (follow the respective links to install each piece of software) and that you have initiated a minikube cluster by issuing the “minikube start” command.

Clair is distributed through a docker image or you can also compile it yourself by cloning the following Github repository: https://github.com/coreos/clair.

In any case, we will run the following commands to clone the repository, and make sure we are on the release 2.0 branch to benefit from the latest features (tested on Fedora 26):

~/Documents/github|⇒ git clone https://github.com/coreos/clair
Cloning into 'clair'...
remote: Counting objects: 8445, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 8445 (delta 0), reused 2 (delta 0), pack-reused 8440
Receiving objects: 100% (8445/8445), 20.25 MiB | 2.96 MiB/s, done.
Resolving deltas: 100% (3206/3206), done.
Checking out files: 100% (2630/2630), done.

rafb@overdrive:~/Documents/github|⇒ cd clair
rafb@overdrive:~/Documents/github/clair|master
⇒ git fetch
rafb@overdrive:~/Documents/github/clair|master
⇒ git checkout -b release-2.0 origin/release-2.0
Branch release-2.0 set up to track remote branch release-2.0 from origin.
Switched to a new branch 'release-2.0'

The Clair repository comes with a Kubernetes deployment found in the contrib/k8s subdirectory as shown below. It’s the only thing we are after in the repository as we will run the Container Image distributed by Quay:

rafb@overdrive:~/Documents/github/clair|release-2.0
⇒ ls -l contrib/k8s
total 8
-rw-r--r-- 1 rafb staff 1443 Aug 15 14:18 clair-kubernetes.yaml
-rw-r--r-- 1 rafb staff 2781 Aug 15 14:18 config.yaml

We will modify these two files slightly to run Clair version 2.0 (for some reason the GitHub master branch carries an older version of the configuration file syntax – as highlighted in this GitHub issue).

In the config.yaml, we will change the postgresql source from:

source: postgres://postgres:password@postgres:5432/postgres?sslmode=disable

to

source: host=postgres port=5432 user=postgres password=password sslmode=disable

In clair-kubernetes.yaml, we will change the version of the Clair image from latest to 2.0.1:

From:

image: quay.io/coreos/clair

to

image: quay.io/coreos/clair:v2.0.1

Once these changes have been made, we can deploy Clair to our minikube cluster by running those two commands back to back:

kubectl create secret generic clairsecret --from-file=./config.yaml 
kubectl create -f clair-kubernetes.yaml

By looking at the startup logs for Clair, we can see it fetches a vulnerability list at startup time:

[rbigeard@flanger ~]$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
clair-postgres-l3vmn   1/1       Running   1          7d
clair-snmp2            1/1       Running   4          7d
[rbigeard@flanger ~]$ kubectl logs clair-snmp2
...
{"Event":"fetching vulnerability updates","Level":"info","Location":"updater.go:213","Time":"2017-08-14 06:37:33.069829"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:88","Time":"2017-08-14 06:37:33.069960","package":"Ubuntu"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"oracle.go:119","Time":"2017-08-14 06:37:33.092898","package":"Oracle Linux"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"rhel.go:92","Time":"2017-08-14 06:37:33.094731","package":"RHEL"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"alpine.go:52","Time":"2017-08-14 06:37:33.097375","package":"Alpine"}

Scanning Images through Clair Integrations

Clair is just a backend and we therefore need a frontend to “feed” Images to it. There are a number of frontends listed on this page. They range from full Enterprise-ready GUI frontends to simple command line utilities.

I have chosen to use “klar” for this post. It is a simple command line tool that can be easily integrated into a CI/CD pipeline (more on this in the next section). To install klar, you can compile it yourself or download a release.

Once installed, it’s very easy to use and parameters are passed using environment variables. In the following example, CLAIR_OUTPUT is set to “High” so that we only see the most dangerous vulnerabilities. CLAIR_ADDR is the address of Clair running on my minikube cluster.

Note that since I am pulling an image from an Azure Container Registry instance, I have specified DOCKER_USER and DOCKER_PASSWORD variables in my environment.

# CLAIR_OUTPUT=High CLAIR_ADDR=http://192.168.99.100:30060 \
klar-1.4.1-linux-amd64 romainregistry1.azurecr.io/samples/nginx

Analysing 3 layers 
Found 26 vulnerabilities 
CVE-2017-8804: [High]  
The xdr_bytes and xdr_string functions in the GNU C Library (aka glibc or libc6) 2.25 mishandle failures of buffer deserialization, which allows remote attackers to cause a denial of service (virtual memory allocation, or memory consumption if an overcommit setting is not used) via a crafted UDP packet to port 111, a related issue to CVE-2017-8779. 
https://security-tracker.debian.org/tracker/CVE-2017-8804 
----------------------------------------- 
CVE-2017-10685: [High]  
In ncurses 6.0, there is a format string vulnerability in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
https://security-tracker.debian.org/tracker/CVE-2017-10685 
----------------------------------------- 
CVE-2017-10684: [High]  
In ncurses 6.0, there is a stack-based buffer overflow in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
https://security-tracker.debian.org/tracker/CVE-2017-10684 
----------------------------------------- 
CVE-2016-2779: [High]  
runuser in util-linux allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer. 
https://security-tracker.debian.org/tracker/CVE-2016-2779 
----------------------------------------- 
Unknown: 2 
Negligible: 15 
Low: 1 
Medium: 4 
High: 4

So Clair is showing us the four “High” level common vulnerabilities found in the nginx image that I pulled from Docker Hub. At the time of writing, this is consistent with the details listed on Docker Hub. It’s not necessarily a deal breaker, as those vulnerabilities are only potentially exploitable by local users on the Container host, which means we need to protect the VMs running our Containers well!

Automating the Scanning of Images in Azure using a CI/CD pipeline

As a proof-of-concept, I created a “vulnerability scanning” Task in a build pipeline in VSTS.

Conceptually, the chain is as follows:

Container image scanning VSTS pipeline

I created an Ubuntu Linux VM and built my own VSTS agent following published instructions, after which I installed klar.

I then built a Kubernetes cluster in Azure Container Service (ACS) (see my previous post on the subject which includes a script to automate the deployment of Kubernetes on ACS), and deployed Clair to it, as shown in the previous section.

Little gotcha here: my Linux VSTS agent and my Kubernetes cluster in ACS ran in two different VNets so I had to enable VNet peering between them.

Once those elements are in place, we can create a git repo with a shell script that calls klar and a build process in VSTS with a task that will execute the script in question:

Scanning Task in a VSTS Build

The content of scan.sh is very simple (this would have to be improved for a production environment obviously, but you get the idea):

#!/bin/bash
CLAIR_ADDR=http://X.X.X.X:30060 klar Ubuntu

Once we run this task in VSTS, we get the list of vulnerabilities in the output which can be used to “break” the build based on certain vulnerabilities being discovered.

Build output view in VSTS
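For instance, a slightly more defensive version of scan.sh could parse the summary lines shown earlier and fail the build whenever any “High” severity vulnerabilities are reported. This is only a sketch: the Clair address remains a placeholder and the parsing assumes klar’s summary format shown above.

#!/bin/bash
# Fail the build if klar reports any "High" severity vulnerabilities.
# The image to scan is passed as the first argument.
set -euo pipefail

OUTPUT=$(CLAIR_OUTPUT=High CLAIR_ADDR=http://X.X.X.X:30060 klar "$1" || true)
echo "${OUTPUT}"

# klar ends its report with summary lines such as "High: 4".
HIGH_COUNT=$(echo "${OUTPUT}" | sed -n 's/^High:[[:space:]]*\([0-9]*\).*/\1/p' | head -n 1)

if [ "${HIGH_COUNT:-0}" -gt 0 ]; then
    echo "Failing build: ${HIGH_COUNT} high severity vulnerabilities found."
    exit 1
fi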

Summary

Hopefully you have picked up some ideas around how you can ensure Container Images you run in your environments are secure, or at least you know what potential issues you are having to mitigate, and that a build task similar to the one described here could very well be part of a broader build process you use to build Containers Images from scratch.

SSL Tunneling with socat in Docker to safely access Azure Redis on port 6379

Redis Cache is an advanced key-value store that we should have all come across in one way or another by now. Azure, AWS and many other cloud providers have fully managed offerings for it, which is “THE” way we want to consume it.  As a little bit of insight, Redis itself was designed for use within a trusted private network and does not support encrypted connections. Public offerings like Azure use TLS reverse proxies to overcome this limitation and provide security around the service.

However, some Redis client libraries out there do not talk TLS. This becomes a problem when they are part of other tools that you want to compose your applications with.

Solution? We bring in something that can help us do protocol translation.

socat – Multipurpose relay (SOcket CAT)

Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

https://linux.die.net/man/1/socat

In short: it is a tool that can establish a communication between two points and manage protocol translation between them.

An interesting fact is that socat is currently used to port forward docker exec onto nodes in Kubernetes. It does this by creating a tunnel from the API server to Nodes.

Packaging socat into a Docker container

One of the great benefits of Docker is that it allows you to work in sandbox environments. These environments are then fully transportable and can eventually become part of your overall application.

The following procedure prepares a container that includes the socat binary and common certificate authorities required for public TLS certificate chain validation.

We first create our docker-socat/Dockerfile:
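A minimal Dockerfile, assuming an Alpine base image, could look like this (note the ENTRYPOINT ["socat"], which is referenced later on):

FROM alpine:latest

# socat performs the relay; ca-certificates provides the public
# certificate authorities needed to validate the Azure TLS endpoint.
RUN apk add --no-cache socat ca-certificates

ENTRYPOINT ["socat"]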

Now we build a local Docker image by executing docker build -t socat-local docker-socat. You are free to push this image to a Docker Registry at this point.

Creating TLS tunnel into Azure Redis

To access Azure Redis we need two things:

  1. The FQDN: XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    where all the X’s represent your DNS name.
  2. The access key, found under the Access Keys menu of your Cache instance. I will call it THE-XXXX-PASSWORD

Let’s start our socat tunnel by spinning up the container we just built an image for. Notice I am binding port 6379 to my Desktop so that I can connect to the tunnel from localhost:6379 on my machine.
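Under the same assumptions as above (image tagged socat-local, FQDN placeholder as before), starting the tunnel would look something like this:

docker run -d --name socat -p 6379:6379 socat-local \
    -v \
    TCP-LISTEN:6379,fork,reuseaddr \
    openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380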

Now let’s have a look at the arguments I am passing in to socat (which is automatically invoked thanks to the ENTRYPOINT ["socat"] instruction we included when building the container image).

  1. -v
    For checking logs when doing docker logs socat
  2. TCP-LISTEN:6379,fork,reuseaddr
    – Start a socket listener on port 6379
    – fork to allow for subsequent connections (otherwise a one off)
    – reuseaddr to allow socat to restart and use the same port (in case a previous one is still held by the OS)

  3. openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    – Create a TLS connect tunnel to the Azure Redis Cache.

Testing connectivity to Azure Redis

Now I will just test my tunnel using redis-cli, which I can also use from a container.  In this case THE-XXXX-PASSWORD is the Redis Access Key.
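A sketch of that test, using the official redis image from Docker Hub to provide redis-cli:

docker run -it --rm --net host redis \
    redis-cli -h localhost -p 6379 -a THE-XXXX-PASSWORD ping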

The thing to notice here is the --net host flag. This instructs Docker not to create a new virtual NIC and namespace to isolate the container network but instead use the Host’s (my desktop) interface. This means that localhost in the container is really localhost on my Desktop.

If everything is set up properly and outgoing connections to Azure on port 6380 are allowed, you should get a PONG message back from redis.

Happy hacking!

Protect Your Business and Users from Email Phishing in a Few Simple Steps

The goal of email phishing attacks is to obtain personal or sensitive information from a victim, such as credit card numbers, passwords or usernames, for malicious purposes. That is to say, to trick a victim into performing an unwitting action aimed at stealing sensitive information from them. This form of attack is generally conducted by means of spoofed emails or instant messaging communications which try to deceive their target as to the nature of the sender and purpose of the email they’ve received. An example would be an email claiming to be from a bank asking for credential re-validation in the hope of stealing them by means of a cloned website.

Some examples of email Phishing attacks.

Spear phishing

Phishing attempts directed at specific individuals or companies have been termed spear phishing. Attackers may gather personal information about their target to increase their probability of success. This technique is by far the most successful on the internet today, accounting for 91% of attacks. [Wikipedia]

Clone phishing

Clone phishing is a type of phishing attack whereby a legitimate, and previously delivered, email containing an attachment or link has had its content and recipient address(es) taken and used to create an almost identical or cloned email. The attachment or link within the email is replaced with a malicious version and then sent from an email address spoofed to appear to come from the original sender. It may claim to be a resend of the original or an updated version to the original. This technique could be used to pivot (indirectly) from a previously infected machine and gain a foothold on another machine, by exploiting the social trust associated with the inferred connection due to both parties receiving the original email. [Wikipedia]

Whaling

Several phishing attacks have been directed specifically at senior executives and other high-profile targets within businesses, and the term whaling has been coined for these kinds of attacks. In the case of whaling, the masquerading web page/email will take a more serious executive-level form. The content will be crafted to target an upper manager and the person’s role in the company. The content of a whaling attack email is often written as a legal subpoena, customer complaint, or executive issue. Whaling scam emails are designed to masquerade as a critical business email, sent from a legitimate business authority. The content is meant to be tailored for upper management, and usually involves some kind of falsified company-wide concern. Whaling phishers have also forged official-looking FBI subpoena emails, and claimed that the manager needs to click a link and install special software to view the subpoena. [Wikipedia]

Staying ahead of the game from an end user perspective

  1. Take a very close look at the sender’s email address.

Phishing email will generally use an address that looks genuine but isn’t (e.g. accounts@paypals.com), or try to disguise the email’s real sender with what looks like a genuine address but isn’t, using HTML trickery (see below).

  2. Is the email addressed to you personally?

Companies with whom you have valid accounts will always address you formally by means of your name and surname. Formulations such as ‘Dear Customer’ are a strong indication the sender doesn’t know you personally and the email should perhaps be avoided.

  3. What web address is the email trying to lure you to?

Somewhere within a phishing email, often surrounded by links to completely genuine addresses, will be one or more links to the means by which the attacker is to steal from you. In many cases this is a web site that looks genuine enough, however there are a number of ways of confirming its validity.

  4. Hover your cursor over any link you receive in an email before you click it if you’re unsure, because doing so will reveal the real destination, sometimes hidden behind deceptive HTML. Also look at the address very closely. The deceit may be obvious or well hidden in a subtle typo (e.g. accouts@app1e.com).

a. Be wary of URL redirection services such as bit.ly which hide the ultimate destination of a link.

b. Be wary of very long URLs. If in doubt, do a Google search for the root domain.

  5. Does the email contain poor grammar and spelling mistakes?

Many times the quality of a phishing email isn’t up to the general standard of a company’s official communications. Look for spelling mistakes, barbarisms, grammatical errors and odd characters in the email as a sign that something may be wrong.


Mitigating the impact of Phishing attacks against an organization

  1. Implement robust email and web access filtering.

  2. User education.

  3. Deploy an antivirus endpoint protection solution.

  4. Deploy Phishing attack aware endpoint protection software.


Windows 10 Domain Join + AAD and MFA Trusted IPs

Background

Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature.   For those that are new to this, the short version is that this capability is designed to make it a little easier on the end user experience by allowing you to define a set of ‘trusted locations’ (e.g. your corporate network) in which MFA is not required.

This capability works via two methods:

  • Defining a set of ‘Trusted’ IP addresses.  These IP addresses will be the public facing IP addresses of your Web Proxies and/or network gateways and firewalls
  • Utilising issued claims from Federated Users.   This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that this user is coming from a ‘trusted location’.  Enabling this capability is discussed in this article.

The Problem

Now, the latter of these is what needs further consideration when you are looking at moving to the ‘modern world’ of Windows 10 and Azure AD (AAD).  Unfortunately, due to some changes made in the way that Win10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a ‘trusted location’ for MFA will no longer work.

To understand why this is the case, I highly encourage that you first read Jairo Cadena’s truly excellent blog series that discusses in detail how Win10 AAD SSO and its associated services work.  The key takeaways from those posts are that Win10 now has this concept of a Primary Refresh Token (PRT) and with this approach to authentication you now have the following changes:

  • The PRT is what is used to obtain access tokens to AAD applications
  • The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
  • The use of the PRT is built into the Windows 10 credential provider.  Both IE and Edge know to utilise the PRT when communicating with AAD
  • It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
    • That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
    • Instead, the auth flow is a lot more streamlined:  Browser –> Login and provide PRT to AAD –> AAD grants access

Hopefully from this auth flow change you can see why Microsoft have done this.  Because the old way relied on IWA to perform ‘seamless’ SSO, it only worked when the device was domain joined and you had line of sight to a DC to perform Kerberos.  So when connecting externally, you would always see the prompt from the ADFS forms based authentication.  In the new way, whenever an auth prompt came in from AAD, the credential provider could see this and immediately provide the cached PRT, providing SSO regardless of your network location.  It also meant that you no longer needed a domain joined machine to achieve ‘seamless’ SSO!

The side effect though is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility of context based claims like insidecorporatenetwork, which subsequently means that the specific Trusted IPs feature no longer works.   While this is most commonly used for MFA scenarios, be aware that this will also apply to any Azure AD Conditional Access rules you define that use the Trusted IPs criteria (e.g. block access to app when external).

Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a Fiddler capture, and compare the sign-in experience differences between IE and Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware)

The Solution/Workaround?

So, you might ask, how do you fix this apparent gap in capability?   Does this mean you’re going backwards now?   For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable in order to drive MFA (or conditional access) behaviours between internal and external users.   The federated user claim method was a simple, low admin, way of solving that problem.

To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve.  If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.

When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their ‘trustworthiness’ level through a separate means.  This might be through a security access card that lets them into an office location or utilising a corporate laptop that can perform a VPN connection.  Both of these ultimately let them connect to the internal network, and thus are what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.

So with that in mind, what you could then do is to also expand that criteria to include Domain Joined machines.  That is, if a user is utilising a corporate issued device that has been domain joined (and registered to AAD), this can now act as your “something you have” aspect of the MFA mantra to prove your trustworthiness, and so you no longer need to differentiate whether they are actually internal or external anymore.

To achieve this, you’ll need to use Azure AD Conditional Access policies, and modify your Access Grant rules to look something like that below:

Win10PRT1

You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.

Side Note:  If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.

Side Note 2: You can also include Intune managed mobile devices in your ‘bypass criteria’ if you include the Require device to be marked as compliant criterion as well.

Fun Fact: You’ll note that in my image the (preview) reference for ‘require one of the selected controls’ is still there.  This is because until recently (approx. May/June 2017), the MFA or domain joined device criteria didn’t actually work because of the behaviour/order of how the evaluations were being done.  When AAD was evaluating the domain joined criteria, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an ‘or’ scenario.   This has now been fixed and I expect the (preview) tag to be removed soon.

Summary

The move to the modern ‘anywhere, any device’ approach to end user computing means that there is a need to start re-assessing how you approach security.  Old world views of security being defined via network boundaries will eventually disappear and instead you’ll need to consider user- and device-based contexts to define when to initiate security controls.

With Windows 10’s approach to authentication with AAD, internal and external access is no longer relevant and should not be used for your criteria in driving MFA or conditional access. Instead, use the device based conditions such as ‘device compliance’ or ‘domain join’ as one of your deciding factors.

Social Engineering Is A Threat To Your Organisation

Of the many attacks, hacks and exploits perpetrated against organisations, one of the most common vulnerabilities businesses face and need to guard against is the result of the general goodness, or weakness depending on how you choose to look at it, of our human natures, exploited through means of social engineering.

Social engineering is a very common problem in cyber security. It consists of the simple act of getting an individual to unwittingly perform an unsanctioned or undesirable action under false pretenses, whether granting access to a system, clicking a poisoned link, revealing sensitive information or any other improperly authorised action. The act relies on the trusting nature of human beings and their drive to help and work with one another. All of which makes social engineering hard to defend against and detect.

Some of the better known forms of social engineering include:

Phishing

Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business—a bank, or credit card company—requesting “verification” of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate—with company logos and content—and has a form requesting everything from a home address to an ATM card’s PIN or a credit card number. [Wikipedia]

Tailgating

An attacker, seeking entry to a restricted area secured by unattended, electronic access control, e.g. by RFID card, simply walks in behind a person who has legitimate access. Following common courtesy, the legitimate person will usually hold the door open for the attacker or the attackers themselves may ask the employee to hold it open for them. The legitimate person may fail to ask for identification for any of several reasons, or may accept an assertion that the attacker has forgotten or lost the appropriate identity token. The attacker may also fake the action of presenting an identity token. [Wikipedia]

Baiting

Baiting is like the real-world Trojan horse that uses physical media and relies on the curiosity or greed of the victim. In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and waits for victims. For example, an attacker may create a disk featuring a corporate logo, available from the target’s website, and label it “Executive Salary Summary Q2 2012”. The attacker then leaves the disk on the floor of an elevator or somewhere in the lobby of the target company. An unknowing employee may find it and insert the disk into a computer to satisfy his or her curiosity, or a good Samaritan may find it and return it to the company. In any case, just inserting the disk into a computer installs malware, giving attackers access to the victim’s PC and, perhaps, the target company’s internal computer network. [Wikipedia]

Water holing

Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe to do things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website he or she often visits. So, the attacker prepares a trap for the unwary prey at a favored watering hole. This strategy has been successfully used to gain access to some (supposedly) very secure systems. [Wikipedia]

Quid pro quo

Quid pro quo means something for something. An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually this person will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker will “help” solve the problem and, in the process, have the user type commands that give the attacker access or launch malware. [Wikipedia]

Now do something about it!

As threats to an organisation’s cyber security go, social engineering is a significant and prevalent threat, and not to be under-estimated.

However the following are some of the more effective means of guarding against it.

  1. Be vigilant…
  2. Be vigilant over the phone, through email and online.
  3. Be healthily skeptical and aware of your surroundings.
  4. Always validate the requestor’s identity before considering their request.
  5. Validate the request against another member of staff if necessary.

Means of mitigating social engineering attacks:

  1. Use different logins for all resources.
  2. Use multi-factor authentication for all sensitive resources.
  3. Monitor account usage.

Means of improving your staff’s ability to detect social engineering attacks:

  1. Educate your staff.
  2. Run social engineering simulation exercises across your organisation.

Ultimately, of course, the desired outcome of trying to bolster your organisation’s ability to detect a social engineering attack is a situation where the targeted user isn’t fooled by the attempt against their trust and acts accordingly, such as knowing not to click the link in an email purporting to help them retrieve their lost banking details.

Some additional tips:

  1. Approach with skepticism all unsolicited communications, no matter who the originator claims to be.
  2. Pay close attention to the target URL of all links by hovering your cursor over them to hopefully reveal their true destination.
  3. Look to the HTTPS digital certificate of all sensitive websites you visit for identity information.
  4. Use spam filtering, antivirus software and anti-phishing software.

Cloud Security Research: Cross-Cloud Adversary Analytics

Newly published research from security firm Rapid7 is painting a worrying picture of hackers and malicious actors increasingly looking for new vectors against organizations with resources hosted in public cloud infrastructure environments.

Some highlights of Rapid7’s report:

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate – 86% and 74%, respectively – than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL Injection, PHP webshell injection and credentials attacks against ssh, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).

Findings included nearly a quarter of hosts deployed in IBM’s SoftLayer public cloud having databases publicly accessible over the internet, which should be a privacy and security concern to those organizations and their customers.

Many of Google’s cloud customers leave shell access publicly accessible over protocols such as SSH and, much worse still, telnet, which is worrying to say the least.

Businesses using the public cloud are being increasingly probed by outsiders looking for well known vulnerabilities such as OpenSSL Heartbleed (CVE-2014-0160), Stagefright (CVE-2015-1538) and Poodle (CVE-2014-3566), to name but a few.

Digging further into their methodologies to see whether these probes were random or targeted, it appears these actors are honing their skills in tailoring their probes and attacks to specific providers and organisations.

Rapid7’s research was conducted by means of honey traps: hosts and services made available solely for the purpose of capturing untoward activity with a view to studying how these malicious outsiders do their work. What’s more, the company has partnered with Microsoft, Amazon and others under the auspices of projects Heisenberg and Sonar to leverage big data analytics to mine the results of their findings and scan the internet for trends.

Case in point: project Heisenberg saw the deployment of honeypots in every geography in partnership with all major public cloud providers, and scanned for compromised digital certificates in those environments, while project Sonar scanned millions of digital certificates on the internet for signs of the same.

However, while the report leads to clear evidence showing that hackers are tailoring their attacks to different providers and organisations, it reads as somewhat more of an indictment of the poor standard of security being deployed by some organisations in the public cloud today than a statement on the security practices of the major providers.

The 2016 national exposure survey.

Read about the Heisenberg cloud project (slides).

Security Vulnerability Revealed in Azure Active Directory Connect


The existence of a new and potentially serious privilege escalation and password reset vulnerability in Azure Active Directory Connect (AADC) was recently made public by Microsoft.

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnectsync-whatis

Fixing the problem can be achieved by means of an upgrade to the latest available release of AADC 1.1.553.0.

https://www.microsoft.com/en-us/download/details.aspx?id=47594

The Microsoft security advisory qualifies the issue as important and was published on Technet under reference number 4033453:

https://technet.microsoft.com/library/security/4033453.aspx#ID0EN

Azure Active Directory Connect, as we know, takes care of all operations related to the synchronization of identity information between on-premises environments and Azure Active Directory in the cloud. The tool is also the recommended successor to Azure AD Sync and DirSync.

Microsoft were quoted as saying…

The update addresses a vulnerability that could allow elevation of privilege if Azure AD Connect Password writeback is mis-configured during enablement. An attacker who successfully exploited this vulnerability could reset passwords and gain unauthorized access to arbitrary on-premises AD privileged user accounts.

When setting up the permission, an on-premises AD Administrator may have inadvertently granted Azure AD Connect with Reset Password permission over on-premises AD privileged accounts (including Enterprise and Domain Administrator accounts)

In this case, as stated by Microsoft, the risk consists of a situation where a malicious administrator resets the password of an Active Directory user using “password writeback”, allowing the administrator in question to gain privileged access to a customer’s on-premises Active Directory environment.

Password writeback allows Azure Active Directory to write passwords back to an on-premises Active Directory environment, and helps simplify the process of setting up and managing complicated on-premises self-service password reset solutions. It also provides a rather convenient cloud based means for users to reset their on-premises passwords.

Users may look for confirmation of their exposure to this vulnerability by checking whether the feature in question (password writeback) is enabled and whether AADC has been granted reset password permission over on-premises AD privileged accounts.

A further statement from Microsoft on this issue read…

If the AD DS account is a member of one or more on-premises AD privileged groups, consider removing the AD DS account from the groups.

CVE reference number CVE-2017-8613 was attributed to the vulnerability.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8613

Another global ransomware attack is fast spreading !!!

With the events that have escalated since last night, here is a quick summary and initial response from Kloud on how organisations can take proactive steps to mitigate the current situation. Please feel free to distribute internally as you see fit.

Background

A powerful new wave of cyber-attacks (Petya) hit Europe on Tuesday in a possible reprise of a widespread ransomware assault in May that affected 150 countries.  As reported, this ransomware is targeting government and key infrastructure systems all over the world.

Those behind the attack appeared to have exploited the same type of hacking tool used in the WannaCry ransomware attack that infected hundreds of thousands of computers in May 2017 before a British researcher created a ‘kill-switch’. This is a different variant to the old WannaCry ransomware; however, this nasty piece of ransomware works very differently from any other known variants due to the following:

  • Petya uses the NSA Eternalblue exploit but also spreads in internal networks with WMIC and PSEXEC. In some cases fully patched systems can also get hit with this exploit;
  • Unlike other traditional ransomware, Petya does not encrypt files on a targeted system one by one;
  • Instead, Petya reboots victims’ computers, encrypts the hard drive’s master file table (MFT) and renders the master boot record (MBR) inoperable, restricting access to the full system by seizing information about file names, sizes, and location on the physical disk; and
  • Petya ransomware replaces the computer’s MBR with its own malicious code that displays the ransom note and leaves computers unable to boot.

The priority is to apply emergency patches and ensure you are up to date.

Make sure users across the business do not click on links and attachments from people or email addresses they do not recognise.

How is it spreading?

Like most of the ransomware attacks ‘Petya’ ransomware is exploiting SMBv1 EternalBlue exploit, just like WannaCry, and taking advantage of unpatched Windows machines.

Petya ransomware is successful in spreading because it combines both a client-side attack (CVE-2017-0199) and a network based threat (MS17-010).

EternalBlue is a Windows SMB exploit leaked by the infamous hacking group Shadow Brokers in its April 2017 data dump, who claimed to have stolen it from the US intelligence agency NSA, along with other Windows exploits.

What assets are at risk?

All unpatched computer systems and all users who do not practice safe online behaviour.

What is the impact?

The nature of the attack itself is not new. Ransomware spread by emails with malicious links or attachments has been increasing in recent years.

This ransomware attack follows a relatively typical formula:

  • Locks all the data on a computer system;
  • Provides instructions on what to do next, which includes a demand of ransom; (Typically US$300 ransom in bitcoins)
  • Demand includes paying ransom in a defined period of time otherwise the demand increases, leading to complete destruction of data; and
  • Uses RSA-2048 bit encryption, which makes decryption of data extremely difficult. (next to impossible in most cases)

Typically, these types of attacks do not involve the theft of information, but rather focus on generating cash by preventing critical business operations until the ransom is paid, or the system is rebuilt from unaffected backups.

This attack involving ransomware known as ‘NotPetya’, also referred to as ‘Petya’ or ‘Petwrap’ spreads rapidly, exploiting a weakness in unpatched versions of Microsoft Windows (Windows SMB1 vulnerability).

Immediate Steps if compromised…

Petya ransomware encrypts systems after rebooting the computer. So if your system is infected with Petya ransomware and it tries to restart, just do not power it back on.

“If machine reboots and you see a message to restart, power off immediately! This is the encryption process. If you do not power on, files are fine.” <Use a LiveCD or external machine to recover files>

Steps to mitigate the risk…

We need to act fast. Organisations need to ensure staff are made aware of the risk, reiterating additional precautionary measures, whilst simultaneously ensuring that IT systems are protected including:

  • Prioritise patching systems immediately which are performing critical business functions or holding critical data first;
  • Immediately patch Windows machines in the environment (post proper testing). The patch for the weakness identified was released in March 2017 as part of MS17-010 / CVE-2017-01;
  • Disable the unsecured, 30-year-old SMBv1 file-sharing protocol on your Windows systems and servers;
  • Since Petya ransomware is also taking advantage of the WMIC and PSEXEC tools to infect fully-patched Windows computers, you are also advised to disable WMIC (Windows Management Instrumentation Command-line);
  • Companies should forward a cyber security alert and communications to their employees (education is the key), requesting them to be vigilant at this time of heightened risk and reminding them to:
    • Not open emails from unknown sources.
    • Be wary of unsolicited emails that demand immediate action.
    • Not click on links or download email attachments sent from unknown users or which seem suspicious.
    • Follow clearly defined actions for reporting incidents.
  • Antivirus vendors have been working to release signatures in order to protect systems. Organisations should ensure that all systems are running current AV DAT signatures. Focus should be on maintaining currency;
  • Maintain up-to-date backups of files and regularly verify that the backups can be restored. Priority should be on ensuring systems performing critical business functions or holding critical data are verified first;
  • Monitor your network, system, media and logs for any malicious software, possible ex-filtration of data, abnormal behaviour or unauthorised network connections;
  • Practice safe online behaviour; and
  • Report all incidents to your (IT) helpdesk or Security Operations team immediately.

These attacks happen quickly and unexpectedly. You also need to act swiftly to close any vulnerabilities in your systems.

AND

What to do if you believe you have been attacked by “Petya” and need assistance?

Under new mandatory breach reporting laws organisations may have obligations to report this breach to the privacy commissioner (section 26WA).

If you or anyone in the business would like to discuss the impact of this attack or other security related matters, we are here to help.

Please do not hesitate to contact us for any assistance.

Ubuntu security hardening for the cloud.

Hardening Ubuntu Server Security For Use in the Cloud

The following describes a few simple means of improving Ubuntu Server security for use in the cloud. Many of the optimizations discussed below apply equally to other Linux based distributions, although the commands and settings will vary somewhat.

Azure cloud specific recommendations

  1. Use private key and certificate based SSH authentication exclusively and never use passwords.
  2. Never employ common usernames such as root, admin or administrator.
  3. Change the default public SSH port away from 22, as shown in the example below.
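For example, the listening port is controlled by the Port directive in /etc/ssh/sshd_config; the port number below is an arbitrary illustration, not a recommendation:

# /etc/ssh/sshd_config
Port 2222

# Apply the change:
sudo service ssh restart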

AWS cloud specific recommendations

AWS makes available a small list of recommendation for securing Linux in their cloud security whitepaper.

Ubuntu / Linux specific recommendations

1. Disable the use of all insecure protocols (FTP, Telnet, RSH and HTTP) and replace them with their encrypted counterparts such as sFTP, SSH, SCP and HTTPS

# Remove insecure services with apt on Ubuntu (package names vary by release);
# the equivalent on RHEL-based systems would use yum.
sudo apt-get remove xinetd openbsd-inetd nis tftpd telnetd rsh-server

2. Uninstall all unnecessary packages

# List installed packages and check for anything unnecessary...
dpkg --get-selections | grep -v deinstall
dpkg --get-selections | grep postgres
# ...then remove unwanted packages with apt (yum applies to RHEL-based systems).
sudo apt-get remove packageName

For more information: http://askubuntu.com/questions/17823/how-to-list-all-installed-packages

3. Run the most recent kernel version available for your distribution

For more information: https://wiki.ubuntu.com/Kernel/LTSEnablementStack

4. Disable root SSH shell access

Open the following file…

sudo vim /etc/ssh/sshd_config

… then change the following value to no.

PermitRootLogin yes

For more information: http://askubuntu.com/questions/27559/how-do-i-disable-remote-ssh-login-as-root-from-a-server

5. Grant shell access to as few users as possible and limit their permissions

Limiting shell access is an important means of securing a system. Shell access is inherently dangerous because of the risk of unlawful privilege escalation, as with any operating system; stolen credentials are a concern too.

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add an entry for each user to be allowed.

AllowUsers jim tom sally

For more information: http://www.cyberciti.biz/faq/howto-limit-what-users-can-log-onto-system-via-ssh/

6. Limit or change the IP addresses SSH listens on

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add the following.

ListenAddress <IP ADDRESS>

For more information:

http://askubuntu.com/questions/82280/how-do-i-get-ssh-to-listen-on-a-new-ip-without-restarting-the-machine

7. Restrict all forms of access to the host by individual IPs or address ranges

TCP wrapper based access lists can be included in the following files.

/etc/hosts.allow
/etc/hosts.deny

Note: Any changes to your hosts.allow and hosts.deny files take immediate effect, no restarts are needed.

Patterns

ALL : 123.12.

Would match all hosts in the 123.12.0.0 network.

ALL : 192.168.0.1/255.255.255.0

An IP address and subnet mask can be used in a rule.

sshd : /etc/sshd.deny

If the client list begins with a slash (/), it is treated as a filename. In the above rule, TCP wrappers looks up the file sshd.deny for all SSH connections.

sshd : ALL EXCEPT 192.168.0.15

Placed in /etc/hosts.deny, this will allow SSH connections only from the machine with IP address 192.168.0.15 and block all other connection attempts. You can use the options allow or deny to allow or restrict access on a per client basis in either of the files.

in.telnetd : 192.168.5.5 : deny
in.telnetd : 192.168.5.6 : allow

Warning: While restricting system shell access by IP address, be very careful not to lose access to the system by locking the administrative user out!

For more information: https://debian-administration.org/article/87/Keeping_SSH_access_secure

8. Check listening network ports

Check listening ports and uninstall or disable all unessential or insecure protocols and daemons.

netstat -tulpn

9. Install Fail2ban

Fail2ban is a means of dealing with unwanted system access attempts over any protocol against a Linux host. It uses rule sets to automate variable-length IP bans against sources of configurable activity patterns such as SPAM, (D)DOS or brute force attacks.

“Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.” – Wikipedia

For more information: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-ubuntu-14-04
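A minimal Ubuntu setup might look like the following sketch; the values are illustrative only, and the jail section name varies between fail2ban versions ([ssh] on older releases, [sshd] on newer ones):

# Install fail2ban; the default configuration already ships with an SSH jail.
sudo apt-get install fail2ban

# Local overrides belong in /etc/fail2ban/jail.local, e.g.:
#   [ssh]
#   enabled  = true
#   maxretry = 5
#   bantime  = 3600

# Restart the service to pick up the new configuration.
sudo service fail2ban restart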

10. Improve the robustness of TCP/IP

Add the following to /etc/sysctl.d/10-network-security.conf to harden your networking configuration…

sudo vim /etc/sysctl.d/10-network-security.conf

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

And load the new rules as follows.

sudo service procps start

For more information: https://blog.mattbrock.co.uk/hardening-the-security-on-ubuntu-server-14-04/

11. If you are serving web traffic, install mod-security

Web application firewalls can be helpful in warning of and fending off a range of attack vectors including SQL injection, (D)DOS, cross-site scripting (XSS) and many others.

“ModSecurity is an open source, cross-platform web application firewall (WAF) module. Known as the “Swiss Army Knife” of WAFs, it enables web application defenders to gain visibility into HTTP(S) traffic and provides a power rules language and API to implement advanced protections.”

For more information: https://modsecurity.org/

12. Install a firewall such as IPtables

IPtables is a highly configurable and very powerful Linux firewall which has a great deal to offer in terms of bolstering host-based security.

iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.” – Wikipedia.

For more information: https://help.ubuntu.com/community/IptablesHowTo

13. Keep all packages up to date at all times and install security updates as soon as possible

 sudo apt-get update        # Fetches the list of available updates
 sudo apt-get upgrade       # Strictly upgrades the current packages
 sudo apt-get dist-upgrade  # Installs updates (new ones)
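To have security updates applied automatically, one option on Ubuntu is the unattended-upgrades package; a minimal sketch:

# Install and enable automatic installation of security updates.
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure unattended-upgrades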

14. Install multifactor authentication for shell access

Nowadays it’s possible to use multi-factor authentication for shell access thanks to Google Authenticator.

For more information: https://www.digitalocean.com/community/tutorials/how-to-set-up-multi-factor-authentication-for-ssh-on-ubuntu-14-04
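A sketch of the usual steps on Ubuntu, assuming the libpam-google-authenticator package is available for your release:

# Install the PAM module, then generate a per-user secret
# (scan the resulting QR code with the Google Authenticator app).
sudo apt-get install libpam-google-authenticator
google-authenticator

# Enable it for SSH by adding the following line to /etc/pam.d/sshd:
#   auth required pam_google_authenticator.so
# and by setting this in /etc/ssh/sshd_config:
#   ChallengeResponseAuthentication yes

# Restart SSH to apply.
sudo service ssh restart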

15. Add a second level of authentication behind every web based login page

Stolen passwords are a common problem, whether as a result of a vulnerable web application, an SQL injection, a compromised end user computer or something else altogether. Adding a second layer of protection using .htaccess authentication, with credentials stored on the filesystem rather than in a database, is great added security.

For more information: http://stackoverflow.com/questions/6441578/how-secure-is-htaccess-password-protection

Encryption In The Cloud

Is it safe? 

Three simple yet chilling words immortalized by the 1976 movie Marathon Man starring Laurence Olivier and Dustin Hoffman, in which Olivier tries to discover by very unpleasant means whether the location of his stolen diamonds has been exposed.


Marathon Man (1976)

Well, had Sir Laurence encrypted that information, there would have been no need for him to worry, because he would have known that short of using a weak cypher, a vulnerable algorithm or a weak password, encrypted data has a very strong chance of remaining secret no matter what.

What is encryption and why should your business be using it?

Encryption is a means of algorithmically scrambling data so that it can only be read by someone with the required keys. Most commonly it protects our mobile and internet communications but it can also restrict access to the data at rest on our computers, in our data centers and in the public cloud. It protects our privacy and provides anonymity. And although encryption can’t stop data theft by any malicious actor who might try to steal it, it will certainly stop them from being able to access or read it.

Case in point: had Sony Pictures Entertainment been encrypting their data in 2014, it would have been much harder for the perpetrators of the huge data theft against their corporate systems to extract any information from what they stole. As it was, none of it was encrypted, and much of it was leaked over the internet, including a lot of confidential business information and several unreleased movies.

Several years on and a string of other major data breaches later, figures published in the Ponemon Institute‘s 2016 Cloud Data Security study reveal persisting trends.

Research, conducted by the Ponemon Institute, surveyed 3,476 IT and IT security professionals in the United States, United Kingdom, Australia, Germany, France, Japan, Russian Federation, India and Brazil about the governance policies and security practices their organizations have in place to secure data in cloud environments.

On the importance of encrypting your data in the cloud: 72% of respondents said the ability to encrypt data was important and 86% said it would become even more important over the next two years. Yet only 42% of respondents were actively using it to secure sensitive data, and only 55% of those stated their organization had control of their keys.

Why is control of your encryption keys important?

While encryption may add a significant layer of control over access to your data, sharing your keys with third parties such as a service provider can still leave room for unwanted access. Whether it’s a disgruntled employee or a government agency with legal powers, the end result might still be a serious breach of your privacy. Which is why, if you have taken the trouble of encrypting your data, you really should also give serious consideration to your key management policy.

How secure is encryption today?

The relative strengths and weaknesses of the various algorithms available vary, of course. However, AES, which is one of the more widely used algorithms today, sports three different block ciphers, AES-128, AES-192 and AES-256, each capable of encrypting and decrypting data in blocks of 128 bits using cryptographic keys of 128-bit, 192-bit and 256-bit lengths respectively. It is currently considered uncrackable, with any possible weaknesses deriving from errors made during its implementation in software.

What options are there for encrypting business data in the public cloud?

Disk level encryption is a means of securing virtual machines as well as their data, which allows the user to control their keys while protecting from disk theft, improper disk disposal and unauthorized access to the virtual container.

Software level encryption involves the data being encrypted before it is uploaded into cloud based storage. In this use case only the owner of the data has access to their keys. Tools for managing data in this manner include TrueCrypt and 7Zip among others, both of which support high standards of encryption such as AES-256, and the type of cloud storage used is often cold or long term archival, such as Azure Cool Blob Storage.
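As an illustration of the software level approach, 7Zip can produce an AES-256 encrypted archive before the data ever leaves your machine; the file names below are placeholders:

# Create a password-protected 7z archive using AES-256.
# -p prompts for a passphrase; -mhe=on also encrypts file names inside the archive.
7z a -p -mhe=on backup.7z ./sensitive-data/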

Filesystem level encryption consists of data being automatically encrypted in the cloud. The data is permanently encrypted until downloaded. Encryption, decryption and key management are transparent to end users. The data can be accessed while encrypted. Ownership of the required keys varies in this model but is often shared between the owner and provider. AWS S3 is an example of an object store with support for this kind of encryption.


Thales nShield Connect Hardware Security Module.

Hardware level encryption involves the use of a third party Hardware Security Module (HSM) which provides dedicated and exclusive generation, handling and storage of encryption keys. In this scenario, only the owner has access to the required keys; here again, the data can be accessed transparently while encrypted.

So, is it safe?

Well, going back to Sir Laurence’s dilemma: there are no absolutes in computer security and probably never will be. But it seems clear that encrypting any data you want to control, and managing the keys to it carefully, is sure to make it much safer, if not completely safe.