Windows 10 Domain Join + AAD and MFA Trusted IPs

Background

Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature. For those who are new to this, the short version is that this capability is designed to improve the end user experience by allowing you to define a set of 'trusted locations' (e.g. your corporate network) in which MFA is not required.

This capability works via two methods:

  • Defining a set of 'trusted' IP addresses. These will be the public-facing IP addresses of your web proxies and/or network gateways and firewalls
  • Utilising issued claims from federated users. This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that the user is coming from a 'trusted location'. Enabling this capability is discussed in this article.

The Problem

Now, the latter of these is what needs further consideration when you are looking to move to the 'modern world' of Windows 10 and Azure AD (AAD). Unfortunately, due to changes made in the way Windows 10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a 'trusted location' for MFA no longer works.

To understand why this is the case, I highly encourage you to first read Jairo Cadena's truly excellent blog series, which discusses in detail how Win10 AAD SSO and its associated services work. The key takeaways from those posts are that Win10 now has the concept of a Primary Refresh Token (PRT), and with this approach to authentication you now have the following changes:

  • The PRT is what is used to obtain access tokens to AAD applications
  • The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
  • The use of the PRT is built into the Windows 10 credential provider.  Both IE and Edge know to utilise the PRT when communicating with AAD
  • It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
    • That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
    • Instead, the auth flow is a lot more streamlined:  Browser –> Login and provide PRT to AAD –> AAD grants access

Hopefully this auth flow change makes it clear why Microsoft have done this. Because the old way relied on IWA to perform 'seamless' SSO, it only worked when the device was domain joined and had line of sight to a DC to perform Kerberos. So when connecting externally, you would always see the prompt from the ADFS forms-based authentication. In the new way, whenever an auth prompt comes in from AAD, the credential provider sees this and immediately provides the cached PRT, giving SSO regardless of your network location. It also means that you no longer need a domain joined machine to achieve 'seamless' SSO!

The side effect, though, is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility of context-based claims like insidecorporatenetwork, which means the Trusted IPs feature no longer works. While this is most commonly used for MFA scenarios, be aware that this also applies to any Azure AD Conditional Access rules you define that use the Trusted IPs criteria (e.g. block access to an app when external).

Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a Fiddler capture, and compare the sign-in experience between IE or Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware).
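Another quick check (a sketch, assuming Windows 10 1607 or later where the built-in device registration tool is available; field names can vary slightly between builds) is to run:

dsregcmd /status

In the output, the Device State section should show AzureAdJoined : YES and DomainJoined : YES for an AAD+DJ machine, and the SSO State section should show AzureAdPrt : YES once a PRT has been obtained.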

The Solution/Workaround?

So, you might ask, how do you fix this apparent gap in capability? Does this mean you're going backwards? For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable as a way to drive MFA (or conditional access) behaviours between internal and external users. The federated user claim method was a simple, low-admin way of solving that problem.

To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve.  If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.

When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their 'trustworthiness' level through a separate means. This might be through a security access card that lets them into an office location, or a corporate laptop that can perform a VPN connection. Both ultimately let them connect to the internal network, and that is what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.

So with that in mind, what you could then do is expand that criteria to include domain joined machines. That is, if a user is utilising a corporate-issued device that has been domain joined (and registered to AAD), this can now act as the "something you have" aspect of the MFA mantra to prove their trustworthiness, and you no longer need to differentiate whether they are actually internal or external.

To achieve this, you'll need to use Azure AD Conditional Access policies and modify your access grant rules to look something like the below:

[Screenshot: Conditional Access grant controls requiring multi-factor authentication or a domain joined device]

You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.
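As a quick sanity check (a sketch; the task path below is the standard location on Windows 10, so adjust if your build differs), you can confirm the automatic registration scheduled task exists and is enabled with:

Get-ScheduledTask -TaskPath "\Microsoft\Windows\Workplace Join\" | Select-Object TaskName, State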

Side Note:  If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.

Side Note 2: You can also include Intune-managed mobile devices in your 'bypass criteria' if you include the Require device to be marked as compliant criteria as well.

Fun Fact: You'll note that in my image the (preview) reference for 'require one of the selected controls' is still there. This is because until recently (approx. May/June 2017), the MFA or domain joined device criteria didn't actually work because of the behaviour/order of how the evaluations were being done. When AAD was evaluating the domain joined criteria, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an 'or' scenario. This has now been fixed and I expect the (preview) tag to be removed soon.

Summary

The move to the modern 'anywhere, any device' approach to end user computing means there is a need to start re-assessing how you approach security. Old-world views of security being defined via network boundaries will eventually disappear, and instead you'll need to consider user- and device-based contexts to define when to initiate security controls.

With Windows 10's approach to authentication with AAD, internal versus external location is no longer relevant and should not be used as your criteria for driving MFA or conditional access. Instead, use device-based conditions such as 'device compliance' or 'domain join' as one of your deciding factors.

Social Engineering Is A Threat To Your Organisation

Of the many attacks, hacks and exploits perpetrated against organisations, one of the most common vulnerabilities businesses face and need to guard against is the exploitation of the general goodness, or weakness depending on how you choose to look at it, of our human nature through social engineering.

Social engineering is a very common problem in cyber security. It consists of the simple act of getting an individual to unwittingly perform an unsanctioned or undesirable action under false pretenses, whether that is granting access to a system, clicking a poisoned link, revealing sensitive information or any other improperly authorised action. The act relies on the trusting nature of human beings and their drive to help and work with one another, all of which makes social engineering hard to defend against and detect.

Some of the better known forms of social engineering include:

Phishing

Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business—a bank, or credit card company—requesting “verification” of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate—with company logos and content—and has a form requesting everything from a home address to an ATM card’s PIN or a credit card number. [Wikipedia]

Tailgating

An attacker, seeking entry to a restricted area secured by unattended, electronic access control, e.g. by RFID card, simply walks in behind a person who has legitimate access. Following common courtesy, the legitimate person will usually hold the door open for the attacker or the attackers themselves may ask the employee to hold it open for them. The legitimate person may fail to ask for identification for any of several reasons, or may accept an assertion that the attacker has forgotten or lost the appropriate identity token. The attacker may also fake the action of presenting an identity token. [Wikipedia]

Baiting

Baiting is like the real-world Trojan horse that uses physical media and relies on the curiosity or greed of the victim. In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and waits for victims. For example, an attacker may create a disk featuring a corporate logo, available from the target’s website, and label it “Executive Salary Summary Q2 2012”. The attacker then leaves the disk on the floor of an elevator or somewhere in the lobby of the target company. An unknowing employee may find it and insert the disk into a computer to satisfy his or her curiosity, or a good Samaritan may find it and return it to the company. In any case, just inserting the disk into a computer installs malware, giving attackers access to the victim’s PC and, perhaps, the target company’s internal computer network. [Wikipedia]

Water holing

Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe to do things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website he or she often visits. So, the attacker prepares a trap for the unwary prey at a favored watering hole. This strategy has been successfully used to gain access to some (supposedly) very secure systems. [Wikipedia]

Quid pro quo

Quid pro quo means something for something. An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually this person will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker will “help” solve the problem and, in the process, have the user type commands that give the attacker access or launch malware. [Wikipedia]

Now do something about it!

As threats to an organisation's cyber security go, social engineering is a significant and prevalent threat, and not to be under-estimated.

However, the following are some of the more effective means of guarding against it.

  1. Be vigilant…
  2. Be vigilant over the phone, through email and online.
  3. Be healthily skeptical and aware of your surroundings.
  4. Always validate the requestor’s identity before considering their request.
  5. Validate the request against another member of staff if necessary.

Means of mitigating social engineering attacks:

  1. Use different logins for all resources.
  2. Use multi-factor authentication for all sensitive resources.
  3. Monitor account usage.

Means of improving your staff’s ability to detect social engineering attacks:

  1. Educate your staff.
  2. Run social engineering simulation exercises across your organisation.

Ultimately, of course, the desired outcome of bolstering your organisation's ability to detect a social engineering attack is a situation where the targeted user isn't fooled by the attempt against their trust and acts accordingly, such as knowing not to click the link in an email purporting to help them retrieve their lost banking details.

Some additional tips:

  1. Approach all unsolicited communications no matter who the originator claims to be with skepticism.
  2. Pay close attention to the target URL of all links by hovering your cursor over them to hopefully reveal their true destination.
  3. Look to the HTTPS digital certificate of all sensitive websites you visit for identity information.
  4. Use spam filtering, antivirus software and anti-phishing software.

Cloud Security Research: Cross-Cloud Adversary Analytics

Newly published research from security firm Rapid7 is painting a worrying picture of hackers and malicious actors increasingly looking for new vectors against organizations with resources hosted in public cloud infrastructure environments.

Some highlights of Rapid7’s report:

  • The six cloud providers in our study make up nearly 15% of available IPv4 addresses on the internet.
  • 22% of Softlayer nodes expose database services (MySQL & SQL Server) directly to the internet.
  • Web services are prolific, with 53-80% of nodes in each provider exposing some type of web service.
  • Digital Ocean and Google nodes expose shell (Telnet & SSH) services at a much higher rate – 86% and 74%, respectively – than the other four cloud providers in this study.
  • A wide range of attacks were detected, including ShellShock, SQL Injection, PHP webshell injection and credentials attacks against ssh, Telnet and remote framebuffer (e.g. VNC, RDP & Citrix).

Findings included nearly a quarter of hosts deployed in IBM's SoftLayer public cloud having databases publicly accessible over the internet, which should be a privacy and security concern to those organizations and their customers.

Many of Google's cloud customers are leaving shell access publicly accessible over protocols such as SSH and, much worse still, Telnet, which is worrying to say the least.

Businesses using the public cloud are being increasingly probed by outsiders looking for well-known vulnerabilities such as OpenSSL Heartbleed (CVE-2014-0160), Stagefright (CVE-2015-1538) and POODLE (CVE-2014-3566), to name but a few.

Digging further into their methodologies, to see whether these probes were random or targeted, it appears these actors are honing their skills in tailoring their probes and attacks to specific providers and organisations.

Rapid7's research was conducted by means of honey traps: hosts and services made available solely for the purpose of capturing untoward activity with a view to studying how these malicious outsiders do their work. What's more, the company has partnered with Microsoft, Amazon and others under the auspices of projects Heisenberg and Sonar to leverage big data analytics, mine the results of their findings and scan the internet for trends.

Case in point: project Heisenberg saw the deployment of honeypots in every geography in partnership with all major public cloud providers, and scanned for compromised digital certificates in those environments, while project Sonar scanned millions of digital certificates on the internet for signs of the same.

However, while the report provides clear evidence that hackers are tailoring their attacks to different providers and organisations, it reads as somewhat more of an indictment of the poor standard of security being deployed by some organisations in the public cloud today than a statement on the security practices of the major providers.

The 2016 national exposure survey.

Read about the Heisenberg cloud project (slides).

Security Vulnerability Revealed in Azure Active Directory Connect


The existence of a new and potentially serious privilege escalation and password reset vulnerability in Azure Active Directory Connect (AADC) was recently made public by Microsoft.

https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnectsync-whatis

Fixing the problem can be achieved by means of an upgrade to the latest available release of AADC 1.1.553.0.

https://www.microsoft.com/en-us/download/details.aspx?id=47594

The Microsoft security advisory rates the issue as important; it was published on TechNet under reference number 4033453:

https://technet.microsoft.com/library/security/4033453.aspx#ID0EN

Azure Active Directory Connect, as we know, takes care of all operations related to the synchronization of identity information between on-premises environments and Azure Active Directory in the cloud. The tool is also the recommended successor to Azure AD Sync and DirSync.

Microsoft were quoted as saying…

The update addresses a vulnerability that could allow elevation of privilege if Azure AD Connect Password writeback is mis-configured during enablement. An attacker who successfully exploited this vulnerability could reset passwords and gain unauthorized access to arbitrary on-premises AD privileged user accounts.

When setting up the permission, an on-premises AD Administrator may have inadvertently granted Azure AD Connect with Reset Password permission over on-premises AD privileged accounts (including Enterprise and Domain Administrator accounts)

In this case, as stated by Microsoft, the risk consists of a situation where a malicious administrator resets the password of an Active Directory user using password writeback, allowing the administrator in question to gain privileged access to a customer's on-premises Active Directory environment.

Password writeback allows Azure Active Directory to write passwords back to an on-premises Active Directory environment. It helps simplify the process of setting up and managing complicated on-premises self-service password reset solutions, and it also provides a rather convenient cloud-based means for users to reset their on-premises passwords.

Users can confirm their exposure to this vulnerability by checking whether the feature in question (password writeback) is enabled and whether AADC has been granted Reset Password permission over on-premises AD privileged accounts.
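A hedged PowerShell sketch of how you might check both conditions (the ADSync module ships with AAD Connect; the account distinguished name and the contoso domain below are purely illustrative):

# On the Azure AD Connect server
Import-Module ADSync
Get-ADSyncAADCompanyFeature          # inspect the password writeback setting in the output

# On a management host with the AD module: who holds "Reset Password" on a privileged account?
Import-Module ActiveDirectory
$resetPwdGuid = [Guid]"00299570-246d-11d0-a768-00aa006e0529"   # Reset Password (User-Force-Change-Password) extended right
(Get-Acl "AD:\CN=Administrator,CN=Users,DC=contoso,DC=com").Access |
    Where-Object { $_.ActiveDirectoryRights -match "ExtendedRight" -and $_.ObjectType -eq $resetPwdGuid } |
    Select-Object IdentityReference, AccessControlType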

A further statement from Microsoft on this issue read…

If the AD DS account is a member of one or more on-premises AD privileged groups, consider removing the AD DS account from the groups.

CVE reference number CVE-2017-8613 was attributed to the vulnerability.

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8613

Another global ransomware attack is fast spreading !!!

With the events that have escalated since last night, here is a quick summary and initial response from Kloud on how organisations can take proactive steps to mitigate the current situation. Please feel free to distribute internally as you see fit.

Background

A powerful new cyber-attack (Petya, as referred to below) hit Europe on Tuesday in a possible reprise of the widespread ransomware assault in May that affected 150 countries. As reported, this ransomware's demands are targeting government and key infrastructure systems all over the world.

Those behind the attack appear to have exploited the same type of hacking tool used in the WannaCry ransomware attack that infected hundreds of thousands of computers in May 2017 before a British researcher created a 'kill switch'. This is a different variant to the old WannaCry ransomware; however, this nasty piece of ransomware works very differently from any other known variants, for the following reasons:

  • Petya uses the NSA EternalBlue exploit but also spreads in internal networks with WMIC and PsExec. In some cases fully patched systems can also get hit with this exploit;
  • Unlike other traditional ransomware, Petya does not encrypt files on a targeted system one by one;
  • Instead, Petya reboots victims' computers, encrypts the hard drive's master file table (MFT) and renders the master boot record (MBR) inoperable, restricting access to the full system by seizing information about file names, sizes, and locations on the physical disk; and
  • Petya ransomware replaces the computer’s MBR with its own malicious code that displays the ransom note and leaves computers unable to boot.

The priority is to apply emergency patches and ensure you are up to date.

Make sure users across the business do not click on links and attachments from people or email addresses they do not recognise.

How is it spreading?

Like WannaCry, the 'Petya' ransomware exploits the SMBv1 EternalBlue vulnerability and takes advantage of unpatched Windows machines.

Petya ransomware is successful in spreading because it combines both a client-side attack (CVE-2017-0199) and a network based threat (MS17-010)

EternalBlue is a Windows SMB exploit leaked by the infamous hacking group Shadow Brokers in its April 2017 data dump, who claimed to have stolen it from the US intelligence agency NSA, along with other Windows exploits.

What assets are at risk?

All unpatched computer systems and all users who do not practice safe online behaviour.

What is the impact?

The nature of the attack itself is not new. Ransomware spread by emails with malicious links or attachments has been increasing in recent years.

This ransomware attack follows a relatively typical formula:

  • Locks all the data on a computer system;
  • Provides instructions on what to do next, which include a demand for ransom (typically US$300 in bitcoin);
  • Demands that the ransom be paid within a defined period of time, otherwise the demand increases, leading to complete destruction of data; and
  • Uses RSA-2048 bit encryption, which makes decryption of data extremely difficult (next to impossible in most cases).

Typically, these types of attacks do not involve the theft of information, but rather focus on generating cash by preventing critical business operations until the ransom is paid, or the system is rebuilt from unaffected backups.

This attack involves ransomware known as 'NotPetya', also referred to as 'Petya' or 'Petwrap', which spreads rapidly, exploiting a weakness in unpatched versions of Microsoft Windows (the Windows SMBv1 vulnerability).

Immediate Steps if compromised…

Petya ransomware encrypts systems after rebooting the computer. So if your system is infected with Petya ransomware and it tries to restart, do not power it back on.

“If machine reboots and you see a message to restart, power off immediately! This is the encryption process. If you do not power on, files are fine.” <Use a LiveCD or external machine to recover files>

Steps to mitigate the risk…

We need to act fast. Organisations need to ensure staff are made aware of the risk, reiterating additional precautionary measures, whilst simultaneously ensuring that IT systems are protected including:

  • Prioritise patching systems immediately which are performing critical business functions or holding critical data first;
  • Immediately patch Windows machines in the environment (post proper testing). The patch for the weakness identified was released in March 2017 as part of MS17-010 / CVE-2017-01;
  • Disable the unsecured, 30-year-old SMBv1 file-sharing protocol on your Windows systems and servers (see the sketch after this list);
  • Since Petya ransomware also takes advantage of the WMIC and PsExec tools to infect fully patched Windows computers, you are also advised to disable WMIC (Windows Management Instrumentation Command-line) where practical;
  • Companies should forward a cyber security alert and communications to their employees (education is the key), requesting them to be vigilant at this time of heightened risk and reminding them to:
    • Not open emails from unknown sources.
    • Be wary of unsolicited emails that demand immediate action.
    • Not click on links or download email attachments sent from unknown users or which seem suspicious.
    • Follow clearly defined actions for reporting incidents.
  • Antivirus vendors have been working to release signatures in order to protect systems. Organisations should ensure that all systems are running current AV DAT signatures. Focus should be on maintaining currency;
  • Maintain up-to-date backups of files and regularly verify that the backups can be restored. Priority should be on ensuring systems performing critical business functions or holding critical data are verified first;
  • Monitor your network, system, media and logs for any malicious software, possible ex-filtration of data, abnormal behaviour or unauthorised network connections;
  • Practice safe online behaviour; and
  • Report all incidents to your (IT) helpdesk or Security Operations team immediately.
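As a hedged PowerShell sketch of the SMBv1 hardening step above (cmdlet and feature availability varies by OS version; test before rolling out broadly):

Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol      # check the current state
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force       # disable server-side SMBv1
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol   # remove the SMBv1 feature on Windows 8.1/10 clients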

These attacks happen quickly and unexpectedly. You also need to act swiftly to close any vulnerabilities in your systems.

AND

What to do if you believe you have been attacked by "Petya" and need assistance?

Under new mandatory breach reporting laws, organisations may have obligations to report this breach to the Privacy Commissioner (section 26WA).

If you or anyone in the business would like to discuss the impact of this attack or other security related matters, we are here to help.

Please do not hesitate to contact us for any assistance.

Ubuntu security hardening for the cloud.

Hardening Ubuntu Server Security For Use in the Cloud

The following describes a few simple means of improving Ubuntu Server security for use in the cloud. Many of the optimizations discussed below apply equally to other Linux-based distributions, although the commands and settings will vary somewhat.

Azure cloud specific recommendations

  1. Use private key and certificate based SSH authentication exclusively and never use passwords.
  2. Never employ common usernames such as root , admin or administrator.
  3. Change the default public SSH port away from 22.

AWS cloud specific recommendations

AWS makes available a small list of recommendations for securing Linux in their cloud security whitepaper.

Ubuntu / Linux specific recommendations

1. Disable the use of all insecure protocols (FTP, Telnet, RSH and HTTP) and replace them with their encrypted counterparts such as sFTP, SSH, SCP and HTTPS

sudo apt-get remove --purge openbsd-inetd xinetd telnetd rsh-server tftpd

2. Uninstall all unnecessary packages

dpkg --get-selections | grep -v deinstall
dpkg --get-selections | grep postgres
sudo apt-get remove packageName

For more information: http://askubuntu.com/questions/17823/how-to-list-all-installed-packages

3. Run the most recent kernel version available for your distribution

For more information: https://wiki.ubuntu.com/Kernel/LTSEnablementStack

4. Disable root SSH shell access

Open the following file…

sudo vim /etc/ssh/sshd_config

… then change the following value to no.

PermitRootLogin yes

For more information: http://askubuntu.com/questions/27559/how-do-i-disable-remote-ssh-login-as-root-from-a-server
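Note that this and the following sshd_config changes only take effect once the SSH daemon re-reads its configuration; on Ubuntu the service is simply named ssh:

sudo service ssh restart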

5. Grant shell access to as few users as possible and limit their permissions

Limiting shell access is an important means of securing a system. Shell access is inherently dangerous because of the risk of unlawful privilege escalation, as with any operating system; stolen credentials are a concern too.

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add an entry for each user to be allowed.

AllowUsers jim tom sally

For more information: http://www.cyberciti.biz/faq/howto-limit-what-users-can-log-onto-system-via-ssh/

6. Limit or change the IP addresses SSH listens on

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add the following.

ListenAddress <IP ADDRESS>

For more information:

http://askubuntu.com/questions/82280/how-do-i-get-ssh-to-listen-on-a-new-ip-without-restarting-the-machine

7. Restrict all forms of access to the host by individual IPs or address ranges

TCP wrapper based access lists can be included in the following files.

/etc/hosts.allow
/etc/hosts.deny

Note: Any changes to your hosts.allow and hosts.deny files take immediate effect, no restarts are needed.

Patterns

ALL : 123.12.

Would match all hosts in the 123.12.0.0 network.

ALL : 192.168.0.1/255.255.255.0

An IP address and subnet mask can be used in a rule.

sshd : /etc/sshd.deny

If the client list begins with a slash (/), it is treated as a filename. In the above rule, TCP wrappers looks up the file sshd.deny for all SSH connections.

sshd : ALL EXCEPT 192.168.0.15

When placed in hosts.deny, this will allow SSH connections only from the machine with IP address 192.168.0.15 and block all other connection attempts. You can use the options allow or deny to allow or restrict access on a per-client basis in either of the files.

in.telnetd : 192.168.5.5 : deny
in.telnetd : 192.168.5.6 : allow

Warning: While restricting system shell access by IP address, be very careful not to lose access to the system by locking the administrative user out!

For more information: https://debian-administration.org/article/87/Keeping_SSH_access_secure

8. Check listening network ports

Check listening ports and uninstall or disable all unessential or insecure protocols and daemons.

netstat -tulpn

9. Install Fail2ban

Fail2ban is a means of dealing with unwanted system access attempts over any protocol against a Linux host. It uses rule sets to automate variable-length IP bans for sources of configurable activity patterns such as spam, (D)DoS or brute force attacks.

“Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.” – Wikipedia

For more information: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-ubuntu-14-04
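As an illustrative starting point (values are arbitrary and should be tuned; on older Fail2ban 0.8.x releases the jail is named [ssh] rather than [sshd]), a minimal SSH jail in /etc/fail2ban/jail.local might look like this:

[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 600
bantime  = 3600

Restart the service afterwards with sudo service fail2ban restart.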

10. Improve the robustness of TCP/IP

Add the following to /etc/sysctl.d/10-network-security.conf to harden your networking configuration. Open the file with, for example:

sudo vim /etc/sysctl.d/10-network-security.conf

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

And load the new rules as follows.

service procps start

For more information: https://blog.mattbrock.co.uk/hardening-the-security-on-ubuntu-server-14-04/

11. If you are serving web traffic install mod-security

Web application firewalls can be helpful in warning of and fending off a range of attack vectors including SQL injection, (D)DOS, cross-site scripting (XSS) and many others.

“ModSecurity is an open source, cross-platform web application firewall (WAF) module. Known as the “Swiss Army Knife” of WAFs, it enables web application defenders to gain visibility into HTTP(S) traffic and provides a power rules language and API to implement advanced protections.”

For more information: https://modsecurity.org/

12. Install a firewall such as IPtables

IPtables is a highly configurable and very powerful Linux firewall which has a great deal to offer in terms of bolstering host-based security.

iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.” – Wikipedia.
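As a minimal, illustrative default-deny inbound sketch (assumes SSH on the default port 22; persist the rules with a tool such as iptables-persistent so they survive a reboot):

sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -P INPUT DROP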

For more information: https://help.ubuntu.com/community/IptablesHowTo

13. Keep all packages up to date at all times and install security updates as soon as possible

 sudo apt-get update        # Fetches the list of available updates
 sudo apt-get upgrade       # Upgrades the currently installed packages
 sudo apt-get dist-upgrade  # Upgrades packages, adding or removing dependencies as needed
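If manual patching alone is likely to lag, Ubuntu's unattended-upgrades package can apply security updates automatically (a sketch; review /etc/apt/apt.conf.d/50unattended-upgrades before relying on it):

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades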

14. Install multifactor authentication for shell access

Nowadays it’s possible to use multi-factor authentication for shell access thanks to Google Authenticator.

For more information: https://www.digitalocean.com/community/tutorials/how-to-set-up-multi-factor-authentication-for-ssh-on-ubuntu-14-04
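A rough outline of the setup on Ubuntu (package and PAM module names per the stock repositories; test on a non-production host first, as a PAM mistake can lock you out):

sudo apt-get install libpam-google-authenticator
google-authenticator              # run as the SSH user to generate the TOTP secret

# /etc/pam.d/sshd – add:
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config – ensure:
ChallengeResponseAuthentication yes

sudo service ssh restart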

15. Add a second level of authentication behind every web based login page

Stolen passwords are a common problem, whether as a result of a vulnerable web application, an SQL injection, a compromised end user computer or something else altogether. Adding a second layer of protection using .htaccess authentication, with credentials stored on the filesystem rather than in a database, is great added security.

For more information: http://stackoverflow.com/questions/6441578/how-secure-is-htaccess-password-protection
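A hedged sketch of what that second layer can look like with Apache (the username and paths are illustrative; the protected directory needs AllowOverride AuthConfig enabled):

sudo apt-get install apache2-utils
sudo htpasswd -c /etc/apache2/.htpasswd someuser

# .htaccess in the protected directory:
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user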

Encryption In The Cloud

Is it safe? 

Three simple yet chilling words immortalized by the 1976 movie Marathon Man, starring Laurence Olivier and Dustin Hoffman, in which Olivier tries to discover by very unpleasant means whether the location of his stolen diamonds has been exposed.


Marathon Man (1976)

Well, had Sir Laurence encrypted that information, there would have been no need for him to worry, because he would have known that, short of using a weak cypher, a vulnerable algorithm or a weak password, encrypted data has a very strong chance of remaining secret no matter what.

What is encryption and why should your business be using it?

Encryption is a means of algorithmically scrambling data so that it can only be read by someone with the required keys. Most commonly it protects our mobile and internet communications but it can also restrict access to the data at rest on our computers, in our data centers and in the public cloud. It protects our privacy and provides anonymity. And although encryption can’t stop data theft by any malicious actor who might try to steal it, it will certainly stop them from being able to access or read it.

Case in point: had Sony Pictures Entertainment been encrypting their data in 2014, it would have been much harder for the perpetrators of the huge data theft against their corporate systems to extract anything useful from the information they stole. As it was, none of it was encrypted, and much of it was leaked over the internet, including a lot of confidential business information and several unreleased movies.

Several years on and a string of other major data breaches later, figures published in the Ponemon Institute‘s 2016 Cloud Data Security study reveal persisting trends.

Research, conducted by the Ponemon Institute, surveyed 3,476 IT and IT security professionals in the United States, United Kingdom, Australia, Germany, France, Japan, Russian Federation, India and Brazil about the governance policies and security practices their organizations have in place to secure data in cloud environments.

On the importance of encrypting data in the cloud, 72% of respondents said the ability to encrypt data was important and 86% said it would become even more important over the next two years. Yet only 42% of respondents were actively using encryption to secure sensitive data, and only 55% of those stated their organization had control of their keys.

Why is control of your encryption keys important?

While encryption may add a significant layer of control over access to your data, sharing your keys with third parties such as a service provider can still leave room for unwanted access. Whether it's a disgruntled employee or a government agency with legal powers, the end result might still be a serious breach of your privacy, which is why, if you have taken the trouble of encrypting your data, you really should also give serious consideration to your key management policy.

How secure is encryption today?

The relative strengths and weaknesses of the various algorithms available vary, of course. However, AES, one of the more widely used algorithms today, sports three different block ciphers, AES-128, AES-192 and AES-256, each capable of encrypting and decrypting data in blocks of 128 bits using cryptographic keys of 128-bit, 192-bit and 256-bit lengths. It is currently considered uncrackable, with any possible weaknesses deriving from errors made during its implementation in software.

What options are there for encrypting business data in the public cloud?

Disk level encryption is a means of securing virtual machines as well as their data, which allows the user to control their keys while protecting from disk theft, improper disk disposal and unauthorized access to the virtual container.

Software level encryption involves the data being encrypted before it is uploaded into cloud-based storage. In this use case only the owner of the data has access to their keys. Tools for managing data in this manner include TrueCrypt and 7-Zip among others, both of which support strong encryption such as AES-256. The type of cloud storage used is often cold or long-term archival storage, such as Azure Cold Blob Storage.
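For instance, a minimal sketch of this approach using 7-Zip (the p7zip-full package on Ubuntu provides the 7z command; the file names are illustrative):

# -p prompts for a passphrase, -mhe=on also encrypts the archive's file names (AES-256 for the 7z format)
7z a -p -mhe=on backup.7z /path/to/sensitive/data

The resulting archive can then be uploaded to the cold storage of your choice.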

Filesystem level encryption consists of data being automatically encrypted in the cloud. The data is permanently encrypted until downloaded. Encryption, decryption and key management are transparent to end users. The data can be accessed while encrypted. Ownership of the required keys varies in this model but is often shared between the owner and provider. AWS S3 is an example of an object store with support for this kind of encryption.


Thales nShield Connect Hardware Security Module.

Hardware level encryption involves the use of a third party Hardware Security Module (HSM) which provides dedicated and exclusive generation, handling and storage of encryption keys. In this scenario, only the owner has access to the required keys, here again the data can be accessed transparently while encrypted.

So, is it safe?

Well, going back to Sir Laurence's dilemma, there are no absolutes in computer security and probably never will be. But it seems clear that encrypting any data you want to control, and managing the keys to it carefully, is sure to make it much safer, if not completely safe.

Using Microsoft Azure Table Service REST API to collect data samples

Sometimes we need a simple solution for collecting data from multiple sources. The sources can be IoT devices or systems working on different platforms and in different places. Traditionally, integrators start thinking about implementing a custom centralised REST API with a database repository. That solution can take days to implement and test, it is expensive, and it requires hosting, maintenance and support. However, in many cases it is not needed at all. This post introduces the idea that the out-of-the-box Azure Tables REST API is good enough to start your data collection, research and analysis in no time. Moreover, the suggested solution offers a very convenient REST API that supports JSON objects and a very flexible NoSQL format. What's great is that you do not need to write lots of code or hire programmers. Anybody who understands how to work with a REST API, create headers and put JSON in the web request body can immediately start working on a project and sending data to very cheap Azure Tables storage. Additional benefits of using Azure Tables include native support in Microsoft Azure Machine Learning, and other statistical packages also allow you to download data from Azure Tables.

Microsoft provides Azure Tables SDKs for various languages and platforms. By all means use these SDKs; your life will be much easier. However, some systems don't have this luxury and require developers to work with the Azure Tables REST API directly. The only requirement for your system is that it should be able to execute web requests, and you should be able to work with the headers of those requests. Most systems satisfy this requirement. In this post, I explain how to form web requests and work with the latest Azure Table REST API from your code. I've also created reference code to support my findings. It is written in C#, but the technique can be replicated in other languages and on other platforms.

The full source code is hosted on GitHub here:

https://github.com/dimkdimk/AzureTablesRestDemo

Prerequisites.

You will need an Azure subscription. Create a new storage account there and create a test table named "mytable" in it. Below is the table I created in Visual Studio 2015.

[Screenshot: the test table viewed in Visual Studio 2015]

I’ve created a helper class that has two main methods: RequestResource and InsertEntity.

The full source code of this class is here:

Testing the class is easy. I've created a console application, prepared a few data samples and called our Azure Tables helper methods. The source code of this program is below.

The hardest part of calling the Azure Tables Web API is creating the signature string to form the Authorization header. It can be a bit tricky and not very clear for beginners. Please take a look at the detailed documentation that describes how to sign a string for the various Azure Storage services: https://msdn.microsoft.com/en-au/library/dd179428.aspx

To help you with the authorisation code, I've written an example of how it can be done in the class AzuretableHelper. Please take a look at the code that creates the strAuthorization variable. First, you will need to form a string that contains your canonical resource name and the current time in a specific format, including newline characters in between. Then, this string has to be signed with the HMAC-SHA256 algorithm. The key for this signing operation is the SharedKey that you can obtain from your Azure Storage Account as shown here:

[Screenshot: the access keys of the Azure Storage account]

The Authorization string has to be re-created for each request you execute; otherwise, the Azure API will reject your requests with an Unauthorized error message in the response.
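To make the flow concrete, here is a minimal PowerShell sketch of the same technique, using the slightly simpler SharedKeyLite scheme which the Table service also accepts (the account name, key and entity properties are placeholders; the C# helper on GitHub follows the same pattern):

# Insert one entity into "mytable" using SharedKeyLite signing
$account = "mystorageaccount"                               # placeholder storage account name
$key     = "<storage account key (Base64) from the portal>" # placeholder account key
$table   = "mytable"
$date    = [DateTime]::UtcNow.ToString("R")

# SharedKeyLite string-to-sign for the Table service: x-ms-date + canonical resource
$stringToSign = "$date`n/$account/$table"
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($key)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{
    "x-ms-date"     = $date
    "x-ms-version"  = "2015-12-11"
    "Authorization" = "SharedKeyLite ${account}:$signature"
    "Accept"        = "application/json;odata=nometadata"
}

# The entity is just a JSON object; PartitionKey and RowKey are mandatory
$entity = @{
    PartitionKey = "device01"
    RowKey       = [Guid]::NewGuid().ToString()
    Temperature  = 21.5
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://$account.table.core.windows.net/$table" `
    -Headers $headers -Body $entity -ContentType "application/json"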

The meaning of the other request headers is straightforward. You specify the version, format, and other attributes. To see the full list of methods you can perform on Azure Tables, and to read about all attributes and headers, refer to the MSDN documentation:

https://msdn.microsoft.com/library/dd179423.aspx

Once you've mastered this "geeky" method of using the Azure Tables Web API, you can send your data straight to your Azure Tables without any intermediate Web API facilities. Also, you can read Azure Tables data to receive configuration parameters or input commands from other applications. A similar approach can be applied to other Azure Storage APIs, for example Azure Blob storage or Azure Queue storage.


Azure Automation Runbooks with Azure AD Service Principals and Custom RBAC Roles

Reblogged from siliconvalve:

If you've ever worked in any form of systems administrator role then you will be familiar with process automation, even if only for simple tasks like automating backups. You will also be familiar with the pain of configuring and managing identities for these automated processes (expired password or disabled/deleted account ever caused you any pain?!)

While the cloud offers us many new ways of working and solving traditional problems, it also maintains many concepts we'd be familiar with from environments of old. As you might have guessed, the idea of specifying user context for automated processes has not gone away (yet).

In this post I am going to look at how, in Azure Automation Runbooks, we can leverage a combination of an Azure Active Directory Service Principal and an Azure RBAC Custom Role to configure a non-user identity with constrained execution scope.

The benefits of this approach are two-fold:

  1. No password expiry…


Windows Server 2012 R2 (ADFS 3.0): Migrating ADFS Configuration Database from WID to SQL

You already have a working ADFS setup which has been configured to use the Windows Internal Database (WID) to store its configuration database. However, things may have changed since you implemented it, and you may now have one (or more) of the below requirements, which will need an upgrade to SQL Server.

  • Need more than five federation servers in the ADFS Farm (supporting more than 10 relying parties)
  • Leverage high availability features of SQL or
  • Enable support for SAML artefact resolution or WS Federation token replay detection.

The below step-by-step procedure should help you with the migration of the ADFS configuration database from WID to SQL with minimal or no downtime (however, plan accordingly such that it has the least impact in case something goes wrong).

The steps also cover configuration of each of the ADFS servers (Primary and Secondary) in the farm to use the SQL Server for its configuration database.

For simplicity, I have used the below scenario, comprising:

Proposed Design

DMZ

  • Two Web Application Proxies (WAP) – wap1 and wap2
  • External load balancer (ELB) in front of the WAPs.

Private / Corporate network

  • Two ADFS Servers – adfs1 and adfs2
  • Internal Load Balancer (ILB) in front of the ADFS Servers
  • SQL Server (Standalone). Additional steps need to be performed (not covered in this blog) when using SQL Server with high availability options such as SQL Always-On or Merge Replication

Backups

Ensure you have a complete backup of your ADFS servers. You can use Windows Server Backup or your third-party backup solution to back up the ADFS servers.

Load Balancer Configuration

During the course of this exercise the internal load balancer will be configured multiple times to ensure a smooth migration with minimal impact to end users.

Remove the primary ADFS Server (adfs1) from the internal load balancer configuration such that all traffic is directed to the secondary server (adfs2).

Primary ADFS Server steps

  • Stop the ADFS windows service by issuing “net stop adfssrv” in an elevated command prompt or via the Windows Services Manager.

net stop adfssrv

  • Download and install SQL Server Management Studio (SSMS) (if not already present)
  • Launch SSMS in Administrator mode
  • Connect to your WID using \\.\pipe\MICROSOFT##WID\tsql\query as the server name in SSMS.

SSMS connect dialog

You should be able to see the two ADFS databases (AdfsArtifactStore and AdfsConfiguration) as shown below:

SSMS showing the two ADFS databases

  • To find the physical location of the ADFSConfiguration and ADFSArtifactStore databases in WID, run the below query by starting a 'New Query'. The default path is C:\Windows\WID\Data\.
SELECT name, physical_name AS current_file_location FROM sys.master_files

Results showing physical location of DB files

  • Restart WID from SSMS. This is just to ensure that there is no lock on the databases. Right-click on the WID instance and select 'Restart'.

Restarting the database

  • Now we need to detach both the databases. Run the below query on the WID using SSMS
USE [master]
GO
EXEC master.dbo.sp_detach_db @dbname = N'AdfsArtifactStore'
GO
EXEC master.dbo.sp_detach_db @dbname = N'AdfsConfiguration'
GO

Running the commands on the WID

  • Now copy the databases identified earlier from the Primary ADFS Server to your SQL Server’s Data directory (for example C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA).


SQL Server – Steps

  • On the SQL Server, bring up the SQL Server Management Studio (SSMS) and connect to the SQL instance (or default instance) where the ADFS databases will be hosted.
  • Create a login for the ADFS Windows service account (which was used for the initial ADFS setup and configuration). I used Contoso\svcadfs.

Adding SQL Server user

  • Now attach the databases copied earlier on the SQL server. Run the below using the SQL Server Management Studio. Modify the path as appropriate if the db files were copied to a location other than ‘C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA’

 USE [master]
 GO
 CREATE DATABASE [AdfsConfiguration] ON
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration.mdf' ),
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration_log.ldf' )
 FOR ATTACH
 GO

 CREATE DATABASE [AdfsArtifactStore] ON
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore.mdf' ),
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore_log.ldf' )
 FOR ATTACH
 GO
 ALTER DATABASE AdfsConfiguration set enable_broker with rollback immediate
 GO

  • On successful execution of the above, you should be able to see the two ADFS databases in SSMS (you may need to do a refresh if not displayed automatically)

Two databases shown in SSMS

  • Ensure that the ADFS service account has the "db_genevaservice" role membership on both databases (a T-SQL sketch follows below the screenshot)

Grant service account right database role
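If the login is not yet mapped into the databases, a T-SQL sketch along these lines will add the user and the role membership (the service account name follows the earlier Contoso\svcadfs example; adjust to your environment):

 USE [AdfsConfiguration]
 GO
 CREATE USER [Contoso\svcadfs] FOR LOGIN [Contoso\svcadfs]
 GO
 ALTER ROLE [db_genevaservice] ADD MEMBER [Contoso\svcadfs]
 GO

 USE [AdfsArtifactStore]
 GO
 CREATE USER [Contoso\svcadfs] FOR LOGIN [Contoso\svcadfs]
 GO
 ALTER ROLE [db_genevaservice] ADD MEMBER [Contoso\svcadfs]
 GO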

Firewall Configuration

Ensure that the SQL Server is reachable from the ADFS servers on port 1433. You may need to update network firewalls and / or host firewall configuration on the SQL Server (depending on the type of network setup you may have).

Primary ADFS Server Steps

  • Start the ADFS windows service by issuing “net start adfssrv” from an elevated command prompt or from the Windows Services Manager
  • Launch a PowerShell console in Administrator Mode and execute the below lines in order

 $temp= GEt-WmiObject -namespace root/ADFS -class SecurityTokenService
 $temp.ConfigurationdatabaseConnectionstring="data source=[sqlserver\instance];initial catalog=adfsconfiguration;integrated security=true"
 $temp.put()

Note: replace [sqlserver\instance] with the actual server\instance. If not running as a named instance, just use the server name. I am using 'SQLServer' as it is the hostname of the SQL Server being used in this example.

PowerShell Configuration

  • Change the connection string property in “AdfsProperties” by issuing the below command from the PowerShell console

Set-AdfsProperties -ArtifactDbConnection "Data Source=[sqlserver\instance];Initial Catalog=AdfsArtifactStore;Integrated Security=True"

Note: Change [sqlserver\instance]  with the name of your SQL server and instance (as applicable)

PowerShell Configuration

  • Restart the ADFS Service by executing "net stop adfssrv" and "net start adfssrv" from an elevated command prompt or from the Windows Services Manager.

Restarting service

  • To check if the configuration has been successful, run "Get-AdfsProperties" from a PowerShell console. You should see the ADFS properties listed (as below), with the key value being Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

This completes the migration of the ADFS configuration database from WID to SQL and also the configuration of the Primary ADFS server to use the SQL Database. Now we need to configure the secondary ADFS server(s) to use the SQL Database.

Load Balancer Configuration

Update the internal load balancer to:

  • Add the Primary ADFS (adfs1) to the load balance configuration and
  • Remove the secondary ADFS (adfs2) server which needs to be reconfigured to point to the SQL Server.

Secondary ADFS Server steps

  • Stop the ADFS Windows service by issuing “net stop adfssrv” in an elevated command prompt
  • To change the configuration database connection string to point to the new SQL ADFS configuration database run the below command lines (in order) from a PowerShell Console
$temp= Get-WmiObject -namespace root/ADFS -class SecurityTokenService
$temp.ConfigurationdatabaseConnectionstring="data source=<SQLServer\SQLInstance>;initial catalog=adfsconfiguration;integrated security=true"
$temp.put()

Note: Change <SQLServer\SQLInstance> to the name of your SQL server / instance as used for the primary server configuration.

PowerShell Configuration

  • Start the ADFS Service by executing "net start adfssrv" from an elevated command prompt and verify that the service starts up successfully. (I have had an issue where my ADFS server was (strangely) not able to resolve the NETBIOS name of the SQL Server, hence the service wouldn't start properly. Also, check that the federation service is running under the service account that was granted login to the SQL database.)
  • To check if the configuration has been successful, run "Get-AdfsProperties" from a PowerShell console. You should see the ADFS properties listed (as below), with the key value being Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

Repeat the above steps for each of the secondary servers (if you have more than one) and ensure that all ADFS servers are added back to the internal load balancer configuration.

I hope this post has been useful.