Kerberos Web Application Configuration and Federation

I’ve spent a lot of time at a client site recently working on a large, complex application migration project. In my scenario, the client is migrating applications from another domain to their own. There are no domain trusts in place, so you could consider it an acquisition/merger type scenario.

One of the common challenges encountered in this type of work is troubleshooting the Kerberos authentication process for web apps. Once the concepts of Kerberos authentication are understood, the process is relatively straightforward. However, understanding this seems to be a common issue in many IT environments, especially when the line between a traditional infrastructure resource (who may be responsible for configuring service accounts and SPNs) and a developer (who may be responsible for configuring the IIS platform and deploying applications) becomes somewhat grey.

I decided to write a blog about my experience… Hopefully this will help you troubleshoot the Kerberos implementation process, whilst also explaining how to share or “federate” your applications with disparate parties.

How do I know if this process is suitable for my environment?

  • You have legacy line of business applications which aren’t claims aware.
  • You need to share your line of business applications between forests which do not have domain trusts or federation services in place.
  • You have an identity available in each domain – given that there are no trusts or federation in place, this is key.
  • You just want to know more about Kerberos Authentication and Delegation.

Back to basics – single domain Kerberos authentication process.

Below is a brief explanation of how the Kerberos authentication protocol works in a single domain environment. For more information, read the official Microsoft documentation.

single domain Kerberos authentication process

  1. The client contacts the KDC service, asserts its identity, and requests a ticket to get tickets (a ticket-granting ticket, or TGT).
  2. The KDC issues the TGT to the client, encrypting the response with the user’s password hash so that only the legitimate user can decrypt it.
  3. The client requests a service ticket from the KDC’s Ticket Granting Service (TGS), presenting its TGT as proof of prior authentication.
  4. Based on that TGT, a service ticket is issued to the client.
  5. The client presents its service ticket to the application server for authentication.
  6. Upon successful authentication, a client/server session is established and the normal request/response process continues.
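
If you want to watch this exchange from the client side, the built-in klist command lists the Kerberos tickets cached for your logon session – a handy first check when troubleshooting:

klist            # lists cached tickets; look for the krbtgt entry and HTTP service tickets
klist purge      # clears the cache, forcing fresh ticket requests on the next access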

The Scenario

In the scenario presented below, we have two companies, each with their own instance of AD DS (DOMAINA and DOMAINB).

  • Applications have been migrated from DOMAINA to DOMAINB.
  • DOMAINA users are currently being migrated to DOMAINB, therefore they have a user account object in each domain.
  • Some users will access applications internally (from DOMAINB). Some users will access applications externally (from DOMAINA).

dual AD DS scenario

Now you may be thinking… How will the remote users access applications externally via the internet using the Kerberos authentication protocol?

Good question 🙂

As we all know, Kerberos authentication (generally speaking), does not allow an internet-connected client to authenticate directly. This is because the Kerberos Key Distribution Centre (KDC) role is usually deployed to a domain controller and therefore… it is understandable that we do not want this role accessible from the internet.

Enter Kerberos Protocol Transition (KPT)!!

KPT enables clients that are unable to get Kerberos tickets from the domain controller to pass through a service that “transitions” the client’s authentication into a true Kerberos authentication request.

In the scenario presented above, the KPT role is played by the load balancer or application delivery controller. This is a common feature provided by many vendors in the Application Delivery Controller (ADC) space. For the remainder of this article, this will be referred to as the “Kerberos SSO engine.”

To ensure a pleasant user experience, external users will browse to an application portal, hosted by the ADC in DOMAINB.local. They authenticate ONCE to the ADC and from that point onwards they can access multiple web applications provided by the ADC (which will effectively proxy their credentials to each web application server using KPT).

Internal users will continue to access applications directly.

Putting it all together – Required Components

Now that you understand the scenario, let’s cover off the required components to make the solution work.

Kerberos SSO engine – APPGW.DOMAINB.local
The Kerberos SSO engine role is played by the ADC. Upon successful authentication to the web portal, it will proxy users’ credentials to multiple web applications, ensuring a single sign-on experience.

  • The Kerberos SSO engine requires a service account which allows the ADC to retrieve Kerberos tickets on behalf of the user authenticating to the application portal (once SPNs and delegation have been configured).

Web Farm – WEBSRV1 and WEBSRV2.DOMAINB.local
The web farm is responsible for hosting the web applications.

  • Each application requires its own IIS Application Pool.
  • Each unique IIS Application Pool requires its own domain service account.
  • Each domain service account will require Service Principal Names (SPNs) to be configured.

Service Principal Names (SPNs)
SPNs will need to be configured for each service account which will run the:

  • Kerberos SSO engine
  • IIS Application Pools

SPNs should always be in the format SERVICE/FQDN and SERVICE/NetBIOS name. This is a simple concept which often causes a lot of pain when troubleshooting Kerberos authentication. For example, if you had a website with the host header “prodweb01”, you would configure the SPNs “HTTP/prodweb01” and “HTTP/prodweb01.domainb.local” on the service account responsible for running the application pool in IIS.
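
Using the hypothetical “prodweb01” example above, the registration would look something like the following (the account name prodweb01_pool_svc is a placeholder for whichever domain account runs that application pool):

setspn -S HTTP/prodweb01 DOMAINB\prodweb01_pool_svc
setspn -S HTTP/prodweb01.domainb.local DOMAINB\prodweb01_pool_svc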

Application Delivery Controller

Whilst it is out of scope for the purpose of this post to document the configuration process of an ADC, it is worthwhile noting that the roles listed below are common features provided by many vendors in the ADC space today.

In my scenario, I used a free trial of an F5 BIG-IP device hosted on Amazon Web Services (AWS) which comes with a deployment guide.

It is worth noting that AWS also offers a free trial of a Citrix NetScaler, which is a competitive alternative to the F5.

The ADC has three primary roles in the scenario presented above:

  • To load balance the web farm which will host our web-based applications
  • To provide an application portal style page for external users (DOMAINA) to access their web applications – On an F5 device, these are called Webtops. On a Citrix NetScaler, these are called Gateways.
  • To act as the Kerberos SSO engine, which will carry out the Kerberos Protocol Transition (KPT) process on behalf of each user.

I will leave it up to you to evaluate which device is the best fit for your organisation; if you already have one of these devices available to you, then the decision may be that simple :).

Web Farm and Service Account Configuration

In the scenario I presented above, I used two Windows Server 2012 R2 VMs with IIS installed to host my web farm (you guessed it – the VMs were also located in AWS). The web servers were then placed into a server pool on my ADC and presented by a single VIP for load balancing purposes. Finally, I created a dummy website to use as a test page.

Now we are ready to get into the “nuts and bolts” of the Kerberos web application configuration.

Configuration Guide

The following steps assume that you have created a test webpage to perform the configuration on (shown below).

test web page

  1. Launch IIS Manager and select your Website > Authentication.

    IIS Manager Authentication

    As you can see above, by default only Anonymous Authentication is allowed.

  2. Now we need to enable Windows Authentication and disable Anonymous Authentication. This is a common stumbling block I have encountered in the field: if Anonymous Authentication is enabled, IIS will always try to authenticate using it first, even if other methods are enabled. You can read more about IIS authentication precedence on MSDN. Your configuration should now look like the image shown below (a scripted alternative is sketched after this list).

    IIS Authentication Settings

  3. As you’re probably aware, Windows Authentication providers consist of Negotiate and NTLM. Negotiate is actually a container which uses Kerberos as the first authentication method and then falls back to NTLM (when Kerberos fails). We now need to modify our providers to ensure Negotiate takes precedence. Select Windows Authentication > Providers and make sure that Negotiate is at the top of the list, as shown below.

    Providers Window showing Negotiate at top
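
If you would rather script the authentication changes from steps 2 and 3, a rough equivalent using the WebAdministration module is sketched below (it assumes a site named “Test Site” – substitute your own site name):

Import-Module WebAdministration

# Disable Anonymous and enable Windows Authentication for the site
Set-WebConfigurationProperty -Filter "system.webServer/security/authentication/anonymousAuthentication" -Name enabled -Value $false -PSPath "IIS:\" -Location "Test Site"
Set-WebConfigurationProperty -Filter "system.webServer/security/authentication/windowsAuthentication" -Name enabled -Value $true -PSPath "IIS:\" -Location "Test Site"

# List the Windows Authentication providers - Negotiate should appear first
(Get-WebConfiguration "system.webServer/security/authentication/windowsAuthentication/providers" "IIS:\Sites\Test Site").Collection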

Service Account and SPN Configuration

We are now ready to configure our service account which will run the Application Pool for our test website.

Since we would like to access our website using a custom host header, we need to register a Service Principal Name (SPN). Furthermore, given that our website is operating in a web farm, we are best placed to register the SPN to a domain service account and use the aforementioned service account to run the test website’s Application Pool (on both members of our web farm).

Registering an SPN to a computer account will not work in this scenario given that we have multiple web farm members. Kerberos gets very unhappy when we have duplicate SPNs registered to multiple accounts and, because of this, I would STRONGLY advise you to use domain service accounts. One of the things I have taught myself to check first when troubleshooting Kerberos issues is to validate that there are no duplicate SPNs configured (you can list an account’s SPNs with setspn -L, or search the forest for duplicates with setspn -X).

For the purpose of this example I have created a domain user account called “IIS_Service”. As you can see below, there are currently no SPNs configured on this account.

Note: if you aren’t clear on the exact purpose of an SPN, please do some reading before proceeding.

IIS Service Properties window

Now that you are clear on the purpose of an SPN, let’s review the configuration…

Website bindings (host headers): http://testsite and http://testsite.domainb.local
IIS service account: DOMAINB\IIS_Service
SPN registration commands:
setspn -S HTTP/testsite IIS_Service
setspn -S HTTP/testsite.domainb.local IIS_Service
  1. We are now ready to register the SPNs to the IIS_Service account. SPNs should always be in the format SERVICE/FQDN and SERVICE/NetBIOS name. You can do this using the setspn commands listed above. Once you have run these from an administrative command prompt (with domain admin rights) you should see output similar to the following…

    setspn output console window

  2. It is good practice to verify that the SPNs you configured have been entered correctly. You can do this by running setspn -L <domain>\<account>, as shown below.
    Checking SPNs have been setup correctly

Now that we have verified that our SPNs have been configured correctly, we can return to the Website and Application Pool to finalise the configuration.

In the next section we will define the service account (IIS_Service) used to run the website’s Application Pool. IIS will use this service account to decrypt the Kerberos ticket (presented by the client) and authenticate the session.

Application Pool Configuration

  1. Navigate to the website’s Application Pool:
    Application Pool
  2. Select Advanced Settings > Browse to Identity and change the service account to IIS_Service

    Changing App Pool Identity

    Changing App Pool Identity

    Changing App Pool Identity

  3. Validate that the service account has been entered correctly.
    Checking App Pool Identity
  4. Navigate to IIS > Sites > Test Site > Configuration Editor.
  5. From the drop down menu, browse to system.webServer > security > authentication > windowsAuthentication:
    Configuration Editor
  6. Change useAppPoolCredentials to True (a scripted alternative is sketched after this list).

    Note: by setting the useAppPoolCredentials value to True you have configured IIS to use the domain service account to decrypt the client’s Kerberos ticket, which is presented to the web server to authenticate the session.

  7. You will need to run an IISRESET command to update the configuration. Unfortunately, recycling the Application Pool and website is not sufficient.
  8. Test the web application – your browser should have http://*.domainb.local in the local intranet zone to ensure seamless single sign on.
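
If you prefer to script the useAppPoolCredentials change (and the reset) rather than use Configuration Editor, a minimal sketch using the WebAdministration module would be (again assuming the site is named “Test Site”):

Import-Module WebAdministration

# Configure windowsAuthentication to use the application pool identity to decrypt Kerberos tickets
Set-WebConfigurationProperty -Filter "system.webServer/security/authentication/windowsAuthentication" -Name useAppPoolCredentials -Value $true -PSPath "IIS:\" -Location "Test Site"

# A full IISRESET is still required for the change to take effect
iisreset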

You can validate that Kerberos authentication is working correctly by inspecting the traffic with Fiddler:

Fiddler used to show Kerberos ticket
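
In the Fiddler capture, the key thing to inspect is the Authorization header on the request: a Negotiate token that starts with “YII” (Base64) indicates a Kerberos ticket was presented, whereas a token starting with “TlRMTVNT” means the client has fallen back to NTLM. For example:

Authorization: Negotiate YIIg...     (Kerberos ticket presented)
Authorization: Negotiate TlRMTVNT... (NTLM fallback - Kerberos has failed)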

This completes the web farm and account configuration.

Kerberos SSO Engine and Delegation Explained

Now that we have successfully configured our web site to use Kerberos authentication we need to configure delegation to allow the ADC to perform KPT (like we discussed earlier in the post).

As you have probably guessed, the ADC also requires a service account to perform KPT. This will allow it to act on behalf of (proxy) users to complete the Kerberos ticket request process. This is especially useful when our users are external to the network, accessing applications via a secure portal, as per the opening scenario. (Yes folks, we have almost gone full circle!)

To handle this process, I have created another service account called “SSO_Service.” I have also registered the SPN “HTTP/apps.domainb.local” – as this is the URL for my application portal page (shown on the scenario diagram above).
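
For reference, that SPN registration follows the same pattern used earlier for the IIS service account:

setspn -S HTTP/apps.domainb.local SSO_Service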

We are now ready to configure Kerberos Constrained Delegation, but before we go any further I thought I should provide a brief explanation.

In its simplest form, delegation is the act of providing a service account (SSO_Service in my example) with the authority to delegate the logged in credentials of any user to a backend service. That’s it!

In my scenario, the front end service is the web application portal provided by our application delivery controller. The backend service is the web application (Test Site) we have configured. Therefore, upon successful authentication, credentials will be delegated from the web application portal to our backend web applications, providing a seamless single sign-on experience for our users. This is best represented by the conceptual diagram shown below.

Conceptual diagram

Kerberos Constrained Delegation – Configuration

  1. Locate the service account you would like to configure delegation access for (SSO_Service in my example) and select the Delegation tab.

    Delegation tab for service account

    TIP: you must have an SPN registered to the service account for the Delegation tab to be visible.

  2. Select Trust this user for delegation to specified services only > Use Kerberos only.
  3. Select Add and browse to the service account you would like to delegate to and select the SPN we registered previously.
    Service Account Delegation

We have now authorised the SSO_Service account to delegate the logged in credentials of any user to the IIS_Service service account. It is important to remember that the IIS service account only has SPNs configured to access the test website :).
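
If you need to script this delegation step (for example, across many service accounts), the same setting can be written with the Active Directory PowerShell module. The sketch below uses the account and SPNs from this example; msDS-AllowedToDelegateTo is the attribute the Delegation tab edits behind the scenes:

Import-Module ActiveDirectory

# Allow SSO_Service to delegate to the SPNs registered on IIS_Service ("Use Kerberos only")
Set-ADUser SSO_Service -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/testsite','HTTP/testsite.domainb.local')}

# Verify the configured delegation targets
Get-ADUser SSO_Service -Properties msDS-AllowedToDelegateTo | Select-Object -ExpandProperty msDS-AllowedToDelegateTo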

Hopefully this helps you to better understand Kerberos Authentication whilst providing insight as to how you can share secure access to Kerberos web applications externally.

Do It Yourself Cloud Accelerator – Part II BranchCache

In the last post I introduced the idea of breaking the secure transport layer between cloud provider and employee with the intention to better deliver those services to employees using company provided infrastructure.

In short, we deployed a server which re-presents the cloud’s secure URLs using a new trusted certificate. This enables us to do some interesting things, like providing centralised and shared caching across multiple users. The Application Request Routing (ARR) module is designed for delivering massively scalable content delivery networks to the Internet, which, when turned on its head, can be used to deliver cloud service content efficiently to internal employees. That’s a great solution where we have cacheable content like images, JavaScript, CSS, etc. But can we do any better?

Yes we can, and it’s all possible because we now own the traffic and the servers delivering it. To test the theory I’ll be using a SharePoint Online home page, which by itself is 140K; the total page size with all resources uncached is a whopping 1046K.

Compression

Surprisingly, when you look at a Fiddler trace of a SharePoint Online page, the main page content coming from the SharePoint servers is not compressed (the static content, however, is) and it is also marked as not cacheable (since it can change with each request). That means we have a large page download occurring for every page, which is particularly expensive if (as many organisations do) you have the Intranet home page set as the browser’s default page.

Since we are using Windows Server IIS to host the Application Request Router we get to take a free ride on some of the other modules that have been built for IIS like, for instance, compression. There are two types of compression available in IIS, static compression which can be used to pre-calculate the compressed output of static files, or dynamic compression which will compress the output of dynamically generated pages on the fly. This is the compression module we need to compress the home page on the way through our router.

Install the Dynamic Compression component of the Web Server (IIS) role

Configuring compression is simple: first, make sure Dynamic Compression is enabled at the IIS server level and also at the Default Web Site level.

By enabling dynamic compression we are allowing the Cloud Accelerator to step in between server and client and inject gzip encoding on anything that isn’t already compressed. On our example home page the effect is to reduce the download content size from a whopping 142K down to 34K
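
For reference, installing the dynamic compression component and enabling it can also be scripted. A minimal sketch (run on the Cloud Accelerator, applied at the server level so the Default Web Site inherits the setting):

# Add the Dynamic Content Compression role service
Install-WindowsFeature Web-Dyn-Compression

Import-Module WebAdministration

# Turn on dynamic compression in applicationHost.config
Set-WebConfigurationProperty -PSPath "MACHINE/WEBROOT/APPHOST" -Filter "system.webServer/urlCompression" -Name doDynamicCompression -Value $true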

We’ve added compression to uncompressed traffic which will help the experience for individuals on the end of low bandwidth links, but is there anything we can do to help the office workers?

BranchCache

BranchCache is a Windows Server role and Windows service that has been around since Server 2008 R2/Windows 7 and, despite being enormously powerful, has largely slipped under the radar. BranchCache is a hosted or peer-to-peer file block sharing technology, much like you might find behind torrent-style file sharing networks. Yup, that’s right: if you wanted to, you could build a huge file sharing network using out-of-the-box Windows technology! But it can be used for good too.

BranchCache operates deep under the covers of the Windows operating system when communicating using one of the BranchCache-enabled protocols: HTTP, SMB (file access), or BITS (Background Intelligent Transfer Service). When a user on a BranchCache-enabled device accesses files on a BranchCache-enabled file server, or accesses web content on a BranchCache-enabled web server, the hooks in the HTTP.SYS and SMB stacks kick in before transferring all the content from the server.

HTTP BranchCache

So how does it work with HTTP?

When a request is made from a BranchCache-enabled client there is an extra header in the request, Accept-Encoding: peerdist, which signifies that this client not only accepts normal HTML responses but also accepts another form of response: content hashes.

If the server has the BranchCache feature enabled it may respond with Content-Encoding: peerdist along with a set of hashes instead of the actual content. Here’s what a BranchCache response looks like:


Note that if there was no BranchCache operating at the server a full response of 89510 bytes of javascript would have been returned by the server. Instead a response of just 308 bytes was returned which contains just a set of hashes. These hashes point to content that can then be requested from a local BranchCache or even broadcast out on the local subnet to see if any other BranchCache enabled clients or cache host servers have the actual content which corresponds to those hashes. If the content has been previously requested by one of the other BranchCache enabled clients in the office then the data is retrieved immediately, otherwise an additional request is made to the server (with MissingDataRequest=true) for the data. Note that this means some users will experience two requests and therefore slower response time until the distributed cache is primed with data.

It’s important at this point to understand the distinction between the BranchCache and the normal HTTP caching that operates under the browser. The browser cache will cache whole HTTP objects where possible as indicated by cache headers returned by the server. The BranchCache will operate regardless of HTTP cache-control headers and operates on a block level caching parts of files rather than whole files. That means you’ll get caching across multiple versions of files that have changed incrementally.

BranchCache Client Configuration

First up, note that BranchCache client functionality is not available in all Windows versions. BranchCache functionality is only available in the Enterprise and Ultimate editions of Windows 7/8, which may restrict some potential usage.

There are a number of ways to configure BranchCache on the client, including Group Policy and netsh commands; however, the easiest is to use PowerShell. Launch an elevated PowerShell window and execute any of the BranchCache cmdlets.

  • Enable-BCLocal: Sets up this client as a standalone BranchCache client; that is, it will look in its own local cache for content which matches the hashes indicated by the server.
  • Enable-BCDistributed: Sets up this client to broadcast out to the local network looking for other potential Distributed BranchCache clients.
  • Enable-BCHostedClient: Sets up this client to look at a particular static server nominated to host the BranchCache cache.
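
For example, to configure a client for distributed mode and check the result (Get-BCStatus is part of the same BranchCache module):

Enable-BCDistributed
Get-BCStatus    # shows whether BranchCache is enabled and the current cache configuration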

While you can use a local cache, the real benefits come from distributed and hosted mode, where the browsing actions of a single employee can benefit the whole office. For instance, if Employee A and Employee B are sitting in the same office and both browse to the same site, then most of the content for Employee B will be retrieved directly from Employee A’s laptop rather than re-downloaded from the server. That’s really powerful, particularly where there are bandwidth constraints in the office and common sites that are used by all employees. But it requires that the web server serving the content participates in the BranchCache protocol by installing the BranchCache feature.

HTTP BranchCache on the Server

One of the things you lose when moving to the SharePoint Online service from an on-premises server is the ability to install components and features on the server, including BranchCache. However, by routing requests via our Cloud Accelerator that feature is available again, simply by installing the Windows Server BranchCache feature.
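
On the Cloud Accelerator itself (a Windows Server machine), adding the feature is a one-liner in PowerShell:

Install-WindowsFeature BranchCache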

With the BranchCache feature installed, the Cloud Accelerator immediately turns the SharePoint Online service into a BranchCache-enabled service, so the size of the body items downloaded to the browser goes from this:

To this:

So there are some restrictions and configuration to be aware of. First, you won’t normally see any peerdist hash responses for a content body size of less than 64KB. Also, you’ll need a latency of about 70 ms between client and server before BranchCache bothers stepping in. You can actually change these parameters, but it’s not obvious from the public APIs. The settings are stored at this registry key (HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\PeerDistKM\Parameters) and will be picked up the next time you start the BranchCache service on the server. Changing these parameters can have a big effect on performance and depends on the exact nature of the bandwidth or latency environment the clients are operating in. In the above example I changed the MinContentLength from the default 64K (which would miss most of the content from SharePoint) to 4K. The effect of changing the minimum content size to 4K is quite dramatic on bandwidth, but will penalise those on a high latency link due to the multiple requests for many small pieces of data not already available in your cache peers.
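
A hedged sketch of that registry change is shown below. MinContentLength is the value named above (assumed here to be in bytes), and the BranchCache service (PeerDistSvc) needs to be restarted so the new parameters are read:

# Lower the minimum content size BranchCache will hash from 64K to 4K
New-ItemProperty -Path "HKLM:\SYSTEM\ControlSet001\Services\PeerDistKM\Parameters" -Name MinContentLength -Value 4096 -PropertyType DWord -Force

# Restart the BranchCache service so the change takes effect
Restart-Service PeerDistSvc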

The following chart shows the effect of our Cloud Accelerator on the SharePoint Online home page for two employees in a single office. Employee A browses to the site first, then Employee B on another BranchCache enabled client browses to the same page.

Where:

  • Office 365: Out of the box raw service
  • Caching: With caching headers modified by our Cloud Accelerator
  • Compression: With compression added to dynamic content (like the home page)
  • BranchCache 64K: With BranchCache enabled for >64K data
  • BranchCache 4K: With BranchCache enabled for >4K data

So while adopting a cloud-based service is often a cost effective solution for businesses, if the result negatively impacts users and user experience then it’s unlikely to gather acceptance and may actually be avoided in preference for old on-premises habits like local file shares and USB drives. The Cloud Accelerator gives us back ownership of the traffic and the ability to implement powerful features to bring content closer to users. In the next post we’ll show how the accelerator can be scaled out ready for production.

ELBs do not cater for your environment? Set up HAProxy for your IIS servers

Recently we encountered a scenario where we needed to look for an alternative to Amazon Web Services (AWS) Elastic Load Balancing (ELB) due to an existing IIS configuration used in an organisation. We found that HAProxy was the best candidate in terms of simplicity and suitability for the scenario we were addressing.

This post will show you how you can leverage HAProxy to load balance IIS web servers hosted in AWS EC2 and explain briefly why HAProxy is best suited to address our scenario.

The scenario

Consider that you have two web servers you need to load balance; each hosts several websites configured using multiple IP addresses. In this case, there is no need to handle SSL at the load balancer (LB) layer; the LB simply passes SSL requests through to the backend servers.

Web server 1 hosts the following websites:

  • KloudWeb port 80 with IIS binding to 192.168.137.31:80
  • KloudWeb port 443 with IIS binding to 192.168.137.31:443
  • KloudMetro port 80 with IIS binding to 192.168.137.15:80
  • KloudMetro port 443 with IIS binding to 192.168.137.15:443
  • Note: 192.168.137.31 is the primary interface IP address of web server 1.

Web server 2 hosts the following websites:

  • KloudWeb port 80 with IIS binding to 192.168.137.187:80
  • KloudWeb port 443 with IIS binding to 192.168.137.187:443
  • KloudMetro port 80 binding to 192.168.137.107:80
  • KloudMetro port 443 binding to 192.168.137.107:443
  • Note: 192.168.137.187 is the primary interface IP address of web server 2.

Why is Amazon Elastic Load Balancer less ideal in this case?

ELB only delivers traffic to, and load balances, the primary interface (i.e. eth0). To make this scenario work with ELB, the IIS binding configuration would need to be amended in one of the following ways:

  • KloudWeb or KloudMetro would need to move to ports other than 80 and 443 for HTTP and HTTPS respectively; or
  • Use different host headers

Those alternatives could not be employed as we needed to migrate the environments as-is. Given this, replacing ELB was the most viable option to support the scenario. Note: there are merits to binding different IPs for different sites; however, a similar goal can be achieved with a single IP address by assigning custom ports or host headers in the IIS binding settings. Further details on the pros and cons of both approaches can be found here.

Why HAProxy?

HAProxy is a very popular choice for replacing ELB in many AWS scenarios. It provides the features of both L4 and L7 traditional load balancers, plus a flexibility that is rarely found in a software-based load balancer. We also assessed alternatives such as LVS and NGINX – both of which are free to use – but decided to go ahead with HAProxy since it supports SSL pass-through using its TCP port forwarding feature and for the simplicity it provides.

One thing to note: at the time of writing, the HAProxy stable release (1.4) does not support SSL termination at the load balancer (there are third-party tools that can provide it, e.g. bundling it with nginx). The newest version (in development) supports SSL offload, eliminating the need to install any components outside HAProxy to handle SSL.

The Preparation Steps

To prepare, we need the following info:

  • The Load Balancer “VIP” addresses
  • Backend addresses (since you need to bind the VIP addresses to the different backend addresses)
  • LB listener ports and backend server ports

Let’s get hands on

First of all, you may be surprised by how simple it is to configure HAProxy on AWS. The key thing is to understand what goal or scenario you would like to achieve and (once again) to prepare by collecting the relevant information.

Instance creation

  • We have chosen to use the ‘vanilla’ Amazon Linux AMI in Sydney. Spin this instance up via the console UI or the command line.
  • Assign two IP addresses to this HAProxy instance to host the two websites.
  • Create a security group which only allows SSH (port 22) and web connections (ports 80 & 443). You can also limit SSH connections to certain source addresses for additional security.
  • Connect to your newly created instance (via PuTTY or the built-in AWS Java console).

Configure your HAProxy

  • Make sure you have switched to root or are using an account with sudo rights
  • Install haproxy – yum install haproxy
  • Once it is installed, browse to the /etc/haproxy directory and review the haproxy.cfg file
  • Backup the haproxy.cfg file – cp haproxy.cfg haproxy.cfg.backup
  • Modify the original file with the following configuration – vi haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

# KloudWeb & KloudMetro Web Servers
# -----------------------

listen kloudwebhttp
bind 192.168.200.10:80
mode http
stats enable
stats auth admin:apassword
balance roundrobin
option httpchk /kloudlb/test.aspx
server webserver1 192.168.137.31:80 check
server webserver2 192.168.137.187:80 check

listen kloudwebhttps
bind 192.168.200.10:443
mode tcp
balance roundrobin
server webserver1 192.168.137.31:443
server webserver2 192.168.137.187:443

listen kloudmetrohttp
bind 192.168.200.11:80
mode http
stats enable
stats auth admin:apassword
balance roundrobin
option httpchk /kloudlb/test.aspx
server webserver1 192.168.137.15:80 check
server webserver2 192.168.137.107:80 check

listen kloudmetrohttps
bind 192.168.200.11:443
mode tcp
balance roundrobin
server webserver1 192.168.137.15:443
server webserver2 192.168.137.107:443

Testing

Once you have modified the file, run HAProxy to test its functionality

  • On an SSH console, enter – service haproxy start
  • HAProxy will verify the configuration and start the service
  • From this point you can see the built-in dashboard of your new HAProxy configuration by going to the link below
  • Hit or access your website (with IP or DNS)
  • Any new requests will update the stats shown here in real time (see kloudmetrohttp for updated stats)

The HAProxy Configuration explained

Apart from the default configuration, this section briefly explains the configuration file we used above; see the official documentation for more info. There are also ways to leverage the vast feature set of HAProxy, such as performing advanced health checks based on regular expressions and modifying polling times, which are beyond the scope of this blog post.

# listen <name> -- this specifies an HAProxy listener group; you can define a logical name for your servers

# bind <IP addr of LB>:<LB listener port> -- The binding IP address & port

# mode <http or tcp> -- this is set to http (L7 load balancing) or TCP (L4 load balancing)

# stats enable -- Enable the HAProxy stats page

# stats auth admin:<anypassword> -- Set the username and password for accessing the site

# balance roundrobin -- this sets the algorithm used for load balancing requests.

# option httpchk <uri> -- this configures an optional health check used to put each backend server in or out of service

# server <name> <server ip addr>:<server port> check port <server port> -- this sets the backend servers which will be load balanced.

An Overview of Server Name Indication (SNI) and Creating an IIS SNI Web SSL Binding Using PowerShell in Windows Server 2012

One of the frustrating limitations in supporting secure websites has been the inability to share IP addresses among SSL websites. Back in the day, there were a few ways to work around this limitation. One, you could use multiple IP addresses, binding an SSL certificate to each combination of an IP address and the standard SSL port. This has been the best method to date, but it is administratively heavy and not necessarily a good use of valuable IP addresses. Another approach was to use additional non-standard ports for SSL. While this saved IP addresses, you would potentially run up against strict firewall or proxy limitations, making this method undesirable. Finally, in the IIS 7 and 7.5 worlds you could use host headers to share a certificate among websites, but you were limited to a single certificate that each website would have to share.

The reason behind these limitations rests in the handshake that takes place between the browser and the web server. When an SSL client request is initiated, the HTTP header data is not available to the web server. Only after a successful handshake are the headers encrypted and sent to the web server – too late to allow for successful routing to the desired website.

Solving this limitation required an extension to the Transport Layer Security (TLS) protocol that includes the hostname the client is connecting to as part of the initial handshake with the web server. The name of this extension is Server Name Indication (SNI). Of course, extending the definition of a protocol is never as easy as updating an RFC. Both client and server compatibility are required to make use of the extension. On the client side, roughly 95% of browsers support SNI. Specifically, those are:

  • Internet Explorer 7 or later
  • Mozilla Firefox 2.0 or later
  • Opera 8 or later
  • Google Chrome 6 or later
  • Safari 3 or later
  • Test Your Browser

In the Microsoft world, support for the SNI extension to TLS was introduced with Windows Server 2012 and IIS 8. Through the Internet Information Services (IIS) Manager and a web site’s bindings UI, SNI can be specified for an HTTPS site along with a host header:

SNI-SSL-Binding-1

There are many resources on the Internet that deal with setting up and configuring a site using SSL bindings, as well as utilising SNI from within the IIS Manager. Where I’d like to focus the second part of this blog is on creating SNI web bindings using PowerShell. As the driver for implementing SNI is the scalability it provides, that scalability might be for naught if not coupled with the ability to deploy a solution without the use of a GUI.

There are three parts to successfully assigning and associating any SSL binding with a website through PowerShell:

  1. A SSL binding needs to be created for the web site
  2. A certificate needs to exist in the local machine certificate store
  3. A SSL binding relationship needs to be created to associate a certificate with a web site

Creating the Web Site Binding

Creating the web site binding is a straightforward process.  The following PowerShell sequence would be used to create the binding and assign the correct port, host header and specification for use of SNI:

# Import IIS Management PowerShell Module
Import-Module WebAdministration

$hostHeader = "test.com"

New-WebBinding -Name "Test Website" -Protocol "https" -Port 443 -HostHeader $hostHeader -SslFlags 1

The name specified would be the name of the web site you’d like to add the binding to.  The protocol and port are standard for SSL bindings.  The host header is the URL you’d like the web site to respond to.  Finally, SslFlags with a value of 1 enables SNI for this binding.

Retrieving the Certificate from the Certificate Store

While I won’t cover the process to request a certificate or import the certificate into the local machine store, there are two factors we need to address before using the certificate in the third and final step.

In order to use the certificate in IIS it is critical that the certificate is imported allowing the private key to be exported.  If a certificate is used without an exportable private key, IIS will be unable to bind that certificate.

Creating the SSL association in the third step requires that we have some reference to the certificate we’d like to associate with the web site. There are two values that can be used: the thumbprint of the certificate, or a reference to the certificate object itself.

In order to retrieve the thumbprint of a certificate the following PowerShell command is used:

$thumbprint = (Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.FriendlyName -eq "Test Cert"}).Thumbprint

In the above example the friendly name of the certificate is used as the matching context.  One could also use the subject instead.

In order to get a reference to the certificate itself the following syntax can be used:

$certificate = Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.FriendlyName -eq "Test Cert"}

After this step you will now have either a direct reference to the certificate or the value of the certificate’s thumbprint.

Creating the SSL Association

The final step in the puzzle is tying together both the binding and the certificate.  This can be the trickiest part to get right.  PowerShell doesn’t provide a native cmdlet to directly do this. Instead, one needs to use the IIS drive exposed by the WebAdministration module to create a SslBinding object and associate that object with the certificate.

The PowerShell sequence for that task is as follows, if you’re using the certificate object:

New-Item -Path "IIS:\SslBindings\!443!test.com" -Value $certificate -SSLFlags 1

If you’re using the thumbprint your command would be:

New-Item -Path "IIS:\SslBindings\!443!test.com" -Thumbprint $certificate -SSLFlags 1

If successful, you should receive confirmation displaying the host name, along with the site the host name is bound to.  To confirm that SNI is in use run the following command from the command line:

SNI-SSL-Binding-2

In the above, notice the SSL binding is using the hostname:port syntax which confirms SNI is in use.
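
For reference, one way to list the machine’s SSL certificate bindings from an elevated command prompt is:

netsh http show sslcert

An SNI binding appears in that output with a Hostname:port entry (e.g. test.com:443) rather than an IP:port entry.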

Following the above steps will allow you to take advantage of the new Server Name Indication (SNI) implementation in Windows Server 2012 and IIS 8.