ADFS Service Communication Certificate Renewal Steps

Hi guys, the ADFS service relies on a set of certificates which serve different purposes for the federation service. In this blog post I will share a brief description of these certificates and their purpose, and discuss the renewal process for the service communication certificate.


Types of ADFS Certificates and their purpose


Certificate type: Service communication certificate
Description: Standard Secure Sockets Layer (SSL) certificate that is used for securing communications between federation servers, clients, Web Application Proxy, and federation server proxy computers.
Purpose: Ensures the identity of a remote computer; proves your identity to a remote computer.

Certificate type: Encryption certificate
Description: Standard X.509 certificate that is used to decrypt incoming tokens.
Purpose: Token decryption.

Certificate type: Signing certificate
Description: Standard X.509 certificate that is used for securely signing all tokens.
Purpose: Token signing.



Renewal Steps

Service Communication certificate

This certificate is very similar to an IIS certificate used to secure a website. It is generally issued by a trusted CA and can be either a SAN or a wildcard certificate. It is installed on all ADFS servers in the farm, and the update procedure should be performed on the primary ADFS server. Below is the list of steps involved in the renewal.


  1. Generate a CSR from the primary ADFS server. This can be done via IIS.
  2. Once the certificate is issued, add the new certificate to the certificate store.
  3. Verify the private key on the certificate. Make sure the new certificate has its private key.
  4. Assign permissions to the private key for the ADFS service account. Right-click the certificate, click Manage Private Keys, add the ADFS service account and assign permissions as shown in the screenshot below.



  5. From the ADFS console select “Set Service Communication Certificate”.
  6. Select the new certificate from the prompted list of certificates.
  7. Run Get-AdfsSslCertificate. Make a note of the thumbprint of the new certificate.
  8. If it’s unclear which certificate is new, open the MMC snap-in, locate the new certificate and scroll down in the list of properties to see the thumbprint.
  9. Run the following cmdlet:
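The cmdlet for this step isn’t shown in the original post; assuming the standard ADFS cmdlet and a placeholder thumbprint, it would look like this:

```powershell
# Bind the new certificate to the ADFS service by thumbprint
# (the thumbprint below is a placeholder - use the value noted in step 7)
Set-AdfsSslCertificate -Thumbprint "FC85DDB0FC58E63D8CB52654F22E4BE7900FE349"
```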


  10. Restart the ADFS service.
  11. Copy and import the new certificate to the Web Application Proxy/Proxies.
  12. On each WAP server run the following cmdlet:
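The WAP cmdlet isn’t shown in the original post either; assuming the standard Web Application Proxy cmdlet and the same placeholder thumbprint:

```powershell
# Point the Web Application Proxy at the newly imported certificate
Set-WebApplicationProxySslCertificate -Thumbprint "FC85DDB0FC58E63D8CB52654F22E4BE7900FE349"
```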


That’s it, you are all done. You can verify that the new certificate has been assigned to the ADFS service by executing Get-AdfsSslCertificate. Another verification step would be to open the browser and navigate to the federation page, where you should be able to see the new certificate. I will further discuss the encryption and signing certificate renewal process in upcoming blogs.



Exchange Online – MAPI over HTTP Transition

Microsoft has announced that from 31st October 2017, Outlook clients using the RPC over HTTP protocol to connect to Office 365 will no longer be supported; only MAPI over HTTP clients will work from then on. This announcement has left many administrators thinking: what exactly does that mean for my organization? What actions are required to avoid any business impact? Is it time to update Outlook clients, and to what level? And last but not least, how can I verify that all necessary steps have been taken to ensure business as usual? Let’s try to answer these questions one by one.

So what does this announcement mean for an organization? In simple words, any Outlook client which still uses RPC over HTTP to connect to Office 365 will be cut off and hence would need to be updated where possible. This means that Outlook 2007 and earlier versions will no longer be able to connect to Exchange Online. So, this would require the following actions from Office 365 administrators.

  1. Update Outlook 2007 or earlier versions of Outlook to the latest Outlook version.
  2. For Outlook 2010 and higher, the minimum required updates are the following:
Office version | Update | Build number
Office 2016 | The December 8, 2015 update | Subscription: 16.0.6568.20xx; MSI: 16.0.4312.1001
Office 2013 | Office 2013 Service Pack 1 (SP1) and the December 8, 2015 update | 15.0.4779.1002
Office 2010 | Office 2010 Service Pack 2 (SP2) and the December 8, 2015 update | 14.0.7164.5002

Note: The December 8, 2015 updates for Office are listed in Microsoft Knowledge Base article 3121650: “December 8, 2015, update for Office”. It is recommended that you keep Outlook clients updated with the most recent product updates, as several MAPI over HTTP issues have been fixed since December 2015.

Additionally, you may have to make sure that Outlook clients aren’t using a registry key to disable MAPI over HTTP. For more information, see Microsoft Knowledge Base article 2937684: “Outlook 2013 or 2016 may not connect using MAPI over HTTP as expected”.

Now, while you make every effort to meet the deadline and take all the necessary steps to update your environment, you also need assurance that you have completed the job. A simple report from Office 365 about the Outlook clients connecting to your tenant should do the trick. Let’s get this report by following the steps below.

To retrieve this information, enable owner access auditing for each mailbox, and then query the audit log for the Outlook version that’s used to log on to the mailbox. To do this, follow these steps:

  1. Connect to Exchange Online using remote PowerShell.
  2. Enable mailbox auditing for the owner. To do this, run one of the following commands:
    • For one mailbox:

    • For all mailboxes:
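The auditing commands aren’t shown in the original post; a sketch using the standard Exchange Online cmdlets (the mailbox identity is a placeholder) might look like this:

```powershell
# For one mailbox: enable owner auditing so logons are recorded
Set-Mailbox -Identity "user@contoso.com" -AuditOwner MailboxLogin -AuditEnabled $true

# For all user mailboxes
Get-Mailbox -ResultSize Unlimited -Filter { RecipientTypeDetails -eq "UserMailbox" } |
    Set-Mailbox -AuditOwner MailboxLogin -AuditEnabled $true
```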

Note: Mailbox auditing may take up to 24 hours to get enabled.

  3. Search the audit log. To do this, run one of the following commands:
    • For one mailbox:

    • For all mailboxes and export results to a .csv file

The above PowerShell command will produce a comprehensive report which you can use as a guideline to ensure that all your clients are ready for the switch to MAPI over HTTP. Here is a sample output.

DKIM for Custom Domain in Office 365


As the Office 365 service keeps adding new features and functions, it is important for global admins to keep up with the latest offerings and service enhancements Office 365 provides. In this blog post I am going to discuss one of the security features offered by Office 365 and how it can benefit organizations when it comes to securing their Office 365 tenants. This feature is called DKIM. DKIM has been offered by Microsoft for some time now and most organizations are using it quite effectively. But I was surprised to learn that there is still a huge knowledge gap around DKIM basics and its implementation scenarios. Let’s start with a brief description of DKIM.

DKIM stands for DomainKeys Identified Mail. The name gives us a clue that it involves digital keys, i.e. a private/public key pair, which means there will be digital signing and verification of messages. In simple words, it is the digital signing of outgoing emails by the sending party using its private domain key, and the verification of that signature by the receiving party using the domain’s public key. This looks simple, and to be honest it is, provided you can answer these questions: Why do you need it? What are the benefits? How hard is it to configure and manage? Let’s answer them one by one.

The primary purpose of DKIM is to prove your identity on the internet, i.e. that you are who you claim to be. Meaning your domain is a legitimate domain, and the recipient can trust emails coming from this source since it has successfully identified itself. An analogy for this scenario would be the SSL certificates used by websites across the internet to validate their identity to clients. This provides the first benefit: your domain is published as a trusted domain across the internet, and any attempt to send spoofed emails using your domain will fail. The second big benefit is that your email security strategy gets a big boost, allowing you to use more advanced security features such as DMARC for securing your emails.

Now let’s get to the question of how hard it is to implement. The answer is that it’s simple and does not take much effort in terms of configuration, though it does require a significant amount of planning to ensure a smooth and fruitful outcome. In fact, DKIM is enabled by default for your initial Office 365 domain (the onmicrosoft.com domain). Let’s review step by step how it is configured in Office 365.


DKIM Configuration:

DKIM requires just two steps for its configuration.

  1. Add CNAME records in public DNS for your custom domain.
  2. Enable DKIM for your custom domain in Office 365.

The CNAME records will have the following format.
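The record format isn’t shown in the original post; per Microsoft’s documented pattern for Office 365 DKIM, the two records look like this:

```
Host name:  selector1._domainkey.<domain>
Points to:  selector1-<domainGUID>._domainkey.<initialDomain>

Host name:  selector2._domainkey.<domain>
Points to:  selector2-<domainGUID>._domainkey.<initialDomain>
```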

Key things to remember here are:

  • domainGUID is the same as the domainGUID in the customized MX record for your custom domain
  • initialDomain is the domain that you used when you signed up for Office 365
  • For Office 365, the selectors will always be “selector1” or “selector2”

For example, for a custom domain, the records will look like this:
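Assuming a hypothetical custom domain contoso.com, a domainGUID of contoso-com, and an initial domain of contoso.onmicrosoft.com, the records would be:

```
selector1._domainkey.contoso.com  CNAME  selector1-contoso-com._domainkey.contoso.onmicrosoft.com
selector2._domainkey.contoso.com  CNAME  selector2-contoso-com._domainkey.contoso.onmicrosoft.com
```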

To enable DKIM signing for your custom domain through the Office 365 admin center

  1. Sign in to Office 365 with your work or school account.
  2. Select the app launcher icon in the upper-left and choose Admin.
  3. In the lower-left navigation, expand Admin and choose Exchange.
  4. Go to Protection > dkim.
  5. Select the domain for which you want to enable DKIM and then, for Sign messages for this domain with DKIM signatures, choose Enable. Repeat this step for each custom domain.

To enable DKIM signing for your custom domain by using PowerShell

  1. Connect to Exchange Online using remote PowerShell.
  2. Run the following cmdlet:

Where domain is the name of the custom domain for which you want to enable DKIM signing.

For example:
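The cmdlet in step 2 is presumably New-DkimSigningConfig; assuming the same hypothetical domain contoso.com, it would look like this:

```powershell
# Create and enable a DKIM signing configuration for the custom domain
New-DkimSigningConfig -DomainName contoso.com -Enabled $true
```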

The above steps will allow you to set up DKIM for your custom domain in Office 365. I will discuss implementation scenarios and the test process to verify your settings in a future blog.

Azure AD Connect – Upgrade Errors



Azure AD Connect is the latest release to date of the Azure AD Sync service, previously known as DirSync. It comes with some new features which make it even more efficient and useful in a hybrid environment. Besides the many new features, the primary purpose of this application remains the same, i.e. to sync identities from your local (on-prem) AD to Azure AD.

Of late I upgraded an AD Sync service to AD Connect, and during the install process I ran into a few issues which I felt are not widely discussed or posted on the web, but which are real-world scenarios people can face during AD Connect install and configuration. Let’s discuss them below.


Installation Errors

The very first error I stumbled upon was a sync service install failure. The installation process started smoothly: the Visual C++ package was installed and the SQL database created without any issue, but during the synchronization service installation the process failed and the below screen message was displayed.


Event viewer logs suggested that the installation process failed because the install package could not install the required DLL files. The primary reason suggested was that the install package was corrupt.


sync install error


Actions Taken:

Though I was not convinced, for the sake of ruling this out I downloaded a new AD Connect install package and reinstalled the application, but unfortunately it failed at the same point.

Next, I switched from my domain account to the service account which was being used to run the AD Sync service on the current server. This account had higher privileges than mine, but unfortunately the result was the same.

Next, I started reviewing the application logs located at the following path.


At first look I found access denied errors logged there. What was blocking the installation files? Yes, none other than the AV. I immediately contacted the security administrator and requested that AV scanning be temporarily stopped. The result was a smooth install on the next attempt.

I have shared below some of the related errors I found in the log files.





Configuration Errors:

One of the important configurations in AD Connect is the Azure AD account with global administrator permissions. If you are creating a new account for this purpose and you have not yet logged on with it to change the first-time password, then you may be faced with the below error.



Nothing to panic about. All you need to do is log into the Azure portal using this account, change the password, and then enter the credentials with the newly set password into the configuration console.

Another error related to the Azure AD Sync account was encountered by one of my colleagues, Lucian, and he has beautifully narrated the whole scenario in one of his cool blogs here: Azure AD Connect: Connect Service error


Other Errors and Resolutions:

Before I conclude, I would like to share some more scenarios which you might face during install/configuration and post install. My Kloudie fellows have done their best to explain them. Have a look and happy AAD connecting.


Proxy Errors

Configuring Proxy for Azure AD Connect V1.1.105.0 and above


Sync Errors:

Azure AD Connect manual sync cycle with powershell, Start-ADSyncSyncCycle


AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded


Azure Active Directory Connect Export profile error: stopped-server-down









Azure Load Balancer – Add/Remove VMs


Still stuck on Azure Service Manager (ASM)? Have load balancers in your environment which you need to configure often to remove/add VMs? Not to worry. Even though, when it comes to load balancer configuration in ASM, we are pretty much tied down to PowerShell, in this post I will show you how you can use simple PowerShell scripts to configure your load balancer.

The Azure load balancer is a layer 4 load balancer (TCP, UDP) and manages incoming traffic for load and availability. The Azure classic portal does not provide any functionality for Azure administrators to configure the load balancer via the portal; the only option we have is PowerShell.

In a real-world scenario you will often need to take your Azure VMs out of the load balancer to perform updates or to troubleshoot production issues, and that’s where the capability to configure your load balancers comes in handy. Let’s have a look at a simple scenario as an example, where you have two Azure VMs, Web01 and Web02, in a subscription named Myazuresubscription, both configured behind an external load balancer in Azure named ExtLB. The VMs have cloud service names Websrv01 and Websrv02 respectively. Let’s get started:

Remove Vm from Load Balancer

Let’s first log into our subscription using the following PowerShell commands.
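The login commands aren’t shown in the original post; for ASM they would be along these lines (the subscription name comes from the scenario above):

```powershell
# Authenticate to Azure using the classic (ASM) cmdlets
Add-AzureAccount

# Select the subscription that contains the load-balanced VMs
Select-AzureSubscription -SubscriptionName "Myazuresubscription"
```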

Once you are logged into your subscription, it’s time to take your VM out of the load balancer. It’s worth mentioning here that this basically means we are going to remove the endpoints of the VM which are associated with the load balancer. Typically, a VM behind a load balancer would be a web server, meaning we will have endpoints configured for HTTP and HTTPS, and hence we will need to remove both these endpoints to take it off the load balancer. You may have a different scenario, but in this example I will assume that endpoints are configured for both protocols.

Let’s inspect the existing endpoints of the VM Web01:

An important thing to note is that you will need to know the cloud service name of your VM. You can view this under your VM’s dashboard in ASM; in ARM it will be the name of the resource group in which the VM resides.
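The inspection command itself isn’t shown in the original post; assuming the standard ASM cmdlets and the scenario’s names, it would be:

```powershell
# List the endpoints on Web01, including their load-balanced set names
Get-AzureVM -ServiceName "Websrv01" -Name "Web01" | Get-AzureEndpoint
```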




The LBSetName highlighted in red represents the name of the load balancer, and the name highlighted in green represents the name of the endpoint. We will use the name of the endpoint in the following PowerShell.

To remove the HTTP and HTTPS endpoints from the load balancer we will run the following command for each endpoint. So in this example we will run it twice: once for HTTP and a second time for HTTPS.
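The removal command isn’t shown in the original post; a sketch, assuming the endpoint is named Http:

```powershell
# Remove the Http endpoint from Web01 and push the updated configuration
Get-AzureVM -ServiceName "Websrv01" -Name "Web01" |
    Remove-AzureEndpoint -Name "Http" |
    Update-AzureVM
```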


This will remove the VM from the load balancer. To verify, you can rerun the command we used above to inspect the VM endpoints, and you will see the endpoints gone from the output. Once you have removed all the VM’s endpoints associated with the load balancer you can work on your VM, and once you are ready it’s time to add it back.

An important thing to consider is that you should not remove both web servers from the load balancer at the same time, as it may result in service loss.


Add VM to Azure Load Balancer

To add a VM into the Azure load balancer, the following PowerShell script can be used. Again, you will need to run this script twice, once each for the HTTP and HTTPS endpoints.
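The script isn’t shown in the original post; a sketch for the Http endpoint, assuming port 80 and a TCP probe (the port and probe values are assumptions):

```powershell
# Re-create the Http endpoint on Web01 as part of the ExtLB load-balanced set
Get-AzureVM -ServiceName "Websrv01" -Name "Web01" |
    Add-AzureEndpoint -Name "Http" -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName "ExtLB" -ProbePort 80 -ProbeProtocol tcp -DefaultProbe |
    Update-AzureVM
```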




And we are done. We have successfully added a VM into the Azure load balancer for both the HTTP and HTTPS endpoints. An important thing to remember here is that if your VMs are deployed in ARM, you can add/remove VMs from the load balancer using the Azure portal as well as PowerShell.

Also, if you are looking to configure your load balancer for a distribution mode then have a read of another fantastic blog written by our Kloudie.


Azure Load Balancer – Set Distribution Mode



SQL Always on Availability – Database addition to existing Availability Group via PowerShell

In this particular post, I am going to share the steps for adding a database into a SQL Always On availability group. This can be done via either the GUI or PowerShell, but here my focus will be on PowerShell in order to keep it simple and automated.

For this particular scenario, I am going to add a new database to an already existing AG. The new database can be created fresh or restored from a valid backup. I am assuming the SQL AG is already set up and we are just adding databases to it. There is plenty of information available around setting up your SQL AG: Setting Up Always On AG for SQL Server.

Once you have created your SQL AG, it’s time to add databases. Now this can be done via the GUI as well as PowerShell. Many of us (including me) may find the GUI process a real pain: first of all because it is manual, and secondly because it can lead to other cluster-related issues. So here I am going to show you how to add a new database to an existing AG via a PowerShell script.

Now for this example, I am assuming a two-node SQL AG. I am naming the two nodes sqlserver-0 (primary) and sqlserver-1 (secondary). Also, for this example, I am going to add a new database to the AG by creating a new database on the primary server. Let us name this database Testdb01. We could also use a backup file to restore it to an existing database.

I will follow the standard procedure to add a new database via SQL Management Studio. While creating or restoring a database, an important thing to take care of is to make sure the Recovery Model is set to Full. See the screenshot below.

Now once the database is created on the primary node, it’s time to move on to the secondary node. We can forget the primary node for a while and focus on the secondary. Before we begin, I would like to briefly share what we are going to do next. Basically, we will be running a PowerShell script which will connect to the primary node and create a backup on a local share; it will then restore this backup onto the secondary node and add the database to the Always On AG. It’s just that simple.

Now an important step here is to create a network share which will be used as the backup path for the SQL database backups. Let us say we create it at F:\Backup and then share this folder with read/write permissions for the SQL service account or SQL admin account.

PowerShell Script
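The original script isn’t shown here; below is a sketch of what it might look like, laid out so the line references in the walkthrough below roughly hold (the server names, share path and AG name are the assumptions from this scenario):

```powershell
Import-Module SqlServer                                   # line 1: SQL cmdlets
$PrimaryServer   = "sqlserver-0"                          # lines 2-11: variables
$SecondaryServer = "sqlserver-1"
$Database        = "Testdb01"
$BackupShare     = "\\sqlserver-1\Backup"
$DbBackup        = "$BackupShare\$Database.bak"
$LogBackup       = "$BackupShare\$Database.trn"
$AgName          = "TestAG"
$PrimaryAgPath   = "SQLSERVER:\SQL\sqlserver-0\DEFAULT\AvailabilityGroups\$AgName"
$SecondaryAgPath = "SQLSERVER:\SQL\sqlserver-1\DEFAULT\AvailabilityGroups\$AgName"
$ErrorActionPreference = "Stop"

Backup-SqlDatabase -ServerInstance $PrimaryServer -Database $Database -BackupFile $DbBackup
Backup-SqlDatabase -ServerInstance $PrimaryServer -Database $Database -BackupFile $LogBackup -BackupAction Log

Restore-SqlDatabase -ServerInstance $SecondaryServer -Database $Database -BackupFile $DbBackup -NoRecovery
Restore-SqlDatabase -ServerInstance $SecondaryServer -Database $Database -BackupFile $LogBackup -RestoreAction Log -NoRecovery

Add-SqlAvailabilityDatabase -Path $PrimaryAgPath -Database $Database
Add-SqlAvailabilityDatabase -Path $SecondaryAgPath -Database $Database
```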

As you can see:

Lines 1-11 set our variables.

Lines 13 and 14 back up the database from the primary instance (sqlserver-0) to the share on sqlserver-1: line 13 for the database backup and line 14 for the transaction log backup.

Lines 16 and 17 use the newly created backups to restore the database onto the secondary instance (sqlserver-1).

Lines 19 and 20 add the database to the primary and secondary nodes of the availability group.

And that’s it. We are done. As easy as that. However, do watch out for one thing which might trouble you, for example the below error:

Add-SqlAvailabilityDatabase : The mirror database, “Testdb01”, has insufficient transaction log data to preserve the log backup chain of the principal database.  This may happen if a log backup from the principal database has not been taken or has not been restored on the mirror database.

Make sure that there is no existing backup in the shared folder where you are going to create the new backup, otherwise you will get an error similar to the one shown above. Delete any old backup file before you proceed; that is another reason to create a separate folder for this purpose rather than using your default daily backups folder. If you run into this issue, before attempting again make sure you go into the shared backup folder and delete the old backups, and also go into Management Studio on the secondary node and delete the database, which will now be showing as restoring.

That’s it, we are done. We have successfully added our new or restored database into the SQL Always On AG. This means we are now protected against SQL failures: databases will fail over to the secondary node without any data loss. It also means we now have a second copy of our database available in case we accidentally delete a database. God forbid, of course.

Exchange Online Protection Organizational Approach

I have been working for an organisation which had recently migrated to Exchange Online Protection (EOP), and we found that some of their important emails, from a legitimate email source, were getting blocked.

Upon investigation it turned out that a week earlier the customer’s organisation had been hit by a zero-day virus which resulted in spoofed emails coming through and landing in end-user mailboxes. This resulted in a bit of chaos, and a decision was taken to tighten the bulk complaint level (BCL) threshold. Further investigation with the customer revealed that, after setting the BCL value to 5, the customer was planning to use spam notification emails to release quarantined messages, with the expectation that once a message was marked as “Not Junk Email” in the notification, the sender address would automatically be whitelisted by EOP.

The approach did not work and important emails sent from legitimate sources (some were customer orders) got trapped in quarantine folder and kept getting trapped even though the end users were releasing them into their inboxes and marking them as Not Junk. The whole idea of stopping emails at a global level and then allowing at granular level just fell apart.

Now let’s review the whole scenario again and try to clarify some caveats along the way. Let’s start with the BCL rating change. This was originally set to 7, which is the default value; this link, Bulk Complaint Level values, explains the different threshold values for bulk email in detail. An important thing to note here is that there is no standard value for every organisation. The BCL will vary from organisation to organisation based on several factors, and this is something every organisation has to learn over a period of time: finding the sweet spot where only junk email gets blocked and the rest flows in.

The second important concept to understand here is the EOP spam notification. The general understanding is that once you mark an email as not junk from the quarantine mailbox, it should always land in the inbox next time, just as it does in Hotmail. However, in practice this is not the case. EOP only passes the user input on to Microsoft as information to record. EOP just learns from the user actions here and won’t necessarily take action. This input will vary from user to user, and EOP will only use it to learn about the reputation of the sender. As you can see in the below screenshot, Microsoft uses the user input as information for analysis, and nothing else.



So what is the right thing to do to let a sender land their mail in your inbox? The answer is in the below screenshot, which provides a tip that most of the time we ignore, but which actually gives us a reasonable direction in terms of adopting the right method to ensure legitimate senders don’t get blocked.



Yes, it’s the safe sender list, which is maintained by every user. Though it sounds conventional and old school, where every user is responsible for maintaining his/her own allow list for their mailbox, in actual fact it gives real-time protection with accurate data. Moreover, the safe sender allow/block list takes precedence over the EOP rules and policies set by the administrator. This means that even if a sender is blocked by the administrator, if that sender is in the safe sender list of a user then this user will still be able to receive emails from that sender while it remains blocked for the rest of the organisation.

Talking about safe sender and blocked sender lists, your next question as an administrator would be how you can ensure that they are managed by every user while you retain oversight. To address this, the first step would be to educate people about the lists and develop an understanding of how the whole process works. Secondly, you can leverage PowerShell to set up these lists on a per-user basis as well as for bulk users. Below are the PowerShell commands:

Set up safe senders and blocked senders for a single user:


Set-MailboxJunkEmailConfiguration -Identity <> -BlockedSendersAndDomains "<domainA>.com","<user>@<domainB>.com","..." -TrustedSendersAndDomains "<domainC>.com","<user>@<domainD>.com","..."


Set up safe senders and blocked senders in bulk:

Get-Mailbox | Set-MailboxJunkEmailConfiguration -BlockedSendersAndDomains "<domainA>.com","<user>@<domainB>.com","..." -TrustedSendersAndDomains "<domainC>.com","<user>@<domainD>.com","..."


A more detailed article for the above commands can be found here: Set up safe senders and blocked senders in Office 365


Lastly, I would like to discuss how EOP policies and filters work alongside the safe/blocked sender lists. EOP policies and filters provide the first level of defence, at a broader level, for any organisation. They cover all the known spam sources, blacklisted IPs/domains and bulk spam sources. They also provide protection against malware by blocking malicious attached files. The major benefit is that almost all spam is blocked outside the organisation’s network and does not overload or consume network resources.

To conclude the above discussion, I would like to lay down following guidelines when thinking in terms of protecting an organization from spam and malicious emails.


  1. EOP provides protection at the organisational level and follows industry standards and best practices to safeguard against known spam and malicious mail sources.
  2. Safe sender/block list provides a second, and more adjustable, level of control.
  3. Spam notifications sent by EOP only collect and send user data to the EOP engine and won’t necessarily allow/block the sender.
  4. Mail protection is a learning process for any organisation and requires updating the system regularly as the environment changes and learns.
  5. End user education is very critical in terms of them playing their role to help the organisation control email spam.