Continuous Credential Prompt when accessing MIM Password Registration Portal

First published at https://nivleshc.wordpress.com

Recently I was at a customer site, setting up a Microsoft Identity Manager (MIM) 2016 environment, which included the deployment of the Self Service Password Registration and Self Service Password Reset portals. For additional security, I was using Kerberos instead of the default NTLM.

I finished installing the MIM Portal, Service, Password Registration and Password Reset Portals without any issues.

I then proceeded to secure all HTTP endpoints by enabling SSL and then removing the HTTP bindings, so that the MIM Portal, Password Registration and Password Reset Portals could only be accessed via HTTPS. No issues there either.

By this time I was pretty pleased with myself 😉 Everything was going as planned, no issues faced at all. Finally, lady luck was showering me with her blessings.

Having finished the installation and configuration, I proceeded to test the solution.

The first thing to check was the MIM Portal site. I opened up a web browser and navigated to the Microsoft Identity Manager Portal. When prompted, I logged in with the mimadmin domain account credentials. I was successfully logged in and could access all the parts without any hitch.

Now kids, if you are faint of heart, be very wary of what happens next (hint: this is the time when you cover your eyes with your hands when watching a horror movie).

I then tried accessing the Self Service Password Registration Portal and got prompted for credentials.

SSPRegistration_CredentialPrompt

I entered the mimadmin account credentials and pressed enter. Just as I thought I had successfully logged in, the credential prompt returned! Hmm, that is weird. I was pretty sure I had typed the username and password correctly. Oh well, maybe I didn’t.

I typed the credentials again and pressed enter. Quick as a flash, the credential prompt returned! Huh? What was happening here? {scratching my head} Hmm, I seem to be making a lot of typos today. I carefully entered the username and password again, taking my time to ensure I was entering them correctly. I then pressed enter and waited.

Well, I didn’t have to wait for long since within a second, I got greeted with the Not Authorized screen!

SSPRegistration_NotAuthorised

Fascinating. It seemed that lady luck had flown away, because here indeed was an issue with the Self Service Password Registration Portal! Ok, Mister. Let’s have a look at what’s causing this kerfuffle!

I opened up the event viewer on the Self Service Password Registration server and went through each of the logged events in the Application and System logs, however I couldn’t find any clue as to why the credentials were not working. I secretly had suspicions that the issue could be due to Kerberos token errors, however I couldn’t find anything in the event logs to substantiate my suspicion. Hmm, the plot was indeed getting thicker!

I next started doing some Google searches, thinking that someone else might have encountered the same issue. Alas, it seemed that I was alone in my woes as the results seemed to be quite thin in regards to any possible solution for my issue.

Finally, I decided to follow my dear ol’ Sherlock’s advice: “when you have eliminated the impossible, whatever remains, however improbable, must be the truth”.

I went through the whole Self Service Password Registration setup process, checking each and every part of the configuration, to ensure that the values were as expected. After 10 minutes, I was almost done checking and had found no clues so far 😦

Lastly, I opened IIS Manager and checked all the settings. Nothing here as well. Hey, back up a bit. What is this?

The Self Service Password Registration Portal site had its useAppPoolCredentials set to False.

SSPRegistration_useAppPoolCredentialsFalse

Now, this should be True! Is this what was causing the issue?

I quickly changed the value for useAppPoolCredentials from False to True.

SSPRegistration_useAppPoolCredentialsTrue
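For anyone who prefers to script this change, below is a minimal sketch using the WebAdministration PowerShell module (the site name is an example; substitute the name of your Password Registration site):

Import-Module WebAdministration
# Example site name - change this to match your environment
$siteName = "MIM Password Registration Site"
# useAppPoolCredentials lives in applicationHost.config, so set it at the server level with a location qualifier
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $siteName -Filter 'system.webServer/security/authentication/windowsAuthentication' -Name 'useAppPoolCredentials' -Value $true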

I then opened my web browser again and navigated to the Self Service Password Registration Portal. Once again the familiar credential prompt came up. I entered the same credentials as before and pressed enter.

Woo hoo!! This time around I was successfully logged in.

I sincerely hope that this post helps others who might be encountering the same error.

Have a great day 😉

Ok Google Email me the status of all vms – Part 2

First published at https://nivleshc.wordpress.com

In my last blog, we configured the backend systems necessary for accomplishing the task of asking Google Home “OK Google Email me the status of all vms” and it sending us an email to that effect. If you haven’t finished doing that, please refer back to my last blog and get that done before continuing.

In this blog, we will configure Google Home.

Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.

For our purposes, we will be using the platform If This Then That, or IFTTT for short. IFTTT is a very powerful platform as it lets you create actions based on triggers. This combination of triggers and actions is called a recipe.

Ok, let’s dig in and create our IFTTT recipe to accomplish our task.

1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)

1.2   Log in to IFTTT and click on the My Applets menu at the top

IFTTT_MyApplets_Menu

1.3   Next, click on New Applet (top right hand corner)

1.4   A new recipe template will be displayed. Click on the blue +this to choose a service

IFTTT_Reicipe_Step1

1.5   Under Choose a Service type “Google Assistant”

IFTTT_ChooseService

1.6   In the results, Google Assistant will be displayed. Click on it

1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, log in with the Google account that is associated with your Google Home and then approve IFTTT to access it.

IFTTT_ConnectGA

1.8   The next step is to choose a trigger. Click on Say a simple phrase

IFTTT_ChooseTrigger

1.9   Now we will put in the phrases that Google Home should trigger on.

IFTTT_CompleteTrigger

For

  • What do you want to say? enter “email me the status of all vms”
  • What do you want the Assistant to say in response? enter “no worries, I will send you the email right away”

All the other sections are optional, however you can fill them in if you prefer to do so.

Click Create trigger

1.10   You will be returned to the recipe editor. To choose the action service, click on +that

IFTTT_That

1.11  Under Choose action service, type webhooks. From the results, click on Webhooks

IFTTT_ActionService

1.12   Then for Choose action click on Make a web request

IFTTT_Action_Choose

1.13   Next, the Complete action fields screen is shown.

For

  • URL – paste the webhook URL of the runbook that you had copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields

Click Create action

1.14   In the next screen, click Finish

IFTTT_Review

 

Woo hoo! Everything is now complete. Let’s do some testing.

Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries. I will send you the email right away”.

I have noticed some delays in receiving the email, however the most I have had to wait is 5 minutes. If this is unacceptable, in the runbook script, modify the Send-MailMessage command by adding the parameter -Priority High. This sends all emails with high priority, which should make things faster. Also, the runbook is currently running in Azure. Better performance might be achieved by using Hybrid Runbook Workers.
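For reference, here is what the command from the runbook script looks like with that parameter added:

Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl -Priority High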

To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.

Automation_JobsLog

There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.

If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.

Ok Google Email me the status of all vms – Part 1

First published at https://nivleshc.wordpress.com

Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!

One of the upcoming areas in Computer Science is Artificial Intelligence. What seemed science fiction in the days of Isaac Asimov, when he penned I, Robot, seems closer to reality now.

Lately the market is popping up with virtual assistants from the likes of Apple, Amazon and Google. These are “bots” that use Artificial Intelligence to help us with our daily lives, from telling us about the weather, to reminding us about our shopping lists or letting us know when our next train will be arriving. I still remember my first virtual assistant Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.

I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions: it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well, maybe in the next version (Google, here’s a tip for your next version 😉 )

Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.

In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines, however I would caution against doing this in your production environment, in case you turn off a machine that is running critical workloads.

In this first blog post, we will set up the backend systems to achieve the task and in the next blog post, we will connect it to Google Home.

The diagram below shows how we will achieve what we have set out to do.

Google Home Workflow

Below is a list of the tasks that will happen:

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service
  3. IFTTT will then trigger the webhooks service to call a webhook url attached to an Azure Automation Runbook
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run, and obtain a status of all vms.
  6. The output will be emailed to the designated recipient

Ok, enough talking 😉 let’s get cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides permissions to access the Azure resources. For our purposes, we will be using a service principal account.

Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to log in to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where

DisplayName is the display name for your AD Application eg “Google Home Automation”

HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)

IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation

Password is the password you will give the service principal account

Now, let’s create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure tenant. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is its ApplicationId. To get the ApplicationId, run the following, providing the IdentifierUri that was supplied when creating the application above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let’s check to ensure we can log in to Azure using the newly created service principal account and the password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where TenantId is your Azure Tenant’s ID

If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.

2.1   Log in to the Azure Portal and then click New

AzureMarketPlace_New

2.2   Then type Automation and click search. From the results click the following.

AzureMarketPlace_ResultsAutomation

2.3   In the next screen, click Create

2.4   Next, fill in the appropriate details and click Create

AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately, Azure doesn’t provide relay servers that scripts can use to send email. Instead, you have to use either EOP (Exchange Online Protection) servers or SendGrid to achieve this. SendGrid is an Email Delivery Service that Azure provides, and you need to create an account to use it. For our purposes, we will use the free tier, which allows the delivery of 2500 emails per month, which is plenty for us.

3.1   In the Azure Portal, click New

AzureMarketPlace_New

3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create

AzureMarketPlace_ResultsSendGrid

3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.

SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will be creating a Runbook that will contain our PowerShell script that will do all the work. The script will be using the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside our PowerShell script.

4.1   Go into the Automation Account that you had created.

4.2   Under Shared Resources, click Credentials

AutomationAccount_Credentials

4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create

Credentials_Details

4.4   Repeat step 4.3 above to add the SendGrid account

4.5   Now that the Credentials have been stored, under Process Automation click Runbooks

Automation_Runbooks

Then click Add a runbook and in the next screen click Create a new runbook

4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create

Runbook_Details

4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'

Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}

$outputFile = $env:TEMP + "\AzureVmStatus.html"
$vmarray = @()

#Get a list of all vms 
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"

$index = 0
# Add info about VMs to the array
foreach ($vm in $vms){ 
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status (the -Status switch returns the instance view, which includes the power state)
 $vmstatus = Get-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
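 # statuses[1].code is of the form "PowerState/running"; take the part after "/" and title-case it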
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
$vmarray | Sort-Object PowerState,OSType -Desc

Write-Output "Converting Output to HTML" 
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"

$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"

Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal Account’ and ‘SendGrid Account’ are the names of the credentials that were created in the Automation Account (include the ‘ ’ around the names)
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address the same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of vms

4.8   Next, we will create a Webhook. A webhook is a special URL that will allow us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password since whoever possesses the webhook can execute the runbook without needing to provide any credentials.

Open the runbook that was just created and from the top menu click on Webhook

Webhook_menu

4.9   In the next screen click Create new webhook

4.10  A security message will be displayed informing that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.

Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.

Once the webhook has expired, you can’t use it to trigger the runbook, however before it expires, you can change the expiry date. For security reasons, it is recommended that you don’t keep the webhook alive for a long period of time.
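To test the webhook without involving Google Home, you can call it straight from PowerShell (the URL below is a placeholder for the webhook URL you copied):

# Placeholder - replace with your copied webhook URL
$webhookUrl = "{your webhook URL}"
# Azure Automation webhooks are triggered by an HTTP POST; the response contains the id of the queued job
Invoke-RestMethod -Uri $webhookUrl -Method Post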

Webhook_details

That’s it folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.

Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉

I will see you in Part 2 of this blog. Ciao 😉

MfaSettings.xml updates not taking effect

First published at https://nivleshc.wordpress.com

Last week, I was at a client site, extending their Microsoft Identity Manager (MIM) 2016 Self Service Password Reset Solution so that it could use Azure MultiFactor Authentication (MFA). This is an elegant solution since instead of using Questions and Answers to authenticate yourself when trying to reset your password, you can use One Time Passwords (OTP), sent as a security code via a text message to your registered mobile device.

I followed the steps as outlined in https://github.com/Microsoft/MIMDocs/blob/master/MIMDocs/DeployUse/working-with-self-service-password-reset.md to enable Azure MFA, and everything went smoothly.

I then proceeded to test the solution.

Using the Password Registration Portal, I registered my mobile number against my test user account.

I then opened the Password Reset Portal, entered my test user username and proceeded to wait for the text message from Microsoft Azure with the security code, so that I could enter it in the next screen.

MIM_Verify_MobilePhoneVerification

I waited and waited (for at least 5 min), unfortunately the text message didn’t arrive 😦

Ok, troubleshooting time.

On my Microsoft Identity Manager 2016 Service Server, I opened Windows Event Viewer and expanded the section for Forefront Identity Manager event logs. Aha, I was on the right track, as I saw a lot of errors reported.

MIMServiceServer_EventLogs

I went through the event log entries and found one which looked a bit odd. The error essentially said that the certificate path contained illegal characters.

MIMServiceServer_Error_CertificatePath

I couldn’t make much sense of this error, so I opened the MfaSettings.xml file to check, and I quickly realised my mistake. I had included the certificate file path within quotes!

I quickly removed the unnecessary “ ”, saved the MfaSettings.xml file and restarted my testing process.

I went through the password reset process again, and yet again, I didn’t receive any text message from Microsoft Azure with the security code 😦

I re-checked the event logs and noticed the same Exception: Illegal characters in path for the Certificate File Path error. Thinking that I might have forgotten to save the previous modification to the MfaSettings.xml file, I opened the file to confirm. The quotes were nowhere to be seen! Alas, the plot thickens, my dear Watson!

I couldn’t find any explanation for this behaviour. Then, thinking that maybe the MIM server was having issues accessing the long filepath for the certificate file, I moved the certificate file to a folder that was closer to the root of the C:\ drive, updated the MfaSettings.xml file appropriately and repeated my testing.

Again, no text message 😦

Checking the event logs, I noticed the same dreaded Exception: Illegal characters in path for the Certificate File Path error again.

However, looking closer at the error, I realised that the file path was reported as C:\Program Files\Microsoft Forefront Identity Manager\2010\Service\MfaCerts\cert_key.p12, which wasn’t correct since I had moved the certificate file to another folder and updated the MfaSettings.xml file accordingly!

Suddenly I had that light bulb moment 😉 Updates to the MfaSettings.xml file were not being read by the MIM Server! This could only mean that it wasn’t monitoring this file for any changes, quite the opposite of what I had initially assumed!

To force MIM to re-read the MfaSettings.xml file, I restarted the Forefront Identity Manager Service service and went through my password reset testing process again.
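If you prefer PowerShell to the Services console, the restart can be done as follows (using the service display name as it appears in services.msc):

Restart-Service -DisplayName "Forefront Identity Manager Service"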

Eureka! This time around, I received the text message from Microsoft Azure with the security code! Checking the Event logs, I couldn’t find any new occurrences of the Exception: Illegal characters in path for the Certificate File Path error. Hurray!

I completed the password reset process and confirmed that the password for my test account had indeed been changed.

I hope this post helps others.

BTW, below is a sample of the MFASettings.xml file (for security reasons, the keys have been scrambled, however as seen below, none of the values need quotes)

<?xml version="1.0" encoding="utf-8" ?>
<SubscriberKeys>
<LICENSE_KEY>1A3FED2C1BZA</LICENSE_KEY>
<GROUP_KEY>a1b234c567890e123456gh1234567eij</GROUP_KEY>
<CERT_PASSWORD>1ABDCDEF1GHDAVWA</CERT_PASSWORD>
<CertFilePath>C:\Program Files\Microsoft Forefront Identity Manager\2010\Service\MfaCert\cert_key.p12</CertFilePath>
<Username>john.doe</Username>
<DefaultCountryCode>61</DefaultCountryCode>
</SubscriberKeys>

Re-execute the UserData script in an AWS Windows Instance

First published at https://nivleshc.wordpress.com

Bootstrapping is an awesome way of customising your instances in AWS (similar capability exists in Azure).

To enable bootstrapping, while configuring the launch instance, in Step 3: Configure Instance Details scroll down to the bottom and then expand Advanced Details.

You will notice a User data text box. This is where you can provide your bootstrap script. The script will be run when your instance is first launched.

AWS_BootstrapScript

I went ahead and entered my script in the text box and proceeded to complete my instance configuration. Once my instance was running, I initiated a Remote Desktop connection to it, to confirm that my script had run. Unfortunately, I couldn’t see any customisations (which meant my script didn’t run).

Thinking that the instance had not been able to access the user data, I opened up Internet Explorer and then browsed to the following URL (this is an internal URL that can be used to access the user-data)

http://169.254.169.254/latest/user-data/

I was able to successfully access the user-data, which meant that there were no issues with that. However, when checking the content, I noticed a typo! Aha, that was the reason why my customisations didn’t happen.
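As a side note, the same check can be done with PowerShell from within the instance:

# Query the instance metadata service for the user-data of the current instance
Invoke-RestMethod -Uri "http://169.254.169.254/latest/user-data/"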

Unfortunately, according to AWS, user-data is only executed during launch (for those that would like to read, here is the official AWS documentation). To get the fixed bootstrap script to run, I would have to terminate my instance and launch a new one with the corrected script (I tried rebooting my Windows instance after correcting my typo, however it didn’t run).

I wasn’t very happy about terminating my current instance and then launching a new one since, for those that might not be aware, AWS EC2 compute charges are rounded up to the next hour. This means that if I terminated my current instance and launched a new one, I would be charged for 2 x 1-hour sessions instead of just 1 x 1 hour!

So I set about trying to find another solution. And guess what, I did find it 🙂

Reading through the volumes of documentation on AWS, I found that when Windows instances are provisioned, the service that does the customisations using user-data is called EC2Config. This service runs the initial startup tasks when the instance is first started and then disables them. HOWEVER, there is a way to re-enable the startup tasks later on 🙂 Here is the document that gives more information on EC2Config.

The Amazon Windows AMIs include a utility called EC2ConfigService Settings. This allows you to configure EC2Config to execute the user-data on next service startup. The utility can be found under All Programs (or you can search for it).

AWS_EC2ConfigSettings_AllApps

AWS_EC2ConfigSettings_Search

Once open, under General you will see the following option

Enable UserData execution for next service start (automatically enabled at Sysprep) eg. <script></script> or <powershell></powershell>

AWS_EC2ConfigSettings

Tick this option and then press OK. Then restart your Windows Instance.

After your Windows Instance restarts, EC2Config will execute the userData (bootstrap script) and then automatically remove the tick from the above option so that the userData is not executed on subsequent restarts (or service starts).
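Under the hood, that checkbox simply flips the state of the Ec2HandleUserData plugin in EC2Config’s settings file. If you would rather make the change without the GUI, something like the following sketch should work (it assumes the default EC2Config install path):

# Default EC2Config settings file location (adjust if installed elsewhere)
$configPath = "C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml"
[xml]$config = Get-Content $configPath
# Enable the plugin that processes user-data on the next service start
($config.Ec2ConfigurationSettings.Plugins.Plugin | Where-Object { $_.Name -eq 'Ec2HandleUserData' }).State = 'Enabled'
$config.Save($configPath)
Restart-Computer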

There you go. A simple way to re-run your bootstrap scripts on an AWS Windows Instance without having to terminate the current instance and launching a new one.

There are other options available in the EC2ConfigService Settings that you can explore as well 🙂

Error rebuilding MIMWAL – File MicrosoftServices.IdentityManagement.WorkflowActivityLibrary.dll not found

First published on Nivlesh’s blog at https://nivleshc.wordpress.com

A few days ago, I was going through the steps for compiling MIMWAL, as listed at http://ithinkthereforeidam.com/installing-the-mimwal/ and came across an interesting problem.

After I had rebuilt my Visual Studio package, I went to run Sign.cmd and kept getting the following error message

Signcmd_Error

Error: File “MicrosoftServices.IdentityManagement.WorkflowActivityLibrary.dll” Not Found. You need to compile WAL solution first! Make sure you use REBUILD Solution menu. Aborting script execution…

This was quite bizarre as I had not deviated from the steps listed in the above mentioned article. It was time to put on my Sherlock hat and find the culprit behind this error!

I opened the SolutionOutput folder and compared the contents to what was shown in the article and found something interesting. The dll mentioned in the error was indeed missing!

Also the file MicrosoftServices.IdentityManagement.WorkflowActivityLibrary.pdb was missing.

This meant that there must have been an error when rebuilding the package in Visual Studio. I alt-tabbed to my Visual Studio window and in the output pane, saw something interesting. It showed that there had been some issues while copying MicrosoftServices.IdentityManagement.WorkflowActivityLibrary.dll to the SolutionOutput folder.

VisualStudioOutputPane_Error

The error

WorkflowActivityLibrary -> C:\MIMWAL-2.16.1028.0\src\WorkflowActivityLibrary\bin\Release\MicrosoftServices.IdentityManagement.WorkflowActivityLibrary.dll
1> Does C:\MIMWAL-2.16.1028.0\src\SolutionOutput specify a file name
1> or directory name on the target
1> (F = file, D = directory)? ?

seemed to indicate that when Visual Studio was trying to copy the two missing files, it hadn’t been able to determine if the destination folder SolutionOutput was a directory or a file. This resulted in Visual Studio skipping the copy of these files. Doing some investigation, I found that the MIMWAL source package didn’t contain a .\src\SolutionOutput folder. This explained why Visual Studio was showing the above warnings.

Based on my findings, I found two solutions that helped resolve the issue

Solution 1

Rebuild the Visual Studio package again. On the second try, since the SolutionOutput directory now exists, the files will be successfully copied.

Solution 2

Before rebuilding the MIMWAL package, create a subfolder called SolutionOutput inside the src folder
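In PowerShell, from the root of the extracted MIMWAL source, that is just:

New-Item -ItemType Directory -Path .\src\SolutionOutput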

My preference is for Solution 2 as it means that I won’t get any errors.

After successfully rebuilding the MIMWAL package, I ran Sign.cmd and this time around – Success! I got the expected result.

VisualStudioOutputPane_Success

Signcmd_Successful

I hope this blog helps anyone else who might be having issues with compiling MIMWAL and running Sign.cmd.


Automate Secondary ADFS Node Installation and Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Introduction

Additional nodes in an ADFS farm are required to provide redundancy in case your primary ADFS node goes offline. This ensures your ADFS service is still up and servicing all incoming requests. Additional nodes also help in load balancing the incoming traffic, which provides a better user experience in cases of high authentication traffic.

Overview

Once an ADFS farm has been created, adding additional nodes is quite simple and mostly relies on the same concepts for creating the ADFS farm. I would suggest reading my previous blog Automate ADFS Farm Installation and Configuration as some of the steps we will use in this blog were documented in it.

In this blog, I will show how to automatically provision a secondary ADFS node to an existing ADFS farm. The learnings in this blog can be easily used to deploy more ADFS nodes automatically, if needed.

Install ADFS Role

After provisioning a new Azure virtual machine, we need to install the Active Directory Federation Services role on it.  To do this, we will use the same Desired State Configuration (DSC) script that was used in the blog Automate ADFS Farm Installation and Configuration. Please refer to the section Install ADFS Role in the above blog for the steps to create the DSC script file InstallADFS.ps1.

Add to an existing ADFS Farm

Once the ADFS role has been installed on the virtual machine, we will create a Custom Script Extension (CSE) to add it to the ADFS farm.

In order to do this, we need the following

  • the certificate that was used to create the ADFS farm
  • the ADFS service account credentials that were used to create the ADFS farm

Once the above prerequisites have been met, we need a method for making the files available to the CSE. I documented a neat trick to “sneak in” the certificate and password files onto the virtual machine by using Desired State Configuration (DSC) package files in my previous blog. Please refer to Automate ADFS Farm Installation and Configuration under the section Create ADFS Farm for the steps.

Also note, for adding the node to the ADFS farm, the domain user credentials are not required. The certificate file will be named adfs_certificate.pfx and the file containing the encrypted ADFS service account password will be named adfspass.key.

Assuming that the prerequisites have been satisfied, and the files have been “sneaked” onto the virtual machine, let’s proceed to creating the CSE.

Open Windows PowerShell ISE and paste the following.

param (
  $DomainName,
  $PrimaryADFSServer,
  $AdfsSvcUsername
)

The above shows the parameters that need to be passed to the CSE where

$DomainName is the name of the Active Directory domain
$PrimaryADFSServer is the hostname of the primary ADFS server
$AdfsSvcUsername is the username of the ADFS service account

Save the file with a name of your choice (do not close the file as we will be adding more lines to it). I named my script AddToADFSFarm.ps1

Next, we need to define a variable that will contain the path to the directory where the certificate file and the file containing the encrypted adfs service account password are stored. Also, we need a variable to hold the key that was used to encrypt the adfs service account password. This will be required to decrypt the password.

Add the following to the CSE file
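A sketch of those two lines, reusing the path and key from the previous blog (both are assumptions; adjust them to match your environment):

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)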

Next, we need to decrypt the encrypted adfs service account password.
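This might look something like the following, reading adfspass.key from $localpath and building a PSCredential for the service account:

$adfsPassword = Get-Content ($localpath + "adfspass.key") | ConvertTo-SecureString -Key $Key
$AdfsSvcCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$AdfsSvcUsername", $adfsPassword)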

Now, we need to import the certificate into the local computer certificate store. To make things simple, when the certificate was exported from the primary ADFS server, it was encrypted using the adfs service account password.

After importing the certificate, we will read it to get its thumbprint.
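A minimal sketch of the import and the thumbprint lookup (the certificate file name comes from the prerequisites above; the subject filter is an assumption based on the farm name used in the previous blog):

Import-PfxCertificate -FilePath ($localpath + "adfs_certificate.pfx") -CertStoreLocation Cert:\LocalMachine\My -Password $AdfsSvcCreds.Password
$cert = (Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*fs.adfsfarm.com*" }).Thumbprint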

Up until now, the steps are very similar to creating an ADFS farm. However, below is where they diverge.

Add the following lines to add the virtual machine to the existing ADFS farm
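A sketch of that step using the Add-AdfsFarmNode cmdlet (the parameters shown are the common ones for a WID-based farm; adjust for your topology):

Add-AdfsFarmNode -CertificateThumbprint $cert -PrimaryComputerName $PrimaryADFSServer -ServiceAccountCredential $AdfsSvcCreds -OverwriteConfiguration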

You now have a custom script extension file that will add a virtual machine as a secondary node to an existing ADFS Farm.

Below is the full CSE

All that is missing now is the method to bootstrap the scripts described above (InstallADFS.ps1 and AddToADFSFarm.ps1) using Azure Resource Manager (ARM) templates.

Below is part of an ARM template that can be added to your existing template to install the ADFS role on a virtual machine and then add the virtual machine as a secondary node to the ADFS farm

In the above ARM template, the parameter ADFS02VMName refers to the hostname of the virtual machine that will be added to the ADFS Farm.

Listed below are the variables that have been used in the ARM template above

The above method can be used to add as many nodes to the ADFS farm as needed.

I hope this comes in handy when creating an ARM template to automatically deploy an ADFS Farm with additional nodes.

Automate ADFS Farm Installation and Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Introduction

In this multi-part blog, I will be showing how to automatically install and configure a new ADFS Farm. We will accomplish this using Azure Resource Manager templates, Desired State Configuration scripts and Custom Script Extensions.

Overview

We will use Azure Resource Manager to create a virtual machine that will become our first ADFS Server. We will then use a desired state configuration script to join the virtual machine to our Active Directory domain and to install the ADFS role. Finally, we will use a Custom Script Extension to install our first ADFS Farm.

Install ADFS Role

We will be using the xActiveDirectory and xPendingReboot experimental DSC modules.

Download these from

https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading, unzip the file and place the contents in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have changed your Program Files location, this will be C:\Program Files\WindowsPowerShell\Modules )

Open your Windows PowerShell ISE and let’s create a DSC script that will join our virtual machine to the domain and also install the ADFS role.

Copy the following into a new Windows Powershell ISE file and save it as a filename of your choice (I saved mine as InstallADFS.ps1)

In the above, we are declaring some mandatory parameters and some variables that will be used within the script

$MachineName is the hostname of the virtual machine that will become the first ADFS server

$DomainName is the name of the domain where the virtual machine will be joined

$AdminCreds contains the username and password for an account that has permissions to join the virtual machine to the domain

$RetryCount and $RetryIntervalSec hold values that will be used to  check if the domain is available

We need to import the experimental DSC modules that we had downloaded. To do this, add the following lines to the DSC script

Import-DscResource -Module xActiveDirectory, xPendingReboot

Next, we need to convert the supplied $AdminCreds into a domain\username format. This is accomplished by the following lines (the converted value is held in $DomainCreds )
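A sketch of that conversion (this is the standard pattern used in DSC scripts of this kind):

[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)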

Next, we need to tell DSC that the command needs to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

We need to tell the LocalConfigurationManager that it should reboot the server if needed, continue with the configuration after reboot,  and to just apply the settings only once (DSC can apply a setting and constantly monitor it to check that it has not been changed. If the setting is found to be changed, DSC can re-apply the setting. In our case we will not do this, we will apply the setting just once).
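A sketch of that block:

LocalConfigurationManager
{
     ActionAfterReboot = 'ContinueConfiguration'
     ConfigurationMode = 'ApplyOnly'
     RebootNodeIfNeeded = $true
}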

Next, we need to check if the Active Directory domain is ready. For this, we will use the xWaitForADDomain function from the xActiveDirectory experimental DSC module.
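The xWaitForADDomain block looks like the following (the retry values come from the script’s parameters):

xWaitForADDomain DscForestWait
{
     DomainName = $DomainName
     DomainUserCredential = $DomainCreds
     RetryCount = $RetryCount
     RetryIntervalSec = $RetryIntervalSec
}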

Once we know that the Active Directory domain is available, we can go ahead and join the virtual machine to the domain.

The JoinDomain function depends on xWaitForADDomain. If xWaitForADDomain fails, JoinDomain will not run.

Once the virtual machine has been added to the domain, it needs to be restarted. We will use xPendingReboot function from the xPendingReboot experimental DSC module to accomplish this.

Next, we will install the ADFS role on the virtual machine
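A sketch of the role installation (ADFS-Federation is the Windows feature name for Active Directory Federation Services):

WindowsFeature installADFS
{
     Ensure = "Present"
     Name = "ADFS-Federation"
}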

Our script has now successfully added the virtual machine to the domain and installed the ADFS role on it. Next, create a zip file with InstallADFS.ps1 and upload it to a location that Azure Resource Manager can access (I would recommend uploading to GitHub). Include the xActiveDirectory and xPendingReboot experimental DSC module directories in the zip file as well. Also add a folder called Certificates inside the zip file and put the ADFS certificate and the encrypted password files (discussed in the next section) inside the folder.

In the next section, we will configure the ADFS Farm.

The full InstallADFS.ps1 DSC script is pasted below

Create ADFS Farm

Once the ADFS role has been installed, we will use Custom Script Extensions (CSE) to create the ADFS farm.

One of the requirements to configure ADFS is a signed certificate. I used a 90 day trial certificate from Comodo.

There is a trick that I am using to make my certificate available on the virtual machine. If you bootstrap a DSC script to your virtual machine in an Azure Resource Manager template, the script along with all the non out-of-box DSC modules have to be packaged into a zip file and uploaded to a location that ARM can access. ARM will then download the zip file, unzip it, and place all directories inside the zip file into $env:ProgramFiles\WindowsPowerShell\Modules ( C:\Program Files\WindowsPowerShell\Modules ). ARM assumes the directories are PowerShell modules and puts them in the appropriate directory.

I am using this feature to sneak my certificate on to the virtual machine. I create a folder called Certificates inside the zip file containing the DSC script and put the certificate inside it. Also, I am not too fond of passing plain passwords from my ARM template to the CSE, so I created two files, one to hold the encrypted password for the domain administrator account and the other to contain the encrypted password of the adfs service account. These two files are named adminpass.key and adfspass.key and will be placed in the same Certificates folder within the zip file.

I used the following to generate the encrypted password files
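A sketch of the generation commands (the plain text passwords are placeholders, and the output file names match those described above):

# $Key is the byte array used to encrypt the secure strings (its value is given below)
ConvertTo-SecureString "AdminPlainTextPassword" -AsPlainText -Force | ConvertFrom-SecureString -Key $Key | Out-File adminpass.key
ConvertTo-SecureString "ADFSPlainTextPassword" -AsPlainText -Force | ConvertFrom-SecureString -Key $Key | Out-File adfspass.key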

AdminPlainTextPassword and ADFSPlainTextPassword are the plain text passwords that will be encrypted.

$Key is used to convert the secure string into an encrypted standard string. Valid key lengths are 16, 24 and 32 bytes

For this blog, we will use

$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Open Windows PowerShell ISE and paste the following (save the file with a name of your choice. I saved mine as ConfigureADFS.ps1)

param (
 $DomainName,
 $DomainAdminUsername,
 $AdfsSvcUsername
)

These are the parameters that will be passed to the CSE

$DomainName is the name of the Active Directory domain
$DomainAdminUsername is the username of the domain administrator account
$AdfsSvcUsername is the username of the ADFS service account

Next, we will define the value of the Key that was used to encrypt the password and the location where the certificate and the encrypted password files will be placed

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Now, we have to read the encrypted passwords from the adminpass.key and adfspass.key file and then convert them into a domain\username format
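A sketch of those steps:

# Decrypt the password files using the key defined above
$adminPassword = Get-Content ($localpath + "adminpass.key") | ConvertTo-SecureString -Key $Key
$adfsPassword = Get-Content ($localpath + "adfspass.key") | ConvertTo-SecureString -Key $Key

# Build credentials in domain\username format
$DomainCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$DomainAdminUsername", $adminPassword)
$AdfsSvcCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$AdfsSvcUsername", $adfsPassword)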

Next, we will import the certificate into the local computer certificate store. We will mark the certificate exportable and set the password same as the domain administrator password.
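Something like the following (the subject filter is an assumption based on the Federation Service Name configured below):

Import-PfxCertificate -FilePath ($localpath + "adfs_certificate.pfx") -CertStoreLocation Cert:\LocalMachine\My -Exportable -Password $DomainCreds.Password
$cert = (Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*fs.adfsfarm.com*" }).Thumbprint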

In the above, after the certificate is imported, $cert is used to hold the certificate thumbprint.

Next, we will configure the ADFS Farm

The ADFS Federation Service display name is set to “Active Directory Federation Service” and the Federation Service Name is set to fs.adfsfarm.com
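A sketch of the farm creation, using the values just described:

Install-AdfsFarm -CertificateThumbprint $cert -FederationServiceDisplayName "Active Directory Federation Service" -FederationServiceName "fs.adfsfarm.com" -ServiceAccountCredential $AdfsSvcCreds -OverwriteConfiguration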

Upload the CSE to a location that Azure Resource Manager can access (I uploaded my script to GitHub)

The full ConfigureADFS.ps1 CSE is shown below

Azure Resource Manager Template Bootstrapping

Now that the DSC and CSE scripts have been created, we need to add them in our ARM template, straight after the virtual machine is provisioned.

To add the DSC script, create a DSC extension and link it to the DSC Package that was created to install ADFS. Below is an example of what can be used

The extension will run after the ADFS virtual machine has been successfully created (referred to as ADFS01VMName)

The MachineName, DomainName and domain administrator credentials are passed to the DSC extension.

Below are the variables that have been used in the json file for the DSC extension (I have listed my GitHub repository location)

Next, we have to create a Custom Script Extension to link to the CSE for configuring ADFS. Below is an example that can be used

The CSE depends on the ADFS virtual machine being successfully provisioned and the DSC extension that installs the ADFS role to have successfully completed.

The DomainName, Domain Administrator Username and the ADFS Service Username are passed to the CSE script

The following contains a list of the variables being used by the CSE (the example below shows my GitHub repository location)

"repoLocation": "https://raw.githubusercontent.com/nivleshc/arm/master/",
"ConfigureADFSScriptUrl": "[concat(parameters('repoLocation'),'ConfigureADFS.ps1')]",

That’s it Folks! You now have an ARM Template that can be used to automatically install the ADFS role and then configure a new ADFS Farm.

In my next blog, we will explore how to add another node to the ADFS Farm and we will also look at how we can automatically create a Web Application Proxy server for our ADFS Farm.

Create a Replica Domain Controller using Desired State Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Welcome back. In this blog we will continue with our new Active Directory Domain and use Desired State Configuration (DSC) to add a replica domain controller to it, for redundancy.

If you have not read the first part of this blog series, I would recommend doing that before continuing (even if you need a refresher). The first blog can be found at Create a new Active Directory Forest using Desired State Configuration

Whenever you create an Active Directory Domain, you should have, at a minimum, two domain controllers. This ensures that domain services are available even if one domain controller goes down.

To create a replica domain controller we will be using the xActiveDirectory and xPendingReboot experimental DSC modules. Download these from

https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at the $env:ProgramFiles\WindowsPowerShell\Modules folder (unless you have changed your Program Files location, this will be C:\Program Files\WindowsPowerShell\Modules )

With the above done, open Windows PowerShell ISE and let’s get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateADReplicaDC.ps1 )

The above declares the parameters that need to be passed to the CreateADReplicaDC DSC configuration function ($DomainName , $Admincreds, $SafemodeAdminCreds). Some variables have also been declared ($RetryCount and $RetryIntervalSec). These will be used later within the configuration script.

Next, we have to import the experimental DSC modules that we have downloaded (unless the DSC modules are present out-of-the-box, they have to be imported so that they are available for use)

The $Admincreds parameter needs to be converted into a domain\username format. This is accomplished by using the following (the result is held in a variable called $DomainCreds)
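This is the same one-liner that was used in the first blog:

[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)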

Next, we need to tell DSC that the command has to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

As mentioned in the first blog, the DSC instructions are passed to the LocalConfigurationManager which carries out the commands. We need to tell the LocalConfigurationManager it should reboot the server if required, to continue with the configuration after a reboot, and to just apply the settings once (DSC can apply a setting and constantly monitor it to check that it has not been changed. If the setting is found to be changed, DSC can re-apply the setting. In our case we will not do this, we will apply the setting just once).

Next, we will install the Remote Server Administration Tools
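This is another WindowsFeature block:

WindowsFeature RSAT
{
     Ensure = "Present"
     Name = "RSAT"
}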

Now, let’s install the Active Directory Domain Services role
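Again, a WindowsFeature block does the job (matching the one in the first post):

WindowsFeature ADDSInstall
{
     Ensure = "Present"
     Name = "AD-Domain-Services"
}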

Before continuing on, we need to check if the Active Directory Domain that we are going to add the domain controller to is available. This is achieved by using the xWaitForADDomain function in the xActiveDirectory module.

Below is what we will use.
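This is a sketch, mirroring the equivalent block from the first post, with a DependsOn added (explained below):

xWaitForADDomain DscForestWait
{
     DomainName = $DomainName
     DomainUserCredential = $DomainCreds
     RetryCount = $RetryCount
     RetryIntervalSec = $RetryIntervalSec
     DependsOn = "[WindowsFeature]ADDSInstall"
}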

where
$DomainName is the name of the domain the domain controller will be joined to
$DomainCreds is the Administrator username and password for the domain (the username is in the format domain\username since we had converted this previously)

The DependsOn in the above code means that this section depends on the Active Directory Domain Services role to have been successfully installed.

So far, our script has successfully installed the Active Directory Domain Services role and has checked to ensure that the domain we will join the domain controller to is available.

So, let’s proceed. We will now promote the virtual machine to a replica domain controller of our new Active Directory domain.
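A sketch of the promotion, using the xADDomainController resource from the xActiveDirectory module (the database, log and sysvol paths are examples):

xADDomainController ReplicaDC
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     SafemodeAdministratorPassword = $SafemodeAdminCreds
     DatabasePath = "C:\NTDS"
     LogPath = "C:\NTDS"
     SysvolPath = "C:\SYSVOL"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}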

The above will only run after DscForestWait successfully signals that the Active Directory domain is available (denoted by DependsOn ). The SafeModeAdministratorPassword is set to the password that was provided in the parameters. The DatabasePath, LogPath and SysvolPath are all set to a location on C:\. This might not be suitable for everyone. For those that are uncomfortable with this, I would suggest to add another data disk to your virtual machine and point the Database, Log and Sysvol path to this new volume.

The only thing left now is to restart the server. So let’s go ahead and do that.
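Using the xPendingReboot module, with a DependsOn so the reboot only happens after a successful promotion:

xPendingReboot Reboot1
{
     Name = "RebootServer"
     DependsOn = "[xADDomainController]ReplicaDC"
}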

Keep in mind that the reboot will only happen if the domain controller was successfully added as a replica domain controller (this is due to the DependsOn).

That’s it. We now have a DSC configuration script that will add a replica domain controller to our Active Directory domain

The virtual machine that will be promoted to a replica domain controller needs to have the first domain controller’s IP address added to its DNS settings, and it also needs to have network connectivity to the first domain controller. These are required for the DSC configuration script to succeed.

The full DSC Configuration script is pasted below

If you would like to bootstrap the above to your Azure Resource Manager template, create a DSC extension and link it to the above DSC Configuration script (the script, together with the experimental DSC modules has to be packaged into a zip file and uploaded to a public location which ARM can access. I used GitHub for this)

One thing to note when bootstrapping is that after you create your first domain controller, you will need to inject its IP address into the DNS settings of the virtual machine that will become your replica domain controller. This is required so that the virtual machine can locate the Active Directory domain.

I accomplished this by using the following lines of code in my ARM template, right after the Active Directory Domain had been created.

where

DC01 is the name of the first domain controller that was created in the new Active Directory domain
DC01NicIPAddress is the IP address of DC01
DC02 is the name of the virtual machine that will be added to the domain as a replica Domain Controller
DC02NicName is the name of the network card on DC02
DC02NicIPAddress is the IP address of DC02
DC02SubnetRef is a reference to the subnet DC02 is in
nicTemplateUri is the URL of the ARM deployment template that will update the network interface settings for DC02 (it will inject the DNS settings). This file has to be uploaded to a location that ARM can access (for instance GitHub)

Below are the contents of the deployment template file that nicTemplateUri refers to

After the DNS settings have been updated, the following can be used to create a DSC extension to promote the virtual machine to a replica domain controller (it uses the DSC configuration script we had created earlier)

The variables used in the json file above are listed below (I have pointed repoLocation to my GitHub repository)

This concludes the DSC series for creating a new Active Directory domain and adding a replica domain controller to it.

Hope you all enjoyed this blog post.

Create a new Active Directory Forest using Desired State Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Desired State Configuration (DSC) is a declarative language in which you state “what” you want done instead of going into the nitty gritty level to describe exactly how to get it done. Jeffrey Snover (the inventor of PowerShell) quotes Jean-Luc Picard from Star Trek: The Next Generation to describe DSC – it tells the servers to “Make it so”.

In this blog, I will show you how to use DSC to create a brand new Active Directory Forest. In my next blog post, for redundancy, we will add another domain controller to this new Active Directory Forest.

The DSC configuration script can be used in conjunction with Azure Resource Manager to deploy a new Active Directory Forest with a single click, how good is that!

A DSC Configuration script uses DSC Resources to describe what needs to be done. DSC Resources are made up of PowerShell script functions, which are used by the Local Configuration Manager to “Make it so”.

Windows PowerShell 4.0 and Windows PowerShell 5.0, by default, come with a limited number of DSC Resources. Don’t let this trouble you, as you can always install more DSC modules! It’s as easy as downloading them from a trusted location and placing them in the PowerShell modules directory!

To get a list of DSC Resources currently installed, within PowerShell, execute the following command

Get-DscResource

Out of the box, there are no DSC modules available to create an Active Directory Forest. So, to start off, let’s go download a module that will do this for us.

For our purpose, we will be installing the xActiveDirectory PowerShell module (the x in front of ActiveDirectory means experimental), which can be downloaded from https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3 (updates to the xActiveDirectory module are no longer being provided at the above link; instead these are now published on GitHub at https://github.com/PowerShell/xActiveDirectory . For simplicity, we will use the TechNet link above)

Download the following DSC modules as well

https://gallery.technet.microsoft.com/scriptcenter/xNetworking-Module-818b3583

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules folder

($env: is a PowerShell reference to environmental variables).

For most systems, the modules folder is located at C:\Program Files\WindowsPowerShell\Modules

However, if you are unsure, run the following PowerShell command to get the location of $env:ProgramFiles

Get-ChildItem env:ProgramFiles

Tip: To get a list of all environmental variables, run Get-ChildItem env:

With the pre-requisites taken care of, let’s start creating the DSC configuration script.

Open your Windows PowerShell ISE and let’s get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateNewADForest.ps1)

Configuration CreateNewADForest {
param(
      [Parameter(Mandatory)]
      [String]$DomainName,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$AdminCreds,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$SafeModeAdminCreds,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$myFirstUserCreds,

      [Int]$RetryCount=20,
      [Int]$RetryIntervalSec=30
)

The above declaration defines the parameters that will be passed to our configuration function. There are also two “constants” defined as well. I have named my configuration function CreateNewADForest, which coincides with the name of the file it is saved in – CreateNewADForest.ps1

Here is some explanation regarding the above parameters and constants

We now have to import the DSC modules that we had downloaded, so add the following line to the above Configuration Script

Import-DscResource -ModuleName xActiveDirectory, xNetworking, xPendingReboot

We now need to convert $AdminCreds into the format domain\username. We will store the result in a new object called $DomainCreds
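A sketch of the conversion:

[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)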

The next line states where the DSC commands will run. Since we are going to run this script from within the newly provisioned virtual machine, we will use localhost as the location

So enter the following

Node localhost
{

Next, we need to tell the Local Configuration Manager to apply the settings only once, reboot the server if needed (during our setup) and most importantly, to continue on with the configuration after reboot. This is done by using the following lines

LocalConfigurationManager
{
     ActionAfterReboot = 'ContinueConfiguration'
     ConfigurationMode = 'ApplyOnly'
     RebootNodeIfNeeded = $true
}

If you have worked with Active Directory before, you will be aware of the importance of DNS. Unfortunately, during my testing, I found that a DNS Server is not automatically deployed when creating a new Active Directory Forest. To ensure a DNS Server is present before we start creating a new Active Directory Forest, the following lines are needed in our script

WindowsFeature DNS
{
     Ensure = "Present"
     Name = "DNS"
}

The above is a declarative statement which nicely shows the “Make it so” nature of DSC. The above lines are asking the DSC Resource WindowsFeature to Ensure DNS (which refers to DNS Server) is Present. If it is not Present, then it will be installed, otherwise nothing will be done. This is the purpose of the Ensure = "Present" line. The word DNS after WindowsFeature is just a name I have given to this block of code.

Next, we will leverage the DSC function xDNSServerAddress (from the xNetworking module we had installed) to set the local computer’s DNS settings, to its loopback address. This will make the computer refer to the newly installed DNS Server.

xDnsServerAddress DnsServerAddress
{
     Address        = '127.0.0.1'
     InterfaceAlias = 'Ethernet'
     AddressFamily  = 'IPv4'
     DependsOn = "[WindowsFeature]DNS"
}

Notice the DependsOn = “[WindowsFeature]DNS” in the above code. It tells the function that this block of code depends on the [WindowsFeature]DNS section. To put it another way, the above code will only run after the DNS Server has been installed. There you go, we have specified our first dependency.

Next, we will install the Remote Server Administration Tools

WindowsFeature RSAT
{
     Ensure = "Present"
     Name = "RSAT"
}

Now, we will install the Active Directory Domain Services feature in Windows Server. Note, this is just installation of the feature. It does not create the Active Directory Forest.

WindowsFeature ADDSInstall
{
     Ensure = "Present"
     Name = "AD-Domain-Services"
}

So far so good. Now for the most important event of all, creating the new Active Directory Forest. For this we will use the xADDomain function from the xActiveDirectory module.
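A sketch of that block (the resource name FirstDC matches the DependsOn reference used further below; the database, log and sysvol paths are examples):

xADDomain FirstDC
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     SafemodeAdministratorPassword = $SafeModeAdminCreds
     DatabasePath = "C:\NTDS"
     LogPath = "C:\NTDS"
     SysvolPath = "C:\SYSVOL"
     DependsOn = "[xDnsServerAddress]DnsServerAddress"
}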

Notice above, we are pointing the DatabasePath, LogPath and SysvolPath all to the C: drive. If you need these to be on a separate volume, then ensure an additional data disk is added to your virtual machine and then change the drive letters accordingly in the above code. This section has a dependency on the server’s DNS settings being changed.

If you have previously installed a new Active Directory forest, you will remember how long it takes for the configuration to finish. We have to cater for this in our DSC script, and wait for the Forest to be successfully provisioned before proceeding. Here, we will leverage the xWaitForADDomain DSC function (from the xActiveDirectory module we had installed).

xWaitForADDomain DscForestWait
{
     DomainName = $DomainName
     DomainUserCredential = $DomainCreds
     RetryCount = $RetryCount
     RetryIntervalSec = $RetryIntervalSec
     DependsOn = "[xADDomain]FirstDC"
}

Voila! The new Active Directory Forest has been successfully provisioned! Let’s add a new user to Active Directory and then restart the server.

xADUser FirstUser
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     UserName = $myFirstUserCreds.Username
     Password = $myFirstUserCreds
     Ensure = "Present"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}

The username and password are what had been supplied in the parameters to the configuration function.

That’s it! Our new Active Directory Forest is up and running. All that is needed now is a reboot of the server to complete the configuration!

To reboot the server, we will use the xPendingReboot module (from the xPendingReboot package that we installed)

xPendingReboot Reboot1
{
     Name = "RebootServer"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}

Your completed Configuration Script should look like below (I have taken the liberty of closing off all brackets and parenthesis)

You can copy the above script to a newly created Azure virtual machine and deploy it manually.

However, a more elegant method would be to bootstrap it to your Azure Resource Manager virtual machine template. One caveat is, the DSC configuration script has to be packaged into a zip file and then uploaded to a location that Azure Resource Manager can access (that means it can’t live on your local computer). You can upload it to your Azure Storage blob container and use a shared access token to give access to your script, or upload it to a public repository like GitHub.

I have packaged and uploaded the script to my GitHub repository at https://raw.githubusercontent.com/nivleshc/arm/master/CreateNewADForest.zip

The zip file also contains the additional DSC Modules that were downloaded. To use the above in your Azure Resource Manager template, create a PowerShell DSC Extension and link it to the Azure virtual machine that will become your Active Directory Domain Controller. Below is an example of the extension you can create in your ARM template (the server that will become the first domain controller is called DC01 in the code below).

Below is an excerpt of the relevant variables

And here are the relevant parameters

That’s it folks! Now you have a shiny new Active Directory Forest, created for you by a DSC configuration script.

In the second part of this two-part series, we will add an additional domain controller to this Active Directory Forest using DSC configuration script.

Let me know what you think of the above information.