Automate SharePoint Document Libraries Management using Flow, Azure Functions and SharePoint CSOM

I’ve been working on a client requirement to automate SharePoint library management via scripts, implementing a document lifecycle across many document libraries that have custom content types and require regular housekeeping of ownership and permissions.

Solution overview

To provide a seamless user experience, we decided to do the following:

1. Create a document library template (.stp) with all the prerequisite folders and content types applied.

2. Create a list to store an entry for each of these libraries, with the library’s owner and contributors as columns in that list.

3. Whenever the title, owners or contributors are changed, the destination document library will be updated.

Technical Implementation

The solution has two main elements to automate this process:

1. Microsoft Flow – Trigger when an item is created or modified

2. Two Azure Functions – Create the library and update permissions

The broad steps and code are as follows:

1. When the flow is triggered, we check the status field to determine whether the item is a new entry or a change.

Note: Since Microsoft Flow doesn’t have conditional triggers to differentiate between item-created and item-modified events, use a text column in the SharePoint list, set to the values start, in progress and completed, to identify create and update events.

2. The flow will call an Azure Function via an HTTP POST action. Below is the configuration of this.

AzureFunctionFromFlow

3. For the “Create Library” Azure Function, create an HTTP-triggered C# function in your Azure subscription.

4. In the Azure Function, open Properties -> App Service Editor. Then add a folder called bin and copy the following two assemblies into it:

  • Microsoft.SharePoint.Client.dll
  • Microsoft.SharePoint.Client.Runtime.dll

KuduTools_AzureFunction

Create Lib App Service Editor

Please make sure to get the latest copy of the NuGet package for SharepointPnPOnlineCSOM. To do that, you can set up a VS solution and copy the files from there, or download the NuGet package directly and extract the files from it.

5. After copying the files, reference them in the Azure function using the below code

#r "Microsoft.SharePoint.Client.dll"
#r "Microsoft.SharePoint.Client.Runtime.dll"
#r "System.Configuration"
#r "System.Web"

6. Then create the SharePoint client context and create a connection to the source list.

7. After that, use the ListCreationInformation class to create the Document library from the library template using the code below.
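The code for steps 6 and 7 isn’t embedded in this copy, so here is a rough sketch of that part of the function body (after the #r references above). siteUrl, userName, password, templateName and libraryTitle are placeholders — in the real function they would come from the POST body and app settings:

using System.Linq;
using System.Security;
using Microsoft.SharePoint.Client;

// Inside the function's Run method:
using (var ctx = new ClientContext(siteUrl))
{
    var securePassword = new SecureString();
    foreach (char c in password) securePassword.AppendChar(c);
    ctx.Credentials = new SharePointOnlineCredentials(userName, securePassword);

    // Step 6: connect to and load the source list that holds the library entries
    List sourceList = ctx.Web.Lists.GetByTitle("Library Requests");
    ctx.Load(sourceList);
    ctx.ExecuteQuery();

    // Step 7: find the custom library template (.stp) saved in the site collection
    ListTemplateCollection templates = ctx.Site.GetCustomListTemplates(ctx.Web);
    ctx.Load(templates);
    ctx.ExecuteQuery();
    ListTemplate template = templates.First(t => t.Name == templateName);

    // Create the document library from the template
    var creationInfo = new ListCreationInformation
    {
        Title = libraryTitle,
        TemplateFeatureId = template.FeatureId,
        TemplateType = template.ListTemplateTypeKind
    };
    List newLibrary = ctx.Web.Lists.Add(creationInfo);
    ctx.ExecuteQuery();
}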

8. After the library is created, break the role inheritance for the library as per the requirement

9. Update the library permissions using the role assignment object

10. To differentiate between People, SharePoint Groups and AD Groups, find the unique ID and add the group as per the script below.
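The permissions code isn’t included in this copy either; steps 8 to 10 condense to something like the following, continuing from the sketch above (isSharePointGroup, ownerName and ownerLogin are illustrative):

// Step 8: break role inheritance, keeping no existing assignments
newLibrary.BreakRoleInheritance(false, false);

// Steps 9 and 10: resolve the principal and assign it a role on the library.
// EnsureUser resolves both people and AD groups (AD groups via their claims
// identifier), while SharePoint groups come from the site's group collection.
Principal principal;
if (isSharePointGroup)
    principal = ctx.Web.SiteGroups.GetByName(ownerName);
else
    principal = ctx.Web.EnsureUser(ownerLogin); // person or AD group

var roleBindings = new RoleDefinitionBindingCollection(ctx);
roleBindings.Add(ctx.Web.RoleDefinitions.GetByType(RoleType.Administrator)); // Full Control
newLibrary.RoleAssignments.Add(principal, roleBindings);
ctx.ExecuteQuery();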

Note: In case you have people objects that are not in AD anymore because they have left the organisation, please refer to this blog for validating them before updating – Resolving “User not found” issue while assigning permissions using SharePoint CSOM

Note: Avoid calling item.Update() from the Azure Function, as that will trigger another flow run and cause an endless loop; use item.SystemUpdate() instead.

11. After the update is done, return a success value from the Azure Function to the flow, which completes the run.

And that’s it – we’ve automated document library creation from a template, and the associated permissions management, using Flow and Azure Functions.

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 3)

Hopefully you’ve had the chance to follow along in parts 1 and 2 where we set up our Lex chatbot to take and validate input. In this blog, we’ll interface with our Active Directory environment to perform the password reset function. To do this, we need to create a Lambda function that will be used as the logic to fulfil the user’s intent. The Lambda function will be packaged with the python LDAP library to modify the AD password attribute for the user. Below are the components that need to be configured.

Active Directory Service Account

To begin, we need to create a service account in Active Directory that has permissions to modify the password attribute for all users. Our Lambda function will then use this service account to perform password resets. Create the service account in your Active Directory domain, then perform the following to delegate the required permissions:

  1. Open Active Directory Users and Computers.
  2. Right click the OU or container that contains organisational users and click Delegate Control
  3. Click Next on the Welcome Wizard.
  4. Click Add and enter the service account that will be granted the reset password permission.
  5. Click OK once you’ve made your selection, followed by Next.
  6. Ensure that Delegate the following common tasks is enabled, and select Reset user passwords and force password change at next logon.
  7. Click Next, and Finish.

KMS Key for the AD Service Account

Our Lambda function will need to use the credentials for the service account to perform password resets. We want to avoid storing credentials within our Lambda function, so we’ll store the password as an encrypted Lambda variable and allow our Lambda function to decrypt it using Amazon’s Key Management Service (KMS). To create the KMS encryption key, perform the following steps:

  1. In the Amazon Console, navigate to IAM
  2. Select Encryption Keys, then Create key
  3. Provide an Alias (e.g. resetpw) then select Next
  4. Select Next Step for all subsequent steps then Finish on step 5 to create the key.

IAM Role for the Lambda Function

Because our Lambda function will need access to several AWS services such as SNS to notify the user of their new password and KMS to decrypt the service account password, we need to provide our function with an IAM role that has the relevant permissions. To do this, perform the following steps:

  1. In the Amazon Console, navigate to IAM
  2. Select Policies then Create policy
  3. Switch to the JSON editor, then copy and paste the following policy, replacing the KMS resource with the KMS ARN created above
    {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": "sns:Publish",
           "Resource": "*"
         },
         {
           "Effect": "Allow",
           "Action": "kms:Decrypt",
           "Resource": "<KMS ARN>"
         }
       ]
    }
  4. Provide a name for the policy (e.g. resetpw), then select Create policy
  5. After the policy has been created, select Roles, then Create Role
  6. For the AWS Service, select Lambda, then click Next:Permissions
  7. Search for and select the policy you created in step 4, as well as the AWSLambdaVPCAccessExecutionRole and AWSLambdaBasicExecutionRole policies, then click Next: Review
  8. Provide a name for the role (e.g. resetpw) then click Create role

Network Configuration for the Lambda Function

To access our Active Directory environment, the Lambda function will need to run within a VPC or a peered VPC that hosts an Active Directory domain controller. Additionally, we need the function to access the internet to be able to reach the KMS and SNS services. Lambda functions provisioned in a VPC are assigned an Elastic Network Interface (ENI) with a private IP address from the subnet they’re deployed in. Because the ENI doesn’t have a public IP address, it cannot simply leverage an internet gateway attached to the VPC for internet connectivity; as a result, traffic must be routed through a NAT Gateway, similar to the diagram below.

Password Reset Lambda Function

Now that we’ve performed all the preliminary steps, we can start building the Lambda function. Because the Lambda execution environment uses Amazon Linux, I prefer to build my Lambda deployment package on a local Amazon Linux docker container to ensure library compatibility. Alternatively, you could deploy a small Amazon Linux EC2 instance for this purpose, which can assist with troubleshooting if it’s deployed in the same VPC as your AD environment.

Okay, so let’s get started on building the lambda function. Log in to your Amazon Linux instance/container, and perform the following:

  • Create a project directory and install the python-ldap build dependencies: gcc, python-devel and openldap-devel
    mkdir ~/resetpw
    sudo yum install python-devel openldap-devel gcc
  • Next, we’re going to download the python-ldap library to the directory we created
    pip install python-ldap -t ~/resetpw
  • In the resetpw directory, create a file called reset_function.py and copy and paste the following script
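    The original script isn’t reproduced in this copy, so the following is a minimal sketch of its shape. The custom attribute names (dateOfBirth, monthStarted), the password length and the messages are placeholders from my lab rather than a drop-in implementation:

    import base64
    import os
    import random
    import string

    import boto3
    import ldap

    def close(session, state, message):
        # Minimal Lex 'Close' dialog action response
        return {
            'sessionAttributes': session or {},
            'dialogAction': {
                'type': 'Close',
                'fulfillmentState': state,
                'message': {'contentType': 'PlainText', 'content': message}
            }
        }

    def lambda_handler(event, context):
        session = event.get('sessionAttributes') or {}
        slots = event['currentIntent']['slots']

        # Decrypt the service account password stored as an encrypted Lambda variable
        kms = boto3.client('kms')
        pw = kms.decrypt(CiphertextBlob=base64.b64decode(os.environ['Pw']))['Plaintext']

        # Secure bind to AD over LDAPS, skipping certificate validation
        # (self-signed certificate in my lab)
        ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
        conn = ldap.initialize(os.environ['Url'])
        conn.simple_bind_s(os.environ['User'], pw)

        # Look up the user by employee number and pull the verification attributes
        result = conn.search_s(os.environ['Domain_base_dn'], ldap.SCOPE_SUBTREE,
                               '(sAMAccountName=%s)' % slots['UserID'],
                               ['distinguishedName', 'telephoneNumber', 'dateOfBirth', 'monthStarted'])
        dn, attrs = result[0]

        # Verify the caller's answers against AD before touching the password
        if slots['DOB'] != attrs['dateOfBirth'][0] or \
           slots['MonthStarted'].lower() != attrs['monthStarted'][0].lower():
            conn.unbind_s()
            return close(session, 'Failed',
                         'Sorry, your details could not be verified. Please call back and try again.')

        # Reset the password; AD expects the new value as a UTF-16-LE encoded quoted string
        new_pw = ''.join(random.choice(string.letters + string.digits) for _ in range(10))
        conn.modify_s(dn, [(ldap.MOD_REPLACE, 'unicodePwd', ('"%s"' % new_pw).encode('utf-16-le'))])
        conn.unbind_s()

        # SMS the new password to the mobile number held in AD
        boto3.client('sns').publish(PhoneNumber=attrs['telephoneNumber'][0],
                                    Message='Your new password is ' + new_pw)
        return close(session, 'Fulfilled', 'Your password has been reset and sent to your mobile.')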

  • Now, we need to create the Lambda deployment package. As the package size is correlated with the speed of Lambda function cold starts, we need to filter out anything that’s not necessary to reduce the package size. The following zips up the script and the LDAP library:
    cd ~/resetpw
    zip -r ~/resetpw.zip . -x "setuptools*/*" "pkg_resources/*" "easy_install*"
  • We need to deploy this package as a Lambda function. I’ve got AWSCLI installed in my Amazon Linux container, so I’m using the following CLI to create the Lambda function (note that --role takes the role’s ARN, not just its name). Alternatively, you can download the zip file and manually create the Lambda function in the AWS console using the same parameters specified in the CLI below.
    aws lambda create-function --function-name reset_function --region us-east-1 --zip-file fileb:///root/resetpw.zip --role arn:aws:iam::<account-id>:role/resetpw --handler reset_function.lambda_handler --runtime python2.7 --timeout 30 --vpc-config SubnetIds=subnet-a12b3cde,SecurityGroupIds=sg-0ab12c30 --memory-size 128
  • For the script to run in your environment, a number of Lambda variables need to be set which will be used at runtime. In the AWS Console, navigate to Lambda then click on your newly created Lambda function. In the environment variables section, create the following variables:
    • Url – This is the LDAPS URL for your domain controller. Note that it must be LDAP over SSL.
    • Domain_base_dn – The base distinguished name used to search for the user
    • User – The service account that has permissions to change the user password
    • Pw – The password of the service account
  • Finally, we need to encrypt the Pw variable in the Lambda console. Navigate to the Encryption configuration and select Enable helpers for encryption in transit. Select your KMS key for both encryption in transit and at rest, then select the Encrypt button next to the Pw variable. This will encrypt and mask the value.

  • Hit Save in the top right-hand corner to save the environment variables.

That’s it! The Lambda function is now configured. A summary of the Lambda function’s logic is as follows:

  1. Collect the Lambda environment variables and decrypt the service account password
  2. Perform a secure AD bind but don’t verify the certificate (I’m using a Self-Signed Cert in my lab)
  3. Collect the user’s birthday, start month, telephone number, and DN from AD
  4. Check the user’s verification input
  5. If the input is correct, reset the password and send it to the user’s telephone number, otherwise exit the function.

Update and test Lex bot fulfillment

The final step is to add the newly created Lambda function to the Lex bot so it’s invoked after input is validated. To do this, perform the following:

  1. In the Amazon Console, navigate to Amazon Lex and select your bot
  2. Select Fulfillment for your password reset intent, then select AWS Lambda function
  3. Choose the recently created Lambda function from the drop down box, then select Save Intent

That should be it! You’ll need to build your bot again before testing…

My phone buzzes with a new SMS…

Success! A few final things worth noting:

  • All Lambda execution logs will be written to CloudWatch logs, so this is the best place to begin troubleshooting if required
  • Modification to the AD password attribute is not possible without a secure bind and will result in an error.
  • The interrogated fields (month started and date of birth) are custom AD attributes in my lab.
  • Password complexity in my lab is relaxed. If you’re using the default password complexity in AD, the randomly generated password in the lambda function may not meet complexity requirements every time.

Hope to see you in part 4, where we bolt on Amazon Connect to field phone calls!

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 2)

Welcome back! Hopefully you had the chance to follow along in part 1 where we started creating our Lex chatbot. In part 2, we attempt to make the conversation more human-like and begin integrating data validation on our slots to ensure we’re getting the correct input.

Creating the Lambda initialisation and validation function

As data validation requires compute, we’ll need to start by creating an AWS Lambda function. Head over to the AWS console, then navigate to the AWS Lambda page. Once you’re there, select Create Function and choose to Author from Scratch then specify the following:

Name: ResetPWCheck

Runtime: Python 2.7 (it’s really a matter of preference)

Role: I use an existing Out of the Box role, “Lambda_basic_execution”, as I only need access to CloudWatch logs for debugging.

Once you’ve populated all the fields, go ahead and select Create Function. The script we’ll be using is provided (further down) in this blog, however before we go through the script in detail, there are two items worth mentioning.

Input Events and Response Formats

It’s well worth familiarising yourself with the page on Lambda Function Input Event and Response Formats in the Lex Developer guide. Every time input is provided to Lex, it invokes the Lambda initialisation and validation function. For example, when I tell my chatbot “I need to reset my password”, the lambda function is invoked and the following event is passed:
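The full event isn’t reproduced in this copy, but trimmed down it looks something like this (names and IDs are from my lab):

{
  "currentIntent": {
    "name": "ResetPW",
    "slots": { "UserID": null, "DOB": null, "MonthStarted": null },
    "confirmationStatus": "None"
  },
  "bot": { "name": "UserAdministration", "alias": "$LATEST", "version": "$LATEST" },
  "userId": "user-123",
  "inputTranscript": "I need to reset my password",
  "invocationSource": "DialogCodeHook",
  "outputDialogMode": "Text",
  "messageVersion": "1.0",
  "sessionAttributes": {}
}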

Amazon Lex expects a response from the Lambda function in JSON format that provides it with the next dialog action.
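For example, a response asking Lex to re-prompt for the UserID slot takes this shape (a sketch following the documented response format):

{
  "sessionAttributes": { "Completed": "None" },
  "dialogAction": {
    "type": "ElicitSlot",
    "intentName": "ResetPW",
    "slots": { "UserID": null, "DOB": null, "MonthStarted": null },
    "slotToElicit": "UserID",
    "message": { "contentType": "PlainText", "content": "Please provide your 6 digit user ID." }
  }
}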

Persisting Variables with Session Attributes

There are many ways to determine within your Lambda function where you’re up to in your chat dialog, however my preferred method is to pass state information within the SessionAttributes object of the input event and response as a key/value pair. The SessionAttributes can persist between invocations of the Lambda function (every time input is provided to the chatbot), however you must remember to collect and pass the attributes between input and responses to ensure it persists.

Input Validation Code

With that out of the way, let’s begin looking at the code. The script below is what I’ve used, which you can simply copy and paste, assuming you’re using the same slot and intent names in your Lex bot that were used in Part 1.
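The original script isn’t embedded in this copy, so here is a condensed sketch of its overall shape — a simple state machine over the session attributes. The response helpers follow the Lex dialog action formats shown above; the wording and state values are simply the ones described in the breakdown that follows:

def elicit_slot(session, slots, slot_to_elicit, message_type, message):
    # Ask Lex to (re-)prompt the user for a specific slot
    return {
        'sessionAttributes': session,
        'dialogAction': {
            'type': 'ElicitSlot',
            'intentName': 'ResetPW',
            'slots': slots,
            'slotToElicit': slot_to_elicit,
            'message': {'contentType': message_type, 'content': message}
        }
    }

def delegate(session, slots):
    # Hand control back to Lex to prompt for the next unfilled slot
    return {
        'sessionAttributes': session,
        'dialogAction': {'type': 'Delegate', 'slots': slots}
    }

def confirm_intent(session, slots, message_type, message):
    # Read the collected values back and ask the user to confirm them
    return {
        'sessionAttributes': session,
        'dialogAction': {
            'type': 'ConfirmIntent',
            'intentName': 'ResetPW',
            'slots': slots,
            'message': {'contentType': message_type, 'content': message}
        }
    }

def close(session, fulfillment_state, message):
    # End the conversation
    return {
        'sessionAttributes': session,
        'dialogAction': {
            'type': 'Close',
            'fulfillmentState': fulfillment_state,
            'message': {'contentType': 'PlainText', 'content': message}
        }
    }

def chkUserId(user_id):
    # AMAZON.Number rejects non-numeric input and leaves the slot empty, so
    # check the slot is populated and that the value is exactly 6 digits long
    return user_id is not None and len(user_id) == 6

def lambda_handler(event, context):
    session = event.get('sessionAttributes') or {}
    slots = event['currentIntent']['slots']
    confirmation = event['currentIntent']['confirmationStatus']

    # The user has answered the confirmation prompt
    if confirmation == 'Confirmed':
        return delegate(session, slots)  # validation done; Lex moves on to fulfilment
    if confirmation == 'Denied':
        return close(session, 'Failed', 'Sorry about that. Please call back and try again.')

    # First invocation: welcome the user and ask for their user ID
    if not session:
        session['Completed'] = 'None'
        return elicit_slot(session, slots, 'UserID', 'PlainText',
                           'Thanks for calling the service desk. To verify your identity, please provide your 6 digit user ID.')

    # Validate the user ID
    if session['Completed'] == 'None':
        if not chkUserId(slots['UserID']):
            return elicit_slot(session, slots, 'UserID', 'PlainText',
                               'That does not appear to be a valid user ID. Please re-enter your 6 digit user ID.')
        session['Completed'] = 'UserID'
        return delegate(session, slots)

    # The DOB slot type is strict (YYYY-MM-DD), so just check it's populated
    if session['Completed'] == 'UserID':
        if not slots['DOB']:
            return elicit_slot(session, slots, 'DOB', 'PlainText', 'Please tell me your date of birth.')
        session['Completed'] = 'DOB'
        return delegate(session, slots)

    # Month started, then read everything back; on voice we use SSML so the
    # user ID is spoken as individual digits rather than one large number
    if not slots['MonthStarted']:
        return elicit_slot(session, slots, 'MonthStarted', 'PlainText', 'Which month did you start?')
    if event['outputDialogMode'] == 'Voice':
        msg = ('<speak>Your user ID is <say-as interpret-as="digits">{0}</say-as>, your date of birth is {1} '
               'and the month you started is {2}. Is this correct?</speak>').format(
               slots['UserID'], slots['DOB'], slots['MonthStarted'])
        return confirm_intent(session, slots, 'SSML', msg)
    msg = 'Your user ID is {0}, your date of birth is {1} and the month you started is {2}. Is this correct?'.format(
          slots['UserID'], slots['DOB'], slots['MonthStarted'])
    return confirm_intent(session, slots, 'PlainText', msg)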

Let’s break it down.

When the lambda function is first invoked, we check to see if any state is set in the sessionAttributes. If not, we can assume this is the first time the lambda function is invoked and as a result, provide a welcoming response while requesting the User’s ID. To ensure the user isn’t welcomed again, we set a session state so the Lambda function knows to move to User ID validation when next invoked. This is done by setting the “Completed” : “None” key/value pair in the response SessionAttributes.

Next, we check the User ID. You’ll notice the chkUserId function checks for two things: that the slot is populated, and if it is, the length of the field. Because the slot type is AMAZON.Number, any non-numeric characters that are entered will be rejected by the slot. If this occurs, the slot will be left empty, hence this is something we’re looking out for. We also want to ensure the User ID is 6 digits, otherwise it is considered invalid. If the input is correct, we set the session state key/value pair to indicate User ID validation is complete and allow the dialog to continue, otherwise we request the user to re-enter their User ID.

Next, we check the Date of Birth. Because the slot type is strict regarding input, we don’t do much validation here. An utterance for this slot type generally maps to a complete date: YYYY-MM-DD. For validation purposes, we’re just looking for an empty slot. Like the User ID check, we set the session state and allow the dialog to continue if all looks good.

Finally, we check the last slot which is the Month Started. Assuming the input for the month started is correct, we then confirm the intent by reading all the slot values back to the user and asking if it’s correct. You’ll notice here that there’s a bit of logic to determine if the user is using voice or text to interact with Lex. If voice is used, we use Speech Synthesis Markup Language (SSML) to ensure the UserID value is read as digits, rather than as the full number.

If the user is happy with the slot values, the validation completes and Lex then moves to the next Lambda function to fulfil the intent (next blog). If the user isn’t happy with the slot values, the lambda function exits telling the user to call back and try again.

Okay, now that our Lambda function is finished, we need to enable it as a code hook for initialisation and validation. Head over to your Lex bot, select the “ResetPW” intent, then tick the box under Lambda initialisation and validation and select your Lambda function. A prompt will be given to provide permissions to allow your Lex bot to invoke the lambda function. Select OK.

Let’s hit Build on the chatbot, and test it out.

So, we’ve managed to make the conversation a bit more human-like and we can now detect invalid input. If you use the microphone to chat with your bot, you’ll notice the User ID value is read as digits. That’s it for this blog. In the next blog, we’ll integrate Active Directory and actually reset a password and send it via SNS to a mobile phone.

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 1)

“What! Is this guy for real? Does he really think he can replace the front line of IT with pre-canned conversations?” I must admit, it’s a bold statement. The IT Service Desk has been around for years and has been the foot in the door for most of us in the IT industry. It’s the face of IT operations and plays an important role in ensuring an organisation’s staff can perform to the best of their abilities. But what if we could take some of the repetitive tasks the service desk performs and automate them? Not only would we be saving on head count costs, we would be able to ensure correct policies and procedures are followed to uphold security and compliance. The aim of this blog is to provide a working example of the automation of one service desk scenario to show how easy and accessible the technology is, and how it can be applied to various use cases.
To make it easier to follow along, I’ve broken this blog up into a number of parts. Part 1 will focus on the high-level architecture for the scenario and begin creating the Lex chatbot.

Scenario

Arguably, the most common service desk request is the password reset. While this is a pretty simple issue for the service desk to resolve, many service desk staff seem to skip over user verification, or not realise its importance. Both the simple resolution and the strict verification requirement make this a prime scenario to automate.

Architecture

So what does the architecture look like? The diagram below depicts the expected process flow. Let’s step through each item in the diagram.

 

Amazon Connect

The process begins when the user calls the service desk and requests to have their password reset. In our architecture, the service desk uses Amazon Connect which is a cloud based customer contact centre service, allowing you to create contact flows, manage agents, and track performance metrics. We’re also able to plug in an Amazon Lex chatbot to handle user requests and offload the call to a human if the chatbot is unable to understand the user’s intent.

Amazon Lex

After the user has stated their request to change their password, we need to begin the user verification process. Their intent is recognised by our Amazon Lex chatbot, which initiates the dialog for the user verification process to ensure they are who they really say they are.

AWS Lambda

After the user has provided verification information, AWS Lambda, which is a compute on demand service, is used to validate the user’s input and verify it against internal records. To do this, Lambda interrogates Active Directory to validate the user.

Amazon SNS

Once the user has been validated, their password is reset to a random string in Active Directory and the password is messaged to the user’s phone using Amazon’s Simple Notification Service. This completes the interaction.

Building our Chatbot

Before we get into the details, it’s worth mentioning that the aim of this blog is to convey the technology capability. There are many ways of enhancing the solution or improving validation of user input that I’ve skipped over, so while this isn’t a finished, production-ready product, it’s certainly a good foundation to begin building an automated call centre.

To begin, let’s start with building our Chatbot in Amazon Lex. In the Amazon Console, navigate to Amazon Lex. You’ll notice it’s only available in Ireland and US East. As Amazon Connect and my Active Directory environment are also in US East, that’s the region I’ve chosen.

Go ahead and select Create Bot, then choose to create your own Custom Bot. I’ve named mine “UserAdministration”. Choose an Output voice and set the session timeout to 5 minutes. An IAM Role will automatically be created on your behalf to allow your bot to use Amazon Polly for speech. For COPPA, select No, then select Create.

Once the bot has been created, we need to identify the user action expected to be performed, which is known as an intent. A bot can have multiple intents, but for our scenario, we’re only creating one, which is the password reset intent. Go ahead and select Create Intent, then in the Add Intent window, select Create new intent. My intent name is “ResetPW”. Select Add, which should add the intent to your bot. We now need to specify some expected sample utterances, which are phrases the user can use to trigger the Reset Password intent. There’s quite a few that could be listed here, but I’m going to limit mine to the following:

  • I forgot my password
  • I need to reset my password
  • Can you please reset my password

The next section is the configuration for the Lambda validation function. Let’s skip past this for the time being and move onto the slots. Slots are used to collect information from the user. In our case, we need to collect verification information to ensure the user is who they say they are. The verification information collected is going to vary between environments. I’m looking to collect the following to verify against Active Directory:

  • User ID – In my case, this is a 6-digit employee number that is also the sAMAccountName in Active Directory
  • User’s birthday – This is a custom attribute in my Active Directory
  • Month started – This is a custom attribute in my Active Directory

In addition to this, it’s also worth collecting and verifying the user’s mobile number. This can be done by passing the caller ID information from Amazon Connect, however we’ll skip this, as the bulk of our testing will be text chat and we need to ensure we have a consistent experience.

To define a slot, we need to specify three items:

  • Name of the slot – Think of this as the variable name.
  • Slot type – The data type expected. This is used to train the machine learning model to recognise the value for the slot.
  • Prompt – How the user is prompted to provide the value sought.

Many slot types are provided by Amazon, two of which have been used in this scenario. For “MonthStarted”, I’ve decided to create my own custom slot type, as the in-built “AMAZON.Month” slot type wasn’t strictly enforcing recognisable months. To create your own slot type, press the plus symbol on the left-hand side of the page next to Slot Types, then provide a name and description for your slot type. Select Restrict to Slot values and Synonyms, then enter each month and its abbreviation. Once completed, click Add slot to intent.

Once the custom slot type has been configured, it’s time to set up the slots for the intent. The screenshot below shows the slots that have been configured and the expected order to prompt the user.

The last step (for this blog post) is to have the bot verify that the information collected is correct. Tick the Confirmation Prompt box and in the Confirm text box provided, enter the following:

Just to confirm, your user ID is {UserID}, your Date of Birth is {DOB} and the month you started is {MonthStarted}. Is this correct?

For the Cancel text box, enter the following:

Sorry about that. Please call back and try again.

Be sure to leave the fulfillment to Return parameters to client and hit Save Intent.

Great! We’ve built the bare basics of our bot. It doesn’t really do much yet, but let’s take it for a spin anyway and get a feel for what to expect. In the top right-hand corner, there’s a build button. Go ahead and click the button. This takes some time, as building a bot triggers machine learning and creates the models for your bot. Once completed, the bot should be available to text or voice chat on the right side of the page. As you move through the prompts, you can see at the bottom the slots getting populated with the expected format. i.e. 14th April 1983 is converted to 1983-04-14.

So at the moment, our bot doesn’t do much but collect the information we need. Admittedly, the conversation is a bit robotic as well. In the next few blogs, we’ll give the bot a bit more of a personality, we’ll do some input validation, and we’ll begin to integrate with Active Directory. Once we’ve got our bot working as expected, we’ll bolt on Amazon Connect to allow users to dial in and converse with our new bot.

Ok Google Email me the status of all vms – Part 2

First published at https://nivleshc.wordpress.com

In my last blog, we configured the backend systems necessary for accomplishing the task of asking Google Home “OK Google Email me the status of all vms” and it sending us an email to that effect. If you haven’t finished doing that, please refer back to my last blog and get that done before continuing.

In this blog, we will configure Google Home.

Google Home uses Google Assistant to do all the smarts. You will be amazed at all the tasks that Google Home can do out of the box.

For our purposes, we will be using the platform IF This Then That or IFTTT for short. IFTTT is a very powerful platform as it lets you create actions based on triggers. This combination of triggers and actions is called a recipe.

Ok, let’s dig in and create our IFTTT recipe to accomplish our task.

1.1   Go to https://ifttt.com/ and create an account (if you don’t already have one)

1.2   Login to IFTTT and click on My Applets menu from the top

IFTTT_MyApplets_Menu

1.3   Next, click on New Applet (top right hand corner)

1.4   A new recipe template will be displayed. Click on the blue +this to choose a service

IFTTT_Reicipe_Step1

1.5   Under Choose a Service type “Google Assistant”

IFTTT_ChooseService

1.6   In the results Google Assistant will be displayed. Click on it

1.7   If you haven’t already connected IFTTT with Google Assistant, you will be asked to do so. When prompted, login with the Google account that is associated with your Google Home and then approve IFTTT to access it.

IFTTT_ConnectGA

1.8   The next step is to choose a trigger. Click on Say a simple phrase

IFTTT_ChooseTrigger

1.9   Now we will put in the phrases that Google Home should trigger on.

IFTTT_CompleteTrigger

For

  • What do you want to say? enter “email me the status of all vms”
  • What do you want the Assistant to say in response? enter “no worries, I will send you the email right away”

All the other sections are optional, however you can fill them if you prefer to do so

Click Create trigger

1.10   You will be returned to the recipe editor. To choose the action service, click on +that

IFTTT_That

1.11  Under Choose action service, type webhooks. From the results, click on Webhooks

IFTTT_ActionService

1.12   Then for Choose action click on Make a web request

IFTTT_Action_Choose

1.13   Next the Complete action fields screen is shown.

For

  • URL – paste the webhook url of the runbook that you had copied in the previous blog
  • Method – change this to POST
  • Content Type – change this to application/json

IFTTT_CompleteActionFields

Click Create action

1.14   In the next screen, click Finish

IFTTT_Review

 

Woo hoo. Everything is now complete. Let’s do some testing.

Go to your Google Home and say “email me the status of all vms”. Google Home should reply by saying “no worries. I will send you the email right away”.

I have noticed some delays in receiving the email, however the most I have had to wait for is 5 minutes. If this is unacceptable, in the runbook script, modify the Send-MailMessage command by adding the parameter -Priority High. This sends all emails with high priority, which should make things faster. Also, the runbook is currently running in Azure. Better performance might be achieved by using Hybrid Runbook Workers

To monitor the status of the automation jobs, or to access their logs, in the Azure Automation Account, click on Jobs in the left hand side menu. Clicking on any one of the jobs shown will provide more information about that particular job. This can be helpful during troubleshooting.

Automation_JobsLog

There you go. All done. I hope you enjoy this additional task you can now do with your Google Home.

If you don’t own a Google Home yet, you can do the above automation using Google Assistant as well.

Ok Google Email me the status of all vms – Part 1

First published at https://nivleshc.wordpress.com

Technology is evolving at a breathtaking pace. For instance, the phone in your pocket has more grunt than the desktop computers of 10 years ago!

One of the upcoming areas in Computing Science is Artificial Intelligence. What seemed science fiction in the days of Isaac Asimov, when he penned I, Robot, seems closer to reality now.

Lately the market is popping up with virtual assistants from the likes of Apple, Amazon and Google. These are “bots” that use Artificial Intelligence to help us with our daily lives, from telling us about the weather, to reminding us about our shopping lists or letting us know when our next train will be arriving. I still remember my first virtual assistant Prody Parrot, which hardly did much when you compare it to Siri, Alexa or Google Assistant.

I decided to test drive one of these virtual assistants, and so purchased a Google Home. First impressions, it is an awesome device with a lot of good things going for it. If only it came with a rechargeable battery instead of a wall charger, it would have been even more awesome. Well maybe in the next version (Google here’s a tip for your next version 😉 )

Having played with Google Home for a bit, I decided to look at ways of integrating it with Azure, and I was pleasantly surprised.

In this two-part blog, I will show you how you can use Google Home to send an email with the status of all your Azure virtual machines. This functionality can be extended to stop or start all virtual machines, however I would caution against doing this in your production environment, in case you turn off a machine that is running critical workloads.

In this first blog post, we will setup the backend systems to achieve the tasks and in the next blog post, we will connect it to Google Home.

The diagram below shows how we will achieve what we have set out to do.

Google Home Workflow

Below is a list of tasks that will happen

  1. Google Home will trigger when we say “Ok Google email me the status of all vms”
  2. As Google Home uses Google Assistant, it will pass the request to the IFTTT service
  3. IFTTT will then trigger the webhooks service to call a webhook url attached to an Azure Automation Runbook
  4. A job for the specified runbook will then be queued up in Azure Automation.
  5. The runbook job will then run, and obtain a status of all vms.
  6. The output will be emailed to the designated recipient

Ok, enough talking 😉 let’s start cracking.

1. Create an Azure AD Service Principal Account

In order to run our Azure Automation runbook, we need to create a security object for it to run under. This security object provides permissions to access the Azure resources. For our purposes, we will be using a service principal account.

Assuming you have already installed the Azure PowerShell module, run the following in a PowerShell session to login to Azure

Import-Module AzureRm
Login-AzureRmAccount

Next, to create an Azure AD Application, run the following command

$adApp = New-AzureRmADApplication -DisplayName "DisplayName" -HomePage "HomePage" -IdentifierUris "http://IdentifierUri" -Password "Password"

where

DisplayName is the display name for your AD Application eg “Google Home Automation”

HomePage is the home page for your application eg http://googlehome (or you can ignore the -HomePage parameter as it is optional)

IdentifierUri is the URI that identifies the application eg http://googleHomeAutomation

Password is the password you will give the service principal account

Now, let’s create the service principal for the Azure AD Application

New-AzureRmADServicePrincipal -ApplicationId $adApp.ApplicationId

Next, we will give the service principal account read access to the Azure tenant. If you need something more restrictive, please find the appropriate role from https://docs.microsoft.com/en-gb/azure/active-directory/role-based-access-built-in-roles

New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $adApp.ApplicationId

Great, the service principal account is now ready. The username for your service principal is its ApplicationId. To get the Application ID, run the following, providing the IdentifierUri that was supplied when creating it above

Get-AzureRmADApplication -IdentifierUri {identifierUri}

Just to be pedantic, let’s check to ensure we can log in to Azure using the newly created service principal account and the password. To test, run the following commands (when prompted, supply the username for the service principal account and the password that was set when it was created above)

$cred = Get-Credential 
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId {TenantId}

where TenantId is your Azure tenant’s ID

If everything was setup properly, you should now be logged in using the service principal account.

2. Create an Azure Automation Account

Next, we need an Azure Automation account.

2.1   Login to the Azure Portal and then click New

AzureMarketPlace_New

2.2   Then type Automation and click search. From the results click the following.

AzureMarketPlace_ResultsAutomation

2.3   In the next screen, click Create

2.4   Next, fill in the appropriate details and click Create

AutomationAccount_Details

3. Create a SendGrid Account

Unfortunately Azure doesn’t provide relay servers that can be used by scripts to email out. Instead you have to either use EOP (Exchange Online Protection) servers or SendGrid to achieve this. SendGrid is an Email Delivery Service that Azure provides, and you need to create an account to use it. For our purposes, we will use the free tier, which allows the delivery of 2500 emails per month, which is plenty for us.

3.1   In the Azure Portal, click New

AzureMarketPlace_New

3.2   Then search for SendGrid in the marketplace and click on the following result. Next click Create

AzureMarketPlace_ResultsSendGrid

3.3   In the next screen, for the pricing tier, select the free tier and then fill in the required details and click Create.

SendGridAccount_Details

4. Configure the Automation Account

Inside the Automation Account, we will be creating a Runbook that will contain our PowerShell script that will do all the work. The script will be using the Service Principal and SendGrid accounts. To ensure we don’t expose their credentials inside the PowerShell script, we will store them in the Automation Account under Credentials, and then access them from inside our PowerShell script.

4.1   Go into the Automation Account that you had created.

4.2   Under Shared Resource click Credentials

AutomationAccount_Credentials

4.3    Click on Add a credential and then fill in the details for the Service Principal account. Then click Create

Credentials_Details

4.4   Repeat step 4.3 above to add the SendGrid account

4.5   Now that the Credentials have been stored, under Process Automation click Runbooks

Automation_Runbooks

Then click Add a runbook and in the next screen click Create a new runbook

4.6   Give the runbook an appropriate name. Change the Runbook Type to PowerShell. Click Create

Runbook_Details

4.7   Once the Runbook has been created, paste the following script inside it, click on Save and then click on Publish

Import-Module Azure
$cred = Get-AutomationPSCredential -Name 'Service Principal account'
$mailerCred = Get-AutomationPSCredential -Name 'SendGrid account'

Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantID {tenantId}

$outputFile = $env:TEMP+ "\AzureVmStatus.html"
$vmarray = @()

#Get a list of all vms 
Write-Output "Getting a list of all VMs"
$vms = Get-AzureRmVM
$total_vms = $vms.count
Write-Output "Done. VMs Found $total_vms"

$index = 0
# Add info about VM's to the array
foreach ($vm in $vms){ 
 $index++
 Write-Output "Processing VM $index/$total_vms"
 # Get VM Status
 $vmstatus = Get-AzurermVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Status

# Add values to the array:
 $vmarray += New-Object PSObject -Property ([ordered]@{
 ResourceGroupName=$vm.ResourceGroupName
 Name=$vm.Name
 OSType=$vm.StorageProfile.OSDisk.OSType
 PowerState=(get-culture).TextInfo.ToTitleCase(($vmstatus.statuses)[1].code.split("/")[1])
 })
}
$vmarray | Sort-Object PowerState,OSType -Desc

Write-Output "Converting Output to HTML" 
$vmarray | Sort-Object PowerState,OSType -Desc | ConvertTo-Html | Out-File $outputFile
Write-Output "Converted"

$fromAddr = "senderEmailAddress"
$toAddr = "recipientEmailAddress"
$subject = "Azure VM Status as at " + (Get-Date).toString()
$smtpServer = "smtp.sendgrid.net"

Write-Output "Sending Email to $toAddr using server $smtpServer"
Send-MailMessage -Credential $mailerCred -From $fromAddr -To $toAddr -Subject $subject -Attachments $outputFile -SmtpServer $smtpServer -UseSsl
Write-Output "Email Sent"

where

  • ‘Service Principal Account’ and ‘SendGrid Account’ are the names of the credentials that were created in the Automation Account (include the ‘ ‘ around the name)
  • senderEmailAddress is the email address that the email will show it came from. Keep the domain of the email address the same as your Azure domain
  • recipientEmailAddress is the email address of the recipient who will receive the list of vms

4.8   Next, we will create a Webhook. A webhook is a special URL that will allow us to execute the above script without logging into the Azure Portal. Treat the webhook URL like a password since whoever possesses the webhook can execute the runbook without needing to provide any credentials.

Open the runbook that was just created and from the top menu click on Webhook

Webhook_menu

4.9   In the next screen click Create new webhook

4.10  A security message will be displayed informing that once the webhook has been created, the URL will not be shown anywhere in the Azure Portal. IT IS EXTREMELY IMPORTANT THAT YOU COPY THE WEBHOOK URL BEFORE PRESSING THE OK BUTTON.

Enter a name for the webhook and when you want the webhook to expire. Copy the webhook URL and paste it somewhere safe. Then click OK.

Once the webhook has expired, you can’t use it to trigger the runbook, however before it expires, you can change the expiry date. For security reasons, it is recommended that you don’t keep the webhook alive for a long period of time.

Webhook_details
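Before handing the webhook over to IFTTT in the next blog, you can sanity-check it straight from PowerShell (assuming $webhookUrl holds the URL you just copied):

$webhookUrl = "https://s1events.azure-automation.net/webhooks?token=********"

# A POST with no body is enough to queue the runbook; the response contains the job id
Invoke-RestMethod -Method Post -Uri $webhookUrl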

That’s it folks! The stage has been set and we have successfully configured the backend systems to handle our task. Give yourselves a big pat on the back.

Follow me to the next blog, where we will use the above with IFTTT, to bring it all together so that when we say “OK Google, email me the status of all vms”, an email is sent out to us with the status of all the vms 😉

I will see you in Part 2 of this blog. Ciao 😉

Scripting the generation & creation of Microsoft Identity Manager Sets/Workflows/Sync & Management Policy Rules with the Lithnet Resource Management PowerShell Module

Introduction

Yes, that title is quite a mouthful. And this post is going to be quite long. But worth the read if you are having to create a number of rules in Microsoft/Forefront Identity Manager, or even more so the same rule in multiple environments (eg. Dev, Staging, Production).

My colleague David Minnelli introduced using the Lithnet RMA PowerShell Module and the Import-RMConfig cmdlet recently for bulk creation of MIM Sets and MPR’s. David has a lot of the background on Import-RMConfig and getting started with it. Give that a read for a more detailed background.

In this post I detail using Import-RMConfig to create a Set, Workflow, Synchronization Rule and Management Policy Rule to populate a Development AD Domain with Users from a Production AD Domain. This process is designed to run on a combined MIM Service/Sync Server. If your roles are separated (as they likely will be in a Production environment) you will need to run these scripts on the MIM Sync Server (so it can query the Management Agents), and you will need to add a line to connect to the MIM Service (eg. Set-ResourceManagementClient) at the beginning of the script.

In my environment I have two Active Directory Management Agents, each connected to an AD Forest as shown below.

LithnetImportRMConfig.png

On each of the AD MA’s I have a Constant Flow Attribute (named Source) configured to flow in a value representing the source AD Forest. I’m doing this in my environment as I have more than one production forest (hence the need for automation). You could simply use the Domain attribute for the Set criteria. That attribute is used in the Set later on. I’m mentioning it up front so it makes sense.

Overview

The Import-RMConfig cmdlet uses XML and XOML files that contain the configuration required to create the Set, Workflow, Sync Rule and MPR in the FIM/MIM Service. The order in which I approach the creation is: Sync Rule, Workflow, Set and finally the MPR.

Each of these objects, as indicated above, leverages an XML and/or XOML input file. I’ve simplified base templates and included them in the scripts.

The Sync Rule Script includes a prompt to choose a folder (you can create one through the GUI presented) to store the XML/XOML files to allow the Import-RMConfig to use them. Once generated you can simply reference the files with Import-RMConfig to replicate the creation in another environment.

SelectFolder.PNG

Creating the Synchronization Rule

For creation of the Sync Rule we need to define which Management Agent will be the target for our Sync Rule. In my script I’ve automated that too (as I have a number to do). I’m querying the MIM Sync Server for all its Active Directory MA’s and then providing a dialog to allow you to choose the target MA for the Sync Rule. That dialog simply looks like the one below.

TargetMA.PNG
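For reference, the Management Agent query is just WMI under the hood. A minimal sketch, run on the MIM Sync Server (the Out-GridView picker stands in for the dialog used in my script):

# Get all Management Agents from the local Sync Server and keep the AD ones
$adMAs = Get-WmiObject -Namespace root/MicrosoftIdentityIntegrationServer -Class MIIS_ManagementAgent |
    Where-Object { $_.Type -like "Active Directory*" }

# Let the operator pick the target MA for the Sync Rule
$targetMA = $adMAs | Out-GridView -Title "Choose the target Management Agent" -OutputMode Single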

Creating the Sync Rule will finally ask you to give the Rule a name. This name will then be used as the base Display Name for the Set, MPR and Workflow (and a truncated version as the Rule IDs).

SyncRuleName.PNG

The script below in the $SyncRuleXML section defines the rules of the Sync Rule. Mine is an Outbound Sync Rule, with a base set of attributes, transforming the user’s UPN and DN (for the differing Development AD namespace). Update lines 42 and 45 of the script for the user’s UPN and DN in your namespace.

Creating the Workflow

The Workflow script is pretty self-explanatory: a simple Action-based workflow, shown below.

Creating the Set

The Set is the group of objects that will be synchronized to the target management agent. As my Sync Rule is only for Users, my Set also only contains users. As stated in the Overview, I have an attribute that defines the authoritative source for the objects, and I’m also using that in my Set criteria.
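To give a sense of what Import-RMConfig consumes, a criteria-based Set in the Lithnet configuration XML looks roughly like this — the id, display name and Source value are mine, so check the Lithnet documentation for the exact schema:

<Lithnet.ResourceManagement.ConfigSync>
  <Operations>
    <ResourceOperation operation="Add Update" resourceType="Set" id="ProvisionDevADSet">
      <AnchorAttributes>
        <AnchorAttribute>DisplayName</AnchorAttribute>
      </AnchorAttributes>
      <AttributeOperations>
        <AttributeOperation operation="replace" name="DisplayName">!__Provision Dev AD Users</AttributeOperation>
        <!-- Criteria-based membership: all people whose Source attribute is the production forest -->
        <AttributeOperation operation="replace" name="Filter"><![CDATA[<Filter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Dialect="http://schemas.microsoft.com/2006/11/XPathFilterDialect">/Person[Source = 'ProdForest']</Filter>]]></AttributeOperation>
      </AttributeOperations>
    </ResourceOperation>
  </Operations>
</Lithnet.ResourceManagement.ConfigSync>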

Creating the Management Policy Rule

The MPR ties everything together. Here’s that part of the script.

Tying them all together

Here is the end-to-end automation, and the raw script that you could use as the basis for automating similar rule generation. The Sync Rule could easily be updated for Contacts or Groups. Remember, the attributes and object classes are case sensitive.

  • Through the Browse for Folder dialog I created a new folder named ProvisionDevAD

ProvDevAD.PNG

  • I provided a Display Name for the rules

RuleDisplayname.PNG

  • I chose the target Management Agent

TargetMA.PNG

  • The SyncRule, Workflow, Set and MPR are created. The whole thing takes seconds.

Importing Rules.PNG

  • Script Complete

Complete.PNG

Let’s take a look at the completed objects through the MIM Portal.

Sync Rule

The Sync Rule is present as we named it. Including the !__ prefix so it appears at the top of the list.

Sync Rule.PNG

Outbound Sync Rule based on a MPR, Set and Workflow

Sync Rule 2.PNG

The Resources will be created and, if deleted, deprovisioned.

Sync Rule 3.PNG

And our base attribute flows.

Sync Rule 4

Set

Our Set was created.

Set1.png

Our naming aligns with what we input

Set2.png

And a Criteria based Set. As per the Overview, I have an attribute populated by a Constant flow rule that I based my Set on. You’ll want to update this for your environment.

Set3.png

Workflow

The Action Workflow was created

Workflow1.PNG

All looks great

Workflow2.PNG

And it applies our Sync Rule

Workflow3.PNG

MPR

And finally our MPR. Created as a Transition In MPR with Action Workflow

MPR1

Set Transition and naming all aligned

MPR2

The Transition Set configured for the Set that was created

MPR3

And the Workflow configured with the Workflow that was just created

MPR4

Summary

When you have a lot of Sync Rules to create, or you know you will need to re-create them numerous times, potentially in different environments, automation is key. This just scratches the surface on what can be achieved, and made so much easier using the Lithnet PowerShell Modules.

Here is the full script. Note: you’ll need to make a couple of minor changes as indicated earlier, but you should be able to create a Provisioning Rule end to end pretty quickly to validate the process. Then customise accordingly for your environment and requirements. Enjoy.

Creating Accounts on Azure SQL Database through PowerShell Automation

In the previous post, Azure SQL Pro Tip – Creating Login Account and User, we briefly walked through how to create login accounts on Azure SQL Database through SSMS. Using SSMS is of course a very convenient way. However, as a DevOps engineer, I want to automate this process through PowerShell. In this post, we’re going to walk through how to achieve this goal.

Step #1: Create Azure SQL Database

First of all, we need an Azure SQL Database. This can easily be done by running an ARM template in PowerShell like:
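A minimal sketch, assuming an azuredeploy.json template (and its parameters file) sitting next to the script:

Login-AzureRmAccount
New-AzureRmResourceGroup -Name "my-sql-rg" -Location "Australia East"

# Deploy the ARM template that defines the Azure SQL server and database
New-AzureRmResourceGroupDeployment -Name "sqldb-deployment" `
    -ResourceGroupName "my-sql-rg" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json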

We’re not going to dig into this further, as it’s beyond our topic. Now, we’ve got an Azure SQL Database.

Step #2: Create SQL Script for Login Account

In the previous post, we used the following SQL script:
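That script isn’t embedded in this copy; it was along these lines, with the login name and password hard-coded:

CREATE LOGIN loginname WITH PASSWORD = 'p@ssw0rd!';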

Now, we’re going to automate this process by providing username and password as parameters to an SQL script. The main part of the script above is CREATE LOGIN ..., so we slightly modify it like:
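A sketch of the modified script, so the values can be supplied from outside:

CREATE LOGIN @Username WITH PASSWORD = @Password;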

Now the SQL script is ready.

Step #3: Create PowerShell Script for Login Account

We need to execute this in PowerShell. Look at the following PowerShell script:
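A sketch of that script — the server name and credentials are placeholders, and the SQL file is the one we just prepared:

$serverName = "my-sql-server"
$adminUser = "sqladmin"
$adminPassword = "********"

# Logins live in the master database, so connect there
$connectionString = "Server=tcp:$serverName.database.windows.net,1433;Database=master;" +
                    "User ID=$adminUser;Password=$adminPassword;Encrypt=True;"
$sql = Get-Content .\create-login.sql -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($sql, $connection)
$command.Parameters.AddWithValue("@Username", "loginname") | Out-Null
$command.Parameters.AddWithValue("@Password", "p@ssw0rd!") | Out-Null

$connection.Open()
$command.ExecuteNonQuery()
$connection.Close()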

Looks familiar? Yes, indeed. It’s basically the same as using ADO.NET in ASP.NET applications. Let’s run this PowerShell script. Woops! Something went wrong. We can’t run the SQL script. What’s happening?

Step #4: Update SQL Script for Login Account

CREATE LOGIN won’t take variables. In other words, the SQL script above will never work unless it’s modified. In this case, much as we’d rather not, we have to use dynamic SQL, which is ugly. Therefore, let’s update the SQL script:
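Roughly like this — the parameter values are concatenated into a statement that is then executed with sp_executesql:

-- NB: the inputs are trusted here; in anything shared, sanitise or QUOTENAME them
DECLARE @sql nvarchar(max) = N'CREATE LOGIN [' + @Username + N'] WITH PASSWORD = ''' + @Password + N''''
EXEC sp_executesql @sql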

Then run the PowerShell script again and it will work. Please note that using dynamic SQL here isn’t a big issue, as these scripts are not exposed to the public anyway.

Step #5: Update SQL Script for User Login

In a similar way, we need to create a user in the Azure SQL Database. This also requires dynamic SQL like:
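Along these lines (run against the target database rather than master):

DECLARE @sql nvarchar(max) = N'CREATE USER [' + @Username + N'] FOR LOGIN [' + @Username + N']'
EXEC sp_executesql @sql

SET @sql = N'ALTER ROLE [db_owner] ADD MEMBER [' + @Username + N']'
EXEC sp_executesql @sql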

This is to create a user with a db_owner role. In order for the user to have only limited permissions, use the following dynamic SQL script:
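For example, granting read and write access only:

DECLARE @sql nvarchar(max) = N'CREATE USER [' + @Username + N'] FOR LOGIN [' + @Username + N']'
EXEC sp_executesql @sql

SET @sql = N'ALTER ROLE [db_datareader] ADD MEMBER [' + @Username + N']'
EXEC sp_executesql @sql

SET @sql = N'ALTER ROLE [db_datawriter] ADD MEMBER [' + @Username + N']'
EXEC sp_executesql @sql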

Step #6: Modify PowerShell Script for User Login

In order to run the SQL script right above, run the following PowerShell script:
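It’s the same ADO.NET pattern as before; the only real differences are the target database in the connection string (users, unlike logins, live in the individual database) and the script being run:

$connectionString = "Server=tcp:$serverName.database.windows.net,1433;Database=$databaseName;" +
                    "User ID=$adminUser;Password=$adminPassword;Encrypt=True;"
$sql = Get-Content .\create-user.sql -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($sql, $connection)
$command.Parameters.AddWithValue("@Username", "loginname") | Out-Null

$connection.Open()
$command.ExecuteNonQuery()
$connection.Close()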

So far, we have walked through how we can use PowerShell scripts to create login accounts and user logins on Azure SQL Database. With this approach, DevOps engineers will easily be able to create accounts on Azure SQL Database by running a PowerShell script on their build server or deployment server.

Programmatically interacting with Yammer via PowerShell – Part 2

In my last post I foolishly said that part 2 would be ‘coming in the next few days’. This of course didn’t happen, but I guess it’s better late than never!

In part 1, which is available here, I wrote about how it was possible to post to a Yammer group via a *.ps1 using a ‘Yammer Verified Admin’ account. While this worked a treat, it soon became apparent that this approach had limited productivity rewards. Instead, I wanted to create groups and add users to these groups, all while providing minimal inputs.

Firstly, there isn’t a documented create-group json, but a quick hunt around the tinterweb with Google helped me uncover groups.json. This simply needs a name and whether the group is open or closed (open = $false, closed = $true). So building on my example from Part 1, the below code should create a new group…

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$Token = "AyD********NB65i2LidQ"
$Group = "Posting to Yammer Group"
$GroupType = $True
$CreateGroupUri = "https://www.yammer.com/api/v1/groups.json?name=$Group&private=$GroupType"

    $Headers = @{
        "Accept" = "*/*"
        "Authorization" = "Bearer "+$Token
        "accept-encoding" = "gzip"
        "content-type" = "application/json"
         }

Invoke-WebRequest -Method POST -Uri $CreateGroupUri -Header $Headers
    You’ll notice I’ve moved away from Invoke-RestMethod to Invoke-WebRequest. This is due to finding a bug where the script would hang and eventually time out, which is detailed in this link.

All going well, you should end up with a new group which has your ‘Yammer Verified Admin’ as the sole member ala…

YammerGroup

Created Yammer Group

Great, but as I’ve just highlighted, there is only one person in that group, and that’s the admin account we’ve been using. To add other Yammer registered users to the group we need to impersonate. This is only possible via a ‘Yammer Verified Admin’ account for obvious chaos avoiding reasons. So firstly you need to grab the token of the user…

$GetUsersUri = "https://www.yammer.com/api/v1/users.json"
$YammerUPN = "dave.young@daveswebsite.com"
$YammerUsers = (Invoke-WebRequest -Uri $GetUsersUri -Method Get -Headers $Headers).content | ConvertFrom-Json

foreach ($YammerUser in $YammerUsers)
    {
    if ($YammerUser.email -eq $YammerUPN)
        {
        $YammerUserId = $YammerUser.id
        }
    }

$GetUserTokUri = "https://www.yammer.com/api/v1/oauth/tokens.json?user_id=$YammerUserId&consumer_key=$clientID"
$YammerUserDave = (Invoke-WebRequest -Uri $GetUserTokUri -Method Get -Headers $Headers).content | ConvertFrom-Json

To step you through the code: I’ve changed the uri to users.json, provided the UPN of the user that I want to impersonate, and I’m using the headers from the previously provided code. I grab all the users into the $YammerUsers variable and then do a foreach/if to obtain the id of the user. Now we’ve got that, we can use tokens.json to perform a Get request. This will bring back a lot of information about the user, but most importantly you’ll get the token!

    user_id : 154**24726
    network_id : 20**148
    network_permalink : daveswebsite.com
    network_name : daveswebsite.com
    token : 18Lz3********Nu0JlvXYA
    secret : Wn9ab********kellNnQgvSfbGJjBfRMWZNICW0JTA
    view_members : True
    view_groups : True
    view_messages : True
    view_subscriptions : True
    modify_subscriptions : True
    modify_messages : True
    view_tags : True
    created_at : 2015/06/15 23:59:19 +0000
    authorized_at : 2015/06/15 23:59:19 +0000
    expires_at :

Storing the token in the $UserToken variable allows you to append it to the Authorization within the Headers so you can impersonate/authenticate on behalf of the user. The code looks like…

$UserToken = $YammerUserDave.token
$YammerGroupId = "61***91"

 $UserHeaders = @{
                "Accept" = "*/*"
                "Authorization" = "Bearer "+$UserToken
                "accept-encoding" = "gzip"
                "content-type" = "application/json"
                }

$PostGroupUri = "https://www.yammer.com/api/v1/group_memberships.json?group_id=$YammerGroupId"
$AddYammerUser = Invoke-WebRequest -Uri $PostGroupUri -Method Post -Headers $UserHeaders

So, using the group that we created earlier and the correct variables, we can then successfully add the user to the group…

DaveGroup

Dave in the group

Something to be mindful of, when you pull the groups or the users it will be done in pages of 50. I found using a Do/While worked nicely to build up the variables so they could then be queried, like this…

If ($YammerGroups.Count -eq 50)
    {
    $GroupCycle = 1
    DO
        {
        $GetMoreGroupsUri = "https://www.yammer.com/api/v1/groups.json?page=$GroupCycle"
        $MoreYammerGroups = (Invoke-WebRequest -Uri $GetMoreGroupsUri -Method Get -Headers $AdminHeaders).content | ConvertFrom-Json    
        $YammerGroups += $MoreYammerGroups
        $GroupCycle ++
        $GroupCount = $YammerGroups.Count
        }
    While ($MoreYammerGroups.Count -gt 0)
    }

Once you’ve got your head around this, the rest of the jsons on the REST API are really quite useful; my only gripe right now is that they are really missing a delete-group json – hopefully it’ll be out soon!

Cheers,

Dave

Programmatically interacting with Yammer via PowerShell – Part 1

For my latest project I was asked to automate some Yammer activity. I’m the first to concede that I don’t have much of a Dev background, but I instantly fired up PowerShell ISE in tandem with Google only to find…well not a lot! After a couple of weeks fighting with a steep learning curve, I thought it best to blog my findings, it’s good to share ‘n all that!

    It’s worth mentioning at the outset, if you want to test this out you’ll need an E3 Office 365 Trial and a custom domain. It’s possible to trial Yammer, but not with the default *.onmicrosoft.com domain.

First things first, there isn’t a PowerShell Module for Yammer. I suspect it’s on the todo list over in Redmond since their 2012 acquisition. So instead, the REST API is our interaction point. There is some very useful documentation along with examples of the json queries over at the Yammer developer site, linked here.

The site also covers the basics of how to interact using the REST API. Following the instructions, you’ll want to register your own application. This is covered perfectly in the link here.

When registering you’ll need to provide an Expected Redirect. For this I simply put my Yammer site address, https://www.yammer.com/daveswebsite.com. For the purposes of my testing, I’ve not had any issues with this setting. This URL is important and you’ll need it later, so make sure to take a note of it. From the registration you should also grab your Client ID & Client secret.

While we’ve got what appear to be the necessary tools to authenticate, we actually need to follow some steps to retrieve our Admin token.

    It is key to point out that I use the Yammer Verified Admin. This will be more critical to follow for Part 2 of my post, but it’s always good to start as you mean to go on!

So the following script will load Internet Explorer and compile the necessary URL. You will of course simply change the entries in the variables to the ones you created during your app registration. I have obfuscated some of the details in my examples, for obvious reasons 🙂

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$RedirURL = "https://www.yammer.com/daveswebsite.com"

$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
$ie.Navigate2("https://www.yammer.com/dialog/oauth?client_id=$clientID&redirect_uri=$RedirURL")

From the IE Window, you should login with your Yammer Verified Admin and authorise the app. Once logged in, proceed to this additional code…

$UrlTidy = $ie.LocationURL -match 'code=(......................)'; $Authcode = $Matches[1]
$ie = New-Object -ComObject internetexplorer.application
$ie.Visible = $true
$ie.Navigate2("https://www.yammer.com/oauth2/access_token.json?client_id=$clientID&client_secret=$clientsecret&code=$Authcode")

This script simply captures the 302 return and extracts the $Authcode which is required for the token request. It will then launch an additional Internet Explorer session and prompt you to download an access_token.json. Within here you will find your Admin Token which does not expire and can be used for all further admin tasks. I found it useful to load this into a variable using the code below…

$Openjson = $(Get-Content 'C:\Tokens\access_token.json' ) -join "`n" | ConvertFrom-Json
$token = $Openjson.access_token.token

Ok, so we seem to be getting somewhere, but our Yammer page is still looking rather empty! Now that all the prerequisites are complete, we can make our first post. A good example json to use is posting a message, which is detailed in the link here.

I started with this one mainly because all we need is the Group_ID of one of the groups in Yammer and the message body in json format. I created a group manually and then just grabbed the Group_ID from the end of the URL in my browser. I have provided an example below…

$uri = "https://www.yammer.com/api/v1/messages.json"

$Payloadjson = '{
"body": "Posting to Yammer!",
"group_id": 59***60
}'

$Headers = @{
"Accept" = "*/*"
"Authorization" = "Bearer "+$token
"accept-encoding" = "gzip"
"content-type"="application/json"
}

Invoke-RestMethod -Method Post -Uri $uri -Header $Headers -Body $Payloadjson
Yammer Result

Yammer Post

    It’s at this stage you’ll notice that I’ve only used my second cmdlet, Invoke-RestMethod. Both this and ConvertFrom-Json were introduced in PowerShell 3.0 and specifically designed for REST web services like this.

A key point to highlight here is the Authorisation attribute in the $Headers. This is where the $Token is passed to Yammer for authentication. Furthermore, this $Header construct is all you need going forward. It’s simply a case of changing the -Method, the $uri and the $Payload and you can play around with all the different json queries listed on the Yammer Site.

While this was useful for me, it soon became apparent that I wanted to perform actions on behalf of other users. This is something I’ll look to cover in Part 2 of this Blog, coming in the next few days!