In my previous article I explained the basic steps for creating a PowerShell module that you can upload to Azure Automation. Now, being pedantic by nature, I see a lot of scripts of really poor quality, and as I write modules quite regularly, I want to share some useful tips that will turn a rubbish module into something significantly better.
Previously, I wrote a basic module along these lines (the exact compliment text is just an example):
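function New-Compliment {
    param(
        $name
    )
    # The single action this module performs
    Write-Host "Hello $name, what a nice haircut you have today!"
}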

You can run the module by using the command New-Compliment -name "someone". However, the power of PowerShell is in its object-oriented nature and pipelining, so we really need to spruce this up so that it becomes a better-quality module. Note that it'll still be an absurdly useless module, but it'll give you an idea of how to write them.

The parameters section

As you can see at the top, we have a parameter block. That should be standard, but we are not providing any additional information about the parameter itself, nor do we specify any rules for it. That's no good. Right now you cannot use pipelining, which limits the module's usability. Always write your modules with pipelining in mind. It's simply good practice and makes for much more powerful use later on.
So, decorate the variable $name with a parameter attribute: [Parameter()]$name
This doesn’t do anything and won’t make a difference yet, but now we can start providing some additional settings for the parameters and that’s where it becomes more useful.
So make the following change: [Parameter(Mandatory=$true)]$name. Now we have made this a mandatory parameter. If it is not provided, PowerShell will prompt for it (or raise an error when running non-interactively). Good, but still no pipelining.
So, do the following: [Parameter(Mandatory=$true, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]$name. The two pipeline options do different things. The first, ValueFromPipeline, essentially states that whatever is sent through the pipeline goes into this parameter. Logically, you can really only do this once. If the pipeline contains something with multiple values (like an object), this won't work because it won't match a simple string parameter. That's what the second option, ValueFromPipelineByPropertyName, is for. If you send an object through the pipeline, and that object has a property called "name", then it will automatically be mapped to this $name parameter. You want to set this option on almost all your parameters. If you do that, you can run your command many times based on the output of an Import-Csv (where you need a column called 'name').
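Putting that together, the function now looks something like this (the compliment line is still just an example):

function New-Compliment {
    param(
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true)]
        $name
    )
    Write-Host "Hello $name, what a nice haircut you have today!"
}

# With a names.csv that has a 'name' column, each row maps onto -name
# (combine this with the process block from the next section):
Import-Csv .\names.csv | New-Compliment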

Pipelining arrays

With the above changes you can run the following:
$name = "Ward"
$name | New-Compliment
However, what happens if you create an array of names and pipeline that?
$names = @("Ward","James","Chris")
$names | New-Compliment
As you'll see, only the last name in the array is used. That's not what we want. We want each entry in the array to be processed. The best way of doing that is to use begin, process and end blocks in the function. I rarely see people using these, which is unfortunate, as this is really useful functionality.
So, in the main section of the function, create the following three sections:
Begin { }
Process { }
End { }
Now, move the Write-Host line into the curly braces of the Process section. Test it, and suddenly our command runs once for each item in the array.
What happened? Quite simply, the Begin section contains code that you want to run only once. It isn't always used, but it can be useful for checking certain things before proceeding, such as whether or not there is connectivity. The Process section runs once for each item in the pipeline. As our array contains 3 items, it runs 3 times. The End section is like the Begin section in that it runs only once, after the Process section has completed.
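With the Write-Host line moved into the process block, the function now looks something like this:

function New-Compliment {
    param(
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true)]
        $name
    )
    Begin {
        # Runs once, before any pipeline input is processed
    }
    Process {
        # Runs once for every item coming through the pipeline
        Write-Host "Hello $name, what a nice haircut you have today!"
    }
    End {
        # Runs once, after all pipeline input has been processed
    }
}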
Our module is already starting to look a bit more professional.

Adding help

The next overlooked thing is adding help information. It's easy to skip. Don't! Six months from now, you will have forgotten how to use your module, and this help information is going to be really useful. Adding help is easy: add a comment section right before the function in the module file. If you have multiple functions, you'll add this separately for each function.
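A comment-based help block could look something like this (the wording is just a sketch):

<#
.SYNOPSIS
Compliments the specified person.

.DESCRIPTION
Writes a compliment for each name it receives, either through
the -name parameter or through the pipeline.

.PARAMETER name
The name of the person to compliment.

.EXAMPLE
New-Compliment -name "Ward"

.EXAMPLE
Import-Csv .\names.csv | New-Compliment
#>

Once this is in place, Get-Help New-Compliment shows your documentation just like any built-in cmdlet.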

Avoiding disasters

What if, instead of a compliment function, we had created an insult function? Perhaps we want to make sure that before you run it, you are really sure. So, we want confirmation before proceeding. You'd do this for any command that does something destructive.
Fortunately, PowerShell contains all we need, you just need to enable it. So, add the following line right before your param block:
[CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
Next, in your process block, you want to put an if statement around the actual actions so that they are only carried out when there is a positive confirmation. The standard pattern uses $PSCmdlet.ShouldProcess, sketched here with our example compliment:
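Process {
    if ($PSCmdlet.ShouldProcess($name)) {
        # Only reached after a positive confirmation
        Write-Host "Hello $name, what a nice haircut you have today!"
    }
}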

If you run the function now, you'll be asked to confirm. If you include this in a script, you can override the confirmation with -Confirm:$false, for example:
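New-Compliment -name "Ward" -Confirm:$false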

Lastly, exporting functions

In a larger module, you will have functions that are only used by the module itself. These functions are not meant for end users; they shouldn't even be able to run them. In a module we can actually specify which functions are available, through the Export-ModuleMember command. Add the following to the end of the module file:
Export-ModuleMember -Function New-Compliment
The complete module now looks something like this:
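<#
.SYNOPSIS
Compliments the specified person.

.DESCRIPTION
Writes a compliment for each name it receives, either through
the -name parameter or through the pipeline.

.PARAMETER name
The name of the person to compliment.

.EXAMPLE
New-Compliment -name "Ward"

.EXAMPLE
Import-Csv .\names.csv | New-Compliment
#>
function New-Compliment {
    [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
    param(
        [Parameter(Mandatory=$true,
                   ValueFromPipeline=$true,
                   ValueFromPipelineByPropertyName=$true)]
        $name
    )
    Begin { }
    Process {
        if ($PSCmdlet.ShouldProcess($name)) {
            Write-Host "Hello $name, what a nice haircut you have today!"
        }
    }
    End { }
}

Export-ModuleMember -Function New-Compliment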

You can use this as a template for a PowerShell module: there's only one line that actually does something, so change that instruction and adjust the parameters. I know that some of this stuff isn't needed in Azure, but it's better to build good habits.
