Replacing Personal Privileged Accounts with Shareable Broker Accounts

Introduction

Most organizations still use personal privileged accounts on their corporate platforms and applications. Managing and monitoring these accounts is very challenging, as they give unrestricted access to the most valuable systems in the organization, and effective procedures for managing them are extremely difficult to maintain without specialized tools.

The CyberArk Privileged Account Management (PAM) solution enables these organizations to secure, provision, manage, control and monitor all activities associated with the privileged accounts present in their IT landscape.

One of the primary goals of implementing a Privileged Account Management solution is to replace personal privileged accounts with shareable broker accounts. This drastically reduces the total number of privileged accounts for each application and system, and the broker accounts also gain the other benefits of the CyberArk PAM solution, for example one-time passwords, enforcement of the corporate password policy, and tamper-proof audit trails.

Replacing AD Personal Privileged Accounts with Shareable Broker Accounts

The typical CyberArk approach to replacing Active Directory personal privileged accounts with shareable broker accounts is depicted in the picture below.


Note: assume all green line connectors represent the customization needed to implement this use case.

1) In this scenario, two new AD shared accounts (App_Broker_Acc1|2) are created and added as members of the domain admin groups. After this implementation, all the existing personal privileged accounts that are members of this group (e.g. S-XXXX, S-YYYY) can be disabled.


2) A new AD group (PAM_Domain Admins) is created specifically to map users' normal AD IDs to a CyberArk Safe (Safes are logical containers within the CyberArk Vault). This gives the end users (289705, 289706) access to fetch the password and initiate a session to the target platforms.

3) The administrators' normal AD IDs are added as members of the newly created AD group for PAM.


4) A Safe (AD_Domain Admins_Safe) is created in CyberArk. The AD group (PAM_Domain Admins) created in step 2 is added as a member of this Safe with the required permissions enabled.


5) On-board the shared accounts created in step 1 into CyberArk. These accounts are stored in the Safe created in step 4.


6) Administrators can now log on to the CyberArk web portal (PVWA) using their normal AD ID and connect to the target platform by selecting a broker account, without knowing its credentials.


7) The session is initiated through the shareable broker account without the end user ever knowing its password.


Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well why would you want to do that?

  • On-demand compute:
    • You may not have access to a physical server, or your computer isn’t powerful enough to handle huge data processing, or you definitely don’t want to wait in the office for the task to complete before leaving on a Friday evening.
  • You pay by the minute:
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it’s super cheap.
  • It’s super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details around that. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying a SQL database, whether it’s in Azure or not, isn’t that complex. In fact, this part of the post is just to get us started. For this part we’re going to do something simple, because if you want to get things done, you need the fastest way of doing it. And that is what we are going to do. But here’s a quick summary of the ways I thought of doing it:

    1. Using ‘invoke-sqlcmd2‘. This part of the blog: it’s super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object to support more complex SQL querying scenarios. [[ This is where the magic kicks in: Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL Database called ‘myDB‘, inside an Azure SQL Server ‘mytestAzureSQL.database.windows.net‘.
  • It’s a simple database containing a single table ‘test_table’. This table has three columns (Id, Name, Age) and contains only two records.
  • We’ve set up ‘Allow Azure Services‘ access on this database in the firewall rules. Here’s how to do that, just in case:
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON‘.
  • We have an Azure Automation account set up. We’ll be using that to test our code.

Now let’s get this up and running

Start by creating two variables, one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL login username and password. You need this because you definitely should not be storing your password in plain text in the script editor.
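
If you prefer to script this setup instead of clicking through the portal, a rough sketch using the AzureRM.Automation cmdlets could look like the following (the resource group and Automation account names are placeholders; the variable and credential names match the ones referenced later in this post):

#Placeholder names: adjust to your own resource group and Automation account
$rg = "myResourceGroup"
$account = "myAutomationAccount"

#Variables holding the SQL server and database names
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account -Name "AzureSQL_ServerName" -Value "mytestAzureSQL.database.windows.net" -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account -Name "AzureSQL_DBname" -Value "myDB" -Encrypted $false

#Credential object holding the SQL login username and password
$sqlCred = Get-Credential
New-AzureRmAutomationCredential -ResourceGroupName $rg -AutomationAccountName $account -Name "mySqllogin" -Value $sqlCred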

I still see people storing passwords in plain text inside scripts.

Now you need to import the ‘invoke-sqlcmd2‘ module in the automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now, from the main menu of the automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘. Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook: click on the pencil icon in the top menu to get into the editing pane.

Inside the pane, paste the following code (I’ll go through the details, don’t worry).

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

#Import the SQL Server name from the Automation variable
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

#Import the SQL database name from the Automation variable
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"

    • The first cmdlet ‘Get-AutomationPSCredential‘ retrieves the Automation credential object we just created.
    • The next two ‘Get-AutomationVariable‘ cmdlets retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now let’s query our database. To do that, paste the below code after the section above.

#Query to execute
$Query = "select * from Test_Table"

"----- Test Result BEGIN "

# Invoke to Azure SQL DB
invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

"`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is very simple, but you can put whatever you want in there.
    • We’ve executed the cmdlet ‘invoke-sqlcmd2‘. If you noticed, we didn’t have to import the module we’ve just installed; Azure Automation takes care of that on every execution.
    • In the cmdlet parameter set, we specified the SQL server (retrieved from the automation variable) and the database name (an automation variable too). We then used the credential object we imported from Azure Automation, and finally the query variable we created. An optional switch parameter ‘-Encrypt’ can be used to encrypt the connection to the SQL server.

Let’s run the code and look at the output!

From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.

Initially the code goes through the following stages for execution

  • Queuing
  • Starting
  • Running
  • Completed

Now what’s the final result? Look at the black box and you should see something like this.

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet, right? The output we’re getting here is an object of type ‘DataRow‘. If you wrap this query in a variable, you can start to do some cool stuff with it, like

$Result.count or

$Result.Age or even

$Result | where-object -Filterscript {$PSItem.Age -gt 10}

Now imagine if you could do so much more complex things with this.
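
Putting those pieces together, a minimal sketch (reusing the variables defined earlier in the runbook) could look like this:

#Capture the query output in a variable instead of writing it straight to the output stream
$Result = invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

$Result.count                                             #number of rows returned
$Result.Age                                               #just the Age column
$Result | where-object -Filterscript {$PSItem.Age -gt 10} #rows where Age is greater than 10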

Quick Hint:

If you include the ‘-debug’ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

 

Enabling Source Control for locally stored code using Git, Visual Studio Code and Sourcetree

First published at https://nivleshc.wordpress.com

Introduction

Coming from a system administration background, I am used to writing scripts to get mundane tasks done. Whenever I saw repeatable tasks, I saw an opportunity to script them, and pass them onto a junior to do 😉

However, writing scripts brings about its own challenges.

Ok, time to fess up 😉 Hands up those that have modified a script, only to realise that the modifications broke it! To make matters worse, you forgot to take a copy of the original!

Don’t worry, I have been in that boat, and can remember the countless hours I spent getting the script back to what it was (mind you, I am not talking about a formal business change here, which is governed by strict change control, but about personal scripts that you have created to make your daily tasks easier).

To make a copy of a script, I would normally suffix the file with the current time and date. This provided me with a timestamp of when I changed the file and a way of reverting my changes. However, there were instances when I was making backups of the modified script because I had tested a modification and it worked, but I didn’t want to risk breaking it when modifying the file further. Guess what, these are the times when I made the worst mistakes! I used to get so engrossed in my modifications that I would forget to make a backup of the changes and end up with an unworkable script. The only version to revert to was the original, which meant all my hard work went to waste!

This is why I started my search for a better change-tracking system: one that would show me the changes I had made and allow me to easily revert to a previous version.

Guess what! I think I just found this golden goose and it is truly amazing!

In this blog I will show you how you can use Git, an open source version control system,  to track changes to scripts stored locally on your computer. The main use of Git is for source control of files that a team contributes to. In these situations, a Git Server is used to store the repository.

Please ensure that the local folder you are tracking for source control is backed up either to the cloud or to an external hard disk.

For editing our code/script, we will use Microsoft’s Visual Studio Code, a free IDE that has Git support in-built. We will also use Sourcetree, Atlassian’s free Git client.

 

Introducing Git

Git is an awesome open-source distributed version control system. When working in a team, it allows you to have your files centrally managed while letting multiple people work on them. Team members can pull the repository to their local computer. They can also branch a part of the repository, update the files in that part and then merge them with the master. If there are no conflicts, Git will update the files in its repository. However, if there are conflicts, Git will inform that team member and show them the conflicts. The team member can then either resolve the conflicts and re-merge, or discard their changes altogether.

If you want to read more on git, check out https://git-scm.com

To host the repositories for your team, two commonly used solutions are a Git server or Visual Studio Team Services. You can also use GitHub; however, your repositories will be public unless you sign up for a paid account.

For personal use, you can store your git repositories in a local directory that is backed up to the cloud. For my personal projects, I use a Dropbox synchronised folder.

To use Git, you need to use a Git client. If you have a MacBook, a git client comes built in. For Windows, there are lots of clients available, however in my view, Sourcetree is one of the best (more about this a bit later).

For MacBook users, below are some basic commands you can use from a terminal session

#change to directory where you will store your repository
cd /Users/tomj/Documents/git-repo/personalproject

#create a git repo in this folder
git init

#you can copy files into this folder
#to get git to start tracking the changes in the newly added files use the following command
git add .
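
From there, a typical next step is to commit the staged changes and review the history; a minimal sketch (the commit message is just an example):

#commit the staged changes with a short description
git commit -m "Initial commit of my scripts"

#view the history of commits
git log --oneline

#see what has changed in the working copy since the last commit
git diff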

 

As mentioned above, https://git-scm.com is an awesome site to learn more about Git.

You can also check out this page https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet for some quick git commands.

Using Sourcetree

For those that prefer a GUI client, I found that Sourcetree, from Atlassian, is an awesome Git client.

It gives you all the features of a good Git client plus it also shows you the history of all the changes made to the repository.

For this blog, I will be using Sourcetree to create and manage the Git repository.

So head over to https://www.sourcetreeapp.com to download and then install the client.

After installing Sourcetree, you will be prompted for a login account. Follow the links provided in the Sourcetree app to create a free Bitbucket account and then log in.

Ok, let’s begin.

Create a new repository

A repository is essentially a collection of files (or file) that we will track for changes. You can think of it as a directory.

To create a new repository, open Sourcetree.

From the menu, click on File and then click New. You will get the following screen.

Sourcetree_newRepo

Next, click on New and then click on Create Local Repository.

In the next window, for Destination Path, select the folder that will contain the scripts that you want to monitor for source control.

For Name, leave the default (the name of the folder). Ensure the Type is Git and then click Create.

Guess what, that’s all it takes to create a local repository! Simple, right?

Once the repository is created, you will see a screen similar to the one shown below (my repository is called temp)

Sourcetree_newRepoCreated.

Double click on the newly created repository (as shown above). This will show the dashboard where everything happens 😉

Sourcetree_RepoDashboard

 

To see all the changes that have been made to the repository, click on History in the above screen.

Visual Studio Code

Ok, so we have created our repository and it is being monitored for changes. Now, we can start coding.

As mentioned above, we will be using Visual Studio Code, a free IDE from Microsoft. If you haven’t got it already, download it from https://code.visualstudio.com

Once installed, open Visual Studio Code.

From the menu, click on File and then click on  Open.  Next, choose the folder that you created the repository for above and then click on Open.

You will now see the folder structure, with all the files inside it in the left pane.

You can open any of the existing files or create new ones. For new ones, ensure you save them in the repository’s folder.

As soon as you save the file, you will notice the Source Control icon shows the number of changes that are currently ready to be staged (Source Control section is denoted by the “stethoscope” icon – ok it’s not really that but it surely looks like it 😉 )

VSC_SourceControl_Update

Now, one thing to note about source control via Git is that you have to stage your changes. When you stage your changes, those changes will be written to the Git repository when you click Commit.

Click into the Source Control section and then under Changes click the + for each of the files, to stage the change.

VSC_SourceControl_Update_Changes

To commit the changes, enter a short description of what the changes were and then click on the tick at the top.

VSC_SourceControl_Update_Changes_Commit

That’s it. Your changes have now been successfully committed to the Git repository.

To view a history of all the changes that have been done to your repository, open Sourcetree and then click on History.

Notice the description column. This contains the comments you wrote when committing your staged changes. This provides a quick reminder of what the changes were. To drill down deeper into the changes, check the pane at the bottom right. Here, you will see the actual changes that were made (green denotes additions and red denotes deletion of characters). If there are multiple people committing to the same repo (as would be the case in a team), the names of each person will be shown beside each line in the History section.

Sourcetree_ViewHistory

Now, let’s say that after you did your commit, you realised that you didn’t want that change and in fact prefer what the file was before the commit. All you need to do is go into Sourcetree, find the change in the History section, right click on it and then click on Reverse commit. This reverses the commit and changes the file back to what it previously was. And if after that you want to get the change back? Well, you can reverse the reverse commit 😉 (this is so much better than my method of copying the last suffixed version over the current version).
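
For command-line users, Sourcetree’s Reverse commit corresponds roughly to git revert; a small sketch, using a placeholder commit hash:

#find the hash of the commit you want to undo
git log --oneline

#create a new commit that undoes the changes made in commit abc1234
git revert abc1234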

Soucetree_ReverseCommit

Closing Remarks

I am absolutely loving Git. It is an awesome tool and I would highly recommend it to each and everyone. For me personally, it helps in controlling the various changes I make to my code, with easy auditability and view of the changes I make between versions.

For teams, Git provides even more benefits. Using a central server (Git Server or Visual Studio Team Services) to host the Git repositories, the whole team can work on the files without blocking each other. The files will be stored centrally (actually with Git, when you pull a repo, you download the full repo to your local computer. If you merge your changes, the files are merged to the copy on the server). The changes to the files are easily trackable and there is an easy way to revert to a previous version should issues arise due to modifications.

I hope you embrace Git as I have and use it to track all your code changes.

Till the next time, Enjoy 😉

Recommendations on using Terraform to manage Azure resources

siliconvalve

If you’ve been working in the cloud infrastructure space for the last few years you can’t have missed the buzz around Hashicorp’s Terraform product. Terraform provides a declarative model for infrastructure provisioning that spans multiple cloud providers as well as on-premises services from the likes of VMWare.

I’ve recently had the opportunity to use Terraform to do some Azure infrastructure provisioning so I thought I’d share some recommendations on using Terraform with Azure (as at January 2018). I’ll also preface this post by saying that I have only been provisioning Azure PaaS services (App Service, Cosmos DB, Traffic Manager, Storage and Application Insights) and haven’t used any IaaS components at all.

In the beginning

I needed to provide an easy way to provision around 30 inter-related services that together constitute the hosting environment for a single customer solution. Ideally I wanted a way to make it easy to re-provision these…

View original post 1,113 more words

Use Azure Health to track active incidents in your Subscriptions

siliconvalve

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The below graphs are a little old, but you can see a clear spike on the left of the graphs, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue…

View original post 268 more words

Visual Studio Team Services (VSTS) Continuous Integration and Continuous Deployment

I have been working on an Azure PaaS project recently, trying to leverage the VSTS DevOps CI/CD features to automate the build and deployment process. Thanks to my colleague Sean Perera, who helped me and provided a deep dive on the VSTS CI/CD process.

I am writing this blog to share the whole workflow:

  1. Create a new project in VSTS and create a Dev branch based on the master branch


  2. Establish the connection from the local Visual Studio to the VSTS project


  3. Push the web app code to the VSTS dev branch


  4. Set up the endpoint connections between VSTS and Azure:
  • Log in to the Azure tenant environment and create a new app registration for the VSTS tenant.


  • Generate a service principal key and keep it safe


  • In the VSTS online portal, go to Settings -> Services -> create a new service endpoint. The service principal client ID will be the Azure application ID, and the service principal key will be the Azure service principal key.


  • Click “Verify connection” to make sure it passes the connection test
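
If you prefer to script the app registration and service principal creation instead of using the portal, a rough sketch with the Azure CLI is shown below; the name is a placeholder, and the appId, password and tenant values in the output map to the application ID, service principal key and tenant ID that the VSTS service endpoint asks for.

# Placeholder name; the command output contains appId, password and tenant for the VSTS service endpoint
az ad sp create-for-rbac --name "VSTS-CICD-ServicePrincipal"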
  5. Go to Create a build definition:
  • Define the build tasks: select the repo source, define the Azure subscription, the destination to push to, and all the app settings and parameter definitions


  • Go to Triggers and enable the CI settings:


  6. Create a new release definition
  • Define the release pipeline: where the source build comes from and what the target environment is. In my case, I am using VSTS to push code to an Azure PaaS environment.


  • Enable the Continuous Deployment settings


  • Define the release tasks: in my case I am using the pre-built Azure App Service deployment task, and also a swap from the staging to the prod environment


  7. Auto build and release process

Once I make a change to my project code in my local Visual Studio environment and commit and push it to the VSTS dev branch, VSTS automatically starts the build and release process, completes the release and pushes to the Azure web app environment.


  8. Done. Test the code in the dev and prod environments. It looks good; the VSTS DevOps features speed up the whole deployment process.

 

VSTS Build Definitions as YAML Part 2: How?

In the last post, I described why you might want to define your build definition as a YAML file using the new YAML Build Definitions feature in VSTS. In this post, we will walk through an example of a simple VSTS YAML build definition for a .NET Core application.

Application Setup

Our example application will be a blank ASP.NET Core web application with a unit test project. I created these using Visual Studio for Mac’s ASP.NET Core Web application template, and added a blank unit test project. The template adds a single unit test with no logic in it. For our purposes here we don’t need any actual code beyond this. Here’s the project layout in Visual Studio:

Visual Studio Project

I set up a project on VSTS with a Git repository, and I pushed my code into that repository, in a /src folder. Once pushed, here’s how the code looks in VSTS:

Code in VSTS

Enable YAML Build Definitions

As of the time of writing (November 2017), YAML build definitions are currently a preview feature in VSTS. This means we have to explicitly turn them on. To do this, click on your profile picture in the top right of the screen, and then click Preview Features.

VSTS Preview Features option

Switch the drop-down to For this account [youraccountname], and turn the Build YAML Definitions feature to On.

Enable Build YAML Definitions feature in VSTS

Design the Build Process

Now we need to decide how we want to build our application. This will form the initial set of build steps we’ll run. Because we’re building a .NET Core application, we need to do the following steps:

  • We need to restore the NuGet packages (dotnet restore).
  • We need to build our code (dotnet build).
  • We need to run our unit tests (dotnet test).
  • We need to publish our application (dotnet publish).
  • Finally, we need to collect our application files into a build artifact.

As it happens, we can actually collapse this down to a slightly shorter set of steps (for example, the dotnet build command also runs an implicit dotnet restore), but for now we’ll keep all four of these steps so we can be very explicit in our build definition. For a real application, we would likely try to simplify and optimise this process.

I created a new build folder at the same level as the src folder, and added a file in there called build.yaml.

VSTS also allows you to create a file in the root of the repository called .vsts-ci.yml. When this file is pushed to VSTS, it will create a build configuration for you automatically. Personally I don’t like this convention, partly because I want the file to live in the build folder, and partly because I’m not generally a fan of this kind of ‘magic’ and would rather do things manually.

Once I’d created a placeholder build.yaml file, here’s how things looked in my repository:

Build folder in VSTS repository

Writing the YAML

Now we can actually start writing our build definition!

In future updates to VSTS, we will be able to export an existing build definition as a YAML file. For now, though, we have to write them by hand. Personally I prefer that anyway, as it helps me to understand what’s going on and gives me a lot of control.

First, we need to put a steps: line at the top of our YAML file. This will indicate that we’re about to define a sequence of build steps. Note that VSTS also lets you define multiple build phases, and steps within those phases. That’s a slightly more advanced feature, and I won’t go into that here.

Next, we want to add an actual build step to call dotnet restore. VSTS provides a task called .NET Core, which has the internal task name DotNetCoreCLI. This will do exactly what we want. Here’s how we define this step:
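
A minimal sketch of that step, pieced together from the breakdown that follows (the indentation shown here is approximate):

- task: DotNetCoreCLI@2
  displayName: Restore NuGet Packages
  inputs:
    command: restore
    projects: src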

Let’s break this down a bit. Also, make sure to pay attention to indentation – this is very important in YAML.

- task: DotNetCoreCLI@2 indicates that this is a build task, and that it is of type DotNetCoreCLI. This task is fully defined in Microsoft’s VSTS Tasks repository. Looking at that JSON file, we can see that the version of the task is 2.1.8 (at the time of writing). We only need to specify the major version within our step definition, and we do that just after the @ symbol.

displayName: Restore NuGet Packages specifies the user-displayable name of the step, which will appear on the build logs.

inputs: specifies the properties that the task takes as inputs. This will vary from task to task, and the task definition will be one source you can use to find the correct names and values for these inputs.

command: restore tells the .NET Core task that it should run the dotnet restore command.

projects: src tells the .NET Core task that it should run the command with the src folder as an additional argument. This means that this task is the equivalent of running dotnet restore src from the command line.

The other .NET Core tasks are similar, so I won’t include them here – but you can see the full YAML file below.

Finally, we have a build step to publish the artifacts that we’ve generated. Here’s how we define this step:
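
A minimal sketch of that step, based on the inputs described below (the pathToPublish value is an assumption; it should point at wherever the dotnet publish step wrote its output):

- task: PublishBuildArtifacts@1
  displayName: Publish Build Artifacts
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: deploy
    artifactType: container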

This uses the PublishBuildArtifacts task. If we consult the definition for this task on the Microsoft GitHub repository, we can see that this task accepts several arguments. The ones we’re setting are:

  • pathToPublish is the path on the build agent where the dotnet publish step has saved its output. (As you will see in the full YAML file below, I manually overrode this in the dotnet publish step.)
  • artifactName is the name that is given to the build artifact. As we only have one, I’ve kept the name fairly generic and just called it deploy. In other projects, you might have multiple artifacts and then give them more meaningful names.
  • artifactType is set to container, which is the internal ID for the Artifact publish location: Visual Studio Team Services/TFS option.

Here is the complete Build.yaml file:
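
A sketch of what the complete file can look like, assembled from the steps described above (the publish arguments and output path are assumptions, so treat this as an approximation rather than the exact file):

steps:
- task: DotNetCoreCLI@2
  displayName: Restore NuGet Packages
  inputs:
    command: restore
    projects: src

- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
    projects: src

- task: DotNetCoreCLI@2
  displayName: Run Unit Tests
  inputs:
    command: test
    projects: src

- task: DotNetCoreCLI@2
  displayName: Publish
  inputs:
    command: publish
    projects: src
    # Assumed override so the publish output lands in a known folder for the artifact step
    arguments: --output $(Build.ArtifactStagingDirectory)

- task: PublishBuildArtifacts@1
  displayName: Publish Build Artifacts
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: deploy
    artifactType: container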

Set up a Build Configuration and Run

Now we can set up a build configuration in VSTS and tell it to use the YAML file. This is a one-time operation. In VSTS’s Build and Release section, go to the Builds tab, and then click New Definition.

Create new build definition in VSTS

You should see YAML as a template type. If you don’t see this option, check you enabled the feature as described above.

YAML build template in VSTS

We’ll configure our build configuration with the Hosted VS2017 queue (this just means that our builds will run on a Microsoft-managed build agent, which has Visual Studio 2017 installed). We also have to specify the relative path to the YAML file in the repository, which in our case is build/build.yaml.

Build configuration in VSTS

Now we can save and queue a build. Here’s the final output from the build:

Successful build

(Yes, this is build number 4 – I made a couple of silly syntax errors in the YAML file and had to retry a few times!)

As you can see, the tasks all ran successfully and the test passed. Under the Artifacts tab, we can also see that the deploy artifact was created:

Build artifact

Tips and Other Resources

This is still a preview feature, so there are still some known issues and things to watch out for.

Documentation

There is a limited amount of documentation available for creating and using this feature. The official documentation links so far are:

In particular, the properties for each task are not documented, and you need to consult the task’s task.json file to understand how to structure the YAML syntax. Many of the built-in tasks are defined in Microsoft’s GitHub repository, and this is a great starting point, but more comprehensive documentation would definitely be helpful.

Examples

There aren’t very many example templates available yet. That is why I wrote this article. I also recommend Marcus Felling’s recent blog post, where he provides a more complex example of a YAML build definition.

Tooling

As I mentioned above, there is limited tooling available currently. The VSTS team have indicated that they will soon provide the ability to export an existing build definition as YAML, which will help a lot when trying to generate and understand YAML build definitions. My personal preference will still be to craft them by hand, but this would be a useful feature to help bootstrap new templates.

Similarly, there currently doesn’t appear to be any validation of the parameters passed to each task. If you misspell a property name, you won’t get an error – but the task won’t behave as you expect. Hopefully this experience will be improved over time.

Error Messages

The error messages that you get from the build system aren’t always very clear. One error message I’ve seen frequently is Mapping values are not allowed in this context. Whenever I’ve had this, it’s been because I did something wrong with my YAML indentation. Hopefully this saves somebody some time!

Releases

This feature is only available for VSTS build configurations right now. Release configurations will also be getting the YAML treatment, but it seems this won’t be for another few months. This will be an exciting development, as in my experience, release definitions can be even more complex than build configurations, and would benefit from all of the peer review, versioning, and branching goodness I described above.

Variable Binding and Evaluation Order

While you can use VSTS variables in many places within the YAML build definitions, there appear to be some properties that can’t be bound to a variable. One I’ve encountered is when trying to link an Azure subscription to a property.

For example, in one of my build configurations, I want to publish a Docker image to an Azure Container Registry. In order to do this I need to pass in the Azure service endpoint details. However, if I specify this as a variable, I get an error – I have to hard-code the service endpoint’s identifier into the YAML file. This is not something I want to do, and will become a particular issue when release definitions can be defined in YAML, so I hope this gets fixed soon.

Summary

Build definitions and build scripts are an integral part of your application’s source code, and should be treated as such. By storing your build definition as a YAML file and placing it into your repository, you can begin to improve the quality of your build pipeline, and take advantage of source control features like diffing, versioning, branching, and pull request review.

VSTS Build Definitions as YAML Part 1: What and Why?

Visual Studio Team Services (VSTS) has recently gained the ability to create build definitions as YAML files. This feature is currently in preview. In this post, I’ll explain why this is a great addition to the VSTS platform and why you might want to define your builds in this way. In the next post I’ll work through an example of using this feature, and I’ll also provide some tips and links to documentation and guidance that I found helpful when constructing some build definitions myself.

What Are Build Definitions?

If you use a build server of some kind (and you almost certainly should!), you need to tell the build server how to actually build your software. VSTS has the concept of a build configuration, which specifies how and when to build your application. The ‘how’ part of this is the build definition. Typically, a build definition will outline how the system should take your source code, apply some operations to it (like compiling the code into binaries), and emit build artifacts. These artifacts usually then get passed through to a release configuration, which will deploy them to your release environment.

In a simple static website, the build definition might simply be one step that copies some files from your source control system into a build artifact. In a .NET Core application, you will generally use the dotnet command-line tool to build, test, and publish your application. Other application frameworks and languages will have their own way of building their artifacts, and in a non-trivial application, the steps involved in building the application might get fairly complex, and may even trigger PowerShell or Bash scripts to allow for further flexibility and advanced control flow and logic.

Until now, VSTS has really only allowed us to create and modify build definitions in its web editor. This is a great way to get started with a new build definition, and you can browse the catalog of available steps, add them into your build definition, and configure them as necessary. You can also define and use variables to allow for reuse of information across steps. However, there are some major drawbacks to defining your build definition in this way.

Why Store Build Definitions in YAML?

Build definitions are really just another type of source code. A build definition is just a specification of how your application should be built. The exact list of steps, and the sequence in which they run, is the way in which we are defining part of our system’s behaviour. If we adopt a DevOps mindset, then we want to make sure we treat all of our system as source code, including our build definitions, and we want to take this source code seriously.

In November 2017, Microsoft announced that VSTS now has the ability to run builds that have been defined as a YAML file, instead of through the visual editor. The YAML file gets checked into the source control system, exactly the same way as any other file. This is similar to the way Travis CI allows for build definitions to be specified in YAML files, and is great news for those of us who want to treat our build definitions as code. It gives us many advantages.

Versioning

We can keep build definitions versioned, and can track the history of a build definition over time. This is great for auditability, as well as to ensure that we have the ability to consult or roll back to a previous version if we accidentally make a breaking change.

Keeping Build Definitions with Code

We can store the build definitions alongside the actual code that they build, meaning that we are keeping everything tidy and in one place. Until now, if we wanted to fully understand the way the application was built, we’d have to look at the code repository as well as the VSTS build definition. This made the overall process harder to understand and reason about. In my opinion, the fewer places we have to remember to check or update during changes, the better.

Branching

Related to the last point, we can also make use of important features of our source control system like branching. If your team uses GitHub Flow or a similar Git-based branching strategy, then this is particularly advantageous.

Let’s take the example of adding a new unit test project to a .NET Core application. Until now, the way you might do this is to set up a feature branch on which you develop the new unit test project. At some point, you’ll want to add the execution of these tests to your build definition. This would require some careful timing and consideration. If you update the build definition before your branch is merged, then any builds you run in the meantime will likely fail – they’ll be trying to run a unit test project that doesn’t exist outside of your feature branch yet. Alternatively, you can use a draft version of your build configuration, but then you need to plan exactly when to publish that draft.

If we specify our build definition in a YAML file, then this change to the build process is simply another change that happens on our feature branch. The feature branch’s YAML file will contain a new step to run the unit tests, but the master branch will not yet have that step defined in its YAML file. When the build configuration runs on the master branch before we merge, it will get the old version of the build definition and will not try to run our new unit tests. But any builds on our feature branch, or the master branch once our feature branch is merged, will include the new tests.

This is very powerful, especially in the early stages of a project where you are adding and changing build steps frequently.

Peer Review

Taking the above point even further, we can review a change to a build definition YAML file in the same way that we would review any other code changes. If we use pull requests (PRs) to merge our changes back into the master branch then we can see the change in the build definition right within the PR, and our team members can give them the same rigorous level of review as they do our other code files. Similarly, they can make sure that changes in the application code are reflected in the build definition, and vice versa.

Reuse

Another advantage of storing build definitions in a simple format like YAML is being able to copy and paste the files, or sections from the files, and reuse them in other projects. Until now, copying a step from one build definition to another was very difficult, and required manually setting up the steps. Now that we can define our build steps in YAML files, it’s often simply a matter of copying and pasting.

Linting

A common technique in many programming environments is linting, which involves running some sort of analysis over a file to check for potential errors or policy violations. Once our build definition is defined in YAML, we can perform this same type of analysis if we wanted to write a tool to do it. For example, we might write a custom linter to check that we haven’t accidentally added any credentials or connection strings into our build definition, or that we haven’t mistakenly used the wrong variable syntax. YAML is a standard format and is easily parsable across many different platforms, so writing a simple linter to check for your particular policies is not a complex endeavour.

Abstraction and Declarative Instruction

I’m a big fan of declarative programming – expressing your intent to the computer. In declarative programming, software figures out the right way to proceed based on your high-level instructions. This is the idea behind many different types of ‘desired-state’ automation, and in abstractions like LINQ in C#. This can be contrasted with an imperative approach, where you specify an explicit sequence of steps to achieve that intent.

One approach that I’ve seen some teams adopt in their build servers is to use PowerShell or Bash scripts to do the entire build. This provided the benefits I outlined above, since those script files could be checked into source control. However, by dropping down to raw script files, this meant that these teams couldn’t take advantage of all of the built-in build steps that VSTS provides, or the ecosystem of custom steps that can be added to the VSTS instance.

Build definitions in VSTS are a great example of a declarative approach. They are essentially an abstraction of a sequence of steps. If you design your build process well, it should be possible for a newcomer to look at your build definition and determine what the sequence of steps are, why they are happening, and what the side-effects and outcomes will be. By using abstractions like VSTS build tasks, rather than hand-writing imperative build logic as a sequence of command-line steps, you are helping to increase the readability of your code – and ultimately, you may increase the performance and quality by allowing the software to translate your instructions into actions.

YAML build definitions give us all of the benefits of keeping our build definitions in source control, but still allows us to make use of the full power of the VSTS build system.

Inline Documentation

YAML files allow for comments to be added, and this is a very helpful way to document your build process. Frequently, build definitions can get quite complex, with multiple steps required to do something that appears rather trivial, so having the ability to document the process right inside the definition itself is very helpful.

Summary

Hopefully through this post, I’ve convinced you that storing your build definition in a YAML file is a much tidier and saner approach than using the VSTS web UI to define how your application is built. In the next post, I’ll walk through an example of how I set up a simple build definition in YAML, and provide some tips that I found useful along the way.

Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on, what I think is a pretty cool engagement to integrate some Office365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export methodology; however, through trial and error (more error than trial, if I’m being honest), and realising that this method still required some manual interaction, I decided to embark on finding a fully automated solution.

The final solution comprises the below components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure Automation account)

I’m not going to run through the creation of the automation account or the required credentials, as these had already been created; however, there is a great guide to configuring the solution I have used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

What the PowerShell script we are using will achieve is the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear existing PS sessions
Get-PSSession | Remove-PSSession | Out-Null

#Create split function for the mailbox array
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.Count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.Count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.Count) { $end = $inArray.Count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return , $outArray
}

function Connect-ExchangeOnline {
    param(
        $Credentials
    )
    #Connect to Exchange Online and import only the cmdlets we need
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Credentials -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}

#Create variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from HTTP Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'

#Connect to Azure
Add-AzureRMAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Connect to Exchange Online
Connect-ExchangeOnline -Credentials $credentials

#Collect the mailbox data
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Get current date and time
$time = Get-Date -Format s

#Convert timezone to Australia/Brisbane
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')

#Add a Time column to the output
$mailboxes = $mailboxes | Select-Object @{expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Split the mailbox array into parts for faster processing
$recipients = Split-Array -inArray $mailboxes -parts 5

#Create JSON objects and HTTP POST them to the Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Bypass SSL validation for the self-signed certificate used in testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Build the JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #POST to the Splunk HTTP Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Header $header
    }
}

#Clean up PS sessions
Get-PSSession | Remove-PSSession | Out-Null

 

The final output that can be seen in Splunk looks like the following:

11/13/17 12:28:22.000 PM
{
    AddressBookPolicy:
    DisplayName: Shane Fisher
    ForwardingSmtpAddress:
    GrantSendOnBehalfTo:
    IsMailboxEnabled: True
    PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
    ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
    Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

 

 

 

Using Visual Studio with Github to Test New Azure CLI Features

Following the Azure Managed Kubernetes announcement yesterday, I immediately upgraded my Azure CLI on Windows 10 so I could try it out.

Unfortunately I discovered there was a bug with retrieving credentials for your newly created Kubernetes cluster – the command bombs with the following error:

C:\Users\rafb> az aks get-credentials --resource-group myK8Group --name myCluster
[Errno 13] Permission denied: 'C:\\Users\\rafb\\AppData\\Local\\Temp\\tmpn4goit44'
Traceback (most recent call last):
 File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\main.py", line 36, in main
 cmd_result = APPLICATION.execute(args)
(...)

A GitHub issue had already been created by someone else, and a few hours later the author of the offending code submitted a Pull Request (PR) fixing the issue. While this is great, until the PR is merged into master and a new release of the Azure CLI for Windows is out, the fix will be unavailable.

Given this delay, I thought it would be neat to set up an environment in order to try out the fix and also contribute to the Azure CLI in future. Presumably any other GitHub project can be accessed this way, so it is by no means restricted to the Azure CLI.

Prerequisites

I will assume you have a fairly recent version of Visual Studio 2017 running on Windows 10, with Python support installed. By going into the Tools menu and choosing “Get Extensions and Features” in Visual Studio, you can add Python support as follows:

Install Python

By clicking in the “Individual Components” tab in the Visual Studio installer, install the “GitHub Extension for Visual Studio”:

Install GitHub support

You will need a GitHub login, and for the purpose of this blog post you will also need a Git client such as Git Bash (https://git-for-windows.github.io/).

Getting the Code

After launching Visual Studio 2017, you’ll notice a “Github” option under “Open”:

Screen Shot 2017-10-26 at 13.51.56

If you click on GitHub, you will be asked to enter your GitHub credentials and you will be presented with a list of repositories you own, but you won’t be able to clone the Azure CLI repository (unless you have your own fork of it).

The trick is to go to the Team Explorer window (View > Team Explorer) and choose to clone to a Local Git Repository:

Clone to local Repository

After a little while the Azure CLI Github repo (https://github.com/Azure/azure-cli) will be cloned to the directory of your choosing (“C:\Users\rafb\Source\Repos\azure-cli2” in my case).

Running Azure CLI from your Dev Environment

Before we can start hacking the Azure CLI, we need to make sure we can run the code we have just cloned from Github.

In Visual Studio, go to File > Open > Project/Solutions and open
“azure-cli2017.pyproj” from your freshly cloned repo. This type of project is handled by the Python Support extension for Visual Studio.

Once opened you will see this in the Solution Explorer:

Solution Explorer

As is often the case with Python projects, you usually run programs in a virtual environment. Looking at the screenshot above, the environment pre-configured in the Python project has a little yellow warning sign. This is because it is looking for Python 3.5 and the Python Support extension for Visual Studio comes with 3.6.

The solution is to create an environment based on Python 3.6. In order to do so, right click on “Python Environments” and choose “Add Virtual Environment…”:

Add Virtual Python Environment

Then choose the default in the next dialog box  (“Python 3.6 (64 bit)”) and click on “Create”:

Create Environment

After a few seconds, you will see a brand new Python 3.6 virtual environment:

View Virtual Environment

Right click on “Python Environments” and Choose “View All Python Environments”:

View All Python Environments

Make sure the environment you created is selected and click on “Open in PowerShell”. The PowerShell Window should open straight in the cloned repository directory – the “src/env” sub directory, where your newly created Python Environment lives.

Run the “Activate.ps1” PowerShell script to activate the Python Environment.

Activate Python

Now we can get the dependencies for our version of Azure CLI to run (we’re almost there!).

Run “python scripts/dev_setup.py” from the base directory of the repo as per the documentation.

Getting the dependencies will take a while (in the order of 20 to 30 minutes on my laptop).

zzzzzzzzz

Once the environment is setup, you can execute the “az” command and it is indeed the version you have cloned that executes:

It's alive!

Trying Pull Request Modifications Straight from Github

Running “git status” will show which branch you are on. Here is “dev” which does not have the “aks get-credentials” fix yet:

dev branch

And indeed, we are getting this now familiar error:

MY EYES!

Back in Visual Studio, go to the Github Window (View > Other Windows > GitHub) and enter your credentials if need be.

Once authenticated click on the “Pull Requests” icon and you will see a list of current Pull Requests against the Azure CLI Github repo:

Don't raise issues, create Pull Requests!

Now, click on pull request 4762 which has the fix for the “aks get-credentials” issue and you will see an option to checkout the branch which has the fix. Click on it:

Fixed!

And lo and behold, we can see that the branch has been changed to the PR branch and that the fix is working:

Work Work Work
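
If you prefer the command line, you can fetch the same pull request branch directly with Git; an approximate equivalent of what the GitHub extension just did, assuming the clone’s remote is named origin:

#fetch PR #4762 into a local branch and switch to it
git fetch origin pull/4762/head:pr-4762
git checkout pr-4762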

You now have a dev environment for the Azure CLI with the ability to test the latest features even before they are merged and in general use!

You can also easily visualise the differences between the code in dev and PR branches, which is a good way to learn from others 🙂

Diff screen

The same environment can probably be used to create Pull Requests straight from Visual Studio although I haven’t tried this yet.