VSTS Build Definitions as YAML Part 2: How?

In the last post, I described why you might want to define your build definition as a YAML file using the new YAML Build Definitions feature in VSTS. In this post, we will walk through an example of a simple VSTS YAML build definition for a .NET Core application.

Application Setup

Our example application will be a blank ASP.NET Core web application with a unit test project. I created these using Visual Studio for Mac’s ASP.NET Core Web application template, and added a blank unit test project. The template adds a single unit test with no logic in it. For our purposes here we don’t need any actual code beyond this. Here’s the project layout in Visual Studio:

Visual Studio Project

I set up a project on VSTS with a Git repository, and I pushed my code into that repository, in a /src folder. Once pushed, here’s how the code looks in VSTS:

Code in VSTS

Enable YAML Build Definitions

As of the time of writing (November 2017), YAML build definitions are currently a preview feature in VSTS. This means we have to explicitly turn them on. To do this, click on your profile picture in the top right of the screen, and then click Preview Features.

VSTS Preview Features option

Switch the drop-down to For this account [youraccountname], and turn the Build YAML Definitions feature to On.

Enable Build YAML Definitions feature in VSTS

Design the Build Process

Now we need to decide how we want to build our application. This will form the initial set of build steps we’ll run. Because we’re building a .NET Core application, we need to do the following steps:

  • We need to restore the NuGet packages (dotnet restore).
  • We need to build our code (dotnet build).
  • We need to run our unit tests (dotnet test).
  • We need to publish our application (dotnet publish).
  • Finally, we need to collect our application files into a build artifact.

As it happens, we can actually collapse this down to a slightly shorter set of steps (for example, the dotnet build command also runs an implicit dotnet restore), but for now we’ll keep all of these steps so we can be very explicit in our build definition. For a real application, we would likely try to simplify and optimise this process.

I created a new build folder at the same level as the src folder, and added a file in there called build.yaml.

VSTS also allows you to create a file in the root of the repository called .vsts-ci.yml. When this file is pushed to VSTS, it will create a build configuration for you automatically. Personally I don’t like this convention, partly because I want the file to live in the build folder, and partly because I’m not generally a fan of this kind of ‘magic’ and would rather do things manually.

Once I’d created a placeholder build.yaml file, here’s how things looked in my repository:

Build folder in VSTS repository

Writing the YAML

Now we can actually start writing our build definition!

In future updates to VSTS, we will be able to export an existing build definition as a YAML file. For now, though, we have to write them by hand. Personally I prefer that anyway, as it helps me to understand what’s going on and gives me a lot of control.

First, we need to put a steps: line at the top of our YAML file. This will indicate that we’re about to define a sequence of build steps. Note that VSTS also lets you define multiple build phases, and steps within those phases. That’s a slightly more advanced feature, and I won’t go into that here.

Next, we want to add an actual build step to call dotnet restore. VSTS provides a task called .NET Core, which has the internal task name DotNetCoreCLI. This will do exactly what we want. Here’s how we define this step:
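A minimal sketch of this step, matching the breakdown that follows, looks like this:

- task: DotNetCoreCLI@2
  displayName: Restore NuGet Packages
  inputs:
    command: restore
    projects: src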

Let’s break this down a bit. Also, make sure to pay attention to indentation – this is very important in YAML.

- task: DotNetCoreCLI@2 indicates that this is a build task, and that it is of type DotNetCoreCLI. This task is fully defined in Microsoft’s VSTS Tasks repository. Looking at that JSON file, we can see that the version of the task is 2.1.8 (at the time of writing). We only need to specify the major version within our step definition, and we do that just after the @ symbol.

displayName: Restore NuGet Packages specifies the user-displayable name of the step, which will appear on the build logs.

inputs: specifies the properties that the task takes as inputs. This will vary from task to task, and the task definition will be one source you can use to find the correct names and values for these inputs.

command: restore tells the .NET Core task that it should run the dotnet restore command.

projects: src tells the .NET Core task that it should run the command with the src folder as an additional argument. This means that this task is the equivalent of running dotnet restore src from the command line.

The other .NET Core tasks are similar, so I won’t include them here – but you can see the full YAML file below.

Finally, we have a build step to publish the artifacts that we’ve generated. Here’s how we define this step:
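A sketch of this step, based on the inputs described below (the display name and the $(Build.ArtifactStagingDirectory) path are assumptions – the path should match whatever you passed to dotnet publish):

- task: PublishBuildArtifacts@1
  displayName: Publish Build Artifacts
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: deploy
    artifactType: container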

This uses the PublishBuildArtifacts task. If we consult the definition for this task on the Microsoft GitHub repository, we can see that this task accepts several arguments. The ones we’re setting are:

  • pathToPublish is the path on the build agent where the dotnet publish step has saved its output. (As you will see in the full YAML file below, I manually overrode this in the dotnet publish step.)
  • artifactName is the name that is given to the build artifact. As we only have one, I’ve kept the name fairly generic and just called it deploy. In other projects, you might have multiple artifacts and then give them more meaningful names.
  • artifactType is set to container, which is the internal ID for the Artifact publish location: Visual Studio Team Services/TFS option.

Here is the complete build.yaml file:
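A sketch of how the whole file might look, following the steps described above (the display names and the publish output argument are assumptions):

steps:

- task: DotNetCoreCLI@2
  displayName: Restore NuGet Packages
  inputs:
    command: restore
    projects: src

- task: DotNetCoreCLI@2
  displayName: Build Solution
  inputs:
    command: build
    projects: src

- task: DotNetCoreCLI@2
  displayName: Run Unit Tests
  inputs:
    command: test
    projects: src

- task: DotNetCoreCLI@2
  displayName: Publish Application
  inputs:
    command: publish
    projects: src
    arguments: --output $(Build.ArtifactStagingDirectory)

- task: PublishBuildArtifacts@1
  displayName: Publish Build Artifacts
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: deploy
    artifactType: container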

Set up a Build Configuration and Run

Now we can set up a build configuration in VSTS and tell it to use the YAML file. This is a one-time operation. In VSTS’s Build and Release section, go to the Builds tab, and then click New Definition.

Create new build definition in VSTS

You should see YAML as a template type. If you don’t see this option, check you enabled the feature as described above.

YAML build template in VSTS

We’ll configure our build configuration with the Hosted VS2017 queue (this just means that our builds will run on a Microsoft-managed build agent, which has Visual Studio 2017 installed). We also have to specify the relative path to the YAML file in the repository, which in our case is build/build.yaml.

Build configuration in VSTS

Now we can save and queue a build. Here’s the final output from the build:

Successful build

(Yes, this is build number 4 – I made a couple of silly syntax errors in the YAML file and had to retry a few times!)

As you can see, the tasks all ran successfully and the test passed. Under the Artifacts tab, we can also see that the deploy artifact was created:

Build artifact

Tips and Other Resources

This is still a preview feature, so there are some known issues and things to watch out for.

Documentation

There is a limited amount of documentation available for creating and using this feature. The official documentation links so far are:

In particular, the properties for each task are not documented, and you need to consult the task’s task.json file to understand how to structure the YAML syntax. Many of the built-in tasks are defined in Microsoft’s GitHub repository, and this is a great starting point, but more comprehensive documentation would definitely be helpful.

Examples

There aren’t very many example templates available yet. That is why I wrote this article. I also recommend Marcus Felling’s recent blog post, where he provides a more complex example of a YAML build definition.

Tooling

As I mentioned above, there is limited tooling available currently. The VSTS team have indicated that they will soon provide the ability to export an existing build definition as YAML, which will help a lot when trying to generate and understand YAML build definitions. My personal preference will still be to craft them by hand, but this would be a useful feature to help bootstrap new templates.

Similarly, there currently doesn’t appear to be any validation of the parameters passed to each task. If you misspell a property name, you won’t get an error – but the task won’t behave as you expect. Hopefully this experience will be improved over time.

Error Messages

The error messages that you get from the build system aren’t always very clear. One error message I’ve seen frequently is Mapping values are not allowed in this context. Whenever I’ve had this, it’s been because I did something wrong with my YAML indentation. Hopefully this saves somebody some time!

Releases

This feature is only available for VSTS build configurations right now. Release configurations will also be getting the YAML treatment, but it seems this won’t be for another few months. This will be an exciting development, as in my experience, release definitions can be even more complex than build configurations, and would benefit from all of the peer review, versioning, and branching goodness I described in the previous post.

Variable Binding and Evaluation Order

While you can use VSTS variables in many places within the YAML build definitions, there appear to be some properties that can’t be bound to a variable. One I’ve encountered is when trying to link an Azure subscription to a property.

For example, in one of my build configurations, I want to publish a Docker image to an Azure Container Registry. In order to do this I need to pass in the Azure service endpoint details. However, if I specify this as a variable, I get an error – I have to hard-code the service endpoint’s identifier into the YAML file. This is not something I want to do, and will become a particular issue when release definitions can be defined in YAML, so I hope this gets fixed soon.

Summary

Build definitions and build scripts are an integral part of your application’s source code, and should be treated as such. By storing your build definition as a YAML file and placing it into your repository, you can begin to improve the quality of your build pipeline, and take advantage of source control features like diffing, versioning, branching, and pull request review.

VSTS Build Definitions as YAML Part 1: What and Why?

Visual Studio Team Services (VSTS) has recently gained the ability to create build definitions as YAML files. This feature is currently in preview. In this post, I’ll explain why this is a great addition to the VSTS platform and why you might want to define your builds in this way. In the next post I’ll work through an example of using this feature, and I’ll also provide some tips and links to documentation and guidance that I found helpful when constructing some build definitions myself.

What Are Build Definitions?

If you use a build server of some kind (and you almost certainly should!), you need to tell the build server how to actually build your software. VSTS has the concept of a build configuration, which specifies how and when to build your application. The ‘how’ part of this is the build definition. Typically, a build definition will outline how the system should take your source code, apply some operations to it (like compiling the code into binaries), and emit build artifacts. These artifacts usually then get passed through to a release configuration, which will deploy them to your release environment.

In a simple static website, the build definition might simply be one step that copies some files from your source control system into a build artifact. In a .NET Core application, you will generally use the dotnet command-line tool to build, test, and publish your application. Other application frameworks and languages will have their own way of building their artifacts, and in a non-trivial application, the steps involved in building the application might get fairly complex, and may even trigger PowerShell or Bash scripts to allow for further flexibility and advanced control flow and logic.

Until now, VSTS has really only allowed us to create and modify build definitions in its web editor. This is a great way to get started with a new build definition, and you can browse the catalog of available steps, add them into your build definition, and configure them as necessary. You can also define and use variables to allow for reuse of information across steps. However, there are some major drawbacks to defining your build definition in this way.

Why Store Build Definitions in YAML?

Build definitions are really just another type of source code. A build definition is just a specification of how your application should be built. The exact list of steps, and the sequence in which they run, is the way in which we are defining part of our system’s behaviour. If we adopt a DevOps mindset, then we want to make sure we treat all of our system as source code, including our build definitions, and we want to take this source code seriously.

In November 2017, Microsoft announced that VSTS now has the ability to run builds that have been defined as a YAML file, instead of through the visual editor. The YAML file gets checked into the source control system, exactly the same way as any other file. This is similar to the way Travis CI allows for build definitions to be specified in YAML files, and is great news for those of us who want to treat our build definitions as code. It gives us many advantages.

Versioning

We can keep build definitions versioned, and can track the history of a build definition over time. This is great for auditability, as well as to ensure that we have the ability to consult or roll back to a previous version if we accidentally make a breaking change.

Keeping Build Definitions with Code

We can store the build definitions alongside the actual code that they build, meaning that we are keeping everything tidy and in one place. Until now, if we wanted to fully understand the way the application was built, we’d have to look at the code repository as well as the VSTS build definition. This made the overall process harder to understand and reason about. In my opinion, the fewer places we have to remember to check or update during changes, the better.

Branching

Related to the last point, we can also make use of important features of our source control system like branching. If your team uses GitHub Flow or a similar Git-based branching strategy, then this is particularly advantageous.

Let’s take the example of adding a new unit test project to a .NET Core application. Until now, the way you might do this is to set up a feature branch on which you develop the new unit test project. At some point, you’ll want to add the execution of these tests to your build definition. This would require some careful timing and consideration. If you update the build definition before your branch is merged, then any builds you run in the meantime will likely fail – they’ll be trying to run a unit test project that doesn’t exist outside of your feature branch yet. Alternatively, you can use a draft version of your build configuration, but then you need to plan exactly when to publish that draft.

If we specify our build definition in a YAML file, then this change to the build process is simply another change that happens on our feature branch. The feature branch’s YAML file will contain a new step to run the unit tests, but the master branch will not yet have that step defined in its YAML file. When the build configuration runs on the master branch before we merge, it will get the old version of the build definition and will not try to run our new unit tests. But any builds on our feature branch, or the master branch once our feature branch is merged, will include the new tests.

This is very powerful, especially in the early stages of a project where you are adding and changing build steps frequently.

Peer Review

Taking the above point even further, we can review a change to a build definition YAML file in the same way that we would review any other code changes. If we use pull requests (PRs) to merge our changes back into the master branch then we can see the change in the build definition right within the PR, and our team members can give them the same rigorous level of review as they do our other code files. Similarly, they can make sure that changes in the application code are reflected in the build definition, and vice versa.

Reuse

Another advantage of storing build definitions in a simple format like YAML is being able to copy and paste the files, or sections from the files, and reuse them in other projects. Until now, copying a step from one build definition to another was very difficult, and required manually setting up the steps. Now that we can define our build steps in YAML files, it’s often simply a matter of copying and pasting.

Linting

A common technique in many programming environments is linting, which involves running some sort of analysis over a file to check for potential errors or policy violations. Once our build definition is defined in YAML, we can perform this same type of analysis if we wanted to write a tool to do it. For example, we might write a custom linter to check that we haven’t accidentally added any credentials or connection strings into our build definition, or that we haven’t mistakenly used the wrong variable syntax. YAML is a standard format and is easily parsable across many different platforms, so writing a simple linter to check for your particular policies is not a complex endeavour.

Abstraction and Declarative Instruction

I’m a big fan of declarative programming – expressing your intent to the computer. In declarative programming, software figures out the right way to proceed based on your high-level instructions. This is the idea behind many different types of ‘desired-state’ automation, and in abstractions like LINQ in C#. This can be contrasted with an imperative approach, where you specify an explicit sequence of steps to achieve that intent.

One approach that I’ve seen some teams adopt in their build servers is to use PowerShell or Bash scripts to do the entire build. This provided the benefits I outlined above, since those script files could be checked into source control. However, by dropping down to raw script files, this meant that these teams couldn’t take advantage of all of the built-in build steps that VSTS provides, or the ecosystem of custom steps that can be added to the VSTS instance.

Build definitions in VSTS are a great example of a declarative approach. They are essentially an abstraction of a sequence of steps. If you design your build process well, it should be possible for a newcomer to look at your build definition and determine what the sequence of steps are, why they are happening, and what the side-effects and outcomes will be. By using abstractions like VSTS build tasks, rather than hand-writing imperative build logic as a sequence of command-line steps, you are helping to increase the readability of your code – and ultimately, you may increase the performance and quality by allowing the software to translate your instructions into actions.

YAML build definitions give us all of the benefits of keeping our build definitions in source control, but still allow us to make use of the full power of the VSTS build system.

Inline Documentation

YAML files allow for comments to be added, and this is a very helpful way to document your build process. Frequently, build definitions can get quite complex, with multiple steps required to do something that appears rather trivial, so having the ability to document the process right inside the definition itself is very helpful.

Summary

Hopefully through this post, I’ve convinced you that storing your build definition in a YAML file is a much tidier and saner approach than using the VSTS web UI to define how your application is built. In the next post, I’ll walk through an example of how I set up a simple build definition in YAML, and provide some tips that I found useful along the way.

An Approach to DevOps Adoption

Originally posted at Chandra’s blog – https://fastandsteady.io

DevOps has been a buzzword for a while now in the tech industry, with many organizations joining the bandwagon and working towards embracing DevOps practices.

Wikipedia describes DevOps as “a practice that emphasizes the collaboration and communication of the IT professionals across the value chain while automating the process of software delivery and infrastructure changes. The aim is to deliver the software quickly and reliably.”

However, in an enterprise scenario, given the complexity involved, the journey to implementing DevOps comprehensively is evolutionary. Hence, it is only sensible to drive along an incremental adoption path, where each increment provides the most benefit through the MVP (Minimum Viable Product) delivered on the DevOps journey.

In this context, this article attempts to explain the initial steps towards the larger DevOps journey and helps to get a head start.

The approach at high-level consists of four major steps:

  1. Value stream mapping – Mapping the existing process workflows
  2. Future state value stream mapping – Identify the immediate goals and visualize the optimized value stream map
  3. Execution – Incremental approach towards the implementation
  4. Retrospection – Review and learn.

OK, let’s get started!

Value Stream Mapping 

Value stream mapping is a lean improvement strategy that maps the processes and information flows of a product from source to delivery. For software delivery, it is the pre-defined path an idea takes to transform into a software product/service delivered to the end customers.

A value stream mapping exercise over the services currently delivered serves as the first step of the DevOps journey. It helps in identifying the overall time taken by the value chain, the process cycle times, and the lead times between the processes involved in software delivery. It also captures various process-specific metrics along the value chain.

Quite obviously, the exercise involves collaboration with multiple stakeholders of the application lifecycle management to gather the details and at the same time align them to a shared goal. In fact, it sets the stage for the larger collaboration between the parties as the journey progresses.

The picture below depicts a typical software development workflow at a high level, agnostic to the development methodology. Depending on the type of change or the product, one or more steps may not be relevant to the application’s lifecycle.

Software Lifecycle

Value stream mapping provides key insights on the overall performance of the value chain. Details include:

  • The process and information flow
  • Overall timeline from an idea generation to release
  • Fastest and slowest processes
  • Shortest and longest lead times.

These insights pave the way for defining the strategic goals, and the immediate goals that will help optimize the overall value chain. The next steps describe the future state value stream mapping and the execution methodology, which focuses on breaking down silos to create a people- and technology-driven culture for accelerated processes.

Future state Value Stream Mapping

The future state value stream mapping focuses on defining the immediate goals for optimising the value chain. The optimization focuses on improving the product/service quality delivered through each process, to improve the cycle times and the lead time, etc. Remember, the aim is to deliver quickly and reliably.

While the easiest route is to target the processes that are the most time-consuming, it is imperative to analyse multiple aspects of each process before considering the options. Below are a few metrics that could be used to evaluate the options and build the future state value stream map. The optimisation options are to be balanced against these metrics to arrive at the final set that will be part of the execution.

Table

Execution

Ideation to action – This next step in the journey is broken down into three key aspects:

  1. Backlog grooming
  2. Shared responsibility model and
  3. Implementation.

Backlog grooming

This involves user story detailing and a backlog creation for the areas identified for optimization in collaboration with all the parties involved. The backlog has to be put to action either in sprints or in a Kanban approach depending on how you want to manage the execution.

The diagram below describes how the backlog could be driven forward for execution.

Next steps

Shared responsibility model

It is vital that a culture of collaboration and communication between the stakeholders is nurtured for a successful DevOps journey. A shared responsibility model is equally important: it overlaps the services delivered by each of the stakeholders involved, which previously operated in silos. Below is a depiction of how the shared responsibility model evolves with DevOps adoption.

Shared responsibilities

As you may have figured out during the exercise, most tasks could ideally be delivered by the operations team. However, tasks related to design optimization, setting up continuous integration, implementing a test automation framework, etc. are part of the Development/QA community’s responsibilities.

The project management related tasks are more focused on nurturing the culture as well as providing the tools/methods for improving the collaboration and to provide the work in process visibility. The key is to bring together the teams involved (including business, development, quality assurance, service delivery, and operations) and build the necessary tools and technologies to drive the agile processes.

Again, depending on the organization, Scrum/Kanban methods can be implemented to execute the user stories/tasks.

Implementation

Tools and technology implementation is one of the core aspects of a typical DevOps journey. The tools provide the required automation across the ALM, accelerate adoption, and make the whole process sustainable. Needless to say, the implementation of the tools, and the other tasks, is driven through the shared responsibility model and the sprint plans put together.

Review and Retrospection

The last step in the cycle is to measure and map the outcomes achieved and update the value stream map to project the new realities. It is also important to review the improvement in the overall process transparency. A detailed review of the execution provides insights into areas of further development across all the facets of the process.

The last step of the cycle could well also be the first step of another iteration or increment that further optimises the value stream map, nurturing the DevOps culture and driving the journey forward.

Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Introduction

Azure Web Apps or App Services are quite flexible regarding deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source control services like VSTS, GitHub, or BitBucket, your on-premises Git, multiple IDEs including Visual Studio, Eclipse and Xcode, and using MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be some scenarios where you just need to update some reference files and don’t need to build or update the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario where we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication, but we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy features which support Azure App Services source code management and deployment operations, among others. One of these is the Virtual File System (VFS) API. This API is based on the VFS HTTP Adapter, which wraps a VFS instance as an HTTP RESTful interface. The Kudu VFS API allows us to upload and download files, get a list of files in a directory, create directories, and delete files from the virtual file system of an Azure App Service; and we can use PowerShell to call it.

In this post I will show how to interact with the Azure App Service Virtual File System (VFS) API via PowerShell.

Authenticating to the Kudu API

To call any of the Kudu APIs, we need to authenticate by adding the corresponding Authorization header. To create the header value, we need to use the Kudu API credentials, as detailed here. Because we will be interacting with an API related to an App Service, we will be using site-level credentials (a.k.a. publishing profile credentials).

Getting the Publishing Profile Credentials from the Azure Portal

You can get the publishing profile credentials by downloading the publishing profile from the portal, as shown in the figure below. Once downloaded, the XML document will contain the site-level credentials.

Getting the Publishing Profile Credentials via PowerShell

We can also get the site-level credentials via PowerShell. I’ve created a PowerShell function which returns the publishing credentials of an Azure App Service or a Deployment Slot, as shown below.
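The function isn’t shown inline here, but a minimal sketch of the approach – calling the ‘list’ action on the web app’s publishingcredentials resource via the AzureRM module (the function and parameter names are my own) – would be:

function Get-AzWebAppPublishingCredentials {
    param(
        [Parameter(Mandatory = $true)][string]$resourceGroupName,
        [Parameter(Mandatory = $true)][string]$webAppName,
        [string]$slotName
    )
    # Deployment slots live under a different resource type than the web app itself.
    if ([string]::IsNullOrWhiteSpace($slotName)) {
        $resourceType = "Microsoft.Web/sites/config"
        $resourceName = "$webAppName/publishingcredentials"
    }
    else {
        $resourceType = "Microsoft.Web/sites/slots/config"
        $resourceName = "$webAppName/$slotName/publishingcredentials"
    }
    # The 'list' action returns the site-level (publishing profile) credentials.
    Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName `
        -ResourceType $resourceType -ResourceName $resourceName `
        -Action list -ApiVersion 2015-08-01 -Force
}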

Bear in mind that you need to be logged in to Azure in your PowerShell session before calling these cmdlets.

Getting the Kudu REST API Authorisation header via PowerShell

Once we have the credentials, we are able to get the Authorization header value. The instructions to construct the header are described here. I’ve created another PowerShell function, which relies on the previous one, to get the header value, as follows.
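Again, a sketch of the idea, building on the hypothetical function above (Kudu expects the Basic scheme with a base64-encoded username:password pair):

function Get-KuduAuthorisationHeaderValue {
    param(
        [Parameter(Mandatory = $true)][string]$resourceGroupName,
        [Parameter(Mandatory = $true)][string]$webAppName,
        [string]$slotName
    )
    # Reuse the previous function to obtain the publishing profile credentials.
    $creds = Get-AzWebAppPublishingCredentials -resourceGroupName $resourceGroupName -webAppName $webAppName -slotName $slotName
    $userName = $creds.Properties.PublishingUserName
    $password = $creds.Properties.PublishingPassword
    # Basic authentication: base64-encode "username:password".
    return ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password))))
}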

Calling the App Service VFS API

Once we have the Authorization header, we are ready to call the VFS API. As shown in the documentation, the VFS API has the following operations:

  • GET /api/vfs/{path}    (Gets a file at path)
  • GET /api/vfs/{path}/    (Lists files at directory specified by path)
  • PUT /api/vfs/{path}    (Puts a file at path)
  • PUT /api/vfs/{path}/    (Creates a directory at path)
  • DELETE /api/vfs/{path}    (Delete the file at path)

So the URI to call the API would be something like:

  • GET https://{webAppName}.scm.azurewebsites.net/api/vfs/

To invoke the REST API operation via PowerShell we will use the Invoke-RestMethod cmdlet.

We have to bear in mind that when trying to overwrite or delete a file, the web server implements ETag behaviour to identify specific versions of files.

Uploading a File to an App Service

I have created the PowerShell function shown below, which uploads a local file to a path in the virtual file system. To call this function you need to provide the App Service name, the Kudu credentials (username and password), the local path of your file, and the Kudu path. The function assumes that you want to upload the file under the wwwroot folder, but you can change it if needed.
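A sketch of such a function (the function name and the wwwroot assumption are mine; note the If-Match header discussed below):

function Upload-FileToWebApp {
    param(
        [Parameter(Mandatory = $true)][string]$webAppName,
        [Parameter(Mandatory = $true)][string]$kuduUserName,
        [Parameter(Mandatory = $true)][string]$kuduPassword,
        [Parameter(Mandatory = $true)][string]$localPath,
        [Parameter(Mandatory = $true)][string]$kuduPath
    )
    # Basic auth header built from the site-level (publishing profile) credentials.
    $authHeader = "Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($kuduUserName):$($kuduPassword)"))
    # Assume the target file lives under the wwwroot folder; change this if needed.
    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"
    Invoke-RestMethod -Uri $apiUrl `
        -Headers @{ Authorization = $authHeader; "If-Match" = "*" } `
        -Method PUT `
        -InFile $localPath `
        -ContentType "multipart/form-data"
}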

As you can see in the script, we are adding the “If-Match”=”*” header to disable ETag version check on the server side.

Downloading a File from an App Service

Similarly, I have created a function to download a file on an App Service to the local file system via PowerShell.
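A sketch of the download counterpart, under the same assumptions as above:

function Download-FileFromWebApp {
    param(
        [Parameter(Mandatory = $true)][string]$webAppName,
        [Parameter(Mandatory = $true)][string]$kuduUserName,
        [Parameter(Mandatory = $true)][string]$kuduPassword,
        [Parameter(Mandatory = $true)][string]$kuduPath,
        [Parameter(Mandatory = $true)][string]$outputLocalPath
    )
    $authHeader = "Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($kuduUserName):$($kuduPassword)"))
    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"
    # GET the file contents and write them straight to the local file system.
    Invoke-RestMethod -Uri $apiUrl `
        -Headers @{ Authorization = $authHeader } `
        -Method GET `
        -OutFile $outputLocalPath
}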

Using the ZIP API

In addition to using the VFS API, we can also use the Kudu ZIP API, which allows us to upload zip files and have them expanded into folders, and to compress server folders into zip files and download them.

  • GET /api/zip/{path}    (Zip up and download the specified folder)
  • PUT /api/zip/{path}    (Upload a zip file which gets expanded into the specified folder)

You could create your own PowerShell functions to interact with the ZIP API based on what we have previously shown.

Conclusion

As we have seen, in addition to the multiple deployment options we have for Azure App Services, we can also use the Kudu VFS API to interact with the App Service Virtual File System via HTTP. I have shared some functions for some of the provided operations. You could customise these functions or create your own based on your needs.

I hope this has been of help and feel free to add your comments or queries below. 🙂

Gracefully managing Gulp process hierarchy on Windows

Process Tree

When developing client side JavaScript, one thing that really comes in handy is the ability to create fully functional stubs that can mimic real server APIs. This decouples project development dependencies and allows different team members to work in parallel against an agreed API contract.

To allow people to have an isolated environment to work on and get immediate feedback on their changes, I leverage the Gulp.js + Node.js duo.

“Gulp.js is a task runner that runs on Node.js and is normally used to automate UI development workflows such as LESS to CSS conversion, making HTML templates, minifying CSS and JS, etc. However, it can also be used to fire up local web servers, custom processes and many other things.”

To make things really simple, I set up Gulp to allow anyone to enter gulp dev on a terminal to start the following workflow:

  1. Prepare distribution files
    – Compile LESS or SASS style sheets
    – Minify and concatenate JavaScript files
    – Compile templates (AngularJS)
    – Version resources for cache busting (using a file hash) 
    – etc
  2. Start stub servers
    – Start an API mockup server
    – Start a server that can serve local layouts HTML files
  3. Open a browser pointing to the local app (Chrome or IE, depending on the platform)
  4. Fire up file watchers to trigger specific builds and browser reloads when project files change.

The following is an extract of the entry point for these gulp tasks (gulp and gulp dev):
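The original extract isn’t included here; a minimal sketch of what that entry point looks like (DEV START.servers and requireDir come from the post, the other task names are placeholders of mine):

// gulpfile.js (extract)
var gulp = require('gulp');
var requireDir = require('require-dir');

// Pull in the individual task definitions, including DEV START.servers.
requireDir('./gulp-tasks');

// 'gulp dev' kicks off the whole development workflow described above.
gulp.task('dev', ['DEV BUILD.dist', 'DEV START.servers', 'DEV WATCH.files']);

// Plain 'gulp' simply maps to the dev workflow.
gulp.task('default', ['dev']);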

The DEV START.servers task is imported by the requireDir('./gulp-tasks') instruction. This task triggers other dependent workflows that spin up plain Node.js web servers, which are in turn kept alive by a process supervisor library called forever-monitor.

There is a catch, however. If you are working on a Unix-friendly system (like a Mac), processes are normally managed hierarchically. In a normal situation, a root process (let’s say gulp dev) will create child processes (DEV START.api.server and DEV START.layouts.server) that will only live during the lifetime of their parent. This is great because whenever we terminate the parent process, all its children are terminated too.

In a Windows environment, process management is done in a slightly different way – even if you close a parent process, its child processes will stay alive, doing what they were already doing. Child processes only hold the parent ID as a reference. This means that it is still possible to mimic the Unix process tree behaviour, but it is just a little bit more tedious, and some library creators avoid dealing with the problem. This is the case with Gulp.

So in our scenario we listen for the SIGINT signal and gracefully terminate all processes when it is raised through keyboard input (hitting Ctrl-C). This saves developers on Windows from having to go to the task manager and terminate orphaned child processes themselves.

I am also using a process.once(... event listener instead of process.on(... to prevent an error surfaced on Macs when Ctrl-C is hit more than once. We don’t want the OS to complain when we try to terminate a process that has already been terminated :).
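A sketch of that shutdown handling (the apiServer and layoutsServer variables stand in for the forever-monitor instances started by the server tasks):

// Terminate the whole process tree gracefully on Ctrl-C, even on Windows.
process.once('SIGINT', function () {
    console.log('Stopping stub servers...');
    apiServer.stop();      // forever-monitor child for DEV START.api.server
    layoutsServer.stop();  // forever-monitor child for DEV START.layouts.server
    process.exit(0);
});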

That is it for now..

Happy hacking!

Creating self-signed certs using OpenSSL on Windows

ssl

Working with Linux technologies exposes you to a huge number of open source tools that can simplify and speed up your development workflow. Interestingly enough, many of these tools are now flooding into the Windows ecosystem allowing us to increase the portability of our development assets across multiple operating systems.

Today I am going to demonstrate how easy it is to install OpenSSL on Windows and how simple it is to quickly create self-signed certificates for our development TLS needs that will work on a range of operating systems.

We will start by installing the following tools:

1. Chocolatey
https://chocolatey.org/
“Chocolatey is a package manager for Windows (like apt-get or yum but for Windows). It was designed to be a decentralized framework for quickly installing applications and tools that you need. It is built on the NuGet infrastructure currently using PowerShell as its focus for delivering packages from the distros to your door, err computer.”

2. Cmder
https://chocolatey.org/packages?q=cmder
“Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout. Looking sexy from the start”

Benefits of this approach

  • Using OpenSSL provides portability for our scripts by allowing us to run the same commands no matter which OS you are working on: Mac OSX, Windows or Linux.
  • The certificates generated through OpenSSL can be directly imported as custom user certificates on Android and iOS (this is not the case with other tools like makecert.exe, at least not directly).
  • Chocolatey is a very effective way of installing and configuring software on your Windows machine in a scriptable way (Fiddler, Chrome, NodeJS, Docker, Sublime… you name it).
  • The “Cmder” package downloads a set of utilities which are commonly used in the Linux world. This once again allows for better portability of your code, especially if you want to start using command line tools like Vagrant, Git and many others.

Magically getting OpenSSL through Cmder

Let’s get started by installing the Chocolatey package manager onto our machine. It only takes a single line of code! See: https://chocolatey.org/install

Now that we have our new package manager up and running, getting the Cmder package installed becomes as simple as typing in the following instruction:

C:\> choco install cmder
Cmdr window

reference: http://cmder.net/

The Cmder package shines “big time” because it installs for us a portable release of the latest Git for Windows tools (https://git-for-windows.github.io). The Git for Windows project (aka msysGit) gives us access to the most traditional commands found on Linux and Mac OSX: ls, ssh, git, cat, cp, mv, find, less, curl, ssh-keygen, tar ….

mssysgit

… and OpenSSL.

Generating your Root CA

The following instructions will help you generate your Root Certificate Authority (CA) certificate. This is the CA that will be trusted by your devices and that will be used to sign your own TLS HTTPS development certs.
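The exact commands aren’t reproduced here, but the general shape is as follows (the file names follow the post; key size and validity period are assumptions):

# 1. Generate a private key for the Root CA
openssl genrsa -out myRootCA.key 2048

# 2. Create a self-signed Root CA certificate (valid for ~10 years)
openssl req -x509 -new -nodes -key myRootCA.key -sha256 -days 3650 -out myRootCA.pem

# 3. Package the key and certificate as a PFX so Windows can import it
openssl pkcs12 -export -out myRootCA.pfx -inkey myRootCA.key -in myRootCA.pem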

root CA files

We now double-click on the myRootCA.pfx file to fire up the Windows Certificate Import Wizard and get the Root CA imported into the Trusted Root Certification Authorities store. Too easy… let’s move on to signing our first TLS certs with it!

Generating your TLS cert

The following commands will quickly get the ball rolling by generating and signing the certificate request in interactive mode (entering cert fields by hand). In later stages you might want to use a cert request configuration file and pass it in to the OpenSSL command in order to make the process scriptable and therefore repeatable.

Just for the curious, I will be creating a TLS cert for “sweet-az.azurewebsites.net” to allow me to setup a local dev environment that mimics an Azure Web App.
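A sketch of those commands (the myTSL file names match the post; other values are assumptions):

# 1. Generate a private key for the site certificate
openssl genrsa -out myTSL.key 2048

# 2. Create a certificate signing request in interactive mode
#    (set the Common Name to sweet-az.azurewebsites.net)
openssl req -new -key myTSL.key -out myTSL.csr

# 3. Sign the request with the Root CA created earlier
openssl x509 -req -in myTSL.csr -CA myRootCA.pem -CAkey myRootCA.key -CAcreateserial -out myTSL.crt -days 825 -sha256

# 4. Package the key and certificate as a PFX for the Windows certificate store
openssl pkcs12 -export -out myTSL.pfx -inkey myTSL.key -in myTSL.crt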

Just as we did in the previous step, we can double click on the packaged myTSL.pfx file to get the certificate imported into the Local Machine/Personal Windows Certificate Store.

Testing things out

Finally we will just do a smoke test against IIS following the traditional steps:

  • Create an entry for the hostname used in the cert in your hosts file:
    127.0.0.1            sweet-az.azurewebsites.net
  • Create a 443 binding for the default site in the IIS Management Console.

SSL site bindings

Let’s confirm that everything has worked correctly by opening a browser session and navigating to our local version of the https://sweet-az.azurewebsites.net website.

Sweet as!

That’s all folks!

In following posts I will cover how to get these certs installed and trusted on our mobile phones, and then leverage other simple techniques to help us develop mobile applications (proxying and SSH tunnelling). Until then….

Happy hacking!

Access Azure linked templates from a private repository

I was recently tasked with proposing a way to use linked templates, and especially how to refer to templates stored in a private repository. The Azure Resource Manager (ARM) engine accepts a URI to access and deploy linked templates, hence the URI must be accessible by ARM. If you store your templates in a public repository, ARM can access them fine, but what if you use a private repository? This post will show you how.

In this example, I use Bitbucket – a Git-based source control product by Atlassian.  The free version (hosted in the cloud) allows you to have up to 5 private repositories.  I will describe the process for obtaining a Bitbucket OAuth 2.0 token using PowerShell and how we can use the access token in Azure linked templates.

Some Basics

If you store your code in a private repository, you can access the stored code after logging into Bitbucket. Alternatively, you can use an access token instead of user credentials to log in. Access tokens allow apps or programs to access private repositories; hence, they are secret in nature, just like passwords or cash.

Bitbucket access tokens expire in one hour. Once the token expires, you can either request a new access token or renew it using a refresh token. Bitbucket supports all four of the RFC 6749 standard grant methods for obtaining an access/bearer token – in this example, we will use the password grant method. Note that this method will not work when you have two-factor authentication enabled.

Getting into action

First things first: you must have a private repository in Bitbucket. To obtain access tokens, we will create a consumer in Bitbucket, which will generate a consumer key and a secret. This key/secret pair is used to obtain an access token. See the Bitbucket documentation for more detailed instructions.

Before I describe the process of obtaining an access token in PowerShell, let’s examine how we can get a Bitbucket access token using the curl command.


curl -X POST -u "client_id:secret" \
     https://bitbucket.org/site/oauth2/access_token \
     -d grant_type=password -d username={username} -d password={password}

  • -X POST specifies a POST method over HTTP, the request method that is passed to the Bitbucket server.
  • -u “client_id:secret” specifies the username and password used for basic authentication. Note that this does not refer to the Bitbucket user login details, but rather to the consumer key/secret pair.
  • -d specifies the body of the HTTP request – in this case the grant_type to be used (password grant), along with the Bitbucket login details (username and password).

To replicate the same command in PowerShell, we can use the Invoke-RestMethod cmdlet. This cmdlet sends HTTP/HTTPS requests to RESTful web services and returns structured data.


# Construct BitBucket request
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $bbCredentials.bbConsumerKey, $bbCredentials.bbConsumersecret)))
$data = @{
    grant_type = 'password'
    username   = $bbCredentials.bbUsername
    password   = $bbCredentials.bbPassword
}

# Perform the request to BB OAuth2.0
$tokens = Invoke-RestMethod -Uri $accessCodeURL -Headers @{ Authorization = ("Basic {0}" -f $base64AuthInfo) } -Method Post -Body $data

The base64AuthInfo variable holds a base64-encoded string for HTTP basic authentication, and the HTTP request body is encapsulated in the data variable. Both variables are used to construct the Bitbucket OAuth 2.0 request.

When successfully executed, an access token is produced (an example is below).  This access token is valid for 1 hour by default and you can either renew it with a refresh token or request a new access token.

access_token  : g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=
scopes        : repository
expires_in    : 3600
refresh_token : Vj3AYYcebM8TGnle8K
token_type    : bearer

Use in Azure Linked Templates

Once you have obtained the access token, we can use it in our Azure linked templates by including it as part of the URL query string.

(For ways how we can implement linked templates, refer to my previous blog post)

{
  "apiVersion": "2015-01-01",
  "name": "dbserverLinked",
  "type": "Microsoft.Resources/deployments",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/e1db69add5d62f64120b06a3179828a37b7f166c/azuredeploy.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
      "contentVersion": "1.0.0.0"
    },
    "parametersLink": {
      "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/6208359175c99bb892c2097901b0ed7bd723ae56/azuredeploy.parameters.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
      "contentVersion": "1.0.0.0"
    }
  }
}

Summary

We have described a way to obtain an access token for a private Bitbucket repository. With this token, your app can access resources, code, or any other artifacts stored in your private repository. You can also use the access token in your build server so it can fetch the required code, build/compile it, or perform other tasks.

 

 

Cloud and the rate of Change Management

At Kloud we get incredible opportunities to partner with organisations who are global leaders in their particular industry.

Listening to and observing one of our established clients inspired me to write about their approach to change management in the cloud around the SaaS model.

Let’s start by telling you a quick story about bubble wrap.

Bubble wrap has taken a very different journey from what it was originally intended for. It was meant to be used as wallpaper. Who would have thought that to be the case?!

It turned out that at the time there was no market for bubble wallpaper, and the product would have slowly faded into oblivion had there not been some innovative, outside-the-box thinking. Bubble wrap’s inventors approached a large technology company in the hope that it could be the packaging material for its fragile products – the result revolutionised the packaging industry, and today bubble wrap is a household name for many reasons.

In cloud computing we find that cloud suppliers release a wave of new features designed to change and enhance the productivity of the consumer. The consumer loves all the additional functionality that comes with the solution, however most of the time they do not understand the full extent of the changes. Herein lies the challenge – how does the conventional IT department communicate the change, train up end users and ultimately manage change?

We get ready to run all our time-intensive checks so that people know exactly what is about to change, and when and how it will change – which is challenging to say the least. However, when you ramp this up to cloud-speed, it clearly becomes near impossible to manage.

Change can be likened to exercising your muscles. The more you train the stronger you get, and change management is no different – the more change you execute the better you will get at managing it.

Many organisations keep features turned off until they understand them. Like my client says “I think this is old school”.

The smartphone has been a contributor to educating people on the new way of handling change. It has helped gear people up to be more change ready so we should give the end user the credit that they deserve.

So what approach is my client taking?

Enable everything.

Yes, that’s right, let the full power of Cloud permeate through your organisation and recognise that the more time you spend trying to control what gets released as opposed to what doesn’t, the more time you waste which can be better used to create more value.

Enable features and let your early adopters benefit, then let the remaining people go on a journey. Sure, some might get it wrong and end up using the functionality incorrectly, but this is likely to be the minority. Remember, most people don’t like change, whether we like to admit it or not. As a result, you need to spend time listening to feedback and seeing how people interact with the technology.

Now let’s address the elephant in the room.

If we enable everything doesn’t it lead to chaos? Well let’s think this through by looking at the reverse. What would happen if we did not enable anything at all? Nothing. What does nothing lead to? Well, to be precise, nothing.

Think about when Henry Ford first rolled out the self-propelled vehicle which he named the Ford Quadricycle in an era where people struggled to look past horses. Did he ever dream that one day there would be electric cars? Probably not. Which is ironic considering he was introduced to Thomas Edison!

My point though? If you try and limit change you could very well be stifling progress. Imagine the lost opportunities?

Unlike bubble wrap which eventually will pop, Cloud services will continue to evolve and expand and so our way in handling change needs to evolve, adapt and change. Just maybe the only thing that has to pop is our traditional approach to change.

Creating Accounts on Azure SQL Database through PowerShell Automation

In the previous post, Azure SQL Pro Tip – Creating Login Account and User, we briefly walked through how to create login accounts on an Azure SQL Database through SSMS. Using SSMS is of course very convenient. However, as a DevOps engineer, I want to automate this process with PowerShell. In this post, we’re going to walk through how to achieve this goal.

Step #1: Create Azure SQL Database

First of all, we need an Azure SQL Database. It can be easily done by running an ARM template in PowerShell like:
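A minimal sketch (the deployment name, resource group and template file paths are placeholders):

New-AzureRmResourceGroupDeployment `
    -Name "sqldb-deployment" `
    -ResourceGroupName "my-resource-group" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" `
    -Verbose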

We’re not going to dig into this further, as it is beyond our topic. Now we’ve got an Azure SQL Database.

Step #2: Create SQL Script for Login Account

In the previous post, we used the following SQL script:
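The script from that post isn’t reproduced here; in essence it was a plain CREATE LOGIN statement with hard-coded values, roughly (the login name and password here are placeholders):

CREATE LOGIN myappuser WITH PASSWORD = 'SomeStr0ngPassword!';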

Now, we’re going to automate this process by providing username and password as parameters to an SQL script. The main part of the script above is CREATE LOGIN ..., so we slightly modify it like:
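That is, we want to replace the hard-coded values with parameters, something like:

CREATE LOGIN @Username WITH PASSWORD = @Password;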

Now the SQL script is ready.

Step #3: Create PowerShell Script for Login Account

We need to execute this in PowerShell. Look at the following PowerShell script:
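A sketch of the kind of script involved (server, file and variable names are assumptions; note that on Azure SQL Database, CREATE LOGIN has to run against the master database):

$connectionString = "Server=tcp:$serverName,1433;Initial Catalog=master;User ID=$adminLogin;Password=$adminPassword;Encrypt=True;"
$query = Get-Content -Path ".\create-login.sql" -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($query, $connection)

# Pass the username and password in as parameters, ADO.NET style.
$command.Parameters.AddWithValue("@Username", $username) | Out-Null
$command.Parameters.AddWithValue("@Password", $password) | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()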

Looks familiar? Yes, indeed. It’s basically the same as using ADO.NET in ASP.NET applications. Let’s run this PowerShell script. Woops! Something went wrong. We can’t run the SQL script. What’s happening?

Step #4: Update SQL Script for Login Account

CREATE LOGIN won’t take variables. In other words, the SQL script above will never work unless it is modified to take variables. In this case we don’t really want to use dynamic SQL, which is ugly, but we have no choice. Therefore, let’s update the SQL script:
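A sketch of the dynamic SQL version:

DECLARE @sql NVARCHAR(MAX) =
    N'CREATE LOGIN [' + @Username + N'] WITH PASSWORD = ''' + @Password + N'''';
EXEC sp_executesql @sql;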

Then run the PowerShell script again and it will work. Please note that using dynamic SQL here isn’t a big issue, as these scripts are not exposed to the public anyway.

Step #5: Update SQL Script for User Login

In a similar way, we need to create a user in the Azure SQL Database. This also requires dynamic SQL like:
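A sketch of what this looks like (run against the application database rather than master):

DECLARE @sql NVARCHAR(MAX) =
    N'CREATE USER [' + @Username + N'] FOR LOGIN [' + @Username + N']; ' +
    N'ALTER ROLE [db_owner] ADD MEMBER [' + @Username + N'];';
EXEC sp_executesql @sql;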

This is to create a user with a db_owner role. In order for the user to have only limited permissions, use the following dynamic SQL script:
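For example, granting only the data reader/writer roles instead (which roles count as “limited” is an assumption and will depend on your application):

DECLARE @sql NVARCHAR(MAX) =
    N'CREATE USER [' + @Username + N'] FOR LOGIN [' + @Username + N']; ' +
    N'ALTER ROLE [db_datareader] ADD MEMBER [' + @Username + N']; ' +
    N'ALTER ROLE [db_datawriter] ADD MEMBER [' + @Username + N'];';
EXEC sp_executesql @sql;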

Step #6: Modify PowerShell Script for User Login

In order to run the SQL script right above, run the following PowerShell script:
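This is essentially the same pattern as before, except that it connects to the application database instead of master (again, a sketch with assumed names):

$connectionString = "Server=tcp:$serverName,1433;Initial Catalog=$databaseName;User ID=$adminLogin;Password=$adminPassword;Encrypt=True;"
$query = Get-Content -Path ".\create-user.sql" -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($query, $connection)
$command.Parameters.AddWithValue("@Username", $username) | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()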

So far, we have walked through how we can use PowerShell scripts to create login accounts and users on an Azure SQL Database. With this approach, DevOps engineers will easily be able to create accounts on Azure SQL Database by running a PowerShell script on their build or deployment server.

Automate your Cloud Operations Part 2: AWS CloudFormation

Stacking the AWS CloudFormation

Part 1 of the Automate your Cloud Operations blog series gave us a basic understanding of how to automate an AWS stack using CloudFormation.

This post will show how to layer a new stack on top of an existing AWS CloudFormation stack, using AWS CloudFormation instead of modifying the base template. AWS resources can be added into the existing VPC using the outputs that describe the resources created by the main VPC stack, instead of having to modify the main template.

This allows us to compartmentalise and separate out the components of our AWS infrastructure, and to version the infrastructure code for each component independently.

Note: the templates I use in this post are for educational purposes only and may not be suitable for production workloads :).

The diagram below illustrates the concept:

CloudFormation3

Bastion Stack

Previously (in Part 1), we created the initial stack, which provides us with the base VPC. Next, we will provision the bastion stack, which will create a bastion host on top of our base VPC. Below are the components of the bastion stack:

  • Create an IAM user that can find out information about the stack and has permissions to create KeyPairs and perform related actions.
  • Create the bastion host instance, with an AWS Security Group that enables SSH access via port 22.
  • Use CloudFormation Init to install packages, create files and run commands on the bastion host instance, and to take the credentials created for the IAM user and set them up for use by the scripts.
  • Use the EC2 UserData to run the cfn-init command that actually does the above via a bash script.
  • A condition handle: the completion of the instance is dependent on the scripts running properly; if the scripts fail, the CloudFormation stack will error out and fail.

Below is the CloudFormation template to build the bastion stack:
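The full template is too long to reproduce here, but a heavily trimmed sketch (shown in CloudFormation’s YAML form) gives the shape of it: the base VPC details come in as parameters fed from the initial stack’s outputs, and cfn-init plus a creation policy make the stack fail if the bootstrap scripts fail. The IAM user and the detailed files/commands sections are omitted, and the AMI ID is a placeholder:

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id          # from the initial VPC stack's outputs
  PublicSubnetId:
    Type: AWS::EC2::Subnet::Id
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
Resources:
  BastionSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH to the bastion host
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
  BastionHost:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              aws-cli: []            # install packages, create files, run commands here
    CreationPolicy:
      ResourceSignal:
        Timeout: PT15M               # stack errors out if cfn-signal never reports success
    Properties:
      ImageId: ami-xxxxxxxx          # placeholder AMI
      InstanceType: t2.micro
      KeyName: !Ref KeyName
      SubnetId: !Ref PublicSubnetId
      SecurityGroupIds:
        - !Ref BastionSecurityGroup
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource BastionHost --region ${AWS::Region}
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource BastionHost --region ${AWS::Region}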

Following are the high level steps to layer the bastion stack on top of the initial stack:

I put together the following video on how to use the template:

NAT Stack

It is important to design the VPC with security in mind. I recommend designing your security zones and network segregation; I have written a blog post on how to secure an Azure network, and the same approach can also be implemented in an AWS environment using VPCs, subnets and security groups. At the very minimum, we will segregate the private and public subnets in our VPC.

A NAT instance will be added to the initial VPC’s public subnet so that future private instances can use it for communication outside the initial VPC. We will use exactly the same method as we did for the bastion stack.

The diagram below illustrates the concept:

CloudFormation4

The components of the NAT stack:

  • An Elastic IP address (EIP) for the NAT instance
  • A Security Group for the NAT instance: allowing ingress TCP and UDP traffic on ports 0-65535 from the internal subnet; allowing egress TCP traffic on ports 22, 80, 443 and 9418 to anywhere, egress UDP traffic on port 123 to the Internet, and egress traffic on ports 0-65535 to the internal subnet
  • The NAT instance
  • A private route table
  • A private route using the NAT instance as the default route for all traffic

Following is the CloudFormation template to build the stack:
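Again, a trimmed sketch of the resources involved (the internal CIDR range and AMI ID are placeholders):

Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  PublicSubnetId:
    Type: AWS::EC2::Subnet::Id
Resources:
  NatSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: NAT instance security group
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - { IpProtocol: tcp, FromPort: 0, ToPort: 65535, CidrIp: 10.0.0.0/16 }
        - { IpProtocol: udp, FromPort: 0, ToPort: 65535, CidrIp: 10.0.0.0/16 }
      SecurityGroupEgress:
        - { IpProtocol: tcp, FromPort: 22, ToPort: 22, CidrIp: 0.0.0.0/0 }
        - { IpProtocol: tcp, FromPort: 80, ToPort: 80, CidrIp: 0.0.0.0/0 }
        - { IpProtocol: tcp, FromPort: 443, ToPort: 443, CidrIp: 0.0.0.0/0 }
        - { IpProtocol: tcp, FromPort: 9418, ToPort: 9418, CidrIp: 0.0.0.0/0 }
        - { IpProtocol: udp, FromPort: 123, ToPort: 123, CidrIp: 0.0.0.0/0 }
        - { IpProtocol: "-1", CidrIp: 10.0.0.0/16 }
  NatInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx          # an Amazon-provided NAT AMI
      InstanceType: t2.micro
      SourceDestCheck: false         # required for a NAT instance
      SubnetId: !Ref PublicSubnetId
      SecurityGroupIds:
        - !Ref NatSecurityGroup
  NatEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
      InstanceId: !Ref NatInstance
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VpcId
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      InstanceId: !Ref NatInstance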

The high-level steps to layer this stack on top of the existing stacks are similar to the previous ones:

Hopefully, after reading Part 1 and Part 2 of this series, you will have gained a basic understanding of how to automate AWS cloud operations using AWS CloudFormation.

Please contact Kloud Solutions if you need help with automating your AWS production environment.

http://www.wasita.net