An Approach to DevOps Adoption

Originally posted at Chandra’s blog – https://fastandsteady.io

DevOps has been a buzzword for a while now in the tech industry, with many organizations joining the bandwagon and working towards embracing DevOps practices.

Wikipedia describes DevOps as “a practice that emphasizes the collaboration and communication of the IT professionals across the value chain while automating the process of software delivery and infrastructure changes. The aim is to deliver the software quickly and reliably.”

However, in an enterprise scenario, with the complexity involved, the journey to implement DevOps comprehensively is evolutionary. Hence, it is only sensible to drive along an incremental adoption path, where each increment provides the most benefit through the MVP (Minimum Viable Product) it delivers towards the DevOps journey.

In this context, this article attempts to explain the initial steps of the larger DevOps journey and helps you get a head start.

The approach at high-level consists of four major steps:

  1. Value stream mapping – Mapping the existing process workflows
  2. Future state value stream mapping – Identify the immediate goals and visualize the optimized value stream map
  3. Execution – Incremental approach towards the implementation
  4. Retrospection – Review and learn.

OK, let’s get started!

Value Stream Mapping 

Value stream mapping is a lean improvement strategy that maps the processes and information flows of a product from source to delivery. For software delivery, it is the pre-defined path an idea takes to transform into a software product/service delivered to the end customers.

A value stream mapping exercise over the services currently delivered serves as the first step of the DevOps journey. It helps in identifying the overall time taken by the value chain, the cycle time of each process and the lead time between the processes involved in software delivery. It also captures various process-specific metrics along the value chain.

Quite obviously, the exercise involves collaborating with multiple stakeholders across the application lifecycle to gather the details and, at the same time, align them to a shared goal. In fact, it sets the stage for the larger collaboration between the parties as the journey progresses.

The picture below depicts a typical software development workflow at a high level, agnostic to the development methodology. Depending on the type of change or the product, one or more steps may not be relevant to the application’s lifecycle.

Software Lifecycle

Value stream mapping provides key insights into the overall performance of the value chain. Details include:

  • The process and information flow
  • Overall timeline from an idea generation to release
  • Fastest and slowest processes
  • Shortest and longest lead times.

These insights pave the way for defining the strategic and immediate goals that will help optimize the overall value chain. The next steps describe the future state value stream mapping and an execution methodology that focuses on breaking down silos to create a people- and technology-driven culture for accelerated processes.

Future state Value Stream Mapping

The future state value stream mapping focuses on defining the immediate goals for optimizing the value chain: improving the quality of the product/service delivered by each process, improving the cycle times and lead times, and so on. Remember, the aim is to deliver quickly and reliably.

While the easiest route is to target the processes that are most time-consuming, it is imperative to analyse multiple aspects of each process before considering the options. Below are a few metrics that could be used to evaluate the options and build the future state value stream map. The optimization options are to be balanced against these metrics to arrive at the final set that will be part of the execution.

Table

Execution

Ideation to action – This next step in the journey is broken down into three key aspects:

  1. Backlog grooming
  2. Shared responsibility model and
  3. Implementation.

Backlog grooming

This involves detailing user stories and creating a backlog for the areas identified for optimization, in collaboration with all the parties involved. The backlog can then be put into action either in sprints or with a Kanban approach, depending on how you want to manage the execution.

The diagram below describes how the backlog could be driven forward for execution.

Next steps

Shared responsibility model

It is vital that a culture of collaboration and communication between the stakeholders is nurtured for a successful DevOps journey. A shared responsibility model is equally important: it overlaps the services delivered by each of the stakeholders involved, which earlier used to sit in silos. Below is a depiction of how the shared responsibility model evolves with DevOps adoption.

Shared responsibilities

As you may have figured out during the exercise, most tasks could ideally be delivered by the operations team. However, tasks related to design optimization, setting up continuous integration, implementing a test automation framework, etc. are part of the development/QA community’s responsibilities.

The project management related tasks are more focused on nurturing the culture, as well as providing the tools and methods for improving collaboration and giving visibility of work in progress. The key is to bring together the teams involved (including business, development, quality assurance, service delivery and operations) and build the necessary tools and technologies to drive the agile processes.

Again, depending on the organization, Scrum/Kanban methods can be implemented to execute the user stories/tasks.

Implementation

Tools and technology implementation is one of the core aspects of a typical DevOps journey. Tools provide the required automation across application lifecycle management, accelerate adoption and make the whole process sustainable. Needless to say, the implementation of the tools, and the remaining tasks, are driven through the shared responsibility model and the sprint plans put together.

Review and Retrospection

The last step in the cycle is to measure and map the outcomes achieved and to update the value stream map to reflect the new realities. It is also important to review the improvement in overall process transparency. A detailed review of the execution provides insights into areas for further improvement across all facets of the process.

This last step of the cycle could well also be the first step of another iteration or increment that further optimizes the value stream, nurturing the DevOps culture and driving the journey forward.

Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Introduction

Azure Web Apps or App Services are quite flexible regarding deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source control services like VSTS, GitHub or Bitbucket, your on-premises Git, multiple IDEs including Visual Studio, Eclipse and Xcode, and using MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be scenarios where you just need to update some reference files and don’t need to build or redeploy the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario, where we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication, but we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy features which support source code management and deployment operations for Azure App Services, among others. One of these is the Virtual File System (VFS) API. This API is based on the VFS HTTP Adapter, which wraps a VFS instance as an HTTP RESTful interface. The Kudu VFS API allows us to upload and download files, get a list of files in a directory, create directories, and delete files from the virtual file system of an Azure App Service; and we can use PowerShell to call it.

In this post I will show how to interact with the Azure App Service Virtual File System (VFS) API via PowerShell.

Authenticating to the Kudu API

To call any of the Kudu APIs, we need to authenticate by adding the corresponding Authorization header. To create the header value, we need to use the Kudu API credentials, as detailed here. Because we will be interacting with an API related to an App Service, we will be using site-level credentials (a.k.a. publishing profile credentials).

Getting the Publishing Profile Credentials from the Azure Portal

You can get the publishing profile credentials by downloading the publishing profile from the portal, as shown in the figure below. Once downloaded, the XML document will contain the site-level credentials.

Getting the Publishing Profile Credentials via PowerShell

We can also get the site-level credentials via PowerShell. I’ve created a PowerShell function which returns the publishing credentials of an Azure App Service or a Deployment Slot, as shown below.

Bear in mind that you need to be logged in to Azure in your PowerShell session before calling these cmdlets.
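The original function isn’t reproduced here, but a minimal sketch gives the idea. It assumes the AzureRM module, and the function and parameter names are placeholders of my own:

# Sketch only: returns the publishing (site-level) credentials of an App Service or deployment slot
function Get-KuduPublishingCredentials {
    param(
        [Parameter(Mandatory = $true)][string] $resourceGroupName,
        [Parameter(Mandatory = $true)][string] $webAppName,
        [string] $slotName = $null
    )

    if ([string]::IsNullOrWhiteSpace($slotName)) {
        $resourceType = "Microsoft.Web/sites/config"
        $resourceName = "$webAppName/publishingcredentials"
    }
    else {
        $resourceType = "Microsoft.Web/sites/slots/config"
        $resourceName = "$webAppName/$slotName/publishingcredentials"
    }

    # The 'list' action returns the publishing user name and password for the site or slot
    Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName `
        -ResourceType $resourceType -ResourceName $resourceName `
        -Action list -ApiVersion 2015-08-01 -Force
}

The site-level user name and password are then available under the Properties of the returned object (PublishingUserName and PublishingPassword).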

Getting the Kudu REST API Authorisation header via PowerShell

Once we have the credentials, we are able to get the Authorization header value. The instructions to construct the header are described here. I’ve created another PowerShell function, which relies on the previous one, to get the header value, as follows.
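Again as a sketch, and building on the function above (the function name is my own), the header value can be composed like this:

# Sketch only: builds the value for the Authorization header from the publishing credentials
function Get-KuduAuthorisationHeaderValue {
    param(
        [Parameter(Mandatory = $true)][string] $resourceGroupName,
        [Parameter(Mandatory = $true)][string] $webAppName,
        [string] $slotName = $null
    )

    $creds = Get-KuduPublishingCredentials $resourceGroupName $webAppName $slotName
    $userName = $creds.Properties.PublishingUserName
    $password = $creds.Properties.PublishingPassword

    # Base64-encode "username:password" and prefix it with "Basic "
    return ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password))))
}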

Calling the App Service VFS API

Once we have the Authorization header, we are ready to call the VFS API. As shown in the documentation, the VFS API has the following operations:

  • GET /api/vfs/{path}    (Gets a file at path)
  • GET /api/vfs/{path}/    (Lists files at directory specified by path)
  • PUT /api/vfs/{path}    (Puts a file at path)
  • PUT /api/vfs/{path}/    (Creates a directory at path)
  • DELETE /api/vfs/{path}    (Delete the file at path)

So the URI to call the API would be something like:

  • GET https://{webAppName}.scm.azurewebsites.net/api/vfs/

To invoke the REST API operation via PowerShell we will use the Invoke-RestMethod cmdlet.
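For example, a directory listing of wwwroot could be requested as follows; the app and resource group names are hypothetical and the helper function is the sketch from above:

# Sketch only: list the files under site/wwwroot of an App Service
$webAppName = "mywebapp"
$authHeader = Get-KuduAuthorisationHeaderValue "myResourceGroup" $webAppName

$listUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/"
Invoke-RestMethod -Uri $listUrl -Headers @{ Authorization = $authHeader } -Method Get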

We have to bear in mind that when trying to overwrite or delete a file, the web server implements ETag behaviour to identify specific versions of files.

Uploading a File to an App Service

I have created the PowerShell function shown below, which uploads a local file to a path in the virtual file system. To call this function, you need to provide the App Service name, the Kudu credentials (username and password), the local path of your file and the Kudu path. The function assumes that you want to upload the file under the wwwroot folder, but you can change this if needed.
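The original function isn’t reproduced here, but a sketch that matches that description could look like this (function and parameter names are my own):

# Sketch only: upload a local file to the App Service virtual file system under site/wwwroot
function Upload-FileToWebApp {
    param(
        [Parameter(Mandatory = $true)][string] $webAppName,
        [Parameter(Mandatory = $true)][string] $kuduUserName,
        [Parameter(Mandatory = $true)][string] $kuduPassword,
        [Parameter(Mandatory = $true)][string] $localPath,
        [Parameter(Mandatory = $true)][string] $kuduPath
    )

    $authHeader = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $kuduUserName, $kuduPassword))))

    # Change 'site/wwwroot' if you need to upload the file somewhere else
    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"

    # "If-Match" = "*" disables the ETag version check on the server side
    Invoke-RestMethod -Uri $apiUrl `
        -Headers @{ Authorization = $authHeader; "If-Match" = "*" } `
        -Method PUT -InFile $localPath
}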

As you can see in the script, we are adding the “If-Match”=”*” header to disable ETag version check on the server side.

Downloading a File from an App Service

Similarly, I have created a function to download a file on an App Service to the local file system via PowerShell.
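A corresponding sketch for the download direction (again, names are my own):

# Sketch only: download a file from the App Service virtual file system to the local file system
function Download-FileFromWebApp {
    param(
        [Parameter(Mandatory = $true)][string] $webAppName,
        [Parameter(Mandatory = $true)][string] $kuduUserName,
        [Parameter(Mandatory = $true)][string] $kuduPassword,
        [Parameter(Mandatory = $true)][string] $kuduPath,
        [Parameter(Mandatory = $true)][string] $localPath
    )

    $authHeader = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $kuduUserName, $kuduPassword))))

    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"

    # -OutFile writes the response body straight to the local path
    Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } `
        -Method GET -OutFile $localPath
}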

Using the ZIP API

In addition to the VFS API, we can also use the Kudu ZIP API, which allows us to upload zip files and have them expanded into folders, and to compress server folders as zip files and download them.

  • GET /api/zip/{path}    (Zip up and download the specified folder)
  • PUT /api/zip/{path}    (Upload a zip file which gets expanded into the specified folder)

You could create your own PowerShell functions to interact with the ZIP API based on what we have previously shown.
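As a starting point, a sketch for zipping up and downloading a server folder (names are my own) might be:

# Sketch only: download a server folder as a zip file using the Kudu ZIP API
function Download-FolderAsZipFromWebApp {
    param(
        [Parameter(Mandatory = $true)][string] $webAppName,
        [Parameter(Mandatory = $true)][string] $kuduUserName,
        [Parameter(Mandatory = $true)][string] $kuduPassword,
        [Parameter(Mandatory = $true)][string] $kuduFolderPath,   # e.g. "site/wwwroot"
        [Parameter(Mandatory = $true)][string] $localZipPath
    )

    $authHeader = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $kuduUserName, $kuduPassword))))

    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/zip/$kuduFolderPath/"
    Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } `
        -Method GET -OutFile $localZipPath
}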

Conclusion

As we have seen, in addition to the multiple deployment options we have for Azure App Services, we can also use the Kudu VFS API to interact with the App Service Virtual File System via HTTP. I have shared some functions for some of the provided operations. You could customise these functions or create your own based on your needs.

I hope this has been of help and feel free to add your comments or queries below. 🙂

Gracefully managing Gulp process hierarchy on Windows

Process Tree

When developing client side JavaScript, one thing that really comes in handy is the ability to create fully functional stubs that can mimic real server APIs. This decouples project development dependencies and allows different team members to work in parallel against an agreed API contract.

To allow people to have an isolated environment to work on and get immediate feedback on their changes, I leverage the Gulp.js + Node.js duo.

“Gulp.js is a task runner that runs on Node.js and is normally used to automate UI development workflows such as LESS to CSS conversion, making HTML templates, minifying CSS and JS, etc. However, it can also be used to fire up local web servers, custom processes and many other things.”

To make things really simple, I set up Gulp to allow anyone to enter gulp dev on a terminal to start the following workflow:

  1. Prepare distribution files
    – Compile LESS or SASS style sheets
    – Minify and concatenate JavaScript files
    – Compile templates (AngularJS)
    – Version resources for cache busting (using a file hash) 
    – etc
  2. Start stub servers
    – Start an API mockup server
    – Start a server that can serve local layouts HTML files
  3. Open a browser pointing to the local app (Chrome or IE depending on the platform)
  4. Fire up file watchers to trigger specific builds and browser reloads when project files change.

The following is an extract of the entry point for these gulp tasks (gulp and gulp dev):

The DEV START.servers task is imported by the requireDir('./gulp-tasks') instruction. This task triggers other dependent workflows that spin up plain Node.js web servers that are, in turn, kept alive by a process supervisor library called forever-monitor.

There is a catch, however. If you are working on a Unix-friendly system (like a Mac), processes are normally managed hierarchically. In a normal situation, a root process (let’s say gulp dev) will create child processes (DEV START.api.server and DEV START.layouts.server) that will only live during the lifetime of their parent. This is great because whenever we terminate the parent process, all its children are terminated too.

In a Windows environment, process management is done in a slightly different way: even if you close a parent process, its child processes will stay alive, doing what they were already doing. Child processes only hold the parent ID as a reference. This means that it is still possible to mimic the Unix process tree behaviour, but it is just a little bit more tedious, and some library creators avoid dealing with the problem. This is the case with Gulp.

So in our scenario we listen for the SIGINT signal and gracefully terminate all child processes when it is raised through keyboard input (hitting Ctrl-C). This prevents developers on Windows from having to go to the Task Manager and terminate orphaned child processes themselves.

I am also using the process.once(...) event listener instead of process.on(...) to prevent an error surfaced on Macs when Ctrl-C is hit more than once. We don’t want the OS to complain when we try to terminate a process that has already been terminated :).
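As an illustration only – the task name, the stub script path and the forever-monitor wiring below are a simplified sketch, not the project’s actual gulpfile – the idea looks roughly like this:

// Sketch only: keep track of the supervised child processes and stop them all on Ctrl-C
var gulp = require('gulp');
var forever = require('forever-monitor');

var children = [];

gulp.task('DEV START.api.server', function () {
  // 'stubs/api-server.js' is a hypothetical path to the API mockup server
  var apiServer = new forever.Monitor('stubs/api-server.js', { watch: false });
  children.push(apiServer);
  apiServer.start();
});

// 'once' (not 'on') so a second Ctrl-C doesn't try to terminate already-terminated processes
process.once('SIGINT', function () {
  children.forEach(function (child) {
    child.stop();
  });
  process.exit(0);
});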

That is it for now.

Happy hacking!

Creating self-signed certs using OpenSSL on Windows

ssl

Working with Linux technologies exposes you to a huge number of open source tools that can simplify and speed up your development workflow. Interestingly enough, many of these tools are now flooding into the Windows ecosystem allowing us to increase the portability of our development assets across multiple operating systems.

Today I am going to demonstrate how easy it is to install OpenSSL on Windows and how simple it is to quickly create self-signed certificates for our development TLS needs that will work on a range of operating systems.

We will start by installing the following tools:

1. Chocolatey
https://chocolatey.org/
“Chocolatey is a package manager for Windows (like apt-get or yum but for Windows). It was designed to be a decentralized framework for quickly installing applications and tools that you need. It is built on the NuGet infrastructure currently using PowerShell as its focus for delivering packages from the distros to your door, err computer.”

2. Cmder
https://chocolatey.org/packages?q=cmder
“Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout. Looking sexy from the start”

Benefits of this approach

  • Using OpenSSL provides portability for our scripts by allowing us to run the same commands no matter which OS we are working on: Mac OS X, Windows or Linux.
  • The certificates generated through OpenSSL can be directly imported as custom user certificates on Android and iOS (this is not the case with other tools like makecert.exe, at least not directly).
  • Chocolatey is a very effective way of installing and configuring software on your Windows machine in a scriptable way (Fiddler, Chrome, NodeJS, Docker, Sublime… you name it).
  • The “Cmder” package downloads a set of utilities which are commonly used in the Linux world. This once again allows for better portability of your code, especially if you want to start using command line tools like Vagrant, Git and many others.

Magically getting OpenSSL through Cmder

Let’s get started by installing the Chocolatey package manager onto our machine. It only takes a single line of code! See: https://chocolatey.org/install

Now that we have our new package manager up and running, getting the Cmder package installed becomes as simple as typing in the following instruction:

C:\> choco install cmder
Cmdr window

reference: http://cmder.net/

The Cmder package shines “big time” because it installs for us a portable release of the latest Git for Windows tools (https://git-for-windows.github.io). The Git for Windows project (aka msysGit) gives us access to the traditional commands found on Linux and Mac OS X: ls, ssh, git, cat, cp, mv, find, less, curl, ssh-keygen, tar…

mssysgit

… and OpenSSL.

Generating your Root CA

The following instructions will help you generate your Root Certificate Authority (CA) certificate. This is the CA that will be trusted by your devices and that will be used to sign your own TLS certs for HTTPS development.
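The exact commands aren’t reproduced here, but a typical OpenSSL sequence for this step looks something like the following; the key size and validity period are arbitrary example values, and the file names simply match the ones referred to below:

# Generate the Root CA private key and a self-signed Root CA certificate
openssl genrsa -out myRootCA.key 2048
openssl req -x509 -new -nodes -key myRootCA.key -sha256 -days 1024 -out myRootCA.pem

# Package the key and certificate as a PFX so it can be imported via the Windows Certificate Import Wizard
openssl pkcs12 -export -out myRootCA.pfx -inkey myRootCA.key -in myRootCA.pem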

root CA files

We now double-click on the myRootCA.pfx file to fire up the Windows Certificate Import Wizard and get the Root CA imported into the Trusted Root Certification Authorities store. Too easy… let’s move on to signing our first TLS certs with it!

Generating your TLS cert

The following commands will quickly get the ball rolling by generating and signing the certificate request in interactive mode (entering cert fields by hand). At a later stage you might want to use a cert request configuration file and pass it to the OpenSSL command in order to make the process scriptable and therefore repeatable.

Just for the curious, I will be creating a TLS cert for “sweet-az.azurewebsites.net” to allow me to set up a local dev environment that mimics an Azure Web App.
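Again, the original commands aren’t reproduced here; a representative sequence (arbitrary key size and validity period, entering sweet-az.azurewebsites.net as the Common Name when prompted) would be:

# Generate a key and a certificate signing request for the dev site
openssl genrsa -out myTLS.key 2048
openssl req -new -key myTLS.key -out myTLS.csr

# Sign the request with our Root CA and package the result as a PFX for the Windows certificate store
openssl x509 -req -in myTLS.csr -CA myRootCA.pem -CAkey myRootCA.key -CAcreateserial -out myTLS.crt -days 500 -sha256
openssl pkcs12 -export -out myTLS.pfx -inkey myTLS.key -in myTLS.crt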

Just as we did in the previous step, we can double-click on the packaged myTLS.pfx file to get the certificate imported into the Local Machine/Personal Windows certificate store.

Testing things out

Finally we will just do a smoke test against IIS following the traditional steps:

  • Create an entry for the hostname used in the cert in your hosts file:
    127.0.0.1            sweet-az.azurewebsites.net
  • Create a 443 binding for the default site in the IIS Management Console.

SSL site bindings

Let’s confirm that everything has worked correctly by opening a browser session and navigating to our local version of the https://sweet-az.azurewebsites.net website.

Sweet as!

That’s all folks!

In following posts I will cover how to get these certs installed and trusted on our mobile phones, and then leverage other simple techniques to help us develop mobile applications (proxying and SSH tunnelling). Until then…

Happy hacking!

Access Azure linked templates from a private repository

I was recently tasked to propose a way to use linked templates, especially how to refer to templates stored in a private repository. The Azure Resource Manager (ARM) engine accepts a URI to access and deploy linked templates, hence the URI must be accessible by ARM. If you store your templates in a public repository, ARM can access them fine, but what if you use a private repository? This post will show you how.

In this example, I use Bitbucket – a Git-based source control product by Atlassian. The free version (hosted in the cloud) allows you to have up to five private repositories. I will describe the process for obtaining a Bitbucket OAuth 2.0 token using PowerShell and how we can use the access token in Azure linked templates.

Some Basics

If you store your code in a private repository, you can access the stored code after logging into Bitbucket. Alternatively, you can use an access token instead of user credentials to log in. Access tokens allow apps or programs to access private repositories; hence they are secret in nature, just like passwords or cash.

Bitbucket access tokens expire in one hour. Once a token expires, you can either request a new access token or renew it using a refresh token. Bitbucket supports all four of the grant types in the RFC 6749 standard for obtaining an access/bearer token – in this example, we will use the password grant method. Note that this method will not work when you have two-factor authentication enabled.

Getting into action

First things first: you must have a private repository in Bitbucket. To obtain access tokens, we will create a consumer in Bitbucket, which will generate a consumer key and a secret. This key/secret pair is used to grant an access token. See the Bitbucket documentation for more detailed instructions.

Before I describe the process of obtaining an access token in PowerShell, let’s examine how we can get a Bitbucket access token using the curl command.


curl -X POST -u "client_id:secret" \
  https://bitbucket.org/site/oauth2/access_token \
  -d grant_type=password -d username={username} -d password={password}

  • -X POST specifies the request method (an HTTP POST) that is passed to the Bitbucket server.
  • -u “client_id:secret” specifies the username and password used for basic authentication. Note that this does not refer to your Bitbucket login details but rather to the consumer key/secret pair.
  • -d specifies the body of the HTTP request – in this case, the grant_type to be used (password grant) plus the Bitbucket login details (username and password).

To replicate the same command in PowerShell, we can use the Invoke-RestMethod cmdlet. This cmdlet sends HTTP/HTTPS requests to RESTful web services and returns structured data.


# Bitbucket OAuth 2.0 token endpoint
$accessCodeURL = "https://bitbucket.org/site/oauth2/access_token"

# Construct the Bitbucket request: basic auth header built from the consumer key/secret pair
# ($bbCredentials holds the consumer key/secret and the Bitbucket username/password)
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $bbCredentials.bbConsumerKey, $bbCredentials.bbConsumersecret)))

# Request body: password grant plus the Bitbucket login details
$data = @{
    grant_type = 'password'
    username   = $bbCredentials.bbUsername
    password   = $bbCredentials.bbPassword
}

# Perform the request to the Bitbucket OAuth 2.0 endpoint
$tokens = Invoke-RestMethod -Uri $accessCodeURL -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method Post -Body $data

The $base64AuthInfo variable holds a base64-encoded string for HTTP basic authentication, and the body of the HTTP request is encapsulated in the $data variable. Both are used to construct the Bitbucket OAuth 2.0 request.

When the request is successfully executed, an access token is returned (an example is shown below). This access token is valid for one hour by default; you can either renew it with the refresh token or request a new access token.

access_token  : g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=
scopes        : repository
expires_in    : 3600
refresh_token : Vj3AYYcebM8TGnle8K
token_type    : bearer
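As an example, renewing the token with the refresh_token grant could be sketched like this, re-using the variables defined earlier:

# Sketch only: renew an expired access token using the refresh_token grant
$refreshBody = @{
    grant_type    = 'refresh_token'
    refresh_token = $tokens.refresh_token
}
$tokens = Invoke-RestMethod -Uri $accessCodeURL -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method Post -Body $refreshBody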

Use in Azure Linked Templates

Once we have obtained the access token, we can use it in our Azure linked templates by including it as part of the URL query string.

(For ways how we can implement linked templates, refer to my previous blog post)

{
    "apiVersion": "2015-01-01",
    "name": "dbserverLinked",
    "type": "Microsoft.Resources/deployments",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/e1db69add5d62f64120b06a3179828a37b7f166c/azuredeploy.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
            "contentVersion": "1.0.0.0"
        },
        "parametersLink": {
            "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/6208359175c99bb892c2097901b0ed7bd723ae56/azuredeploy.parameters.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
            "contentVersion": "1.0.0.0"
        }
    }
}

Summary

We have described a way to obtain an access token for a private Bitbucket repository. With this token, your app can access resources, code or any other artifacts stored in your private repository. You can also use the access token on your build server so it can pull the required code, build/compile it, or perform other tasks.

 

 

Cloud and the rate of Change Management

At Kloud we get incredible opportunities to partner with organisations who are global leaders in their particular industry.

Listening to and observing one of our established clients inspired me to write about their approach to change management in the cloud around the SaaS model.

Let’s start by telling you a quick story about bubble wrap.

Bubble wrap has taken a very different journey from what was originally intended: it was meant to be used as wallpaper. Who would have thought that?!

It turned out that, at the time, there was no market for bubbled wallpaper, and the product would have slowly faded into oblivion had there not been some innovative, outside-the-box thinking. Bubble wrap’s inventors approached a large technology company in the hope that it could be the packaging material for its fragile product – the result revolutionised the packaging industry, and today bubble wrap is a household name for many reasons.

In cloud computing we find that cloud suppliers release a wave of new features designed to change and enhance the productivity of the consumer. The consumer loves all the additional functionality that comes with the solution, but most of the time does not understand the full extent of the changes. Herein lies the challenge – how does the conventional IT department communicate the change, train up end users and ultimately manage change?

We get ready to run all our time-intensive checks so that people know exactly what is about to change, and when and how it will change – which is challenging to say the least. When you ramp this up to cloud speed, it clearly becomes near impossible to manage.

Change can be likened to exercising your muscles. The more you train the stronger you get, and change management is no different – the more change you execute the better you will get at managing it.

Many organisations keep features turned off until they understand them. Like my client says “I think this is old school”.

The smartphone has been a contributor to educating people on the new way of handling change. It has helped gear people up to be more change-ready, so we should give end users the credit they deserve.

So what approach is my client taking?

Enable everything.

Yes, that’s right, let the full power of Cloud permeate through your organisation and recognise that the more time you spend trying to control what gets released as opposed to what doesn’t, the more time you waste which can be better used to create more value.

Enable features and let your early adopters benefit, then let the remaining people go on a journey. Sure, some might get it wrong and end up using the functionality incorrectly, but this is likely to be the minority. Remember, most people don’t like change, whether we like to admit it or not. As a result, you need to spend time listening to feedback and seeing how people interact with the technology.

Now let’s address the elephant in the room.

If we enable everything, doesn’t it lead to chaos? Well, let’s think this through by looking at the reverse. What would happen if we did not enable anything at all? Nothing. And what does nothing lead to? Well, to be precise, nothing.

Think about when Henry Ford first rolled out the self-propelled vehicle he named the Ford Quadricycle, in an era where people struggled to look past horses. Did he ever dream that one day there would be electric cars? Probably not – which is ironic considering he was introduced to Thomas Edison!

My point, though? If you try to limit change, you could very well be stifling progress. Imagine the lost opportunities.

Unlike bubble wrap, which eventually will pop, cloud services will continue to evolve and expand, and so our way of handling change needs to evolve, adapt and change too. Maybe the only thing that has to pop is our traditional approach to change.

Creating Accounts on Azure SQL Database through PowerShell Automation

In the previous post, Azure SQL Pro Tip – Creating Login Account and User, we briefly walked through how to create login accounts on Azure SQL Database through SSMS. Using SSMS is, of course, very convenient. However, as a DevOps engineer, I want to automate this process through PowerShell. In this post, we’re going to walk through how to achieve this goal.

Step #1: Create Azure SQL Database

First of all, we need an Azure SQL Database. This can easily be done by running an ARM template in PowerShell, like the following:
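The original snippet isn’t included here, but as a sketch it would be something along these lines; the resource group name, location and template file names are placeholders:

# Sketch only: deploy an ARM template that provisions the Azure SQL server and database
New-AzureRmResourceGroup -Name "my-resource-group" -Location "Australia Southeast"
New-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json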

We’re not going to dig into this further, as it is beyond our topic. Now we’ve got an Azure SQL Database.

Step #2: Create SQL Script for Login Account

In the previous post, we used the following SQL script:
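That script isn’t reproduced here; in essence it was a hard-coded CREATE LOGIN statement along these lines (the login name and password are placeholders):

-- Runs against the master database; values are hard-coded
CREATE LOGIN loginname WITH PASSWORD = 'Str0ngP@ssw0rd!';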

Now, we’re going to automate this process by providing the username and password as parameters to the SQL script. The main part of the script above is CREATE LOGIN ..., so we slightly modify it like:
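As a sketch, the parameterised version we would like to run looks like:

-- Runs against the master database; @Username and @Password are supplied from PowerShell
CREATE LOGIN @Username WITH PASSWORD = @Password;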

Now the SQL script is ready.

Step #3: Create PowerShell Script for Login Account

We need to execute this in PowerShell. Look at the following PowerShell script:
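The script isn’t reproduced here, but a sketch of it using System.Data.SqlClient would be as follows; the server name, credentials and script file name are placeholders:

# Sketch only: run the CREATE LOGIN script against the master database
$adminLogin    = "serveradmin"
$adminPassword = "ServerAdminP@ssw0rd!"
$username      = "loginname"
$password      = "Str0ngP@ssw0rd!"

$connectionString = "Server=tcp:myserver.database.windows.net,1433;Database=master;User ID=$adminLogin;Password=$adminPassword;Encrypt=True;"
$sqlScript = Get-Content .\create-login.sql -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($sqlScript, $connection)
$command.Parameters.AddWithValue("@Username", $username) | Out-Null
$command.Parameters.AddWithValue("@Password", $password) | Out-Null

$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()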

Looks familiar? Yes, indeed. It’s basically the same as using ADO.NET in ASP.NET applications. Let’s run this PowerShell script. Woops! Something went wrong. We can’t run the SQL script. What’s happening?

Step #4: Update SQL Script for Login Account

CREATE LOGIN won’t take variables. In other words, the SQL script above will never work unless it is modified. In this case we don’t want to use dynamic SQL – it’s ugly – but we have to. Therefore, let’s update the SQL script:
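A sketch of the dynamic SQL version could be:

-- Build and execute the CREATE LOGIN statement dynamically
DECLARE @sql nvarchar(max);
SET @sql = N'CREATE LOGIN ' + QUOTENAME(@Username) + N' WITH PASSWORD = ' + QUOTENAME(@Password, '''') + N';';
EXEC (@sql);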

Then run the PowerShell script again and it will work. Please note that using dynamic SQL here isn’t a big issue, as these scripts are not exposed to the public anyway.

Step #5: Update SQL Script for User Login

In a similar way, we need to create a user in the Azure SQL Database. This also requires dynamic SQL like:
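A sketch of such a script, run against the target database rather than master, could be:

-- Create the user from the login and add it to the db_owner role
DECLARE @sql nvarchar(max);
SET @sql = N'CREATE USER ' + QUOTENAME(@Username) + N' FROM LOGIN ' + QUOTENAME(@Username) + N';'
         + N' ALTER ROLE db_owner ADD MEMBER ' + QUOTENAME(@Username) + N';';
EXEC (@sql);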

This is to create a user with a db_owner role. In order for the user to have only limited permissions, use the following dynamic SQL script:
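For instance – the exact roles here are my assumption – granting only read and write access to the data could look like:

-- Create the user with db_datareader / db_datawriter membership only
DECLARE @sql nvarchar(max);
SET @sql = N'CREATE USER ' + QUOTENAME(@Username) + N' FROM LOGIN ' + QUOTENAME(@Username) + N';'
         + N' ALTER ROLE db_datareader ADD MEMBER ' + QUOTENAME(@Username) + N';'
         + N' ALTER ROLE db_datawriter ADD MEMBER ' + QUOTENAME(@Username) + N';';
EXEC (@sql);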

Step #6: Modify PowerShell Script for User Login

In order to run the SQL script right above, run the following PowerShell script:
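This is essentially the same pattern as before, just pointed at the application database instead of master; as a sketch (re-using the placeholder variables from the earlier script):

# Sketch only: run the user-creation script against the application database
$connectionString = "Server=tcp:myserver.database.windows.net,1433;Database=mydatabase;User ID=$adminLogin;Password=$adminPassword;Encrypt=True;"
$sqlScript = Get-Content .\create-user.sql -Raw

$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = New-Object System.Data.SqlClient.SqlCommand($sqlScript, $connection)
$command.Parameters.AddWithValue("@Username", $username) | Out-Null
$connection.Open()
$command.ExecuteNonQuery() | Out-Null
$connection.Close()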

So far, we have walked through how we can use PowerShell scripts to create login accounts and users on Azure SQL Database. With this approach, DevOps engineers will easily be able to create accounts on Azure SQL Database by running a PowerShell script on their build server or deployment server.

Automate your Cloud Operations Part 2: AWS CloudFormation

Stacking the AWS CloudFormation

Part 1 of the Automate your Cloud Operations blog posts gave us a basic understanding of how to automate an AWS stack using CloudFormation.

This post will show how to layer a new stack on top of an existing AWS CloudFormation stack, instead of modifying the base template. AWS resources can be added to an existing VPC by using the outputs of the main VPC stack, rather than having to modify the main template.

This allows us to compartmentalize and separate out the components of our AWS infrastructure, and to version the infrastructure code for each component.

Note: The template I will use in this post is for educational purposes only and may not be suitable for production workloads :).

The diagram below illustrates the concept:

CloudFormation3

Bastion Stack

Previously (in Part 1), we created the initial stack, which provides us with the base VPC. Next, we will provision the bastion stack, which creates a bastion host on top of our base VPC. Below are the components of the bastion stack:

  • Create an IAM user that can find out information about the stack and has permissions to create key pairs and perform related actions.
  • Create the bastion host instance with an AWS security group that enables SSH access via port 22.
  • Use CloudFormation Init to install packages, create files and run commands on the bastion host instance, and to take the credentials created for the IAM user and set them up for use by the scripts.
  • Use the EC2 UserData to run the cfn-init command that actually does the above via a bash script.
  • The wait condition handle: completion of the instance depends on the scripts running properly; if the scripts fail, the CloudFormation stack will error out and fail.

Below is the CloudFormation template to build the bastion stack:

The high-level steps to layer the bastion stack on top of the initial stack are to read the outputs from the initial stack and then create the bastion stack, passing those outputs in as parameters; a sketch using the AWS Tools for PowerShell is shown below.
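In the sketch below, which uses the AWS Tools for PowerShell, the stack names, the template URL and the assumption that the initial stack’s output keys match the bastion template’s parameter names are all placeholders of mine:

# Sketch only: read the outputs of the base VPC stack and pass them as parameters to the bastion stack
$baseStack = Get-CFNStack -StackName "initial-vpc-stack"
$parameters = $baseStack.Outputs | ForEach-Object {
    $p = New-Object Amazon.CloudFormation.Model.Parameter
    $p.ParameterKey   = $_.OutputKey      # assumes the bastion template declares matching parameter names
    $p.ParameterValue = $_.OutputValue
    $p
}

New-CFNStack -StackName "bastion-stack" `
    -TemplateURL "https://s3-ap-southeast-2.amazonaws.com/mybucket/bastion.template" `
    -Parameter $parameters -Capability CAPABILITY_IAM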

I put together the following video on how to use the template:

NAT Stack

It is important to design the VPC with security in mind. I recommend designing your security zones and network segregation – I have written a blog post on how to secure an Azure network, and the same approach can also be implemented in an AWS environment using the VPC, subnets and security groups. At the very minimum, we will segregate the private and public subnets in our VPC.

A NAT instance will be added to the public subnets of our initial VPC so that future private instances can use it for communication outside the initial VPC. We will use exactly the same method as we did for the bastion stack.

The diagram below illustrates the concept:

CloudFormation4

The components of the NAT stack:

  • An Elastic IP address (EIP) for the NAT instance
  • A Security Group for the NAT instance: allowing ingress TCP and UDP traffic on ports 0-65535 from the internal subnet; allowing egress TCP traffic on ports 22, 80, 443 and 9418 to anywhere, egress UDP traffic on port 123 to the Internet, and egress traffic on ports 0-65535 to the internal subnet
  • The NAT instance
  • A private route table
  • A private route using the NAT instance as the default route for all traffic

Following is the CloudFormation template to build the stack:

The steps to layer this stack on top of the initial stack are similar to those for the bastion stack.

Hopefully, after reading Part 1 and Part 2 of these blog posts, readers will have gained a basic understanding of how to automate AWS cloud operations using AWS CloudFormation.

Please contact Kloud Solutions if you need help with automating your AWS production environment.

http://www.wasita.net

Automate your Cloud Operations Part 1: AWS CloudFormation

Operations

What is Operations?

In the IT world, Operations refers to a team or department within IT which is responsible for the running of a business’ IT systems and infrastructure.

So what kind of activities does this team perform on a day-to-day basis?

Building, modifying, provisioning and updating systems, software and infrastructure to keep them available, performing and secure, which ensures that users can be as productive as possible.

When moving to public cloud platforms the areas of focus for Operations are:

  • Cost reduction: if we design it properly and apply good practices when managing it (scale down / switch off)
  • Smarter operations: use of automation and APIs
  • Agility: faster provisioning of infrastructure and environments by automating everything
  • Better uptime: plan for failover and design effective DR solutions more cost-effectively.

If Cloud is the new normal then Automation is the new normal.

For this blog post we will focus on automation using AWS CloudFormation. Note: the template I will use in this post is for educational purposes only and may not be suitable for production workloads :).

AWS CloudFormation

AWS CloudFormation gives developers and system administrators an easy way to create and manage a collection of related AWS resources, including provisioning and updating them in an orderly and predictable fashion. AWS provides various CloudFormation templates, snippets and reference implementations.

Let’s talk about versioning before diving deeper into CloudFormation. It is extremely important to version your AWS infrastructure in the same way as you version your software. Versioning will help you to track change within your infrastructure by identifying:

  • What changed?
  • Who changed it?
  • When was it changed?
  • Why was it changed?

You can tie this version to a service management or project delivery tool if you wish.

You should also put your templates into source control. Personally I am using GitHub to version my infrastructure code, but any system such as Team Foundation Server (TFS) will do.

AWS Infrastructure

The below diagram illustrates the basic AWS infrastructure we will build and automate for this blog post:

CloudFormation1

Initial Stack

Firstly we will create the initial stack. Below are the components for the initial stack:

  • A VPC with a CIDR block of 192.168.0.0/16 (65,536 IP addresses)
  • Three public subnets across three Availability Zones: 192.168.10.0/24, 192.168.11.0/24, 192.168.12.0/24
  • An Internet Gateway attached to the VPC to allow public Internet access. This is a routing construct for the VPC and not an EC2 instance
  • Routes and route tables for the three public subnets so EC2 instances in those subnets can communicate with the Internet
  • Default network ACLs to allow all communication inside the VPC.

Below is the CloudFormation template to build the initial stack.

The template can be downloaded here: https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template
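If you prefer the command line to the console, the same template can also be launched with the AWS Tools for PowerShell; this is just a sketch and the stack name is arbitrary:

# Sketch only: launch the initial stack from the downloadable template
New-CFNStack -StackName "initial-vpc-stack" `
    -TemplateURL "https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template"

# Watch the stack events until the stack reaches CREATE_COMPLETE
Get-CFNStackEvent -StackName "initial-vpc-stack"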

I put together the following video on how to use the template:

Understanding a CloudFormation template

AWS CloudFormation is pretty neat and FREE. You only need to pay for the AWS resources provisioned by the CloudFormation template.

The next bit is understanding the structure of the template. Typically, a CloudFormation template has five sections:

  • Headers
  • Parameters
  • Mappings
  • Resources
  • Outputs

Headers: the template format version and a description of what the template does.

Parameters: values that can be passed in at provision time, much like command-line options.

Mappings: conditional, case-statement-like lookup tables (for example, per-region AMI IDs).

Resources: all the resources to be provisioned.

Outputs: values that are returned once the stack has been built.

A minimal example covering all five sections is shown below.
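The individual examples aren’t reproduced here, but the minimal, hypothetical template below shows how the five sections fit together (the AMI ID is a placeholder):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example template",
  "Parameters": {
    "InstanceType": {
      "Type": "String",
      "Default": "t2.micro",
      "Description": "EC2 instance type"
    }
  },
  "Mappings": {
    "RegionMap": {
      "ap-southeast-2": { "AMI": "ami-12345678" }
    }
  },
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": { "Ref": "InstanceType" },
        "ImageId": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "AMI" ] }
      }
    }
  },
  "Outputs": {
    "InstanceId": {
      "Description": "The EC2 instance ID",
      "Value": { "Ref": "MyInstance" }
    }
  }
}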

Note: Not all AWS resources can be provisioned using AWS CloudFormation, and it has some limitations.

In Part 2 we will dive deeper into AWS CloudFormation and automate EC2, including the configuration of the NAT and bastion host instances.

http://www.wasita.net