MIM configuration version control with Git

The first question usually asked when something goes wrong: What changed?

Some areas of FIM/MIM make it easy to answer that question, some more difficult. If the Reporting Services components haven’t been installed (pretty common), history within the Portal/Service is only retained for 30 days by default, and it contains all data changes, not just configuration changes. So, how do we track configuration change?

I was inspired by colleague Darren Robinson’s post “Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration“, but wanted more detail, automatic differences, and handy visualisation. This is my first rough version and hasn’t been deployed ‘in anger’ at a client, so I expect I haven’t found all the pros/cons as yet. It also doesn’t implement all the recommendations from Microsoft (Check FIM Service Backup and Restore and FIM 2010: Planning Disaster recovery for details).

Approach

Similar to Darren’s post, we’ll export various Sync and MIM Service config to text files, then use a local git repository (no, not GitHub) to store and track the differences.

Assumptions

The script is written with the assumption that you have an all-in-one MIM-in-a-box. I’ll probably extend it at some point to cater for expanded installations. I’m also assuming PowerShell 5 for easier module package management, but it is not a strict requirement.

Pre-requisites

You will need:

  • “Allow log on locally” (and ideally, “Allow log on through Remote Desktop Services”) rights on your FIM/MIM all-in-one server, with access to create directories and files under C:\MIMBackup (or a similar backup location)
    New-Item -ItemType Directory -Path C:\MIMBackup
  • Access to your FIM/MIM Synchronisation Service with MIM Sync Admin rights (can you open the Synchronisation Service Console?). Yes, Admin. I’d love to do this with minimum privileges, but it just doesn’t seem achievable with the permissions available
  • Access to your FIM/MIM Service with either membership of the Administrators set, or a custom set created with Read access to members of set “All Resources”
  • Portable Git for Windows (https://github.com/git-for-windows/git/releases/latest)
    The Portable version is great: it doesn’t require administrative access to install/use, doesn’t impact other installations of Git (if any), and is easy to update/maintain with no impact on any other software. Perfect for use in existing environments, and good for change control

    Unpack it into C:\MIMBackup\PortableGit
  • Lithnet FIM/MIM Service PowerShell Module (https://github.com/lithnet/resourcemanagement-powershell)
    The ‘missing cmdlets’ for FIM/MIM. Again, they don’t have to be installed with administrative access and can be copied to specific locations so that other installations/copies will not be affected by version differences/updates

    New-Item -ItemType Directory -Path C:\MIMBackup\Modules
    Save-Module -Name LithnetRMA -Path C:\MIMBackup\Modules
  • Lithnet PowerShell Module for FIM/MIM Synchronization Service (https://github.com/lithnet/miis-powershell)
    More excellent cmdlets for working with the Synchronisation service

    Save-Module -Name LithnetMIISAutomation -Path C:\MIMBackup\Modules
  • FIMAutomation Module (or PSSnapin)
    The ‘default’ PowerShell cmdlets for FIM/MIM. Not the fastest tools available, but they do make exporting the FIM/MIM Service configuration easy. If you create a module from the PSSnapin [Check my previous post], you don’t need any special tricks to install it

    Store the module in C:\MIMBackup\Modules\FIMAutomation
  • The Backup-MIMConfig.ps1 script
    C:\MIMBackup\PortableGit\cmd\git.exe clone https://gist.github.com/Froosh/bd17ff4675f945dc7dc3bbb6bbda036d C:\MIMBackup\Backup-MIMConfig

Prepare the Git repository

New-Alias -Name Git -Value C:\MIMBackup\PortableGit\cmd\git.exe
New-Item -ItemType Directory -Path C:\MIMBackup\MIMConfig -Force
Set-Location -Path C:\MIMBackup\MIMConfig
git init
git config --local user.name "MIM Config Backup"
git config --local user.email "MIMConfigBackup@$(hostname)"

Since the final script will likely run as a service account, I’m cheating a little and setting a repository-level default identity that will be used by anyone committing changes to the git repository. Alternatively, you can log in as the service account and set user.name and user.email in ‘normal’ git per-user mode.

git config user.name "Service Account"
git config user.email "ServiceAccount@$(hostname)"

Give it a whirl!

C:\MIMBackup\Backup-MIMConfig\Backup-MIMConfig.ps1

Now, make a change to your config, run the script again, and look at the changes in gitk (the Git history viewer).

Set-Location -Path C:\MIMBackup\MIMConfig
C:\MIMBackup\PortableGit\cmd\gitk.exe

As you can see here, I changed the portal timezone config:

[Screenshot: the time zone configuration change highlighted in gitk]

Finally, the whole backup script
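The full script is in the gist cloned above; as a rough sketch only (not the actual script, and the Sync Service export step will depend on which cmdlets or utilities you prefer), the core steps look something like this:

# Sketch of the core backup steps - the real logic lives in Backup-MIMConfig.ps1
$env:PSModulePath = "C:\MIMBackup\Modules;$env:PSModulePath"
Import-Module -Name FIMAutomation
Import-Module -Name LithnetRMA
Import-Module -Name LithnetMIISAutomation

# Export the MIM Service policy, schema and portal configuration to XML
$config = Export-FIMConfig -PolicyConfig -SchemaConfig -PortalConfig
$config | ConvertFrom-FIMResource -File C:\MIMBackup\MIMConfig\MIMServiceConfig.xml

# Export the Sync Service configuration here as well
# (for example with the Lithnet MIIS Automation cmdlets or the server/MA export utilities)

# Commit whatever changed since the last run
New-Alias -Name Git -Value C:\MIMBackup\PortableGit\cmd\git.exe
Set-Location -Path C:\MIMBackup\MIMConfig
Git add --all
Git commit --message "MIM configuration backup $(Get-Date -Format s)"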

Code Management in Serverless Computing – AWS Lambda and Azure Functions

In the Serverless world, we don’t need to set up servers. We just take care of code (written as functions). However, one of the major drawbacks of the current FaaS (Function as a Service) providers in the Serverless world is their lack of code management features. In this post, we’ll compare AWS Lambda (Lambda) and Azure Functions (Functions) with regard to source code management.

AWS Lambda

Lambda doesn’t natively support source code management. We can do version management by uploading a .zip file as an artifact.
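As a rough sketch of that zip-based workflow using the standard AWS CLI (the function name and file names here are placeholders):

# Package the code and push it as the function's new version (names are placeholders)
zip -r function.zip index.js lib/
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip --publish

# Or publish the currently uploaded code as an immutable numbered version
aws lambda publish-version --function-name my-function --description "2016-08 release"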

However, this is not an option from a DevOps perspective because we would have to upload the artifact manually, and the web editor doesn’t store any change history. So, what should we do for Lambda? Fortunately, there is a nice open-source tool called Apex. Apex claims that it helps us manage code for testing and deployment. Sounds awesome, doesn’t it? Let’s have a look.

AWS CLI

In order to use Apex, we should have the AWS CLI installed on our local machine (and build server, too). Choose an appropriate installation method for your OS (let’s say Windows in this context). Once the AWS CLI is installed, we can check that it’s working properly by typing a command like the one shown below.
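A quick sanity check; the exact version string printed (something along the lines of aws-cli/1.x.x Python/2.7.x Windows/... botocore/1.x.x) will differ by installation:

aws --version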

Now we’ve got the AWS CLI installed on our dev box. Let’s configure our AWS settings. Type aws configure and enter the AWS Access Key ID, AWS Secret Access Key, Default region name and Default output format, like so:
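The key values below are placeholders, and ap-southeast-2 (Sydney) is just an example region:

aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-southeast-2
Default output format [None]: json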

We’re ready for Apex. Let’s move on.

Apex

Apex is platform-agnostic. Its binary can be downloaded from its repository. Once you have downloaded the binary, rename it to apex for convenience (of course you can keep its original name, if you prefer). Then run the following command:
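The command itself is simply apex init, run from the project directory; it prompts for the project name and description mentioned below:

apex init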

After giving a project name and description, Apex is ready to deploy.

After running apex init, several files and folders are generated.
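Typically (the exact layout may vary between Apex versions) the generated structure looks something like this:

project.json          # project-level settings (name, description and function defaults)
functions/
  hello/
    index.js          # a sample Node.js function handler to start from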

Run the following command:
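The deploy command, plus an optional test invoke (hello here assumes the sample function generated by apex init):

apex deploy
apex invoke hello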

Now, we have a Lambda function deployed to AWS.

Note that functions deployed through Apex cannot be modified directly in the web editor, as the code is zipped and uploaded by Apex.

We have looked at how Apex sets up and deploys functions to Lambda. Now we simply run git init and push the project to our build server. From then on, we can manage our code history.
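For example (the remote URL is a placeholder for whatever your build server or git host exposes):

git init
git add .
git commit -m "Initial Apex project"
git remote add origin https://build-server.example.com/git/lambda-functions.git
git push -u origin master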

Azure Functions

Managing code in Functions is far easier than in Lambda, even with the help of Apex. Functions natively supports git, so we can simply take the repository URL from the Azure Portal and use it.

If we choose Local Git Repository, access details will be automatically displayed on the display panel.

With these git access details, we can integrate our build server to manage code change history.
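As a sketch (the URL below is a placeholder; use the Git clone URL shown in your Function App’s deployment settings):

git remote add azure https://<deployment-user>@<function-app-name>.scm.azurewebsites.net:443/<function-app-name>.git
git push azure master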

Note that functions deployed through git cannot be modified directly in the web editor either.

So far, we’ve briefly looked at both AWS Lambda and Azure Functions with regard to code change management. Serverless is surely an attractive and emerging technology, so it will be worth watching how it matures.

Why you should use Git over TFS

[Image: Git || TFS (Source: VisualStudio.com)]

I have been an advocate of git for a long time now and I might be a little biased, but take a moment to read this and judge for yourself whether git is the way to go or not.

If you are starting a new greenfield project, then you should consider putting your code in a git repository instead of TFS. There are many reasons why git is better suited, but the main ones from my perspective are:

Cross-Platform Support
Git tools are available for all platforms and there are many great (and FREE) GUI tools like GitExtensions or SourceTree. In today’s development world, there are guaranteed to be multiple sets of technologies, languages, frameworks, and platforms in the same solution/project. Think of using NodeJS with a Windows Phone app, or building apps for Windows, Android, and iOS. These are very common solutions today and developers should be given the freedom of choosing the development platform of their choice. Do not limit your developers to using Visual Studio on Windows. One might argue that TFS Everywhere (an add-on to Eclipse) is available for other platforms, but you should try it and see how buggy it is and how slow it is at finding pending changes. Having used TFS Everywhere, I would not recommend it to anyone, unless it is your last resort.

Developers should be able to do their work any time and anywhere

Work Offline
Developers should be able to do their work any time and anywhere. Having TFS rely on an internet connection to commit, shelve, or pull is just not good enough. I have even had many instances where TFS was having availability problems, which meant that I was not able to push/pull changes for an hour. This is not good. Git is great when it comes to working offline, thanks to the inherent advantage of being a fully distributed source control system. Git gives you the ability to:

  • Have the full history of the repo locally. This enables you to review historical changes, review commits, and merge with other branches, all locally.
  • Work and commit changes to your branch locally.
  • Stash changes locally.
  • Create local tags.
  • Change repo history locally before pushing.

And many other benefits come out of the box.
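All of the following, for instance, work with no network connection at all:

git log --oneline --graph     # browse the full history locally
git commit -am "WIP"          # commit to your local branch
git stash                     # stash changes locally
git tag v1.2-wip              # create a local tag
git rebase -i HEAD~3          # tidy up local history before pushing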

Having TFS rely on an internet connection to commit, shelve, or pull changes is just not good enough

Hosting Freedom
With TFS, you are pretty much stuck with Microsoft’s TFS offering. Git, however, is widely available and many providers offer free hosting for git, including Visual Studio Online, GitHub, and Bitbucket. With Visual Studio Online itself offering to host your code in git repositories, there is really no reason not to take full advantage of git.

With VSO itself offering to host your code in git repositories, there is really no reason not to take full advantage of git

I have introduced git to many development teams so far, and I must say that I did get some resentment and reluctance from some people at the start, but once people start using it and enjoying the benefits, everything settles into place.

Some developers are afraid of command-line tools or the complexity of push/pull/fetch and want to keep things simple with TFS, but this does not fit today’s development environment style. Last month, I was working at a client site where they were using Visual Studio to develop, debug, and deploy to iOS devices, and it was ridiculously slow. As a last resort, I opted to use Eclipse with the TFS Everywhere plugin alongside my Xamarin Studio, and it was a lot better. I still had to suffer the pain of Eclipse-TFS not seeing my changes every now and then, but compared to the time I was saving by choosing my own development IDE (Xamarin Studio), I was happy with that.

If you are starting a new project, or if you’re looking for a way to improve your team practices, then do yourself a favour and move to Git

So to summarise, if you are starting a new project, or if you are looking for a way to improve your team practices, then do yourself a favour and move to Git. You will be in good company, especially if you enjoy contributing to open source projects :). Teams that use VSTS as their ALM platform can still use VSTS, but host their code in git repositories on VSTS to take advantage of git and TFS together.

Finally, if you have any questions or thoughts, I would love to hear from you.

Automate your Cloud Operations Part 2: AWS CloudFormation

Stacking the AWS CloudFormation

Part 1 of the Automate your Cloud Operations series gave us a basic understanding of how to automate an AWS stack using CloudFormation.

This post will show the reader how to layer a stack on top of an existing AWS CloudFormation stack instead of modifying the base template. AWS resources can be added to the existing VPC using the outputs detailing the resources from the main VPC stack, rather than having to modify the main template.

This allows us to compartmentalise and separate out components of the AWS infrastructure, and to version the infrastructure code for each component independently.

Note: The template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

The diagram below illustrates the concept:

[Diagram: the bastion stack layered on top of the initial VPC stack]

Bastion Stack

Previously (Part 1), we created the initial stack, which provides the base VPC. Next, we will provision the bastion stack, which creates a bastion host on top of our base VPC. Below are the components of the bastion stack:

  • Create an IAM user that can find out information about the stack and has permissions to create KeyPairs and perform related actions.
  • Create the bastion host instance with an AWS Security Group enabling SSH access via port 22
  • Use CloudFormation Init (cfn-init) to install packages, create files and run commands on the bastion host instance, and also take the credentials created for the IAM user and set them up for use by the scripts
  • Use the EC2 UserData to run the cfn-init command that actually does the above via a bash script (a generic sketch of this pattern follows below)
  • The condition handle: the completion of the instance depends on the scripts running properly; if the scripts fail, the CloudFormation stack will error out and fail
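Before the full template, here is a generic sketch of the UserData / cfn-init / wait-condition pattern only (resource names, AMI ID and instance type are placeholders, not those used in the actual template):

"BastionHost" : {
  "Type" : "AWS::EC2::Instance",
  "Metadata" : {
    "AWS::CloudFormation::Init" : {
      "config" : {
        "packages" : { "yum" : { "git" : [] } }
      }
    }
  },
  "Properties" : {
    "ImageId" : "ami-xxxxxxxx",
    "InstanceType" : "t2.micro",
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "/opt/aws/bin/cfn-init -s ", { "Ref" : "AWS::StackName" },
      " -r BastionHost --region ", { "Ref" : "AWS::Region" }, "\n",
      "/opt/aws/bin/cfn-signal -e $? '", { "Ref" : "BastionWaitHandle" }, "'\n"
    ] ] } }
  }
},
"BastionWaitHandle" : { "Type" : "AWS::CloudFormation::WaitConditionHandle" },
"BastionWaitCondition" : {
  "Type" : "AWS::CloudFormation::WaitCondition",
  "DependsOn" : "BastionHost",
  "Properties" : { "Handle" : { "Ref" : "BastionWaitHandle" }, "Timeout" : "600" }
}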

Below is the CloudFormation template to build the bastion stack:

Following are the high level steps to layer the bastion stack on top of the initial stack:

I put together the following video on how to use the template:

NAT Stack

It is important to design the VPC with security in mind. I recommend designing your security zones and network segregation up front; I have written a blog post on how to secure an Azure network, and the same approach can also be implemented in an AWS environment using VPCs, subnets and security groups. At the very minimum we will segregate the private and public subnets in our VPC.

A NAT instance will be added to the “public” subnets of our initial VPC so that future private instances can use the NAT instance for communication outside the initial VPC. We will use exactly the same method as we did for the bastion stack.

The diagram below illustrates the concept:

[Diagram: the NAT stack layered on top of the initial VPC stack]

The components of the NAT stack:

  • An Elastic IP address (EIP) for the NAT instance
  • A Security Group for the NAT instance: allowing ingress TCP and UDP traffic on ports 0-65535 from the internal subnet; allowing egress TCP traffic on ports 22, 80, 443 and 9418 to anywhere, egress UDP traffic on port 123 to the Internet, and egress traffic on ports 0-65535 to the internal subnet
  • The NAT instance
  • A private route table
  • A private route using the NAT instance as the default route for all traffic (see the sketch after this list)
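A sketch of the private route table and default route (resource and parameter names are placeholders; the real template references the NAT instance and the VPC ID passed in from the initial stack’s outputs):

"PrivateRouteTable" : {
  "Type" : "AWS::EC2::RouteTable",
  "Properties" : { "VpcId" : { "Ref" : "VpcIdParameter" } }
},
"PrivateDefaultRoute" : {
  "Type" : "AWS::EC2::Route",
  "Properties" : {
    "RouteTableId" : { "Ref" : "PrivateRouteTable" },
    "DestinationCidrBlock" : "0.0.0.0/0",
    "InstanceId" : { "Ref" : "NatInstance" }
  }
}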

Following is the CloudFormation template to build the stack:

The steps to layer this stack on top of the initial stack are similar to the previous ones:

Hopefully, after reading Part 1 and Part 2 of this series, readers will have gained a basic understanding of how to automate AWS cloud operations using AWS CloudFormation.

Please contact Kloud Solutions if you need help with automating your AWS production environment.

http://www.wasita.net

Automate your Cloud Operations Part 1: AWS CloudFormation

Operations

What is Operations?

In the IT world, Operations refers to a team or department within IT which is responsible for the running of a business’ IT systems and infrastructure.

So what kind of activities does this team perform on a day-to-day basis?

Building, modifying, provisioning and updating systems, software and infrastructure to keep them available, performing and secure, which ensures that users can be as productive as possible.

When moving to public cloud platforms the areas of focus for Operations are:

  • Cost reduction: if we design it properly and apply good practices when managing it (scale down / switch off)
  • Smarter operation: use of automation and APIs
  • Agility: faster provisioning of infrastructure and environments by automating everything
  • Better uptime: plan for failover, and design effective DR solutions more cost effectively.

If Cloud is the new normal then Automation is the new normal.

For this blog post we will focus on automation using AWS CloudFormation. The template I will use for this post is for educational purposes only and may not be suitable for production workloads :).

AWS CloudFormation

AWS CloudFormation provides developers and system administrators with an easy way to create and manage a collection of related AWS resources, including provisioning and updating them in an orderly and predictable fashion. AWS provides various CloudFormation templates, snippets and reference implementations.

Let’s talk about versioning before diving deeper into CloudFormation. It is extremely important to version your AWS infrastructure in the same way as you version your software. Versioning will help you track changes within your infrastructure by identifying:

  • What changed?
  • Who changed it?
  • When was it changed?
  • Why was it changed?

You can tie this versioning to your service management or project delivery tools if you wish.

You should also put your templates into source control. Personally I am using GitHub to version my infrastructure code, but any system such as Team Foundation Server (TFS) will do.

AWS Infrastructure

The below diagram illustrates the basic AWS infrastructure we will build and automate for this blog post:

[Diagram: the basic AWS infrastructure built and automated in this post]

Initial Stack

Firstly we will create the initial stack. Below are the components for the initial stack:

  • A VPC with a CIDR block of 192.168.0.0/16 : 65,536 IPs
  • Three public subnets across 3 Availability Zones : 192.168.10.0/24, 192.168.11.0/24, 192.168.12.0/24
  • An Internet Gateway attached to the VPC to allow public Internet access. This is a routing construct for the VPC and not an EC2 instance
  • Routes and route tables for the three public subnets so EC2 instances in those public subnets can communicate with the Internet
  • Default Network ACLs to allow all communication inside the VPC.

Below is the CloudFormation template to build the initial stack.

The template can be downloaded here: https://s3-ap-southeast-2.amazonaws.com/andreaswasita/cloudformation_template/demo/lab1-vpc_ELB_combined.template

I put together the following video on how to use the template:

Understanding a CloudFormation template

AWS CloudFormation is pretty neat and FREE. You only need to pay for the AWS resources provisioned by the CloudFormation template.

The next bit is understanding the structure of the template. Typically a CloudFormation template has five sections:

  • Headers
  • Parameters
  • Mappings
  • Resources
  • Outputs

Headers: the template format version and a description of the template.

Parameters: values supplied at provision time, for example as command-line options or console inputs.

Mappings: conditional, case-statement-like lookup tables.

Resources: all the resources to be provisioned.

Outputs: values returned by the stack once it has been created.
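As an illustrative sketch only (the AMI ID is a placeholder), a minimal template touching all five sections might look like this:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal example showing the five template sections",

  "Parameters" : {
    "InstanceType" : { "Type" : "String", "Default" : "t2.micro" }
  },

  "Mappings" : {
    "RegionToAmi" : {
      "ap-southeast-2" : { "AMI" : "ami-xxxxxxxx" }
    }
  },

  "Resources" : {
    "DemoInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : { "Ref" : "InstanceType" },
        "ImageId" : { "Fn::FindInMap" : [ "RegionToAmi", { "Ref" : "AWS::Region" }, "AMI" ] }
      }
    }
  },

  "Outputs" : {
    "DemoInstanceId" : { "Value" : { "Ref" : "DemoInstance" } }
  }
}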

Note: Not all AWS resources can be provisioned using AWS CloudFormation, and it has a number of limitations.

In Part 2 we will dive deeper into AWS CloudFormation and automate EC2, including the configuration of the NAT and bastion host instances.

http://www.wasita.net

Microsoft Azure Cross Platform Command Line Step by Step

Microsoft Azure is not just about Windows; Microsoft Azure also supports Linux workloads. Spinning up Linux VMs in Microsoft’s fabric offers alternative options for running open-source technologies alongside Microsoft Azure services.

Microsoft also provides the Azure Cross-Platform Command-Line Interface (X-Plat CLI), a set of open-source, cross-platform commands for managing the Microsoft Azure platform. The X-Plat CLI has a few top-level commands which correspond to different sets of Microsoft Azure features. Typing “azure” will list each of the sub-commands.

X-Plat CLI command line tool is implemented in JavaScript (powered by Node.js).

Deploying Linux VM from Microsoft Azure Management Portal
This blog provides step-by-step instructions for deploying a Linux VM. It is quite straightforward to deploy a secure Linux VM by providing a PEM certificate associated with a private key. With this certificate it is possible to create a “password-less” Linux VM when using the Azure Management Portal “From Gallery” functionality or the command-line tool.

The instructions below provide a step-by-step guide on how to generate a key pair with Git; the OpenSSL utility bundled with it is used on this occasion.

  • Install and launch Git Bash on a Windows computer, or launch Terminal on OS X
  • Use the command shown in the example after this list to generate a key pair:

  • Fill in some information for the key pair

[Screenshot: the certificate information prompts]
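For reference, the key-pair generation command typically used with the OpenSSL bundled in Git Bash looks like the following (the file names are just examples); the prompts that follow ask for the certificate information mentioned above:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem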

Once it is done, save the .pem file – this is the public key to be “attached” to the new Azure VM. Store the “.key” file safely. On a Linux system or Mac OS X, the command chmod 0600 *.key can be used to secure it from unwanted access.

Create a new Azure Linux VM by clicking the New button at the bottom left of the Azure Management Portal and selecting Compute > Virtual Machine > From Gallery > pick the Linux OS (it is recommended to create the Affinity Group, Storage Account and Virtual Network) > ensure the “Provide a Password” checkbox is not checked and instead upload the Certificate. This is the .pem certificate above. Simply follow the rest of the deployment wizard to complete the deployment.

[Screenshot: uploading the .pem certificate in the VM creation wizard]

Connecting to Linux VM
An SSH client is needed to connect to the Linux VM:

  • Linux: Use SSH
  • Mac OSX: Use SSH built into the Terminal app
  • Windows: Use SSH built into the Git Bash shell, or download and use PuTTY

Below is a sample SSH command, simply passing various parameters:

$ ssh -i ./kloudadmin.key kloudadmin@kloudlinux1.cloudapp.net -p 22

  • ssh = the command we use to connect to the Linux VM
  • -i ./kloudadmin.key = points to the private .key file associated with the .pem used for the Linux VM
  • kloudadmin@kloudlinux1.cloudapp.net = Linux VM user name @ VM DNS name
  • -p 22 = the port to connect to; 22 is the default endpoint (a different endpoint can be specified)

[Screenshot: SSH session connected to the Linux VM]

Installing Cross Platform Command Line (X-Plat CLI)
There are a few ways to install the X-Plat CLI: using installer packages for Windows and OS X, or a combination of Node.js and npm for Linux.

Node.js and npm via nave
Nave is a tool for handling node.js installations. Nave is to node.js what RVM is to Ruby. It pulls directly from nodejs.org.

Follow below instructions:

Note: # = explanation; $ command = execute on Linux VM

$ sudo su -
#install node.js through nave
$ wget https://raw.github.com/isaacs/nave/master/nave.sh
$ chmod +x nave.sh
$ ./nave.sh install 0.10.15
$ ./nave.sh use 0.10.15
$ node -v

[Screenshot: node -v output]

#install npm

$ curl -s https://npmjs.org/install.sh > npm-install-$$.sh
$ sh npm-install-*.sh

[Screenshot: npm installation output]

Microsoft Azure X-Plat CLI
Use the npm command to install the Azure X-Plat CLI:

#install X-Plat CLI
$ npm install azure-cli -g

Using Microsoft Azure X-Plat CLI

Type $azure to test and show sub-commands

[Screenshot: azure command listing the sub-commands]

Microsoft Azure Publish Settings File
The Microsoft Azure publish settings file needs to be downloaded and imported in order to create resources in the related subscription.

$azure account download
$azure account import “path to the publishing file”

Note: You need a browser to download the publish settings file, or you can download the file on a local machine and upload it to the Azure Linux VM.
Example Azure Account Import: $ azure account import My.publishsettings

[Screenshot: azure account import output]

Click here for more detailed info on how to use the Microsoft Azure X-Plat CLI.