FIM 2010 R2 and the Missing Log File

Anyone who has had anything to do with FIM will probably have experienced moments where they question what is taking place and wonder whether they really understand what FIM is doing at a specific point in time. This is partly due to FIM’s extraordinarily unpredictable error handling and logging.

While working on a long-running FIM 2010 R2 project, we chose to make heavy use of PowerShell within action and authorisation workflows, building on the PowerShell extensions for FIM 2010 R2 published on CodePlex. In particular we made use of the FIM Extensions PowerShell Activity.

Enabling FIM to execute PowerShell let us get FIM to do all kinds of things it otherwise had no out-of-the-box capability for. It also made FIM’s interactions with other systems what I like to call “System Administrator friendly”: most system administrators these days are pretty comfortable with PowerShell and can at least follow the logic inside a PowerShell ps1 script.

So this worked well for us: we could pick the “FIM Extensions PowerShell Activity” from inside a workflow and execute our very own PowerShell scripts as part of action or authorisation workflows.


This is awesome until something unexpected happens inside your scripts. In a complex environment where many PowerShell scripts run in close proximity to one another, troubleshooting can be a less than pleasant experience.

While the concept of logging is nothing new, we experimented with a few methods before arriving at one that made working with the FIM PowerShell extensions more friendly and predictable, requiring standard analytical and system administration skills rather than a specialty in clairvoyance. Originally we logged to a custom application log inside the Windows event log. This, however, was slow and cumbersome to view, particularly during the development and testing stages of our project. In the end we found it more helpful to have a single text log file that captured the output of all our scripts’ activities as they executed within FIM workflows, giving an easier to read view of what has taken place and, importantly, in what order.

So here are my learnings from the field:

Start by creating a function library. Here you can stash common functions and reduce repetitive code within your PowerShell scripts. We called our library “fimlib.ps1”.

Inside fimlib.ps1 we wrote a function for logging to a text file. The function lets us define a severity ($severity), an event code ($code) and a message ($message):

# Daily log file and the severities we want written to it
$logfile = "c:\EPASS\Logs\PowerShell" + (get-date -f yyyy-MM-dd) + ".log"
$loginclude = @("DEBUG","INFO","WARNING","ERROR","JOBINFO","JOBDEBUG")

# Handle identifying this PowerShell instance: current PID plus a random suffix
$handle = $pid.ToString() + "." + (get-random -Minimum 1000 -Maximum 2000).ToString()

function fimlog($severity, $code, $message) {
    if ($loginclude -contains $severity) {
        $date = get-date -format u
        $user = whoami
        # {handle} timestamp [calling script / event code / severity as user] – message
        $msg = "{" + $handle + "} " + $date + " [" + (get-pscallstack)[1].Command + "/" + $code + "/" + $severity + " as " + $user + "] – " + $message
        add-content $logfile $msg
    }
}

Take note of the use of the “$handle” variable: here we are creating a code to identify individual threads, based on the current PID and a random-ish number.

The end result is a text based log file with events like the following:

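A representative entry would look something like this (the handle, timestamp, script name and user shown here are illustrative; the layout follows the $msg string built by the fimlog function):

{4812.1643} 2014-06-04 10:21:33Z [Invoke-Provisioning.ps1/100/JOBINFO as DOMAIN\svc_fim] – This is something I’d like to log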

I like to use Notepad++ to view the log as it has some nice features, like automatic notification when there are new entries in the log. The handle I mentioned earlier makes it easy to sort the log and isolate individual activities by finding all occurrences of a handle. Typically each PowerShell script executed as a result of a workflow will have a unique handle.

So how do you make this work?

Firstly you’ll need to reference your library at the start of each of your PowerShell scripts. Add something like this to the start of your script:

# Load FIMLib.ps1
. C:\Lib\FIMLib.ps1

Whenever you need to log something you can simply call your “fimlog” function as follows:

fimlog "JOBINFO" 100 "This is something I'd like to log"

While this is nothing revolutionary, it helped me understand what was actually taking place. Hopefully it can help you too.

Start-up like a pro or fast track cloud in your enterprise. . .

As part of my job I regularly interact with IT and business leaders from companies, across a diverse range of industries. A similarity I see across most businesses is that they contain a bunch of knowledge workers that all need to interact both internally and externally with common parties (internal departments / branches, customers, suppliers, vendors and government / regulatory bodies).

So how do knowledge workers in today’s highly connected world collaborate and communicate? Aside from telephone and face-to-face communication, email is still the primary tool. Why? Because it’s universally accepted and everyone in business has it. . . Is this good or bad? It certainly comes at a cost: the productivity of your knowledge workers. . .

McKinsey & Company state that ‘the average knowledge worker spends 28% of their day in their email client managing email and searching for information’ (McKinsey).

Now 28% is a significant proportion of one’s day; clawing back some of this time to focus on your core business is what competitive advantage is made of.

Interestingly, from my observations many established companies are slow to embrace technologies beyond email that could help free up this time. If this sounds like your business, surely improving productivity and increasing the focus on whatever it is your business specialises in should be a significant priority?

Technology is an important part of everyone’s business and unfortunately it’s often viewed as a cost centre, rather than being viewed as a tool for competitive advantage.

If provisioning and managing IT services isn’t your core business, it will naturally be a deterring factor when considering the assimilation of a new technology within your business processes. This is where cloud technologies provide real agility. There is tangible value to be leveraged, particularly from higher level cloud services. Software as a Service (SaaS) and Platform as a Service (PaaS) offerings allow you to think differently and treat IT more like a utility (electricity or water): consume it, ready-made and working, as a service. Why not let someone else that specialises in technology run it for you? Best of all, SaaS services are super quick to provision (often minutes) and can easily be trialled with low levels of risk or expense.

Competitive advantage is gained by making staff more focused and productive. Strive for solutions that provide your organisation with:

  • Knowledge retention
    – make it easy to seek answers to questions and store your Intellectual property
  • Effortless collaboration
    – make it easy for your staff to collaborate and communicate with everyone they need to, be it inside or outside of your corporate / geographical boundary
  • Faster access to information
    – don’t make the mistake of making it a 15 step process to access corporate information or documents
  • Security and Governance
    – choose solutions that have built-in security and governance

So here are my tips for developing agility. . .

If you’re a Start-up, be born in the cloud. . . If you’re a well-established corporation or enterprise, perhaps it’s time to think more like a start-up. Be agile, try new things and stay hungry for improvement. And, to a degree, keep a simple mindset to make things easier.

  • Don’t get stuck in the past
    – doing things the same way as you did five or ten years ago is a recipe for commercial irrelevance
  • Don’t collect Servers
    – if you’re not in the business of IT infrastructure, don’t go on a mission to amass a collection of servers on-premises.
  • Establish a cloud presence
    – establish a presence with more than one vendor. Why use only one? Pick the best bits from multiple vendors
  • Think services not servers
    – strive for the selection of SaaS and PaaS over IaaS wherever possible.
  • Have a strategy for Identity Management
    – avoid identity overload and retain centralised control of identity and access
  • Be ready to switch
    – be open to new solutions and service offerings and agile enough to switch
  • Review your existing Infrastructure landscape
    – identify candidates for transition to a cloud service, preferably SaaS / PaaS
  • Pilot and review some new technologies
    – identify processes ripe for disruption, try them and seek feedback from your staff
  • Keep the governance
    – just because it’s in the cloud doesn’t mean you need to abandon your security and governance principles

By dedicating some time and resources, you can build a platform that facilitates quick trials of new services. Adopt a hungry mindset and explore cost savings and opportunities to improve productivity.

In Conclusion. . .

Wikipedia define competitive advantage as ‘occurring when an organisation acquires or develops an attribute or combination of attributes that allows it to outperform its competitors’ (Wikipedia). McKinsey & Company’s example of average knowledge worker time spent inside corporate email illustrates the opportunity that exists for improvement on a single front. Many more like this may exist within your business.

Transitioning to anything new can seem daunting. Start by creating a road map for the implementation and adoption of new technology within your business. Discuss, explore and seek answers to questions and concerns you have around cloud services. Adopt a platform that ensures you correctly select, implement and leverage your investment and yield competitive advantage.

If you’re not sure where to start, leverage the skills of others who have been through it many times before. Consider engaging a knowledgeable Kloud consultant to help your organisation with the formulation of a tailored cloud strategy.

IPv6 – Are we there yet??

The topic of IPv6 seems to come up every couple of years. The first time I recall there being a lot of hype about IPv6 was way back in the early 2000s; ever since then the topic gets attention every once in a while and then disappears into insignificance alongside more exciting IT news.

The problem with IPv4 is that there are only about 3.7 billion public IPv4 addresses. Whilst this may initially sound like a lot, take a moment to think about how many devices you currently have that connect to the Internet. Globally we have already experienced a rapid uptake of Internet connected smart-phones and the recent hype surrounding the Internet of Things (IoT) promises to connect an even larger array of devices to the Internet. With a global population of approx. 7 billion people we just don’t have enough to go around.

Back in the early 2000s there was limited hardware and software support for IPv6. So now that we have widespread hardware and software IPv6 support, why is it that we haven’t all switched?

Like most things in the world, it’s often determined by the capacity to monetise an event. Surprisingly, not all carriers and ISPs are on board, and some are reluctant to spend money to drive the switch. Network Address Translation (NAT) and Classless Inter-Domain Routing (CIDR) have also made it much easier to live with IPv4. NAT, used on firewalls and routers, lets many nodes in a network sit behind a single public IP address. CIDR, sometimes referred to as supernetting, is a way to allocate and specify the Internet addresses used in inter-domain routing in a much more flexible manner than the original system of Internet Protocol (IP) address classes. As a result, the number of usable Internet addresses has been greatly increased, and service providers can conserve addresses by divvying up pieces of a full range of IP addresses to multiple customers.
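To put CIDR’s flexibility in perspective, the number of addresses in a block is simply 2 to the power of (32 minus the prefix length). A quick illustration in PowerShell (the function name is mine, purely for illustration):

# Addresses available in an IPv4 CIDR block of a given prefix length
function Get-CidrBlockSize([int]$PrefixLength) {
    [math]::Pow(2, 32 - $PrefixLength)
}

Get-CidrBlockSize 24   # 256 addresses (the old Class C)
Get-CidrBlockSize 28   # 16 addresses, a small slice a provider can hand to a single customer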

Perceived risk by consumers also comes into play. It is plausible that many companies view the introduction of IPv6 as somewhat unnecessary and potentially risky, in terms of the effort required to implement it and the loss of productivity during implementation. Most corporations are simply not feeling any pain with IPv4, so it’s not on their short-term radar as being of any criticality to their business. When considering IPv6 implementation from a business perspective, the successful adoption of a new technology is typically accompanied by some form of reward or competitive advantage for early adopters. The potential for financial reward is often what drives significant change.

To IPv6’s detriment, from the layperson’s perspective it has little to distinguish itself from IPv4 in terms of services and service costs, and many of IPv4’s shortcomings have been addressed. Financial incentives to commence widespread deployment just don’t exist.

We have all heard the doom and gloom stories associated with the impending end of IPv4. Surely this should be reason enough for accelerated implementation of IPv6? Why isn’t everyone rushing to implement IPv6 and mitigate future risk? The situation where exhaustion of IPv4 addresses would cause rapid escalation in costs to consumers hasn’t really happened yet and has failed to be a significant factor to encourage further deployment of IPv6 in the Internet.

Another factor to consider is backward compatibility. IPv4 hosts are unable to address IP packets directly to an IPv6 host and vice-versa.

So this means that it is not realistic to just switch over a network from IPv4 to IPv6. When implementing IPv6 a significant period of dual stack IPv4 and IPv6 coexistence needs to take place. This is where IPv6 is turned on and run in parallel with the existing IPv4 network. This just sounds like two networks instead of one and double administrative overhead for most IT decision makers.

Networks need to provide continued support for IPv4 for as long as there are significant levels of IPv4 only networks and services still deployed. Many IT decision makers would rather spend their budget elsewhere and ignore the issue for another year.

Only once the majority of the Internet supports a dual stack environment can networks start to turn off their continued support for IPv4. Therefore, while there is no particular competitive advantage to be gained by early adoption of IPv6, the collective internet wide decommissioning of IPv4 is likely to be determined by the late adopters.

So what should I do?

It’s important to understand where you are now and arm yourself with enough information to plan accordingly. The checklist below is a good start, and a quick PowerShell check is sketched after it.

  • Check if your ISP is currently supporting IPv6 by visiting a website like http://testmyipv6.com/. There is a dual stack test which will let you know if you are using IPv4 alongside IPv6.
  • Understand if the networking equipment you have in place supports IPv6.
  • Understand if all your existing networked devices (everything that consumes an IP address) supports IPv6.
  • Ensure that all new device acquisitions fully support IPv6.
  • Understand if the services you consume support IPv6. (If you are making use of public cloud providers, understand if the services you consume support IPv6 or have a road map to IPv6.)
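On a Windows machine, something like the following gives a rough first look at your own IPv6 readiness. It is only a sketch: the networking cmdlets assume Windows 8 / Server 2012 or later, and the host names are just well-known examples.

# Is the IPv6 stack bound to each network adapter?
Get-NetAdapterBinding -ComponentID ms_tcpip6 | Select-Object Name, Enabled

# Does DNS return AAAA (IPv6) records for a well-known dual-stack host?
Resolve-DnsName www.google.com -Type AAAA

# Can we actually reach an IPv6-only host?
Test-NetConnection ipv6.google.com -Port 80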

The reality is that IPv6 isn’t going away and as IT decision makers we can’t postpone planning for its implementation indefinitely. Take the time now to understand where your organisation is at. Make your transition to IPv6 a success story!!

Azure Mobile Services and the Internet of Things

The IT industry is full of buzzwords and “The Internet of Things” (IoT) is one that’s getting thrown around a lot lately. The IoT promises to connect billions of devices and sensors to the internet. How this data is stored, sorted, analysed and surfaced will determine how much value it delivers to your business. With this in mind I thought it was time to start playing around with some bits and pieces to see if I could create my very own IoT-connected array of sensors.

To get started I’ll need a micro-controller that I can attach some sensors to, and some kind of web service and storage to accept and store my raw sensor data. I’m not a developer, so I’ve decided to keep things simple to start with. My design goal, however, is to make use of cloud services to accept and store my raw data, and Azure Mobile Services seems like a good place to start.

I’ve chosen the following components for my IoT Project

  1. Arduino Yun – the Micro-controller board
  2. Temperature Sensor – to detect ambient temperature
  3. Light Sensor – to detect light levels
  4. Pressure Sensor – to detect barometric pressure
  5. Azure Mobile Services – to connect the Arduino Yun to the cloud
  6. NoSQL database – to store my raw data

Arduino Yun

From the Arduino website, the board’s capabilities are as follows: “The Arduino Yún is a microcontroller board based on the ATmega32u4 and the Atheros AR9331. The Atheros processor supports a Linux distribution based on OpenWrt named OpenWrt-Yun. The board has built-in Ethernet and WiFi support, a USB-A port, micro-SD card slot, 20 digital input/output pins (of which 7 can be used as PWM outputs and 12 as analog inputs), a 16 MHz crystal oscillator, a micro USB connection, an ICSP header, and 3 reset buttons.”

Further Details on the Arduino Yun can be found at: http://arduino.cc/en/Main/ArduinoBoardYun?from=Products.ArduinoYUN

Schematics

The following schematic diagram illustrates the wiring arrangement between the Arduino Yun and the sensors. In this blog I’m not going to provide any specific detail in this area; instead we are going to focus on how the Arduino Yun can be programmed to send its sensor data to a database in Microsoft’s Azure cloud. (There are loads of other blogs that focus specifically on connecting sensors to Arduino boards; check out http://www.arduino.cc/)

[Schematic: Arduino Yun to sensor wiring diagram]

Azure Mobile Services

To make use of Azure Mobile Services you will need an Azure subscription. Microsoft offer a free one-month trial with $210 credit to spend on all Azure services, so what are you waiting for?
http://azure.microsoft.com/en-us/pricing/free-trial/

OK, back to Azure Mobile Services. Microsoft define Azure Mobile Services as “a scalable and secure backend that can be used to power apps on any platform – iOS, Android, Windows or Mac. With Mobile Services, it’s easy to store app data in the cloud or on-premises, authenticate users, and send push notifications, as well as add your custom backend logic in C# or Node.js.” For my IoT project I’m just going to use Azure Mobile Services as a place to accept connections from the Arduino Yun and store the raw sensor data.

Create a Mobile Service

Creating the mobile service is pretty straightforward. Within the web management portal select New, Compute, Mobile Service, then Create.

 

Azure Mobile Services will prompt you for:

  • A URL – This is the end point address the Arduino Yun will use to connect to Azure Mobile Services
  • Database – A NoSQL database to store our raw sensor data
  • Region – which geographic region will host the mobile service and database
  • Backend – the language used for the backend logic. I’m using JavaScript

Next you’ll be asked to specify some database settings including server name. You can either choose an existing server (if you have one) or alternatively create a brand new one on the fly. Azure Mobile Services will prompt you for:

  • Name – That’s the name of your database
  • Server – I don’t have an existing one so I’m selecting “New SQL database Server”
  • Server Login Name – A login name for your new db
  • Server Login Password – The password

Now that we have a database, it’s time to create a table to store our raw data. The Azure Management Portal provides an easy to use UI to create a new table. Go to the Service Management page, select the Data tab, and click the “+” sign at the bottom of the page. As we are creating a NoSQL table there is no need to specify a schema; simply provide a table name and configure the permissions for the insert, update, delete and read operations.

Retrieval of the Application Key

Now it’s time to retrieve the Application Key. This will be used to authenticate the REST API calls when we post data to the table. To retrieve the application key, go to the Dashboard page and select the “manage keys” button at the bottom of the page. Two keys will be displayed; copy the “application” one.


Create the Table

Within the Mobile Service Management page, select Data

Click Create

 

Once the table has been created it’s ready to accept values. The Arduino Yun can be programmed to send its sensor values to our new database via the Azure Mobile Services REST API. http://msdn.microsoft.com/en-us/library/jj710108.aspx

The Application key retrieved earlier will be used to authenticate the API calls.
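Before wiring anything up, it’s worth sanity-checking the endpoint and key from a PC. A minimal sketch in PowerShell, using the same placeholder service URL, table name and application key that appear in the Arduino sketch later in this post (the LightLevel value is just an example):

# Hypothetical smoke test of the Azure Mobile Services table endpoint
$uri     = "https://iotarduino.azure-mobile.net/tables/iotarduino_data"
$headers = @{ "X-ZUMO-APPLICATION" = "HJRxXXXXXXXXXXXXXXXmuNWAfxXXX" }
$body    = '{"LightLevel": 512}'

Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -ContentType "application/json" -Body $body

If the table and key are correct, the call returns the inserted record, including the id generated by Mobile Services.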

 

The Arduino Yun Sketch

Here is the basic code inside the Arduino Yun sketch. The code has some core functions, as follows:

setup()

Every Arduino Yun sketch contains a setup function; this is where things like the serial port and the Bridge are initialized.

loop()

Every Arduino Yun sketch contains a loop function; this is where the sensor values are read and the other functions are called.

send_request()

The send_request function is used to establish a connection with the Azure Mobile Services endpoint. A HTTP POST is formed, authentication takes place and a JSON object is generated with our sensor values and placed in the body. Currently the sample code below sends a single sensor value (lightLevel) to Azure Mobile Services. This could easily be expanded to include all sensor values from the array of sensors connected to the Arduino Yun.

wait_response()

This function waits until response bytes are available on the connection, returning early if the connection has been closed.

read_response()

This function reads the response bytes from Azure Mobile Services and outputs the HTTP response code to the serial console for debugging / troubleshooting purposes.

 

/* Arduino Yun sketch writes sensor data to Azure Mobile Services. */

// Include Arduino Yun libraries
#include <Bridge.h>
#include <YunClient.h>
#include <SPI.h>

// Azure Mobile Service address
const char *server = "iotarduino.azure-mobile.net";

// Azure Mobile Service table name
const char *table_name = "iotarduino_data";

// Azure Mobile Service Application Key
const char *ams_key = "HJRxXXXXXXXXXXXXXXXmuNWAfxXXX";

YunClient client;
char buffer[64];

/* Send HTTP POST request to the Azure Mobile Service data API */
void send_request(int lightLevel)
{
  Serial.println("\nconnecting...");
  if (client.connect(server, 80)) {
    Serial.print("sending ");
    Serial.println(lightLevel);

    // POST URI
    sprintf(buffer, "POST /tables/%s HTTP/1.1", table_name);
    client.println(buffer);

    // Host header
    sprintf(buffer, "Host: %s", server);
    client.println(buffer);

    // Azure Mobile Services application key
    sprintf(buffer, "X-ZUMO-APPLICATION: %s", ams_key);
    client.println(buffer);

    // JSON content type
    client.println("Content-Type: application/json");

    // POST body
    sprintf(buffer, "{\"LightLevel\": %d}", lightLevel);

    // Content length
    client.print("Content-Length: ");
    client.println(strlen(buffer));

    // End of headers
    client.println();

    // Request body
    client.println(buffer);

  } else {
    Serial.println("connection failed");
  }
}

/* Wait for a response */
void wait_response()
{
  while (!client.available()) {
    if (!client.connected()) {
      return;
    }
  }
}

/* Read the response and output to the serial monitor */
void read_response()
{
  bool print = true;

  while (client.available()) {
    char c = client.read();
    // Print only until the end of the HTTP status line
    if (c == '\n')
      print = false;
    if (print)
      Serial.print(c);
  }
}

/* Terminate the connection */
void end_request()
{
  client.stop();
}

/* Arduino Yun setup */
void setup()
{
  Serial.begin(9600);
  Serial.println("Starting Bridge");
  Bridge.begin();
}

/* Arduino Yun loop */
void loop()
{
  int val = analogRead(A0);
  send_request(val);
  wait_response();
  read_response();
  end_request();   // close the connection before the next reading
  delay(1000);
}

 

The Sensor Data in Azure

Once the sketch is uploaded to the Arduino Yun and executed, the sensor data can be viewed within the Azure Mobile Services dashboard.

 

The Arduino IDE’s serial monitor displays the debugging / troubleshooting output as the sketch executes.

 

Conclusion

This was just a bit of fun and obviously not an enterprise-grade solution; however, I hope it goes some way towards illustrating the possibilities that are readily available to all of us. Things can be done today in far fewer steps, and access to powerful compute and storage is easier than ever.

The Arduino Yun is an open source electronic prototyping board that allows people like me, without any real developer skills, to mess around with electronics, explore ideas and interact with the outside world. There are loads of interesting Arduino code samples and project ideas with real world use cases.

My goal here was to illustrate how easy it is to obtain raw sensor data from the outside world and store it in a place where I have loads of options for data processing. By placing my data in Azure I have the power and resources of the Microsoft Azure cloud platform literally at my fingertips. . . Enjoy the possibilities!


Microsoft Azure Multi-Site VPN

Recently I had the opportunity to assist an organisation with physical offices in Adelaide, Melbourne, Brisbane and Sydney in replacing their expensive MPLS network with a multi-site VPN to Azure.

This worked well for the customer as they no longer have any server infrastructure on premises. Each branch office requires access to the virtual infrastructure hosted within their Azure VNET.

The solution provides each office with connectivity to the VMs and other services hosted within Azure, as well as a means of inter-site connectivity to PCs and other services located within the branch offices on the rare occasions where this may still be required.

Step 1. – Document Internal Network

In preparation for implementing the Multi-site VPN, review and document the current IP address ranges in-use and Internet gateway addresses at each branch site.

Internal Network Ranges and Gateway Addresses

Site         Internal Range     Gateway Address
Adelaide     172.16.11.0/24     External IP x.x.x.x
Sydney       172.16.12.0/24     External IP x.x.x.x
Melbourne    172.16.13.0/24     External IP x.x.x.x
Brisbane     172.16.14.0/24     External IP x.x.x.x

 

Step 2. – Define the Internal Network Ranges within Azure

Within Azure Networks, select Local Networks

Next you need to define the internal network ranges and gateway addresses as obtained in step 1.


Step 3. – Define the Azure Gateway Subnet

The Multi-site VPN requires a “Gateway Subnet”.

Select Networks, VNET, Configure and click “add subnet” to add your gateway subnet to the VNET configuration.

A whole subnet is required; in this case I’ve chosen 172.17.10.0/24 and have included it within my VNET as follows:


Step 4. – Create a Dynamic Routing Gateway

A “Dynamic Routing” gateway is required for a multi-site VPN. If you already have a point-to-site VPN configured to a single site and are using a “Static Routing” gateway, you will need to delete the gateway and start afresh. If you need to delete your old static gateway, Microsoft have instructions at http://msdn.microsoft.com/en-us/library/azure/dn221918.aspx

Select New, Dynamic Gateway

The gateway will start building; this will take approximately 15 minutes to complete.

Once completed the dashboard will display the gateway between Azure and the primary site (as listed in local networks)

Step 5. – Export Network Configuration and Populate for Multi-Site Gateway

In order to configure VPN tunnels to multiple on-premise gateways, we need to export the NetworkConfig.xml and define the <ConnectionsToLocalNetwork>.

Open the exported NetworkConfig.xml, locate the <Gateway> element and add a <LocalNetworkSiteRef> entry for each site, as illustrated below.

<Gateway>
  <ConnectionsToLocalNetwork>
    <LocalNetworkSiteRef name="Adelaide"><Connection type="IPsec" /></LocalNetworkSiteRef>
    <LocalNetworkSiteRef name="Sydney"><Connection type="IPsec" /></LocalNetworkSiteRef>
    <LocalNetworkSiteRef name="Brisbane"><Connection type="IPsec" /></LocalNetworkSiteRef>
    <LocalNetworkSiteRef name="Melbourne"><Connection type="IPsec" /></LocalNetworkSiteRef>
  </ConnectionsToLocalNetwork>
</Gateway>

Step 6. – Import the updated Network Configuration

When you import the NetworkConfig.xml file with the changes, the new tunnels will be added using the dynamic gateway that you created earlier.
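If you prefer to stay in PowerShell, the export from Step 5 and this import can both be driven from the classic Azure Service Management module rather than the portal. A minimal sketch (the file path is arbitrary):

# Export the current network configuration, edit it, then import it back
Get-AzureVNetConfig -ExportToFile "C:\Temp\NetworkConfig.xml"

# ... add the <LocalNetworkSiteRef> entries shown above, then:
Set-AzureVNetConfig -ConfigurationPath "C:\Temp\NetworkConfig.xml"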

The dashboard should now display the status of each VPN tunnel

Step 7. – Obtain the IPsec/IKE pre-shared Key for each tunnel

Now we are ready to download the IPsec/IKE pre-shared keys for the VPN tunnels. Once your new tunnels have been added, use the PowerShell cmdlet “Get-AzureVNetGatewayKey” to get the IPsec/IKE pre-shared keys for each tunnel.

Using Azure PowerShell, execute the following cmdlet for each LocalNetworkSiteName (as defined in step 2):

Get-AzureVNetGatewayKey -VNetName "VNET" -LocalNetworkSiteName "Adelaide" | fl

The output should look something like this:

Value : P1nwHJfbAL4ZvWWySewt9yDcwf0gAD76
OperationDescription : Get-AzureVNetGatewayKey
OperationId : 58732f31-170f-ae52-ba31-bc5f173c1972
OperationStatus : Succeeded

The Value field in the output is the IPsec/IKE pre-shared key. This will be required when configuring the VPN tunnel on the router (in this case for the Adelaide site; remember to repeat for each LocalNetworkSiteName).
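Rather than running the cmdlet four times by hand, a small loop over the site names defined in Step 2 can collect all of the keys at once (a sketch; the VNet name matches the example above):

# Pull the pre-shared key for every local network site in one go
$sites = @("Adelaide", "Sydney", "Melbourne", "Brisbane")
foreach ($site in $sites) {
    $key = (Get-AzureVNetGatewayKey -VNetName "VNET" -LocalNetworkSiteName $site).Value
    Write-Output ("{0} : {1}" -f $site, $key)
}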

Step 8. – Configure Routers for IPSec Tunnel

Create VPN Device Scripts for each local network site and populate with respective IPsec/IKE pre-shared key. In my case each site had a Cisco 880 series ISR router. Microsoft have templates for a variety of manufacturers which can be downloaded from http://msdn.microsoft.com/en-us/library/azure/dn133800.aspx#BKMK_ISRDynamic

Open the template using Notepad. Search and replace all <text> strings with the values that pertain to your environment. Be sure to include < and >. When a name is specified, also remember that the name you select should be unique.

A table containing the template text fields with an explanation of the parameter values can be found at http://msdn.microsoft.com/en-us/library/azure/30e508e4-8b3f-4b09-ba0e-0ab251fc3d5c#BKMK_AboutConfigurationTemplates

Once a template has been created for each of your sites, pass the scripts onto your network administrator, in the case of the Cisco 880 series routers, the relevant parts of the script were integrated into the Cisco IOS running configuration by our ISP.

Step 9. – Check the multi-site tunnel status for connectivity

Once the router configurations have been updated, the ConnectivityState should change from “Initializing” to “Connected”.

To verify the connectivity state from the Azure PowerShell execute:

 Get-AzureVnetConnection -VNetName VNET

 Each site will be listed. Look for the ConnectivityState, LastEventMessage and LocalNetworkSiteName

ConnectivityState : Connected
EgressBytesTransferred : 861264503
IngressBytesTransferred : 1054775403
LastConnectionEstablished : 4/06/2014 12:04:27 AM
LastEventID : 23401
LastEventMessage : The connectivity state for the local network site ‘Adelaide’ changed from Not Connected to Connected.
LastEventTimeStamp : 4/06/2014 12:04:27 AM
LocalNetworkSiteName : Adelaide
OperationDescription : Get-AzureVNetConnection
OperationStatus : Succeeded

ConnectivityState : Connected
EgressBytesTransferred : 118232342
IngressBytesTransferred : 52362858
LastConnectionEstablished : 5/06/2014 2:49:32 PM
LastEventID : 23401
LastEventMessage : The connectivity state for the local network site ‘Brisbane’ changed from Not Connected to Connected.
LastEventTimeStamp : 5/06/2014 2:49:32 PM
LocalNetworkSiteName : Brisbane
OperationDescription : Get-AzureVNetConnection
OperationStatus : Succeeded

ConnectivityState : Connected
EgressBytesTransferred : 138401240
IngressBytesTransferred : 122273966
LastConnectionEstablished : 6/06/2014 10:54:34 AM
LastEventID : 23401
LastEventMessage : The connectivity state for the local network site ‘Melbourne’ changed from Not Connected to Connected.
LastEventTimeStamp : 6/06/2014 10:54:34 AM
LocalNetworkSiteName : Melbourne
OperationDescription : Get-AzureVNetConnection
OperationStatus : Succeeded


ConnectivityState : Connected
EgressBytesTransferred : 225672988
IngressBytesTransferred : 65533667
LastConnectionEstablished : 5/06/2014 3:19:32 PM
LastEventID : 23401
LastEventMessage : The connectivity state for the local network site ‘Sydney’ changed from Not Connected to Connected.
LastEventTimeStamp : 5/06/2014 3:19:32 PM
LocalNetworkSiteName : Sydney
OperationDescription : Get-AzureVNetConnection
OperationStatus : Succeeded

The connectivity status can also be viewed from within the Azure virtual network dashboard.

Conclusion

If you are looking for an alternative to an MPLS network and require access to your Azure VNET from all your branch offices, the Azure multi-site VPN may be a good fit for your organisation. In the aforementioned case, the performance of connectivity to the Azure VNET is prioritised over the performance of connectivity from one branch office to another (e.g. connectivity between Adelaide and Brisbane). With the Azure VNET at the centre, this fulfils the requirements of the organisation at a substantially lower cost than their legacy MPLS network.

Cloud Storage AWS and Azure


Working with new technologies, rapid rates of change and the excitement of the unknown are good reasons to work in IT. The constant change keeps things fresh and interesting.

The thing that gets me excited though, is the business innovation that occurs with the application of the right mix of technology to otherwise everyday business problems. At Kloud we get to do this all the time!!

I’d like to share with you some simple use cases for cloud services that we have recently been a part of, ones that make IT managers and systems administrators look like stars.

When talking to IT Managers and Systems Administrators, I often hear the following issues:

  • Storage – We just don’t have enough and it’s expensive to expand on our existing SAN investment
  • Backup and Restore – We do it, but it’s slow and cumbersome and we are not sure we do it well
  • Disaster Recovery – We have a plan, but it’s not agile and we could do it better

So how can the cloud help in this regard? At Kloud Solutions we are platform agnostic; we can help architect solutions that make use of your favourite cloud platform, be it Amazon Web Services (AWS), Windows Azure, perhaps both, or maybe something else. The key is to start small with something simple. I like to choose a single problem to solve, then expand on it to add more functionality and solve another related issue. With pay-for-what-you-use pricing models and no lock-in contracts, it’s easy to get started and low risk. So let’s take a look at the storage use case in more detail.

Storage

If you provision a LUN and give users write access, they will use it; in fact they will probably create multiple copies of the same stuff. Furthermore, they are likely to tell you that it’s all really, really important and you can’t delete or archive it. I guess if I were an end user I’d probably do the same thing. As an end user of a system I have simple requirements of storage: I want to store a file and know that a week, a month or a year down the track when I want it, it will be there and I won’t need to log an incident with the service desk to get it back. Simple really. In reality we know that to store that file, at a minimum we need to keep it on a disk array for redundancy, and we also need to back it up in case of corruption, deletion or a wider spread disaster.

So how can the cloud help me here? Both AWS and Azure have methods for expanding an organisation’s storage into the cloud, with some great features too. Best of all, it’s relatively quick to implement.

Ok I like AWS, how can it help?

AWS have a product called “Storage Gateway” with “Gateway-Cached Volumes”. This is essentially a virtual machine (VM) that you download from AWS and run on your VMware-based hypervisor on premises. The Storage Gateway needs some locally attached storage assigned to it for caching purposes. Gateway-Cached Volumes let you provision LUNs like a regular SAN: the Storage Gateway stores your primary data in Amazon S3 storage buckets while retaining your frequently accessed data on the locally attached storage. The brilliance of this is that it is largely transparent to your end users. Files in regular use are already cached on the Storage Gateway; older files that are not frequently used are a little slower to download, but are then added to the Storage Gateway’s cache as frequently accessed files. This gives you virtually limitless storage capability at a ridiculously low cost to provision.

How do I back it up?

Once your data is in the AWS S3 storage buckets you can take advantage of some really cool AWS features that can replace traditional tape backup systems.

Point-in-time snapshots of your storage volumes can be created on an ad-hoc basis or on a schedule. Best of all, snapshots are incremental backups that reduce storage charges: when a new snapshot is made, only data that has changed since your last snapshot is stored, and compression is used to further reduce your storage charges.
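As a rough illustration, an ad-hoc snapshot of a Gateway-Cached Volume can be triggered from the AWS Tools for PowerShell. Treat this as a sketch under assumptions: the cmdlet name follows the standard mapping to the CreateSnapshot API, and the volume ARN shown is a placeholder for one of your own volumes.

# Take an ad-hoc snapshot of a Storage Gateway volume (placeholder ARN)
$volumeArn = "arn:aws:storagegateway:ap-southeast-2:123456789012:gateway/sgw-12345678/volume/vol-12345678"
New-SGSnapshot -VolumeARN $volumeArn -SnapshotDescription ("Ad-hoc snapshot " + (Get-Date -Format u))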

If a restore of your data is required it can be performed in minutes as opposed to days, and cloud-based snapshots also give you offsite data protection.

Ok what about Disaster Recovery?

OK, so now you have been busy provisioning LUNs all over the place and your users love the new storage, but you can’t sleep because you are worried about disaster recovery. What happens if there is an extended outage of the VM hypervisor infrastructure that hosts the Storage Gateway VM? Fear not, AWS Storage Gateway for EC2 to the rescue! Storage Gateway for EC2 is a cloud-hosted version of the Storage Gateway that can mirror your production environment if the on-premises infrastructure goes down. The EC2 Storage Gateway gives you access to all the data in S3.

This of course can be expanded out to a full blown DR scenario. By using this in conjunction with Amazon EC2 you can create VMs of your critical application servers. In the event of a DR situation you can launch your application EC2 VM instances and access their storage via your AWS Storage Gateway in EC2.

So what if I want to use Azure?

Microsoft recently acquired StorSimple and now offer the StorSimple appliance for Windows Azure. StorSimple is a rack-mountable, cloud-integrated storage appliance that works in a similar fashion to the aforementioned AWS Storage Gateway. In this case, however, it’s a physical device with a mix of high-performance solid state disks and cheaper regular hard disks. Like the AWS Storage Gateway, it lets you provision LUNs, with recently used files cached on the appliance’s internal storage and the data stored in Azure blob storage.

How do I back it up?

The StorSimple appliance has a whole bunch of smarts built into it. Snapshots of your data can be scheduled or taken on an ad-hoc basis, and snapshots are incremental, so when a new snapshot is taken only the data that has changed since your last snapshot is stored. De-duplication is also performed, meaning duplicated data is only stored once, further reducing your storage requirements.

Restores of your data can be performed in minutes as opposed to days, and by storing data in Azure you are also providing offsite data protection. You can finally retire that cumbersome tape storage system. Yay!!

Ok what about Disaster Recovery?

OK, so what happens if our primary site with the StorSimple appliance goes down? Backups made with the StorSimple appliance live in the cloud and can be recovered to a different location, so it’s just a case of provisioning another StorSimple appliance, connecting it to Azure and restoring the data. Microsoft are also working on a virtual machine version of StorSimple that could be run in the Azure cloud.

So there we have it: a couple of practical, scalable solutions to common storage problems that can be implemented quickly and adapted to the requirements of an enterprise.