When to use Azure Load Balancer or Application Gateway

siliconvalve

One thing Microsoft Azure is very good at is giving you choices – choices on how you host your workloads and how you let people connect to those workloads.

In this post I am going to take a quick look at two technologies that are related to high availability of solutions: Azure Load Balancer and Application Gateway.

Application Gateway is a bit of a dark horse for many people. Anyone coming to Azure has to get to grips with Azure Load Balancer (ALB) as a means to provide highly available solutions, but many don’t progress beyond this to look at Application Gateway (AG) – hopefully this post will change that!

The OSI model

A starting point for understanding one of the key differences between the ALB and AG offerings is the OSI model, which describes the logical layers required to implement computer networking. Wikipedia has a good introduction, which…
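
To make the distinction concrete before you head over there: Azure Load Balancer works at Layer 4 (transport), so its rules only speak protocol and port, while Application Gateway works at Layer 7 (HTTP), so its configuration speaks host names, listeners and cookies. A hedged AzureRM PowerShell sketch with hypothetical resource names, assuming the Load Balancer and Application Gateway already exist:

    # Hedged illustration only – hypothetical names, AzureRM-era cmdlets.
    # Azure Load Balancer: rules are pure Layer 4 – protocol and port, no knowledge of HTTP.
    $lb = Get-AzureRmLoadBalancer -Name "my-lb" -ResourceGroupName "my-rg"
    $lb = Add-AzureRmLoadBalancerRuleConfig -LoadBalancer $lb -Name "https-passthrough" `
            -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
            -BackendAddressPool $lb.BackendAddressPools[0] `
            -Protocol Tcp -FrontendPort 443 -BackendPort 443
    Set-AzureRmLoadBalancer -LoadBalancer $lb

    # Application Gateway: configuration is expressed in Layer 7 (HTTP) terms such as host names.
    $agw = Get-AzureRmApplicationGateway -Name "my-agw" -ResourceGroupName "my-rg"
    $agw = Add-AzureRmApplicationGatewayHttpListener -ApplicationGateway $agw -Name "contoso-listener" `
            -Protocol Http -FrontendIPConfiguration $agw.FrontendIPConfigurations[0] `
            -FrontendPort $agw.FrontendPorts[0] -HostName "www.contoso.com"
    Set-AzureRmApplicationGateway -ApplicationGateway $agw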

View original post 469 more words

Secure your VSTS Release Management Azure VM deployments with NSGs and PowerShell

siliconvalve

One of the neat features of VSTS’ Release Management capability is the ability to deploy to Virtual Machines hosted in Azure (amongst other environments), which I previously walked through setting up.

One thing that you need to configure when you use this deployment approach is an open TCP port to the Virtual Machines to allow remote access to PowerShell and WinRM on the target machines from VSTS.

In Azure this means we need to define a Network Security Group (NSG) inbound rule to allow the traffic (sample shown below). As we are unable to limit the source address (i.e. where VSTS Release Management will call from), we are stuck creating a rule with a Source of “Any”, which is less than ideal, even with the connection being TLS-secured. This would probably give security teams a few palpitations when they look at it too!

Network Security Group
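
For reference, here’s roughly what creating that rule looks like in AzureRM PowerShell – a hedged sketch with hypothetical resource names, assuming WinRM over HTTPS on its default port of 5986:

    # Hypothetical names; the wide-open SourceAddressPrefix of "*" is exactly the bit
    # that makes security teams nervous.
    $nsg = Get-AzureRmNetworkSecurityGroup -Name "vsts-deploy-nsg" -ResourceGroupName "my-deploy-rg"
    Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-WinRM-Any" `
        -Description "WinRM (HTTPS) inbound for VSTS Release Management" `
        -Direction Inbound -Access Allow -Protocol Tcp -Priority 200 `
        -SourceAddressPrefix "*" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 5986 | Out-Null
    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg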

We might be able to determine a…

View original post 637 more words

Inviting Microsoft Account users to your Azure AD-secured VSTS tenant

siliconvalve

I’ve done a lot of external invite management for VSTS over the last few years, and generally without fail we’ll have issues getting everyone on-boarded easily. This blog post is a reference for me (and I guess you too) to understand the invite process and document the experience the invited user has.

There are two sections to this blog post:

1. Admin instructions to invite users.

2. Invited user instructions.

Select whichever one applies to you.

The starting point for this post is that the external user hasn’t yet been invited to your Azure AD tenant. The user doing the inviting is not an Azure AD Global Admin, but does have rights in an Azure tenant.

The Invite to Azure AD

Log into an Azure subscription using your Azure AD account and select Subscriptions. Ideally this shouldn’t be a production tenant!

Select Subscription
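
As an aside, if you’d rather script the invitation than click through the portal, the AzureAD PowerShell module can send a B2B invite directly. A hedged sketch with hypothetical values – the rest of this post sticks with the portal experience:

    # Hypothetical tenant and invitee – requires the AzureAD module and suitable rights.
    Connect-AzureAD -TenantId "mytenant.onmicrosoft.com"
    New-AzureADMSInvitation -InvitedUserEmailAddress "someone@outlook.com" `
        -InvitedUserDisplayName "Some One" `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage $true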

I am going to start by…

View original post 721 more words

Azure Functions: Build an ecommerce processor using Braintree’s API

siliconvalve

In this blog I am continuing with my series covering useful scenarios for using Azure Functions – today I’m going to cover how you can process payments by using Functions with Braintree’s payment gateway services.

I’m not going to go into the details of setting up and configuring your Braintree account, but what I will say is that the model you should apply to make this scenario work is one where your Function will play the Server role as documented in Braintree’s configuration guide.

My sample below is pretty basic, and for the Function to be truly useful (and secure) you will need to consider a few things:

  1. Calculate the total monetary amount to charge elsewhere and pass it to the Function as an argument (my preference is via a message on a Service Bus Queue or Topic – see the sketch below). Don’t do the calculation here – make the Function do precisely…
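
To illustrate that first point, here’s a hedged sketch of dropping the pre-calculated amount onto a Service Bus Queue via the Service Bus REST API – hypothetical namespace, queue and SAS policy names throughout (you could equally use the .NET client):

    Add-Type -AssemblyName System.Web

    $namespace = "mypayments"       # hypothetical Service Bus namespace
    $queue     = "charges"          # hypothetical queue
    $keyName   = "SendOnly"         # hypothetical SAS policy with Send rights
    $key       = "<shared access key>"

    # Build a SAS token for the Service Bus REST API.
    $resourceUri  = "https://$namespace.servicebus.windows.net/$queue"
    $expiry       = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds() + 3600
    $stringToSign = [System.Web.HttpUtility]::UrlEncode($resourceUri) + "`n" + $expiry
    $hmac         = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key     = [Text.Encoding]::UTF8.GetBytes($key)
    $signature    = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))
    $sasToken     = "SharedAccessSignature sr=$([System.Web.HttpUtility]::UrlEncode($resourceUri))" +
                    "&sig=$([System.Web.HttpUtility]::UrlEncode($signature))&se=$expiry&skn=$keyName"

    # The message body carries the pre-calculated amount; the Function just charges it.
    $body = @{ orderId = "ORD-1234"; amount = 42.50; currency = "AUD" } | ConvertTo-Json
    Invoke-RestMethod -Uri "$resourceUri/messages" -Method Post `
        -Headers @{ Authorization = $sasToken } -ContentType "application/json" -Body $body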

View original post 398 more words

Azure Functions: Send email using SendGrid

siliconvalve

Prior to Azure Functions announcing their General Availability (GA) I had used SendGrid as an output binding in order to send email messages.

Since GA, however, the ability to use SendGrid remains undocumented (I assume to give the Functions team time to test and document the binding properly) and the old approach I was using no longer seems valid.

As I needed to use this feature I spent some time digging into getting this working with the GA release of Azure Functions (version ~1). Thankfully, as Functions is an abstraction over WebJobs, I had plenty of information on how to do it right now thanks to the WebJobs documentation and extensibility :).

Here’s how you can get this working too:

1. Register your SendGrid API key in Application Settings: you must utilise the documented approach of setting your API key in an App Setting called “AzureWebJobsSendGridApiKey” (a scripted way to do this is sketched below). Without this your…
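
A hedged sketch of setting that App Setting from AzureRM PowerShell (hypothetical resource group and Function App names – a Function App is a Web App under the hood, so the Web App cmdlets apply):

    $rg   = "my-functions-rg"      # hypothetical
    $app  = "my-functions-app"     # hypothetical
    $site = Get-AzureRmWebApp -ResourceGroupName $rg -Name $app
    # Copy the existing settings first so we don't wipe them, then add the SendGrid key.
    $settings = @{}
    foreach ($s in $site.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }
    $settings["AzureWebJobsSendGridApiKey"] = "<your SendGrid API key>"
    Set-AzureRmWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings | Out-Null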

View original post 151 more words

Azure Functions: Access KeyVault Secrets with a Cert-secured Service Principal

siliconvalve

Azure Functions is one of those services in Azure that is seeing a massive amount of uptake. People are using it for so many things, some of which require access to sensitive information at runtime.

At time of writing this post there is a pending Feature Request for Functions to support storing configuration items in Azure KeyVault. If you can’t wait for that Feature to drop, here’s how you can achieve this today.

Step 1: Create a KeyVault and Register Secrets

I’m not going to step through doing this in detail as the documentation for KeyVault is pretty good, especially for adding Secrets or Keys. For our purposes we are going to store a password in a Secret in KeyVault and have the most recent version of it be available from this URI:

https://mytestvault.vault.azure.net/secrets/remotepassword
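
If you do want a scripted version of Step 1, a hedged AzureRM PowerShell sketch looks something like this – hypothetical resource group, location and placeholder password, and remember the vault name must be globally unique:

    New-AzureRmResourceGroup -Name "my-keyvault-rg" -Location "Australia East"
    New-AzureRmKeyVault -VaultName "mytestvault" -ResourceGroupName "my-keyvault-rg" -Location "Australia East"
    # Store the password as the "remotepassword" Secret referenced by the URI above.
    $secretValue = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force
    Set-AzureKeyVaultSecret -VaultName "mytestvault" -Name "remotepassword" -SecretValue $secretValue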

Step 2: Setup a Cert-secured Service Principal in Azure AD

a. Generate a self-signed…
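
A hedged sketch of that sub-step, assuming a version of Windows whose New-SelfSignedCertificate supports the -Subject parameter (the subject name here is hypothetical) – the Base64-encoded public portion is what later gets attached to the Azure AD application:

    $cert = New-SelfSignedCertificate -Subject "CN=FunctionsKeyVaultSP" `
              -CertStoreLocation "Cert:\CurrentUser\My" `
              -KeySpec KeyExchange -NotAfter (Get-Date).AddYears(2)
    # Base64-encode the public portion for use as the Service Principal's credential.
    $keyValue = [Convert]::ToBase64String($cert.GetRawCertData())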

View original post 423 more words

Continuous Deployment of Windows Services using VSTS

siliconvalve

I have to admit writing this post feels a bit “old skool”. Before this past week I can’t remember the last time I had to break out a Windows Service to solve anything. Regardless, for one cloud-based IaaS project I’m working on I needed a simple worker-type solution that was private and could post data to a private REST API hosted at the other end of an Azure VNet Peer.

While I could have solved this problem any number of ways I plumped for a Windows Service, primarily because it will be familiar to developers and administrators at the organisation I’m working with, but I figured if I’m going to have to deploy onto VMs I’m sure not deploying in an old-fashioned way! Luckily we’re already running in Azure and hosting on VSTS so I have access to all the tools I need!
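
To give a flavour of where this ends up, the release ultimately runs a small PowerShell script on each target VM to (re)install and restart the service. A hedged sketch with hypothetical service and path names:

    $serviceName = "MyWorkerService"                 # hypothetical
    $installPath = "C:\Services\MyWorkerService"     # hypothetical
    $binaries    = "$PSScriptRoot\drop"              # artifact folder copied to the VM by the release

    # Stop the service if it's already installed so we can overwrite the binaries.
    if (Get-Service -Name $serviceName -ErrorAction SilentlyContinue) {
        Stop-Service -Name $serviceName -Force
    }
    New-Item -ItemType Directory -Path $installPath -Force | Out-Null
    Copy-Item -Path "$binaries\*" -Destination $installPath -Recurse -Force
    # Register the service on first deployment only.
    if (-not (Get-Service -Name $serviceName -ErrorAction SilentlyContinue)) {
        New-Service -Name $serviceName -BinaryPathName "$installPath\MyWorkerService.exe" `
                    -StartupType Automatic
    }
    Start-Service -Name $serviceName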

Getting Setup

The setup for this process…

View original post 426 more words

Deploying to Azure VMs using VSTS Release Management

siliconvalve

I am going to subtitle this post “the missing manual” because I spent quite a bit of time troubleshooting how this should all work.

Microsoft provides a bunch of useful information on how to deploy from Visual Studio Team Services (VSTS) to different targets, including Azure Virtual Machines.

In an ideal world I wouldn’t be using VMs at all, but for my current use case I have to use VMs, so the above (linked) approach is the one I followed.

The approach sounds good but I ran into a few sharp edges that I thought I would document here (and hopefully the source documentation will be updated to reflect this in due course).

Preparing deployment targets

Azure FQDNs

I thought I’d do the right thing by configuring the public IPs of my Azure hosts with full FQDNs rather than just IP addresses.
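
For context, this is the sort of configuration I mean – assigning a DNS label to the VM’s public IP so it gets an FQDN like myvm01.australiaeast.cloudapp.azure.com. A hedged sketch with hypothetical names:

    New-AzureRmPublicIpAddress -Name "myvm01-pip" -ResourceGroupName "my-vm-rg" `
        -Location "Australia East" -AllocationMethod Dynamic -DomainNameLabel "myvm01"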

As I found out this is not a good…

View original post 530 more words

Migrating resources from AWS to Microsoft Azure

Kloud receives a lot of communications in relation to the work we do and the content we publish on our blog. My colleague Hugh Badini recently published a blog about Azure deployment models, from which we received the following legitimate follow-up question…

So, Murali, thanks for letting us know you’d like to know more about this… consider this blog a starting point :).

Firstly though…

this topic (inter-cloud migrations), as you might guess, isn’t easily captured in a single blog post, nor, realistically, in a series, so what I’m going to do here is provide some basics to consider. I may not answer your specific scenario but hopefully I’ll provide some guidance on approach.

Every cloud has a silver lining

The good news is that if you’re already operating in a cloud environment then you have likely had to deal with many of the fundamental differences between traditional application hosting and architecture and those of cloud platforms.

You will have dealt with ensuring availability of your application(s) across outages; handling spikes in traffic via elastic compute resources; and will have come to recognise that in many ways, Infrastructure-as-a-Service (IaaS) in the cloud has many similarities to the way you’ve always done things on-prem (such as backups).

Clearly you have less of a challenge in approaching a move to another cloud provider.

Where to start

When we talk about moving from AWS to Azure we need to consider a range of things – let’s take a look at some key ones.

Understand what’s the same and what’s different

Both platforms have very similar offerings, and Microsoft provides many great resources to help those utilising AWS to build an understanding of which services in AWS map to which services in Azure. As you can see the majority of AWS’ services have an equivalent in Azure.

Microsoft’s Channel 9 is also a good place to learn about the similarities – there’s no better starting point than the Microsoft Azure for Amazon AWS Professional video series.

So, at a platform level, we are pretty well covered, but…

the one item to be wary of in planning any move of an existing application is how it has been developed. If we are moving components from, say, an EC2 VM environment to an Azure VM environment then we will probably have less work to do as we can build our Azure VM as we like (yes, as we know, even Linux!) and install whatever languages, frameworks or runtimes we need.

If, however, we are considering moving an application from a more Platform-as-a-Service capability such as AWS Lambda, we need to look at the programming model required by its equivalent in Azure – Azure Functions. While AWS Lambda and Azure Functions are functionally the same (no pun intended) we cannot simply take our Lambda code and drop it into an Azure Function and have it work. It may not even make sense to use Azure Functions depending on what you are shifting.

It’s also important to consider the differences in the availability models in use today in AWS and Azure. AWS uses Availability Zones to help you manage the uptime of your application and its components. In Azure we manage availability at two levels – locally via Availability Sets and then geographically through use of Regions. As these models differ it’s an important area to consider for any migration.
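
As a hedged illustration of the Azure side of that (hypothetical names, AzureRM-era cmdlets), local availability is handled by placing VMs into an Availability Set at creation time; geographic redundancy is then handled separately by deploying into additional Regions:

    $avSet = New-AzureRmAvailabilitySet -ResourceGroupName "my-app-rg" -Name "web-avset" `
                -Location "Australia East" -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5
    # Each VM configuration then references the Availability Set it belongs to.
    $vmConfig = New-AzureRmVMConfig -VMName "web01" -VMSize "Standard_DS1_v2" -AvailabilitySetId $avSet.Id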

Tools are good, but are no magic wand

Microsoft provides a way to migrate AWS EC2 instances to Azure using Azure Site Recovery (ASR) and while there are many tools for on-prem to cloud migrations and for multi-cloud management, they mostly steer away from actual migration between cloud providers.

Kloud specialises in assessing application readiness for cloud migrations (and then helping with the migration), and we’ve found inter-cloud migration is no different – understanding the integration points an application has and the SLAs it must meet is a big part of planning what your target cloud architecture will look like. Taking into consideration the underlying platform services in use is also key, as we can see from the previous section.

If you’re re-platforming an application you’ve built or maintain in-house, make sure to review your existing deployment processes to leverage features available to you for modern Continuous Deployment (CD) scenarios which are certainly a strength of Azure.

Data has a gravitational pull

The modern application world is entirely a data-driven one. One advantage of cloud platforms is the logically bottomless pit of storage you have at your disposal. This presents a challenge, though, when moving providers where you may have spent years building data stores containing terabytes or petabytes of data. How do you handle this when moving? There are a few strategies to consider:

  • Leave it where it is: you may decide that you don’t need all the data you have to be immediately available. Clearly this option requires you to continue to manage multiple clouds but may make economic sense.
  • Migrate via physical shipping: AWS provides Snowball as a way to extract data out of AWS without needing to pull it over a network connection. If your solution allows it you could ship your data out of AWS to a physical location, extract that data, and then prepare it for import into Azure, either over a network connection using ExpressRoute or through the Azure Import/Export service.
  • Migrate via logical transfer: you may have access to a service such as Equinix’s Cloud Exchange that allows you to provision inter-connects between cloud and other network providers. If so, you may consider using this as your migration enabler. Ensure you consider how much data you will transfer and what, if any, impact the data transfer might have on existing network services.

Beyond the above strategies for transferring data, you could also consider a staged migration where you only bring across chunks of data as required and potentially let older data expire over time. The type and use of your data obviously has an impact on which approach to take.
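
As a hedged sketch of what a staged, network-based chunk transfer might look like (hypothetical storage account, container and local export path), once a batch of data has been extracted from AWS you can push it into Azure Blob storage like this:

    $ctx = New-AzureStorageContext -StorageAccountName "migrationtarget" `
             -StorageAccountKey "<storage account key>"
    New-AzureStorageContainer -Name "migrated-data" -Context $ctx
    # Upload one extracted batch; folder structure is flattened here for simplicity.
    Get-ChildItem -Path "D:\export\batch-001" -File -Recurse | ForEach-Object {
        Set-AzureStorageBlobContent -File $_.FullName -Container "migrated-data" `
            -Blob $_.Name -Context $ctx
    }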

Clear as…

Hopefully this post has provided a bit more clarity around what you need to consider when migrating resources from AWS to Azure. What’s been your experience? Feel free to leave comments if you have feedback or recommendations based on the paths you’ve followed.

Happy dragon slaying!

Creating Azure AD B2C Service Principals with PowerShell

siliconvalve

I’ve been lucky enough over the last few months to be working on some cool consumer-facing solutions with one of my customers. A big part of the work we’ve been doing has been building Minimum Viable Product (MVP) solutions that allow us to quickly test concepts in-market using stable, production-ready technologies.

As these are consumer solutions, the Azure Active Directory (AAD) B2C service was an obvious choice for identity management, made even more so by AAD B2C’s ability to act as a source-of-truth for consumer identity and profile information across a portfolio of applications and services.

AAD B2C and Graph API

The AAD B2C schema is extensible, which allows you to add custom attributes to an identity. Some of these extension attributes you may wish the user to manage themselves (e.g. mobile phone number), and some may be system-managed or remotely-sourced values associated with the identity (e.g. Salesforce ContactID) that…
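
Whatever the attribute, the application making those Graph API calls needs its own Service Principal in the B2C tenant – which is where the PowerShell in this post’s title comes in. A hedged sketch using the AzureAD module with hypothetical names (you still need to grant the application the relevant Graph permissions afterwards):

    Connect-AzureAD -TenantId "myb2ctenant.onmicrosoft.com"   # hypothetical B2C tenant
    $app = New-AzureADApplication -DisplayName "B2C Graph Client" `
             -IdentifierUris "https://myb2ctenant.onmicrosoft.com/b2c-graph-client"
    # Create a client secret and the Service Principal the Graph client will authenticate as.
    $secret = New-AzureADApplicationPasswordCredential -ObjectId $app.ObjectId
    New-AzureADServicePrincipal -AppId $app.AppId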

View original post 480 more words