Demystifying Managed Service Identities on Azure

Managed service identities (MSIs) are a great feature of Azure that is being gradually enabled on a number of different resource types. But when I’m talking to developers, operations engineers, and other Azure customers, I often find that there is some confusion and uncertainty about what they do. In this post I will explain what MSIs are and are not, where it makes sense to use them, and give some general advice on how to work with them.

What Do Managed Service Identities Do?

A managed service identity allows an Azure resource to identify itself to Azure Active Directory without needing to present any explicit credentials. Let’s explain that a little more.

In many situations, you may have Azure resources that need to securely communicate with other resources. For example, you may have an application running on Azure App Service that needs to retrieve some secrets from a Key Vault. Before MSIs existed, you would need to create an identity for the application in Azure AD, set up credentials for that application (also known as creating a service principal), configure the application to know these credentials, and then communicate with Azure AD to exchange the credentials for a short-lived token that Key Vault will accept. This requires quite a lot of upfront setup, and can be difficult to achieve within a fully automated deployment pipeline. Additionally, to maintain a high level of security, the credentials should be changed (rotated) regularly, and this requires even more manual effort.
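
For context, the pre-MSI flow looks roughly like this. This is a minimal sketch only – the tenant ID, client ID, and secret are placeholder values for the credentials you would have had to create, store, and rotate yourself:

# Hypothetical credentials that you would have had to provision and rotate manually.
$tenantId     = "00000000-0000-0000-0000-000000000000"
$clientId     = "11111111-1111-1111-1111-111111111111"
$clientSecret = "<secret stored in app settings or config>"

# Exchange the credentials for a short-lived token that Key Vault will accept.
$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    resource      = "https://vault.azure.net"
}
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" -Body $body
$accessToken = $response.access_token   # attached as a Bearer token to Key Vault requests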

With an MSI, in contrast, the App Service automatically gets its own identity in Azure AD, and there is a built-in way that the app can use its identity to retrieve a token. We don’t need to maintain any AD applications, create any credentials, or handle the rotation of these credentials ourselves. Azure takes care of it for us.

It can do this because Azure can identify the resource – it already knows where a given App Service or virtual machine ‘lives’ inside the Azure environment, so it can use this information to allow the application to identify itself to Azure AD without the need for exchanging credentials.

What Do Managed Service Identities Not Do?

Inbound requests: One of the biggest points of confusion about MSIs is whether they are used for inbound requests to the resource or for outbound requests from the resource. MSIs are for the latter – when a resource needs to make an outbound request, it can identify itself with an MSI and pass its identity along to the resource it’s requesting access to.

MSIs pair nicely with other features of Azure resources that allow for Azure AD tokens to be used for their own inbound requests. For example, Azure Key Vault accepts requests with an Azure AD token attached, and it evaluates which parts of Key Vault can be accessed based on the identity of the caller. An MSI can be used in conjunction with this feature to allow an Azure resource to directly access a Key Vault-managed secret.

Authorization: Another important point is that MSIs are only directly involved in authentication, not authorization. In other words, an MSI allows Azure AD to determine what the resource or application is, but that by itself says nothing about what the resource can do. Authorisation is handled separately: for many Azure resources this means Azure’s own Identity and Access Management (IAM) system. Key Vault is one exception – it maintains its own access control system, which is managed outside of Azure’s IAM. For non-Azure resources, we could communicate with any authorisation system that understands Azure AD tokens; an MSI is then just another way of getting a valid token that an authorisation system can accept.

Another important point to be aware of is that the target resource doesn’t need to run within the same Azure subscription, or even within Azure at all. Any service that understands Azure Active Directory tokens should work with tokens for MSIs.

How to Use MSIs

Now that we know what MSIs can do, let’s have a look at how to use them. Generally there will be three main parts to working with an MSI: enabling the MSI; granting it rights to a target resource; and using it.

  1. Enabling an MSI on a resource. Before a resource can identify itself to Azure AD, it needs to be configured to expose an MSI. The way that you do this will depend on the specific resource type you’re enabling the MSI on. In App Services, an MSI can be enabled through the Azure Portal, through an ARM template, or through the Azure CLI, as documented here. For virtual machines, an MSI can be enabled through the Azure Portal or through an ARM template. Other MSI-enabled services have their own ways of doing this.

  2. Granting rights to the target resource. Once the resource has an MSI enabled, we can grant it rights to do something. The way that we do this is different depending on the type of target resource. For example, Key Vault requires that you configure its Access Policies, while to use the Event Hubs or the Azure Resource Manager APIs you need to use Azure’s IAM system. Other target resource types will have their own way of handling access control.

  3. Using the MSI to issue tokens. Finally, now that the resource’s MSI is enabled and has been granted rights to a target resource, it can be used to obtain tokens for requests to that target resource. Once again, the approach differs by resource type. For App Services, there is an HTTP endpoint within the App Service’s private environment that can be used to get a token, and there is also a .NET library that will handle the API calls if you’re using a supported platform. For virtual machines, there is also an HTTP endpoint that can similarly be used to obtain a token, as sketched below. Of course, you don’t need to specify any credentials when you call these endpoints – they’re only available within that App Service or virtual machine, and Azure handles all of the credentials for you.
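
As a rough sketch of what step 3 looks like in practice – these are the endpoints documented for App Service and virtual machines at the time of writing, and the resource URI is simply whichever target service you want a token for:

# App Service / Functions: the platform injects MSI_ENDPOINT and MSI_SECRET into the environment.
$resource = "https://vault.azure.net"    # example target - any AAD-protected resource URI
$appServiceToken = Invoke-RestMethod `
    -Uri "$($env:MSI_ENDPOINT)?resource=$resource&api-version=2017-09-01" `
    -Headers @{ Secret = $env:MSI_SECRET }

# Virtual machines: the token comes from the local instance metadata endpoint instead.
$vmToken = Invoke-RestMethod `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=$resource" `
    -Headers @{ Metadata = "true" }

# Both calls return a JSON payload containing an access_token property.
$vmToken.access_token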

Finding an MSI’s Details and Listing MSIs

There may be situations where we need to find our MSI’s details, such as the principal ID used to represent the application in Azure AD. For example, we may need to manually configure an external service to authorise our application to access it. As of April 2018, the Azure Portal shows MSIs when adding role assignments, but the Azure AD blade doesn’t seem to provide any way to view a list of MSIs. They are effectively hidden from the list of Azure AD applications. However, there are a couple of other ways we can find an MSI.

If we want to find a specific resource’s MSI details then we can go to the Azure Resource Explorer and find our resource. The JSON details for the resource will generally include an identity property, which in turn includes a principalId:

Screenshot 1

That principalId is the object ID of the service principal in Azure AD, and it’s the value to use when creating role assignments.

Another way to find and list MSIs is to use the Azure AD PowerShell cmdlets. The Get-AzureRmADServicePrincipal cmdlet will return a complete list of service principals in your Azure AD directory, including any MSIs. MSIs have service principal names starting with https://identity.azure.net, and the ApplicationId is the client ID of the service principal:

Screenshot 2
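
If you prefer to filter that list down programmatically rather than scanning it by eye, something like the following should work (a sketch against the AzureRM module of the time):

# List only the service principals that represent managed service identities.
Get-AzureRmADServicePrincipal |
    Where-Object { $_.ServicePrincipalNames -match "^https://identity\.azure\.net" } |
    Select-Object DisplayName, ApplicationId, Id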

Now that we’ve seen how to work with an MSI, let’s look at which Azure resources actually support creating and using them.

Resource Types with MSI and AAD Support

As of April 2018, there are only a small number of Azure services with support for creating MSIs, and all of them are currently in preview. Additionally, while it’s not yet listed on that page, Azure API Management also supports MSIs – primarily for handling Key Vault integration for SSL certificates.

One important note is that for App Services, MSIs are currently incompatible with deployment slots – only the production slot gets assigned an MSI. Hopefully this will be resolved before MSIs become fully available and supported.

As I mentioned above, MSIs are really just a feature that allows a resource to assume an identity that Azure AD will accept. However, in order to actually use MSIs within Azure, it’s also helpful to look at which resource types support receiving requests with Azure AD authentication, and therefore support receiving MSIs on incoming requests. Microsoft maintain a list of these resource types here.

Example Scenarios

Now that we understand what MSIs are and how they can be used with AAD-enabled services, let’s look at a few example real-world scenarios where they can be used.

Virtual Machines and Key Vault

Azure Key Vault is a secure data store for secrets, keys, and certificates. Key Vault requires that every request is authenticated with Azure AD. As an example of how this might be used with an MSI, imagine we have an application running on a virtual machine that needs to retrieve a database connection string from Key Vault. Once the VM is configured with an MSI and the MSI is granted Key Vault access rights, the application can request a token and can then get the connection string without needing to maintain any credentials to access Key Vault.
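
A minimal sketch of that flow – the vault name (my-vault) and secret name (DbConnectionString) are illustrative – might look like this:

# One-off, from a management machine: grant the VM's MSI read access to secrets in the vault.
Set-AzureRmKeyVaultAccessPolicy -VaultName "my-vault" `
    -ObjectId "<principalId of the VM's MSI>" -PermissionsToSecrets get

# On the VM: get a token for Key Vault from the local instance metadata endpoint...
$token = (Invoke-RestMethod `
    -Uri "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net" `
    -Headers @{ Metadata = "true" }).access_token

# ...and use it to read the connection string, with no credentials stored on the VM at all.
$secret = Invoke-RestMethod `
    -Uri "https://my-vault.vault.azure.net/secrets/DbConnectionString?api-version=2016-10-01" `
    -Headers @{ Authorization = "Bearer $token" }
$secret.value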

API Management and Key Vault

Another great example of an MSI being used with Key Vault is Azure API Management. API Management creates a public domain name for the API gateway, to which we can assign a custom domain name and SSL certificate. We can store the SSL certificate inside Key Vault, and then give Azure API Management an MSI and access to that Key Vault secret. Once it has this, API Management can automatically retrieve the SSL certificate for the custom domain name straight from Key Vault, simplifying the certificate installation process and improving security by ensuring that the certificate is not directly passed around.

Azure Functions and Azure Resource Manager

Azure Resource Manager (ARM) is the deployment and resource management system used by Azure. ARM itself supports AAD authentication. Imagine we have an Azure Function that needs to scan our Azure subscription to find resources that have recently been created. In order to do this, the function needs to log into ARM and get a list of resources. Our Azure Functions app can expose an MSI, and once that MSI has been granted reader rights over the relevant resource group or subscription, the function can get a token to make ARM requests and retrieve the list without needing to maintain any credentials.
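
As a sketch, the function’s code would do something along these lines (the subscription ID and resource group name are illustrative, and the MSI needs Reader rights over that scope):

# Inside the Functions app: get a token for Azure Resource Manager from the MSI endpoint.
$token = (Invoke-RestMethod `
    -Uri "$($env:MSI_ENDPOINT)?resource=https://management.azure.com/&api-version=2017-09-01" `
    -Headers @{ Secret = $env:MSI_SECRET }).access_token

# List the resources in a resource group the MSI can read.
$subscriptionId = "<subscription id>"
$resources = Invoke-RestMethod `
    -Uri "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/my-rg/resources?api-version=2017-05-10" `
    -Headers @{ Authorization = "Bearer $token" }
$resources.value | Select-Object name, type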

App Services and Event Hubs/Service Bus

Event Hubs is a managed event stream. Communication to both publish onto, and subscribe to events from, the stream can be secured using Azure AD. An example scenario where MSIs would help here is when an application running on Azure App Service needs to publish events to an Event Hub. Once the App Service has been configured with an MSI, and Event Hubs has been configured to grant that MSI publishing permissions, the application can retrieve an Azure AD token and use it to post messages without having to maintain keys.

Service Bus provides a number of features related to messaging and queuing, including queues and topics (similar to queues but with multiple subscribers). As with Event Hubs, an application could use its MSI to post messages to a queue or to read messages from a topic subscription, without having to maintain keys.

App Services and Azure SQL

Azure SQL is a managed relational database, and it supports Azure AD authentication for incoming connections. A database can be configured to allow Azure AD users and applications to read or write specific types of data, to execute stored procedures, and to manage the database itself. When coupled with an App Service with an MSI, Azure SQL’s AAD support is very powerful – it reduces the need to provision and manage database credentials, and ensures that only a given application can log into a database with a given user account. Tomas Restrepo has written a great blog post explaining how to use Azure SQL with App Services and MSIs.

Summary

In this post we’ve looked into the details of managed service identities (MSIs) in Azure. MSIs provide some great security and management benefits for applications and systems hosted on Azure, and enable high levels of automation in our deployments. While they aren’t particularly complicated to understand, there are a few subtleties to be aware of. As long as you understand that MSIs are for authentication of a resource making an outbound request, and that authorisation is a separate thing that needs to be managed independently, you will be able to take advantage of MSIs with the services that already support them, as well as the services that may soon get MSI and AAD support.

Deploy active/active FortiGate NGFW in Azure

I was recently tasked with deploying two Fortinet FortiGate firewalls in Azure in a highly available active/active model. I quickly discovered that there are currently only two deployment types available in the Azure marketplace: a single VM deployment, and a high availability deployment (which is an active/passive model and wasn’t what I was after).

FG NGFW Marketplace Options

I did some digging around on the Fortinet support sites and discovered that you can achieve an active/active model in Azure using dual load balancers (a public and an internal Azure load balancer), as indicated in this Fortinet document: https://www.fortinet.com/content/dam/fortinet/assets/deployment-guides/dg-fortigate-high-availability-azure.pdf.

Deployment

To achieve an active/active model you must deploy two separate FortiGates using the single VM deployment option and then deploy the Azure load balancers separately.

I will not be going through how to deploy the FortiGates and the required VNets, subnets, route tables, etc. as that information can be found here on Fortinet’s support site: http://cookbook.fortinet.com/deploying-fortigate-azure/.

NOTE: When deploying the FortiGates, ensure they are deployed into different frontend and backend subnets; otherwise the route tables will end up routing all traffic to one FortiGate.

Once you have two FortiGates, a public load balancer, and an internal load balancer deployed in Azure, you are ready to configure the FortiGates.

Configuration

NOTE: Before proceeding, ensure you have configured static routes for all your Azure subnets on each FortiGate; otherwise the FortiGates will not be able to route Azure traffic correctly.

Outbound traffic

Directing all internet traffic from Azure via the FortiGates requires some configuration on the Azure internal load balancer and a user defined route (a scripted sketch follows these steps).

  1. Create a load balance rule with:
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Health probe: a TCP health probe on a port the FortiGates respond on (e.g. port 22)
    • Session Persistence: Client IP
    • Floating IP: Enabled
  2. Repeat step 1 for port 80 and any other ports you require
  3. Create an Azure route table with a default route to the Azure internal load balancer IP address
  4. Assign the route table to the required Azure subnets

IMPORTANT: In order for the load balance rules to work you must add a static route on each FortiGate for IP address 168.63.129.16. This is required for the Azure health probe to communicate with the FortiGates and perform health checks.
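
For reference, the Azure side of steps 1 and 3 can also be scripted. This is a rough sketch using the AzureRM cmdlets – the resource names, location, and next hop IP are all illustrative, and the 168.63.129.16 static route mentioned above is configured on the FortiGates themselves, not in Azure:

# Add a load balance rule for 443 with floating IP and source IP persistence to the internal LB.
$ilb = Get-AzureRmLoadBalancer -Name "fg-internal-lb" -ResourceGroupName "fg-rg"
Add-AzureRmLoadBalancerRuleConfig -LoadBalancer $ilb -Name "Outbound443" `
    -FrontendIpConfiguration $ilb.FrontendIpConfigurations[0] `
    -BackendAddressPool $ilb.BackendAddressPools[0] `
    -Probe $ilb.Probes[0] `
    -Protocol Tcp -FrontendPort 443 -BackendPort 443 `
    -LoadDistribution SourceIP -EnableFloatingIP |
    Set-AzureRmLoadBalancer

# Route table sending all outbound traffic to the internal load balancer's frontend IP.
$rt = New-AzureRmRouteTable -Name "to-fortigate" -ResourceGroupName "fg-rg" -Location "australiaeast"
Add-AzureRmRouteConfig -RouteTable $rt -Name "default" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.100" |
    Set-AzureRmRouteTable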

FG Azure Health Probe Cfg

Once complete the outbound internet traffic flow will be as follows:

FG Internet Traffic Flow

Inbound traffic

Publishing something like a web server to the internet through the FortiGates requires some configuration on the Azure public load balancer (a scripted sketch of the Azure side follows these steps).

Let’s say I have a web server that resides on my Azure DMZ subnet that hosts a simple website on HTTPS/443. For this example the web server has IP address: 172.1.2.3.

  1. Add an additional public IP address to the Azure public load balancer (for this example let’s say the public IP address is: 40.1.2.3)
  2. Create a load balance rule with:
    • Frontend IP address: 40.1.2.3
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Session Persistence: Client IP
  3. On each FortiGate create a VIP address with:
    • External IP Address: 40.1.2.3
    • Mapped IP Address: 172.1.2.3
    • Port Forwarding: Enabled
    • External Port: 443
    • Mapped Port: 443
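
The Azure side of steps 1 and 2 can be scripted in a similar way – another rough sketch with illustrative names, assuming the public load balancer already has a probe and a backend pool containing both FortiGates:

# Add a second public IP and a matching frontend configuration to the public load balancer.
$pip = New-AzureRmPublicIpAddress -Name "web-pip" -ResourceGroupName "fg-rg" `
    -Location "australiaeast" -AllocationMethod Static
$plb = Get-AzureRmLoadBalancer -Name "fg-public-lb" -ResourceGroupName "fg-rg"
Add-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $plb -Name "web-frontend" -PublicIpAddress $pip |
    Set-AzureRmLoadBalancer

# Load balance 443 on the new frontend IP to the FortiGate backend pool.
$plb = Get-AzureRmLoadBalancer -Name "fg-public-lb" -ResourceGroupName "fg-rg"
Add-AzureRmLoadBalancerRuleConfig -LoadBalancer $plb -Name "Web443" `
    -FrontendIpConfiguration ($plb.FrontendIpConfigurations | Where-Object { $_.Name -eq "web-frontend" }) `
    -BackendAddressPool $plb.BackendAddressPools[0] `
    -Probe $plb.Probes[0] `
    -Protocol Tcp -FrontendPort 443 -BackendPort 443 -LoadDistribution SourceIP |
    Set-AzureRmLoadBalancer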

FG WebServer VIP Cfg

You can now create a policy on each FortiGate to allow HTTPS to the VIP you just created; HTTPS traffic will then be allowed through to your web server.

For details on how to create policies/VIPs on FortiGates, refer to the Fortinet support website: http://cookbook.fortinet.com.

Once complete the traffic flow to the web server will be as follows:

FG Web Traffic Flow

Azure Update Management

How do you patch/update your infrastructure in Azure, AWS, On-Premises? There are many ways, of course, including manually, built-in scheduled update, Group Policy, locally scripted, ConfigMgr, custom Azure Automation, WSUS, and so on.

Somewhat recently, another option “Azure Update Management” has become available, and it is FREE*. This is an expanded offering of what used to be OMS Update Management, integrated into the main Azure Portal and visible on each VM under the “Update Management” node.

Rather than regurgitate the existing documentation and tutorial, I want to highlight some of the finer points:

  • Yes, supports Windows and Linux
  • Requires ‘supported’ versions of Windows or Linux
  • Does not support ‘client’ versions e.g. Windows 7/8/8.1/10
  • Requires .NET Framework 4.5 and Windows Management Framework 5.0 or later on Windows 2008R2 SP1
  • Windows Server 2008 or 2008 R2 without SP1 won’t apply updates, just scan/assess
  • Update targets must have access to an update repository
    • WSUS, ConfigMgr SUP, or Microsoft Update
    • Linux package repository, either locally managed, or the OS default
  • Integration with ConfigMgr requires current branch 1606 or newer

Caveats

  • If an update reports that it requires a reboot, the VM will reboot. Currently there appears to be no way to avoid/defer a reboot
  • Windows VMs only scan for updates every 12 hours
  • Linux VMs scan for updates every 3 hours

I was about to build a WSUS server in an Azure subscription to address a number of manually updated or otherwise unmanaged Azure VMs, which was going to cost a minimum of about AUD $200 per month. This appears to be a nearly ideal solution to me and very attractive to the client at the ‘nearly free’ price point.

This is my new ‘go-to’ for update management; I hope it looks as good to you, and simplifies an important part of your environment.

FREE*

  • Requires a Log Analytics workspace, which does cost a small amount for ingestion and storage for 31 days (https://azure.microsoft.com/en-us/pricing/details/log-analytics/). The first 5 GB of ingestion per month is free, which should be good for 50-100 VMs, leaving about 20c/GB/month for storage costs – so maybe $1 per month for up to 100 VMs!
  • No further costs involved for Azure VMs
  • Non-Azure VMs/Computers could incur further charges, depending on your environment, use of extra Azure Automation/Configuration Management features, etc

Sample Update Status (Server name column removed)

AzureUpdateManagement

Deploy VM via ARM template: Purchase eligibility failed

I recently tried to deploy a VM using an ARM template executed via PowerShell and I encountered the purchase eligibility failed error as seen below.

PurchaseEligibilityFailedError

As I have encountered this before, I ensured I had accepted the marketplace terms for the VM image in question using the PowerShell command:

Get-AzureRmMarketplaceTerms -Publisher PublisherName -Product ProductName -Name Name | Set-AzureRmMarketplaceTerms -Accept
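
If you want to double-check whether the terms were actually accepted, the same cmdlet can be queried – a sketch, with the placeholder values coming from the imageReference (publisher/offer/sku) section of your ARM template; the output should include an Accepted flag:

# Verify the current acceptance state of the marketplace terms for the image.
Get-AzureRmMarketplaceTerms -Publisher PublisherName -Product ProductName -Name Name |
    Select-Object Publisher, Product, Plan, Accepted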

I then reattempted to deploy my VM using my ARM template and still got the same error; I even waited 24 hours and tried again, with no luck.

I then discovered that from the Azure portal you can create a new resource using the “Template deployment” option and deploy your ARM template via the Azure portal.

TemplateDeploymentOption

After I uploaded and executed my ARM template using this method it deployed my VM successfully with no purchase eligibility errors.

Any subsequent VM deployments via PowerShell using the exact same ARM template now worked as expected with no purchase eligibility errors.

Azure ARM architecture pattern: a DMZ design with a firewall appliance

I’m in the process of putting together a new Azure design for a client. As always in Azure, the network components form the core of the design. There were a couple of key requirements that needed to be addressed and that the existing environment had outgrown: the lack of any layer 7 heightened security controls at the edge, and the lack of a DMZ.

I was going through some designs that I’ve previously done and checking the Microsoft literature on what some fresh design patterns might look like, in case anything’s changed in recent times. There is still only a single reference design in the Microsoft Azure documentation, and it still references ASM rather than ARM.

For me then, it seems that the existing pattern I’ve used is still valid. Therefore, I thought I’d share what that architecture would look like via this quick blog post.

My DMZ with a firewall appliance design

Here’s an overview of some key points on the design:

  • Firewall appliances will have multiple interfaces, but typically there are two that we are mostly concerned with: an internal interface and an external interface
  • Network interfaces in ARM are now independent objects from compute resources
    • As such, an interface needs to be associated with a subnet
    • Both the internal and external interfaces could in theory be associated with the same subnet, but that’s a design for another blog post some day
  • My DMZ design features two subnets in two zones
    • Zone 1 = “Untrusted”
    • Zone 2 = “Semi-trusted”
    • Zone 3 = “Trusted” – this is for reference purposes only so you get the overall picture
  • Simple subnet design
    • Subnet 1 = “External DMZ”
    • Subnet 2 = “Internal DMZ”
    • Subnet 3 = Trusted
  • Network Security Groups (NSGs) are also used to encapsulate the DMZ subnets and to isolate traffic from the VNET
  • Through this topology there are effectively three layers of firewall between the untrusted zone and the trusted zone
    • External DMZ NSG, firewall appliance and Internal DMZ NSG
  • With two DMZ subnets (external DMZ and internal DMZ), there are two scenarios for deploying DMZ workloads
    • External DMZ = workloads that do not require heightened security controls by way of the firewall
      • Workload example: proxy server, jump host, additional firewall appliance management or monitoring interfaces
      • Alternatively, this subnet does not need to be used for anything other than the firewall edge interface
    • Internal DMZ = workloads that require heightened security controls and also (by way of best practice) shouldn’t be deployed in the trusted zone
      • Workload example: front end web server, Windows WAP server, non domain joined workloads
  • Using the firewall as an edge device requires route tables to force all traffic leaving a subnet to be directed to the firewall’s internal interface (see the sketch after this list)
    • This applies to the internal DMZ subnet and the trusted subnet
    • The External DMZ subnet does not have a route table configured so that the firewall is able to route out to Azure internet and the greater internet
  • Through VNET peering, other VNETs and subnets could also route out to Azure internet (and the greater WWW) via this firewall appliance – again, leverage route tables
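
As a rough sketch of the route table points above (AzureRM cmdlets; the names, address prefix, and next hop IP of the firewall’s internal interface are all illustrative):

# Force all traffic leaving the internal DMZ subnet through the firewall's internal interface.
$rt = New-AzureRmRouteTable -Name "internal-dmz-rt" -ResourceGroupName "network-rg" -Location "australiaeast"
Add-AzureRmRouteConfig -RouteTable $rt -Name "default-via-firewall" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.4" |
    Set-AzureRmRouteTable

# Associate the route table with the internal DMZ subnet (repeat for the trusted subnet).
$vnet = Get-AzureRmVirtualNetwork -Name "core-vnet" -ResourceGroupName "network-rg"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "internal-dmz" `
    -AddressPrefix "10.0.2.0/24" -RouteTable $rt |
    Set-AzureRmVirtualNetwork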

Cheers 🙂

Azure ARM architecture pattern: the correct way to deploy a DMZ with NSGs

Isolating any subnet in Azure can effectively create a DMZ. Doing this correctly is certainly super easy, but it is also something that can easily be done incorrectly.

Firstly, all that is required is an NSG associated with any given subnet (caveat: remember that NSGs are not compatible with the GatewaySubnet). Doing this will deny most traffic to and from that subnet – mostly traffic relating to the “Internet” tag. What is easily missed is applying a deny-all rule set in both the inbound and outbound rules of the NSG itself.

I’ve seen some clients that have put an NSG on a subnet and assumed that subnet was protected. Unfortunately, that’s not correct.

I’ve seen some clients that have put a deny-all rule inbound from the internet, and likewise a deny-all rule outbound to the internet, and assumed that the subnet was protected and isolated. Unfortunately, that’s also not correct.

How to correctly isolate a subnet to create a DMZ

Azure applies three default rules to every NSG, in each direction. The default inbound rules are:

  • 65000 AllowVnetInBound – allows all traffic from VirtualNetwork to VirtualNetwork
  • 65001 AllowAzureLoadBalancerInBound – allows all traffic from the AzureLoadBalancer tag
  • 65500 DenyAllInBound – denies everything else

(The outbound set is equivalent: AllowVnetOutBound, AllowInternetOutBound and DenyAllOutBound.)

To view these default rules, select the “Default rules” button at the top of the NSG’s inbound or outbound rules blade.

Of these three rules, it’s the two higher-priority allow rules that can trip people up. The main culprit is rule 65000, which means that any other subnet in your VNET, and any other VNET that is peered with your VNET, is allowed to communicate with your given subnet.

To correctly isolate a subnet in a VNET, we need to create a new rule for both inbound and outbound (I always use the lowest possible priority, which is the highest available number: 4096) that denies all ports from all IPs or subnets, overriding these default rules. Azure NSGs work by precedence: the lower the priority number, the earlier the rule is processed (much like the ACLs of any other firewall or network appliance vendor), so a deny-all rule at 4096 is evaluated last and catches anything that a more specific, higher-priority allow rule hasn’t already permitted.
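
In script form, the equivalent pair of rules could be created with something like this sketch (AzureRM cmdlets; the NSG and resource group names are illustrative):

# Add explicit deny-all rules at priority 4096 in both directions, then save the NSG.
$nsg = Get-AzureRmNetworkSecurityGroup -Name "dmz-nsg" -ResourceGroupName "network-rg"
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Deny-All-Inbound" `
    -Priority 4096 -Direction Inbound -Access Deny -Protocol "*" `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*" |
    Add-AzureRmNetworkSecurityRuleConfig -Name "Deny-All-Outbound" `
    -Priority 4096 -Direction Outbound -Access Deny -Protocol "*" `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*" |
    Set-AzureRmNetworkSecurityGroup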

You’ll also notice two warnings when you attempt to save that rule, which explain what happens rather more succinctly than my earlier description:

  • Warning #1 – This rule denies traffic from AzureLoadBalancer and may affect virtual machine connectivity. To allow access, add an inbound rule with higher priority to allow AzureLoadBalancer to VirtualNetwork.
  • Warning #2 – This rule denies virtual network access. If you wish to allow access to your virtual network, add an inbound rule with higher priority to Allow VirtualNetwork to VirtualNetwork.

With that, traffic from the Azure load balancer, from other subnets within the same VNET, and from other VNETs via peering will be denied. Thus we have a truly isolated subnet – one that can be set up as a DMZ.

Cheers!

BONUS

Before I go, I just wanted to quickly mention that in Azure, NSGs can also be applied to a single network interface.

If you flip the methodology, it’s quite possible to have no NSGs on any subnets and instead apply NSGs to every interface associated with servers and instances in a VNET. There is a key drawback with this approach though: administrative overhead.

With an NSG associated with a server instance, again following my correct deny-all rule mentioned earlier, a single NIC or a single server instance could be isolated in a pseudo DMZ. The challenge then is, if this process is repeated across 100 servers, keeping all those NSGs up to date and replicating rules when servers need to communicate on various ports or protocols. Administrative overhead indeed!


Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.

Recommendations on using Terraform to manage Azure resources

siliconvalve

If you’ve been working in the cloud infrastructure space for the last few years you can’t have missed the buzz around Hashicorp’s Terraform product. Terraform provides a declarative model for infrastructure provisioning that spans multiple cloud providers as well as on-premises services from the likes of VMWare.

I’ve recently had the opportunity to use Terraform to do some Azure infrastructure provisioning so I thought I’d share some recommendations on using Terraform with Azure (as at January 2018). I’ll also preface this post by saying that I have only been provisioning Azure PaaS services (App Service, Cosmos DB, Traffic Manager, Storage and Application Insights) and haven’t used any IaaS components at all.

In the beginning

I needed to provide an easy way to provision around 30 inter-related services that together constitute the hosting environment for a single customer solution. Ideally I wanted a way to make it easy to re-provision these…

View original post 1,113 more words

Use Azure Health to track active incidents in your Subscriptions

siliconvalve

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The graphs below are a little old, but you can see a clear spike on the left, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue…

View original post 268 more words

Azure AD Domain Services

I recently had what I thought was a rather unique requirement from a customer.

The requirement was to build Azure IaaS virtual machines and have them joined to a managed domain, while also being able to authenticate to the virtual machines using Azure AD credentials.

The answer is Azure AD Domain Services!

Azure AD Domain Services provides managed domain services such as domain join, group policy and Kerberos/NTLM authentication without the need for you to deploy and manage domain controllers in the cloud. For more information see https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-overview

It is not without its limitations though; the main things to call out are that configuring domain trusts and applying schema extensions are not possible with Azure AD Domain Services. For a full list of limitations see: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-comparison

Unfortunately at this point in time you cannot use ARM templates to configure Azure AD Domain Services so you are limited to the Azure Portal or PowerShell. I am not going to bore you with the details of the deployment steps as it is quite simple and you can easily follow the steps supplied in the Microsoft documentation: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-enable-using-powershell

What I would like to do is point out the following learnings that I discovered during my deployment.

  1. In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
  2. If there are any cloud-only user accounts, those users must change their passwords after Azure AD Domain Services is provisioned before they can use it. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
  3. Once a cloud-only user account has changed its password, you will need to wait a minimum of 20 minutes before that account will be able to use Azure AD Domain Services (this got me, as I was impatient).
  4. Speaking of patience, the provisioning process for Azure AD Domain Services takes about an hour.
  5. Have a dedicated subnet for Azure AD Domain services to avoid any connectivity issues that may occur with NSGs/firewalls.
  6. You can only have one managed domain connected to your Azure Active Directory.

That’s it – hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.

Understanding Azure’s Container PaaS Capabilities

siliconvalve

If you’ve been using Azure over the past twelve months, you can’t help but have the feeling that it’s become a bit like this…

Containers... Containers Everywhere

.. and you’d be right.

To be fair, though, Containers have been one of the hot topics in computing in general and certainly one that’s been getting the most interest in my recent Azure Open Source Roadshows.

One thing that has struck me though is that people are not clear on the purpose of all the services in Azure that have ‘Containers’ listed as a capability, so in this post I am going to try and review the Azure Platform-as-a-Service offerings that have Container capabilities and cover what the services can be used for.

First, before we begin, let’s quickly get some fundamentals under our belts.

What is a Container?

Containers provide encapsulation and isolation for workloads and remove the need for a complete Operating System image…

View original post 1,698 more words