Use Azure Health to track active incidents in your Subscriptions

siliconvalve

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The graphs below are a little old, but you can see a clear spike on the left of each graph, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue…

View original post 268 more words

Visual Studio Team Services (VSTS) Continuous Integration and Continuous Deployment

I have been working on an Azure PaaS project recently and tried to leverage the VSTS DevOps CI/CD features to automate the build and deployment process. Thanks to my colleague Sean Perera, who helped me and provided a deep dive into the VSTS CI/CD process.

I am writing this blog to share the whole workflow:

  1. Create a new project in VSTS and create a Dev branch based on the master branch.

  2. Establish the connection from the local Visual Studio environment to the VSTS project.

  3. Push the web app code to the VSTS dev branch.

  4. Set up the endpoint connection between VSTS and Azure:
  • Log in to the Azure tenant environment and create a new app registration for the VSTS tenant.

  • Generate a service principal key and keep it safe.

  • In the VSTS portal, go to Settings -> Services -> New Service Endpoint; the service principal client ID will be the Azure application ID, and the service principal key will be the Azure service principal key.

  • Click “Verify connection” to make sure the connection test passes.
  5. Create a build definition:
  • Define the build task: select the repo source, define the Azure subscription, the destination to push to, and all the app settings and parameter definitions.

  • Go to Triggers and enable the CI settings:

  6. Create a new release definition:
  • Define the release pipeline: specify where the source build comes from and what the target environment is; in my case, I am using VSTS to push code to an Azure PaaS environment.

  • Enable the Continuous Deployment settings

  • Define the release tasks: in my case, deploying the Azure App Service and then swapping from the staging to the production environment.

  7. Auto build and release process

Once I make a change to my project code in my local Visual Studio environment and commit and push it to the VSTS dev branch, VSTS automatically starts the build and release process, completes the release and pushes the code to the Azure web app environment.

  8. Done. I tested my code in the dev and prod environments and it looks good; the VSTS DevOps features really speed up the whole deployment process.
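As a side note, once CI/CD is wired up you can also check the queued builds programmatically instead of through the portal. The sketch below is a rough, non-authoritative example using the VSTS build REST API with a personal access token; the account name, project name and token are placeholders, and the exact api-version may differ for your account.

import requests

# Placeholder values - substitute your own VSTS account, project and PAT
account = "myaccount"                      # i.e. https://myaccount.visualstudio.com
project = "MyAzurePaaSProject"
pat = "personal-access-token-with-build-read-scope"

url = ("https://{0}.visualstudio.com/DefaultCollection/{1}"
       "/_apis/build/builds?api-version=2.0&$top=5").format(account, project)

# VSTS accepts a PAT as the password of a basic-auth pair with an empty username
response = requests.get(url, auth=("", pat))
response.raise_for_status()

for build in response.json().get("value", []):
    print(build["buildNumber"], build["status"], build.get("result"))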

 

HoloLens – Spatial sound

The world of Mixed Reality and Augmented Reality is only half real without three-dimensional sound effects to support the virtual world. Microsoft deals with this challenge by leveraging the ability of its audio engine to generate Spatial Sound. Spatial Sound, as a feature, can simulate three-dimensional sounds in a virtual world based on direction, distance, and other environmental factors. Spatial Sound is based on the concept of sound localization, which is a popular topic in the field of sound engineering. Sound localization can be defined as the process of determining the source of a sound, the field of the sound, the position of the listener, and the medium or environment of sound propagation. The following diagram illustrates a virtual world with Spatial Sound enabled:

[Diagram: a virtual world with Spatial Sound enabled]

The concept of sound localization and Spatial Sound is not new to this world. Spatial music has been practiced since biblical ages in the form of the antiphon. The modern form of spatial music was introduced in the early 1900s in Germany. Spatial Sound empowers a Mixed Reality application developer to place sound sources in a three-dimensional space around the user in a convincing manner. Objects in the virtual world can act as sources for these sounds, creating an immersive experience for the user.

Scenarios for Spatial Sound

In the world of Mixed Reality, Spatial Sound can be used to make many user scenarios realistic. Following are a few of them.

  • Anchoring – The ability to position a virtual object in a virtual world is critical for many Mixed Reality applications. In a scenario where the user turns his face away from the object and it disappears from his viewport, the only way to give the user a sense of the object’s existence in the scene is by propagating localized sounds from the object.
  • Guiding – Spatial Sound is found to be very useful in scenarios where users need to be guided by drawing their attention to a specific object or space in the three-dimensional world.
  • Simulating physics – Sound plays an important role in emulating realistic physics in the world of Mixed Reality. For example, the impact of a glass sphere dropping behind the user in a three-dimensional world is best simulated by enabling localized shattering sound effects at the point of its collision with the floor.

Implementing Spatial Sound in HoloLens applications

The audio engine in HoloLens uses a technology called HRTF (Head Related Transfer Functions) to simulate sounds coming from various directions and distances within a virtual world. Head Related Transfer Functions define the directivity patterns of the human ears, which cater for the direction, elevation, distance, and frequency of sound. Unity offers built-in support for the Microsoft HRTF extensions.

Configuring the Audio Source

The plug-in can be enabled from the audio manager in Unity (Edit->Project Settings->Audio).


Once the setting is enabled, you should be able to ‘Spatialize’ any audio source attached to a game object in Unity. To configure the audio source, you will need to perform the following steps:

  1. Select the audio source on the game object in the inspector panel
  2. Check the ‘Spatialize’ checkbox under the options for the audio source
  3. For best results, change the value of ‘Spatial Blend’ to 1

Your audio source is now configured to play Spatial Sound.

Testing Spatial Sound

The best way to test Spatial Sound is by using the ‘Audio Emitter’ component which comes built-in with the HoloToolKit. Perform the following steps to configure the Audio Emitter.

  1. Add the Audio Emitter component from the inspector panel.
  2. Configure the Update Interval, Max Objects, and Max Distance on the Audio Emitter component. The ‘Update Interval’ determines the frequency at which the Audio Emitter scans for environmental factors which influence the sound output. ‘Max Objects’ specifies the maximum number of influencing objects to be considered, and ‘Max Distance’ specifies the maximum radius for the scan.
  3. Leave the ‘Outer Sphere’ parameter empty for now. This parameter is used to demonstrate audio occlusion.
  4. Associate an audio file with your Audio Source and enable looping.
  5. Run the application and move around the object to experience Spatial Sound in action.

Measuring sound output

A common scenario is needing to measure the sound output produced by an Audio Source in order to represent it visually; a good example is when you need to display an audio histogram within your application.

The RMS (root mean square) level of an Audio Source’s output can be measured by pulling a block of output samples and computing their RMS value, which can then be used to paint the histogram spectrum.
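As a rough illustration of that calculation (not HoloLens-specific code), here is a minimal sketch of the RMS and decibel maths, written in Python for readability; in a Unity script the same arithmetic would be applied to the float array filled by AudioSource.GetOutputData.

import math

def rms_level(samples):
    """Root mean square of a block of audio samples in the range [-1.0, 1.0]."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def to_decibels(rms, reference=1.0):
    """Convert an RMS level to decibels relative to a full-scale reference."""
    return 20 * math.log10(max(rms, 1e-9) / reference)

# Example: a quiet 440 Hz sine block sampled at 48 kHz
block = [0.1 * math.sin(2 * math.pi * 440 * n / 48000) for n in range(1024)]
print(rms_level(block), to_decibels(rms_level(block)))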

Unity also supports a direct function (AudioSource.GetSpectrumData) to retrieve spectrum data from the audio source

To conclude, in this blog, we walked through the fundamentals of spatial sound. After which, we learned about implementing Spatial Sound in a HoloLens application and about how to measure sound output.

HoloLens – Understanding depth (Spatial Mapping)

Building smart applications which can work in a three-dimensional space has many challenges. Amongst these, the one that tops the list is the challenge of understanding and mapping the surrounding 3D world. Applications usually depend on the device and platform capability to resolve this problem. Augmented Reality and Mixed Reality devices ship with built-in technologies to measure the depth of the containing world.

Scenarios of interest

Mapping the world around a device is critical to enable powerful scenarios in this field. Following are a few such use cases:

  • Docking/Placement – What makes Mixed Reality different from Augmented Reality is its ability to enable interaction between virtual and physical objects. To make a Mixed Reality scenario realistic, it is critical for the application to understand the mapping of the environment where the user is currently operating from. This will help the application place or dock the holograms obeying the physical bounds of the environment. For example, if the application needs to place a chair in a room, it will need to position it on the floor with enough space to land its four legs.
  • Navigation – Objects in the holographic world should be constrained by the rules of the physical world to make the application look real. For example, a holographic puppy should be able to walk on the floor or jump on to the couch and not walk through the walls and furniture. To enable this, the application should be aware of the depth of each object around the user at any given point in time.
  • Physics – In the real world, the behaviour of an object in motion is highly influenced by factors like inertia, density, elasticity, and so on. To match the similar experience of holograms in the world of Mixed Reality, the application will need to be aware of the environment. For Example, dropping the ball from the roof on the floor will have a different effect from dropping it on a couch.

Technologies

Depth sensing is not a new problem in the world of computing. However, the rise of Mixed Reality and Augmented Reality enabled devices has taken it into the limelight. Different vendors address this challenge under different names. For example, Google calls it ‘Depth Perception’, Apple ARKit calls it a ‘Depth Map’, and Microsoft calls it ‘Spatial Mapping’. The underlying technologies used by these devices may be different, but the objective of discovering the depth of the environment around the device remains the same. Following are a few of the underlying technologies used by these devices to measure depth:

  • Structured Infrared light projector/scanner
  • RGB Depth cameras
  • Time-of-flight camera

Time of Flight

Time-of-flight technology is specifically of interest to us because of its popularity and the fact that it is used by devices like Microsoft Kinect and HoloLens to measure depth. The technology works on the reflective properties of objects. It uses the known speed of light to calculate distance by measuring the time taken for a photon to reflect back to the device sensors. The following diagram illustrates the measurement process:

[Diagram: time-of-flight measurement process]

Experiments indicate that time-of-flight technology works best within a range of approximately 0.5 to 5 metres. The depth camera in HoloLens works well between 0.85 and 3.01 metres.
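To make the arithmetic concrete: the photon travels to the surface and back, so the distance is the speed of light multiplied by half the round-trip time. A minimal sketch (the 20-nanosecond round trip is just an illustrative value):

SPEED_OF_LIGHT = 299792458.0  # metres per second

def distance_from_round_trip(round_trip_seconds):
    # Light travels out and back, so halve the measured trip time
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A surface roughly 3 metres away reflects a photon back in about 20 nanoseconds
print(distance_from_round_trip(20e-9))  # ~3.0 metres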

Spatial Mapping

Spatial Mapping is a feature shipped with Microsoft HoloLens which provides a representation of real-world surfaces around the device. This can be used by application developers to make their applications environment-aware. The feature abstracts the hardware technology used to measure depth and exposes easily consumable APIs to integrate with.

Spatial Mapping in HoloLens

The best way to leverage Spatial Mapping capabilities in a HoloLens is to integrate it using Unity. To enable Spatial Mapping, you will first need to enable ‘SpatialPerception’ capability on your project. Unity offers two built-in components to support Spatial Mapping.

  • Spatial Mapping Renderer – This component is responsible for visually presenting the spatial map as a mesh to the application.
  • Spatial Mapping Collider – The collider is responsible for enabling interactions between the holograms and the spatial mesh.

These components can be added to an existing Unity project from the ‘Add Component’ menu (Add Component > AR > Spatial Mapping Collider/Spatial Mapping Renderer).

The following link talks about setting up Spatial Mapping capabilities in your Unity project in detail.

https://developer.microsoft.com/en-us/windows/mixed-reality/spatial_mapping_in_unity

Tips and tricks

Following are a few tips and tricks to optimize Spatial Mapping in your applications.

  • Updating spatial maps – Running a spatial map starts with the trigger for collecting mapping data. This operation is very CPU intensive, costing battery life and starving other processes. To optimise update cycles, request collision data only when required. The APIs also let you query collision data for selective surfaces.
  • Configuring refresh intervals – It is important to choose an optimal refresh rate for your spatial maps to go light on the CPU. You can do this from Unity’s inspector window.


  • The density of spatial data – Spatial Mapping uses triangle meshes to represent the surfaces it maps. For an application which does not require high-resolution mapping, it is advisable to generate maps with a lower triangle density to optimise CPU time and the turnaround time of the mapping process.
  • Understanding the implementation – It is useful to understand the implementation of Spatial Mapping to perform low-level optimizations specific to your applications. Understanding how ‘SurfaceObserver’ and ‘SurfaceData’ are implemented gives good insight into how things work. You can refer to the Unity documentation to learn more about this.

https://docs.unity3d.com/Manual/windowsholographic-sm-lowlevelapi.html

  • Mixed Reality Toolkit example – A good place to start on Spatial Mapping is the example shared within the Mixed Reality Toolkit accessible through the following link.

https://github.com/Microsoft/MixedRealityToolkit-Unity/tree/master/Assets/HoloToolkit-Examples/SpatialMapping

To conclude, in this blog we briefly discussed the depth mapping problem, solutions to this problem and the technology behind it. We also dived deeper into the Spatial Mapping feature of HoloLens and how it can be used within a Unity project.

 

Xamarin Forms: Microsoft.EntityFrameworkCore.Sqlite issue with Physical devices

Introduction

Building Xamarin Forms apps using .Net Standard 2.0 is still fairly new to the industry; we are only just starting to learn how differently we have to configure Xamarin settings to get it working compared to PCL-based projects.

I was building a Xamarin Forms based app using Microsoft’s Entity Framework Core SQLite to store the app’s data. Entity Framework with SQLite is an obvious choice when it comes to building an app using .Net Standard 2.0.

Simulator

It works well on pretty much all simulators without any issue; all read/write operations work well.

Issue  – Physical Device

The app crashes on a physical device when trying to read or write data from the SQLite database.

Error

System.TypeInitializationException: The type initializer for ‘Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions’ threw an exception. ---> System.InvalidOperationException: Sequence contains no matching element

Resolution

Change linker behavior to “Don’t Link”

Xamarin forms using .Net Standard 2.0

Introduction

All Xamarin developers, please welcome .Net Standard 2.0. This is the kind of class library we have been waiting for all these years. The .Net Standard 2.0 specification is now complete and is implemented by .Net Core 2.0, .Net Framework 4.6.1 and later versions. It can be used with Visual Studio 15.3 and up. .Net Standard 2.0 obviously supports C#, and also F# and Visual Basic.

More APIs

.Net Standard 2.0 is for sharing code across various platforms. It includes all the common APIs that all .Net implementations support, unifying the .Net frameworks to avoid fragmentation in the future. There are more than 32,000 APIs in .Net Standard 2.0, most of which are already available in the .Net Framework. Microsoft has made it easy to port existing code to .Net Standard 2.0, and it is now easy to extend any .Net Standard code to .Net Core 2.0 or any versions that come in the future.

NuGet Support

Most NuGet packages currently target the .Net Framework, but not all projects are compatible to move to .Net Standard 2.0, therefore a compatibility mode has been added to support them. Even with compatibility mode, only up to 70% of packages are supported.

Frameworks and Libraries

Below is a table listing all the supported frameworks and their minimum versions for each .NET Standard version.

.NET Standard              1.0   1.1   1.2    1.3   1.4    1.5          1.6          2.0
.NET Core                  1.0   1.0   1.0    1.0   1.0    1.0          1.0          2.0
.NET Framework             4.5   4.5   4.5.1  4.6   4.6.1  4.6.1/4.6.2  4.6.1/vNext  4.6.1
Mono                       4.6   4.6   4.6    4.6   4.6    4.6          4.6          5.4
Xamarin.iOS                10.0  10.0  10.0   10.0  10.0   10.0         10.0         10.14
Xamarin.Mac                3.0   3.0   3.0    3.0   3.0    3.0          3.0          3.8
Xamarin.Android            7.0   7.0   7.0    7.0   7.0    7.0          7.0          8.0
Universal Windows Platform 10.0  10.0  10.0   10.0  10.0   10.0.16299   10.0.16299   10.0.16299
Windows                    8.0   8.0   8.1
Windows Phone              8.1   8.1   8.1
Windows Phone Silverlight  8.0

Sample: converting a PCL or Shared project to .Net Standard 2.0

  1. Create a default PCL or Shared based Xamarin Forms application, name it appropriately and wait for the solution to load.
  2. Add a .Net Standard class library by selecting .Net Standard 2.0.
  3. Now remove the PCL or Shared based project (VERY important: only after moving all the required project files to the Netstandard20Test library) and compile.
  4. Now rename NetStandard20Test to NetStandardTest (the same name as the deleted library), making sure to rename the default namespace and assembly to NetStandardTest.
  5. Now build the project and see if the build succeeds.
  6. The build should fail with errors because of the deleted project; we now have to reference the newly created .Net Standard 2.0 library from both the Android and iOS projects.
  7. Edit the references on each platform project to add the newly created project.
  8. Once the references are applied correctly, you will still see some build errors.
  9. Now add the Xamarin.Forms NuGet package to all projects.
  10. Build the project again and you should not see any errors.
  11. Microsoft has also released a compatibility NuGet package that makes sure all existing packages are compatible with .Net Standard 2.0.
  12. Add the NuGet package Microsoft.NETCore.Portable.Compatibility to the .Net Standard 2.0 project.

Hope this blog is useful to you.

 

Disk Space Reporting through Lambda Functions - Linux servers

1. Solution Objective:

The solution provides a detailed report on hard disk space for all the Linux EC2 instances in the AWS environment.

2. Requirements:

 

Mentioned below are the requirements the solution should be able to fulfil.

  • Gather information related to all mount points in all the Linux EC2 instances in the environment.
  • Able to generate a cumulative report based on all instances in the environment.

3. Assumptions:

The following assumptions are considered

  • All the EC2 instances have SSM agent installed.
  • The personnel responsible for the configuration have some understanding of IAM Roles, S3 buckets and lambda functions

4. Solution Description:

The following components and AWS services will be utilized to generate the report:

  • Linux shell Scripts
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • Maintenance Windows

4.1 Linux Shell Script

A Linux shell script will be utilized to generate information about the instance and the space utilization of its mount points.

The script below needs to be executed on all Linux EC2 instances to generate the mount point information.

curl http://169.254.169.254/latest/meta-data/instance-id # Prints the Instance ID
printf "\n" # Adds line
df # provides details of the mount point

4.2 AWS S3

The result of the shell script will be posted to an S3 bucket for further use.

The EC2 instances will need write access to the nominated S3 bucket to store the output of the shell script.

S3 Bucket Name: eomreport (sample name)

4.3 AWS Lambda Functions

Lambda Functions will be used to perform the following activities.

  • Acquire the result of the Shell script from the S3 bucket
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda Functions would need read access to the S3 bucket and access to AWS SES to send emails to recipients.

The Lambda function below performs the tasks mentioned above.

import boto3
import codecs
import pprint
from datetime import datetime, date, time
def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('eomreport')
    resulthtml = ['<html><body><h1>Report : Hard disk Space</h1>']  # Start of the email body with a heading
    resulthtml.append('<table border="1">')  # Creates the results table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Available Space</b></td><td><b>Used Space</b></td><td><b>Use %</b></td><td><b>Mounted on</b></td></tr>')
    for file_key in mybucket.objects.all():
        complete_string = str(file_key)
        search = "stdout"
        check = complete_string.find(search)
        if check > 0 :
            body = file_key.get()['Body'].read().decode('utf-8')
            complete = body.splitlines()  # split the command output into lines
            id = complete[0]              # first line of the output is the instance ID
            details = complete[2:]        # skip the df header row; keep one line per mount point
            for line in details:
                output_word = line.split()
                # Build one HTML table row per mount point: available, used, use % and mount path
                resulthtml.append("<tr><td>'{}'</td><td>'{}'</td><td>'{}'</td><td>'{}'</td><td>'{}'</td></tr>".format(id, output_word[3], output_word[2], output_word[4], output_word[5]))
    resulthtml.append('</table></body></html>')
    final=str("".join(resulthtml))
    final=final.replace("'","")
    print(final)
    sender = "email@email.com"
    recipient = "email@email.com"
    awsregion = "us-east-1"
    subject = "Certificate Update list"
    charset = "UTF-8"
    mylist="mylist update"
    client = boto3.client('ses',region_name=awsregion)
    try:
        response = client.send_email(
           Destination={
               'ToAddresses': [
                   recipient,
                ],
            },
         Message={
                  'Body': {
                      'Html': {
                        'Charset': charset,
                        'Data': final,
                             },
                    'Text': {
                     'Charset': charset,
                     'Data': mylist,
                    },
                },
                'Subject': {
                    'Charset': charset,
                    'Data': subject,
                },
            },
            Source=sender,
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print( "Error: ", e)
    else:
       print("Email sent!")

 

4.4 AWS IAM Roles

Roles will be used to grant

  • AWS S3 write access to all the EC2 instances, as they will submit the output of the shell script to the S3 bucket
  • AWS SES access to Lambda Functions to send emails to relevant recipients.

4.5 AWS SES

Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email.( Ref. From https://aws.amazon.com/ses/).

We will be utilizing AWS SES to send the report emails from AWS Lambda.

The configuration of the Lambda function can be modified to send emails to a distribution group to provide disk space reporting, or it can be used to send emails to a ticketing system in order to provide alerting and ticket creation when disk utilization crosses a configured threshold.

5. Solution Configuration

5.1 Configure IAM Roles

The following Roles should be configured

  • IAM role for Lambda Function.
  • IAM role for EC2 instances for S3 bucket access

5.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above, the following policy should be created and attached to the IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501474857000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3BucketName/*"
            ]
        },
        {
            "Sid": "Stmt1501474895000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

5.1.2 Role for EC2 instance

All EC2 instances should have access to store the Shell output in the S3 bucket.

To accomplish the above, the following policy should be assigned to the EC2 role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501475224000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::eomreport"
            ]
        }
    ]
}

5.2 Configure Maintenance Window

The following tasks need to be performed for the maintenance window

  • Register a Run Command with Run-Shell Script using the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement

Maintenance Window Ref : 

http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html
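For reference, the same Run Command execution can also be kicked off programmatically. The boto3 sketch below sends the section 4.1 script via the AWS-RunShellScript document and writes the output to the eomreport bucket; the instance ID shown is only an example value, and in this solution the targets and schedule are actually defined by the Maintenance Window itself.

import boto3

ssm = boto3.client('ssm')

# The same commands as the shell script in section 4.1
commands = [
    "curl http://169.254.169.254/latest/meta-data/instance-id",
    "printf '\\n'",
    "df"
]

# Example instance ID; the Maintenance Window targets decide where this runs in practice
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': commands},
    OutputS3BucketName='eomreport'   # stdout lands in the bucket the Lambda function scans
)
print(response['Command']['CommandId'])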

5.3 Configure Lambda Function

The following tasks need to be performed for the Lambda Function

  • Create a blank Lambda function with the S3 put event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select run time Python 3.6
  • Copy and paste the lambda function mentioned in section 4.3

5.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.
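If you want to script the verification step as well, a small boto3 sketch like the following can request verification for an address and list which identities have completed it (the address below is just the placeholder used earlier in the Lambda function):

import boto3

ses = boto3.client('ses', region_name='us-east-1')

# Sends the verification email to the address used by the Lambda function
ses.verify_email_identity(EmailAddress='email@email.com')

# Lists the addresses that have completed verification
print(ses.list_verified_email_addresses()['VerifiedEmailAddresses'])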

6. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

InstanceID Available Space Used Space Use % Mounted on
i-sampleID1 123984208 1832604 0.02 /
i-sampleID1 7720980 0 0 /dev
i-sampleID1 7746288 0 0 /dev/shm
i-sampleID1 7721456 24832 0.01 /run
i-sampleID1 7746288 0 0 /sys/fs/cgroup
i-sampleID2 122220572 3596240 0.03 /
i-sampleID2 7720628 0 0 /dev
i-sampleID2 7746280 8 0.01 /dev/shm
i-sampleID2 7532872 213416 0.03 /run
i-sampleID2 7746288 0 0 /sys/fs/cgroup
i-sampleID2 81554964 16283404 0.17 /sit
i-sampleID2 83340832 14497536 0.15 /uat
i-sampleID2 1549260 0 0 /run/user/1000
i-sampleID3 123983664 1833148 0.02 /
i-sampleID3 7720980 0 0 /dev
i-sampleID3 7746288 0 0 /dev/shm
i-sampleID3 7721448 24840 0.01 /run
i-sampleID3 7746288 0 0 /sys/fs/cgroup

 

Azure AD Domain Services

I recently had what I thought was a rather unique requirement from a customer.

The requirement was to build Azure IaaS virtual machines and have them joined to a managed domain, while also being able to authenticate to the virtual machines using Azure AD credentials.

The answer is Azure AD Domain Services!

Azure AD Domain Services provides managed domain services such as domain join, group policy and Kerberos/NTLM authentication without the need for you to deploy and  manage domain controllers in the cloud. For more information see https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-overview

It is not without its limitations though; the main things to call out are that configuring domain trusts and applying schema extensions are not possible with Azure AD Domain Services. For a full list of limitations see: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-comparison

Unfortunately at this point in time you cannot use ARM templates to configure Azure AD Domain Services so you are limited to the Azure Portal or PowerShell. I am not going to bore you with the details of the deployment steps as it is quite simple and you can easily follow the steps supplied in the Microsoft documentation: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-enable-using-powershell

What I would like to do is point out the following learnings that I discovered during my deployment.

  1. In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
  2. If there are any cloud-only user accounts, all users who need to use Azure AD Domain Services must change their passwords after Azure AD Domain Services is provisioned. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
  3. Once a cloud-only user account has changed their password, you will need to wait for a minimum of 20 minutes before you will be able to use Azure AD Domain Services (this got me as I was impatient).
  4. Speaking of patience, the provisioning process for Azure AD Domain Services takes about an hour.
  5. Have a dedicated subnet for Azure AD Domain services to avoid any connectivity issues that may occur with NSGs/firewalls.
  6. You can only have one managed domain connected to your Azure Active Directory.

That’s it, hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.

Understanding Azure’s Container PaaS Capabilities

siliconvalve

If you’ve been using Azure over the past twelve months, you can’t but have the feeling that it’s become a bit like this…

Containers... Containers Everywhere

.. and you’d be right.

To be fair, though, Containers have been one of the hot topics in computing in general and certainly one that’s been getting the most interest in my recent Azure Open Source Roadshows.

One thing that has struck me though is that people are not clear on the purpose of all the services in Azure that have ‘Containers’ listed as a capability, so in this post I am going to try and review the Azure Platform-as-a-Service offerings that have Container capabilities and cover what the services can be used for.

First, before we begin, let’s quickly get some fundamentals under our belts.

What is a Container?

Containers provide encapsulation and isolation for workloads and remove the need for a complete Operating System image…

View original post 1,698 more words

AWS Re:Invent 2017 – what’s out so far

What a week it’s been for AWS customers. Just in the last 5 days we have already seen a huge number of product releases, including:

Amazon Sumerian: With Sumerian, you can construct an interactive 3D scene without any programming experience, test it in the browser, and publish it as a website that is immediately available to users. Product details can be found at https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-amazon-sumerian-preview/

Amazon MQ: Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Amazon MQ works with your existing applications and services without the need to manage, operate, or maintain your own messaging system. See Jeff’s blog post here: https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/

Amazon EC2 Bare Metal Instances: Amazon EC2 Bare Metal instances provide your applications with direct access to the processor and memory of the underlying server. These instances are ideal for workloads that require access to hardware feature sets (such as Intel VT-x), or for applications that need to run in non-virtualized environments for licensing or support requirements. For info on getting into the preview, visit https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-amazon-ec2-bare-metal-instances-preview/

AWS PrivateLink for Customers/Partners: Customers can now use AWS PrivateLink to access third party SaaS applications from their Virtual Private Cloud (VPC) without exposing their VPC to the public Internet. Customers can also use AWS PrivateLink to connect services across different accounts and VPCs within their own organizations, significantly simplifying their internal network architecture. See details at https://aws.amazon.com/about-aws/whats-new/2017/11/aws-privatelink-now-available-for-customer-and-partner-services/ and Kloud will be blogging about this much more in the coming weeks.

Amazon GuardDuty: Amazon GuardDuty is a threat detection service that gives you a more accurate and easy way to continuously monitor and protect your AWS accounts and workloads. With a few clicks in the AWS Management Console, GuardDuty begins analyzing AWS data across all your AWS accounts, combining it with threat intelligence feeds, anomaly detection, and machine learning for more actionable threat detection in an easy to use, pay as you go cloud security service. Again, Jeff’s done a great article: https://aws.amazon.com/blogs/aws/amazon-guardduty-continuous-security-monitoring-threat-detection/

Now that Andy Jassy’s keynote has just finished, we now have a bunch more:

AWS Fargate: containers as a service where a customer no longer needs to manage the underlying EC2 instances. See blog post https://aws.amazon.com/blogs/aws/aws-fargate/

Amazon Elastic Container Service for Kubernetes (EKS): the same level of integration we’ve come to expect from ECS, but running Kubernetes. For more details, see https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/

Amazon Aurora Serverless: designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use on a second-by-second basis. More details can be found at https://aws.amazon.com/blogs/aws/in-the-works-amazon-aurora-serverless/

Amazon Rekognition Video: a new video analysis service that brings scalable computer vision analysis to your S3-stored video, as well as live video streams. See Jeff’s blog here: https://aws.amazon.com/blogs/aws/launch-welcoming-amazon-rekognition-video-service/

Amazon Neptune: a fast and reliable graph database service that makes it easy to gain insights from relationships among your highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Jeff’s blog can be found at https://aws.amazon.com/blogs/aws/amazon-neptune-a-fully-managed-graph-database-service/

AWS DeepLens: a new video camera that runs deep learning models directly on the device, out in the field. You can use it to build cool apps while getting hands-on experience with AI, IoT, and serverless computing. AWS DeepLens combines leading-edge hardware and sophisticated on-board software, and lets you make use of AWS Greengrass, AWS Lambda, and other AWS AI and infrastructure services in your app. See here for the latest blog post https://aws.amazon.com/blogs/aws/deeplens/

This is by no means a complete list of everything released, but just a glimpse of what’s come out so far. Stay tuned to our blog for detailed deep dives into some of these services.