Dependency Injection in Vue.js App with TypeScript

Dependency management is one of the critical points when developing applications. In the back-end world, there are many IoC container libraries that we can make use of, like Autofac, Ninject, etc. Similarly, many modern front-end frameworks also provide DI features. However, those features work quite differently from how back-end libraries do. In this post, we’re going to use TypeScript and Vue.js for development and apply an IoC container library called InversifyJS, which offers a development experience very similar to back-end application development.

The code samples used in this post can be found here.

provide/inject Pair in VueJs

According to the official document, vue@2.2.0 supports a DI feature using the provide/inject pair. Here’s how DI works in VueJs. First of all, declare a dependency, MyDependency, in the parent component like:
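A minimal sketch of what that declaration might look like, assuming the standard Vue 2 object-based API and a placeholder MyDependency class (the names here are illustrative only):

```typescript
import Vue from "vue";

// A placeholder dependency used for illustration only.
class MyDependency {
  greet(): string {
    return "Hello from MyDependency";
  }
}

// The parent component declares the dependency so its descendants can consume it.
export default Vue.extend({
  provide: {
    myDependency: new MyDependency()
  }
});
```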

Then its child component consumes the dependency like:
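Again as a sketch, the consuming side with the object-based API might look like:

```typescript
import Vue from "vue";

// The child component asks for the dependency its ancestor provided.
export default Vue.extend({
  inject: ["myDependency"],
  mounted(): void {
    // The injected instance is available on `this` once the component is created.
    console.log((this as any).myDependency.greet());
  }
});
```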

Developers coming from the back-end world might have a question at this point: child components only consume dependencies that are declared by their parent components. In other words, in order for all components to consume all dependencies, this declaration MUST be done at the top-level component of the hierarchy. That’s the main difference between VueJs and back-end IoC containers. There’s another caveat – VueJs doesn’t provide a solution for the inter-dependency issue. This inter-dependency has to be solved by a third-party library. But that’s fine. We’re going to use TypeScript anyway, which has a solution for the inter-dependency issue.

DI in VueJs and TypeScript

Evan You, the creator of VueJs, recently left a comment about his design philosophy for the VueJs framework.

While using a class-based API by default may make it more “friendly” to devs used to classes, it also makes it more hostile to a large group of users who use Vue without build tools or transpilers. When you are advocating your preference, you might be missing some nuance we have to take into account as a framework.

This is why we offer the object-based API as the baseline and the class-based API as an opt-in. This allows us to cater to both groups of users.

Therefore, we need to decide between using the provide/inject pair and another approach, i.e. the service locator pattern. In order to use the provide/inject pair, as we found above, we need to put an IoC container instance at the top-level component. On the other hand, we can simply use the container as a service locator. Before applying either approach, let’s implement the IoC container.

Building IoC Container using InversifyJS

InversifyJS is an IoC container library for TypeScript, heavily influenced by Ninject, so the syntax of the two is very similar. The interface and class samples used here are merely adapted from both libraries’ conventions – yeah, the ninja stuff!

Defining Interfaces

Let’s define the Weapon and Warrior interfaces like below:
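A sketch of those interfaces, loosely following the InversifyJS ninja convention (the exact members are assumptions):

```typescript
export interface Weapon {
  hit(): string;
}

export interface Warrior {
  fight(): string;
}
```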

Defining Models

InversifyJS uses Symbol to resolve instances. Here is a sample that defines multiple symbols in one object, covering Warrior, Weapon and Container.
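A sketch of such an object, with the property names assumed from the description above:

```typescript
// One symbol per service we want the container to resolve.
const SERVICE_IDENTIFIER = {
  WARRIOR: Symbol("Warrior"),
  WEAPON: Symbol("Weapon"),
  CONTAINER: Symbol("Container")
};

export default SERVICE_IDENTIFIER;
```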

The @injectable decorator provided by InversifyJS defines classes that are bound into an IoC container.

The @inject decorator goes on constructor parameters. Make sure that those parameters use the Symbol objects defined earlier.
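Putting the two decorators together, a sketch of the model classes might look like this; the class names follow the ninja convention and the import paths are assumptions:

```typescript
import { injectable, inject } from "inversify";

import { Warrior, Weapon } from "./interfaces";   // assumed file names
import SERVICE_IDENTIFIER from "./constants";

@injectable()
export class Katana implements Weapon {
  public hit(): string {
    return "cut!";
  }
}

@injectable()
export class Ninja implements Warrior {
  private _weapon: Weapon;

  // The parameter is resolved using the same Symbol registered in the container.
  public constructor(@inject(SERVICE_IDENTIFIER.WEAPON) weapon: Weapon) {
    this._weapon = weapon;
  }

  public fight(): string {
    return this._weapon.hit();
  }
}
```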

Make sure we use the same Symbol object defined earlier. If we simply wrote Symbol("Weapon") here, it wouldn’t work, because each call to Symbol() creates a brand-new, unique value.

Implementing IoC Container

Let’s implement the IoC container using the interfaces and models above.
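A sketch of that container module, assuming the file names used in the earlier sketches:

```typescript
import "reflect-metadata";
import { Container } from "inversify";

import { Warrior, Weapon } from "./interfaces";   // assumed file names
import SERVICE_IDENTIFIER from "./constants";
import { Ninja, Katana } from "./models";

// Bind each symbol to its concrete implementation, much like a C# IoC container.
const container = new Container();
container.bind<Warrior>(SERVICE_IDENTIFIER.WARRIOR).to(Ninja);
container.bind<Weapon>(SERVICE_IDENTIFIER.WEAPON).to(Katana);

export default container;
```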

The last part of the code snippet above, container.bind(...).to(...), is very similar to how an IoC container works in C#. Now we’re ready to use this container.

Attaching Child Component

Unlike the previous posts, we’re adding a new child Vue component, Ninja.vue, to Hello.vue for dependency injection.

Hello.vue has got the Ninja.vue component as its child. Let’s have a look at the Ninja.vue component.

Now, let’s apply both service locator and provide/inject pair.

Applying Service Locator

We’re updating Ninja.vue to use the service locator:
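A sketch of what that service-locator version of Ninja.vue’s script might look like, assuming vue-class-component and the file paths from the earlier sketches:

```typescript
import Vue from "vue";
import Component from "vue-class-component";

import container from "../ioc/container";          // assumed paths
import SERVICE_IDENTIFIER from "../ioc/constants";
import { Warrior } from "../ioc/interfaces";

@Component
export default class NinjaComponent extends Vue {
  fightResult: string = "";

  mounted(): void {
    // The component asks the container directly - the service locator pattern.
    const ninja = container.get<Warrior>(SERVICE_IDENTIFIER.WARRIOR);
    this.fightResult = ninja.fight();
  }
}
```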

As we can see above, the IoC container instance, container, is directly consumed within the Ninja.vue component. When we run the application, the result looks like:

As some of us might be uncomfortable using the service locator pattern, let’s now apply the built-in provide/inject pair.

Applying provide/inject Pair

As we identified above, in order to consume all dependencies in all Vue components, we should declare the IoC container as a dependency at the top-level component, i.e. App.vue.
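A sketch of App.vue’s script providing the container under that symbol; the shape follows the Vue 2.2 provide/inject feature, and the import paths are assumptions:

```typescript
import Vue from "vue";
import Component from "vue-class-component";

import container from "./ioc/container";           // assumed paths
import SERVICE_IDENTIFIER from "./ioc/constants";

@Component({
  // Provide the container itself under the CONTAINER symbol so any descendant can inject it.
  provide() {
    return {
      [SERVICE_IDENTIFIER.CONTAINER]: container
    };
  }
})
export default class AppComponent extends Vue {
}
```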

We can see that the container instance is provided with the symbol, SERVICE_IDENTIFIER.CONTAINER defined earlier. Now let’s modify the Ninja.vue component:
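A sketch of the modified Ninja.vue, assuming the @Inject decorator comes from vue-property-decorator:

```typescript
import Vue from "vue";
import Component from "vue-class-component";
import { Inject } from "vue-property-decorator";
import { Container } from "inversify";

import SERVICE_IDENTIFIER from "../ioc/constants";  // assumed paths
import { Warrior } from "../ioc/interfaces";

@Component
export default class NinjaComponent extends Vue {
  // Injects the container instance provided by App.vue under the same symbol.
  @Inject(SERVICE_IDENTIFIER.CONTAINER) container: Container;

  fightResult: string = "";

  mounted(): void {
    const ninja = this.container.get<Warrior>(SERVICE_IDENTIFIER.WARRIOR);
    this.fightResult = ninja.fight();
  }
}
```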

The @Inject decorator takes care of injecting the container instance from the App.vue component. Make sure that the same symbol, SERVICE_IDENTIFIER.CONTAINER, is used. All good! Now we can see the same result as the picture above.

So far, we’ve had an overview of how to use DI in a VueJs & TypeScript app through two different approaches – the service locator pattern or the provide/inject pair. Which one to choose? It’s all up to you.

Dynamically rename an AWS Windows host deployed via a sysprepped AMI

One of my customers presented me with a unique problem last week. They needed to rename a Windows Server 2016 host deployed using a custom AMI without rebooting during the bootstrap process. This lack of a reboot rules out the simple option of using the PowerShell Rename-Computer Cmdlet. While there are a number of methods to do this, one option we came up with is dynamically updating the sysprep unattended answer file using a PowerShell script prior to the unattended install running during first boot of a sysprepped instance.

To begin, we looked at the Windows Server 2016 sysprep process on an EC2 instance to get an understanding of the process required. It’s important to note that this process is slightly different to Server 2012 on AWS, as EC2Launch has replaced EC2Config. The high level process we need to perform is the following:

  1. Deploy a Windows Server 2016 AMI
  2. Connect to the Windows instance and customise it as required
  3. Create a startup PowerShell script to set the hostname that will be invoked on boot before the unattended installation begins
  4. Run InitializeInstance.ps1 -Schedule to register a scheduled task that initializes the instance on next boot
  5. Modify SysprepInstance.ps1 to replace the cmdline registry key after sysprep is complete and prior to shutdown
  6. Run SysprepInstance.ps1 to sysprep the machine.

Steps 1 and 2 are fairly self-explanatory, so let’s start by taking a look at step 3.

After a host is sysprepped, the value for the cmdline registry key located in HKLM:\System\Setup is populated with the windeploy.exe path, indicating that it should be invoked after reboot to initiate the unattended installation. Our aim is to ensure that the answer file (unattend.xml) is modified to include the computer name we want the host to be named prior to windeploy.exe executing. To do this, I’ve created the following PowerShell script named startup.ps1 and placed it in C:\Scripts to ensure my hostname is based on the internal IP address of my instance.
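The original script isn’t reproduced here, but a simplified sketch of the idea might look like the following; the metadata URL, the unattend.xml location and the hostname format are assumptions, and it assumes a ComputerName element already exists in the specialize pass of the answer file.

```powershell
# startup.ps1 - simplified sketch; paths, the metadata URL and the hostname format are assumptions.
# Build a hostname from the instance's internal IPv4 address (e.g. 10.1.2.3 -> IP-10-1-2-3).
$ip = Invoke-RestMethod -Uri "http://169.254.169.254/latest/meta-data/local-ipv4"
$newName = "IP-" + ($ip -replace "\.", "-")

# Inject the computer name into the unattended answer file before windeploy.exe runs.
# Assumes a ComputerName element already exists in the specialize pass.
$unattendPath = "C:\ProgramData\Amazon\EC2-Windows\Launch\Sysprep\Unattend.xml"
[xml]$unattend = Get-Content -Path $unattendPath
$shellSetup = $unattend.unattend.settings |
    Where-Object { $_.pass -eq "specialize" } |
    Select-Object -ExpandProperty component |
    Where-Object { $_.name -eq "Microsoft-Windows-Shell-Setup" }
$shellSetup.ComputerName = $newName
$unattend.Save($unattendPath)

# Hand control back to Windows setup so the unattended installation continues as normal.
Set-ItemProperty -Path "HKLM:\System\Setup" -Name cmdline -Value "windeploy.exe"
Start-Process -FilePath "$env:windir\System32\oobe\windeploy.exe"
```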

Once this script is in place, we can move to step 4, where we schedule the InitializeInstance.ps1 script to run on first boot. This can be done by running InitializeInstance.ps1 -Schedule, located in C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts.

Step 5 requires a modification of the SysprepInstance.ps1 file to ensure the shutdown after Sysprep is delayed to allow modification of the cmdline registry entry mentioned in step 3. The aim here is to replace the windeploy.exe path with our startup script, then shutdown the host. The modifications to the PowerShell script to accommodate this are made after the “#Finally, perform Sysprep” comment.
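A sketch of the kind of change involved, heavily simplified; the sysprep arguments and the way SysprepInstance.ps1 actually invokes sysprep.exe are assumptions:

```powershell
# Added after the "#Finally, perform Sysprep" comment - a simplified sketch only.
# /quit stops sysprep from shutting the box down so the registry can still be changed afterwards.
Start-Process -FilePath "$env:windir\System32\Sysprep\Sysprep.exe" `
    -ArgumentList "/oobe /generalize /quit" -Wait

# Point the setup cmdline at our startup script so it runs on first boot, before windeploy.exe.
Set-ItemProperty -Path "HKLM:\System\Setup" -Name cmdline `
    -Value 'powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\startup.ps1'

# Now shut the instance down ready for imaging.
Stop-Computer -Force
```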

Finally, run the SysprepInstance.ps1 script.

Once the host changes to the stopped state, you can now create an AMI and use this as your image.

Accessing Geolocation on Mobile Devices from an ASP.NET Core Application in Vue.js and TypeScript

In the previous post, we used the HTML5 getUserMedia() API to access the camera on our mobile devices. In this post, we’re using geolocation data on our mobile devices.

The code samples used for this post can be found here.

navigator.geolocation API

Unlike the getUserMedia() API, the geolocation API enjoys a great level of compatibility across almost all browsers.

Therefore, with simple TypeScript code, we can easily use the geolocation data.

NOTE: In order to use the geolocation API, the device must be connected to the Internet. Also, each browser vendor uses its own mechanism to get geolocation data, which can cause different results even on the same device. This article gives us more details.

Prerequisites

  • ASP.NET Core App from the previous post
  • Computer or mobile devices that can access the Internet through Wi-Fi or a mobile network

NOTE 1: We use vue@2.2.2 and typescript@2.2.1 in this post. There are breaking changes on VueJs for TypeScript, so it’s always a good idea to check out the official guideline.

NOTE 2: Code samples used in this post were taken from the MDN documentation and altered to fit TypeScript.

Updating Hello.vue

In order to display latitude, longitude and altitude retrieved from the geolocation API, we need to update the Hello.vue file:
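A sketch of the sort of markup involved (the element names and bindings are assumptions):

```vue
<template>
  <div>
    <button @click="getLocation">Get Location</button>
    <p>Latitude: {{ latitude }}</p>
    <p>Longitude: {{ longitude }}</p>
    <p>Altitude: {{ altitude }}</p>
  </div>
</template>
```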

That’s pretty much self-descriptive – clicking or tapping the Get Location button will display the geolocation data. Let’s move onto the logic side.

Updating Hello.ts

The Get Location button is bound with the getLocation() event, which needs to be implemented like below:
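The original code isn’t reproduced here, but a sketch of such a component might look like the following, assuming vue-class-component and the TypeScript 2.2-era DOM typings (Position, PositionError, PositionOptions):

```typescript
import Vue from "vue";
import Component from "vue-class-component";

@Component
export default class HelloComponent extends Vue {
  // Properties bound to the template for display.
  latitude: number = 0;
  longitude: number = 0;
  altitude: number = 0;
  message: string = "";

  getLocation(): void {
    // Check whether the web browser supports the geolocation API.
    if (!navigator.geolocation) {
      this.message = "Geolocation is not supported by this browser.";
      return;
    }

    // Success callback: bind the co-ordinates from the position instance to the component.
    const success = (position: Position): any => {
      this.latitude = position.coords.latitude;
      this.longitude = position.coords.longitude;
      this.altitude = position.coords.altitude || 0;
      return null;
    };

    // Error callback.
    const error = (err: PositionError): any => {
      this.message = `ERROR(${err.code}): ${err.message}`;
      return null;
    };

    // Options for the geolocation API.
    const options: PositionOptions = {
      enableHighAccuracy: true,
      timeout: 5000,
      maximumAge: 0
    };

    navigator.geolocation.getCurrentPosition(success, error, options);
  }
}
```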

First of all, we need to declare properties for latitude, longitude and altitude, followed by the getLocation() method. Let’s dig into it.

  • First of all, we check the navigator.geolocation instance to see whether the web browser supports the geolocation API or not.
  • We call the getCurrentPosition() method to get the current position. This method takes two callback methods and an options instance as its parameters.
  • The success() callback receives the position instance containing the current position details and binds the co-ordinates to the browser.
  • The error() callback handles errors.
  • The options instance provides options for the geolocation API.

NOTE: According to the type definitions, each callback method has a return type, which is not actually necessary here. Therefore, we just return null.

The options instance used above is actually of the interface type PositionOptions, which needs to be implemented. Its implementation might look like below:
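A sketch of such an implementation (the values are illustrative only):

```typescript
// A class-based implementation of the PositionOptions interface.
class Options implements PositionOptions {
  enableHighAccuracy: boolean = true; // ask the device for the most accurate position it can provide
  timeout: number = 5000;             // give up after five seconds
  maximumAge: number = 0;             // never reuse a cached position
}
```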

We completed the TypeScript part. Let’s run the app!

Results

When we use a web browser on our dev machine, it first asks us for permission to use our location data:

Click Allow and we’ll see the result.

This time, let’s do it on a mobile browser. This is taken from Chrome for iPhone. It also asks us for permission to use geolocation data.

Once tapping the OK button, we can see the result.

So far, we’ve briefly looked at the geolocation API to populate the current location. That wasn’t that hard, was it?

If we have a more complex scenario, need more accurate location details, or need constant access to the location data even when we’re not using the app, then a native app might have to be considered. Here’s a good discussion regarding these concerns. But using the HTML5 geolocation API would be enough in the majority of cases.

Azure AD Connect – Upgrade Errors

Azure AD Connect is the latest release of the Azure AD sync service, previously known as DirSync. It comes with some new features which make it even more efficient and useful in a hybrid environment. Besides the many new features, the primary purpose of this application remains the same, i.e. to sync identities from your local (on-prem) AD to Azure AD.

Of late, I upgraded an AD sync service to AD Connect, and during the install process I ran into a few issues which I felt are not widely discussed or posted on the web, but are real-world scenarios which people can face during AD Connect install and configuration. Let’s discuss them below.

Installation Errors

The very first error I stumbled upon was a sync service install failure. The installation process started smoothly: the Visual C++ package was installed and the SQL database created without any issue, but during the synchronization service installation the process failed and the below screen message was displayed.

Issue:

Event viewer logs suggested that the installation process failed because the install package could not install the required DLL files. The primary reason suggested was that the install package was corrupt.

sync install error

Actions Taken:

Though I was not convinced, for the sake of ruling this out I downloaded a new AD Connect install package and reinstalled the application, but unfortunately it failed at the same point.

Next, I switched from my domain account to the service account which was being used to run the AD sync service on the current server. This account had higher privileges than mine, but unfortunately the result was the same.

Next, I started reviewing the application logs located at the following path.

At first look, I found access denied errors logged there. What was blocking the installation files? None other than the AV. I immediately contacted the security administrator and requested that AV scanning be temporarily stopped. The result was a smooth install on the next attempt.

I have shared below some of the related errors I found in the log files.

Configuration Errors:

One of the important configurations in AD Connect is the Azure AD account with global administrator permissions. If you are creating a new account for this purpose and you have not logged on with it to change the first-time password, then you may be faced with the below error.

badpassword

Nothing to panic about. All you need to do is log into the Azure portal using this account, change the password, and then add the credentials with the newly set password into the configuration console.

Another error related to the Azure AD sync account was encountered by one of my colleagues, Lucian, and he has beautifully narrated the whole scenario in one of his cool blogs here: Azure AD Connect: Connect Service error

Other Errors and Resolutions:

Before I conclude, I would like to share some more scenarios which you might face during install/configuration and post-install. My Kloudie fellows have done their best to explain them. Have a look, and happy AAD connecting.

Proxy Errors

Configuring Proxy for Azure AD Connect V1.1.105.0 and above

Sync Errors:

Azure AD Connect manual sync cycle with powershell, Start-ADSyncSyncCycle

AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded

Azure Active Directory Connect Export profile error: stopped-server-down

Azure MFA: Architecture Selection Case Study

I’ve been working with a customer on designing a new Azure Multi Factor Authentication (MFA) service, replacing an existing 2FA (Two Factor Authentication) service based on RSA Authenticator version 7.

Now, Azure MFA service solutions in the past few years have typically been architected in the detail, i.e. a ‘bottom up’ approach to design: what apps are we enforcing MFA on? What token are we going to use – phone, SMS, smart phone app? Is it a one-way message or a two-way message? etc.

Typically a customer knew quite quickly which MFA ‘architecture’ was required – ie. the ‘cloud’ version of Azure MFA was really only capable of securing Azure Active Directory authenticated applications. The ‘on prem’ (local data centre or private cloud) version using Azure MFA Server (the server software Microsoft acquired in the PhoneFactor acquisition) was the version that was used to secure ‘on-prem’ directory integrated applications.  There wasn’t really a need to look at the ‘top down’ architecture.

In aid of a ‘bottom up’ detailed approach, my colleague Lucian posted a very handy ‘cheat sheet’ last year comparing the various architectures and the features they support, which you can find here: https://blog.kloud.com.au/2016/06/03/azure-multi-factor-authentication-mfa-cheat-sheet

New Azure MFA ‘Cloud Service’ Features

In the last few months however, Microsoft have been bulking up the Azure MFA ‘cloud’ option with new integration support for on-premise AD FS (provided with Windows Server 2016) and now on-premise Radius applications (with the recent announcement of the ‘public preview’ of the NPS Extension last month).

(On a side note: what is also interesting, and which potentially reveals wider trends on token ‘popularity’ selection choices, is that the Azure ‘Cloud’ option still does not support OATH (ie. third party tokens) or two-way SMS options (ie. reply with a ‘Y’ to authenticate)).

These new features have therefore forced the consideration of the primarily ‘cloud service’ architecture for both Radius and AD FS ‘on prem’ apps.

 

“It’s all about the Apps”

Now, in my experience, many organizations share application architectures they like to secure with multi factor authentication options.  They broadly fit into the following categories:

1. Network Gateway Applications that use Radius or SDI authentication protocols, such as network VPN clients and application presentation virtualisation technologies such as Citrix and Remote App

2. SaaS Applications that choose to use local directory credentials (such as Active Directory) using Federation technologies such as AD FS (which support SAML or WS-Federation protocols), and

3. SaaS applications that use remote (or ‘cloud’) directory credentials for authentication such as Azure Active Directory.

Applications that are traditionally accessed via only the corporate network are being phased out for ones that exist either purely in the Cloud (SaaS) or exist in a hybrid ‘on-prem’ / ‘cloud’ architecture.

These newer application architectures allow access methods from untrusted networks (read: the Internet) and therefore these access points also apply to trusted (read: corporate workstations or ‘Standard Operating Environment’) and untrusted (read: BYOD or ‘nefarious’) devices.

In order to secure these newer points of access, 2FA or MFA solution architectures have had to adapt (or die).

What hasn’t changed, however, is that a customer reviewing their choice of 2FA or MFA vendors will always want to choose a low number of MFA vendors (read: one), and expects that MFA provider to support all of their applications. This keeps user training costs low and operational costs low. Many are also fed up dealing with ‘point solutions’, i.e. securing only one or two applications and requiring a different 2FA or MFA solution per application.

Customer Case Study

So in light of that background, this section now goes through the requirements in detail to really ‘flush’ out all the detail before making the right architectural decision.

 

Vendor Selection

This took place prior to my working with the customer; however, it was agreed that Azure MFA and Microsoft were the ‘right’ vendor to replace RSA, primarily based on:

  • EMS (Enterprise Mobility + Security) licensing was in place, therefore the customer could take advantage of Azure Premium licensing for their user base.  Azure Premium meant we would use the ‘Per User’ charge model for Azure MFA (and not the other choice of ‘Per Authentication’ charge model ie. being charged for each Azure MFA token delivered).
  • Tight integration with existing Microsoft services including Office 365, local Active Directory and AD FS authentication services.
  • Re-use of strong IT department skills in the use of Azure AD features.

 

 Step 1: App Requirements Gathering

The customer I’ve been working with has two ‘types’ of applications:

1. Network Gateway Applications – Cisco VPN using an ASA appliance and SDI protocol, and Citrix NetScaler using Radius protocol.

2. SaaS Applications using local Directory (AD) credentials via the use of AD FS (on Server 2008 currently migrating to Server 2012 R2) using both SAML & WS-Federation protocols.

They wanted a service that could replace their RSA service integrating with their VPN & Citrix services, but also ‘extend’ that solution to integrate with AD FS as well.   They currently don’t use 2FA or MFA with their AD FS authenticated applications (which include Office 365).

They did not want to extend 2FA services to Office 365 primarily as that would incur the use of static ‘app passwords’ for their Outlook 2010 desktop version.

 

Step 2:  User Service Migration Requirements

The move from RSA to Azure MFA was also going to involve the following changes to the way users used two factor services:

  1. Retire the use of ‘physical’ RSA tokens but preserve a similar smart phone ‘soft token’ delivery capability
  2. Support two ‘token’ options going forward:  ‘soft token’ ie. use of a smart phone application or SMS received tokens
  3. Modify some applications to use the local AD password instead of the RSA ‘PIN’ as a ‘what you know’ factor
  4. Avoid the IT Service Desk for ‘soft token’ registration.  RSA required the supply of a static number to the Service Desk who would then enable the service per that user.  Azure MFA uses a ‘rotating number’ for ‘soft token’ registrations (using the Microsoft Authenticator application).  This process can only be performed on the smart phone itself.

So mapping out these requirements, I then had to find the correct architecture that met their requirements (in light of the new ‘Cloud’ Azure MFA features):

 

Step 3: Choosing the right Azure MFA architecture

I therefore had a unique situation whereby I had to present an architectural selection – whether to use the Azure MFA on-premise Server solution, or Azure MFA cloud services.  Now, both services technically use the Azure MFA ‘Cloud’ to deliver the tokens, but for the sake of simplicity, it boils down to two choices:

  1. Keep the service “mostly” on premise (Solution #1), or
  2. Keep the service “mostly” ‘in the cloud’ (Solution #2)

The next section goes through the ‘on-premise’ and ‘cloud’ requirements of both options, including specific requirements that came out of a solution workshop.

 

Solution Option #1 – Keep it ‘On Prem’

New on-premise server hardware and services required:

  • One or two Azure MFA Servers on Windows Server integrating with local (or remote) NPS services, which performs Radius authentication for three customer applications
  • On-premise database storing user token selection preferences and mobile phone number storage requiring backup and restoration procedures
  • One or two Windows Server (IIS) hosted web servers hosting the Azure MFA User Portal and Mobile App web service
  • Use of the existing reverse proxy publishing capability to publish the user portal and mobile app web services to the Internet under a custom web site FQDN.  This published mobile app website is used for Microsoft Authenticator mobile app registrations and potential user self-selection of factor, e.g. choosing between SMS and the mobile app.

New Azure MFA Cloud services required:

  • Users using Azure MFA services must be in local Active Directory as well as Azure Active Directory
  • Azure MFA Premium license assigned to the user account stored in Azure Active Directory

Option1.png

Advantages:

  • If future requirements dictate that Office 365 services use MFA, then AD FS version 3 (Windows Server 2012) integrates directly with the on-premise MFA Server.  Only AD FS version 4 (Windows Server 2016) is capable of integrating directly with the cloud-based Azure MFA.
  • The ability to allow all MFA integrated authentications through in case Internet services (HTTPS) to the Azure cloud are unavailable.  This is configurable with the ‘Approve’ option for the Azure MFA Server setting “when Internet is not accessible”.

 

Disadvantages:

  • On-premise MFA Servers requiring uptime & maintenance (such as patching etc.)
  • Have to host the Azure MFA user portal website on-premise and publish it to the Internet using existing customer capability for user self-service (if required).  This includes on-premise IIS web servers to host the mobile app registration and user factor selection options (choosing between SMS and mobile app etc.)
  • Disaster Recovery planning and implementation to protect the local Azure MFA Server database for user token selections and mobile phone number storage (although mobile phone numbers can be retrieved from local Active Directory as an import, assuming they are present and accurate).
  • SSL certificates used to secure the on-premise Azure MFA self-service portal need to be already trusted by mobile devices such as Android and Apple. Android devices, for example, do not support installing custom certificates and require an SSL certificate from an already trusted vendor (such as Thawte).

 

Solution Option #2 – Go the ‘Cloud’!

New on-prem server hardware and services required:

  • One or two Windows Servers hosting local NPS services which performs Radius authentication for three customer applications.  These can be existing available Windows Servers not utilizing local NPS services for Radius authentication but hosting other software (assuming they also fit the requirements for security and network location)
  • New Windows Server 2016 server farm operating ADFS version 4, replacing the existing ADFS v3 farm.

New Azure MFA Cloud services required:

  • Users using Azure MFA services must be in local Active Directory as well as Azure Active Directory
  • User token selection preferences and mobile phone numbers stored in the Azure Active Directory cloud directory
  • Azure MFA Premium license assigned to the user account stored in Azure Active Directory
  • Use of the Azure hosted website ‘myapps.microsoft.com’ for Microsoft Authenticator mobile app registrations and potential user self-selection of factor, e.g. choosing between SMS and the mobile app.
  • Configuring Azure MFA policies to avoid enabling MFA for other Azure hosted services such as Office 365.

Option2.png

Advantages:

  • All MFA services are public cloud based with little maintenance required from the customer’s IT department apart from uptime for on-premise NPS servers and AD FS servers (which they’re currently already doing)
  • Potential to reuse existing Windows NPS server infrastructure (would have to review existing RSA Radius servers for compatibility with Azure MFA plug in, i.e. Windows Server versions, cutover plans)
  • Azure MFA user self-service portal (for users to register their own Microsoft soft token) is hosted in cloud, requiring no on-premise web servers, certificates or reverse proxy infrastructure.
  • No local disaster recovery planning and configuration required. NPS services are stateless apart from IP addressing configurations.   User information token selections and mobile phone numbers stored in Azure Active Directory with inherent recovery options.

 

Disadvantages:

  • Does not support AD FS version 3 (Windows Server 2012) for future MFA integration with AD FS SaaS enabled apps such as Office 365 or other third party applications (i.e. those that use AD FS so users can use local AD authentication credentials). These applications require AD FS version 4 (Windows Server 2016), which supports the Azure MFA extension (similar to the NPS extension for Radius)
  • The Radius NPS extension and the Windows AD FS 2016 Azure MFA integration do not currently support the ability to approve authentications should the Internet connection to the Azure cloud go offline, i.e. when the Azure MFA service cannot be reached across HTTPS; however, this may be because…
  • The Radius NPS extension is still in ‘public preview’.  Support from Microsoft at this time is limited if there are any issues with it.  It is expected that this NPS extension will go into general release shortly, however.

 

Conclusion and Architecture Selection

After the workshop, it was generally agreed that Option #2 fit the customer’s on-going IT strategic direction of “Cloud First”.

It was agreed that the key was replacing the existing RSA service integrating with Radius protocol applications in the short term, with AD FS integration viewed as very much ‘optional’ at this stage in light of Office 365 not viewed as requiring two factor services (at this stage).

This meant that AD FS services were not going to be upgraded to Windows Server 2016 to allow integration with Option #2 services (particularly in light of the current upgrade to Windows Server 2012 wanting to be completed first).

The decision was to take Option #2 into the detailed design stage, and I’m sure I’ll post future blog posts, particularly on any production ‘gotchas’ regarding the Radius NPS extension for Azure MFA.

During the workshop, the customer was still deciding whether to allow a user to select their own token ‘type’ but agreed that they wanted to limit it if they did to only three choices: one way SMS (code delivered via SMS), phone call (ie. push ‘pound to continue’) or the use of the Microsoft Authenticator app.   Since these features are available in both architectures (albeit with different UX), this wasn’t a factor in the architecture choice.

The limitation of Option #2 around the lack of automatic approval of authentications should the Internet service ‘go down’ was disappointing to the customer; however, at this stage it was going to be managed with an ‘outage process’ in case they lost their Internet service. The workaround of having a second NPS server without the Azure MFA extension was going to be considered as part of that process in the detailed design phase.

A workaround for the Microsoft Identity Manager limitation of not allowing simultaneous Management Agents running Synchronisation Profiles

Why ?

For those of you that may have missed it, in early 2016 Microsoft released a hotfix for Microsoft Identity Manager that included a change that removed the ability for multiple management agents on a Microsoft Identity Manager Synchronization Server to simultaneously run synchronization run profiles. I detailed the error you get in this blog post.

At the time it didn’t hurt me too much, as I didn’t require any of the other fixes that were incorporated into that hotfix (or the subsequent hotfix). That all changed with MIM 2016 SP1, which includes functionality that I do require. The trade-off, however, is that I am now coming up against the inability to perform simultaneous delta/full synchronization run profiles.

Having had this functionality in all the previous versions of the product that I’ve worked with over the last 15+ years, you don’t realise how much you need it until it’s gone. I understand Microsoft introduced this constraint to protect the integrity of the system, but my argument is that it should be up to the implementer. Where I use this functionality a LOT is with management agents processing different object types from different connected systems (e.g. users, groups, contacts, photos, other entity types). For speed of operation I want my management agents running synchronization run profiles simultaneously.

Overview of my workaround

First up, this is a workaround, not a solution to actually running multiple sync run profiles at once. I really hope Microsoft removes this limitation in the next hotfix. What I’m doing is trying to have my run sequences execute as quickly as possible.

What I’m doing is:

  • splitting my run profiles into individual steps
    • e.g. instead of doing a Delta Import and a Delta Sync in a single multi-step run profile, I’m doing the Delta Import as one run profile and the sync as a separate one. Same for Full Import and Full Synchronization
  • running my imports in parallel. Essentially the Delta or Full imports are all running simultaneously
  • utilising the Lithnet MIIS Automation Powershell Module for the Start-ManagementAgent and Wait-ManagementAgent cmdlets to run the synchronisation run profiles

Example Script

This example script incorporates the three principles from above. The environment I built this example in has multiple Active Directory forests. The example queries the Sync Server for all the Active Directory MA’s, then performs the Imports and Syncs. This shows the concept, but for your environment, potentially without multiple Active Directories, you will probably want to change the list of MA’s you use as input to the execution script.

Prerequisites

You’ll need the Lithnet MIIS Automation PowerShell Module (mentioned above) installed on your Microsoft Identity Manager Synchronisation Server.

The following script takes each of the Active Directory Management Agents I have (via a dynamic query in Line 9) and performs a simultaneous Delta Import on them. If your Run Profiles for Delta Imports aren’t named DI then update Line 22 accordingly.

-RunProfileName DI

With the imports processed, we want to kick off the synchronisation run profiles. Ideally, though, we want to kick these off as soon as the first import has finished. The logic is:

  • We loop through each of the MA’s looking for the first MA that has finished its Import (lines 24-37)
  • We process the Sync on the first MA that completed its import and wait for it to complete (lines 40-44)
  • Enumerate through the remaining MA’s, verifying they’ve finished their Import and run the Sync Run Profile (lines 48-62)

If your Run Profiles for Delta Syncs aren’t named DS, then update Lines 40 and 53 accordingly.

-RunProfileName DS
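As a simplified illustration of the overall approach only – not the script the line numbers above refer to – the flow might look like this; the module name and the cmdlet parameters other than Start-ManagementAgent, Wait-ManagementAgent and -RunProfileName (which are referenced above) are assumptions:

```powershell
# Simplified illustration only - parameter names such as -MA and -AsJob are assumptions.
Import-Module LithnetMiisAutomation

# Get the Active Directory management agents (the real script builds this list dynamically).
$adMAs = Get-ManagementAgent | Where-Object { $_.Category -eq "AD" }

# Kick off the Delta Imports on every MA at the same time.
foreach ($ma in $adMAs) {
    Start-ManagementAgent -MA $ma -RunProfileName DI -AsJob
}

# As each MA finishes its import, run its Delta Sync; the syncs themselves run one
# after the other, which is what the current MIM constraint forces anyway.
foreach ($ma in $adMAs) {
    Wait-ManagementAgent -MA $ma
    Start-ManagementAgent -MA $ma -RunProfileName DS
}
```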

Summary

While we don’t have our old luxury of being able to choose for ourselves when we want to execute synchronisation run profiles simultaneously (please allow us to, Microsoft), we can come up with band-aid workarounds to still get our synchronisation run profiles to execute as quickly as possible.

Let me know what solutions you’ve come up with to workaround this constraint.

Secure your VSTS Release Management Azure VM deployments with NSGs and PowerShell

siliconvalve

One of the neat features of VSTS’ Release Management capability is the ability to deploy to Virtual Machines hosted in Azure (amongst other environments), which I previously walked through setting up.

One thing that you need to configure when you use this deployment approach is an open TCP port to the Virtual Machines to allow remote access to PowerShell and WinRM on the target machines from VSTS.

In Azure this means we need to define a Network Security Group (NSG) inbound rule to allow the traffic (sample shown below). As we are unable to limit the source address (i.e. where VSTS Release Management will call from) we are stuck creating a rule with a Source of “Any” which is less than ideal, even with the connection being TLS-secured. This would probably give security teams a few palpitations when they look at it too!
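A sketch of what such a rule might look like in Azure PowerShell of that era; the resource names and port 5986 (WinRM over HTTPS) are assumptions:

```powershell
# Sketch only - resource and rule names are assumptions; 5986 is WinRM over HTTPS.
$nsg = Get-AzureRmNetworkSecurityGroup -Name "my-nsg" -ResourceGroupName "my-rg"

Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "Allow-VSTS-WinRM" -Description "Allow WinRM from VSTS Release Management" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 5986

Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg
```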

Network Security Group

We might be able to determine a…

Accessing the Camera on Mobile Devices from an ASP.NET Core Application in Vue.js and TypeScript

In the previous post, we built an ASP.NET Core application using Vue.js and TypeScript. As a working example, we’re building a mobile web application. Many modern web browsers supporting HTML5 can access multimedia devices on users’ computers, smartphones or tablets, such as the camera and microphone. The Navigator.getUserMedia() API enables us to access those resources. In this post, we’re actually going to implement camera access on our computers and mobile devices by writing code in VueJs and TypeScript.

The code samples used for this post can be found here.

getUserMedia() API

Most modern web browsers support the getUserMedia() API, as long as they support HTML5. There are two different APIs around this method – one is Navigator.getUserMedia(), which supports callback functions, while the other, MediaDevices.getUserMedia(), which came along later, supports Promise so that we can avoid callback hell. However, not all browsers support MediaDevices.getUserMedia(), so we need to support both anyway. For more details around getUserMedia(), we can find some practical samples in this MDN document.

Prerequisites

  • ASP.NET Core application from the previous post
  • Computer, tablet or smartphone with a camera

NOTE 1: This post uses VueJs 2.2.1 and TypeScript 2.2.1. VueJs 2.2.1 introduced some breaking changes in how it interacts with TypeScript. Please have a look at the official guide document.

NOTE 2: vue-webcam written by @smronju was referenced for camera access, and modified to fit in the TypeScript format.

Update Hello.vue

We need a placeholder for camera access and video streaming. Add the following HTML code into the template section of Hello.vue.
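A sketch of the kind of markup described by the bullet points below (the attribute bindings are assumptions):

```vue
<template>
  <div>
    <!-- ref="video" lets the component reach this element through this.$refs.video -->
    <video ref="video" :src="src" :width="width" :height="height" autoplay></video>
    <button @click="takePhoto">Take Photo</button>
    <!-- the captured frame is rendered here via the photo data field -->
    <img :src="photo" :width="width" :height="height" />
  </div>
</template>
```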

  • video accepts the camera input. src, width, height and autoplay are bound to the component in Hello.ts. Additionally, we add the ref attribute so the component can recognise the video tag.
  • img is where the camera input is rendered. The photo field is used for data binding.
  • button raises the mouse click or finger tap event by invoking the takePhoto function.

The HTML bits are done. Let’s move on to the TypeScript part.

Update Hello.ts

The existing Hello.ts was simple, but this time it has grown to handle the camera API. Here are the bits:
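The full component isn’t reproduced here, but a simplified sketch covering the pieces described below might look like this; it uses only the Promise-based MediaDevices API, with the callback-based fallback mentioned above omitted for brevity:

```typescript
import Vue from "vue";
import Component from "vue-class-component";

@Component
export default class HelloComponent extends Vue {
  // Data fields bound to the template; the defaults mean little extra initialisation is needed.
  src: string = "";
  photo: string = "";
  width: number = 640;
  height: number = 480;

  // Draws the current video frame onto an off-screen canvas and binds the result to the img tag.
  takePhoto(): void {
    const video = (this.$refs as any).video as HTMLVideoElement;
    const canvas = document.createElement("canvas");
    canvas.width = this.width;
    canvas.height = this.height;
    const context = canvas.getContext("2d");
    if (context) {
      context.drawImage(video, 0, 0, this.width, this.height);
      this.photo = canvas.toDataURL("image/png");
    }
  }

  // Invoked when the component is mounted to its parent; binds the camera stream to the video tag.
  mounted(): void {
    const video = (this.$refs as any).video as HTMLVideoElement;
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
      navigator.mediaDevices.getUserMedia({ video: true })
        .then((stream: MediaStream) => { (video as any).srcObject = stream; })
        .catch((err: Error) => console.error(err));
    }
  }
}
```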

We can see many extra data fields for two-way data binding between user input and the application. Some of them come with default values so that we don’t have to worry about their initialisation too much.

  • The takePhoto() function creates a virtual DOM element for a canvas, converts the input signal from the video into an image, and sends it to the img tag to display the snapshot.
  • The mounted() event function is invoked when this component, Hello.ts, is mounted to its parent. It uses the getUserMedia() API to bind the streaming source to the video tag.
  • The video tag accessed through this.$refs.video is the HTML element that has the ref attribute in Hello.vue. Without the ref attribute, VueJs cannot know how to access the tag.

NOTE: The original type of the this.$refs instance is { [key: string]: Vue | Element | Vue[] | Element[] }, but we cast it to any. This is to avoid a build failure due to the linting error caused by using the original type and accessing the resource like this.$refs.video. If we don’t want to cast it to any, we can use this.$refs["video"] instead.

We’ve so far completed the coding part. Now, let’s build it, run a local IIS Express, and access the web app through http://localhost:port. It works fine.

This time, instead of localhost, use the IP address. If we want to remotely access our local dev website, this post will help.

It says we can’t use the camera because of insecure access. In order to use the getUserMedia() API, we should use an HTTPS connection to prevent private data exposure. This only happens when we’re using Google Chrome, not Firefox or Edge. So, just change the connection to HTTPS.

Now we can use IP address for camera access. Once we allow it we can immediately see our face directly on the web like below (yeah, it’s me! lol).

Let’s try this from our mobile devices. The first one is taken from an Android phone, followed by one taken from a Windows Phone, then the ones from an iPhone. Thanks Boris for helping take those pictures!

Errr… what happened on iPhone? The camera is not accessible from both Safari for iOS and Chrome for iOS!!

This is because not all mobile web browsers support the getUserMedia() API.

getUserMedia Browser Compatibility

Here’s the data sheet from http://mobilehtml5.org/.

Unfortunately, we can’t use the getUserMedia API on iOS for now. For iOS users, we have to provide an alternative method for their user experience. There’s another API called HTML Media Capture that is supported by all mobile web browsers. It uses the traditional input type="file" tag. With this, we can access the camera on our mobile devices.
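A minimal example of that tag (the accept and capture attributes shown here are the usual hints, not a prescription):

```html
<input type="file" accept="image/*" capture>
```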

In the next post, we’re going to figure out how to provide a fallback option, if getUserMedia() API is not available.

It’s time to get your head out of the clouds!

head-in-the-clouds1

For those of you who know me, you are probably thinking: “Why on earth would we be wanting to get our heads out of the ‘cloud’ when all you’ve been telling me for years now is the need to adopt cloud!”

This is true for the most part, but my point here is that many businesses are being flooded by service providers in every direction to adopt or subscribe to their “cloud” based offerings; furthermore, ICT budgets are being squeezed, forcing organisations into SaaS applications. The question that is often forgotten is the how. What do you need to access any of these services? And the answer is: an identity!

It is for this reason I make the title statement. For many organisations, preparing to utilise a cloud offering can require months if not years of planning and implementing organisational changes to better enable the ICT teams to offer such services to the business. Historically, “enterprises” use local datacentres and infrastructure to provide all business services, where everything is centrally managed and controlled locally. Controlling access to applications is as easy as adding a user to a group, for example. But to utilise any cloud service you need to take a step back, get your head back down to ground zero, and look at whether your environment is suitably ready. Do you have an automated, simple method of providing seamless access to these services? Do you need to invest in getting your business “Cloud Ready”?

This is where identity comes to the front and centre, and it’s really the first time in a long time that identity management has become a centrepiece of the way a business can potentially operate. This identity provides your business with the “how” factor in a more secure model.

So, if you have an existing identity management platform, great – but that’s not to say it’s ready to consume any of these offerings; it just means you have the potential means. For those who don’t have something in place, then welcome! Now I’m not going to tell you which products you should or shouldn’t use; this is something that is dictated by your requirements. There is no “one size fits all” platform out there; in fact some may need a multitude of applications to deliver on the requirements that your business has. But one thing every business needs is a strategy! A roadmap on how to get to that elusive finish line!

A strategy is not just about the technical changes that need to be made but also about the organisational changes, the processes, and the policies that will inevitably change as a consequence of adopting a cloud model. One example of this could be around the service management model for any consumption-based services: the way you manage the services provided by a consumption-based model is not the same as a typical managed service model.

And these strategies start from the ground up; they work with the business to determine their requirements and their use cases. Those two single aspects can either make or break an IAM service. If you do not have a thorough and complete understanding of the requirements that your business needs, then whatever solution is built will never succeed.

So now you may be asking: what next? What do we as an organisation need to do to get ourselves ready for adopting cloud? Well, all in all, there are basically five principles you need to undertake to set the foundation of being “Cloud Ready”:

  1. Understand your requirements
  2. Know your source of truth/s (there could be more than one)
  3. Determine your central repository (Metaverse/Vault)
  4. Know your targets
  5. Know your universal authentication model

Now I’m not going to cover all of these in this one post, otherwise you will be reading a novel; I will touch on each of them separately. I would point out that you have probably already read about many of these – they are common discussion points online, and the top two topics are discussed extensively by my peers. The point of difference I will be making will be around the cloud: determining the best approach to these discussions so that when you do have your head in the clouds, you’re not going to get any unexpected surprises.

Remote Access to Local ASP.NET Core Applications from Mobile Devices

One of the most popular tools for ASP.NET or ASP.NET Core application development is IIS Express. We can’t deny it. Unless we have specific requirements, IIS Express is the de-facto web server for debugging on developers’ local machines. With IIS Express, we can easily access our local web applications with no problem during debugging.

There are, however, always cases where we need to access our locally running website from other devices, like mobile browsers. As we can see in the picture above, localhost is the loopback address, so we can’t use it outside our dev box. Simply replacing the loopback address with a physical IP address doesn’t work either. We need to adjust our dev box to allow this traffic. In this post, we’re going to solve this issue by looking at two different approaches.

At the time of writing this post, we’re using Visual Studio (VS) 2015, as VS 2017 will be launched on March 7, 2017.

Network Sharing Options and Windows Firewall

Note that all screenshots for this section are taken from Windows 10. Open the currently connected network (either wireless or wired).

Make sure that the “Make this PC discoverable” option is turned on.

This option enables our network in “Private” mode on Windows Firewall:

WARNING!!!: If our PC is currently connected to a public network, for better security we need to turn off the private network settings; otherwise our PC will be vulnerable to malicious attacks.

Update Windows Firewall Settings

In this example, the locally running web app uses port number 7314. Therefore, we need to register a new inbound firewall rule to allow access through that port. Open “Windows Firewall with Advanced Security” through Control Panel and create a new rule with the options below (a PowerShell equivalent is shown after the list):

  • Rule Type: Port
  • Protocol: TCP
  • Port Number: 7314
  • Action: Allow the Connection
  • Profile: Private (Domain can also be selected if our PC is bound with domain controllers)
  • Name: Any self-descriptive name, e.g. IIS Express Port Opener
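For reference, a PowerShell equivalent of the same rule might look like this (the rule name is just an example):

```powershell
# PowerShell equivalent of the inbound rule above (New-NetFirewallRule ships with Windows 8 / Server 2012 and later).
New-NetFirewallRule -DisplayName "IIS Express Port Opener" `
    -Direction Inbound -Protocol TCP -LocalPort 7314 `
    -Action Allow -Profile Private
```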

Now, all traffic through this port number is allowed. So far, we’ve completed the basic environment settings, including the firewall. Let’s move on to the first option, using IIS Express itself.

1. Updating IIS Express Configurations Directly

When we install VS, IIS Express is also installed at the same time. It has a default configuration file, but each solution that VS 2015 creates has its own settings overriding the default one, stored in the solution’s .vs folder like:

Open applicationhost.config for update.

Add another binding with my local IP address like:
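A sketch of what that binding might look like inside the site’s bindings element, using the port and IP address from this example:

```xml
<!-- Inside the <site> element for our project in .vs\config\applicationhost.config -->
<bindings>
  <binding protocol="http" bindingInformation="*:7314:localhost" />
  <binding protocol="http" bindingInformation="*:7314:192.168.1.3" />
</bindings>
```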

We can easily find our local IP address by running the ipconfig command. We’re using 192.168.1.3 for now.

IIS Express has now been set up. Let’s try our mobile web browser to access the local dev website by IP address.

All good! It seems to be working now. However, if we have more web applications running in our dev environment, every time we create a new web application project we have to register the port number allocated by IIS Express with Windows Firewall. No good. Too repetitive. Is there a more convenient way? Of course there is.

2. Conveyor – Visual Studio Extension

Conveyor can sort out this hassle. At the time of this writing, its version is 1.3.2. After installing this extension, run the debugging mode by pressing the F5 key again and we will be able to see a new window like:

The Remote URL is what we’re going to use. In general, the IP address would look like 192.168.xxx.xxx if we’re in a small network (home, for example), or a different type of IP address if we’re in a corporate network. This is the IP address that the mobile devices use. Another important point is that Conveyor uses port numbers starting from 45455. Whatever port number IIS Express assigns to the web application project, Conveyor forwards it to 45455. If 45455 is taken, it increments by one until it finds a free port. Due to this behaviour, we can easily predict the port number range, instead of relying on the random nature of IIS Express. Therefore, we can register a port number range starting from 45455 up to whatever we want, 45500 for example.

Now, we can access our local dev website using a port from this range like:

If we’re developing a web application using an HTTPS connection, that wouldn’t be an issue either. If no self-signed certificate is installed on our local dev machine, Conveyor will install one, and that’s it. Visiting the website again through the HTTPS connection will display the initial warning message and finally load the page.

We’ve so far discussed how to remotely access our local dev website using either IIS Express configuration or Conveyor. Conveyor gets rid of the repetitive firewall registration, so it’s worth installing for our web app development.