Branding the Consultant: what can I do versus where can I add the most value?

Engaging new and potentially challenging clients can always be daunting, particularly when an expectation has been set as to the role you will play as part of a team, whether that is consulting and road-mapping potential outcomes and future work, or delivering a full project. In my time working with Kloud, the term ‘consultant’ seems to carry more weight in the professional marketplace than ever before. I view today’s business consultant as someone who guides an individual stakeholder or group, based on engaging with and understanding their circumstances or a proposed business case, to help them decide on a specific direction – one which is considered the most appropriate or in their best interest.

The client, as we know them, usually comes with high expectations (and rightly so, as they are paying good money for a quality service), but that doesn’t necessarily mean they know what they want right now. Solutions and recommended approaches aren’t always black and white, and neither are the decisions on whether to pursue them – for example, providing a solution for the present versus future-proofing based on projections of desirable future business outcomes.

So, before pulling the panic cord and calling for the chopper to extract you from a project where delivery may be stalling or responsibilities and requirements are changing from one day to the next, perhaps narrow the window and focus on the objectives immediately in front of you. When targeting these immediate objectives, look at which ones align with your strengths – not only your skills but also your unique personality. Ask yourself: ‘Which aspects of my skills, and of my ability to provide the best services and outcomes for my clients, allow them, as well as my peers, to identify my own unique brand?’

Business Case Example

A medium-sized business had already taken steps to progress out of its technological infancy by purchasing the necessary software and scoping infrastructure requirements for where it intended its future state to be. However, with a limited business case from key stakeholders, limited time and resources available for business discovery exercises, and limited ability for IT to meet the support demands of the business, a series of challenges looked set to stifle the enablement of these software and infrastructure solutions. From a technical standpoint, the IT staff were highly capable, but they required analytical and consulting capability to aid the transition.

However, the CIO remained adamant about the direction he wanted the company to head in. By engaging external consultants and allowing an almost agile-like engagement with the various business departments, he was able to focus on the larger picture of developing internal IT as a technological innovation body as well as a support function, while having the staff engage collaboratively in the transition to the new technological state. The consultants had to act flexibly because of the lack of time and business case development, which in turn meant they were able to challenge themselves, draw on past experience, apply skills outside of their portfolio and self-educate on the solution(s) to deliver the desired outcomes for the project. The end result was a more positive attitude towards, and a broadened understanding of, IT, as well as the community-building effects these modern technologies could have on the business.
While this specific situation might be nothing new to some consultants out there, the value in strengthening your brand through outcomes that flow into positive effects for a client’s company is something that is not always achieved, but should be a key focus, particularly when offering guidance and advising on a solution.

Personality Attributes of a Technical Consultant (Ref. ILoveOracle.com)

Applying the most valuable aspects of your skill set to the appropriate situation has the potential to not only yield bigger wins for your project, but also show the client a level of capability and control that allows for an investment of trust, something that in the long-term, is an invaluable intangible between consultant, client, the team (Kloud) and your (Kloudie) unique brand of consulting.

Be sure that, in empowering and promoting your own brand, you don’t forget about the team. Part of your strength and value can be your resourcefulness, not just your individual skill. In a technological landscape that endorses constant collaboration, there’s no harm in not knowing what you don’t know – just be willing to ask and learn. Know that giving time to self-evaluation and development translates to value added in the long term, at least from my experience.

SharePoint 2013 Cumulative Update Patching Overview

A couple of months ago, I had the opportunity to help a client patch their on-premises SharePoint 2013 farm, which had last been patched in 2014. It was a challenging but interesting experience. We decided to use the N-1 patch, which was the March 2018 CU at the time. It was a five-week engagement in which we had to break down the steps and processes, from backup preparation, rollback strategy and patching strategy through to post-patching testing. There were lots of inputs and discussions among the DBAs, system engineers, IT manager and SharePoint engineers. All in all, it was a worthwhile experience as we all learnt a lot from it. Here is a summary of what we did.

First of all, we need to lay out the path from the early stages through to implementation and completion. The high-level steps are as follows:

Inventory check on SharePoint sites

It is very important to do an inventory check of the existing sites. Build a breakdown list of the major sites and their owners, find out which sites are most used, how much storage is being used, and how many custom farm solutions there are.
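As a rough sketch of the kind of inventory you can pull with PowerShell from the SharePoint 2013 Management Shell (the output path is just an example – adjust to your farm):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Every site collection with its owner and storage usage (MB)
Get-SPSite -Limit All |
    Select-Object Url, Owner, @{Name="StorageMB";Expression={[math]::Round($_.Usage.Storage/1MB,2)}} |
    Export-Csv -Path "C:\Temp\SiteInventory.csv" -NoTypeInformation

# Custom farm solutions deployed to the farm
Get-SPSolution | Select-Object Name, Deployed, DeploymentState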

Test all aspects of SharePoint

Application level and backend (Central Admin) level – we need to be confident in the existing functionality in SharePoint, whether out of the box or custom. Testing needs to be done before anything else, to ensure that what you test after patching still works the same way it did before patching.

I would also recommend backing up the existing Search service application, just in case it gets corrupted after patching. Remember, anything can happen during patching, regardless of whether it worked on the DEV or TEST farms, as each farm can have different configurations and settings.

I would also suggest taking lots of screenshots in Central Admin so that you have a record of the existing settings and configurations to compare against at a later stage.

Backup content databases

Liaise with the DBAs to perform backups of the content databases. This will likely take overnight, depending on the size of your farm and the tool being used. Once it has completed successfully, liaise with the system engineers to take snapshots of all VMs. This should take less time than the content database backups; however, all VMs need to be shut down first.

The important thing to note is that we are trying to capture the current state of the farm, so any mismatch between the VM snapshots and the database backups could result in a corrupted farm.
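If the DBAs use native SQL backups, the idea looks roughly like the sketch below, which uses the SqlServer module’s Backup-SqlDatabase cmdlet with hypothetical server, database and path names; in practice the DBAs will likely use their own tooling and schedules.

# Hypothetical example: back up each SharePoint content database on the SQL instance
Import-Module SqlServer
$contentDbs = @("WSS_Content_Intranet", "WSS_Content_Teams")   # replace with your content databases
foreach ($db in $contentDbs) {
    Backup-SqlDatabase -ServerInstance "SQL01\SHAREPOINT" -Database $db `
        -BackupFile "E:\Backups\$db.bak" -CompressionOption On
}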

Stopping all SharePoint services

It is important to stop these critical SharePoint services: IISAdmin, SPTimerV4 and Search. There are PowerShell scripts that can do this, but you could also stop them in each server’s Services console.
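A minimal sketch of stopping (and later restarting) the services with PowerShell – the search-related service names shown (OSearch15, SPSearchHostController) are the usual SharePoint 2013 ones, so confirm them against your own servers:

# Stop the key SharePoint services before patching (run on each server)
$services = @("IISADMIN", "SPTimerV4", "SPSearchHostController", "OSearch15")
foreach ($service in $services) {
    Write-Host "Stopping $service..."
    Stop-Service -Name $service -ErrorAction SilentlyContinue
}

# After patching, start them again in reverse order
[array]::Reverse($services)
foreach ($service in $services) {
    Write-Host "Starting $service..."
    Start-Service -Name $service -ErrorAction SilentlyContinue
}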

Applying CU patches

Always login using the farm account on each server.

The order in which the SharePoint patch is applied is critical. The order we chose was as follows:

  • Front end servers
  • Backend servers
  • Search servers

You can run the patch on all servers in parallel, as the installations don’t interfere with each other. The patch only updates the local file system; it doesn’t update the databases just yet. Each patch should take no longer than half an hour to complete. Upon completion, restart the server.

Run PSConfig.exe

This must be run in order for the patching process to complete, or else the server status page in Central Admin will not show the updates as applied! I recommend using the command-line version rather than the GUI version, as we found the GUI version likes to hide error messages and just shows a completion message.

The command we used was this:

PSConfig.exe -cmd upgrade -inplace b2b -wait -cmd applicationcontent -install -cmd installfeatures -cmd secureresources -cmd services -install

The order of running this is also critical. We ran it on the App servers first, then the front end servers and lastly the search servers. Always check the results; any error or issue will most likely be reported here. You need to fix all issues and re-run the command until a success message is shown.

If the patch has been applied successfully, the Central Admin -> Upgrade and Migration -> Check product and patch installation status page will immediately reflect the update. Also check that the database configuration version reflects the new version number.
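As a quick sketch, the farm build and content database upgrade status can also be checked from PowerShell:

# Farm build version should reflect the March 2018 CU once PSConfig has completed
(Get-SPFarm).BuildVersion

# Content databases should no longer be flagged as needing an upgrade
Get-SPContentDatabase | Select-Object Name, NeedsUpgrade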

Note, one issue we had to fix was the removal of orphaned features found in various sites. A number of these orphaned features were reported, and we used a PowerShell script to remove them one by one (see the sketch below).
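A rough sketch of the approach – looping through features whose definition no longer exists and force-removing them; the URL is hypothetical, and you should test this in a non-production farm first:

# Remove features that have no backing definition (orphans) from a site collection
$site = Get-SPSite "https://intranet.contoso.com"

# Site collection scoped features
$orphans = $site.Features | Where-Object { $_.Definition -eq $null }
foreach ($feature in $orphans) { $site.Features.Remove($feature.DefinitionId, $true) }   # $true forces removal

# Web scoped features in each web
foreach ($web in $site.AllWebs) {
    $orphans = $web.Features | Where-Object { $_.Definition -eq $null }
    foreach ($feature in $orphans) { $web.Features.Remove($feature.DefinitionId, $true) }
    $web.Dispose()
}
$site.Dispose()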

Post-patching testing

Once all patches have been applied, ensure all servers are back up and running. Check that all the major services are running (i.e. Central Admin, IIS, Search, etc.). This is when the screenshots from your pre-patching checks come in handy, if you can’t remember which services should be running on which server. You can also scan through the ULS logs to see if any critical issues have been logged.

Then we started the post-patching testing. This should use the same test plans as the pre-patching testing, and the results should match.


Why is the Azure Load Balancer NOT working?

Context

For most of the workloads that I’ve deployed in Azure that have required load balancing, the Azure Load Balancer (ALB) was used with its out of the box, default configuration. The load balancer service is great like that: for the majority of scenarios it just works out of the box. I’m sure this isn’t an Azure-only experience either. The other public cloud providers have great out of the box load balancing services that work with just about any workload without in-depth configuration.

You can see that I’ve been repetitive on the point around out of the box experience. This is where I think I’ve become complacent in thinking that this out of the box experience should work in the majority of circumstances.

My problem, as outlined in this blog post, is that I’ve experienced both the Azure Service Manager (ASM) load balancer and the Azure Resource Manager (ARM) load balancer not working as intended…

UPDATE 2018-07-13 – The circumstances in both of the examples given in this blog post assume that the workloads in the backend pool are configured correctly AND that the Azure Load Balancer is also configured correctly as per Microsoft recommendations. The specific solution that I’ve outlined came about after making sure that all the settings were checked, checked again and also had Microsoft Premier Support validate the config.

Azure Load Balancer

From what I’ve been told by Microsoft Premier Support, the Azure Load Balancer has had a 5-tuple distribution algorithm, based on source IP, source port, destination IP, destination port and protocol type, since its inception. However, that certainly hasn’t been the whole story, as I’ll explain in the next paragraph. While this 5-tuple mode should in theory work well with just about any scenario, because at the end of the day the distribution is still round robin between endpoints, the stickiness of sessions to those endpoints comes into play, and that can cause some issues.

In a blog post from way back when, Microsoft outlines that to accommodate RDS Gateway, the distribution mode options for the ALB have been updated. There are a total of 3 distribution modes: 5-tuple (mentioned before), 3-tuple (source IP, destination IP and protocol) and 2-tuple (source IP and destination IP).

Now that we have established the configuration options and roughly when they came about, let’s get stuck into the impact in two scenarios, months apart and in both ASM (Classic) and ARM deployment modes…

 

The problem

Earlier this year at a customer, we ran into a problem where a number of Azure workloads were hard reset. This was caused by either an outage in the region or some scheduled or unscheduled maintenance that had to occur. Nothing too serious sounding, until we found that the Network Device Enrolment Server (NDES) was not able to accept traffic from the Web Application Proxy (WAP) server that was inline and “north” of it. The WAP itself was in a Cloud Service (so an ASM/Classic environment here) where multiple WAP servers leveraged Load Balanced Sets (the Azure Internal Load Balancer) as part of the Cloud Service.

The odd thing was that since the outage/scheduled/unscheduled maintenance had happened, inbound NDES traffic via the WAP had suddenly become erratic. Certificates requested via NDES (from Intune in this circumstance) were, for the most part, not being completed. So ensued a long and enjoyable Microsoft Premier case that involved the Azure Product Group (sarcasm intended).

The solution

I’ll keep this short and sweet, as I would rather save you, dear reader, from having to relive that incident. The outcome was as follows:

It was determined that the ASM Cloud Service Load Balanced Set (or Azure Load Balancer / Azure Internal Load Balancer) configuration was set to the out of the box default of 5-tuple distribution. While this implementation of the WAP + NDES solution had been in production for at least 2-3 years, working without fault or issue, it was not the correct configuration. It was determined that the correct configuration for this setup was to leverage either 2-tuple (source and destination IPs) distribution or 3-tuple (source IP, destination IP and protocol). We went with the more specific 3-tuple, and that resolved the connectivity issues.

The PowerShell to execute this solution (setting the ASM Cloud Service LBSet to 3-tuple, source IP+destinationIP+protocol) is as follows:

Set-AzureLoadBalancedEndpoint -ServiceName "[CloudServiceX]" -LBSetName "[LBSetX]" -Protocol tcp LoadBalancerDistribution "sourceIPprotocol"

The specific parameter (-LoadBalancerDistribution), which sets the load balancer distribution algorithm, has the following valid values:

  • sourceIP. A two tuple affinity: Source IP, Destination IP
  • sourceIPProtocol. A three tuple affinity: Source IP, Destination IP, Protocol
  • none. A five tuple affinity: Source IP, Source Port, Destination IP, Destination Port, Protocol

Round 2: The problem happened again

Recently, in another work stream, we ran into the same issue. However, the circumstances were slightly different. The problem parameters this time were:

  • An Azure Resource Manager (ARM) environment, not ASM or Classic
    • So, we were using the individual resource of the Azure Internal Load Balancer
    • Much more configuration available here
    • Again though, 5-tuple is the default “Source IP affinity mode” (note: it is no longer called simply the distribution mode in ARM)
  • The workload WAS NOT NDES
  • The workload WAS again deployed on Windows Server 2016 – same as before
    • I’m not sure if that is a coincidence or not

Round 2: Solution

Having gone through the load balancing distribution mode issue only a few months earlier, I had it fresh in my mind and suggested investigating that. After the parameter was changed, in this second instance, we were again able to resolve the issue and get the workload working as intended via the Azure Load Balancer.

With Azure Resource Manager, there’s a couple of ways you can go about the configuration change. The most common way would be to change the JSON template, which is quick and easy. Below is an example of the load balancing rules section, which contains the specific “loadDistribution” parameter that would need to be changed. The ARM load balancer has basically the same configuration options as its ASM counterpart for this setting: sourceIP and sourceIPProtocol. The only difference is that there is a “Default” option, which is the ARM equivalent of ASM’s “None” (default = 5-tuple).

"loadBalancingRules": [
{
"name": "[concat(parameters('loadBalancers_EXAMPLE_name')]",
"etag": "W/\"[XXXXXXXXXXXXXXXXXXXXXXXXXXX]\"",
"properties": {
"provisioningState": "Succeeded",
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_EXAMPLE_id')]"
},
"frontendPort": XX,
"backendPort": XX,
"enableFloatingIP": false,
"idleTimeoutInMinutes": X,
"protocol": "TCP",
"loadDistribution": "SourceIP",
"backendAddressPool": {
"id": "[parameters('loadBalancers_EXAMPLE_id_1')]"
},
"probe": {
"id": "[parameters('loadBalancers_EXAMPLE_id_2')]"
}
}
}
],

The alternative option would be to just use PowerShell. To do that, you can execute the following:

Get-AzureRmLoadBalancer -Name [LBName] -ResourceGroupName [RGName] | Set-AzureRmLoadBalancerRuleConfig -LoadDistribution "[Parameter]"
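One thing to note from my experience: the rule config cmdlet changes the in-memory object, so the change still needs to be pushed back with Set-AzureRmLoadBalancer, and Set-AzureRmLoadBalancerRuleConfig expects the existing rule settings to be restated. The following is only a hedged sketch with hypothetical resource names:

# Fetch the load balancer and the rule to change (names are placeholders)
$lb   = Get-AzureRmLoadBalancer -Name "myLoadBalancer" -ResourceGroupName "myResourceGroup"
$rule = $lb.LoadBalancingRules[0]

# Re-state the existing rule settings, changing only the distribution mode to 3-tuple
Set-AzureRmLoadBalancerRuleConfig -LoadBalancer $lb `
    -Name $rule.Name `
    -FrontendIpConfigurationId $rule.FrontendIPConfiguration.Id `
    -BackendAddressPoolId $rule.BackendAddressPool.Id `
    -ProbeId $rule.Probe.Id `
    -Protocol $rule.Protocol `
    -FrontendPort $rule.FrontendPort `
    -BackendPort $rule.BackendPort `
    -LoadDistribution "SourceIPProtocol"

# Commit the change back to Azure
Set-AzureRmLoadBalancer -LoadBalancer $lb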

 

Conclusion and final thoughts

For the most part I would usually go with the default for any configuration. Through this exercise, though, I have come to question the load balancing requirements of each workload and to be specific about the distribution mode, to avoid any possible fault. Certainly, this is a practice that should extend to every aspect of Azure. The only challenge is striking a balance between questioning every configuration and simply going with the defaults. Happy balancing! (Pun intended.)


This was originally posted on Lucian.Blog by Lucian.

Follow Lucian on Twitter @LucianFrango.

IaaS – Application Migration Management Tracker

What is IaaS Application Migration?

Application migration is the process of moving an application program or set of applications from one environment to another. This includes migration from an on-premises enterprise server to a cloud provider’s environment, or from one cloud environment to another. In this example, we focus on Infrastructure as a Service (IaaS) application migration.

Application Migration Management Tracker

Having a visual IaaS application migration tracker helps to clearly identify all dependencies and blockers and to manage your end-to-end migration tracking. In addition to the project plan, this artefact will help you run daily stand-ups and produce accurate weekly status reporting.

Benefits

  • Clear visibility of current status
  • Ownerships/accountability
  • Assist escalation
  • Clear overall status
  • Lead time to CAB and preparation times
  • Allows time to agree and test key firewall/network configurations
  • Assist go/no-go decisions
  • Cutover communications
  • All dependencies
  • Warranty period tracking
  • BAU sign-off
  • Decommission of old systems if required

When to use and why?

  • Daily stand-ups
  • Go/no-go meetings, to agree clear next steps and accountability
  • Risks and issues preparation and mitigation steps
  • During change advisory board (CAB) meetings, to provide an accurate report and obtain approval to implement
  • Traceability to tick off and progress BAU activities, and to prepare operational support activities

Application Migration Approach

(Diagram: application migration approach)

Example of IaaS Application Migration Tracker

Below is an example which may assist your application migration tracking in detail.

  • Application list
  • Quarterly timelines
  • Clear ownerships
  • Migration tracking sub tasks
  • Warranty tracking sub tasks
  • Current status
  • Final status

IaaS Application Migration Tracker (example)

Summary

I hope this example helps; it can be customised to suit your organisation’s processes and priorities. The tracker can be used for both simple and complex application migrations. Thanks.

Replace Personal Privileged Accounts with Shareable Broker Accounts

Introduction

Most organizations still have the practice of personal privileged accounts on their corporate platforms and applications. It is very challenging to manage and monitor those accounts, which give unrestricted access to the most valuable systems in the organization. Effective procedures for managing these privileged accounts are extremely difficult without specialized tools.

The CyberArk Privileged Account Management (PAM) solution enables these organizations to secure, provision, manage, control and monitor all activities associated with the privileged accounts present in their IT landscape.

One of the primary goals of implementing a Privileged Account Management solution is replacing personal privileged accounts with shareable broker accounts. This drastically reduces the total number of privileged accounts for each application and system. These broker accounts also gain the other benefits of the CyberArk PAM solution, for example one-time passwords, enforcement of the corporate password policy, tamper-proof audit trails and so on.

Replacing AD Personal Privileged Accounts with Shareable Broker Accounts

The typical CyberArk approach to replacing Active Directory personal privileged accounts with shareable broker accounts is depicted in the picture below.


Note: Assume all green line connectors are the customization needed to implement this use case.

1) In this scenario, two new AD shared accounts (App_Broker_Acc1|2) are created and added as members of the domain admin groups (after this implementation we can disable all the existing personal privileged accounts which are members of this group, e.g. S-XXXX, S-YYYY).


2) A new AD group (PAM_Domain Admins) will be created specifically to map users’ normal AD IDs to a CyberArk Safe (Safes are logical containers within the CyberArk Vault). This will provide the end users (289705, 289706) access to fetch passwords and initiate sessions to the target platforms.

3) The normal AD IDs of the administrators will be added as members of the newly created AD groups for PAM.


4) A Safe (AD_Domain Admins_Safe) will be created in CyberArk. The AD group (PAM_Domain Admins) which we created in step 2 will be made a member of this Safe with the required permissions enabled.


5) On-board the shared accounts created in step 1 into CyberArk. These accounts will be stored under the Safe created in step 4.


6) Now the administrators will be able to log on to the CyberArk Web Portal (PVWA) using their normal AD ID and then connect to the target platform by selecting a broker account, without knowing its credentials.


7) The session is initiated through the shareable broker account without the end user ever knowing its password.


SharePoint site template error: IsProduction field is not accessible or does not exist

Introduction

In this post I will be talking about the exception “IsProduction field is not accessible or does not exist”. In our case we had saved an existing site as a site template in the solution gallery and created a new site collection from the saved template, but it was breaking with the exception message below.

Error message:

“The field specified with the name IsProduction is not accessible or does not exist”.


Background

The site templates feature in SharePoint on-premises helps with saving a site as a template and reusing that template to pre-provision the standard site elements in a new site collection, such as lists, libraries, views, workflows, logos, branding and other elements for different departments. Site templates are a blueprint for the site which can be used when we create new site collections.

Here the requirement was to save the existing site collection as a site template with all the custom lists, libraries, pages, content types and Nintex workflows. When the site collection is saved as a site template, it is stored in the solution gallery and then becomes available under the custom template section in the new site collection wizard.

The issue was that when a new site was created using the saved custom template, the provisioning terminated with the error “The field specified with the name IsProduction is not accessible or does not exist”. The error was not very descriptive, and checking the SharePoint logs did not provide much information either.

To understand the root cause of the error, I checked for the field reference in the site columns and content types but could not find any. The next step was to check the site template cab file (which can be downloaded from the solution gallery) and look for the reference in the site artifacts’ schema definition files, which pointed me to the Nintex list definition.

Nintex maintains an internal list to manage the site workflow definitions, and this list had a reference to the column “IsProduction”.

On checking the Nintex documentation and forums: the ‘IsProduction’ field was introduced in 3.1.7.0 for subscription-based Nintex and was later removed due to a few critical bugs.

Resolution:

To resolve the issue, the reference to the column “IsProduction” had to be removed from the site template, and the package then rebuilt and deployed to SharePoint.

I have put together the steps briefly to remove the field reference and deploy the wsp to SharePoint.

Steps

  1. Download the solution package for site template from the solution gallery in SharePoint site
  2. Change the extension of the wsp package to cab. To unzip the cab file we can use a tool or the command prompt; I used the command prompt:

    Expand -R “Filename.cab” “Destination Folder” -F:*

  3. Once the cab has been unzipped, go to the files folder.
  4. Under Files -> List folder -> NintexWorkflows -> Schema.xml

  5. In the schema definition file remove the reference to the IsProduction field and save the file.

  6. The last step is to rebuild the wsp from the command prompt. After the wsp is built, it has to be uploaded to the solution gallery and activated again (see the sketch below).
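The exact rebuild and deploy commands will depend on your tooling. As a rough sketch only: the cab can be rebuilt with makecab against a DDF file listing the extracted files, renamed to .wsp, and then, because site templates are sandboxed solutions, uploaded and activated with the user solution cmdlets. The file names, DDF and site URL below are hypothetical.

# Rebuild the package from the edited files (directives.ddf lists every extracted file), then rename it
makecab /f directives.ddf
Rename-Item ".\CustomSiteTemplate.cab" "CustomSiteTemplate.wsp"

# Upload and activate the sandboxed solution in the target site collection's solution gallery
Add-SPUserSolution -LiteralPath "C:\Temp\CustomSiteTemplate.wsp" -Site "https://intranet.contoso.com/sites/dept"
Install-SPUserSolution -Identity "CustomSiteTemplate.wsp" -Site "https://intranet.contoso.com/sites/dept"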

With the new custom template, I was able to create the site collection without any issues. I hope this will help solve the issue. Happy Coding!!

Web Application ADFS integration error: Invalid Cryptographic Algorithm

Introduction

In this post I will be talking about an invalid cryptographic algorithm exception in a web application. We have a multi-tenant single sign-on ASP.NET application which connects with different identity providers to enable a single sign-on experience.

Background

A single sign-on, multi-application scenario has been a sought-after feature lately, to make the user experience seamless across applications. In this case the web application (service provider) was integrating with ADFS 2.0 hosted on Windows Server 2012 R2 to implement a single sign-on experience for the end users on their network.

The application code, written in C#, uses the ComponentSpace helper facade to build the HTTP request from the service provider configuration input parameters:

  1. Service provider name
  2. Assertion service endpoint url
  3. Service Provider sign on certificate and certificate password.

The certificate previously used in the application for the assertion request had expired. When the newly issued certificate was added to the ADFS server (identity provider) and the web application (service provider) and used, it threw the exception “Cryptographic Exception: Invalid algorithm specified”.

On looking closely and debugging the code, I noticed that the exception “SAMLSignatureException: Failed to generate signature” was being thrown when stepping through the code segment that reads the certificate.


Resolution:

The certificate used by the assertion service is expected to have its Microsoft Cryptographic Service Provider (CSP) attribute set to “Microsoft Enhanced RSA and AES Cryptographic Provider”.

In this case the certificate had the CSP attribute set to “Microsoft RSA SChannel Cryptographic Provider”.

The difference is in the list of supported algorithms, key operations and key sizes: the Microsoft RSA SChannel Cryptographic Provider doesn’t support SHA-256 signatures.

To check the certificate’s CSP, we can use the command below (you need OpenSSL on your system).

Command Prompt

\bin\openssl pkcs12 -in WebAppSelfSignedSSO.pfx

Make sure you point to the correct path for OpenSSL.

After the command is executed, look for the “Microsoft CSP Name” attribute to confirm whether or not the CSP supports SHA-256 signatures.

In this case we need to change the attribute to “Microsoft Enhanced RSA and AES Cryptographic Provider” to support SHA-256.

Then, to update the CSP attribute so the assertion request can be signed with SHA-256, we need to run the commands below.

  1. Convert the pfx file to .pem from the command prompt. Once the command executes successfully, it will generate a .pem file.

  2. Next, convert the .pem back to pfx and update the CSP attribute property.

  3. Verify that the CSP property has been changed to “Microsoft Enhanced RSA and AES Cryptographic Provider” (see the sketch after these steps).
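As a hedged sketch of the OpenSSL commands typically used for this round trip (the file names follow the earlier example, you will be prompted for the import and export passwords, and the last command simply repeats the earlier CSP check against the new file):

openssl pkcs12 -in WebAppSelfSignedSSO.pfx -out WebAppSelfSignedSSO.pem
openssl pkcs12 -export -in WebAppSelfSignedSSO.pem -out WebAppSelfSignedSSO_new.pfx -CSP "Microsoft Enhanced RSA and AES Cryptographic Provider"
openssl pkcs12 -in WebAppSelfSignedSSO_new.pfx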

I hope this will help solve the issue. Happy Coding!!

Bots: An Understanding of Time

Some modern applications must understand time, because the messages they receive contain time-sensitive information. Consider a modern service desk solution that may have to retrieve tickets based on a date range (the span between two dates) or a duration of time.

In this blog post, I’ll explain how bots can interpret date ranges and durations, so they can respond to natural language queries provided by users, either via keyboard or microphone.

First, let’s consider  the building blocks of a bot, as depicted in the following view:

The client runs an application that sends messages to a messaging endpoint in the cloud. The connection between the client and the endpoint is called a channel. The message is basically something typed or spoken by the user.

Now, the bot must handle the message and provide a response. The challenge here is interpreting what the user said or typed. This is where cognitive services come in.

A cognitive service is trained to take a message from the user and resolve it into an intent (a function the bot can then execute). The intent determines which function the bot will execute, and hence the response given to the user.

To build time/date intelligence into a bot, the cognitive service must be configured to recognise date/time sensitive information in messages, and the bot itself must be able to convert this information into data it can use to query data sources.

Step 1: The Cognitive Service

In this example, I’ll be using the LUIS cognitive service. Because my bot resides in an Australia based Azure tenant, I’ll be using the https://au.luis.ai endpoint. I’ve created an app called Service Desk App.

Next, I need to build some Intents and Entities and train LUIS.

  • An Entity is a thing or phrase (or set of things or phrases) that may occur in an utterance. I want LUIS (and subsequently the bot) to identify such entities in the messages provided to it.

The good news is that LUIS has a prebuilt entity called datetimeV2 so let’s add that to our Service Desk App. You may also want to add additional entities, for example: a list of applications managed by your service desk (and their synonyms), or perhaps resolver groups.

Next, we’ll need an Intent so that LUIS can have the bot execute the correct function (i.e. provide a response appropriate to the message). Let’s create an Intent called List.Tickets.

  • An Intent, or intention, represents something the user wants to do (in this case, retrieve tickets from the service desk). A bot may be designed to handle more than one Intent. Each Intent is mapped to a function/method the bot executes.

I’ll need to provide some example utterances that LUIS can associate with the List.Tickets intent. These utterances must contain key words or phrases that LUIS can recognise as entities. I’ll use two examples:

  • “Show me tickets lodged for Skype in the last 10 weeks”
  • “List tickets raised for SharePoint  after July this year”

Now, assuming I’ve created a list-based entity called Application (so LUIS knows that Skype and SharePoint are Applications), LUIS will recognise these terms as entities in the utterances I’ve provided:

Now I can train LUIS and test some additional utterances. As a general rule, the more utterances you provide, the smarter LUIS gets when resolving a message provided by a user to an intent. Here’s an example:

Here, I’ve provided a message that is a variation of utterances provided to LUIS, but it is enough for LUIS to resolve it to the List.Tickets intent. 0.84 is a measure of certainty – not a percentage, and it’s weighted against all other intents. You can see from the example that LUIS has correctly identified the Application (“skype”), and the measure of time  (“last week”).

Finally, I publish the Service Desk App. It’s now ready to receive messages relayed from the bot.

Step 2: The Bot

Now, it’s possible to create a bot from the Azure Portal, which will automate many of the steps for you. During this process, you can use the Language Understanding template to create a bot with a built in LUISRecognizer, so the code will be generated for you.

  • A Recognizer is a component (class) of the bot that is responsible for determining intent. The LUISRecognizer does this by relaying the message to the LUIS cognitive service.

Let’s take a look at the bot’s handler for the List.Tickets intent. I’m using Node.js here.

The function that handles the List.Tickets intent uses the EntityRecognizer class and findEntity method to extract entities identified by LUIS and returned in the payload (results).
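As a rough sketch only – assuming the Bot Builder v3 Node.js SDK, the entity names configured above, and the bot and builder objects from the generated Language Understanding template – the handler could look something like this:

// Sketch of the List.Tickets handler (Bot Builder v3 style); 'bot' and 'builder' come from the generated template
bot.dialog('ListTickets', function (session, args) {
    // Pull the entities LUIS identified out of the results payload
    var application = builder.EntityRecognizer.findEntity(args.intent.entities, 'Application');
    var dateTime = builder.EntityRecognizer.findEntity(args.intent.entities, 'builtin.datetimeV2.daterange')
        || builder.EntityRecognizer.findEntity(args.intent.entities, 'builtin.datetimeV2.duration');

    // Hand the raw entity text to getData, which builds and runs the OData query
    var tickets = getData(application ? application.entity : null,
                          dateTime ? dateTime.entity : null);

    session.send('I found ' + tickets.length + ' tickets.');
}).triggerAction({ matches: 'List.Tickets' });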

It passes these values to a function called getData. In this example, I’m going to have my bot call a (fictional) remote service at http://xxxxx.azurewebsites.net/Tickets. This service will support the Open Data (OData) Protocol, allowing me to query data using the query string. Here’s the code:
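The following is a hedged sketch of how such a getData function could look; the OData payload shape and the buildDateFilter helper (covered in Step 3) are assumptions of mine:

// Hypothetical getData: calls the (fictional) OData ticket service synchronously
var request = require('sync-request');

function getData(application, dateText) {
    var filter = "Application eq '" + application + "'";

    // buildDateFilter (see Step 3) turns natural language like 'after July this year' into an OData fragment
    var dateFilter = dateText ? buildDateFilter(dateText) : null;
    if (dateFilter) {
        filter += ' and ' + dateFilter;
    }

    var url = 'http://xxxxx.azurewebsites.net/Tickets?$filter=' + encodeURIComponent(filter);
    var response = request('GET', url);                    // synchronous call keeps the example simple
    return JSON.parse(response.getBody('utf8')).value;     // assumes an OData JSON payload with a 'value' array
}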

(note I am using the sync-request package to call the REST service synchronously).

Step 3: Chrono

So let’s assume we’ve sent the following message to the bot:

  • “List tickets raised for SharePoint  after July this year”

It’s possible to query an OData data source for date based information using syntax as follows:

  • $filter=CreatedDate gt datetime'2018-03-08T12:00:00' and CreatedDate lt datetime'2018-07-08T12:00:00'

So we need to be able to convert ‘after July this year’ to something we can use in an OData query string.

Enter chrono-node and dateformat – neat packages that can extract date information from natural language statements and convert the resulting date into ISO UTC format respectively. Let’s put them both to use in this example:

It’s important to note that chrono-node will ignore some information provided by LUIS (in this case the word ‘after’, but also ‘last’ and ‘before’), so we need a function to process additional information to create the appropriate filter for the OData query:
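As a rough sketch (the CreatedDate field name and the operator handling are assumptions of mine), such a helper could look like this:

// Hypothetical helper: chrono-node extracts the date, dateformat converts it to ISO UTC,
// and the original phrase is inspected for 'before'/'after' to pick the comparison operator
var chrono = require('chrono-node');
var dateformat = require('dateformat');

function buildDateFilter(dateText) {
    var parsed = chrono.parseDate(dateText);              // e.g. 'after July this year' -> a Date in July
    if (!parsed) {
        return null;
    }
    var isoDate = dateformat(parsed, 'isoUtcDateTime');   // e.g. 2018-07-01T00:00:00Z

    if (dateText.indexOf('before') !== -1) {
        return "CreatedDate lt datetime'" + isoDate + "'";
    }
    // 'after ...' and 'last ...' both resolve to a past date we want tickets since
    return "CreatedDate gt datetime'" + isoDate + "'";
}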


Handling time-sensitive information is crucial when building modern applications designed to handle natural language queries. After all, wouldn’t it be great to ask for information using your voice, Cortana and your mobile device while on the move? For now, these modern apps will be dependent on data in older systems with APIs that require dates or date ranges in a particular format.

The beauty of languages like Node.js and the npm package manager is that building these applications becomes an exercise in assembling building blocks as opposed to writing functionality from scratch.

Querying against an Azure SQL Database using Azure Automation Part 1

What if you wanted to leverage Azure automation to analyse database entries and send some statistics or even reports on a daily or weekly basis?

Well why would you want to do that?

  • On demand compute:
    • You may not have access to a physical server, or your computer isn’t powerful enough to handle huge data processing, or you definitely do not want to wait in the office on a Friday evening for a task to complete before leaving.
  • You pay by the minute
    • With Azure Automation, your first 500 minutes are free, then you pay by the minute. Check out Azure Automation Pricing for more details. By the way, it’s super cheap.
  • It’s super cool doing it with PowerShell.

There are other reasons why anyone would use Azure Automation, but we are not getting into the details of that. What we want to do is leverage PowerShell to do such things. So here it goes!

Querying a SQL database, whether it’s in Azure or not, isn’t that complex. In fact, this part of the post is just to get us started. For this part we’re going to do something simple, because if you want to get things done, you need the fastest way of doing it. And that is what we are going to do. Here’s a quick summary of the ways I thought of doing it:

    1. Using ‘invoke-sqlcmd2‘ (this part of the blog). It’s super quick and easy to set up, and it helps get things done quickly.
    2. Creating your own SQL connection object for more complex SQL querying scenarios. [[ This is where the magic kicks in – Part 2 of this series ]]

How do we get this done quickly?

For the sake of keeping things simple, we’re assuming the following:

  • We have an Azure SQL Database called ‘myDB‘, inside an Azure SQL Server ‘mytestAzureSQL.database.windows.net‘.
  • It’s a simple database containing a single table ‘test_table’. This table has three columns (Id, Name, Age) and contains only two records.
  • We’ve set up ‘Allow Azure Services‘ access on this database in the firewall rules. Here’s how to do that, just in case:
    • Search for your database resource.
    • Click on ‘Set firewall rules‘ from the top menu.
    • Ensure the option ‘Allow Azure Services‘ is set to ‘ON’.
  • We have an Azure Automation account set up. We’ll be using that to test our code.

Now let’s get this up and running.

Start by creating two variables, one containing the SQL server name and the other containing the database name.

Then create an Automation credential object to store your SQL login username and password. You need this, as you definitely should not be storing your password in plain text in the script editor.

I still see people storing passwords in plain text inside scripts.
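You can create these assets in the portal, or with the AzureRM.Automation cmdlets. The following is just a hedged sketch; the resource group and account names are placeholders, while the asset names match those used in the runbook below:

# Hypothetical sketch: creating the variables and credential with the AzureRM.Automation cmdlets
$rg      = "myAutomationRG"
$account = "myAutomationAccount"

New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "AzureSQL_ServerName" -Value "mytestAzureSQL.database.windows.net" -Encrypted $false

New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "AzureSQL_DBname" -Value "myDB" -Encrypted $false

$sqlCred = Get-Credential -Message "SQL login for myDB"
New-AzureRmAutomationCredential -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "mySqllogin" -Value $sqlCred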

Now you need to import the ‘invoke-sqlcmd2‘ module in the automation account. This can be done by:

  • Selecting the modules tab from the left side options in the automation account.
  • From the top menu, click on Browse gallery, search for the module ‘invoke-sqlcmd2‘, click on it and hit ‘Import‘. It should take about a minute to complete.

Now, from the main menu of the Automation account, click on the ‘Runbooks‘ tab and then ‘Add a Runbook‘. Give it a name and use ‘PowerShell‘ as the type. Now you need to edit the runbook. To do that, click on the pencil icon in the top menu to get into the editing pane.

Inside the pane, paste the following code. (I’ll go through the details, don’t worry.)

#Import your Credential object from the Automation Account
$SQLServerCred = Get-AutomationPSCredential -Name "mySqllogin"

#Import the SQL Server name from the Automation variable
$SQL_Server_Name = Get-AutomationVariable -Name "AzureSQL_ServerName"

#Import the SQL DB name from the Automation variable
$SQL_DB_Name = Get-AutomationVariable -Name "AzureSQL_DBname"
    • The first cmdlet ‘Get-AutomationPSCredential‘ is to retrieve the automation credential object we just created.
    • The next two cmdlets ‘Get-AutomationVariable‘ are to retrieve the two Automation variables we just created for the SQL server name and the SQL database name.

Now lets query our database. To do that, paste the below code after the section above.

#Query to execute
$Query = "select * from Test_Table"

"----- Test Result BEGIN "

# Invoke to Azure SQL DB
invoke-sqlcmd2 -ServerInstance "$SQL_Server_Name" -Database "$SQL_DB_Name" -Credential $SQLServerCred -Query "$Query" -Encrypt

"`n ----- Test Result END "

So what did we do up there?

    • We’ve created a simple variable that contains our query. I know the query is too simple, but you can put whatever you want in there.
    • We’ve executed the cmdlet ‘invoke-sqlcmd2‘. If you noticed, we didn’t have to import the module we just installed; Azure Automation takes care of that upon every execution.
    • In the cmdlet parameter set, we specified the SQL server (retrieved from the Automation variable) and the database name (also an Automation variable). We used the credential object we imported from Azure Automation, and finally the query variable we created. An optional switch parameter ‘-Encrypt’ can be used to encrypt the connection to the SQL server.

Let’s run the code and look at the output!

From the editing pane, click on ‘Test Pane‘ from the menu above. Click on ‘Start‘ to begin testing the code, and observe the output.

Initially the code goes through the following stages for execution

  • Queuing
  • Starting
  • Running
  • Completed

Now what’s the final result? Look at the black box and you should see something like this.

----- Test Result BEGIN 

Id Name Age
-- ---- ---
 1 John  18
 2 Alex  25

 ----- Test Result END 

Pretty sweet, right? Now, the output we’re getting here is an object of type ‘DataRow‘. If you wrap this query into a variable, you can start to do some cool stuff with it, like:

$Result.count or

$Result.Age or even

$Result | where-object -Filterscript {$PSItem.Age -gt 10}

Now imagine if you could do so much more complex things with this.

Quick Hint:

If you include the ‘-Debug’ option in your invoke cmdlet, you will see the username and password in plain text. Just don’t run this code with the debugging option ON 🙂

Stay tuned for Part 2!!

 

PowerShell gotcha when connecting ASM Classic VNETs to ARM ExpressRoute

Recently I was working on an Azure ExpressRoute configuration change that required an uplift from a 1Gbps circuit to a 10Gbps circuit. Now that’s nothing interesting, but of note was using some PowerShell to execute a cmdlet.

A bit of a back story to set the scene here; and I promise it will be brief.

You can no longer provision Azure ExpressRoute circuits in the Classic or ASM deployment model. All ExpressRoute circuits that are provisioned now are Azure Resource Manager (ARM) deployments. So there is a very grey area around which cmdlets to run in which PowerShell module.

The problem

The environment I was working with had a mixture of ASM and ARM deployments. The ExpressRoute circuit was re-created in ARM. When attempting to follow the Microsoft documentation to connect a VNET to that new circuit (available on this docs.microsoft.com page), I ran the following command–

Get-AzureDedicatedCircuitLink 
New-AzureDedicatedCircuitLink -ServiceKey "[XXXX-XXXX-XXXX-XXXX-XXXX]" -VNetName "[MyVNet]"

–but PowerShell threw out the following error (even after doing the end to end process in a new PS ISE session):

Get-AzureDedicatedCircuit : Object reference not set to an instance of an object. 
At line:1 char:1 + 
Get-AzureDedicatedCircuit 
+ ~~~~~~~~~~~~~~~~~~~~~~~~~     
+ CategoryInfo          : NotSpecified: (:) [Get-AzureDedicatedCircuit], NullReferenceException     
+ FullyQualifiedErrorId : System.NullReferenceException,Microsoft.WindowsAzure.Commands.ExpressRoute.GetAzureDedicatedCircuitCommand

Solution

This is actually quite a tricky one, as there’s little to no documentation that outlines this specifically. It’s not too difficult though, as there’s only two things you need to do:

  1. Ensure you have version 5.1.1 of the Azure PowerShell modules
    • Ensure you also have the Azure ExpressRoute PowerShell module
  2. Execute all the Import-Module commands, in the right order, as per below

Note: If you have a newer version of the Azure PowerShell modules, like I did (version 5.2.0), import the older module as per below:

Import-Module "C:\Program Files\WindowsPowerShell\Modules\Azure\5.1.1\Azure.psd1"
Import-Module "C:\Program Files\WindowsPowerShell\Modules\Azure\5.1.1\ExpressRoute\ExpressRoute.psd1

Then you can execute the New-AzureDedicatedCircuitLink command and connect your ASM VNET to ARM ExpressRoute.

Cheers!


Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango.