Microsoft Teams and IoT-Controlled Robotics — The BOT

Part 2 of a 4-part series on Teams-controlled robotics

Part 1

Microsoft Teams is an excellent collaboration tool with person-to-person communication workloads such as messaging, voice and video. Microsoft Teams can also use Microsoft AI and cognitive services to collaborate with machines and devices. The Azure suite of services enables person-to-machine control, remote diagnostics and telemetry analytics of internet-connected devices.

To demonstrate how Microsoft Teams can control remote robotics, I have created a fun project that allows Teams to manage a RoboRaptor through natural language messages. The objective is to send control commands from Teams as natural language messages to a Microsoft AI BOT. The BOT then uses the Azure Language Understanding service (LUIS) to determine the command intent. The result is sent to the Internet of Things controller card attached to the RoboRaptor for translation into machine commands. Finally, I configured a Teams channel to the Azure BOT service; in Teams it looks like a contact with an application ID. When I type messages into the Teams client, they are sent over the channel to the Azure BOT service for processing. The RoboRaptor command messages are then sent from the BOT, or from functions, to the Azure IoT Hub service for delivery to the physical device.

The overview of the logical connectivity is below:


The Azure services and infrastructure used in this IoT environment are extensive and fall into five key areas.



  1. The blue services belong to Azure AI and machine learning, including chat bots and cognitive services.
  2. The orange services belong to Azure compute and analytics.
  3. The green services belong to the Azure Internet of Things suite.
  4. The yellow services are IoT hardware, switches and sensors.
  5. The white services are network connectivity infrastructure.

The Azure Bot service plays an essential part in the artificial intelligence and personal assistant role by calling and controlling functions and cognitive services. As the developer, I write code that collects instant messages from web chats and Teams channels, extracts key information, and determines the user's intent.

Question and Answer Service:

In this project I want to deliver a help menu. When users ask for help with the commands they can use with the RoboRaptor, I want to return a Teams card listing all commands and their resulting actions. The Azure QnA Maker service is best suited to this task: it is an excellent repository for a single question and a single reply with no processing. With the QnA service you build a list of sample questions, and when a question matches, the service replies with the assigned text, which makes it ideal for frequently-asked-questions scenarios.
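For reference, querying a published QnA Maker knowledge base is a single REST call that returns the best-matching answer. Here is a minimal Python sketch of that call; the endpoint host, knowledge-base ID and endpoint key are placeholders, not values from this project.

```python
# Minimal sketch: query a published QnA Maker knowledge base over REST.
# The endpoint host, knowledge-base ID and endpoint key are placeholders.
import json
import urllib.request

QNA_ENDPOINT = "https://my-qna-resource.azurewebsites.net"   # placeholder
KB_ID = "00000000-0000-0000-0000-000000000000"               # placeholder
ENDPOINT_KEY = "your-endpoint-key"                           # placeholder

def build_request(question: str) -> urllib.request.Request:
    """Build the generateAnswer POST request for a user question."""
    url = f"{QNA_ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"EndpointKey {ENDPOINT_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("What commands does the RoboRaptor understand?")
    with urllib.request.urlopen(req) as resp:      # network call
        answers = json.load(resp).get("answers", [])
        if answers:
            print(answers[0]["answer"])            # best-match FAQ reply
```

The BOT would format the returned answer into a Teams card rather than printing it, but the request/response shape is the same.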

I can use the BOT to collect information from the user and store it in dialog tables for processing. For example, I can ask for a user’s name and store it for replies and future use.

Sending commands

I wanted to be able to use natural language to forward commands to the RoboRaptor. Because Teams is a collaboration tool, anyone on the team with permissions for this BOT can also send commands to IoT robotic devices, and team members can phrase a request in many ways. Sure, I could just assign a single word to an action like forward, but if I want to string commands together I need the Azure LUIS service and BOT arrays to build an action table. For example, I can build a BOT that replicates talking to a human through the Teams chat window.

As you can see, the LUIS service enables a more natural conversation with robotics.

How do I use the LUIS service?

The LUIS service is a repository of intents and key phrases. The diagram below shows an entry I created to determine the intent of a user request and check its confidence level.

I have a large list of intents that equate to RoboRaptor command requests, like move forward and stop. It also includes intents for other projects, such as collecting names and phone numbers, and it can contain all my home automation commands too.

In the example below, I have the intent that I want the RoboRaptor to dance. Under the dance intent I have several ways of asking the RoboRaptor to dance.


The LUIS service returns to the BOT the intent of dance and a score indicating how confident it is of a match. The following is BOT code that evaluates the returned intent and score: if the confidence score is above 0.5, the BOT initiates a process based on a case match. I created a basic Azure BOT service from Visual Studio 2017. You can start with the Hello World template and then build dialogs and middleware to other Azure services like QnA Maker and the LUIS service.
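The evaluate-and-dispatch pattern itself is simple. Here is a Python sketch of it; the intent names and the send_to_raptor stand-in are illustrative only, not the BOT's actual C# code.

```python
# Sketch of the intent-dispatch pattern: LUIS returns the top-scoring
# intent plus a confidence score, and we only act above a threshold.

CONFIDENCE_THRESHOLD = 0.5

def send_to_raptor(command: str) -> str:
    """Stand-in for the BOT's call that forwards a command to IoT Hub."""
    return f"sent '{command}' to RoboRaptor"

def dispatch(intent: str, score: float) -> str:
    """Route a LUIS result to a robot command, mirroring the case match."""
    if score < CONFIDENCE_THRESHOLD:
        return "Sorry, I didn't understand that."
    # Map LUIS intents to the command strings the robot understands.
    commands = {"dance": "dance", "moveforward": "forward", "stop": "stop"}
    if intent in commands:
        return send_to_raptor(commands[intent])
    return "Sorry, I didn't understand that."
```

For example, dispatch("dance", 0.92) forwards the dance command, while dispatch("dance", 0.3) falls through to the "didn't understand" reply.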

In our case the intent is dance, so the Sendtoraptor process is called with the command string dance.


A series of direct method commands is invoked on the IoT device using the direct call method. The method name is forward, and a message payload of “dance fwd” is sent to the IoT Hub service for the IoT device named “IOT3166keith”, which is my registered MXCHIP. A series of other moves is then sent to give the impression that the RoboRaptor is dancing.


if (robocmd == "dance")
{
    // Dance sequence: forward, back, right, left, then a stop signal.
    methodInvocation = new CloudToDeviceMethod("forward") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance fwd" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("backward") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance back" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("right") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance right" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("left") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance left" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    // Send the stop signal once the sequence completes.
}





In the code above, the method invocation attributes are configured: new CloudToDeviceMethod("forward") sets up a direct-call cloud-to-device method with a method name of forward, and SetPayloadJson configures a JSON payload message of "dance fwd".

The await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation) call initiates the asynchronous transmission of the message to the IoT Hub service and the device IOT3166keith.
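For comparison, the same cloud-to-device direct-method sequence can be driven from Python with the azure-iot-hub service SDK. This is a sketch under that assumption; the hub connection string is a placeholder and the SDK calls mirror the C# above.

```python
# Sketch: the dance routine as a cloud-to-device direct-method sequence,
# using the azure-iot-hub service SDK (pip install azure-iot-hub).
# The connection string is a placeholder for your hub's service policy.

DEVICE_ID = "IOT3166keith"  # my registered MXCHIP

def dance_sequence():
    """The same direct-method name / payload pairs the C# code sends."""
    return [
        ("forward", "dance fwd"),
        ("backward", "dance back"),
        ("right", "dance right"),
        ("left", "dance left"),
    ]

def send_dance(connection_string: str) -> None:
    """Invoke each move as a direct method on the device via IoT Hub."""
    # Imported here so the sequence above is usable without the SDK.
    from azure.iot.hub import IoTHubRegistryManager
    from azure.iot.hub.models import CloudToDeviceMethod

    manager = IoTHubRegistryManager.from_connection_string(connection_string)
    for method_name, message in dance_sequence():
        method = CloudToDeviceMethod(
            method_name=method_name,
            payload={"message": message},
            response_timeout_in_seconds=300,
        )
        manager.invoke_device_method(DEVICE_ID, method)  # waits for the reply
```

Expressing the moves as data keeps the sequence easy to extend without duplicating the invocation boilerplate, which is the main thing the C# version repeats.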

The IoT Hub then sends the message to the physical device. The onboard OLED display shows commands as they are received.


The MXCHIP has many built-in environment sensors. I selected temperature and humidity as the data I wish to send to Power BI for analytics. Every few seconds the telemetry information is sent to the IoT Hub service.

In the IoT Hub service I have configured message routing so that telemetry messages reach the Stream Analytics service. I then parse the JSON documents and save the data in Azure Blob storage, where Power BI can generate reports. More on this in the next blog.
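Each routed telemetry message is a small JSON document, and the parsing step amounts to pulling out the fields we chart. A Python sketch follows; the field names are assumptions based on the MXCHIP sample telemetry format.

```python
# Sketch: pull temperature and humidity out of a routed telemetry message
# before handing it to storage and Power BI. The field names are
# assumptions based on the MXCHIP sample telemetry format.
import json

def parse_telemetry(raw: str) -> dict:
    """Extract the fields we chart in Power BI from one telemetry message."""
    data = json.loads(raw)
    return {
        "temperature": data.get("temperature"),
        "humidity": data.get("humidity"),
    }
```

Any extra fields in the message (pressure, accelerometer and so on) are simply ignored here, so the device can send richer telemetry without breaking the report pipeline.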

The next blog will cover the IoT hardware and the IoT Hub service in more detail.


Intelligent Man-to-Machine Collaboration with Microsoft Teams: RoboRaptor

Microsoft Teams is an excellent collaboration tool with person-to-person communication workloads such as messaging, voice and video. Microsoft Teams can also use Microsoft AI and cognitive services to collaborate with machines and devices, together with the large suite of Azure services that lets me orchestrate person-to-machine control, remote diagnostics and telemetry analytics of internet-connected devices.

My Teams BOT is set up as a personal assistant that manages communications between several of my projects. The fact that I can use a single interface to run many purchased and custom-built apps shows the flexibility of Azure BOTs. I currently run three custom applications (an Office 365 management assistant, a lockdown and alert system, and IoT device control) through this single Teams BOT.
To demonstrate how Microsoft Teams can control remote robotics, I have created a fun project that allows Teams to manage a RoboRaptor through natural language messages.
The objective is to send control commands from Teams as natural language messages to a Microsoft AI BOT. The BOT then uses the Azure LUIS language understanding service to determine the command intent. The result is sent to the Internet of Things controller card attached to the RoboRaptor for translation into machine commands.

The RoboRaptor with its MXCHIP is a working IoT device. Live telemetry data is sent back to the Azure IoT Hub service to monitor environmental statistics, which can be analysed in Power BI. Temperature and humidity readings are typical of a standard IoT endpoint. The MXCHIP is programmed with Arduino code, a very common microcontroller IDE platform.

The RoboRaptor project is complex and consumes multiple Azure services. However, I have been able to build this solution with free-tier services, and so far I am only out of pocket $80 for the MXCHIP and dual relay module. The RoboRaptor was one of the kids' old toys I saved from extinction.

The RoboRaptor project uses the following Azure services.

The project includes:
• Microsoft Teams for the user interface
• BOTs for creating intelligent interaction and function calls to IoT and other Azure services
• Cognitive services: the LUIS language understanding service, to allow natural language between user and robotics
• QnA Maker, to create help menus and information repositories for users
• The Facial Recognition cognitive service, to scan people around the raptor and identify them as owner or foe
• Serverless Azure Functions to control communications between IoT and Teams
• Azure Internet of Things services to manage and communicate with IoT hardware
• MXCHIP, a small microcontroller attached to the raptor that provides secure internet communication to Azure IoT Hub. The MXCHIP receives commands and sends instructions to the RoboRaptor.
The MXCHIP activates power to the robotics and fires a laser through switched circuits. It also sends telemetry data back to Azure for storage and analytics, including temperature, humidity, pressure, accelerometer and gyroscope readings.

My choice of IoT hardware was the MXCHIP. I found this development board easy to use and register with the Microsoft Azure IoT Hub. It is Arduino compatible, and the board library is easy to follow. I used a breakout board to access I/O pins that activate relays to turn on power and fire the laser. The hardware specs are as follows.
Device Summary:
Microcontroller: STM32F412RG ARM®32-bit Cortex®-M4 CPU
Operating Voltage: 3.3V
Input Voltage: 7-12V
Digital I/O Pins (DIO): 31
Analog Input Pins (ADC): 2
UARTs: 2
SPIs: 1
I2Cs: 1
Flash Memory: 1 MB
SRAM: 256 KB
Clock Speed: 100 MHz

The following diagram shows the message flow between the MXCHIP and Microsoft Teams.



Video footage in action

This blog series is broken into the following key milestones:

Microsoft Teams BOTs and cognitive services - Part 2

Microsoft IoT and the MXCHIP - Part 3

RoboRaptor facial recognition - Part 4


Psychodynamics Revisited: Data Privacy

How many of you, between waking up and your first cup of hot, caffeinated beverage, told the world something about yourselves online? Whether it be a social media status update, an Instagram photo or story, or even a tweak to your personal profile on LinkedIn. Maybe yes, maybe no, although I would hedge my bets that you've at least checked your notifications or emails, or had a scroll through the newsfeed.
Another way to view this question would be: how many of you have interacted with the internet in some way since waking up? The likelihood is probably fairly high.
In my first blog on the topic of Psychodynamics, I focused solely on our inseparable connection to modern technologies – smartphones, tablets, etc. – and the access these facilitate for us to interact with society and the world. For most of us, this is an undeniable truth of daily life. A major nuance of this relationship between people and technology, and one that I think we are probably somewhat in denial about, is the security and privacy of our personal information online.
To the technology community at large, it's no secret that our personal data is traded by governments and billion-dollar corporations on a constant basis: whatever information (and, more importantly, information about that information) is desired goes to the highest bidder, or for the best market rate. Facebook, for instance, doesn't sell your information outright; that would be completely unethical and would devalue its brand trust. What it does is sell access to you to the advertisers and large corporations connected through it, which in turn gives them valuable consumer data to advertise, target and sell back to you based on your online habits.
My reason for raising this topic in regard to psychodynamics and technological-behavioural patterns is for consultants and tech professionals to consider what data privacy means to our valued clients.
I was fortunate to participate this past week in a seminar hosted by the University of New South Wales' Grand Challenges Program, established to promote research in technology and human development. The seminar featured guest speaker Professor Joe Cannataci, the UN's Special Rapporteur on the right to privacy, who was in town to discuss recent privacy issues with our Federal Government, specifically amid concerns about the security of the Government's My Health Record system (see the full discussion on ABC's RN Breakfast show). Two key points raised during the seminar, and from Professor Cannataci's general insights, were:

  1. Data analytics targeting individuals or groups focuses largely on metadata, not the content data an individual or group produces. What this means is that businesses are unlikely to treat content as scalable unless there are metrics and positive or viral trends in viewership and content-consumption patterns.
  2. Technology, its socialisation and personal information privacy issues are no longer specific to a generation — "boomers", "millennials" — or context (though countries like China and Russia prohibit and filter certain URLs and web services). That is to say, in an individual's daily working routine, their engagement with technology and the push to submit data to get a task done may, in some instances, form an unconscious processing pattern over time where we get used to sharing our personal information, adopting the mindset "well, I have nothing to hide". We've likely all been guilty of it before. Jane might not think about how sharing her client's files with her colleague Judy, to assist with advising on a matter, may put their employer in breach of a binding confidentiality agreement.

My recent projects involved heavy amounts of content extraction and planning that did not immediately consider metadata trends or what business departments' likely needs were for this content, focusing on documented business processes over data-usage patterns. Particularly when working with cloud technologies that were new to the given clients, there was only a basic understanding of what this entailed for data privacy and the legalities around it (client sharing, data visibility and GDPR, to name a few). Perhaps a consideration here is investigating further how these trends play into, and possibly deviate, business processes, rather than looking at them as separate factors in information processing.
Work is work, but so is our duty of due diligence, best practices and understanding how we, as technology professionals, can resolve some of these ethical issues in today's technology landscape.
For more on Professor Joe Cannataci, please see his profile on the United Nations page.
Credit to UNSW Grand Challenges Program. For more info, please see their website or follow their Facebook page (irony intended)

Sending Events from IoT Devices to Azure IoT Hub using HTTPS and REST


Different IoT devices have different capabilities: whether you have a microcontroller or a single-board computer, your options will vary. In a previous post I detailed using MQTT to send messages from an IoT device to an Azure IoT Hub, as well as using the AzureIoT PowerShell module.
For a current project I needed to send events from an IoT device that runs Linux and has Python support. The Azure IoT Hub includes an HTTPS REST endpoint. For this particular application, using the HTTPS REST endpoint is much easier than compiling the Azure SDK for the particular flavour of Linux running on my device.
Python isn't my language of choice, so I first got it working in PowerShell and then converted it to Python. I detail both scripts here as a guide for anyone else trying to do something similar, and for myself, as I know I'm going to need these snippets in the future.


You’ll need to have configured an Azure IoT Hub and registered an IoT device with it. Follow this post to get started.

PowerShell Device to Cloud Events using HTTPS and REST Script

Here is the PowerShell version of the script. Update Line 3 for your DeviceID, Line 5 for your IoT Hub name and Line 11 for your SAS Token.

Using Device Explorer to Monitor the Device on the associated IoT Hub I can see that the message is received.
Device Explorer

Python Device to Cloud Events using HTTPS and REST Script

Here is my Python version of the same script. Again update Line 5 for your IoT DeviceID, Line 7 for your IoT Hub and Line 12 for the SAS Token.
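In outline, the script builds the device's HTTPS events endpoint and POSTs the JSON payload with the SAS token as the Authorization header. A minimal Python sketch of that approach follows; the hub name, device ID and SAS token are placeholders, and the api-version value may differ for your hub.

```python
# Sketch: send a device-to-cloud event to Azure IoT Hub over HTTPS REST.
# The device ID, hub name and SAS token below are placeholders to update.
import json
import urllib.request

DEVICE_ID = "myDevice"
IOT_HUB = "myIoTHub"
SAS_TOKEN = "SharedAccessSignature sr=..."   # device-scoped SAS token
API_VERSION = "2018-06-30"                   # may differ for your hub

def event_url(hub: str, device_id: str) -> str:
    """The IoT Hub device-to-cloud REST endpoint for a device."""
    return (f"https://{hub}.azure-devices.net/devices/{device_id}"
            f"/messages/events?api-version={API_VERSION}")

def build_event(payload: dict) -> urllib.request.Request:
    """Build the POST request carrying one telemetry event."""
    return urllib.request.Request(
        event_url(IOT_HUB, DEVICE_ID),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": SAS_TOKEN,
                 "Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_event({"temperature": 23.4, "humidity": 51.0})
    with urllib.request.urlopen(req) as resp:  # 204 No Content on success
        print(resp.status)
```

Using only the standard library keeps the script portable to minimal Linux builds where installing the Azure SDK or extra packages is impractical, which was the point of going via REST in the first place.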

And in Device Explorer we can see the message is received.
Device Explorer Python


When you have a device that can run Python, you can use the IoT Hub HTTPS REST API to send device-to-cloud messages, negating the need to build and compile the Azure IoT SDK to generate client libraries.

Promoting and Demoting Site pages to News in Modern SharePoint Sites using SPFx extension and Azure Function

The requirement I will address in this blog is how to promote and demote site pages to news articles in modern SharePoint sites. This approach allows us to promote any site page to news, add approval steps, and demote news articles back to site pages if the news needs to be updated. The news also shows in the modern News web part when the site page is promoted.
Solution Approach:
To start with, create a site page. For creating a modern page using an Azure Function, please refer to this blog. After the site page is created, we will use a status column to track the news status and promote a site page to news. The status column could have three values: draft, pending approval and published.
We will use an SPFx extension to set the values of the status column and call the Azure Function, which promotes the site page to a news page using SharePoint Online CSOM.
Promoting a site page to a news page
Below are the attributes that need to be set for a site page to be promoted to a news article.
1. Promoted State Column set to 2 – set through SPFx extension
2. First Published date value set to published date – set through SPFx extension
3. Promoted state tag in the news site page to be set to value 2 – done in Azure Function
4. Site page needs to be published – done in Azure Function
For a detailed walkthrough on how to create a custom site page with metadata values, please refer to this blog. To set the ‘Promoted State’ and ‘First Published Date’ metadata values, use the below code after the page is created.
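The walkthrough itself uses an SPFx extension and CSOM in the Azure Function. Purely for illustration, the same two field updates can be expressed against the SharePoint REST API; in this Python sketch the site URL, item ID and the auth header are placeholders, and the field names (PromotedState, FirstPublishedDate) are the standard Site Pages columns.

```python
# Sketch: the two metadata updates that promote (or demote) a site page,
# expressed against the SharePoint REST API. The original post does this
# with SPFx + CSOM; site URL, item ID and auth are placeholders here.
import json
from datetime import datetime, timezone

SITE_URL = "https://contoso.sharepoint.com/sites/news"   # placeholder

def promote_payload(published: datetime) -> dict:
    """Field values for promoting a page: PromotedState 2 + publish date."""
    return {
        "PromotedState": 2,
        "FirstPublishedDate": published.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

def demote_payload() -> dict:
    """Field values for demoting a news post back to a plain page."""
    return {"PromotedState": 0, "FirstPublishedDate": None}

def merge_request(item_id: int, fields: dict):
    """URL, headers and body for a REST MERGE on the Site Pages item."""
    url = f"{SITE_URL}/_api/web/lists/GetByTitle('Site Pages')/items({item_id})"
    headers = {
        "Content-Type": "application/json;odata=nometadata",
        "IF-MATCH": "*",
        "X-HTTP-Method": "MERGE",   # update-in-place semantics
        # plus an Authorization / request digest header from your auth flow
    }
    return url, headers, json.dumps(fields).encode("utf-8")
```

The page still needs its Promoted State tag set and to be published server-side, which is the part the Azure Function handles.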

Calling the Azure Function from the SPFx extension to promote the site page to news can be done using the below method.

Inside the Azure Function, use the below to promote a site page to news article.

Demoting a news article to a site page
Below are the attributes that need to be set for demoting a news article to a site page:
1. Promoted State Column set to 0 – set through SPFx extension
2. First Published date value set to blank – set through SPFx extension
3. Promoted state tag in the news site page to be set to value 0 – done in Azure Function
4. Site page needs to be published – done in Azure Function
To set the metadata values, use the same method calls as in the promotion above. Then, in the Azure Function, use the below to demote the site page.

Above we saw how an SPFx extension and an Azure Function can be used to promote and demote site pages to news articles in modern SharePoint sites.

Global Azure Bootcamp 2018 – Creating the Internet of YOUR Things

Today is the 6th Global Azure Bootcamp, and I presented at the Sydney Microsoft office on Creating the Internet of YOUR Things.
In my session I gave an overview of where IoT is going and some of the amazing things we can look forward to (maybe). I then covered a number of IoT devices that you can buy now to enrich your life.
I then moved on to building IoT devices and leveraging Azure, the focus of my presentation: how to get started quickly with devices, integration and automation. I provided working examples based on my previous posts: Integrating Azure IoT Devices with MongooseOS, MQTT and PowerShell; Building a Teenager Notification Service using Azure IoT, an Azure Function, Microsoft Flow, Mongoose OS and a Micro Controller; and Adding a Display to the Teenager Notification Service Azure IoT Device.
I provided enough information and hopefully inspiration to get you started.
Here is my presentation.


AWS DeepLens – Part 1 – Getting the DeepLens Online

Look what I got my hands on!

Today I will be taking you through the initial setup of the yet to be released AWS DeepLens. DeepLens is rumoured to be released globally in April 2018.

What is the AWS DeepLens?

Announced at AWS Re-Invent 2017, DeepLens is a marriage of:

  • HD Camera
  • Intel based computer with an on-board GPU
  • Ubuntu OS
  • AWS Greengrass
  • AWS Lambda
  • AWS SageMaker

This marriage of technologies is designed to help developers achieve deep-learning inference at the edge device. The edge is typically at the end of the pipeline. What does this all mean?
AWS have made a big play at standardising a data engineer's pipeline: from writing code in Jupyter notebooks, running training over a cluster, producing a standardised model, and finally deploying the model to perform inference at the edge. AWS DeepLens fits in the last step of this pipeline.
Further information can be found here:
With that out of the way, let’s get started.

What’s needed

To get started, the following is required:

  • An AWS account
  • A WiFi network with internet access
  • A computer with a WiFi adaptor and a web browser
  • A power adaptor from the US plug type to your own country's power plug type (as of the writing of this post)

For troubleshooting you will need the following:

  • Micro-HDMI to HDMI cable
  • Monitor with a HDMI port
  • USB keyboard and mouse


Before we go any further through the setup process, there are a few gotchas I encountered while getting the device online that are worth highlighting sooner rather than later:

  • Ensure the wireless network you're connecting to is not on a network
  • Turn off any JavaScript-blocking plugins in your web browser
  • The password for the DeepLens WiFi may contain confusing characters, like a capital I that looks like a lowercase l

A recent update confirmed there were Wi-Fi issues, as seen here on the AWS DeepLens Developer Forum.

Setting up the DeepLens in the AWS console

  1. Log in to your AWS Management console
  2. Switch to the US-East region (the only available region for the DeepLens at the time of writing)
  3. Click on the DeepLens Service under Machine Learning       
  4. Select Devices from the top navigation bar to navigate to the projects page
  5. Click the Register Device button on the right side of the screen
  6. Give your DeepLens a descriptive name, then select Next
  7. On the Set permissions page, select the Create a role in IAM for all fields, then select Next
  8. You’re now provided with an AWS generated certificate that authenticates AWS DeepLens with the IOT Greengrass service. Click Download certificate and store the zip file in a safe place
  9. Click Register. You're now ready to plug in your DeepLens

Unpacking and plugging in

The DeepLens comes with:

  • A power pack with a US style power plug
  • A micro SD card
  • The DeepLens itself

To connect the DeepLens, perform the following:

  • Insert the micro SD card into the back of the DeepLens
  • Attach the power adaptor from the DeepLens to the wall socket
  • Press the power button

Connecting to the DeepLens WiFi access point from a PC

This is well documented by the AWS Management console, as displayed in the screenshot below.
The only thing to add here is to watch out for confusing characters in the password on the device.
Once you have navigated to the device's setup address in your web browser, you will get a web-based wizard that steps through the setup process, beginning with connecting to your WiFi network.

Connecting the DeepLens to your network

Select your SSID from the list and provide your network password. Once connected, you will most likely see a screen saying the device is updating. Ensure you wait the dictated length of time before clicking the Reboot button. If nothing happens, wait longer and try again.

Uploading the certificate

Once the device reboots, the screen will allow you to upload the certificate downloaded in the previous step; then click Next.

Setting up ssh and other settings

On the final page, specify a device password and enable SSH access to the device. There's also an option to enable automatic device updates; they are on by default and I recommend leaving them that way. Once you're ready, click Review.

Validating your DeepLens connectivity

Ensure you finalise the configuration by clicking Finish.

You should now see the following screen which indicates you have completed the wizard:

At this point you should see the DeepLens registration status in the AWS Management console move from In Progress to Completed. This may take up to 5 minutes.
You are now ready to deploy machine learning projects to the device.

Next time

In the next blog post, we’ll deploy some of the sample projects and learn how they work. We’ll also explore how this integrates with AWS SageMaker.

Psychodynamics: Are We Smarter Than The Device?


How did you know about this blog post?

It's likely that you were notified by your smartphone or device, the notification itself part of the trundle you're figuratively swiping left through, in between email reminders about upcoming events and direct messages from your favourite social media. Or you were trawling your usual network feeds for updates to catch your attention.
Now if you were to time the window in which you check your smart device again for notifications, new messages or general updates, I’d bet that this window would be within a minute or just outside of it, and would require no prompting whatsoever… much like, say, breathing?
On the way to lunch this past week I had to tell three pedestrians to "Look up!" because they were on their smartphones while walking through the mess of the CBD at lunchtime, just asking for some bad luck to go down. One was even crossing the intersection while the walk sign was red, roadworks or not. Yet these smart-device distractions, in societal situations where we should be actively engaged, are becoming less distraction and more the norm.
Admittedly, I’ve been guilty of this also (stands up in anonymous meeting group circle) “Hi everyone, it’s been 24 days, 4 hours and 19 minutes since my last smart device infringement…”
Separating norms, habits and addictions has become difficult in this regard. A study conducted last year on 205 users, ranging in age from 16 to about 64 and spanning the UK, China, Australia and the US, drew a preliminary conclusion that people grow emotionally attached to their smartphones. Obviously, a lost or stolen phone can be replaced and, even more conveniently, the data backup restored to the replacement; the same cannot be said for a lost pet dog, for instance.
The study in fact suggests that the emotional connection comes from the connectivity and community the device facilitates; what we're actually sacrificing for behavioural controls is the luxury of functionality.
It is the ease with which these devices can be used, the ability to pour one's life into apps and social networks, to customise and personalise options, that creates the need for us to be close to them; the loss of a device comes with the emotional baggage of disconnection and an inability to "interact substantially".
Do we know what life was like before this? I would say kind of, but maybe in another ten years' time, not so much. Sure, we still have to get off our butts for some of our daily activities, but as we move, so do our devices, both figuratively and literally.
We’re well and truly plugged in; it’s the world we live in now. I can get my plumbing fixed and a slice of cake brought to my doorstep by a complete stranger on a single app (and trust that it will happen). Why not?
For further reading on the study, see the article in Computers in Human Behaviour.

A quick start guide for Deploying and Configuring Node-RED as an Azure WebApp


I've been experimenting and messing around with IoT devices for well over 10 years. Back then it wasn't called IoT, and it was very much a build-it-and-write-it-yourself approach.
Fast forward to 2017 and you can buy a microprocessor for a couple of dollars that includes WiFi. Environmental sensors are available for another couple of dollars and we can start to publish environmental telemetry without having to build circuitry and develop code. And rather than having to design and deploy a database to store the telemetry (as I was doing 10+ years ago) we can send it to SaaS/PaaS services and build dashboards very quickly.
This post provides a quick-start guide to those last few points: visualising data from IoT devices using Azure Platform-as-a-Service offerings. Here is a rudimentary environment dashboard I put together very quickly.


Having played with numerous services recently for my already API-integrated IoT devices, I knew I wanted a solution to visualise the data, but I didn't want to deploy dedicated infrastructure and I wanted to keep the number of moving parts to a minimum. I looked at getting my devices to publish their telemetry via MQTT, which is a great solution for scale or rapidly changing data, but when you are only dealing with a handful of sensors and data that isn't highly dynamic, it is overkill. I wanted to simply poll the devices as required, obtain the current readings, and visualise them. Think temperature, pressure, humidity.
Through my research I liked the look of Node-RED for its quick and simple approach to obtaining data and manipulating it for presentation. Node-RED relies on NodeJS, which I figured I could deploy as an Azure WebApp (similar to what I did here). Sure enough, I could. However, not long after I got it working I discovered this project: a Node-RED-enabled NodeJS Web App you can deploy straight from GitHub. Awesome work, Juan Manuel Servera.


The quickest way to start then is to use Juan’s Azure WebApp wrapper for Node-RED. Follow his instructions to get that deployed to your Azure Subscription. Once deployed you can navigate to your Node-RED WebApp and you should see something similar to the image below.
The first thing you need to do is secure the app. From your WebApp Application Settings in the Azure Portal, use Kudu to navigate to the WebApp files. Under your wwwroot/WebApp directory you will find the settings.js file. Select the file and select the Edit (pencil) icon.
Uncomment the adminAuth section around lines 93-100. To generate the encrypted password, I ran the following command on a local install of NodeJS and copied the hash. Change 'whatisagoodpassword' to your desired password.

node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" whatisagoodpassword

Select Save, then Restart your application.
On loading your WebApp again you will be prompted to log in. Log in and let's get started.

Configuring Node-RED

Now it is time to pull in some data and visualise it. I'm not going to go into too much detail, as what you want to do is probably quite different to what I'm doing.
At its simplest, though, I want a timer to trigger a call to my sensor APIs to return the values and display them as either text, a graph or a gauge. Below is the configuration, shown graphically, for the dashboard above.
For each entity on the dashboard there is an input. For me this is a trigger every 15 minutes. That looks like this.
Next is the API to get the data. The API I’m calling is an open GET with the API key in the URL so looks like this.
With the JSON response from the API I retrieve the temperature value and return it as msg for use in the UI.
I then have the Gauge for Temperature. I’ve set the minimum and maximum values and gone with the defaults for the colours.
I’m also outputting debug info during setup for the raw response from the Function ….
….. and from the parsed function.
These appear in the Debug pane on the right hand side.
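Outside of Node-RED, the poll-and-extract step the flow performs amounts to only a few lines. Here is a Python sketch of the same logic; the API URL shape and the "temperature" field name are assumptions about my sensor API, not part of the Node-RED configuration itself.

```python
# Sketch of what the flow does on each 15-minute trigger: call the sensor
# API, parse the JSON response and pull out the temperature for the gauge.
# The URL shape and "temperature" field name are assumptions.
import json
import urllib.request

API_URL = "https://example.com/api/sensor?key=YOUR_API_KEY"  # placeholder

def extract_temperature(raw: str) -> float:
    """The function-node step: JSON response -> value for the gauge."""
    return float(json.loads(raw)["temperature"])

def poll() -> float:
    """The inject + http-request nodes: fetch the current reading."""
    with urllib.request.urlopen(API_URL) as resp:
        return extract_temperature(resp.read().decode("utf-8"))
```

Node-RED wires these same steps together graphically, which is exactly why it needs no custom backend code for a dashboard like this.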
After each configuration change simply select Deploy and then switch over to your Node-RED WebApp. Its address looks like your WebApp URI with /ui on the end.


Thanks to Azure PaaS services and a graphical IoT tool like Node-RED, we can quickly deploy a solution to visualise IoT data without deploying any backend infrastructure. The only hardware is the IoT sensors; everything else is serverless.
