Microsoft Teams Announcements and New Features – Enhance your meetings today

Microsoft Teams has just turned 2. To celebrate, new features have been announced and are coming your way soon. With this wave of new features there aren’t many reasons left not to adopt Microsoft Teams. Many of our customers are embracing Teams as they see the value in a connected collaboration experience that brings together voice, chat, content & meetings.

For me, nothing beats a face-to-face meeting. However, as people embrace flexible working, become geographically distributed or are constantly on the go, connecting with others can be challenging. A recent focal point for many has been creating a connected meeting experience that brings physical and virtual spaces together. I’ve heard stories about, and experienced first-hand, the challenges of traditional meeting room technology. Dial-ins never work, people can’t see what’s being presented in the physical room, it’s difficult to hear people on the phone, people don’t have the right version of the software…the list of frustrations is long.

The announcement covers lots of new features that enhance a range of experiences, though I want to focus on the ones I think improve the meeting experience.

Microsoft Whiteboard

Whilst still in preview, this is a hidden gem. Microsoft Whiteboard allows people to quickly draw and share in real time. No physical whiteboards needed. Better yet, the drawings are automatically saved and easily available when you need to revisit them. Taking it one step further: for those lucky enough to have a Surface Hub, tighter Teams integration is coming your way. There’s lots of interest in Surface Hub and it’s worth checking out.

New Calendar App

The current Meetings tab is being renamed to Calendar, and lots of updates are coming. You’ll be able to join, RSVP to, cancel or decline meetings from the right-click menu. You’ll be able to see a range of views, including daily, weekly or work week, which will honour your settings in Outlook, so there’s less need to switch to Outlook. You have to wonder – will Teams eventually replace Outlook? I think so.

New Meeting Devices

As Teams grows, so does the maturity of hardware. There are some great new Teams Certified devices from AudioCodes, Crestron, HP, Jabra, Lenovo, Logitech, Plantronics/Polycom & Sennheiser. Check out the Teams Marketplace to buy and start trialling.

My colleague Craig Chiffers has some great articles on managing Teams devices & enabling Teams Rooms – worthwhile reads if you are looking to start using these devices.

Content Cameras and Intelligent Capture

For those who still like to use physical whiteboards, this feature is for you! Not only can you now add a second camera to your meeting, it digitises the physical whiteboard. Intelligent capture ensures your whiteboard drawings stay visible even as you draw – check out the clip below (borrowed from Microsoft’s announcement) to see it in action.

Hopefully that’s given you a quick overview of the great new features coming to Microsoft Teams soon.

Microsoft Teams and IoT controlled Robotics – The IoT device

This is the third installment of a four-part series on using Microsoft Teams and Azure services to collaborate with machines and devices. In the previous posts, I described how Teams and the Azure BOT service work together to send commands to the IoT device attached to the RoboRaptor. This post describes the IoT hardware and the connection between the RoboRaptor and the MXCHIP.

To recap, Teams messages are sent from the Teams user interface to our Azure BOT for analysis. The command is then sent from the Azure BOT to the Azure IoT HUB. The IoT HUB then forwards the command message to the MXCHIP mounted on the back of the RoboRaptor. The MXCHIP then translates the received command into IR pulses for direct input to the RoboRaptor.

The factory version of the RoboRaptor is controlled through a handheld infrared controller. In order to send commands to the RoboRaptor, I first had to read and analyse the infrared pulse stream sent from the factory controller. For this I created a separate project using an Arduino UNO R3 and an IR receiver; there is plenty of prebuilt, free IR-decoding code on GitHub.

The Teams-controlled RoboRaptor controller.

As I pushed each button on the controller, I recorded a hex signal from the UNO R3 serial port. The diagram below shows the codes received.
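
Once recorded, the codes are easiest to reuse from a simple lookup table. Below is a minimal Python sketch of that idea; the forward code 0x186 is the one used later in this series, while the other values are illustrative placeholders, not my actual recorded codes.

```python
# Map of RoboRaptor controller buttons to recorded IR command codes.
# Only "forward" (0x186) comes from this series; the rest are
# illustrative placeholders standing in for the recorded values.
IR_CODES = {
    "forward": 0x186,
    "backward": 0x187,  # placeholder
    "left": 0x188,      # placeholder
    "right": 0x189,     # placeholder
    "stop": 0x18A,      # placeholder
}

def code_for(command: str) -> int:
    """Return the IR code for a named command, or raise if unknown."""
    try:
        return IR_CODES[command]
    except KeyError:
        raise ValueError("unknown command: " + command)

print(hex(code_for("forward")))  # 0x186
```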

My second key objective was to remotely activate hardware by sending commands that trigger an attached relay module, supplying power to an external device. In this project, the aim was to switch on the RoboRaptor’s power and its laser remotely with Teams commands. I used a 2PH63091A dual optical relay module for this role, mounted on the RoboRaptor’s belly.

Connection Diagram



To activate the relays, I needed to connect the relay input signal ports to external pins on the MXCHIP. Switching an MXCHIP external pin to a low or high signal activates or deactivates the relay.

The Arduino code to configure the MXCHIP pins is as follows.

To control the signals sent to external sensors and relays, I need to assign a logical Arduino pin in code to the physical MXCHIP pin wired to each external relay switch. For example, to switch on power to the RoboRaptor, I assign logical pin 45 in code; physical pin 45 on the MXCHIP is wired to the relay’s input trigger. When the pin goes low, the relay activates, closing its contacts and supplying power to the RoboRaptor.
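
The trigger is active-low: driving the pin low closes the contacts. Here is a small Python simulation of just that logic; the pin number matches the example above, but the function names are for illustration only, and the real code runs as an Arduino sketch on the MXCHIP.

```python
RAPTOR_POWER_PIN = 45  # physical MXCHIP pin wired to the relay input trigger

# Simulated pin states: True = HIGH, False = LOW. Idle high, relay open.
pins = {RAPTOR_POWER_PIN: True}

def set_pin(pin: int, high: bool) -> None:
    """Stand-in for digitalWrite() in the real Arduino sketch."""
    pins[pin] = high

def relay_closed(pin: int) -> bool:
    """Active-low relay: contacts close when the pin is driven LOW."""
    return not pins[pin]

set_pin(RAPTOR_POWER_PIN, False)       # drive low -> relay closes, raptor powers on
print(relay_closed(RAPTOR_POWER_PIN))  # True
```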

Project Libraries:

My development platform for the project is Microsoft Visual Studio Code. The key libraries required are AZ3166WiFi.h for the Wi-Fi role, AzureIoTHub.h for managing Azure IoT Hub connectivity, and DevKitMQTTClient.h for managing cloud-to-device and device-to-cloud messaging. The other libraries manage the MXCHIP hardware and sensors. The MXCHIP library has its own IRDA infrared library, but its documentation was very light, so I created my own function code to control the transmission of infrared pulses and commands.


The following code sets up the IR carrier and a 12-bit command code. The MXCHIP’s onboard LED works fine; however, I found I needed to add an additional external IR LED, as the signal degraded when I mounted the MXCHIP board on the back of the raptor.
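
As a rough illustration of what a 12-bit command looks like before it is modulated onto the carrier, the code value can be expanded into individual bits. This Python sketch shows only the bit expansion; the actual carrier timing is handled by the Arduino function code mentioned above.

```python
def to_12_bits(code: int) -> list:
    """Expand a command code into its 12 bits, most significant bit first."""
    if not 0 <= code < 2 ** 12:
        raise ValueError("command must fit in 12 bits")
    return [(code >> i) & 1 for i in range(11, -1, -1)]

bits = to_12_bits(0x186)  # the "forward" command
print(bits)  # [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0]
```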




The void loop() function contains the main body of the code and runs continuously, checking for Wi-Fi connectivity and newly received MQTT messages. The following code shows the continuous monitoring of a system-tick heartbeat. If the Wi-Fi connection is up, the IoT device sends telemetry data to the IoT Hub. For this project I send temperature and humidity readings every few seconds to the IoT Hub for processing. The IoT Hub routes the telemetry messages to Azure Blob storage and makes them available for Power BI analytics.

The DevKitMQTTClient function then checks for any new MQTT cloud-to-device messages. Each new message is compared to the known command strings; if there is a match, the command function activates and calls the IR transmission function, otherwise the loop repeats.
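
The device-side matching is essentially a string comparison against known commands. Here is a hedged Python sketch of that loop body; on the real device this logic lives in the Arduino void loop(), and send_ir_command below is just a stand-in for my IR transmission function.

```python
KNOWN_COMMANDS = {"forward", "backward", "left", "right", "stop", "dance"}

sent = []  # records what would have been transmitted as IR pulses

def send_ir_command(command: str) -> None:
    """Stand-in for the IR transmission function on the MXCHIP."""
    sent.append(command)

def handle_mqtt_message(message: str) -> bool:
    """Compare a cloud-to-device message to the known command strings.

    Returns True if a command matched and was transmitted; otherwise
    False, and the main loop simply repeats.
    """
    command = message.strip().lower()
    if command in KNOWN_COMMANDS:
        send_ir_command(command)
        return True
    return False

print(handle_mqtt_message("Forward"))  # True
print(handle_mqtt_message("fly"))      # False
```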


The RoboRaptor only requires a basic command once the intent of the user’s message is understood. The basic intent of moving forward just needs to be acknowledged and resent to the RoboRaptor as IR pulse 0x186. I used the direct method approach for sending commands to the MXCHIP IoT device.

The direct method is an excellent lightweight message that contains a method name and a small message payload.

The message format is in two parts: a method name as the header, and a payload message as the body. For example, method name = forward and payload = "robo moving forward".
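
Seen from the service side, a direct method is just that two-part shape: a name plus a small JSON payload. A minimal Python illustration is below; the field names are illustrative, not the exact wire format.

```python
import json

def build_direct_method(method_name: str, payload_message: str) -> dict:
    """Model the two-part direct method: name as header, JSON payload as body."""
    return {
        "methodName": method_name,
        "payload": json.dumps({"message": payload_message}),
    }

msg = build_direct_method("forward", "robo moving forward")
print(msg["methodName"])                      # forward
print(json.loads(msg["payload"])["message"])  # robo moving forward
```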

The code below shows how I check only the methodName variable; if I get a match on the method name, I run a function that sends the correct IR signals to the RoboRaptor.

How to create an IoT Hub and register a device for the IoT Dev-Kit:

The IoT Hub is the Azure service with which you register your endpoint devices. The free tier allows you to register a device, with capacity for 8,000 messages per day. Adding a new device to the IoT Hub is as simple as selecting the Add Device button in the IoT Hub menu. You will be asked to supply a device ID. When the resource is configured, it creates a new hostname URL and a set of primary and secondary keys and connection strings. Save these values, as the IoT device requires them to connect securely to the IoT Hub service.
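
The connection string the portal gives you packs the hostname, device ID and key into one semicolon-delimited value. A small Python helper shows the pieces the device needs; the example string is fabricated in the standard format, not a real credential.

```python
def parse_connection_string(cs: str) -> dict:
    """Split an IoT Hub device connection string into its key=value parts."""
    parts = {}
    for pair in cs.split(";"):
        # partition on the first '=' only, since key values may contain '='
        key, _, value = pair.partition("=")
        parts[key] = value
    return parts

# Example only: a fabricated connection string in the standard format.
example = ("HostName=myhub.azure-devices.net;"
           "DeviceId=IOT3166keith;"
           "SharedAccessKey=base64keygoeshere")
fields = parse_connection_string(example)
print(fields["HostName"])  # myhub.azure-devices.net
print(fields["DeviceId"])  # IOT3166keith
```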


Setting the connection string:

Now that I have created an IoT Hub device in the cloud, I need to set up the physical device with the device URL and secure connection string. In Visual Studio Code I press F1 to bring up the command list and select Configure Device > Config Device Connection String. The menu directs me to supply the device FQDN and connection string. Once I have submitted this information, the IoT device (MXCHIP) can connect to the IoT Hub service.

The last installment of the RoboRaptor project will look at adding facial recognition. The objective is to use a camera to capture images and compare them to a saved photo of myself. If the Face ID gets a match, the RoboRaptor will come towards me.

Links to other posts in this series:

  1. Intelligent Man to Machine collaboration with Microsoft Teams – RoboRaptor
  2. Microsoft Teams and IoT controlled Robotics – The BOT
  3. Configuring Facial recognition – COMING SOON

Microsoft Teams and IoT controlled Robotics – The BOT

Part 2 of a 4-part series on Teams-controlled robotics

Part 1

Microsoft Teams is an excellent collaboration tool for person-to-person communication workloads such as messaging, voice and video. Microsoft Teams can also use Microsoft AI and cognitive services to collaborate with machines and devices. The Azure suite of services enables person-to-machine control, remote diagnostics and telemetry analytics of internet-connected devices.

To demonstrate how Microsoft Teams can control remote robotics, I have created a fun project that allows Teams to manage a RoboRaptor through natural language messages. The objective is to send control commands from Teams as natural language messages to a Microsoft AI BOT. The BOT then uses the Azure Language Understanding service (LUIS) to determine the command intent. The result is sent to the Internet of Things controller card attached to the RoboRaptor for translation into machine commands. Finally, I have configured a Teams channel to the Azure BOT service; in Teams it looks like a contact with an application ID. When I type messages into the Teams client, they are sent over the channel to the Azure BOT service for processing. The RoboRaptor command messages are then sent from the BOT, or from functions, to the Azure IoT Hub service for delivery to the physical device.

The overview of the logical connectivity is below:


The Azure services and infrastructure used in this IoT environment are extensive and fall into five key areas.



  1. The blue services belong to Azure AI and machine learning, including chat bots and cognitive services.
  2. The orange services belong to Azure compute and analytics.
  3. The green services belong to the Azure Internet of Things suite.
  4. The yellow services are IoT hardware, switches and sensors.
  5. The white services are network connectivity infrastructure.

The Azure Bot Service plays an essential part in the artificial intelligence and personal assistant role by calling and controlling functions and cognitive services. As the developer, I write code that collects instant messages from web chats and Teams channels, gathers key information and then determines the intent of the user.

Question and Answer Service:

In this project I want to deliver a help menu. When users ask for help with the commands they can use with the RoboRaptor, I want to return a Teams card listing all commands and their resulting actions. The Azure QnA service is best suited to this task. The QnA service is an excellent repository for a single question with a single reply and no further processing: you build a list of sample questions and, on a match, reply with the assigned text. It is best for frequently-asked-questions scenarios.
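
At its core the QnA pattern is a match-and-reply lookup with no further processing. A toy Python sketch of that behaviour is below; the questions and answers are invented examples for the RoboRaptor help menu, and the real list lives in the QnA service itself.

```python
# Invented FAQ pairs for illustration; the real repository is the QnA service.
FAQ = {
    "what commands can i use": "forward, backward, left, right, stop, dance",
    "how do i make the raptor dance": "Type 'dance' in the Teams chat window.",
}

def answer(question: str) -> str:
    """Match a question against the FAQ list and return the assigned reply."""
    key = question.strip().lower().rstrip("?")
    return FAQ.get(key, "Sorry, I don't have an answer for that yet.")

print(answer("What commands can I use?"))
```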

I can use the BOT to collect information from the user and store it in dialog tables for processing. For example, I can ask for a user’s name and store it for replies and future use.

Sending commands

I wanted to be able to use natural language to send commands to the RoboRaptor. As Teams is a collaboration tool, anyone on the team with permissions for this BOT can also send commands to IoT robotic devices, and team members can phrase a request in many ways. Sure, I could just assign a single word to an action like forward, but if I want to string commands together I need the Azure LUIS service and BOT arrays to build an action table. For example, I can build a BOT that replicates talking to a human through the Teams chat window.

As you can see, the LUIS service can generate a more natural conversation with robotics.

How I use the LUIS service:

The LUIS service is a repository of intents and key phrases. The diagram below shows an entry I created to determine the intent of a user request and check its confidence level.

I have a large list of intents that equate to RoboRaptor command requests, like move forward and stop. It also includes intents for other projects, such as collecting names and phone numbers, and it can contain all my home automation commands too.

In the example below, I have the intent that I want the RoboRaptor to dance. Under the dance intent I have several ways of asking the RoboRaptor to dance.


The LUIS service returns to the BOT the intent dance and a score indicating how confident it is of a match. The following BOT code evaluates the returned intent and score: if the confidence score is above 0.5, the BOT initiates a process based on a case match. I created a basic Azure BOT service from Visual Studio 2017; you can start with the Hello World template and then build dialogs and middleware to other Azure services such as QnA Maker and the LUIS service.
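
The shape of that evaluation is simple: take the top-scoring intent and act only when the confidence clears the threshold. Here is a hedged Python sketch of the same logic; the actual BOT code is C#, and the names below are illustrative stand-ins.

```python
def dispatch(intent, score, threshold=0.5):
    """Act on a LUIS result only if the confidence score clears the threshold.

    Returns a string describing the action taken, or None when the BOT
    should ignore the message or ask the user to rephrase.
    """
    if score <= threshold:
        return None  # too uncertain to act
    if intent == "dance":
        return "sendtoraptor:dance"    # stand-in for calling Sendtoraptor("dance")
    if intent == "forward":
        return "sendtoraptor:forward"
    return None  # unrecognised intent

print(dispatch("dance", 0.92))  # sendtoraptor:dance
print(dispatch("dance", 0.31))  # None
```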

In our case the intent is dance, so the Sendtoraptor process is called with the command string dance.


A series of direct method commands is invoked on the IoT device. The method name = forward and a message payload of "dance fwd" are sent to the IoT Hub service for the registered device name "IOT3166keith", which is my MXCHIP. A series of other moves is then sent to give the impression that the RoboRaptor is dancing.


if (robocmd == "dance")
{
    // forward 4, then back 4, then right 4, then forward 4, then left 4
    // (a stop signal is sent at the end)

    methodInvocation = new CloudToDeviceMethod("forward") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance fwd" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("backward") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance back" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("right") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance right" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("left") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance left" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);
}





In the above code the method invocation attributes are configured. new CloudToDeviceMethod("forward") sets up a direct-call cloud-to-device method with the method name forward, and SetPayloadJson configures a JSON payload message of "dance fwd".

The await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation); call initiates the asynchronous transmission of the message to the IoT Hub service and the device IOT3166keith.

The IoT Hub then sends the message to the physical device. The onboard OLED display shows commands as they are received.


The MXCHIP has many built-in environment sensors. I selected temperature and humidity as the data I wish to send to Power BI for analytics. Every few seconds, the telemetry information is sent to the IoT Hub service.

In the IoT Hub service, I have configured message routing so that telemetry messages reach the Stream Analytics service. I then parse the JSON documents and save the data in Azure Blob storage, where Power BI can generate reports. More on this in the next blog.
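
A telemetry message of this kind is just a small JSON document, so parsing it before storage is straightforward. The Python sketch below uses a fabricated sample payload; the field names are illustrative, not the exact MXCHIP schema.

```python
import json

# Fabricated example of a telemetry message; real field names may differ.
raw = '{"deviceId": "IOT3166keith", "temperature": 23.4, "humidity": 51.0}'

def parse_telemetry(message: str) -> dict:
    """Parse a telemetry JSON document into a flat record for Blob storage."""
    doc = json.loads(message)
    return {
        "device": doc["deviceId"],
        "temperature_c": float(doc["temperature"]),
        "humidity_pct": float(doc["humidity"]),
    }

record = parse_telemetry(raw)
print(record["temperature_c"])  # 23.4
```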

The next blog will explore more of the IoT hardware and the IoT Hub service.


Intelligent Man to Machine collaboration with Microsoft Teams – RoboRaptor

Microsoft Teams is an excellent collaboration tool for person-to-person communication workloads such as messaging, voice and video. Microsoft Teams can also use Microsoft AI and cognitive services to collaborate with machines and devices, together with the large suite of Azure services that lets me orchestrate person-to-machine control, remote diagnostics and telemetry analytics of internet-connected devices.

My Teams BOT is set up as a personal assistant that manages communications between several of my projects. The fact that I can use a single interface to run many purchased and custom-built apps shows the flexibility of Azure BOTs. I currently run three custom-built applications, an Office 365 management assistant, a lockdown and alert system, and IoT device control, all through this single Teams BOT.
To demonstrate how Microsoft Teams can control remote robotics, I have created a fun project that allows Teams to manage a RoboRaptor through natural language messages.
The objective is to send control commands from Teams as natural language messages to a Microsoft AI BOT. The BOT then uses the Azure LUIS language understanding service to determine the command intent. The result is sent to the Internet of Things controller card attached to the RoboRaptor for translation into machine commands.

The RoboRaptor and the MXCHIP together form a working IoT device. Live telemetry data is sent back into the Azure IoT Hub service to monitor environmental statistics, which can be measured through Power BI. Temperature and humidity readings are typical of a standard IoT endpoint. The MXCHIP is programmed with Arduino code, a very common microcontroller IDE platform.

The RoboRaptor project is complex and consumes multiple Azure services. However, I have been able to build this solution with free-tier services, and so far I am up for $80 for the MXCHIP and dual relay module. The RoboRaptor was one of the kids’ old toys I saved from extinction.

The RoboRaptor project uses the following Azure services.

The project includes:
• Microsoft Teams for the user interface
• BOTs for creating intelligent interaction and function calls to IoT and other Azure services
• Cognitive services: the LUIS language understanding service, to allow natural language between user and robotics
• QnA, the Question and Answer builder, to create help menus and information repositories for users
• The Facial Recognition cognitive service, to scan people around the raptor and identify them as owner or foe
• Serverless Azure functions to control communications between IoT and Teams
• IoT: Azure Internet of Things services to manage and communicate with IoT hardware
• MXCHIP: a small microcontroller attached to the raptor to provide secure internet communication to the Azure IoT Hub. The MXCHIP receives commands and sends instructions to the RoboRaptor.
The MXCHIP will activate power to the robotics and fire a laser through switched circuits. It also sends telemetry data back to Azure for storage and analytics, including temperature, humidity, pressure, accelerometer and gyroscope readings.

My choice of IOT hardware was the MX Chip. I found this development board easy to use and register to the Microsoft Azure IOT HUB. It is Arduino compatible and the board library easy to follow. I used a break out board to access IO pins to activate relays to turn on power and activate the laser. The hardware specs are as follows.
Device Summary:
Microcontroller: STM32F412RG ARM®32-bit Cortex®-M4 CPU
Operating Voltage: 3.3V
Input Voltage: 7-12V
Digital I/O Pins (DIO): 31
Analog Input Pins (ADC): 2
UARTs: 2
SPIs: 1
I2Cs: 1
Flash Memory: 1 MB
SRAM: 256 KB
Clock Speed: 100 MHz

The following diagram shows the message flow between the MXCHIP and Microsoft Teams.



Video footage in action

The project blog series is broken up into the following key milestones:

Microsoft Teams BOTs and cognitive services – Part 2

Microsoft IoT and the MXCHIP – Part 3

RoboRaptor facial recognition – Part 4


How to make cool modern SharePoint Intranets Part 1 – Strategize (scope & plan)

Over the last year, we have seen many great advancements in SharePoint communication sites that have brought them much closer to being an ideal Intranet. In this blog series, I will discuss some of my experiences from recent years implementing a modern Intranet, and best practices for the same.

In this specific blog, we will look at the first building block of a great Intranet – strategizing the Intranet approach.

So what should we be looking for in the new Intranet? The answer in most cases is easy reach and effective communication. To achieve this, we should plan around the headings below.

Shared Ownership

Practically, a single team cannot own the Intranet. It is a shared responsibility between the core business groups, who provide content, and IT, who provide tech support. Until this is defined effectively, there will be gaps that keep the Intranet from reaching its full potential.


It is important to plan the steps for an Intranet roll-out, not just the overall strategy: for example design, build, then release to various groups at once (big bang) or progressively.

User Experience and Design

Over the years I have realised first-hand that user experience is critical for good adoption of any system, including an Intranet. It must look aesthetically appealing and be easy to use, so users can get to what they want and find it fast. So every Intranet needs a UX plan.


One of the key aspects of any Intranet is seamless adoption. No organisation will spend thousands of dollars teaching staff how to use the Intranet, and for those who are considering it, the ‘force it down the throat’ approach just doesn’t work. It is important to have a change and adoption plan for the team.

Prepare a Wishlist

It is important to prepare the wish list of expected items well before starting the implementation process. Most of the time, I have seen teams prolong this until the implementation phase, which delays the release date.

MVP (Minimal Viable Product) cycle


Generally the Intranet is thought of as a single-shot solution that is prepared perfectly for its first release. Most of the time, this approach doesn’t work effectively: it adds strain and takes a long time to create an ideal Intranet. However, with SharePoint communication sites, we can make this process much simpler and faster.

The Intranet can become an evolving product where we implement the first stage with minimal viable requirements, such as pages, navigation and must-have corporate components like news and information. Then we build a feedback mechanism into the solution, allowing focus users and teams to provide responses on the likability and adoption of the Intranet.

After the first stage is built and released, we start getting more feedback from business units and focus groups. In the next phase, we implement these requirements, such as apps, solutions and workflows, while expanding the scope of the Intranet.

Subsequently we keep adding more functionality with more cycles of design, build and feedback.


Using the above process, we can start on the strategy and plan for making a great Intranet. In upcoming blogs we will look at more steps for building a great Intranet and start planning the next stages.

Create Office365 business value through the power of limitation

Recent consulting engagements have found me helping customers define what Office365 means to them and what value they see in its use. They are lucky enough to have licenses and are seeking help to understand how to drive value from the investment.

You’ve heard the sales pitches: Office365 – The platform to solve ALL your needs! From meetings, to document management, working with people outside your organisation, social networking, custom applications, business process automation, forms & workflow, analytics, security & compliance, device management…the list goes on and is only getting bigger!

When I hear Office365 described – I often hear attributes religious people give to God.

  • It’s everywhere you go – Omnipresent
  • It knows everything you do – Omniscient
  • It’s so powerful it can do everything you want – Omnipotent
  • It’s unified despite having multiple components – Oneness
  • It can punish you for doing the wrong thing – Wrathful

It’s taking on a persona – how long before it becomes self-aware!?

If it can really meet ALL your needs, how do we define its use, do we just use it for everything? Where do we start? How do we define what it means if it can do everything?

Enter limitation. Limitation is a powerful idea that brings clarity through constraint. It’s the foundation on which definition is built. Can you really define something that can do anything?

The other side would suggest that limiting technology constrains thinking and prevents creativity. I don’t agree. Limitation begets creativity: it zeroes in thinking and helps create practical, creative solutions with what you have. Moreover, having modern technology doesn’t make you a creative and innovative organisation. It’s about culture, people and process. As always, technology is a mere enabler.

What can’t we use Office365 for?

Sometimes it’s easier to start here. Working with architecture teams to draw boundaries around the system helps provide guidance for appropriate use. They have a good grasp of enterprise architecture and the reasons why things are the way they are. It helps clearly narrow use cases and provides a definition that makes sense to people.

  • We don’t use it to share content externally because of…
  • We can’t use it for customer-facing staff because of…
  • We don’t use it for forms and workflow because we have <insert app name here>
  • We don’t use it as a records management system because we have…

Office365 Use cases – The basis of meaning

Microsoft provide some great material on generic use cases: document collaboration, secure external sharing, workflow, managing content on the go, making meetings more efficient and so on. These represent ideals and are sometimes too far removed from the reality of your organisation. Use them as a basis and develop them further with relevance to your business unit or organisation.

Group ideation workshops, discussions and brainstorming sessions are a great way to help draw out use cases. Make sure you have the right level of people, not too high and not too low. You can then follow up with each and drill into the detail to see the value the use case provides.

Get some runs on the board

Once you’ve defined a few use cases, work with the business to start piloting. Prove the use case with real-life scenarios. Build the network of people around the use cases and start to draw out and refine how it solves pain, for here is where true value appears. This can be a good news story that can be told to other parts of the business to build excitement.

Plan for supply & demand

Once you have some runs on the board, if successful, word will quickly get out. People will want it. Learn to harness this excitement and build on the energy created. Be ready to deal with a sudden increase in demand.

On the supply side, plan for service management. How do people get support? Who supports it? How do we customise it? What’s the back-out plan? How do we manage updates? All the typical ITIL components you’d expect should be planned for during your pilots.

Office365 Roadmap to remove limitations & create new use cases

Roadmaps are a meaningful way to communicate when value will be unlocked. IT should have a clear picture of what business value is and how it will work to unlock the capabilities the business needs in order to be successful.

Roadmaps do a good job of communicating this, though typically they are technology focused. That might be a great way to help unify the IT team, but people on the receiving end won’t quite understand. Communicate using their language in ways they understand, i.e. what value it will provide them, and when and how it will help them be successful.

AWS DeepRacer – Tips and Tricks – Battery and SSH

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

I was going to do an unboxing video, but Andrew Knaebel has done a good enough job of that and posted it on YouTube, so I’ll skip that part and move on to more detail on getting up and running with the AWS DeepRacer.

A lot of this is covered in the AWS DeepRacer Getting Started Guide so I’ll try and focus on the places where it was not so clear.

Before we get started, there are a few items you will need in order to follow along with this blog:

  • AWS DeepRacer physical robot
  • USB-C Power adapter
  • PowerBank with USB-C connector
  • 7.4V 1100mAh RC car battery pack
  • Balance Charger for RC battery pack
  • If not in the US, a power socket adapter

Connecting and Charging

When I followed the instructions in the AWS getting started guide, I found they left out a few minor details that make your life easier going forward. Below is a way to avoid pulling apart the whole car to charge it every time.

1. Install the USB-C PowerBank on top of the vehicle with its USB-C port towards the right-hand side, closer to the USB-C port on the vehicle

2. Install the RC battery by taking off the 4 pins and (GENTLY, as there are wires connected) moving the top compute unit to the side as below; ensure you leave the charging cable and power sockets accessible, as you don’t want to be unpinning the car every time

3. Connect the USB-C power adapter to the USB-C port on the PowerBank, and connect the balance charger to the charging cable of the battery

4. Wait for the PowerBank to show four solid lights to signify it’s charged, and for the charge light on the balance charger to be off to let you know the RC battery is ready

Opening up SSH to allow for easier model loads

I’m sure AWS is working hard to include AWS IoT Greengrass capabilities to allow users to push their latest model to the AWS DeepRacer, but for now it looks like that isn’t an option.

Another nice feature would be the ability to upload the model.pb file via the AWS DeepRacer’s local web server. Alas, we currently need to put files onto USB sticks.

There is another way for the moment: open up SSH on the AWS DeepRacer’s firewall and use SCP to copy the file into the correct location.

Firstly, you will need to log in to the Ubuntu server on the AWS DeepRacer. For instructions on how to achieve this, please refer to my previous post, AWS DeepRacer – How to login to the Ubuntu Computer Onboard

1. Once logged in, open up a terminal

2. Type in: sudo ufw allow ssh
From now on you will be able to log in via SSH

3. On another machine, you should now be able to log in via SSH: ssh deepracer@<ip address of your DeepRacer>

4. Copy your model.pb file to your DeepRacer home directory via SCP (I used WinSCP)

5. Move the file to /opt/aws/deepracer/artifacts/<folder name of your choice>/model.pb

6. You’re done! Enjoy being able to load a model without a USB stick
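Putting the steps above together, the workflow can be sketched as below. The IP address and folder name are assumptions from my own setup, and the privileged/remote commands are commented out so the sketch itself runs anywhere:

```shell
# Sketch of the SSH + SCP workflow. CAR_IP and MODEL_NAME are assumptions;
# substitute your own values.
CAR_IP=192.168.1.50
MODEL_NAME=my-model

# On the DeepRacer itself (once): open SSH in the firewall
#   sudo ufw allow ssh

# From your workstation: copy the model over, then log in
#   scp model.pb deepracer@$CAR_IP:/home/deepracer/
#   ssh deepracer@$CAR_IP

# Back on the car: move the file to where the web server looks for models
DEST=/opt/aws/deepracer/artifacts/$MODEL_NAME
#   sudo mkdir -p "$DEST" && sudo mv ~/model.pb "$DEST/model.pb"
echo "$DEST"
```

The folder name you choose under artifacts is what shows up as the model name in the local web server.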

More tips and tricks are coming as I explore the AWS DeepRacer ecosystem.

AWS DeepRacer – Training your reinforcement learning model in AWS Robomaker

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

There seem to be many ways to get your AWS DeepRacer model trained. These are a few I have discovered:

  • The AWS DeepRacer Console (Live Preview yet to commence, GA early 2019)
  • SageMaker RL notebook
  • Locally from the DeepRacer GitHub repository
  • AWS RoboMaker sample simulation
  • AWS RoboMaker Cloud9 IDE with sample application downloaded

In this post, we will explore how to train a reinforcement learning model using AWS RoboMaker, both with the sample application and in the Cloud9 development environment. The reason I chose to run through both is that AWS has created a fully automated deployment using CloudFormation that does all the work for you, but it does not give you the ability to modify the code. If you want to change the reward function and other parts of the ROS application, the Cloud9 development environment is the way to go: you can modify the code, and it takes you through most of the simulation workflow.

As for the Proximal Policy Optimization (PPO) algorithm, that will require its own post, as there are a heap of prerequisites to understanding it. AWS has leveraged Intel’s RL Coach, a platform of packaged RL algorithm implementations, including Clipped PPO.

What you will need:

  • An AWS account
  • Access to create IAM roles within the AWS account

Running the Fully Autonomous Sample Application

1. Log into your AWS account

2. Switch to the us-east-1 region (N.Virginia) 

3. Navigate to the AWS RoboMaker service

4. Click on the sample application link on the left 

5. Scroll down the list and select Self-driving using reinforcement learning and then click the Launch simulation job button

6. You can now view the simulation job from the Simulation jobs section

7. If you want to watch the DeepRacer working in simulation, click on the Gazebo icon (it does not show up until the simulation is in the Running state)

Running the Cloud9 development environment with a custom reward function

1. Log into your AWS account

2. Switch to the us-east-1 region (N.Virginia) 

3. Run the sample simulation above, as it creates all the resources you will need for the following steps. Alternatively, you can create the resources manually when you get to the roboMakerSettings.json section below

4. Navigate to the AWS RoboMaker service

5. Click on the development environments section 

6. Click the create environment button

7. Fill out the Name, Instance type, VPC, and Subnets fields then click Create as in Figure 1.0 below

Figure 1.0

8. Once created click Open environment button to open Cloud9

9. The Cloud9 environment should look like Figure 1.1 below

10. Click on RoboMaker Resources and select Download Samples then 6. Self-Driving using Reinforcement Learning

11. You should now have two folders under the root path. Delete the roboMakerLogs folder and the base JSON file, as they duplicate config files and removing them saves confusion later

12. Now it’s time to modify the roboMakerSettings.json file under the DeepRacer folder. This file has a number of values that need to be filled out; I used the output of the sample simulation job run in the first part of this post to populate them.

13. Ensure you add the “robotSoftwareSuite” section at line 43 (as of writing this post), as the simulation fails without it. The RoboMaker API must have changed since the author of this bundle wrote it.
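For reference, the section I added looked something like the following. The exact values came from my own run and the RoboMaker API at the time, so treat them as assumptions and check them against the output of your sample simulation job:

```json
"robotSoftwareSuite": {
    "name": "ROS",
    "version": "Kinetic"
}
```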

14. Also pay attention to the “maxJobDurationInSeconds”: 28800 setting, and set the simulation job to something less than 8 hours unless you feel you need that much time.

15. Save the file, as Cloud9 doesn’t seem to auto-save your changes

16. At this point you may want to modify the default reward function in the bundle. It lives inside the OpenAI Gym environment definition file under: DeepRacer/simulation_ws/src/sagemaker_rl_agent/markov/environments/

17. Click on the RoboMaker Run button on the top-right, navigate to the Workflow menu option and select DeepRacer – Build and Bundle All

18. Monitor the tabs below and look for exit code 0 on the bundle tab.

19. Click on the RoboMaker Run button on the top-right, navigate to Launch Simulation and click on DeepRacer

20. Check on your simulation job under the RoboMaker Simulation jobs

21. Once complete you can navigate to the S3 bucket configured in the JSON file above and look for the model.pb file in the model-store folder.

You can now use this model to load onto the AWS DeepRacer. If you are not sure how to do this, please read my previous post: AWS DeepRacer – How to load a model

In the next post I’ll take you through the SageMaker RL notebook and explore the machine learning side of the AWS DeepRacer.

AWS DeepRacer – How to load a model

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

This post assumes you have followed the AWS DeepRacer Getting Started Guide which gets you to the point of being able to manually drive the car.

So now you have the AWS DeepRacer charged up and ready to go, and a trained model you either got from re:Invent or trained yourself with RoboMaker/SageMaker by following my other post.

You either have a file named model1.tar.gz on a USB stick from the DeepRacer console, or a models folder in an S3 bucket trained with RoboMaker, which just needs to be zipped up. You want this model to show up in the Autonomous section of the DeepRacer local web server.
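If you are starting from the RoboMaker side, packaging the model into the tar.gz the car expects can be sketched like this. The S3 path is a placeholder from my run (commented out), and a dummy model.pb stands in so the rest is runnable:

```shell
# Work in a scratch directory
cd "$(mktemp -d)"

# Real step: pull the trained model down from the bucket in your JSON config
#   aws s3 cp s3://<your-bucket>/model-store/model/model.pb .
echo "dummy weights" > model.pb      # stand-in so the tar step is runnable

# Package it the way the DeepRacer console does
tar -czf model1.tar.gz model.pb

# Verify the archive contents
tar -tzf model1.tar.gz               # should list model.pb
```

Copy the resulting model1.tar.gz onto your USB stick and carry on with the steps below.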

How to load your model, unfortunately, seems to be missing from the documentation so far, and it’s not something I could find online. There is most likely a better way of doing this, so if you find a smoother method, let me know.

To complete this you will need to log in to the onboard Ubuntu server. Instructions on how to do this are found in my previous post, AWS DeepRacer – How to login to the Ubuntu Computer Onboard

Loading from DeepRacer Console via USB

You should be logged into the Ubuntu server

  1. Plug in your USB into one of the free USB ports
  2. Open up a terminal
3. Type in df -h to list all the mounted volumes and identify which is your USB stick (for me this was /media/deepracer/Lexar)
4. Then type in the following:
    1. sudo mkdir /opt/aws/deepracer/artifacts/<the name of the model>
    2. sudo cp <the path of your USB stick>/<the filename of the model gz> /opt/aws/deepracer/artifacts/<the name of the model>
    3. cd /opt/aws/deepracer/artifacts/<the name of the model>
    4. sudo tar -xvf <the filename of the model gz>
5. Below is a screenshot of my commands for your reference
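The same steps can be dry-run anywhere as a sanity check. Here a temp directory stands in for /opt/aws/deepracer/artifacts and the USB mount point (both assumptions), so no sudo is needed; on the car itself, use the real paths with sudo as in the steps above:

```shell
set -e
ARTIFACTS=$(mktemp -d)   # stands in for /opt/aws/deepracer/artifacts
USB=$(mktemp -d)         # stands in for /media/deepracer/<your stick>
MODEL_NAME=my-model      # any folder name you like

# Pretend model1.tar.gz is already on the USB stick
echo "dummy weights" > "$USB/model.pb"
tar -czf "$USB/model1.tar.gz" -C "$USB" model.pb

# The actual load: one folder per model, copy the archive in, extract it
mkdir -p "$ARTIFACTS/$MODEL_NAME"
cp "$USB/model1.tar.gz" "$ARTIFACTS/$MODEL_NAME/"
cd "$ARTIFACTS/$MODEL_NAME"
tar -xzf model1.tar.gz

# model.pb should now sit next to the archive
ls "$ARTIFACTS/$MODEL_NAME"
```

The folder name under artifacts is what the web server shows in the model drop-down.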

Checking that the model is now available

Now that you have placed the files in the correct directory, you can go back to your AWS DeepRacer web server:

  1. Log in to the DeepRacer web server with the password on the bottom of your car.
  2. You should now see the Autonomous option, with a drop-down allowing you to choose the model by the name of the folder you chose

You should now be able to click the Start button, and the AWS DeepRacer will drive using the loaded model.

In the next few posts, we will explore running simulations with AWS RoboMaker and examine how we can change the reward function as well as other hyperparameters.

AWS DeepRacer – How to login to the Ubuntu Computer Onboard

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

This post assumes you have followed the AWS DeepRacer Getting Started Guide which gets you to the point of being able to manually drive the car.

To deepen your understanding of the AWS DeepRacer, or to troubleshoot tricky technical issues, it may become necessary to log in to the Ubuntu server onboard the AWS DeepRacer. This post describes the steps required to achieve this.

What you will need:

  1. AWS DeepRacer
  2. HDMI cable
  3. A HDMI capable screen
  4. USB keyboard and mouse
  5. USB-C Power Adapter 
  6. Optional – Non-US power socket adapter

Plugging everything in.

  1. The HDMI cable goes from the AWS DeepRacer to the HDMI port on your screen.
  2. Plug your keyboard and mouse into any of the free USB ports on the DeepRacer.
  3. Plug the USB-C power adapter into the USB-C port on the right-hand side of the DeepRacer.

Starting the server

Once everything is plugged in, push the power button. You should see Ubuntu boot up and stop at a login screen.

For my DeepRacer the login details were:

username: deepracer

password: deepracer

Once you have typed in the password, you can now use the Ubuntu server for other tasks, for example:

  • examine the /opt/aws/deepracer folder
  • as part of this blog series, try loading your own model, which can be found here.