Microsoft Teams and IoT Controlled Robotics — The IoT Device

This is the third installment of a four-part series on using Microsoft Teams and Azure services to collaborate with machines and devices. In the previous posts, I described how Teams and the Azure Bot Service work together to send commands to the IoT device attached to the RoboRaptor. This post describes the IoT hardware and how the RoboRaptor connects to the MXCHIP.

To recap, Teams messages are sent from the Teams user interface to our Azure bot for analysis. The command is then sent from the Azure bot to the Azure IoT Hub, which forwards the command message to the MXCHIP mounted on the back of the RoboRaptor. The MXCHIP then translates the received command into IR pulses for direct input to the RoboRaptor.

The factory version of the RoboRaptor is controlled through a handheld infrared controller. In order to send commands to the RoboRaptor, I first had to read and analyze the IR pulse stream sent from the factory controller. For this I created another project using an Arduino Uno R3 and an IR receiver; there is plenty of prebuilt free code on GitHub.

The Teams-controlled RoboRaptor controller.

As I pushed each button on the controller I recorded a hex signal from the Uno R3 serial port. The diagram below shows the codes received.
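The recorded codes lend themselves to a simple lookup table. Since the diagram isn't reproduced here, the sketch below (in Python, although the device code is Arduino C++) uses the forward code 0x186 quoted later in this post; the other values are hypothetical placeholders, not the recorded codes.

```python
# Map of RoboRaptor IR command codes read from the factory controller.
# Only the "forward" code (0x186) appears in this post; the other
# values are hypothetical placeholders.
IR_CODES = {
    "forward": 0x186,
    "backward": 0x187,  # hypothetical
    "left": 0x188,      # hypothetical
    "right": 0x189,     # hypothetical
}

def code_for(command):
    """Return the 12-bit IR code for a named command, or None if unknown."""
    return IR_CODES.get(command.lower())
```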

My second key objective was to remotely activate hardware by sending commands that trigger an attached relay module, supplying power to an external device. In this project the aim was to activate the power switch of the RoboRaptor and the laser remotely with Teams commands. I used a 2PH63091A dual optical relay for this role, mounted on the RoboRaptor's belly.

Connection Diagram



To activate the relays, I needed to connect the relay input signal ports to external pins on the MXCHIP. Switching an MXCHIP external pin low or high activates or deactivates the relay.

The Arduino code to configure the MXCHIP pins is as follows.

To control the signals sent to external sensors and relays, I need to assign a logical Arduino pin in code that maps to a physical MXCHIP pin wired to an external relay switch. For example, to switch on power to the RoboRaptor I assign logical pin 45 in code. Physical pin 45 on the MXCHIP is wired to the relay's input trigger. When the pin goes low, the relay activates, closing its contacts and supplying power to the RoboRaptor.
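The active-low behaviour described above can be modelled in a few lines. This is a Python sketch of the logic only, not the actual Arduino listing; the pin constant is the logical pin 45 from the post.

```python
# Logical pin 45 is wired to the relay's input trigger (from the post).
RELAY_PIN = 45
LOW, HIGH = 0, 1

def relay_closed(pin_level):
    """The relay is active-low: driving the trigger pin LOW closes the
    contacts and supplies power; driving it HIGH opens them again."""
    return pin_level == LOW
```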

Project Libraries:

My development platform for the project is Microsoft Visual Studio Code. The key libraries required are AZ3166WiFi.h for the Wi-Fi role, AzureIotHub.h for managing Azure IoT Hub connectivity, and DevKitMQTTClient.h for managing cloud-to-device and device-to-cloud messaging. The other libraries manage MXCHIP hardware and sensors. The MXCHIP library has its own IrDA infrared library; its documentation was very light, so I created my own function code to control the transmission of infrared pulses and commands.


The following code sets up the IR carrier and a 12-bit command code. The MXCHIP onboard LED works fine; however, I found I needed to add an additional external IR LED, as the signal was degraded when I mounted the MXCHIP board on the back of the RoboRaptor.
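Since the listing itself isn't reproduced above, here is a minimal Python sketch of the first half of that job: splitting the 12-bit command into bits before they are keyed onto the carrier. The most-significant-bit-first order is an assumption, and the carrier frequency and pulse timings are omitted because the post doesn't state them.

```python
def to_bits(code, width=12):
    """Split a command code into its bits, most significant bit first,
    ready to be keyed onto the IR carrier one pulse at a time."""
    return [(code >> i) & 1 for i in range(width - 1, -1, -1)]

# 0x186 is the "forward" code quoted in this post.
forward_bits = to_bits(0x186)
```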




The void loop() function contains the main body of code and runs continuously, checking for Wi-Fi connectivity and newly received MQTT messages. The following code shows the continuous monitoring of a system tick heartbeat. If the Wi-Fi connection is up, the IoT device sends telemetry data to the IoT Hub. For this project I send temperature and humidity readings every few seconds to the IoT Hub for processing. The IoT Hub routes the telemetry messages to Azure Blob Storage and makes them available for Power BI analytics.
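A device-to-cloud telemetry payload of that shape might look as follows. This is a Python sketch, and the JSON field names are assumptions, since the post doesn't show the actual schema.

```python
import json

def telemetry_message(temperature, humidity):
    """Build the device-to-cloud telemetry payload sent every few seconds.
    Field names are assumptions; the post does not show its schema."""
    return json.dumps({"temperature": temperature, "humidity": humidity})

msg = telemetry_message(22.5, 48.0)
```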

The DevKitMQTTClient function then checks for any new MQTT cloud-to-device messages. Any new message is compared to the known command strings. If there is a match, the command function activates and calls the IR transmission function; otherwise the void loop repeats.
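The match-and-dispatch step can be sketched like this. It is a Python stand-in for the device code, and the handler functions are hypothetical; on the MXCHIP the handlers would transmit the matching IR sequence.

```python
def handle_command(text, commands):
    """Compare an incoming cloud-to-device message against known command
    strings; on a match, run the handler that transmits the IR sequence."""
    handler = commands.get(text.strip().lower())
    if handler is None:
        return False  # no match: the loop simply repeats
    handler()
    return True

# Hypothetical usage: record which IR code a match would transmit.
sent = []
handle_command("Forward", {"forward": lambda: sent.append(0x186)})
```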


The RoboRaptor only requires a basic command to understand the intent of the user's message. The basic intent of moving forward just needs to be acknowledged and resent to the RoboRaptor as IR pulse 0x186. I used the direct method for sending commands to the IoT MXCHIP device.

The direct method is an excellent lightweight message type that contains a method name and a small message payload.

The message format has two parts: a method name as the header and the payload message as the body. For example, method name = forward and payload = robo moving forward.
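That two-part structure can be represented as a small dictionary. This is a Python sketch using the example names from the post; the dictionary keys are illustrative, not a wire format.

```python
import json

def direct_method(method_name, payload_text):
    """A direct method call in two parts: the method name as the header
    and a small JSON payload as the body."""
    return {"methodName": method_name,
            "payload": json.dumps({"message": payload_text})}

call = direct_method("forward", "robo moving forward")
```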

The code below shows how I check only the methodName variable; if I get a match for the method name, I run a function that sends the correct IR signals to the RoboRaptor.

How to create an IoT Hub and register a device for the IoT DevKit:

The IoT Hub is the Azure service with which you register your endpoint devices. The free tier lets you register a device and has capacity for 8,000 messages per day. Adding a new device to the IoT Hub is as simple as selecting the Add Device button in the IoT Hub menu. You will be asked to supply a device ID name. When the resource is configured, it creates a new hostname URL and a set of primary and secondary keys and connection strings. These values need to be saved, as they are required by the IoT device to securely connect to the IoT Hub service.
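A device connection string follows the standard HostName/DeviceId/SharedAccessKey format. The sketch below parses one; the hostname and key are made up, while the device name IOT3166keith is the one used in this series.

```python
def parse_connection_string(cs):
    """Split an IoT Hub device connection string into its parts.
    maxsplit=1 matters: SharedAccessKey values are base64 and may
    themselves contain '=' padding."""
    return dict(part.split("=", 1) for part in cs.split(";"))

# Hypothetical example values:
example = "HostName=myhub.azure-devices.net;DeviceId=IOT3166keith;SharedAccessKey=abc123="
parts = parse_connection_string(example)
```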


Setting the connection string:

Now that I have created an IoT Hub device in the cloud, I need to set up the physical device with the device URL and secure connection string. In Visual Studio Code I press F1 to bring up a list of commands and select Configure Device > Config Device Connection String. The menu prompts me to supply the device FQN and connection string. Once I have submitted the info, the IoT device (MXCHIP) can connect to the IoT Hub service.

The last installment of the RoboRaptor project will look into adding facial recognition. The objective is to use a camera to capture images and compare them to a saved photo of myself. If Face ID finds a match, the RoboRaptor will come towards me.

Links to other posts in this series:

  1. Intelligent Man to Machine collaboration with Microsoft Teams – RoboRaptor
  2. Microsoft Teams and IoT Controlled Robotics – The Bot
  3. Configuring Facial Recognition – COMING SOON

Microsoft Teams and IoT Controlled Robotics — The Bot

Part 2 of a 4-part series on Teams-controlled robotics

Part 1

Microsoft Teams is an excellent collaboration tool for person-to-person communication workloads like messaging, voice, and video collaboration. Microsoft Teams can also use Microsoft AI and cognitive services to collaborate with machines and devices. The Azure suite of services allows person-to-machine control, remote diagnostics, and telemetry analytics of internet-connected devices.

To demonstrate how Microsoft Teams can control remote robotics, I have created a fun project that allows Teams to manage a RoboRaptor through natural language messages. The objective is to send control commands from Teams as natural language messages to a Microsoft AI bot. The bot then uses the Azure Language Understanding service (LUIS) to determine the command intent. The result is sent to the Internet of Things controller card attached to the RoboRaptor for translation into machine commands. Finally, I have configured a Teams channel to the Azure Bot Service; in Teams it looks like a contact with an application ID. When I type messages into the Teams client, they are sent over the channel to the Azure Bot Service for processing. The RoboRaptor command messages are then sent from the bot or functions to the Azure IoT Hub service for delivery to the physical device.

The overview of the logical connectivity is below:


The Azure services and infrastructure used in this IoT environment are extensive and fall into five key areas.



  1. The blue services belong to Azure AI and machine learning, including chat bots and cognitive services.
  2. The orange services belong to Azure compute and analytics.
  3. The green services belong to the Azure Internet of Things suite.
  4. The yellow services are IoT hardware, switches, and sensors.
  5. The white services are network connectivity infrastructure.

The Azure Bot Service plays an essential part in the artificial intelligence and personal assistant role by calling and controlling functions and cognitive services. As the developer, I create code that collects instant messages from web chats and Teams channels, gathers key information, and determines the intent of the user.

Question and Answer Service:

In this project I want to be able to deliver a help menu. When users ask for help with the commands they can use with the RoboRaptor, I want to return a Teams card listing all commands and their resulting actions. The Azure Q&A service is best suited for this task. It is an excellent repository for a single question with a single reply and no processing: you build a list of sample questions, and on a match it replies with the assigned text. It is best for frequently-asked-questions scenarios.
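That single-question, single-reply behaviour amounts to a lookup with no processing. A minimal Python sketch follows; the FAQ entry and command list are hypothetical stand-ins for what the real Q&A service would store.

```python
# A hypothetical FAQ table; the real entries live in the Q&A service.
FAQ = {
    "help": "Commands: forward, backward, left, right, stop, dance.",
}

def answer(question):
    """Return the canned reply for a matched question, with no processing."""
    return FAQ.get(question.strip().lower())
```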

I can use the BOT to collect information from the user and store it in dialog tables for processing. For example, I can ask for a user’s name and store it for replies and future use.

Sending commands

I wanted to be able to use natural language to forward commands to the RoboRaptor. Because Teams is a collaboration tool, anyone on the team with permissions for this bot can also send commands to IoT robotic devices, and team members can phrase a request in many different ways. Sure, I could just assign a single word to an action like forward, but if I want to string commands together I need the Azure LUIS service and bot arrays to build an action table. For example, I can build a bot that replicates talking to a human through the Teams chat window.

As you can see the LUIS service can generate a more natural conversation with robotics.

How do I use the LUIS service?

The LUIS service is a repository of intents and key phrases. The diagram below shows an entry I created to determine the intent of a user's request and check its confidence level.

I have a large list of intents that equate to RoboRaptor command requests, like move forward and stop. It also includes intents for other projects, like collecting names and phone numbers, and can contain all my home automation commands too.

In the example below, I have the intent that I want the RoboRaptor to dance. Under the dance intent I have several ways of asking the RoboRaptor to dance.


The LUIS service returns to the bot the intent dance and a score indicating how confident it is of the match. The following bot code evaluates the returned intent and score; if the confidence score is above 0.5, the bot initiates a process based on a case match. I created a basic Azure Bot Service app in Visual Studio 2017. You can start with the Hello World template and then build dialogs and middleware to other Azure services like QnA Maker and the LUIS service.
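The evaluation logic (take the top-scoring intent and act only when its score clears 0.5) can be sketched in Python; the post's actual code is C#, and topScoringIntent is the field name used by the LUIS v2 JSON response.

```python
def choose_intent(luis_result, threshold=0.5):
    """Return the top-scoring intent name only if its confidence score
    clears the threshold; otherwise return None."""
    top = luis_result["topScoringIntent"]
    return top["intent"] if top["score"] > threshold else None
```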

In our case the intent is dance, so the Sendtoraptor process is called with the command string dance.


A series of direct method commands is then invoked on the IoT device. The method name forward and a message payload "dance fwd" are sent to the IoT Hub service, addressed to the device named "IOT3166keith", which is my registered MXCHIP. A series of other moves follows to give the impression that the RoboRaptor is dancing.


if (robocmd == "dance")
{
    // forward 4, then back 4, then right 4, then forward 4, left 4
    methodInvocation = new CloudToDeviceMethod("forward") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance fwd" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("backward") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance back" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("right") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance right" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    methodInvocation = new CloudToDeviceMethod("left") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = "dance left" }));
    response = await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation);

    // send stop signal
}





In the above code, the method invocation attributes are configured. new CloudToDeviceMethod("forward") sets up a cloud-to-device direct method with the method name forward, and SetPayloadJson configures the JSON payload message "dance fwd".

The await serviceClient.InvokeDeviceMethodAsync("IOT3166keith", methodInvocation) call initiates the asynchronous transmission of the message to the IoT Hub service and the device IOT3166keith.

The IoT Hub then sends the message to the physical device. The onboard OLED display shows commands as they are received.


The MXCHIP has many built-in environment sensors. I selected temperature and humidity as the data I wish to send to Power BI for analytics. Every few seconds the telemetry information is sent to the IoT Hub service.

I have configured message routing in the IoT Hub service so that telemetry messages reach the Stream Analytics service. I then parse the JSON and save the data in Azure Blob Storage, where Power BI can generate reports. More on this in the next blog.

The next blog will explore the IoT hardware and the IoT Hub service in more detail.


SharePoint Modern Sites hacktips – Manage Client-Side Pages via PnP PowerShell

Sometimes projects require full-width layout pages in Team Sites, or a page that is system-maintained so users cannot edit it. In this blog, we will look at some of these options and how easy it is to set them up using PnP PowerShell.

Set Full Width Layout pages using PnP PowerShell

By default, any new page created in a Team Site through the UI gets left-hand navigation (the Article layout). At present, it is not possible to create a full-width page using the UI, but we can do so using PnP PowerShell or an Azure Function.

In this blog, we will look at the PnP PowerShell way of creating it. We can also do it using an Azure Function, following the steps stated in the blog here.

For starting with PnP PowerShell, check the post here.

After connecting to the site, we can run the PowerShell below to create a page with a full-width layout. We can also set it as the home page when creating it.

Add-PnPClientSidePage -Name "Test.aspx" -LayoutType Home -PromoteAs HomePage

To change or set the layout type of an existing page, we can use the script below.

Set-PnPClientSidePage -Identity "" -LayoutType Home -PromoteAs HomePage

Some of the other layout types are Article and SingleWebPartAppPage. Article is the default for Team Sites and has left-hand navigation.

Create a Single Web Part Page

This option is beneficial when you need a page where you don't want users to add or update the web parts and are ready to set them programmatically. As is evident, this choice is not available through the UI. The created page doesn't have a command bar and is not editable from the UI, so users cannot modify it. However, since there is no UI, all changes must be made using PowerShell. Also, web part properties must be set as JSON using PropertiesJson.


So, there you go: this is how we can create full-width pages in Modern Team Sites and manage Modern Site Pages using PnP PowerShell.


How to make cool modern SharePoint Intranets Part 1 – Strategize (scope & plan)

Over the last year, we have seen many great advancements to SharePoint communication sites that have brought them much closer to being an ideal Intranet. In this blog series, I will discuss some of my experiences over the last few years implementing Modern Intranets, along with best practices.

In this specific blog, we will look at the first building block of a great Intranet: strategizing the Intranet approach.

So what should we be looking for in the new Intranet? The answer in most cases is easy reach and effective communication. To achieve this, we should plan around the headings below.

Shared Ownership

Practically, a single team cannot own the Intranet. It is a shared responsibility between the core business groups, who provide content, and IT, who provide tech support. Until this is defined effectively, there will be gaps preventing the Intranet from reaching its full potential.


It is important to plan the steps for an Intranet roll-out, not just the overall strategy: for example, design, build, then release to all groups at once (big bang) or progressively.

User Experience and Design

Over the years I have realised first-hand that user experience is critical for good adoption of any system, including an Intranet. It must look aesthetically appealing and be easy to use, so users can find what they want fast. So every Intranet needs a UX plan.


One of the key aspects of any Intranet is seamless adoption. No organisation will spend thousands of dollars teaching people how to use the Intranet, and for those considering it, the 'force it down their throats' approach just doesn't work. It is important to have a change and adoption plan for the team.

Prepare a Wishlist

It is important to prepare the wish list of expected items well before starting the implementation process. Most of the time, I have seen teams prolong it until the implementation phase, which delays the release date.

MVP (Minimum Viable Product) cycle


Generally, an Intranet is thought of as a single-shot solution that is prepared perfectly for its first release. But most of the time this approach doesn't work effectively: it adds strain and takes a long time to create an ideal Intranet. However, with SharePoint communication sites, we can make this process much simpler and faster.

The Intranet can become an evolving product where we implement the first stage with minimal viable requirements such as pages, navigation, and must-use corporate components such as news and information. Then we build a feedback mechanism into the solution that allows focus users and teams to provide responses on the likability and adoption of the Intranet.

After the first stage is built and ready, we start getting more feedback from business units and focus groups. In the next phase, we can implement these requirements, such as apps, solutions, and workflows, while expanding the scope of the Intranet.

Subsequently we keep adding more functionality with more cycles of design, build and feedback.


Using the above process, we can start with the strategy and plan for making a great Intranet. In the upcoming blogs we will look at more steps for building a great Intranet and start planning the next steps for it.

Great SharePoint Modern updates Feb 2019 #makespintranetcoolagain

During Ignite 2018, Microsoft showcased some great updates that are going to change how we implement SharePoint Intranets. It will scale Modern Communication sites to new heights to become the Next-gen Intranet.

Since Ignite 2018, there have been many steady releases and some great updates to SharePoint Modern communication sites. Following this blog, I plan to start a detailed blog series about how to build cool Modern Intranets in SharePoint #makespintranetcoolagain.

Here is a quick blog covering these updates.


This is the first time in SharePoint history that we have an out-of-the-box navigation menu that goes beyond a two-level hierarchy, allowing any depth of navigation. Here is a blog link mentioning this release.


Modern SharePoint Webparts

There are a few great additions to the existing list of SharePoint web parts. The blog below talks about these in more detail. Some of the notable ones are:

1. New Yammer Conversations web part

2. My Recent and My Documents

3. Code Snippet for Devs and Markdown for Authors

Updated SharePoint Page editing experience

Now we can style page headers and update backgrounds for sections in SharePoint pages. It is surely a valuable addition for designers and content authors looking for ways to improve the design and content real estate on their sites.


Column Formatting on the Fly

Applying column formatting is now way simpler than hand-writing the out-of-the-box column JSON formatter. We were always able to do it using JSON, but now we can set it directly in the UI designer. Isn't that fabulostatic!!

Site Settings and Apply Site Design updates

Now we can apply a site design to an existing site from the site menu and see the status of the process. There is more information in the link below.

Page Designs

These are still in development, but when released they will be a great addition for setting up page layouts in the Modern environment.

Improved SharePoint Admin Center experience

SharePoint admins rejoice!! Now we can set external sharing, change site admins, and filter on site properties (e.g. Hub Site) from the modern Admin Center. How cool is that?


Webpart Connections

This is a great announcement for SPFx developers. Now we can connect two web parts and send information between them, as mentioned below.


Above we saw some of the cool updates released recently for the Modern Experience. In the upcoming blog series – SharePoint Modern Intranet #makespintranetcoolagain – we will look at some of them in detail and apply them to real use cases.

Managing Microsoft Teams Devices

One of the coolest parts of the Office 365 Teams Admin Centre is the built in Devices management portal.

From here, you can view and control your deployed Microsoft Teams handsets, create and apply policies, push firmware updates, and even reboot devices.

The Dashboard

To access the dashboard, sign in to the Office 365 Portal, then choose Teams under Admin Centres.

From the menu on the left, choose Devices > Manage Devices

The dashboard will show you all Microsoft Teams devices that are registered to your Office 365 tenant.

At a quick glance, you can see:

  • Device Name
  • Manufacturer
  • Device Model
  • The last user who signed in
  • The current status (Offline, Online)
  • Any available updates
  • The last reported online time

This gives you a great snapshot of how healthy your Teams devices are looking right now.

A quick note too that only Microsoft Teams devices will appear. Your Skype for Business handsets (such as VVX500) won’t appear.

Device Information

Clicking a device's name takes you to the device information page, which contains further info, including the device's serial number.

User Information

To view user information about a device, click the username in the device list:

Firmware updates

Managing firmware updates for all Teams registered devices is super simple. You can view any new updates from the device management portal, and then with one click, install them.

From initial testing, once you have clicked install, the handset will download and begin installing the update as soon as it goes idle. Keep this in mind if you’re doing this throughout the day.

Rebooting handsets

Alright, I'll admit that I LOVE this simple feature. How many times have you had to instruct a user to unplug and replug a handset to reboot it? Well, now you can initiate a reboot right from within the portal!

Overall, the device management portal within the Teams Admin Centre is a really great addition, and a super simple way of managing your device deployment. What features would you like to see added?

Enabling Microsoft Teams Meeting Rooms

Enabling a meeting room within your organisation for Microsoft Teams is remarkably easy! In fact, the hardest decision you'll need to make is which device you want to use within the room. Luckily, even this is made simple by Microsoft's official Microsoft Teams Certified Devices list.

Alright, so you’ve chosen your in-room device, it’s been delivered, you’ve ripped the box open, thrown the plastic and packaging aside and connected the device to your PoE switch. Now what?

Office 365 and Room accounts

After carefully pulling the screen cover off your shiny new Microsoft Teams device, the next thing you’re going to need is a Room account.

If you’re not already familiar, room accounts are a kind of resource account in Exchange that appear in the Outlook room booking system, as well as appearing in Microsoft Teams as a bookable resource.

Why don’t we just use a generic user account?

You certainly can use a generic user account for room devices, but the account won't appear in Outlook or Teams as a bookable resource, and you also won't get the additional benefits such as auto-accept for room bookings, and information about the room itself such as location and occupancy size.

Creating the Room account

The easiest way of creating a new room account is via the Office 365 admin centre:

Choose Rooms & equipment under Resources

Next, click Add on the toolbar and fill out the form, ensuring you select Room under Type.

When done, click Save.

Licensing Options

Now that we have our room account, we're going to need to license it. There are a number of options available:

E3/E5 licensing
If you already have E3 with Phone System and Audio Conferencing, or E5, you can assign a license to the room account, although it is overkill.

Meeting room licensing
If you don't have spare E3/E5 licenses, or want to apply a more suitable license, you can purchase a Meeting Room license.

Meeting room licenses give you:

Skype for Business Online (plan 2)
Phone System
Microsoft Teams
Microsoft Intune
Audio Conferencing

Calling Plan
Whichever license you choose, you'll still need to assign a calling plan to your room account if you'd like to make outbound PSTN calls.

Assigning a Phone Number

Ok, you have your Room account, it’s licensed and now you’d like to assign it a phone number.

Just as you do with users in Microsoft Teams, phone numbers must be assigned via the Skype legacy portal in the Teams admin centre.

Choose Phone Numbers within the Voice section, select the phone number and click Assign

If you don’t have enough spare numbers remaining, request a new number by clicking the + sign.

Search for your meeting room name, and click Assign.

A few final steps

Before we go signing in as our new Teams meeting room account, there are a few things we need to do to improve the user experience.


Right now, your room account doesn't have a password. We'll need to give it one so the device can sign in.

To do this, jump over to the Users > Active Users area of the Office 365 Admin Centre.

Find the Room account in the list, click it and then click Reset Password

Either choose to generate a password, or type your own and click Save.

Password Expiry

Having room account passwords expire is a pain. Let's turn password expiry off for our room account.

Connect to Office 365 PowerShell using an admin account. If you've never done this before, Microsoft has an awesome article here about connecting to Office 365 in a single PowerShell window.

Once connected, run the following to turn off password expiry for our Room account:

Set-MsolUser -UserPrincipalName <RoomAccountUPN> -PasswordNeverExpires $true

Whilst we're here, let's add a MailTip to the room too, informing users of our cool new tech:

Set-Mailbox -Identity <RoomAccountUPN> -MailTip "This room is Microsoft Teams enabled, please book a Microsoft Teams meeting to take advantage of the enhanced meeting experience"

Signing in for the first time

Alright, our Room account is good to go! Let’s sign in to our room device using our newly created room account username and password.

Users will now be able to book the room either via Outlook or via the Microsoft Teams app. Once booked, they’ll simply walk in and tap their meeting on the device’s screen to join! No more help desk calls or bad meeting join experiences!

Office 365 Lessons Learned – Advice on how to approach your deployment

Hindsight sometimes gets the better of us. That poor stormtrooper: if only he'd had the advice before Obi-Wan Kenobi showed up. Clearly the Galactic Empire doesn't learn its lessons. Perhaps consultants use the Jedi mind trick to persuade their customers?

I’ve recently completed an 18 month project that rolled out Office 365 to an ASX Top 30 company. This project successfully rolled out Office 365 to each division & team across the company, migrating content from SharePoint 2007 and File Shares. It hit the success criteria, was on-budget & showed positive figures on adoption metrics.

Every project is unique and has lessons to learn, though we rarely take time to reflect. This post is part of that learning process: a brief list of lessons we learned along the way. It adds credibility when a consultancy can honestly say: yes, we've done this before, here's what we learned, and here's how it applies to your project. I believe professional services companies lose value when they don't reflect and learn, including on their successful projects.

Don’t try to do too much

We often get caught up in capabilities of a platform and want to deploy them all. A platform like Office 365 has an endless list of capabilities. Document Management, Forms, Workflow, Planner, Teams, New intranet, Records Management, Automated governance…the list goes on. Check out my recent post on how you can use the power of limitation to create value. When we purely focus on capabilities, we can easily lose sight of how overwhelming it can be to people who use them.  After all, the more you try to do, the more you’ll spend on consulting services.

Choose your capabilities, be clear on your scope, communicate your plans and road map future capability.

Design for day 1 and anticipate future changes

People in the business are not technology experts. They are experts in their relevant fields, e.g. finance, HR, risk, compliance. You can't expect them to know and understand all the concepts in Office 365 without using it. Once people start using Office 365, they start to understand, and then comes the Aha! moment. Now the change requests start flowing.

Design for day 1 and anticipate future changes. Resource your project so post-go-live activities include design adjustments and enhancements. Ensure these activities don't remove key resources from the next team you are migrating.

Respect business priorities

It's easy to forget that people have a day job; they can't be available all the time for your project. This is especially the case when important business processes like budgeting, results, and reporting are underway. You can't expect to migrate, or even plan a migration, during these periods; it just won't fly. If you are migrating remote teams, be mindful of events or processes that affect only them. There might be an activity that requires 100% of their time.

When planning & scheduling, be mindful of these priorities. Don’t assume they can carry the extra workload you are creating for them. Work with senior stakeholders to identify times when to engage or avoid.

Focus on the basics

Legacy systems that have been in place for years mean people are very comfortable with how to use them. Office 365 often has multiple ways of doing the same thing – take sharing, for example: there are 20+ ways to share the same content, through the browser, Office Pro Plus client, Outlook, OneDrive client, Office portal etc.

Too much choice is bad. Pick, train & communicate one way to do something. Once people become comfortable, they’ll find out the other ways themselves.

The lines between “Project” & “BAU” are blurred

New features & capabilities are constantly announced. We planned to deliver a modern intranet, which we initially budgeted as a 3-month custom development cycle. When it came time to start this piece of work, Microsoft had just announced Communication sites. Whilst the customer was nervous about adopting this brand-new feature, it worked out to be a good choice. The intranet now grows and morphs with the platform. Lots of new features have been announced; most recently we have megamenus, header & footer customisation, plus much more. This was great during the project, but what happens when the project finishes? Who can help make sense of these new features?

Traditional plan-build-run models aren’t effective for a platform that continuously evolves outside of your control. This model lacks value creation & evolution, and shifts the focus to reactive incident management. To unlock value, you need to build a capability that can translate new features into opportunities & pain points within business teams. This helps deepen the IT/business relationship & create value, not to mention helps with adoption.

What have you recently learned? Leave a comment or drop me an email!


Office365-AzureHybrid: Building an automated solution to pull Office 365 Audit logs

Custom reporting for Office 365 Audit logs is possible using data fetched from the Security and Compliance center. In the previous blogs here, we have seen how to use PowerShell and the Office 365 Management API to fetch the data. In this blog, we will look at the planning, prerequisites and rationale to help decide between the approaches.

The Office 365 Audit logs are available from the Security and Compliance center when enabled. At present, audit logging is not enabled by default; it can be turned on (if not done already) via Start recording user and admin activity on the Audit log search page in the Security & Compliance Center. Reportedly, Microsoft will be turning it on by default in future. Once enabled, audit information is tracked across all Office 365 services.

The Audit log search in the Security and Compliance center allows you to search the audit logs, but it is limited in what it provides, and it can take a long time to obtain results. All the cases below need custom hosting for better efficiency and performance.

Planning and prerequisites:

A few considerations for custom processes are as follows:

  1. Need additional compute to process the data – Create a periodic job to fetch the data from the Office 365 Audit log using a custom process, since the audit log data is large and queries take a long time. The options are a PowerShell job or an Azure Function App, as detailed below.
  2. Need additional hosting for storing Office 365 Audit log data – The records could range from 5,000 to 20,000 per hour depending on the data sources and data size. To make the data easier to retrieve later, store it in a custom database. Since the storage cost could be significant, use either dedicated hosting or NoSQL hosting such as Azure Tables/Cosmos DB (Azure) or SimpleDB/DynamoDB (AWS).
  3. Might need an additional service account or Azure AD App – The data will be retrieved using an elevated account at runtime, so use an Azure AD app or service account to gather the data. For more information about this, please refer to the blog here.
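As a sketch of point 2 above, a date-based partition key makes the stored records cheap to query by day later. This assumes Azure Table storage semantics (PartitionKey/RowKey) and a hypothetical record shape; field names like `CreationTime` and `Id` are illustrative:

```python
from datetime import datetime, timezone

def partition_key(creation_time: str) -> str:
    """Derive a date-based partition key (yyyyMMdd) from an audit
    record's CreationTime, assumed to be a UTC timestamp string."""
    dt = datetime.strptime(creation_time, "%Y-%m-%dT%H:%M:%S")
    return dt.replace(tzinfo=timezone.utc).strftime("%Y%m%d")

def to_entity(record: dict) -> dict:
    """Map one audit record to a table entity. Using the record's
    unique Id as the RowKey makes re-running a pull idempotent:
    a repeated insert overwrites rather than duplicates."""
    return {
        "PartitionKey": partition_key(record["CreationTime"]),
        "RowKey": record["Id"],
        "Operation": record.get("Operation", ""),
        "UserId": record.get("UserId", ""),
    }
```

With this layout, a query for one day's activity only scans a single partition instead of the whole table.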


Some scenarios where the Office 365 Audit log data could be useful:

  1. Create custom reports for user activities and actions
  2. Store audit log data for longer than 90 days
  3. Custom data reporting and alerts which are not supported in the Security and Compliance center


Below are a few approaches to pull the data from the Office 365 Audit logs, along with the benefits and limitations of each to help decide on an implementation.

Using PowerShell

The Search-UnifiedAuditLog cmdlet in Exchange Online PowerShell can be used to retrieve data from the Office 365 Audit log. More implementation details can be found in the blog here.


Benefits:

  1. Doesn’t need additional compute hosting. The PowerShell job can run on a local system with a service account, or on a server.
  2. A one-off data pull is possible, and the data can be retrieved later.
  3. Able to retrieve more than 90 days of data from the Office 365 Audit log.
  4. No session timeout constraints as long as the PowerShell console stays active.
  5. Local date filtering applies while searching; no need to convert to GMT formats.


Limitations:

  1. Needs tenant admin rights when connecting to Exchange Online PowerShell to download the cmdlets.
  2. Needs an active connection to Exchange Online PowerShell every time it runs.
  3. Not possible to run on Azure or AWS at present, as connecting to the Exchange Online PowerShell cmdlets is not possible in a serverless environment.
  4. Needs a long active window, as the job could run for hours depending on the data.

Using Office 365 Management API:

The Office 365 Management API provides another way to retrieve data from the Office 365 Audit logs, using a subscription service and an Azure AD app. For more detailed information, please check the blog here.


Benefits:

  1. Supports any language, such as C#, JavaScript, Python etc.
  2. Parallel processing allows greater speed and flexibility of data management.
  3. Controlled data pulls, sized to the data, increase efficiency and performance.
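To illustrate the parallel-processing point, the sketch below fans out blob fetches across a thread pool. This is a minimal sketch, not the API's own client: `fetch` stands in for whatever authenticated HTTP call retrieves one content blob (each blob being a list of audit records), and `fetch_all` is a hypothetical helper name:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(content_uris, fetch, max_workers=8):
    """Fetch audit content blobs in parallel and flatten the results.

    `fetch` is any callable that takes one content URI and returns
    that blob's list of audit records. Results keep the input order
    because ThreadPoolExecutor.map preserves ordering.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        blobs = pool.map(fetch, content_uris)
        # Flatten the per-blob record lists into one list of records.
        return [record for blob in blobs for record in blob]
```

Since each blob fetch is I/O-bound, threads (rather than processes) are usually enough to get a large speed-up over a sequential loop.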


Limitations:

  1. Needs additional compute hosting (serverless workloads or web jobs) to process the data.
  2. Needs an Azure AD app or OAuth layer to connect to the subscription service.
  3. Needs additional time-zone processing, since all dates are in GMT when retrieving data.
  4. Session timeouts might occur when pulling large datasets, so it is advisable to use smaller time-slot windows for each pull.
  5. A multi-level data pull is required to fetch the audit log. Please check the blog here for more information.
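Limitations 3 and 4 above can be handled together: convert everything to UTC once, then split the overall range into small windows and pull each window separately. A minimal sketch, assuming the API accepts `yyyy-MM-ddTHH:mm:ss` UTC strings for its start/end query parameters (the exact format should be checked against the API documentation):

```python
from datetime import datetime, timedelta, timezone

def hourly_windows(start, end):
    """Split [start, end) into one-hour UTC windows, each formatted
    as yyyy-MM-ddTHH:mm:ss, ready to use as start/end parameters
    for per-window data pulls."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    windows = []
    cursor = start.astimezone(timezone.utc)
    end = end.astimezone(timezone.utc)
    while cursor < end:
        # Clamp the last window so it never runs past `end`.
        nxt = min(cursor + timedelta(hours=1), end)
        windows.append((cursor.strftime(fmt), nxt.strftime(fmt)))
        cursor = nxt
    return windows

# Example: a three-hour range yields three one-hour slots.
windows = hourly_windows(
    datetime(2019, 6, 1, 0, 0, tzinfo=timezone.utc),
    datetime(2019, 6, 1, 3, 0, tzinfo=timezone.utc),
)
```

Pulling one window at a time keeps each request small enough to finish well inside any session timeout, and a failed window can be retried on its own.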

Final Thoughts

Both PowerShell and the Office 365 Management Activity API are great ways to fetch Office 365 Audit log data for custom reports. The points above can help decide on an approach to fetch and process the data efficiently. For more details on the steps of the process, please check the blogs here (PowerShell) and here (Office 365 Management API).

Analogue Devices and Microsoft Teams

Last week, I was working through a technical workshop with a customer who wanted to make the move to Microsoft Teams. We’d worked through the usual questions, and then the infamous question came: So… are there any analogue devices still in use? “Yeah, about 50 handsets”. You’d be forgiven for thinking that analogue handsets were a thing of the past. However, much like the fax machine, there’s still a whole lot of love out there for them. There are many reasons for this, but the ones most often heard are:
  • A basic analogue handset fits the requirement – There’s no need for a fancy touch screen.
  • It’s a common area phone – hallways, lifts, stairwells, doors, gates etc
  • It’s a wireless DECT handset – this may include multiple handsets and base stations.
  • It’s something special – like a car park barrier phone or intercom system
  • It’s in a difficult to reach or remote location – such as a shed or building located away from the main office
  • There’s no power or ethernet cabling to this location – it’s simply using a copper pair.
Whatever the reason, in almost all cases I have encountered, the customer has a requirement to have a working phone at that location. This means we need a plan for how we’re going to handle these analogue devices once we’ve moved to Microsoft Teams.

So, what’s the plan? Firstly, check and confirm with the customer that they actually still need the handset at that location. There’s always a possibility that it’s no longer required. As mentioned above though, this seldom happens. Once you’ve confirmed the phone is still required, figure out if it can be replaced with a Microsoft Teams handset. Currently, there are a small number of Microsoft Teams handsets available from Yealink and AudioCodes:
  • Yealink T56A
  • Yealink T58A
  • Audiocodes C450HD
Some things to consider with this approach:
  • Availability of networking and PoE – These phones will require a network connection, and can be powered via PoE.
  • Is this a noisy environment? – If the old analogue device was connected to a separate external ringer like a bell or light, this will need to be replaced too.
What if I can’t replace the handset with a Teams compatible device?

There will be times when you simply can’t replace an old analogue device with a Teams compatible handset. This could be as simple as there not being ethernet cabling at that location, or the analogue device being built into something like a car park barrier or emergency lift phone. Most of the time, your customer is going to want to keep the same level of functionality on the device. The best news is, there are a number of ways to achieve this!

Options

You’ve got a few options here:

Option 1: Do… nothing

You’ve read that right. Do nothing. Your PABX is already configured to work with these devices. If you can leave the PABX in place, as well as the PSTN connectivity, these devices can remain connected to the PABX and happily continue to work as they always have. If you have this as an option, great! Most of us don’t though.

Option 2: Deploy Microsoft Teams Direct Routing

Alright, so the PABX has to go. What now? Microsoft Teams Direct Routing is the answer. Direct Routing involves deploying a compatible session border controller (SBC) on premises, which allows you to connect up your analogue devices and convert them to SIP. Here’s a simplified overview of how it works.

With this approach, your analogue devices and Microsoft Teams users can call each other internally, and you get to keep your existing ISDN or SIP provider for PSTN calls. You can deploy this solution to many different sites within your organisation, and you can even route calls between SBCs so analogue devices at different sites can make internal calls to each other.

What if we’ve gone down the Microsoft Online-only path?

If you’re already making and receiving calls via Microsoft Phone System and Calling Plans in Office 365, you’ll need to deploy Direct Routing at locations where analogue devices still require connectivity.

I’m ready to delve into this

Awesome!
Microsoft has plenty of helpful documentation on Direct Routing. And as usual, if you have any questions, feel free to leave a comment below.
Kloud Solutions Blog - Follow Us!