Create Office365 business value through the power of limitation

Recent consulting engagements have had me helping customers define what Office365 means to them & what value they see in its use. They already hold licenses and are seeking help to understand how to drive value from the investment.

You’ve heard the sales pitches: Office365 – The platform to solve ALL your needs! From meetings, to document management, working with people outside your organisation, social networking, custom applications, business process automation, forms & workflow, analytics, security & compliance, device management…the list goes on and is only getting bigger!

When I hear Office365 described – I often hear attributes religious people give to God.

  • It’s everywhere you go – Omnipresent
  • It knows everything you do – Omniscient
  • It’s so powerful it can do everything you want – Omnipotent
  • It’s unified despite having multiple components – Oneness
  • It can punish you for doing the wrong thing – Wrathful

It’s taking on a persona – how long before it becomes self-aware!?

If it can really meet ALL your needs, how do we define its use? Do we just use it for everything? Where do we start? How do we define what it means if it can do everything?

Enter limitation. Limitation is a powerful idea that brings clarity through constraint. It’s the foundation on which definition is built. Can you really define something that can do anything?

The other side would suggest limiting technology constrains thinking and prevents creativity.  I don’t agree. Limitation begets creativity. It helps zero-in thinking and helps create practical, creative solutions with what you have. Moreover, having modern technology doesn’t make you a creative & innovative organisation. It’s about culture, people & process. As always, technology is a mere enabler.

What can’t we use Office365 for?

Sometimes it’s easier to start here. Working with Architecture teams to draw boundaries around the system helps provide guidance for appropriate use. They have a good grasp on enterprise architecture and the reasons why things are the way they are. It helps clearly narrow use cases & provides a definition that makes sense to people.

  • We don’t use it to share content externally because of…
  • We can’t use it for customer facing staff because of…
  • We don’t use it for Forms & Workflow because we have <insert app name here>
  • We don’t use it as a records management system because we have…

Office365 Use cases – The basis of meaning

Microsoft provide some great material on generic use cases. Document collaboration, secure external sharing, workflow, managing content on-the-go, making meetings more efficient etc.  These represent ideals and are sometimes too far removed from the reality of your organisation. Use them as a basis and further develop them with relevance to your business unit or organisation.

Group ideation workshops, discussions & brainstorming sessions are a great way to help draw out use cases. Make sure you have the right level of people, not too high & not too low. You can then follow up with each, drill into the detail and see the value the use case provides.

Get some runs on the board

Once you’ve defined a few use cases, work with the business to start piloting. Prove the use case with real-life scenarios. Build the network of people around the use cases and start to draw out and refine how it solves pain, for here is where true value appears. This can be a good news story that can be told to other parts of the business to build excitement.

Plan for supply & demand

Once you have some runs on the board, if successful, word will quickly get out. People will want it. Learn to harness this excitement and build off the energy created. Be ready to deal with a sudden increase in demand.

On the supply side, plan for service management. How do people get support? Who supports it? How do we customise it? What’s the back-out plan? How do we manage updates? All the typical ITIL components you’d expect should be planned for during your pilots.

Office365 Roadmap to remove limitations & create new use cases

Roadmaps are a meaningful way to communicate when value will be unlocked. IT should have a clear picture of what business value is and how it will work to unlock the capabilities the business needs in order to be successful.

Roadmaps do a good job of communicating this, though typically they are technology focused. This might be a great way to help unify the IT team, but people on the receiving end won’t quite understand. Communicate using their language in ways they understand, i.e. what value it will provide them, when, & how it will help them be successful.

AWS DeepRacer – Tips and Tricks – Battery and SSH

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

I was going to do an unboxing video, but Andrew Knaebel has done a good enough job of that and posted it on YouTube, so I’ll skip that part and move on to more detail on getting up and running with the AWS DeepRacer.

A lot of this is covered in the AWS DeepRacer Getting Started Guide so I’ll try and focus on the places where it was not so clear.

Before we get started, there are a few items you will need in order to follow along with this blog:

  • AWS DeepRacer physical robot
  • USB-C Power adapter
  • PowerBank with USB-C connector
  • 7.4V 1100mAh RC car battery pack
  • Balance Charger for RC battery pack
  • If not in the US, a power socket adapter

Connecting and Charging

When I followed the AWS getting started guide, I found the instructions left out a few minor details that make your life easier going forward. Below is a way of avoiding pulling apart the whole car every time you charge it.

1. Install the USB-C PowerBank on top of the vehicle with its USB-C port towards the right-hand side, near the USB-C port on the vehicle

2. Install the RC battery by taking off the 4 pins and (GENTLY, as there are wires connected) moving the top compute unit to the side as below. Ensure you leave the charging cable and power sockets accessible, as you don’t want to be unpinning the car every time

3. Connect the USB-C power adaptor to the USB-C port on the PowerBank and connect the Balance charger to the charging cable of the battery

4. Wait for the PowerBank to show four solid lights to signify it’s charged, and for the charge light on the balance charger to be off, which lets you know the RC battery is ready

Opening up SSH to allow for easier model loads

I’m sure AWS are working hard to include AWS IoT Greengrass capabilities to allow users to push their latest model to the AWS DeepRacer, but for now it looks like that isn’t an option.

Another nice feature would be the ability to upload the model.pb file via the AWS DeepRacer local Web server. Alas we currently need to put files onto USB sticks.

There is another way for the moment, and that’s to open up SSH on the AWS DeepRacer firewall and use SCP to copy the file into the correct location.

Firstly, you will need to log in to the Ubuntu server on the AWS DeepRacer. For instructions on how to achieve this please refer to my previous post, AWS DeepRacer – How to login to the Ubuntu Computer Onboard

1. Once logged in, open up a terminal

2. Type in: sudo ufw allow ssh
From now on you will be able to log in via SSH

3. On another machine, you should now be able to log in via SSH: ssh deepracer@<ip address of your DeepRacer>

4. Copy your model.pb file to your DeepRacer home directory via SCP (I used WinSCP; a scripted alternative is sketched after these steps)

5. Move the file to /opt/aws/deepracer/artifacts/<folder name of your choice>/model.pb

6. You’re done! Enjoy being able to load a model without a USB stick
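If you’d rather script step 4 than use WinSCP, here is a minimal Python sketch using the paramiko library. The IP address, file names and model folder are illustrative assumptions, and it assumes the default deepracer credentials are unchanged.

import paramiko

# Illustrative values only - substitute your car's IP address and your own model folder name
DEEPRACER_IP = '192.168.0.21'
MODEL_DIR = '/opt/aws/deepracer/artifacts/my-model'

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(DEEPRACER_IP, username='deepracer', password='deepracer')  # default credentials

# Copy the trained model into the home directory over SFTP
sftp = ssh.open_sftp()
sftp.put('model.pb', '/home/deepracer/model.pb')
sftp.close()
ssh.close()
# Then move /home/deepracer/model.pb into MODEL_DIR on the car itself, as in step 5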

There are other Tips and Tricks coming as I experience the AWS DeepRacer ecosystem.

AWS DeepRacer – Training your reinforcement learning model in AWS Robomaker

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

There seem to be many ways to get your AWS DeepRacer model trained. These are a few I have discovered:

  • The AWS DeepRacer Console (Live Preview yet to commence, GA early 2019)
  • SageMaker RL notebook
  • Locally from the DeepRacer GitHub repository
  • AWS RoboMaker sample simulation
  • AWS RoboMaker Cloud9 IDE with sample application downloaded

In this post, we will explore how to train a reinforcement learning model using AWS RoboMaker, both with the sample application and in the Cloud9 development environment. The reason I chose to run through both is that AWS has created a fully automated deployment using CloudFormation that does all the work for you, but it does not give you the ability to modify the code. If changing the reward function and other parts of the ROS application is what you want, then the Cloud9 development environment is the way to go: you can modify the code, and it takes you through most of the simulation workflow.

As for taking you through the Proximal Policy Optimization (PPO) algorithm, that will require its own post, as there is a heap of prerequisites to understanding it. AWS has leveraged Intel’s RL Coach, which is a platform of packaged RL algorithm implementations, including Clipped PPO.

What you will need:

  • An AWS account
  • Access to create IAM roles within the AWS account

Running the Fully Autonomous Sample Application

1. Log into your AWS account

2. Switch to the us-east-1 region (N.Virginia) 

3. Navigate to the AWS RoboMaker service

4. Click on the sample application link on the left 


5. Scroll down the list and select Self-driving using reinforcement learning and then click the Launch simulation job button

6. You can now view the simulation job from the Simulation jobs section

7. If you want to watch the DeepRacer working in the simulation, click on the Gazebo icon (it does not show up until the simulation is in the Running state)

Running the Cloud9 development environment with a custom reward function

1. Log into your AWS account

2. Switch to the us-east-1 region (N.Virginia) 

3. Run the above sample simulation, as this creates all the resources you will need for the following steps. Alternatively, you can create the resources manually when you get to the roboMakerSettings.json section below

4. Navigate to the AWS RoboMaker service

5. Click on the development environments section 

6. Click the create environment button

7. Fill out the Name, Instance type, VPC, and Subnets fields then click Create as in Figure 1.0 below

Figure 1.0

8. Once created click Open environment button to open Cloud9

9. The Cloud9 environment should look like Figure 1.1 below

10. Click on RoboMaker Resources and select Download Samples then 6. Self-Driving using Reinforcement Learning

11. You should now have two folders under the root path. Delete the roboMakerLogs folder and the base JSON file, as they contain duplicate config files and deleting them saves confusion later

12. Now it’s time to modify the roboMakerSettings.json file under the DeepRacer folder. This file has a number of values that need to be filled out; I used the output of the sample simulation job run in the first part of this post to populate them.

13. Ensure you add the “robotSoftwareSuite” section on line 43 (as of writing this post), as the simulation fails without it. The RoboMaker API must have changed since the author of this bundle wrote it.

14. Also pay attention to the “maxJobDurationInSeconds”: 28800 setting and set the simulation job to something less than 8 hours, unless you feel you need that time.

15. Save the file, as Cloud9 doesn’t seem to auto-save your changes

16. At this point you may want to modify the default reward function in the bundle (a simplified example is sketched after these steps). It is located inside the OpenAI Gym environment definition file under: DeepRacer/simulation_ws/src/sagemaker_rl_agent/markov/environments/deepracer_env.py

17. Click on the Robomaker Run button on the top-right, navigate to the Workflow menu option and select DeepRacer – Build and Bundle All

18. Monitor the tabs below and look for exit code 0 on the bundle tab.

19. Click on the RoboMaker Run button on the top-right, navigate to Launch Simulation and click on DeepRacer

20. Check on your simulation job under the RoboMaker Simulation jobs

21. Once complete you can navigate to the S3 bucket configured in the JSON file above and look for the model.pb file in the model-store folder.
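As mentioned in step 16, the reward function is the main thing you will want to experiment with. Below is a simplified sketch of the kind of reward logic you could drop into deepracer_env.py; the parameter names are illustrative assumptions, so check which variables the sample environment actually exposes before using it.

def reward_function(params):
    # Illustrative sketch: reward the car for staying close to the centre line.
    # 'track_width' and 'distance_from_center' are assumed names - map them to
    # whatever the deepracer_env.py sample actually provides.
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    if distance_from_center <= 0.1 * track_width:
        reward = 1.0        # well centred
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3       # probably heading off the track

    return float(reward)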

You can now use this model to load onto the AWS DeepRacer. If you are not sure how to do this, please read my previous post: AWS DeepRacer – How to load a model

In the next post I’ll take you through the SageMaker RL notebook and explore the machine learning side of the AWS DeepRacer.

AWS DeepRacer – How to load a model

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

This post assumes you have followed the AWS DeepRacer Getting Started Guide which gets you to the point of being able to manually drive the car.

So now you have the AWS DeepRacer charged up and ready to go. You have a trained model you got from Re:Invent or you followed my other post here and trained your model with RoboMaker/SageMaker.

You also have either a file named model1.tar.gz on a USB stick from the DeepRacer console, or a models folder in an S3 bucket trained with RoboMaker which just needs to be zipped up. You want this model to show up on the DeepRacer local web server in the Autonomous section.

How to load your model, unfortunately, seems to be missing from the documentation so far and it’s not something I found looking online. There is most likely a better way of doing this so if you find out a smoother method, let me know.

To complete this you will need to login to the Ubuntu server onboard. Instructions on how to do this are found in my previous post  AWS DeepRacer – How to login to the Ubuntu Computer Onboard 

Loading from DeepRacer Console via USB

You should be logged into the Ubuntu server

  1. Plug in your USB into one of the free USB ports
  2. Open up a terminal
  3. Type in df -h to list all the mounted volumes and identify which is your USB stick (for me this was /media/deepracer/Lexar)
  4. Then type in the following:
    1. sudo mkdir /opt/aws/deepracer/artifacts/<the name of the model>
    2. sudo cp <the path of your USB stick>/<the filename of the model gz> /opt/aws/deepracer/artifacts/<the name of the model>
    3. cd /opt/aws/deepracer/artifacts/<the name of the model>
    4. sudo tar -xvf <the filename of the model gz>
  5. Below is a screenshot of my commands for your reference

Checking that the model is now available

Now that you have placed the files in the correct directory, you can go back to your AWS DeepRacer web server, e.g. https://192.168.0.21/

  1. Log in to the DeepRacer web server with the password on the bottom of your car.
  2. You should now see the Autonomous option selected and a drop-down allowing you to choose the model by the name of the folder you chose

You should now be able to click the Start button, and the AWS DeepRacer will start driving using the loaded model.

In the next few posts, we will explore running simulations with AWS RoboMaker and examine how we can change the reward function as well as other hyperparameters

AWS DeepRacer – How to login to the Ubuntu Computer Onboard

If you would like to know more about what the AWS DeepRacer is, please refer to my previous post:  AWS DeepRacer – Overview

This post assumes you have followed the AWS DeepRacer Getting Started Guide which gets you to the point of being able to manually drive the car.

To deepen your understanding of the AWS DeepRacer and to troubleshoot technical issues, it may become necessary to log in to the Ubuntu server on board the AWS DeepRacer. This post will describe the steps required to achieve this.

What you will need:

  1. AWS DeepRacer
  2. HDMI cable
  3. A HDMI capable screen
  4. USB keyboard and mouse
  5. USB-C Power Adapter 
  6. Optional – Non-US power socket adapter

Plugging everything in

  1. The HDMI cable goes from the AWS DeepRacer to the HDMI port on your screen.
  2. You plug in your keyboard and mouse into any of the free USB ports on the DeepRacer
  3. You plug in the USB-C power adaptor to the USB-C port on the right-hand side of the DeepRacer

Starting the server

Once everything is plugged in, push the power button. You should see Ubuntu load up and stop at a login screen.

For my DeepRacer the login details were:

username: deepracer

password: deepracer

Once you have typed in the password, you can now use the Ubuntu server for other tasks, for example:

  • examine the /opt/aws/deepracer folder
  • as part of this blog series, try loading your own model, which can be found here.

AWS DeepRacer – Overview

Recently I had the privilege of attending the AWS Re:Invent 2018 conference in Las Vegas. Among the hundreds of announcements, there was one that particularly spoke to my passions of reinforcement learning and robotics.

The AWS DeepRacer!

I was one of the lucky few that got into the AWS DeepRacer workshops where we were introduced to the technology in the service as well as interacting with the yet to be released DeepRacer console. We got to train our own reinforcement learning model and download it to a USB drive. And at the end of the workshop we got told we would be getting the AWS DeepRacer car for free!! 

What is the AWS DeepRacer?

Here is what AWS said it is:
“AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and global racing league.”

For more information, follow this link: https://aws.amazon.com/deepracer/

Here is what I think it is:
DeepRacer is not just an RC car with a DeepLens glued on top. It is an end to end machine learning and robotics project that allows you to learn about multiple disciplines of entire technology stacks. Reinforcement learning is just the tip of the iceberg.

What is the DeepRacer League?

A race in the simulation world that spills out into the physical world, held at each AWS Summit throughout 2019. It’s also a way of enticing people to play with the product and touching on their competitive nature. The cars race via time trials, and only one car is on the track at a time.

Overview of DeepRacer

The astonishing thing about this product is the sheer number of teams that must have been involved in getting it to release. Here is a list off the top of my head of the technologies that went into making this.

  • Hardware development and product design
  • Robotics
    • Gazebo 3D physics simulation engine
    • 3D modeling of the car and track, including all the physics like inertia and joint configurations
    • Robot Operating System (ROS) application design and implementation
  • Development
    • Programming in C++, Python and more
    • Web development
  • Cloud computing, utilizing a large number of AWS services
  • Operating system automation and scripting of Ubuntu deployments
  • Machine learning
    • SageMaker services to automate the creation of models
    • Intel RL Coach platform, which is a collection of RL algorithms optimized for Intel hardware
    • Utilizing and creating OpenAI Gym environments
    • An understanding of the Proximal Policy Optimization RL algorithm
    • Knowledge and application of bleeding-edge domain optimization techniques that allow a model trained in a simulation to be applied to a real physical robot

And I’m sure there are others. It is mind-blowing the amount of technology here. I have tried to break down the DeepRacer into technology stacks to show this.

As of the writing of this post, AWS DeepRacer Console is yet to be in live preview, and the cars are on Pre-Order for an early 2019 delivery.

If you do eventually get your hands on one of the real cars, you may be thinking, how do I use my autonomous vehicle when the service is still not available? Well, there are ways…

Starting to build my own track

In the next few posts, we will look at getting started with the DeepRacer,  including a few lessons learned in setting up the real car, as well as setting up RoboMaker environments to allow for training our own model.

Reflections from the field – Tips for being a better consultant

Striving to be better at what you do is important for your development. Though, it typically translates into developing what you know rather than how you act. For consulting (or any job), there are two parts to the equation: hard skills & soft skills. Balance is needed, so you should learn to develop both.
I aim to help people develop their soft skills. They are typically harder to define and require more attention. Below are concepts I work on developing every day, and hopefully you can take some away and start developing them for yourself.

Quality builds trust which creates opportunity

The link between quality & trust is easy to understand. When a relationship or engagement is new, you must prove yourself. The best way is to deliver something of superior quality. Whether that’s a presentation, application or a document. Do what you can to make it a quality output. It can be difficult to define what the quality standard should be so it’s important to set this upfront.

Delivering quality is the best way to build trust; however, being aware of when trust exists is challenging. Know where you are on the spectrum. It’s not as easy as asking ‘do you trust me’. Start with small tests to gauge where you are, then build up to something bigger. Once trust is established, only then can you start being opportunistic, and by that I mean challenging people’s thinking, pitching ideas and pitching for more work. If it doesn’t exist, work on getting it.

Understand when to focus on delivering quality versus being opportunistic

Own your brand

This is how people see you. Your actions, traits & values have a direct correlation to your brand. What are you known for? How effective do people think you are? How well do you know your domain of expertise? At some point, people will talk about you: managers, customers or colleagues, both past & future. These conversations, the ones you aren’t involved in, define your brand, and it’s important you own it.

What do you want to be known for?

Keep your commitments

Sounds simple enough. What you agree to in meetings, quick conversations or any other discussion: manage them, follow up on them & keep on top of them. Let people know where they are up to. Don’t ignore them. Often, we forget the small things we commit to. I’ve found it’s delivering on these small things that goes a long way in building quality relationships. People tend not to forget if you let them down.

Diligence is important to your brand. Avoid being that person who can’t keep commitments.

Embrace adversity, build your resilience

Before getting to this one, I’ll say that work can be tough. Mental health is far more important than any job you will ever work. Know your limits. If you need support, please seek it. Most companies have an assistance program available, so contact your manager or HR representative if you are feeling overwhelmed.

Something always goes wrong. Your project isn’t delivering quality, a relationship is damaged, you can’t get something signed off, or you’ve just gone live and everything is on fire. For me, building resilience has been key to being successful in consulting. When things aren’t going well it’s difficult to get motivated, relationships are left in the balance, and you probably want to give up. I believe it’s in these tough moments that our true character really comes out. Do you pull out all stops to get things back on track? Do you give up? How do you respond to these situations?

Adversity defines character but know your limits

Respond, don’t react

Passion is a beautiful thing. When harnessed and used in the right way it can lead to amazing things. We get passionate about what we create or are heavily involved in, so when things don’t go your way it’s very easy to get frustrated & annoyed. In these situations, it’s important not to let your emotions guide your reaction; they always manifest in negative ways. You become short, you get agitated and frustrated quicker, and if left unchecked this can impact the work you deliver.

Don’t write that email. Avoid confronting that person. Go and take time to think about a response.

Play the ball. Not the person.

Stay true to your values

What do you stand for? What’s the right thing to do? Morally & ethically, these are difficult questions to answer. People that know & live through their values are more content with their work & personal lives. Understanding these goes way beyond anything you can do at work. I’m of the view that your values are generally set by age 6 and from that point develop & mature. Work to identify what your values are & seek work that aligns with them. When personal values don’t align with professional ones, it leads to a world of pain.

Hopefully this list can help you sharpen your softer skills & make you a more effective consultant.

Con
Con.Efessopoulos@kloud.com.au

User Psychology and Experience

Oftentimes when designing a product or solution for a customer, in planning and concept development, we might consider the user experience to be one (or both) of two things:

  1. User feedback regarding their interaction with their technological environment/platforms
  2. The experience the user is likely to have with given technology based on various factors that contribute to delivering that technology to them; presentation, training, accessibility, necessity, intuitiveness, just to name a few.

These factors are not solely focused on the user and their role in the human – technology interaction process, but also their experience of dealing with us as solution providers. That is to say, the way in which we engage the experience and behaviour of the user is just as important to the delivery of new technology to them, as is developing our own understanding of a broader sense of human interfacing technology behaviour. UX is a colourful – pun intended – profession/skill to have within this industry. Sales pitches, demos and generally ‘wowing the crowd’ are a few of the ways in which UX-ers can deploy their unique set of skills to curve user behaviour and responsiveness in a positive direction, for the supplier especially.

Not all behavioural considerations with regards to technology are underpinned by the needs or requirements of a user, however. There are more general patterns of behaviour and characteristics within people, particularly in a working environment, that can be observed to indicate how a user experiences [new] technology, including functionality and valued content that, at a base level, captures a user’s attention. The psychology of this attention can be broken down into a simplified pathology: the working mechanisms of perception as a reaction to stimulus, and how consistent the behaviour is that develops out of this. The stimuli in question are usually the most common ones relating to technology: visual and auditory.

You’ve likely heard of, or experienced first-hand, the common types of attention in everyday life. The main three are identified as selective, divided and undivided. Through consistency of behavioural outcomes, or observing in a use case a consistent reaction to stimuli, we look to observe a ‘sustainability of attention or interest’ over an extended period of time, even if repetition of an activity or a set of activities is involved. This means that the solution, or at the very least the awareness and training developed to sell a solution, should serve a goal of achieving sustainable attention.

How Can We Derive a Positive User Experience through Psychology?

Too much information equals lack of cognitive intake. From observation and general experience, a person’s attention, especially when captured within a session, a day or a week, is a finite resource. Many other factors of an individual’s life can create a cocktail of emotions which makes people, in general, unpredictable in a professional environment. The right amount of information, training and direct experience should be segmented based on a gauge of the audience’s attention. Including reflection exercises or on-the-spot feedback, especially in user training, can give you a good measure of this. The mistake of cognitively overloading the user is best seen when a series of options are presented as viable routes to the desired solution or outcome. Too many options can, at worst, create confusion, an aversion to the solution and new technologies in general, and an overall messy user experience.

Psychology doesn’t have to be a full submersion into the human psyche, especially when it comes to understanding the user experience. Simple empathy can be a powerful tool to mitigate some of the aforementioned issues of attention and to prevent the cultivation of repeated adverse behaviour from users. When it boils down to the users, most scenarios in the way of behaviour and reaction have been seen and experienced before, irrespective of the technology being provided. Fundamentally, it is a human experience that [still] comes first before we look at bridging the user and the technology. For UX practitioners, tools are already in place to achieve this, such as user journey maps and story capturing.

There are new ideas still emerging around the discipline of user experience, ‘UX’. From my experience with it thus far, it presents a case that it could integrate very well with modern business analysis methodologies. It’s more than designing the solution, it’s solutions based on how we, the human element, are designed.

Creating custom Deep Learning models with AWS SageMaker


This blog will cover how to use SageMaker, and I’ve included the code from my GitHub, https://github.com/Steve–Hunter/DeepLens-Safety-Helmet.

1 What is AWS SageMaker?

AWS (Amazon Web Services) SageMaker is “a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.” (https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html). In other words, SageMaker gives you a one-stop-shop to get your Deep Learning models going, in a relatively friction-less way.
Amazon have tried hard to deliver a service that supports the life-cycle of developing models (which are the result of training), enabling Deep Learning to complete the virtuous circle of gathering data, training a model, and using its results to gather yet more data.

Data can cover text, numeric, images, video – the idea is that the model gets ‘smarter’ as it learns more of the exceptions and relationships in being given more data.
SageMaker provides Jupyter Notebooks as a way to develop models; if you are unfamiliar, think of Microsoft OneNote with code snippets: you can run (and re-run) a snippet at a time, and intersperse it with images, commentary and test runs. The most popular coding language is Python (the ‘Py’ in Jupyter).

2 AI / ML / DL ?

I see the phrases AI (Artificial Intelligence), Machine Learning (ML) and Deep Learning used interchangeably; this diagram shows the relationship:



(from https://www.geospatialworld.net/blogs/difference-between-ai%EF%BB%BF-machine-learning-and-deep-learning/)

So I see AI encompassing most things not yet possible (e.g. Hollywood ‘killer robots’); Deep Learning has attracted attention as it permits “software to train itself”; this is contrary to all previous software, which required a programmer to specifically tell the machine what to do. What makes this hard is that it is very difficult to foresee everything that could come up, and almost impossible to code for exceptions from ‘the real world’. An example of this is machine vision, where conventional ‘rule-based’ programming logic can’t be applied, or if you try, only works in very limited circumstances.
This post will cover the data and training of a custom model to identify people wearing safety helmets (like those worn on a construction site), and a future post will show how to load this model into an AWS DeepLens (please see Sam Zakkour’s post on this site). A use case for this would be getting something like a DeepLens to identify workers at a construction site that aren’t wearing helmets.

3 Steps in the project

This model will use a ‘classification’ approach, and only have to decide between people wearing helmets, and those that aren’t.
The project has 4 steps:

  • Get some images of people wearing and not wearing helmets
  • Store images in a format suitable for Deep Learning
  • Fine tune an existing model
  • Test it out!

3.1 Get some images of people wearing and not wearing helmets

The hunger for data to feed Deep Learning models has led to a number of online resources that can supply data. A popular one is Imagenet (http://www.image-net.org/), with over 14 million images in over 21,000 categories. If you search for ‘hard hat’ (a.k.a. ‘safety helmet’) in Imagenet, your query returns a ‘Synset’, which is a kind of category in Imagenet and covers the inevitable synonyms such as ‘hard hat’, ‘tin hat’ and ‘safety hat’.

When you expand this Synset, you get all the images; we need the parameter in the URL that uniquely identifies these images (the ‘WordNet ID’) to download them.

Repeat this for images of ‘people’.
Once you have the ‘WordNet ID’ you can use it to download the images. I’ve put the code from my Jupyter Notebook here if you want to try it yourself: https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/1.%20Download%20ImageNet%20images%20by%20Wordnet%20ID.ipynb
I added a few extras in my code to:

  1. Count the images and report progress
  2. Continue on bad images (one poisoned my .rec image file!)
  3. Parameterise the root folder and class for images

This saves the images to the SageMaker server in AWS, where they are picked up by the next stage …
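For reference, here is a minimal sketch of the kind of download loop the notebook performs. The WordNet IDs, folder names and the geturls endpoint are illustrative assumptions based on how Imagenet was queried at the time, so treat this as a sketch rather than the definitive code.

import os
import urllib.request

# Illustrative WordNet IDs - look up the real ones for your synsets on the Imagenet site
SYNSETS = {'helmet': 'n03492922', 'person': 'n00007846'}
GETURLS = 'http://www.image-net.org/api/text/imagenet.synset.geturls?wnid={}'

for label, wnid in SYNSETS.items():
    os.makedirs(f'images/{label}', exist_ok=True)
    urls = urllib.request.urlopen(GETURLS.format(wnid)).read().decode(errors='ignore').splitlines()
    good = bad = 0
    for i, url in enumerate(urls):
        try:
            urllib.request.urlretrieve(url, f'images/{label}/{label}_{i}.jpg')
            good += 1
        except Exception:
            bad += 1        # skip dead links and bad images rather than aborting
    print(f'{label}: downloaded {good}, skipped {bad}')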

3.2 Store images in a format suitable for Deep Learning

It would be nice if we could just feed in the images as JPEGs, but most image processing frameworks require the images to be pre-processed, mainly for performance reasons (disk IO). AWS uses MXNet a lot, so that’s the framework I used, along with its ImageRecord (or RecordIO) format. You can read more about it here: https://gluon-cv.mxnet.io/build/examples_datasets/recordio.html, and the Jupyter Notebook is here: https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/2.%20Store%20images%20into%20binary%20recordIO%20format%20for%20MXNEt.ipynb
The utility to create the ImageRecord format also splits the images into

  • a set of training and testing images
  • images that show wearing and not wearing helmets (the two categories we are interested in)

It’s best practice to train on one set of images and test on another, in a ratio of around 70:30. This avoids the deep learning curse of ‘over-fitting’, where the model hasn’t really learned ‘in general’ what people wearing safety helmets look like, only the ones it has seen already. This is the really cool part of deep learning: it really does learn, and can tell from an unseen image whether there is a person (or people) wearing a safety helmet!
The two ImageRecord files for training and testing are stored in SageMaker, for the next step …
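The notebook uses MXNet’s tooling for this; as a rough sketch of what packing images into RecordIO looks like, here is a minimal example using MXNet’s recordio API directly (the file names, labels and labelled_images list are illustrative assumptions).

import mxnet as mx
import cv2

# Illustrative input: (path, label) pairs where 1 = wearing a helmet, 0 = not
labelled_images = [('images/helmet/helmet_0.jpg', 1), ('images/person/person_0.jpg', 0)]

record = mx.recordio.MXIndexedRecordIO('helmets_train.idx', 'helmets_train.rec', 'w')
for idx, (path, label) in enumerate(labelled_images):
    img = cv2.imread(path)                                        # HWC, BGR
    header = mx.recordio.IRHeader(flag=0, label=float(label), id=idx, id2=0)
    packed = mx.recordio.pack_img(header, img, quality=95, img_fmt='.jpg')
    record.write_idx(idx, packed)
record.close()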

3.3 Fine tune an existing model

One of my favourite sayings is by Isaac Newton: “If I have seen further it is by standing on the shoulders of Giants.” This applies to Deep Learning; in this case the ‘Giants’ are Google, Microsoft etc., and the ‘standing on’ is the open source movement. You could train your model on all 14 million images in Imagenet, taking weeks and an immense amount of compute power (which only the likes of Google and Microsoft can afford, though they generously open source the trained models), but a neat trick in deep learning is to take an existing model that has already been trained and ‘re-purpose’ it for what you want. There may not be a pre-trained model for the images you want to identify, but you can find something close enough and train it on just the images you want.
There are so many pre-trained models that the MXNet framework refers to them as a ‘model zoo’. The one I used is called ‘Squeezenet’: there are competitions to find the model that performs best, and Squeezenet gives good results and is small enough to load onto a small device like a DeepLens.
So the trick is to start with something that looks like what we are trying to classify; Squeezenet has two existing categories for helmets, ‘Crash helmet’ and ‘Football helmet’.
When you use the model ‘as is’, it does not perform well and gets things wrong: telling it to look for ‘Crash Helmets’ in these images, it thinks it can ‘see’ them. The two sets of numbers below each represent the probability of the corresponding image containing a helmet; both are percentages, the first being the prediction that there is a helmet, the second that there is not.

Taking ‘Crash helmet’ as the starting point, I re-trained (also called ‘fine tuning’ or ‘transfer learning’) the last part of the model to learn what safety helmets look like.

The training took about an hour on an Amazon ml.t2.medium instance (free tier), and I picked the ‘best’ accuracy. You can see the code and runs here: https://github.com/Steve–Hunter/DeepLens-Safety-Helmet/blob/master/3.%20Fine%20tune%20existing%20model.ipynb
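The notebook linked above is the definitive version; as a rough illustration of the fine-tuning idea, here is a minimal Gluon-style sketch that reuses Squeezenet’s feature layers and trains only a new two-class output. The record file name, image transform and hyperparameters are assumptions.

import mxnet as mx
from mxnet import autograd, gluon
from mxnet.gluon.data.vision import transforms
from mxnet.gluon.model_zoo import vision

ctx = mx.cpu()
pretrained = vision.squeezenet1_1(pretrained=True, ctx=ctx)
net = vision.squeezenet1_1(classes=2, ctx=ctx)     # helmet / no helmet
net.features = pretrained.features                 # reuse the trained feature extractor
net.output.initialize(mx.init.Xavier(), ctx=ctx)   # only the new output layer is trained

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_data = gluon.data.DataLoader(
    gluon.data.vision.ImageRecordDataset('helmets_train.rec').transform_first(transform),
    batch_size=32, shuffle=True)

trainer = gluon.Trainer(net.output.collect_params(), 'sgd', {'learning_rate': 0.01})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

for epoch in range(5):
    for data, label in train_data:
        with autograd.record():
            loss = loss_fn(net(data.as_in_context(ctx)), label.as_in_context(ctx))
        loss.backward()
        trainer.step(data.shape[0])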

3.4 Test it out!

After training, things improve a lot: in the first image below, the model is now 96% certain it can see safety helmets, and in the second it is 98% certain there are none.
What still ‘blows my mind’ is that there are multiple people in the image – the training set contained individuals, groups, different lighting and helmet colours – imagine trying to ‘code’ for this in a conventional way! But the model has learned the ‘helmet-ness’ of the images!




You can give the model an image it has never seen (e.g. me wearing a red safety helmet, thanks fire warden!).
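To get probabilities like those quoted above, you run an image through the fine-tuned network and apply softmax. Here is a minimal sketch; the class ordering follows the labels assumed in the packing sketch earlier.

import mxnet as mx
from mxnet.gluon.data.vision import transforms

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def predict(net, image_path):
    img = mx.image.imread(image_path)             # HWC, uint8
    x = transform(img).expand_dims(axis=0)        # add a batch dimension
    probs = mx.nd.softmax(net(x))[0]
    return {'no helmet': float(probs[0]), 'helmet': float(probs[1])}

# e.g. predict(net, 'unseen_photo.jpg') using the fine-tuned net from the previous sketch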

4 Next

My GitHub repo goes on to cover how to deploy to a DeepLens (still working on that), and I’ll blog about how that works later, and what it could do if it ‘sees’ someone not wearing a safety helmet.
This example is a simple classifier (‘is’ or ‘is not’, like the ‘Hotdog, not hotdog’ episode of ‘Silicon Valley’), but it could cover many different categories, or be trained to recognise people’s faces from a list.
The same method can be applied to numeric data (e.g. find patterns to determine if someone is likely to default on a loan), and with almost limitless cloud-based storage and processing, new applications are emerging.
I feel that the technology is already amazing enough; we can now dream up equally amazing use cases and applications for this fast-moving and evolving field of deep learning!

Processing Azure Event Grid events across Azure subscriptions

Consider a scenario where you need to listen to Azure resource events happening in one Azure subscription from another Azure subscription. A use case for such a scenario can be when you are developing a solution where you listen to events happening in your customers’ Azure subscriptions, and then you need to handle those events from an Azure Function or Logic App running in your subscription.
A solution for such a scenario could be:
1. Create an Azure Function in your subscription that will handle Azure resource events received from Azure Event Grid.
2. Handle event validation in the above function, which is required to perform a handshake with Event Grid.
3. Create an Azure Event Grid subscription in the customers’ Azure subscriptions.
Before I go into details, let’s have a brief overview of Azure Event Grid.
Azure Event Grid is a routing service based on a publish/subscribe model, which is used for developing event-based applications. Event sources publish events, and event handlers can subscribe to these events via Event Grid subscriptions.


Figure 1. Azure event grid publishers and handlers


Azure Event Grid subscriptions can be used to subscribe to system topics as well as custom topics. Various Azure services automatically send events to Event Grid. The system-level event sources that currently send events to Event Grid are Azure subscriptions, resource groups, Event Hubs, IoT Hubs, Azure Media Services, Service Bus, and blob storage.
You can listen to these events by creating an event handler. Azure Event Grid supports several Azure services and custom webhooks as event handlers. There are a number of Azure services that can be used as event handlers, including Azure Functions, Logic Apps, Event Hubs, Azure Automation, Hybrid Connections, and storage queues.
In this post I’ll focus on using an Azure Function as an event handler, to which an Event Grid subscription will send events whenever an event occurs at the Azure subscription level. You can also create an Event Grid subscription at a resource group level to be notified only for the resources belonging to a particular resource group. Figure 1 above shows the various event sources that can publish events and the supported event handlers; for our solution, Azure subscriptions and Azure Functions are the relevant ones.

Create an Azure Function in your subscription and handle the validation event from Event Grid

If our Event Grid subscription and function were in the same subscription, we could simply create an Event Grid-triggered Azure Function and specify the function as the endpoint in the Event Grid subscription. However, in our case this cannot be done, as the Event Grid subscription lives in the customer’s subscription and the Azure Function in ours. Therefore, we will create an HTTP-triggered function or a webhook function.
Because we’re not using an Event Grid-triggered function, we need to do an extra validation step. When a new Azure Event Grid subscription is created, Event Grid requires the endpoint to prove ownership of the webhook, so that Event Grid can deliver events to it. For built-in event handlers such as Logic Apps, Azure Automation, and Event Grid-triggered functions, this validation is not necessary. However, in our scenario, where we are using an HTTP-triggered function, we need to handle the validation handshake ourselves.
When an Event Grid subscription is created, it sends a subscription validation event in a POST request to the endpoint. All we need to do is handle this event, read the validationCode property from the data object in the request body, and send it back in the response. Once Event Grid receives the same validation code back, it knows the endpoint is validated and will start delivering events to our function.

Our function can check if the eventType is Microsoft.EventGrid.SubscriptionValidationEvent, which indicates it is meant for validation, and send back the value in data.validationCode. In all other scenarios, eventType will be based on the resource on which the event occurred, and the function can process those events accordingly. Also, the validation request contains a header aeg-event-type with the value SubscriptionValidation; you should validate this header as well.
The sample code for this (a Node.js function in my case) just needs to handle the validation event and send back the validation code, completing the validation handshake.
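As a rough illustration of that handshake logic (not the Node.js sample itself), here is a minimal sketch written as a Python HTTP-triggered Azure Function; the structure of the handler is standard, but treat the details as an assumption to adapt.

import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    events = req.get_json()                       # Event Grid posts an array of events

    # The handshake request also carries this header - validate it
    if req.headers.get('aeg-event-type') == 'SubscriptionValidation':
        for event in events:
            if event.get('eventType') == 'Microsoft.EventGrid.SubscriptionValidationEvent':
                code = event['data']['validationCode']
                return func.HttpResponse(
                    json.dumps({'validationResponse': code}),
                    mimetype='application/json')

    # Any other request contains real resource events - process them here
    for event in events:
        pass  # e.g. inspect event['eventType'] and event['data']

    return func.HttpResponse(status_code=200)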

Processing Resource Events

To process the resource events, you can filter them on the resourceProvider or operationName properties. For example, the operationName property for a VM create event is set to Microsoft.Compute/virtualMachines/write. The event payload follows a fixed schema as described here.
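As a rough sketch, filtering for VM creation events inside the function could look like this (the helper name and logging are illustrative):

def handle_resource_events(events):
    # Illustrative filter: only act on successful writes to virtual machines
    for event in events:
        data = event.get('data', {})
        if (event.get('eventType') == 'Microsoft.Resources.ResourceWriteSuccess'
                and data.get('operationName') == 'Microsoft.Compute/virtualMachines/write'):
            print(f"VM created or updated: {event.get('subject')}")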

Authentication

When creating the Event Grid subscription (detailed in the next section), the endpoint URL should point to the function URL, including the function key. The event validation done for the handshake also acts as another means of authentication. To add an extra layer, you can generate your own access token and append it to the function URL when specifying the endpoint for the Event Grid subscription; your function can then validate this access token before further processing.

Create an Azure Event Grid Subscription in customer’s subscription

A subscription owner/administrator should be able to run an Azure CLI or PowerShell command to create the Event Grid subscription in the customer’s subscription.
Important: this step must be done after the Azure Function above has been created. Otherwise, when you try to create the Event Grid subscription and it raises the subscription validation event, Event Grid will not get a valid response back, and the creation of the Event Grid subscription will fail.
You can add filters to your Event Grid subscription to filter the events by subject. Currently, events can only be filtered with text comparison of the subject property value starting with or ending with some text. The subject filter doesn’t support a wildcard or regex search.

Azure CLI or PowerShell

An example Azure CLI command to create an Event Grid subscription that receives all the events occurring at the subscription level is az eventgrid event-subscription create, with the function URL passed as the webhook endpoint.

Here https://myhttptriggerfunction.azurewebsites.net/api/f1?code= is the URL of the function app.

Azure REST API

Instead of asking the customer to run a CLI or PowerShell script to create the Event Grid subscription, you can automate this process by writing another Azure Function that calls the Azure REST API. The API call can be invoked using a service principal with rights on the customer’s subscription.
To create an Event Grid subscription for the customer’s Azure Subscription, you submit the following PUT request:
PUT https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.EventGrid/eventSubscriptions/eg-subscription-test?api-version=2018-01-01
Request Body:
{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "https://myhttptriggerfunction.azurewebsites.net/api/f1?code="
      }
    },
    "filter": {
      "isSubjectCaseSensitive": false
    }
  }
}
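A minimal sketch of making this call from Python with a service principal token; the token acquisition, subscription ID and endpoint values are assumptions you would replace with your own.

import requests

subscription_id = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'      # the customer's subscription
token = '<bearer token acquired for the service principal>'   # obtain via your preferred AAD auth library

url = (f'https://management.azure.com/subscriptions/{subscription_id}'
       f'/providers/Microsoft.EventGrid/eventSubscriptions/eg-subscription-test'
       f'?api-version=2018-01-01')

body = {
    'properties': {
        'destination': {
            'endpointType': 'WebHook',
            'properties': {'endpointUrl': 'https://myhttptriggerfunction.azurewebsites.net/api/f1?code='}
        },
        'filter': {'isSubjectCaseSensitive': False}
    }
}

response = requests.put(url, json=body, headers={'Authorization': f'Bearer {token}'})
response.raise_for_status()   # then check the returned provisioning state of the subscription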

 
