Xamarin Forms: Microsoft.EntityFrameworkCore.Sqlite issue with physical devices

Introduction

Building Xamarin Forms apps with .Net Standard 2.0 is still fairly new to the industry; we are only just starting to learn how differently Xamarin has to be configured to get it working compared to PCL based projects.

I was building a Xamarin Forms based app using Microsoft's Entity Framework Core SQLite provider to store the app's data. Entity Framework with SQLite is an obvious choice when building an app on .Net Standard 2.0.

Simulator

The app works well on pretty much all simulators without any issue; all read/write operations work as expected.

Issue  – Physical Device

The app crashes on a physical device when trying to read or write data from the SQLite database.

Error

System.TypeInitializationException: The type initializer for ‘Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions’ threw an exception. ---> System.InvalidOperationException: Sequence contains no matching element

Resolution

Change the linker behavior to “Don’t Link” in the iOS/Android project build options.

Xamarin Forms using .Net Standard 2.0

Introduction

All Xamarin developers, please welcome .Net Standard 2.0. This is the kind of class library we have been waiting for all these years. The .Net Standard 2.0 specification is now complete and it is supported by .Net Core 2.0, .Net Framework 4.6.1 and later versions. It can be used with Visual Studio 2017 version 15.3 and up. .Net Standard 2.0 obviously supports C#, and also F# and Visual Basic.

More APIs

.Net Standard 2.0 is for sharing code across various platforms. It includes the common APIs that every .Net implementation supports, unifying the .Net frameworks to avoid further fragmentation. There are more than 32,000 APIs in .Net Standard 2.0, most of which are already available in the .Net Framework. Microsoft has made it easy to port existing code to .Net Standard 2.0, and it is now easy to move from .Net Standard to .Net Core 2.0 or any version that comes in the future.

NuGet Support

Most NuGet packages currently work well with the .Net Framework, but not all projects are compatible to move to .Net Standard 2.0, so a compatibility mode has been added to support them. Even with compatibility mode, only up to around 70% of packages are supported.

Frameworks and Libraries

The table below lists all the supported frameworks and the minimum version of each that implements a given .NET Standard version. Click here for more details.

.NET Standard              | 1.0  | 1.1  | 1.2   | 1.3  | 1.4   | 1.5        | 1.6        | 2.0
.NET Core                  | 1.0  | 1.0  | 1.0   | 1.0  | 1.0   | 1.0        | 1.0        | 2.0
.NET Framework             | 4.5  | 4.5  | 4.5.1 | 4.6  | 4.6.1 | 4.6.1      | 4.6.2      | 4.6.1
Mono                       | 4.6  | 4.6  | 4.6   | 4.6  | 4.6   | 4.6        | 4.6        | 5.4
Xamarin.iOS                | 10.0 | 10.0 | 10.0  | 10.0 | 10.0  | 10.0       | 10.0       | 10.14
Xamarin.Mac                | 3.0  | 3.0  | 3.0   | 3.0  | 3.0   | 3.0        | 3.0        | 3.8
Xamarin.Android            | 7.0  | 7.0  | 7.0   | 7.0  | 7.0   | 7.0        | 7.0        | 8.0
Universal Windows Platform | 10.0 | 10.0 | 10.0  | 10.0 | 10.0  | 10.0.16299 | 10.0.16299 | 10.0.16299
Windows                    | 8.0  | 8.0  | 8.1   | -    | -     | -          | -          | -
Windows Phone              | 8.1  | 8.1  | 8.1   | -    | -     | -          | -          | -
Windows Phone Silverlight  | 8.0  | -    | -     | -    | -     | -          | -          | -

Sample to convert PCL or Shared to .Net Standard 2.0

  1. Create a default PCL or Shared based Xamarin Forms application, name it appropriately and wait for the solution to load.
  2. Add a .Net Standard class library to the solution, selecting .Net Standard 2.0 as the target. The solution should now look something like the screenshot below.
  3. Remove the PCL or Shared based project (VERY important: only after moving all the required project files to the Netstandard20Test library) and compile.
  4. Rename NetStandard20Test to NetStandardTest (the same name as the deleted library), and make sure to rename the default namespace and assembly to NetStandardTest.
  5. Build the project and check whether the build is successful.
  6. Your build should fail with errors as shown below; this is because of the deleted project. We now have to reference the newly created .Net Standard 2.0 library from both the Android and iOS projects.
  7. Edit the references on each platform project to add the newly created project, as shown below.
  8. Once the references are applied correctly, you should get the errors below.
  9. Add the Xamarin.Forms NuGet package to all projects.
  10. Build the project again and you should not see any errors.
  11. Microsoft has also released a compatibility NuGet package that ensures existing packages remain compatible with .Net Standard 2.0.
  12. Add the NuGet package Microsoft.NETCore.Portable.Compatibility to the .Net Standard 2.0 project.

Hope this blog is useful to you.

 

Disk Space Reporting through Lambda Functions - Linux servers

1. Solution Objective:

The solution provides detailed report related to hard disk space for all the Linux Ec2 instances in the AWS environment.

2. Requirements:

 

Mentioned below are the requirements the solution should be able to fulfil.

  • Gather information related to all mount points on all the Linux EC2 instances in the environment.
  • Generate a cumulative report based on all instances in the environment.

3. Assumptions:

The following assumptions are considered:

  • All the EC2 instances have the SSM agent installed.
  • The personnel responsible for the configuration have some understanding of IAM roles, S3 buckets and Lambda functions.

4. Solution Description:

The following components and AWS services will be utilized to generate the report:

  • Linux shell script
  • AWS S3
  • AWS Lambda
  • AWS IAM Roles
  • AWS Systems Manager Maintenance Windows

4.1 Linux Shell Script

A Linux shell script will be utilized to generate information about the instance and the space utilization of its mount points.

The script below needs to be executed on all Linux EC2 instances to generate the mount point information.

curl http://169.254.169.254/latest/meta-data/instance-id # Prints the instance ID
printf "\n" # Adds a new line after the instance ID
df # Reports disk space usage for every mounted file system

4.2 AWS S3

The result of the shell script will be posted to an S3 bucket for further use.

The EC2 instances will need write access to the nominated S3 bucket.

S3 Bucket Name: eomreport (sample name)
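As an illustration only, here is a minimal boto3 sketch of how the script from section 4.1 can be pushed to the Linux fleet with SSM Run Command, with its output delivered straight to the S3 bucket. The bucket name eomreport comes from this post; the OutputS3KeyPrefix is hypothetical, and the sketch assumes every instance has the SSM agent installed (section 3). In this solution the same Run Command is scheduled through a Maintenance Window (section 5.2), so treat this as an ad-hoc alternative.

import boto3

ec2 = boto3.client('ec2')
ssm = boto3.client('ssm')

# The shell script from section 4.1
commands = [
    "curl http://169.254.169.254/latest/meta-data/instance-id",
    'printf "\\n"',
    "df",
]

# Gather the running instances (assumes they all have the SSM agent installed)
reservations = ec2.describe_instances(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])['Reservations']
instance_ids = [i['InstanceId'] for r in reservations for i in r['Instances']]

# Run the script on the instances; stdout is written to the nominated S3 bucket.
# Note: send_command accepts up to 50 instance IDs per call; for larger fleets
# use tag-based Targets or the Maintenance Window described in section 5.2.
response = ssm.send_command(
    InstanceIds=instance_ids,
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': commands},
    OutputS3BucketName='eomreport',    # sample bucket name used in this post
    OutputS3KeyPrefix='disk-space',    # hypothetical prefix
)
print(response['Command']['CommandId'])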

4.3 AWS Lambda Functions

A Lambda function will be used to perform the following activities:

  • Acquire the result of the Shell script from the S3 bucket
  • Generate a Report
  • Email the report to the relevant recipient

The Lambda function will need read access to the S3 bucket and access to AWS SES to send emails to recipients.

The Lambda function below performs the tasks mentioned above.

import boto3

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    mybucket = s3.Bucket('eomreport')
    resulthtml = ["<h1>Report: Hard Disk Space</h1>"]  # Heading for the email body
    resulthtml.append('<html><body><table border="1">')  # Start of the report table
    resulthtml.append('<tr><td><b>InstanceID</b></td><td><b>Available Space</b></td>'
                      '<td><b>Used Space</b></td><td><b>Use %</b></td><td><b>Mounted on</b></td></tr>')
    for file_key in mybucket.objects.all():
        # Run Command stores the shell script output in objects whose key contains "stdout"
        if str(file_key).find("stdout") > 0:
            body = file_key.get()['Body'].read().decode('utf-8')
            complete = body.splitlines()   # Splits the output into lines
            instance_id = complete[0]      # First line is the instance ID from the metadata call
            details = complete[2:]         # Skip the df header line and keep the mount point rows
            for line in details:
                output_word = line.split()
                # df columns: Filesystem, 1K-blocks, Used, Available, Use%, Mounted on
                resulthtml.append('<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>'.format(
                    instance_id, output_word[3], output_word[2], output_word[4], output_word[5]))
    resulthtml.append('</table></body></html>')
    final = "".join(resulthtml)
    print(final)
    sender = "email@email.com"
    recipient = "email@email.com"
    awsregion = "us-east-1"
    subject = "Disk Space Report"
    charset = "UTF-8"
    textbody = "Disk space report - see the HTML version of this email."
    client = boto3.client('ses', region_name=awsregion)
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': [
                    recipient,
                ],
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': charset,
                        'Data': final,
                    },
                    'Text': {
                        'Charset': charset,
                        'Data': textbody,
                    },
                },
                'Subject': {
                    'Charset': charset,
                    'Data': subject,
                },
            },
            Source=sender,
        )
    # Display an error if something goes wrong.
    except Exception as e:
        print("Error: ", e)
    else:
        print("Email sent!")

 

4.4 AWS IAM Roles

Roles will be used to grant:

  • AWS S3 write access to all the EC2 instances, as they will submit the output of the shell script to the S3 bucket.
  • AWS SES access to the Lambda function to send emails to relevant recipients.

4.5 AWS SES

Amazon Simple Email Service (Amazon SES) evolved from the email platform that Amazon.com created to communicate with its own customers. In order to serve its ever-growing global customer base, Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. Amazon SES is the result of years of Amazon’s own research, development, and iteration in the areas of sending and receiving email.( Ref. From https://aws.amazon.com/ses/).

We would be utilizing AWS SES to generate emails using AWS lambda.

The configuration of the Lambda function can be modified to send emails to a distribution group to provide disk space reporting, or to send emails to a ticketing system in order to provide alerting and ticket creation when disk utilisation crosses a configured threshold.

5. Solution Configuration

5.1 Configure IAM Roles

The following roles should be configured:

  • IAM role for the Lambda function
  • IAM role for the EC2 instances for S3 bucket access

5.1.1 Role for Lambda Function

The Lambda function needs the following access:

  • Read data from the S3 bucket
  • Send emails using Amazon SES

To accomplish the above, the following policy should be created and attached to the IAM role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501474857000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::S3BucketName/*"
            ]
        },
        {
            "Sid": "Stmt1501474895000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

5.1.2 Role for EC2 instance

All EC2 instances should have access to store the Shell output in the S3 bucket.

To accomplish the above, the following policy should be assigned to the EC2 instance role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1501475224000",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::eomreport"
            ]
        }
    ]
}

5.2 Configure Maintenance Window

The following tasks need to be performed for the maintenance window (a scripted sketch of the task registration follows this list):

  • Register a Run Command with Run-Shell Script using the script in section 4.1
  • Register targets based on the requirements
  • Select the schedule based on your requirement
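If you prefer to script this rather than click through the console, the boto3 sketch below shows roughly how the Run Command task from section 4.1 could be registered against an existing Maintenance Window. The window and target IDs are hypothetical placeholders; the console steps above remain the documented path.

import boto3

ssm = boto3.client('ssm')

window_id = "mw-0123456789abcdef0"              # hypothetical Maintenance Window ID
target_id = "e32eecb2-646c-4f4b-8ed1-example"   # hypothetical registered target ID

# Register the Run Command task; stdout is delivered to the nominated S3 bucket
ssm.register_task_with_maintenance_window(
    WindowId=window_id,
    Targets=[{'Key': 'WindowTargetIds', 'Values': [target_id]}],
    TaskArn='AWS-RunShellScript',
    TaskType='RUN_COMMAND',
    TaskInvocationParameters={
        'RunCommand': {
            'Parameters': {'commands': [
                'curl http://169.254.169.254/latest/meta-data/instance-id',
                'printf "\\n"',
                'df',
            ]},
            'OutputS3BucketName': 'eomreport',   # sample bucket name used in this post
        }
    },
    MaxConcurrency='50',
    MaxErrors='1',
    Priority=1,
)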

Maintenance Window Ref : 

http://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html

5.3 Configure Lambda Function:

The following tasks need to be performed for the Lambda Function

  • Create a blank Lambda function with the S3 put event as the trigger
  • Click on Next
  • Enter the Name and Description
  • Select the runtime Python 3.6
  • Copy and paste the lambda function mentioned in section 4.3

5.4 Configuring AWS SES

The following tasks need to be completed before the execution of the Run-commands.

  • Email Addresses should be added to the AWS SES section of the tenant.
  • The email addresses should be verified.

6. Result:

Based on the above configuration, whenever the run command is executed, the following report is generated and sent to the nominated email account.

InstanceID    Available Space (1K blocks)   Used Space (1K blocks)   Use %   Mounted on
i-sampleID1   123984208                     1832604                  0.02    /
i-sampleID1   7720980                       0                        0       /dev
i-sampleID1   7746288                       0                        0       /dev/shm
i-sampleID1   7721456                       24832                    0.01    /run
i-sampleID1   7746288                       0                        0       /sys/fs/cgroup
i-sampleID2   122220572                     3596240                  0.03    /
i-sampleID2   7720628                       0                        0       /dev
i-sampleID2   7746280                       8                        0.01    /dev/shm
i-sampleID2   7532872                       213416                   0.03    /run
i-sampleID2   7746288                       0                        0       /sys/fs/cgroup
i-sampleID2   81554964                      16283404                 0.17    /sit
i-sampleID2   83340832                      14497536                 0.15    /uat
i-sampleID2   1549260                       0                        0       /run/user/1000
i-sampleID3   123983664                     1833148                  0.02    /
i-sampleID3   7720980                       0                        0       /dev
i-sampleID3   7746288                       0                        0       /dev/shm
i-sampleID3   7721448                       24840                    0.01    /run
i-sampleID3   7746288                       0                        0       /sys/fs/cgroup

 

Azure AD Domain Services

I recently had what I thought was a rather unique requirement from a customer.

The requirement was to build Azure IaaS virtual machines and have them joined to a managed domain, while also being able to authenticate to the virtual machines using Azure AD credentials.

The answer is Azure AD Domain Services!

Azure AD Domain Services provides managed domain services such as domain join, group policy and Kerberos/NTLM authentication without the need for you to deploy and  manage domain controllers in the cloud. For more information see https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-overview

It is not without its limitations though; the main things to call out are that configuring domain trusts and applying schema extensions are not possible with Azure AD Domain Services. For a full list of limitations see: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-comparison

Unfortunately at this point in time you cannot use ARM templates to configure Azure AD Domain Services so you are limited to the Azure Portal or PowerShell. I am not going to bore you with the details of the deployment steps as it is quite simple and you can easily follow the steps supplied in the Microsoft documentation: https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-enable-using-powershell

What I would like to do is point out the following learnings that I discovered during my deployment.

  1. In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
  2. If there are any cloud-only user accounts, all users who need to use Azure AD Domain Services must change their passwords after Azure AD Domain Services is provisioned. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
  3. Once a cloud-only user account has changed their password, you will need to wait for a minimum of 20 minutes before you will be able to use Azure AD Domain Services (this got me as I was impatient).
  4. Speaking of patience, the provisioning process of Azure AD Domain Services takes about an hour.
  5. Have a dedicated subnet for Azure AD Domain Services to avoid any connectivity issues that may occur with NSGs/firewalls.
  6. You can only have one managed domain connected to your Azure Active Directory.

That’s it, hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.

Understanding Azure’s Container PaaS Capabilities

siliconvalve

If you’ve been using Azure over the past twelve months, you can’t help but have the feeling that it’s become a bit like this…

Containers... Containers Everywhere

.. and you’d be right.

To be fair, though, Containers have been one of the hot topics in computing in general and certainly one that’s been getting the most interest in my recent Azure Open Source Roadshows.

One thing that has struck me though is that people are not clear on the purpose of all the services in Azure that have ‘Containers’ listed as a capability, so in this post I am going to try and review the Azure Platform-as-a-Service offerings that have Container capabilities and cover what the services can be used for.

First, before we begin, let’s quickly get some fundamentals under our belts.

What is a Container?

Containers provide encapsulation and isolation for workloads and remove the need for a complete Operating System image…

View the original post for the remaining 1,698 words.

AWS Re:Invent 2017 – what’s out so far

What a week it’s been for AWS customers. Just in the last 5 days we’ve already seen a huge number of product releases, including:

Amazon Sumerian: With Sumerian, you can construct an interactive 3D scene without any programming experience, test it in the browser, and publish it as a website that is immediately available to users. Product details can be found at https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-amazon-sumerian-preview/

Amazon MQ: Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it easy to set up and operate message brokers in the cloud. Amazon MQ works with your existing applications and services without the need to manage, operate, or maintain your own messaging system. See Jeff’s blog post here: https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/

Amazon EC2 Bare Metal Instances: Amazon EC2 Bare Metal instances provide your applications with direct access to the processor and memory of the underlying server. These instances are ideal for workloads that require access to hardware feature sets (such as Intel VT-x), or for applications that need to run in non-virtualized environments for licensing or support requirements. For more info on getting into the preview, visit https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-amazon-ec2-bare-metal-instances-preview/

PrivateLink for Customers/Partners: We announced that customers can now use AWS PrivateLink to access third party SaaS applications from their Virtual Private Cloud (VPC) without exposing their VPC to the public Internet. Customers can also use AWS PrivateLink to connect services across different accounts and VPCs within their own organizations, significantly simplifying their internal network architecture. See details at https://aws.amazon.com/about-aws/whats-new/2017/11/aws-privatelink-now-available-for-customer-and-partner-services/ and Kloud will be blogging about this much more in the coming weeks.

Amazon GuardDuty: Amazon GuardDuty is a threat detection service that gives you a more accurate and easy way to continuously monitor and protect your AWS accounts and workloads. With a few clicks in the AWS Management Console, GuardDuty begins analyzing AWS data across all your AWS accounts, integrated with threat intelligence feeds, anomaly detection, and machine learning for more actionable threat detection in an easy to use, pay as you go cloud security service. Again, Jeff’s done a great article: https://aws.amazon.com/blogs/aws/amazon-guardduty-continuous-security-monitoring-threat-detection/

Now that Andy Jassy’s keynote has just finished, we have a bunch more:

AWS Fargate: containers as a service, where a customer no longer needs to manage the underlying EC2 instances. See the blog post at https://aws.amazon.com/blogs/aws/aws-fargate/

Elastic Kubernetes Service: the same level of integration we’ve come to expect from ECS, but running Kubernetes. For more details, see https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/

Aurora Serverless: Designed for workloads that are highly variable and subject to rapid change, this new configuration allows you to pay for the database resources you use, on a second-by-second basis. More details can be found at https://aws.amazon.com/blogs/aws/in-the-works-amazon-aurora-serverless/

Amazon Rekognition Video: Amazon Rekognition Video is a new video analysis service feature that brings scalable computer vision analysis to your S3-stored video, as well as live video streams. See Jeff’s blog here: https://aws.amazon.com/blogs/aws/launch-welcoming-amazon-rekognition-video-service/

Amazon Neptune: a fast and reliable graph database service that makes it easy to gain insights from relationships among your highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Jeff’s blog can be found at https://aws.amazon.com/blogs/aws/amazon-neptune-a-fully-managed-graph-database-service/

AWS DeepLens: a new video camera that runs deep learning models directly on the device, out in the field. You can use it to build cool apps while getting hands-on experience with AI, IoT, and serverless computing. AWS DeepLens combines leading-edge hardware and sophisticated on-board software, and lets you make use of AWS Greengrass, AWS Lambda, and other AWS AI and infrastructure services in your app. See here for the latest blog post https://aws.amazon.com/blogs/aws/deeplens/

This is by no means a complete list of everything released, but just a glimpse of what’s come out so far. Stay tuned to our blog for detailed deep dives into some of these services.

Geographically Visualizing your workforce using Microsoft Identity Manager, xMatters and Power BI

Introduction

In the last couple of weeks I’ve posted about visualizing relationships of data from Microsoft Identity Manager using Power BI. Earlier this week I posted about building a Management Agent for Microsoft Identity Manager to integrate with xMatters.

In this post I combine data from the last two in order to allow us to visualise the geographic office locations for an organisation, and then summary data about each location (how many employees are located there, and in which departments).

Prerequisites

You’ll need an Azure AD and Office 365 subscription to allow you to create a Power BI Application. To create a Power BI Application see Registering a Power BI Application in this post here.

You’ll also need the Power BI PowerShell Module. I’m using 2.0.0.9 available from the PowerShell Gallery here and of course the Lithnet MIIS PowerShell Module available from here.

Overview

Using our registered Power BI Application we’ll create a Dataset consisting of two tables. One for the xMatters Sites (that we also get the geographic co-ordinates of from the xMatters Management Agent), and the other with our xMatters Users that contains the officeLocation that maps to an xMatters Site.

I create a relationship between the two tables on xMattersSite displayName (which is the location name) and the xMattersUsers officeLocation. We can then create a nice visual using data from both tables.

Create the Dataset (two tables with relationship)

Initially I tried to create the dataset with a relationship as I’ve previously shown here. However, that didn’t work. After some debugging, and some trial and error using the Power BI API Explorer, I got the result I wanted. So I’ll provide you with the raw JSON format for creating a new Dataset, two tables (xMattersSites and xMattersUsers) and a relationship between them (where xMattersSites\displayName joins with xMattersUsers\officeLocation), as per my xMatters Management Agent detailed here.
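As the original embedded JSON isn't reproduced here, below is a minimal sketch of what that request body can look like, sent with Python and the requests library purely for illustration. The table names, the relationship columns and the dataset name come from this post, but the exact column lists (geo and department in particular) and the access token handling are assumptions; tailor them to the attributes your Management Agent actually flows.

import json
import requests

access_token = "<token obtained for your registered Power BI Application>"  # assumption: acquired separately
headers = {"Authorization": "Bearer " + access_token, "Content-Type": "application/json"}

dataset = {
    "name": "xMatters",                      # assumed dataset name
    "defaultMode": "Push",
    "tables": [
        {"name": "xMattersSites",
         "columns": [{"name": "displayName", "dataType": "string"},
                     {"name": "geo", "dataType": "string"}]},
        {"name": "xMattersUsers",
         "columns": [{"name": "displayName", "dataType": "string"},
                     {"name": "officeLocation", "dataType": "string"},
                     {"name": "department", "dataType": "string"}]},
    ],
    "relationships": [
        {"name": "SiteUsers",
         "fromTable": "xMattersUsers", "fromColumn": "officeLocation",
         "toTable": "xMattersSites", "toColumn": "displayName",
         "crossFilteringBehavior": "BothDirections"}
    ],
}

# POST the dataset definition; a 201 response means the dataset, tables and relationship were created
r = requests.post("https://api.powerbi.com/v1.0/myorg/datasets",
                  headers=headers, data=json.dumps(dataset))
print(r.status_code, r.text)

The dataset object above is the same kind of JSON body you would paste into the API Explorer as described next.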

Start by authenticating to the Power BI API Explorer with an account in the environment where you created your Power BI Application and navigate to the Create Dataset section here.

Create Dataset

Update this JSON formatted object that details the Dataset, Tables and Relationships for your environment.

Paste your validated JSON object into the Body section of the API Explorer and select Call Resource.

Dataset Body

If your JSON object is formatted correctly you’ll get a 201 response, and your Dataset and tables with the relationship will be created.

Create Success

Switching over to Power BI you’ll see the xMatters Dataset in the bottom left, and the two tables on the right hand side with their columns.

xMatters DataSet PBI.PNG

Load xMatters User Data into Power BI

Now that we have somewhere to put the data, let’s populate the dataset. I’m using the Lithnet MIIS Automation PowerShell Module (detailed in the prerequisites) to query the Metaverse and return all users. Then I refine the list down to those that are Active (based on my employeeActive Boolean attribute) and finally to only those users that are connected on the xMatters Management Agent (see lines 14 & 18).

The script will drop any existing values from the xMatters Users table then upload what we have retrieved from the Metaverse (and refined).

Upload Users.PNG
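The script itself is in the screenshot above. As a rough Python illustration of the two Power BI REST calls it wraps (clear the table, then push the refreshed rows), here is a hedged sketch; the dataset ID, token handling and the sample row are placeholders, not the post's actual PowerShell.

import requests

access_token = "<token for your Power BI Application>"   # assumption: acquired separately
headers = {"Authorization": "Bearer " + access_token, "Content-Type": "application/json"}

dataset_id = "<id returned when the xMatters dataset was created>"   # assumption
rows_url = ("https://api.powerbi.com/v1.0/myorg/datasets/{}/tables/xMattersUsers/rows"
            .format(dataset_id))

# Drop any existing values from the xMatters Users table
requests.delete(rows_url, headers=headers)

# Push the refined list of active, xMatters-connected users retrieved from the Metaverse
users = [
    {"displayName": "Jane Citizen", "officeLocation": "Sydney", "department": "IT"},  # hypothetical row
]
requests.post(rows_url, headers=headers, json={"rows": users})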

Load xMatters Site Data into Power BI

Again I’m also using the Lithnet MIIS Automation PowerShell Module to query the Metaverse and return all xMatters Sites.

The script will drop any existing values from the xMatters Sites table then upload what we have retrieved from the Metaverse.

Upload Sites.PNG

Creating the Power BI Visual

Now that we have data, we can build the visual. I’m using the ArcGIS Maps for Power BI visual, which is available in the default set of visuals. Then, by selecting displayName and geo, the map will automagically show all xMatters Sites at their respective co-ordinates.

xMatters Sites to Map

We can then add a Card visual, choose officeLocation, and configure the visual for Count of officeLocation, giving us a count of the employees at that location. As we can see below, with the Sydney location selected on the map, the card updates to tell me there are 665 employees at that officeLocation.

Count of Employees at Selected Location

Pretty quickly we can also expand out other data points, like departments at a location, employees etc as shown below (I’ve obfuscated the departments and a number of the other office locations).

Summary.PNG

Conclusion

We haven’t generated any new data. We’ve taken information we already have in Microsoft Identity Manager from connected systems and quickly visualized it via Power BI. However, providing this to the business, with the ability for consumers of the information to export it from the visual, can be pretty powerful.

Seamless Multi-identity Browsing for Cloud Consultants

If you’re a technical consultant working with cloud services like Office 365 or Azure on behalf of various clients, you have to deal with many different logins and passwords for the same URLs. This is painful, as your default browser instance doesn’t handle multiple accounts and you generally have to resort to InPrivate (IE) or Incognito (Chrome) modes which mean a lot of copying and pasting of usernames and passwords to do your job. If this is how you operate today: stop. There is an easier way.

Two tools for seamless logins

OK, the first one is technically a feature. The most important part of removing the login bottleneck is Chrome Profiles. This essential feature of Chrome lets you maintain completely separate profiles for Chrome, including saved passwords, browser cache, bookmarks, plugins, etc. Fantastic.

Set one up for each customer that you have a dedicated account for. Once you log in once, the credentials will be cached and you’ll be able to pass through seamlessly.

This is obviously a great improvement, but only half of the puzzle. It’s when Profiles are combined with another tool that the magic happens…

SlickRun your Chrome sessions

If you haven’t heard of the venerable SlickRun (which must be pushing 20 years if it’s a day) – download it right now. This gives you the godlike power of being able to launch any application or browse to any URL nearly instantaneously. Just hit ALT-Q and input the “magic word” (which autocompletes nicely) that corresponds to the command you want to execute and Bob’s your Mother’s Brother! I tend to hide the SlickRun prompt by default, so it only shows up when I use the global ALT-Q hotkey.

First we have to set up our magic word. If you simply put a URL into the ‘Filename or URL’ box, SlickRun will open it using your default browser. We don’t want that. Instead, put ‘chrome.exe’ in the box and use the ‘--profile-directory’ command line switch to target the profile you want, followed by the URL to browse to.

N.B. You don’t seem to be able to reference the profiles by name. Instead you have to put “Profile n” (where n is the number of the profile in the order you created it).
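As an example (the profile number and URL here are just placeholders for one of your clients), a magic word might be configured in SlickRun as:

Filename or URL: chrome.exe
Parameters: --profile-directory="Profile 2" https://portal.azure.com

SlickRun then launches Chrome with that client's profile and lands you straight on the portal, already signed in from the cached credentials.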

SlickRun-MagicWord

That’s all there is to it. Once you’ve set up your magic words for the key web apps you need to be able to access for each client (I go with a naming convention of ‘clientappname‘ and extend that further if I have multiple test accounts I need to log in as, etc.), you can get to any of them in seconds, usually as seamlessly as single sign-on would provide.

This is hands-down my favourite productivity trick and yet I’ve never seen anyone else do it, or seen a better solution to the multiple logins problem. Hence this post! Hope you find it as awesome a shortcut as I do…

Till next time!

Psychodynamics: Are We Smarter Than The Device?

PhonePeople

How did you know about this blog post?

It’s likely that you were notified by your smartphone or device, the notification itself part of the trundle of alerts you’re figuratively swiping left on, in between email reminders about upcoming events and direct messages from your favourite social media. Or perhaps you were trawling your usual network feeds for updates to catch your attention.

Now if you were to time the window in which you check your smart device again for notifications, new messages or general updates, I’d bet that this window would be within a minute or just outside of it, and would require no prompting whatsoever… much like, say, breathing?

On the way to lunch this past week I had to tell three pedestrians to “Look up!” because they were glued to their smartphones while walking through the mess of the CBD at lunch time, just asking for some bad luck to go down. One was even crossing the intersection while the walk sign was red, roadworks or not. Yet these smart device distractions, in social situations where we should be actively engaged, are becoming less of a distraction and more the norm.

Admittedly, I’ve been guilty of this also (stands up in anonymous meeting group circle) “Hi everyone, it’s been 24 days, 4 hours and 19 minutes since my last smart device infringement…”

Separating norms, habits and addictions has become difficult in this regard. A study conducted last year on 205 users, ranging in age from 16 to about 64 and spanning the UK, China, Australia and the US, drew a preliminary conclusion that people grow emotionally attached to their smartphones. Obviously, a lost or stolen phone can be replaced, and even more conveniently, the data backup restored to the replacement. However, the same cannot be said for a lost pet dog, for instance.

The study in fact suggests that the emotional connection comes from the connectivity and community the device facilitates – what we’re actually sacrificing for behavioural controls is the luxury of functionality.

It is the ease with which these devices can be used, the ability to pour one’s life into apps and social networks, to customise and personalise options, that creates the need for us to be close to them, the loss of a device coming with the emotional baggage of disconnection and an inability to “interact substantially”.

Do we know what life was like before this? I would say kind of, but maybe in another ten years’ time, not so much. Sure, we still have to get off our butts for some of our daily activities, but as we move, so do our devices, both figuratively and literally.

We’re well and truly plugged in; it’s the world we live in now. I can get my plumbing fixed and a slice of cake brought to my doorstep by a complete stranger on a single app (and trust that it will happen). Why not?

For further reading on the study, see the article under Computers in Human Behaviour.

Building a FIM/MIM Management Agent for xMatters

Introduction

A couple of weeks ago one of my customers had a requirement to provision and manage identities in xMatters. The xMatters API documentation looked straightforward and I figured it would be pretty quick to knock up a PowerShell Management Agent.

The identification of users (People) in xMatters was indeed pretty quick. I was quickly able to enumerate all users (that had initially been seeded independent of FIM/MIM) and join them to corresponding users in the MetaVerse.

It was then as I started digging deeper that the relationship between Sites (Locations) and Email/Mobile (Devices) attributes became apparent. This post details how I approached it and a base xMatters MA that should get you started if you need to do something similar.

Overview

A key concept to keep in mind is that at the simplest level there are 3 key Object Types in xMatters;

  • People
    • User Objects along with basic naming attributes
  • Device
    • Each contact medium is a device. Email Address, Mobile Phone, Home Phone, Text Phone (SMS) etc.
  • Site
    • Location of the entity (person)

Associated with each is an id which can be either dynamically created on provisioning (by xMatters) or specified. For People there is also targetName which is the equivalent of UID/sAMAccountName. When using the API (for people) you can use either their ID or their targetName. For all other entities you need to use the ID.

For each entity, as you’d expect, there is a different API URI (refer to the xMatters REST API documentation for the endpoints). There is also a separate URI to retrieve the Devices associated with a Person.

Other key points to consider that I uncovered are;

  • If you are updating a Device (e.g. someone’s Email Address or Phone Number), don’t specify the owner attribute (as you do when you create the Device). xMatters considers that you are trying to change the owner and won’t allow it.
  • To update a Device you need to know the ID of the Device. I catered for this on my Import by bringing through People and Device IDs.
  • When creating/updating a user’s location you need to specify the Site ID and Site Name. I brought these through as a separate ObjectClass into FIM/MIM and query the MV for them when Exporting.
  • In my initial testing the API returned a number of different errors: 400 (Bad Request), 409 (Conflict, when trying to add a Device that already exists) and 404 (Not Found), along with API timeouts. You need to account for these and handle them appropriately.
  • On success of an Update, Create or Delete, the API returns the full object that you performed the operation on. You need to capture this and let MIM know that a full object being returned indicates success, not an error.
  • xMatters expects phone numbers to be in E.164 format (e.g. +61 400 123 456). I catered for this with an import flow on another Management Agent.
  • The xMatters timezone is in the format Area/Location (e.g. Australia/Sydney). For Australia these are as follows. Correct, it doesn’t accept Australia/Canberra for ACT;
    • “NSW”  = “Australia/Sydney”
      “VIC”  = “Australia/Melbourne”
      “QLD”  = “Australia/Brisbane”
      “ACT”  = “Australia/Sydney”
      “WA”  = “Australia/Perth”
      “TAS”  = “Australia/Hobart”
      “NT”  = “Australia/Darwin”

xMatters PowerShell Management Agent

With all that introduction, here is a base xMatters PowerShell MA (implemented using the Granfeldt PowerShell MA) to get you started. You’ll need to tailor it for your environment, trigger Provisioning, Deletes and Flow Rules as required, and handle the xMatters API for your integration.

Schema Script

I’ve created two Object Classes. User and Site. User incorporates User Devices. Site is the locations (Sites) from xMatters.

Import Script

Credentials for the Import script to connect to xMatters are flowed in from the Management Agent Username and Password attributes. This isn’t using Paged Imports. If you have a large number of users you may want to consider that. After retrieving all of the People entities each is queried to obtain their Devices. I’m only bringing through SMS and Email Devices. You’ll need to modify for additional Devices.

Ensure that you flow the IDs associated with your Devices (e.g. MobileID and EmailID) into the MetaVerse (onto custom attributes). That will allow you to use the ID when updating those attributes.

For Sites, I created a custom ObjectClass (Site) in the MV and used the SiteID as the objectID and the Site Name as the displayName (as shown below).

Attribute Flows.png

Export Script

This is where it gets a little more complicated. As PowerShell is not good at reporting web request responses, we have to deal with the return from each API call and determine whether we were successful or not, then let FIM/MIM know so it can report that via the UI.

The Export script below deals with Adding, Deleting and Updating users. Update line 31 with your API URI for xMatters.

Summary

The detail above will get you started and give you a working Management Agent to import Users and Sites. You’ll need to do the usual steps (Set, Workflow, Sync Rule and MPR) to trigger Provisioning on the MA, along with deciding how you handle deletes.