Create reports using a Power BI Gateway

Background

Once you have a Power BI gateway set up to ensure data flows from your on-premises data sources to the Power BI service in the cloud, the next step is to create reports using Power BI Desktop and build them using data from multiple on-premises data sources.

Note: If you don’t have a gateway set up already, please follow my earlier post to set one up before you continue reading this post.

Scenario

All on-premises data is stored in SQL Server instances and spread across a few data warehouses and multiple databases built and managed by your internal IT teams.

Before building reports, you need to ensure the following key points:

  1. Each data source has connectivity to your gateway with minimal latency
  2. Every data source intended to be used within reports is configured against the gateway in the Power BI service
  3. The list of people who can publish reports using each data source is configured against that data source

The interaction between on-premises data sources and cloud services is depicted below:

Pre-requisites

Before you build reports, you need to set up on-premises data sources in the gateway so the Power BI service knows which data sources the gateway administrator has allowed for pulling data from on-premises sources.

Log in to https://app.powerbi.com with Power BI service administrator credentials.

  1. Click on Manage gateways to modify settings
  2. You will see a screen with the gateway options that you set up earlier while configuring the gateway on-premises
  3. The next step is to set up gateway administrators, who will have permission to set up on-premises data sources as and when required
  4. After gateway configuration, you need to add data sources one by one so published reports can use the on-premises data sources (pre-configured within the gateway)
  5. You need to set up users against each data source within the gateway; these users can use the data source to pull data from on-premises sources within their published reports
  6. Repeat the above steps for each of your on-premises data sources, selecting the appropriate data source type and allowing the users who can use them while building reports

Reports

Upon reaching this step, you are ready to create reports.

  1. Open Power BI Desktop
  2. Select the sources you want to retrieve data from
  3. While creating reports, ensure the data source details are the same as those configured in the Power BI service when you set up the data sources.
  4. Great! Once you publish reports to your Power BI service, your gateway will be able to connect to the relevant on-premises data sources if you have followed the steps above.


Where’s the source!

In this post I will talk about data (aka the source)! In IAM there’s really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality then the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree that the best place to source the majority of your user data is the HR system. Why? Simply put, it’s where all the important information about the individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR systems are going to be updated to reflect the change to your job title, as well as any direct report changes which may come as a result of this sort of change.
I also said that data can, and normally will, come from multiple sources. A typical example: generally speaking, temporary and contract staff will not be managed within the central HR system; simply put, the HR team don’t care about contractors. So where do they come from, and how are they managed? For smaller organisations this is usually something that’s done manually in AD with no real governance in place. For larger organisations this is less than ideal, can be a nightmare for the IT team to manage and can create quite a large security risk for the business, so a primary data source for contractors becomes necessary. What this is is entirely up to the business and what works for them: I have seen a standard SQL web application being used to populate a database, I’ve seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email and phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try and keep it simple for the purpose of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository or “Identity Vault”. This is able to keep an accurate record of the information coming from multiple sources to compile the user’s complete identity profile. From this we can then start to manage what information flows to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it is updated in the user’s profile in any attached system that is managed through the enterprise IAM services.
In my next post I will go into the finer details of the central repository, or the “Identity Vault”.

So in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or the Identity Vault. It will go into detail around how we can set precedence for data from specific systems, to ensure that if there is a difference in the data coming from the different sources only the highest precedence will be applied. We will also discuss how we augment the data sets to ensure that we are only collecting the information necessary for the management of that user and the applications they use within your business.

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

Windows 10 Domain Join + AAD and MFA Trusted IPs

Background

Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature.   For those that are new to this, the short version is that this capability is designed to make it a little easier on the end user experience by allowing you to define a set of ‘trusted locations’ (e.g. your corporate network) in which MFA is not required.

This capability works via two methods:

  • Defining a set of ‘Trusted’ IP addresses.  These IP addresses will be the public-facing IP addresses of your Web Proxies and/or network gateways and firewalls
  • Utilising issued claims from Federated Users.   This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that this user is coming from a ‘trusted location’.  Enabling this capability is discussed in this article.

The Problem

Now, the latter of these is what needs further consideration when you are looking to move to the ‘modern world’ of Windows 10 and Azure AD (AAD). Unfortunately, due to some changes made in the way that Win10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a ‘trusted location’ for MFA will no longer work.

To understand why this is the case, I highly encourage that you first read Jairo Cadena’s truly excellent blog series that discusses in detail how Win10 AAD SSO and its associated services work. The key takeaways from those posts are that Win10 now has this concept of a Primary Refresh Token (PRT) and with this approach to authentication you now have the following changes:

  • The PRT is what is used to obtain access tokens to AAD applications
  • The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
  • The use of the PRT is built into the Windows 10 credential provider.  Both IE and Edge know to utilise the PRT when communicating with AAD
  • It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
    • That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
    • Instead, the auth flow is a lot more streamlined:  Browser –> Login and provide PRT to AAD –> AAD grants access

Hopefully from this auth flow change you can see why Microsoft have done this.  Because the old way relied on IWA to perform ‘seamless’ SSO, it only worked when the device was domain joined and you had line of sight to a DC to perform kerberos.  So when connecting externally, you would always see the prompt from the ADFS forms based authentication.  In the new way, whenever an auth prompt came in from AAD, the credential provider could see this and immediately provide the cached PRT, providing SSO regardless of your network location.  It also meant that you no longer needed a domain joined machine to achieve ‘seamless’ SSO!

The side effect though is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility on those context based claims like insidecorporatenetwork which subsequently means that specific Trusted IPs feature no longer works.   While this is most commonly used for MFA scenarios, be aware that this will also apply to any Azure AD Conditional Access rules you define that uses the Trusted IPs criteria (e.g. block access to app when external).

Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a fiddler capture, and compare the sign-in experience differences between IE and Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware)

The Solution/Workaround?

So, you might ask, how do you fix this apparent gap in capability? Does this mean you’re going backwards now? For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable in order to drive MFA (or conditional access) behaviours between internal and external users. The federated user claim method was a simple, low admin, way of solving that problem.

To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve.  If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.

When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their ‘trustworthiness’ level through a separate means. This might be through a security access card that lets them into an office location or utilising a corporate laptop that can perform a VPN connection. Both of these ultimately let them connect to the internal network, and thus that is what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.

So with that in mind, what you could then do is also expand that criteria to include domain joined machines. That is, if a user is utilising a corporate issued device that has been domain joined (and registered to AAD), this can now act as your “something you have” aspect of the MFA mantra to prove your trustworthiness, and so you no longer need to differentiate whether they are actually internal or external anymore.

To achieve this, you’ll need to use Azure AD Conditional Access policies, and modify your Access Grant rules to look something like the below:

Win10PRT1

You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.

Side Note:  If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.

Side Note 2: You can also include Intune managed mobile devices in your ‘bypass criteria’ if you include the Require device to be marked as compliant criterion as well.

Fun Fact: You’ll note that in my image the (preview) reference for ‘require one of the selected controls’ is still there. This is because until recently (approx. May/June 2017), the MFA or domain joined device criteria didn’t actually work because of the behaviour/order of how the evaluations were being done. When AAD was evaluating the domain joined criteria, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an ‘or’ scenario. This has now been fixed and I expect the (preview) tag to be removed soon.

Summary

The move to the modern ‘anywhere, any device’ approach to end user computing means that there is a need to start re-assessing how you approach security. Old world views of security being defined via network boundaries will eventually disappear, and instead you’ll need to consider user- and device-based contexts to define when to initiate security controls.

With Windows 10’s approach to authentication with AAD, internal and external access is no longer relevant and should not be used for your criteria in driving MFA or conditional access. Instead, use the device based conditions such as ‘device compliance’ or ‘domain join’ as one of your deciding factors.

Enabling and Scripting Azure Virtual Machine Just-In-Time Access

Last week (19 July 2017) one of Microsoft’s Azure Security Center’s latest features went from Private Preview to Public Preview. The feature is Azure Just in time Virtual Machine Access.

What is Just in time Virtual Machine Access?

Essentially JIT VM Access is a wrapper for automating an Azure Network Security Group rule set that allows access to Azure VM(s) for a limited time period, on a set of network ports, restricted to a source IP/network.

Personally I’d done something a little similar earlier in the year by automating the update of an NSG inbound rule to allow RDP only for my current public IP Address. Details on that are here. But that is essentially now redundant.

Enabling Just in time VM Access

In the Azure Portal Select the Security Center icon.

In the central pane you will find an option to Enable Just in time VM Access. Select that link.

In the right hand pane you will then see a link for Try Just in time VM Access. Select that.

If you have not previously enabled the Security Center you will need to select a Pricing Tier. The Free Tier does not include the JIT VM Access, but you should get an option for a 60 day trial for the Standard Tier that does.

With everything enabled you can select Recommended to see a list of VMs that JIT VM Access can be enabled for.

I’ve selected one of mine from the list and then selected Enable JIT on 1 VM.

In the Enable JIT VM Config you can add and remove ports as required, and set the maximum timeframe for the access. The Per-request option for source IP will enable the rule for the requester and their current IP. Select Ok.

With the rule configured you can now Request access.

When requesting access we can tailor the access based on what is in the rule. Select the ports we want from within the policy and IP Range or Current IP and reduce the timeframe if required. Then select Open Ports.

For the VM we can now see that JIT VM Access has been requested and is currently active.

Looking at the Network Security Group that is associated with the VM we can see the rules that JIT VM Access has put in place. We can also see that the rules are against my current IP Address.

Automating JIT VM Access Requests via PowerShell

Now that we have Just-in-time VM Access all configured for our VM, the reality is I just want to invoke the access request via PowerShell, start up my VM (as it would normally be stopped unless in use) and utilise the resource.

The script below is a simplified version of my previous script to automate NSG rules detailed here. It assumes you enabled JIT VM Access as per the manual process above, and that your VM would normally be in an off state and you’re about to enable access, start it up and connect.

You will need to have the AzureRM and the new Azure-Security-Center PowerShell Modules. If you are running PowerShell 5.1 or later you can install them by un-remarking lines 3 and 5.

Update lines 13, 15 and 19 for your Resource Group name, Virtual Machine name and the link to your RDP file. Update line 21 for the number of hours to request access (in line with your policy).

Line 28 uses the new Invoke-ASCJITAccess cmdlet from the Azure-Security-Center PowerShell module to request access.

Summary

This simplifies the management of NSG rules for access to VMs and reduces the exposure of VMs to brute force attacks. It also simplifies access to a bunch of VMs I only have running on an ad-hoc basis.

Looking into the Azure-Security-Center PowerShell module there are cmdlets to also manage the JIT Policies.

Is Your Serverless Application Testable? – Azure Logic Apps

I’ve talked about testable Azure Functions in my previous post. In this post, I’m going to introduce building testable Azure Logic Apps.

Sample codes used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Logic Apps
  • So that I can browse the search result.

Workflow in Logic Apps

Based on the user story above, the basic workflow in Logic Apps might look like:

  1. An HTTP request triggers the workflow by hitting the Request trigger.
  2. The first action, HTTP in this picture, calls the Azure Function written in the previous post.
  3. If this action returns a response with an HTTP status code greater or equal to 400, the workflow takes the path at the left-hand side, executes the ErrorResponse action and terminates.
  4. If this action returns a response with an HTTP status code less than 400, the workflow takes the path at the right-hand side and executes the Parse JSON action followed by the Condition action.
  5. If the condition returns true, the workflow runs the OkResponse action; otherwise it runs the NotFoundResponse action, and terminates.

Therefore, we can identify that the HTTP action returns three possible scenarios – Error, Success with no result and Success with result. Let’s keep this in mind.

In fact, the workflow is a visual representation of this JSON object.

As a Logic App is basically a JSON object, we can easily plug it into an ARM template and deploy it to Azure so that we use the Logic App workflow straight away.

THERE IS NO CODE AT ALL.

From the unit-testing point of view, no code means we’re not able to test code. Logic Apps only work in the Azure cloud environment, i.e. we can’t run tests in our isolated local development environment. However, being unable to unit-test Logic Apps doesn’t necessarily mean we can’t test them at all.

Let’s see the picture again.

There’s a point where we can run tests. Yes, it’s the HTTP action – the API call. As we’ve identified above, there are three scenarios we need to test. Let’s move on.

Manual Testing the Logic App

First of all, send a normal request that returns the result through Postman.

This time, we send a different request that returns no data, so it returns a 404 response code.

Finally, we send another request that returns an error response code.

Tests complete! Did you find any issue in this test process? We actually hit the working API defined in the HTTP action. What if the actual API call causes any side effects? Can we test the workflow WITHOUT hitting the actual API? We should find a way to decouple the API call from the workflow to keep it testable. API mocking can achieve this goal.

API Mocking

I wrote a post, API Mocking for Developers. In that post, I introduced how to mock APIs using Azure API Management. This is one way of mocking APIs. Even though it’s very easy to use with a Swagger definition, it has a cost issue: Azure API Management is fairly expensive for mocking. What if we use Azure Functions for mocking instead? It’s virtually free unless you exceed one million function calls per month.

Therefore, based on the three scenarios identified above, we can easily write Function code that returns dummy responses like below:
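As a rough sketch only (the function name, route and payload here are purely illustrative, and the sample in the linked repository will differ), a mocked endpoint for the “success with result” scenario might look like the following; the other two scenarios would be near-identical functions returning an empty result or an error status code:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class MockArmTemplateSearchFunction
{
    // Always returns 200 with a dummy payload so the Logic App takes its "success with result" path.
    [FunctionName("MockArmTemplateSearchSuccess")]
    public static Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "mock/arm-templates")] HttpRequestMessage req,
        ILogger log)
    {
        var dummy = new { value = new[] { new { name = "sample-arm-template" } } };
        return Task.FromResult(req.CreateResponse(HttpStatusCode.OK, dummy));
    }
}
```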

It seems to be cumbersome because we need to write code for mocking APIs. So, it’s totally up to you to use either Azure API Management or Azure Functions for mocking.

Now, we’ve got mocked API endpoints for testing. What’s next?

Logic Apps for Testing – Manual Test

As we identified three scenarios, we need to clone the working Logic App three times and replace the API endpoint with the mocked ones. Here are the three Logic Apps cloned from the working one.

Each Logic App needs to be called through Postman to check the expected result. In order to pass the test, the first one should return error response, the second one should return 404 and the last one should return 200. Here’s the high level diagram of testing Logic Apps based on the expected scenarios.

We’re now able to test Logic Apps with mocked APIs. However, it’s not enough. We need to automate this test to integrate it into a CI pipeline. What can we do further?

Logic Apps for Testing – Automated Test

If we write a PowerShell script (or any other script) to call those three testing Logic Apps, we can call the script within the CI pipeline.

Here’s the high level diagram for test automation.

During the build process, Jenkins, VSTS or another build automation server calls the PowerShell script. The script runs three API requests and checks their responses. If all requests return the expected responses, we consider the test to have succeeded. If any request returns a response different from the expectation, the test is considered a failure.


So far, we’ve briefly walked through how to run tests against Logic Apps using API mocking. By decoupling the API request/response from the Logic App, we can focus on the workflow itself. I hope this approach helps when writing your Logic App workflows.

Resolving Skype for Business 2015 Backup Service “ErrorState” issue

I’ve been working with an SFB customer recently. I ran into a unique issue and would like to share the experience of what I did to solve the problem.

Issue Description:

When I went through the Lync event logs, I noticed the SFB FE servers were logging lots of LS Backup Service errors with Event IDs 4052, 4098 and 4071. The error logs say:

“Skype for business Server 2015, backup service users store backup module has backup data that never gets imported by backup pool. Backup data “file:\filestore\2-backupservice-1\backupstore\userservice\PresenceFocus\Data\Backup.zip

Cause: Import issue in the backup pool. Please check event log of Skype for business Server 2015, Backup service in the backup pool for more information.

Resolution:

Fix import issue in the backup pool”

After I read these errors, I did a health check by running “Get-CsBackupServiceStatus -PoolFqdn primarypoolname”. The result showed OverallExportStatus: ErrorState and OverallImportStatus: NormalState.

Running the same cmdlet on the backup pool, “Get-CsBackupServiceStatus -PoolFqdn backuppoolname”, showed OverallExportStatus: ErrorState and OverallImportStatus: ErrorState.

I checked the filestore folder permission settings and they looked correct: everyone was given access to the folder with read & write permissions. So this issue was not related to folder permission settings, which made sense because the backup services had been running fine prior to a certain point in time.

Then I did a bit of googling: people suggest solving the backup service problem by recreating the backup folder. I stopped the SFB Backup Service, File Transfer Agent Service and Master Replicator Agent Service on the FE servers across both the primary pool and the DR pool, then deleted the folder structure within the backup service folders. After this, I restarted all the stopped services, and within a few seconds the new backup folder structures were recreated. I ran “Invoke-CsBackupServiceSync -PoolFqdn primarypoolname” and “Invoke-CsBackupServiceSync -PoolFqdn backuppoolname” and everything looked fine. But when I ran “Get-CsBackupServiceStatus -PoolFqdn poolname” on both pools, I got the same error results as before.

To me, this was not good news. I was sure that something had changed in the environment. I started basic troubleshooting again from the primary site: I browsed to the backup folder on the primary site servers, rechecked the folder permissions, and everything looked good. I then tried to browse to the DR folder from the primary site. That also succeeded, nothing wrong. :/

Root Cause:

When I moved to the DR servers and tried to browse to the primary site backup folder via the same path, something interesting happened: the filestore with the same directory path on the DR servers was totally different from the filestore I had browsed on the primary servers. Further ping tests verified that the filestore host name was resolved differently at the primary site and the DR site. This meant the filestores of the primary site and the DR site couldn’t talk to each other, and that was the root cause of the backup service error status.

What exactly changed?

I spoke with the customer IT team and they advised that originally both the primary filestore and the DR filestore were located on one DFS host. A couple of weeks earlier, the IT team made some changes to the DFS farm; as a result, the SFB FE servers at the primary site resolved the filestore name against the primary site DFS host, while the SFB FE servers at the DR site resolved the filestore name against the DR site DFS host, which is a totally different host. This broke the configuration sync and caused the backup service to fail.

Solutions:

We reconfigured the DFS farm so that all the SFB FE servers across both sites resolved the filestore name against the primary site DFS host. After that, we restarted the backup services and everything started working again.

Running the health check again with “Get-CsBackupServiceStatus -PoolFqdn primarypoolname” showed OverallExportStatus: FinalState and OverallImportStatus: NormalState. The health check for the DR site also looked correct, so the issue was verified as resolved.

I’m posting this as I couldn’t find any reference to this particular environment-related SFB backup service error anywhere else. Hopefully it can help someone else too.

Is Your Serverless Application Testable? – Azure Functions

In this post, I’m going to introduce several design patterns to use when writing Azure Functions code in C#, with full testability in mind.

Sample codes used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Functions
  • So that I can browse the search result.

Basic Function Code

We assume that we have a basic structure that Azure Functions can use. Here’s what a basic function code might look like:
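A minimal sketch of the idea (the GitHubService client and its GetArmTemplateDirectoriesAsync() method here are hypothetical stand-ins; the actual sample in the repository will differ):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

// Hypothetical dependency used throughout this post's sketches.
public interface IGitHubService
{
    Task<string[]> GetArmTemplateDirectoriesAsync(string query);
}

public class GitHubService : IGitHubService
{
    public Task<string[]> GetArmTemplateDirectoriesAsync(string query) =>
        Task.FromResult(new[] { "sample-arm-template-directory" }); // stubbed; the real service would call GitHub
}

public static class GetArmTemplateDirectoriesHttpTrigger
{
    [FunctionName("GetArmTemplateDirectoriesHttpTrigger")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "arm-templates")] HttpRequestMessage req,
        ILogger log)
    {
        // The dependency is instantiated inside the method, so a unit test cannot replace it.
        var service = new GitHubService();

        var result = await service.GetArmTemplateDirectoriesAsync("azure");
        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}
```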

As you can see, all dependencies are instantiated and used within the function method. This works perfectly fine so there’s no problem at all. However, from a test point of view, this smells A LOT. How can we refactor this code for testing?

I’m going to introduce several design patterns. You can’t have missed them if you’ve been working with OO programming. I’m not going to dive into them too deeply in this post, by the way.

Service Locator Pattern

As you can see in the code above, each and every Azure Functions method has the static modifier. Because of this restriction, we can’t use the normal approach to dependency injection and instead have to rely on the Service Locator Pattern. With this, the code above can be refactored like this:
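A hedged sketch of that refactoring, using a simplified IServiceLocator abstraction (standing in for whatever locator library the repository sample uses) and the IGitHubService interface from the sketch above:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

// Simplified locator abstraction used in these sketches.
public interface IServiceLocator
{
    T GetInstance<T>();
}

public static class GetArmTemplateDirectoriesHttpTrigger
{
    // Exposed as a static property so a bootstrap, or a unit test, can assign it.
    public static IServiceLocator ServiceLocator { get; set; }

    [FunctionName("GetArmTemplateDirectoriesHttpTrigger")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "arm-templates")] HttpRequestMessage req,
        ILogger log)
    {
        // The dependency is now resolved rather than instantiated, so tests can swap it out.
        var service = ServiceLocator.GetInstance<IGitHubService>();

        var result = await service.GetArmTemplateDirectoriesAsync("azure");
        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}
```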

First of all, the function code declares the IServiceLocator property with the static modifier. Then, the Run() method calls the service locator to resolve an instance (or dependency). Now, the method is fully testable.

Here’s a simple unit test code for it:
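A minimal sketch of such a test, assuming xUnit and Moq (the repository sample may use different tooling):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Moq;
using Xunit;

public class GetArmTemplateDirectoriesHttpTriggerTests
{
    [Fact]
    public async Task Given_Request_Run_Should_Return_OkResponse()
    {
        // Mock the dependency and the locator that resolves it.
        var service = new Mock<IGitHubService>();
        service.Setup(s => s.GetArmTemplateDirectoriesAsync(It.IsAny<string>()))
               .ReturnsAsync(new[] { "directory-1" });

        var locator = new Mock<IServiceLocator>();
        locator.Setup(l => l.GetInstance<IGitHubService>()).Returns(service.Object);

        // Inject the mocked locator through the static property.
        GetArmTemplateDirectoriesHttpTrigger.ServiceLocator = locator.Object;

        var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/arm-templates");
        req.SetConfiguration(new HttpConfiguration()); // lets CreateResponse() serialise the payload

        var res = await GetArmTemplateDirectoriesHttpTrigger.Run(req, log: null);

        Assert.Equal(HttpStatusCode.OK, res.StatusCode);
    }
}
```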

The Mock instance is created and injected to the static property. This is the basic way of dependency injection for Azure Functions. However, there’s still an issue on the service locator pattern.

Mark Seemann argues in his blog post that the service locator pattern is an anti-pattern because we can’t guarantee whether the dependencies we want to use have already been registered or not. Let’s have a look at the refactored code again:

Within the Run() method, the IGitHubService instance is resolved by this line:

var service = ServiceLocator.GetInstance<IGitHubService>();

When the amount of code is relatively small, that wouldn’t be an issue. However, if the application grows quickly and becomes fairly large, can we ensure the service variable is never null? I don’t think so. We have to use the service locator pattern, but at the same time try not to use it directly. That doesn’t sound quite right, though. How can we achieve this goal?

Strategy Pattern

The Strategy Pattern is one of the most popular design patterns and can be applied to all Functions code. Let’s start with the IFunction interface.
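A plausible, hedged shape for that interface (the exact signature in the sample repository may differ):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Every trigger's logic moves into InvokeAsync(); TOptions carries route/querystring values
// (see the Options Pattern section below).
public interface IFunction
{
    Task<HttpResponseMessage> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options)
        where TOptions : FunctionParameterOptions;
}
```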

Every function now implements the IFunction interface, and all logic written in a trigger function MUST move into the InvokeAsync() method. Now, the FunctionBase abstract class implements the interface.
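Sketched under the same assumptions:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public abstract class FunctionBase : IFunction
{
    // virtual so each concrete function class overrides it with its own logic.
    public virtual Task<HttpResponseMessage> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options)
        where TOptions : FunctionParameterOptions
    {
        throw new NotImplementedException();
    }
}
```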

While implementing the IFunction interface, the FunctionBase class adds the virtual modifier to the InvokeAsync() method so that function classes inheriting this base class can override the method. Let’s put in the real logic.
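A hedged sketch of what that concrete class might look like, reusing the hypothetical IGitHubService from earlier:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class GetArmTemplateDirectoriesFunction : FunctionBase
{
    private readonly IGitHubService _service;

    // The dependency is injected through the constructor (wired up by the Builder Pattern below).
    public GetArmTemplateDirectoriesFunction(IGitHubService service)
    {
        _service = service;
    }

    public override async Task<HttpResponseMessage> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options)
    {
        // Pull the querystring value from the options instance (Options Pattern).
        var query = (options as GetArmTemplateDirectoriesFunctionParameterOptions)?.Query;

        var result = await _service.GetArmTemplateDirectoriesAsync(query);
        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}
```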

The GetArmTemplateDirectoriesFunction class overrides the InvokeAsync() method and processes everything within the method. Dead simple. By the way, what is the generic type of TOptions? This is the Options Pattern that we deal with right after this section.

Options Pattern

Many API endpoints have route variables. For example, if an HTTP trigger’s endpoint URL is /api/products/{productId}, the productId value keeps changing and the value can be handled by the function method like Run(HttpRequestMessage req, int productId, ILogger log). In other words, route variables are passed through parameters of the Run() method. Of course each function has a different number of route variables. If this is the case, we can pass those route variables, productId for example, through an options instance. The code will be much simpler.
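A minimal sketch of those options classes, matching the names used in this post:

```csharp
// Base class for every function's parameter options.
public abstract class FunctionParameterOptions
{
}

// Options for the ARM template search function; Query holds a querystring value for now.
public class GetArmTemplateDirectoriesFunctionParameterOptions : FunctionParameterOptions
{
    public string Query { get; set; }
}
```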

There is an abstract class, FunctionParameterOptions. The GetArmTemplateDirectoriesFunctionParameterOptions class inherits it and contains a Query property. This property can be used to store a querystring value for now. The InvokeAsync() method now only cares about the options instance, instead of individual route variables.

Builder Pattern

Let’s have a look at the function code again. The IGitHubService instance is injected through a constructor.

Those dependencies should be managed somewhere else, such as in an IoC container. Autofac and other IoC container libraries are good examples, and they need to be combined with a service locator. If we introduce a Builder Pattern, this can be easily accomplished. Here’s a sample code:
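A hedged sketch, assuming Autofac and the simplified IServiceLocator interface from earlier; the AutofacServiceLocator class below is a simplified equivalent of the adapter the repository sample uses:

```csharp
using Autofac;

public class ServiceLocatorBuilder
{
    private readonly ContainerBuilder _builder = new ContainerBuilder();

    // All dependencies - including the function classes themselves - are registered here.
    public ServiceLocatorBuilder RegisterModule()
    {
        _builder.RegisterType<GitHubService>().As<IGitHubService>();
        _builder.RegisterType<GetArmTemplateDirectoriesFunction>().AsSelf();
        return this;
    }

    // The built container is wrapped and exported as IServiceLocator.
    public IServiceLocator Build()
    {
        return new AutofacServiceLocator(_builder.Build());
    }
}

// Simplified Autofac-backed service locator.
public class AutofacServiceLocator : IServiceLocator
{
    private readonly IContainer _container;

    public AutofacServiceLocator(IContainer container)
    {
        _container = container;
    }

    public T GetInstance<T>() => _container.Resolve<T>();
}
```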

With Autofac, all dependencies are registered within the RegisterModule() method and the container is wrapped with AutofacServiceLocator and exported to IServiceLocator within the Build() method.

Factory Method Pattern

All good now. Let’s combine it all together, using the Factory Method Pattern.
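A hedged sketch of such a factory:

```csharp
public class FunctionFactory
{
    private readonly IServiceLocator _locator;

    public FunctionFactory()
    {
        // Dependencies - including every class implementing IFunction - are registered
        // by the ServiceLocatorBuilder inside the constructor.
        _locator = new ServiceLocatorBuilder()
                       .RegisterModule()
                       .Build();
    }

    // Create() encapsulates the service locator so callers never touch it directly.
    public TFunction Create<TFunction>() where TFunction : IFunction
    {
        return _locator.GetInstance<TFunction>();
    }
}
```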

Within the constructor, all dependencies are registered by the ServiceLocatorBuilder instance. This includes all function classes that implement the IFunction interface. FunctionFactory has the Create() method, which encapsulates the service locator.
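Sketched under the same assumptions, the trigger then ends up looking something like this:

```csharp
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    // The IServiceLocator property is replaced with a FunctionFactory property.
    public static FunctionFactory Factory { get; set; } = new FunctionFactory();

    [FunctionName("GetArmTemplateDirectoriesHttpTrigger")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "arm-templates")] HttpRequestMessage req,
        ILogger log)
    {
        // Route/querystring values are captured in an options instance.
        var options = new GetArmTemplateDirectoriesFunctionParameterOptions
        {
            Query = req.GetQueryNameValuePairs().FirstOrDefault(p => p.Key == "q").Value
        };

        // Create() and InvokeAsync() are chained in a fluent style; the locator stays hidden.
        return await Factory.Create<GetArmTemplateDirectoriesFunction>()
                            .InvokeAsync(req, options);
    }
}
```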

This function has now been refactored again. The IServiceLocator property is replaced with the FunctionFactory property. Then, the Run() method calls the Create() method and the InvokeAsync() method consecutively in a fluent style. At the same time, GetArmTemplateDirectoriesFunctionParameterOptions is instantiated with the querystring value and injected into the InvokeAsync() method.

Now all function triggers and their dependencies are fully decoupled and unit-testable. We have also encapsulated the service locator so that we don’t even need to care about it any longer. From now on, code coverage can go up to 100%, if TDD is properly applied.


So far, I’ve briefly introduced several useful design patterns to test Azure Functions code. Those patterns are used in everyday development. This might not be the perfect solution, but I’m sure it is one of the better practices I’ve found so far. If you have a better approach, please send a PR to the GitHub repository.

Azure Logic Apps will be touched in the next post.

Setup a Power BI Gateway

Scenario

So, you have explored Power BI (free) and want to start some action in the cloud. Suddenly you realise that your data is stored in an on-premises SQL data source and you still want to get insights up in the cloud and share them with your senior business management.

Solution

Microsoft’s on-premises data gateway is a bridge that can securely transfer your data from your on-premises data sources to the Power BI service.

Assumptions

  • Power BI Pro licenses have been procured already for the required number of users (this is a MUST)
  • Users are already part of Azure AD and can sign in to Power BI service as part of Office 365 offering

Pre-requisites

You can build and set up a machine to act as a gateway between your Azure cloud service and on-premises data sources. The following are the pre-requisites to build that machine:

1) Server Requirements

Minimum Requirements:

  • .NET Framework 4.5
  • 64-bit version of Windows 7 / Windows Server 2008 R2 (or later)

Recommended:

  • 8 Core CPU
  • 8 GB Memory
  • 64-bit version of Windows 2012 R2 (or later)

Considerations:

  • The gateway is supported only on 64-bit Windows operating systems.
  • You can’t install the gateway on a domain controller
  • Only one gateway can be installed on a single machine
  • Your gateway should not be turned off, disconnected from the Internet, or on a power plan that lets the machine go to sleep – in all cases, it should be ‘always on’
  • Avoid wireless network connection, always use a LAN connection
  • It is recommended to have the gateway as close to the data source as possible to avoid network latency. The gateway requires connectivity to the data source.
  • It is recommended to have good throughput for your network connection.

Notes:

  • Once a gateway is configured, if you need to change its name you will need to reinstall and configure a new gateway.

2) Service Account

If your company/client is using a proxy server and your gateway does not have a direct connection to the Internet, you may need to configure a Windows service account for authentication purposes and change the default log-on credential (NT SERVICE\PBIEgwService) to a service account of your choice with the ‘Log on as a service’ right.

3) Ports

The gateway creates an outbound connection to Azure Service Bus and does not require inbound ports for communication. It is required to whitelist the IP addresses listed in the Azure Datacentre IP list.

Installation

Once you have completed the pre-requisites listed in the previous section, you can proceed to the gateway installation.

  1. Log in to Power BI with your organisation credentials and download your data gateway setup
  2. While installing, you need to select the highlighted option so a single gateway can be shared among multiple users.
  3. As listed in the pre-requisites section, if your network has a proxy requirement you can change the service account for the following Windows service:

  4. You will notice the gateway is installed on your server
  5. Once you open the gateway application, you can see a success message

Configuration

Post-installation, you need to configure the gateway to be used within your organisation.

  1. Open the gateway application and sign in with your credentials
  2. You need to set a name and a recovery key for the gateway, which can be used later to reconfigure/restore the gateway
  3. Once it is configured successfully, you will see a message window confirming that it is ready to use

  4. Switch to the gateway’s Network tab and you will see its status as Connected – great!
  5. You are all set; the gateway is up and running, and you can start building reports that use data from your on-premises server via the gateway you just configured.


Don’t Make This Cloud Storage Mistake

In recent months a number of high-profile data leaks have occurred which have made millions of customers’ personal details easily available to anyone on the internet. Three recent cases – GOP, Verizon and WWE – involved incorrectly configured Amazon S3 buckets (Amazon was not at fault in any way).

Even though it is unlikely you will ever stumble across the URLs to Public Cloud storage such as Amazon S3 or Azure Storage Accounts, they are surprisingly easy to find using the search engine SHODAN, which scours the internet for hidden URLs. This gives hackers, or anyone else, access to an enormous number of internet-connected devices, from Cloud storage to web-cams.

Better understanding of the data that you wish to store in the Cloud can help you make a more informed decision on the method of storage.

Data Classification

Before you even look at storing your company or customer data in the Cloud you should be classifying your data in some way. Most companies classify their data according to sensitivity. This process then gives you a better understanding of how your data should be stored.

One possible method is to divide data into several categories, based upon the impact to the business in the event of an unauthorised release. For example, the first category would be public, which is intended for release and poses no risk to the business. The next category is low business impact (LBI), which might include data or information that does not contain Personally Identifiable Information (PII) or cover sensitive topics but would generally not be intended for public release. Medium business impact (MBI) data can include information about the company that might not be sensitive, but when combined or analysed could provide competitive insights, or some PII that is not of a sensitive nature but that should not be released for privacy protection. Finally, high business impact (HBI) data is anything covered by any regulatory constraints, involves reputational matters for the company or individuals, anything that could be used to provide competitive advantage, anything that has financial value that could be stolen, or anything that could violate sensitive privacy concerns.

Next, you should set policy requirements for each category of risk. For example, LBI might require no encryption. MBI might require encryption in transit. HBI, in addition to encryption in transit, would require encryption at rest.

The Mistake – Public vs Private Cloud Storage

When classifying the data to be stored in the Cloud the first and most important question is “Should this data be available to the public, or just to individuals within the company?”

Once you have answered this question you can now configure your Cloud storage, whether Amazon S3, Azure Storage accounts or whichever provider you are using. One of the most important options available when configuring Cloud storage is whether it is set to “Private” or “Public” access. This is where the mistake was made in the cases mentioned earlier. In all of these cases the Amazon S3 buckets were set to “Public”, however the data stored within them was of a private nature.

The problem here is the understanding of the term “Public” when configuring Cloud storage. Some may think that the term “Public” means that the data is available publicly to all individuals within your company, however this is not the case. The term “Public” means that your data is available to anyone who can access your Cloud Storage URL, whether they are within your company or a member of the general public.

This setting is of vital importance; once you are sure it is correct you can then worry about other features that may be required, such as encryption in transit and encryption at rest.

This is a simple error with a big impact, one which can cost your company or customer a lot of money and, even more importantly, their reputation.

One meeting to allow them all

The day is getting closer when you can look to Microsoft Office 365 Skype for Business to meet all the capabilities of your UC collaboration platform in the cloud. The hardest part is knowing what is available and what you would need to purchase to make the complete package for your organisation.

E3 licenses will get you the Skype for Business cloud platform for IM/P, web and video conferencing within your organisation and federated Skype for Business partner organisations. In my opinion, the additional steps to making your UC experience complete need to include additional services.

  1. Allowing participants to join your organisation’s meetings by a phone number, which is commonly known as a PSTN Audio Conferencing Bridge
  2. Allowing participants to join your organisation’s meetings by any video conferencing endpoint, which I call a Video Interop Bridge
  3. Allowing the public and your employees to make/receive standard phone calls, which Microsoft have called Cloud PBX
    1. Plus a Voice Plan (carrier call rates)

Let’s break down what these additional services are and why they complete a collaboration platform.

Once you adopt a collaboration platform you soon realise it is only as good as the customers, partners and employees that are striped in the same colour as you, in this case sky blue.

We have Skype for Business, BUT we have plenty of customers and partners that have different flavours and they can’t join our meetings and collaborate.

This is where we must add the capability for external parties to join our business collaboration with ease.

Join by phone

First thing, make your meeting accessible (at a minimum) via the PSTN. At a base level, offer a phone number for a party to join via a standard phone call. Everyone has a mobile or desk phone, so we can cover this off pretty easily.

This is achieved via the PSTN conferencing user license in Office 365.

Join with a video conferencing device

Secondly, make your meetings accessible via any video conferencing endpoint. This is the next level of collaboration for your organisation. Acknowledge that not all parties are doing things the same way as you, and allow them to join via the video interoperability services that are available through Polycom and Microsoft.

This is achieved via the Polycom RealConnect service user license, obtained through Polycom and added to your Office 365 license bundles in the portal.

What do I have as a result?

Subscribing to these two services on top of your base Skype for Business license will give your employees the ability to create meetings with additional details for participant joining, as shown below in a calendar invite created by the meeting organiser.

In the above image, both sets of meeting details for phone and video conferencing are automatically added via the user licenses. If the scheduler doesn’t have the license, then the Skype meeting doesn’t include the detail. Simple.

PSTN Calling

The last piece of the puzzle is allowing your employees to make PSTN calls to domestic and international numbers from their Skype for Business account. This was commonly referred to as Enterprise Voice in an on-premises deployment, but it has taken on a different name in the cloud service: “Cloud PBX”. Cloud PBX provides your employees with a direct inward dial number for inbound and outbound calling. Microsoft grants you the capability of punching digits into the Skype for Business client to normalise them to a phone number, but you’ll need a voice plan with call rates associated with that user.

At the time of writing Cloud PBX is available, but the voice plans have not been made available to the public in the Australia region.

Summary

If I want the complete, all-inclusive collaboration platform for my organisation, I can purchase the user subscription licenses to achieve nirvana.

  1. PSTN Conferencing Add-on
  2. Polycom RealConnect for Office 365
  3. Cloud PBX with a Voice Plan

Licenses 1 and 2, in my opinion, are a must if you plan to use Skype for Business as your collaboration/meeting space, while license 3 comes into play when you start looking at Skype for Business to replace your existing telephone system (PBX) capability.