Debugging Azure Functions in Our Local Box

Because of the serverless nature of Azure Functions, it’s a bit tricky to run them on our local machine for debugging purposes.

There is an approach related to this issue in an earlier post, Testing Azure Functions in Emulated Environment with ScriptCs. According to that article, we can use ScriptCs for local unit testing. However, the question of debugging still remains, because testing and debugging are different stories. Fortunately, Microsoft has recently released Visual Studio Tools for Azure Functions. It’s still at the preview stage, but worth having a look. In this post, I’m going to walk through how to debug Azure Functions within Visual Studio.

Azure Functions Project & Templates

After we install the tooling, we can create an Azure Functions project.

That gives us the same development experience. Pretty straightforward. Once we create the new project, we find nothing but a couple of .json files: appsettings.json and host.json. The appsettings.json file is only used for our local development, not for production, to hook up the actual Azure Functions in the cloud. We are going to touch on this later in this post.

Now let’s create a function in C# code. Right-click on the project and add a new function.

Then we can see a list of templates to start with. We just select the HttpTrigger function in C#.

Now we have a fresh new function.

As we can see, we have a couple of other .json files for settings. function.json defines the input and output bindings, and project.json defines the list of NuGet packages to import, the same as .NET Core projects do.

We’ve got everything we need now. How do we debug the function then? Let’s move on.

Debugging Functions – HTTP Trigger

Open the run.csx file. Set a breakpoint wherever we want.

Now, it’s time for debugging! Just press the F5 key. If we haven’t installed the Azure Functions CLI, we will be asked to install it.

We can also manually install the CLI through npm by typing the following:
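
At the time of writing the CLI was published to npm as azure-functions-cli (it has since been renamed azure-functions-core-tools), so the install looks something like this:

npm install -g azure-functions-cli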

Once the CLI is installed, a command prompt console opens. In the console, it shows various useful pieces of information.

  • Functions in debugging mode only use port 7071. If another application running on our local machine has already taken that port, debugging will run into trouble.
  • The endpoint of the Function is always http://localhost:7071/api/Function_Name, which is the same format as the actual Azure Functions in the cloud. We can’t change it.

Now it’s waiting for requests to come in. As it’s basically an HTTP endpoint, we can send a request through our web browser, Postman or even curl.
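
For example, assuming a function named HttpTriggerCSharp (substitute whatever you called yours), a quick test from PowerShell could be:

Invoke-RestMethod -Uri "http://localhost:7071/api/HttpTriggerCSharp?name=Kloud"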

When any of the requests above is executed, it hits the breakpoint within Visual Studio.

And the CLI console prints out the log like:

Super cool, isn’t it? Now, let’s do a queue trigger function.

Debugging Functions – Queue Trigger

Let’s create another function called QueueTriggerCSharp. Unlike HTTP-triggered functions, this doesn’t have an endpoint URL (at least not a publicly exposed one) as it relies on an Azure Storage Queue (or Azure Service Bus Queue). We have the Azure Storage Emulator that runs on our dev machine. With this emulator, we can debug our queue-triggered functions.

In order to run this function, we need to set up both appsettings.json and function.json.

Here’s the appsettings.json file. We simply assign the value UseDevelopmentStorage=true to AzureWebJobsStorage for now. Then, open function.json and assign AzureWebJobsStorage to the connection key.
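
As a rough sketch, assuming a queue named samples-workitems on the emulated storage account, the two files end up looking something like this:

appsettings.json

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}

function.json

{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "samples-workitems",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}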

We’re all set. Hit the F5 key and see how it goes.

It seems nothing has happened. But when we look at the console, the queue-triggered function is certainly up and running. How can we pass a queue value then? We have to use the CLI command here. If the PATH environment variable doesn’t include func.exe, we have to use the full path to run it.
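
In the CLI build I used, a test message can be pushed to the function with the run command; the exact verb and switches may differ between CLI versions, so check the CLI help if this doesn’t match yours:

func run QueueTriggerCSharp --content "Hello from the CLI"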

Now we can see the breakpoint being hit in Visual Studio.

So far, we’ve briefly walked through how to debug Azure Functions in Visual Studio. There are, however, some known issues. One of those issues is that, due to the nature of the .csx format, IntelliSense doesn’t work as expected. Other than that, it works great! So, if your organisation was hesitant about using Azure Functions due to the lack of a debugging feature, now is the time to play around with it!

Real world Azure AD Connect: multi forest user and resource + user forest implementation

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

Disclaimer: During October I spent a few weeks working on this blog post’s solution at a customer and had to do the responsible thing and pull the pin on further time as I had hit a glass ceiling. I had reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figured this solution out. Full credit and a big high five!

***

tl;dr

  • Azure AD Connect multi-forest design
  • Using AADC to sync user/account + resource shared forest with another user/account only forest
  • Why it won’t work out of the box
  • How to get around the issue and leverage precedence to make it work
  • Visio diagrams of how it all works for easy digestion

***

In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.

Back to the future

A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.

Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to deploy FIM or MIM for all but the most complex environments.

When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.

2016-12-02-aadc-design-02

In the beginning…

The following outlines a common real world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows and, with its wealth and equity, purchases another business to diversify or expand. With that comes integration and the sharing of resources.

We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.

In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.

Over at the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. In the on-premises world, this works fine. I won’t go into too much more detail, but I’m sure that you, dear reader, understand the particulars.

In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.

How does a resource forest topology work with Azure AD Connect?

For the better part of two years now, since AADConnect was AADSync, Microsoft has supported multi-forest connectivity. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.

In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…

AADConnect is able to select the two forests’ common identities and merge them before synchronising to Azure AD. This process uses the attributes associated with the user objects (objectSID in the user/account forest and msExchMasterAccountSID in the resource forest) to join the user account and the resource account.

There is also the option for customers to have multiple user forests and a single resource forest. I’ve personally not tried this with more than two forests, so I’m not confident enough to say how additional user/account forests would work out as well. However, please try it out and be sure to let me know via comments below, via Twitter or email me your results!

Quick note: you can also merge two objects by matching sAMAccountName to sAMAccountName, or by specifying any other ADDS attribute to match between the forests.

Compatibility

aadc-multi-forest

If you’d like to read up on this a little more, there are two articles that reference the above-mentioned topologies in detail:

Why won’t this work in the example shown?

Generally speaking, the first forest to sync in AADConnect, in a multi-forest implementation, is the user/account forest, which is likely the primary/main forest in an organisation. Let’s assume this is the Contoso forest. This will be the first connector to sync in AADConnect and it will have the lowest precedence number as well; with AADConnect, the lower the designated precedence number, the higher the priority.

When the additional user/account forest(s) is added, or the resource forest, these connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn’t seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn’t really matter.

Give me more detail

The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Here’s what happens:

2016-12-02-aadc-whathappens-01

  • #1 – Contoso is sync’ed to AADC first as it was the first forest connected to AADC
    • Adding in Fabrikam first, ahead of Contoso, doesn’t work either
  • #2 – The Fabrikam forest is joined with a second forest connector
  • AADC is configured with “user identities exist across multiple directories”
  • objectSID and msExchMasterAccountSID are selected to merge identities
  • When the objects are merged, sAMAccountName is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • When the objects are merged, mail or primarySMTPaddress is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • Should the two objects not have a completely identical set of attributes, the attributes that are set are pulled
    • In this case, most of the user object details come from Fabrikam – #2
    • Attributes like the user’s first name, last name, employee ID and branch/office

The result of this standard setup is that Fabrikam users have their resource accounts in Contoso sync’ed, but have their UPN set with the suffix from the Contoso forest. An example would be a UPN of user@contoso.com rather than the desired user@fabrikam.com. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso forest UPN suffix of @contoso.com.

Yes, even with ADDS forest trusts configured correctly and UPN routing etc. all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.

Is there any other way around this?

I’ve touched on and referenced precedence a number of times in this blog post so far. The solution is indeed precedence. The issue that I had experienced was a lack of understanding of precedence in AADConnect. Sure, it works on connector-level rule precedence, which is set by AADConnect during the configuration process as forests are connected.

Playing around with precedence was not something I wanted to do, as I didn’t have enough Microsoft Identity Manager or Forefront Identity Manager background to really be certain of the outcome of the joining/merging process of user and resource account objects. I know that FIM/MIM has the option of attribute-level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!

In comes Nigel…

Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and worked out what was needed of AADConnect. With AADConnect set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.

Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:

2016-12-01-syncrules-01

The solution centres around moving the main precedence rule, rule #1 for Fabrikam (red arrows and yellow highlight), above the highest (and default) Contoso rule (originally #1). When this happened, AADConnect was able to pull the correct sAMAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!
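
As a side note, if you want to sanity-check the rule order outside the Synchronization Rules Editor, the ADSync module on the AADConnect server can list it; this is a read-only sketch:

Import-Module ADSync
Get-ADSyncRule | Sort-Object Precedence | Select-Object Precedence, Name, Direction | Format-Table -AutoSize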

Final words

Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot in a small package! #Impressed

Cheers,

Lucian

Service Strategy – How do you become Instrumental?

There is a well-known concept developed by Ronald Coase around organisational boundaries being determined by transaction costs.

This concept stated that organisations are faced with three decisions.

To make, buy or rent.

In some scenarios, it makes sense for organisations to own and operate assets, or conduct activities in-house, however, at other times you could seek alternatives from the open market.

When seeking alternatives from the open markets the key factor can be the transaction cost.

The transaction cost is the overall cost of the economic exchange between the supplier and customer, with the objective of ensuring that commitments are fulfilled.

In the context of Service Strategy, why is it important to understand this concept?

In the current shift towards cloud computing, the cost of the service transaction has been drastically reduced. It’s critical to understand this.

I often think of this as being like catching a fish; let me explain.

It takes time

There are costs that will be incurred when attempting to catch that fish.

In Service Strategy the costs can be calculated as the time taken to find and select qualified suppliers for goods or services that are required.

The right bait

Proven fishermen will always be asking the question “what bait works here?”. So when putting together your service strategy, make sure you know who you are considering transacting with. Knowing the track record of your provider is crucial; ask around and you will save yourself a lot of time. I’m quite dumbfounded when I hear of customers ‘still’ transacting with suppliers that no longer provide any real value, with more time spent in the excuse basket than on providing the outcome that the customer is after, particularly when it comes to delivering services.

Certain bait attracts certain fish

If you are considering the leap to cloud computing, find suppliers who are proven in this area. It’s not like the traditional on-premises managed service computing model. It has changed; go with providers who are working in this space and have been for a considerable period of time. For example, they get what right-sizing is and how crucial it could be to your cost model.

How many hooks and sinkers are you prepared to lose?

Governance covers the cost of making sure the other party sticks to the terms of the contract, and of taking appropriate action if this turns out not to be the case.

Governance is fundamentally an intertwining of both leadership and management. Leadership in the sense of understanding the organisation’s vision, setting the strategy and bringing about alignment. Management in the sense of how we actually implement the strategy. Setting the budget, working out the resources required and so forth.

It’s crucial that your Cloud Enterprise Governance Framework has these qualities. For example, a policy is a formal documentation of management expectations and intentions, which can be used to direct decisions and ensure a consistent approach when implementing leadership’s strategy. In the cloud climate, where change is constant, you need to be in a position to respond with greater agility; your governance framework should have the ability to move between the two.

Once again do your homework. Find out what their Cloud Computing Service Model entails. Roadmaps, framework, fundamentally what approach they take. It’s crucial that they have this in place in order to succeed.

So what does it take to get hooked?

The actual answer to how you become instrumental is by clearly understanding your service transaction. The net effect will be brokering real value.

This, in turn, means that as prevailing conditions change, the boundaries of the firm contract or expand with decisions such as make, buy, or rent.

Azure AD Connect – Using AuthoritativeNull in a Sync Rule

There is a feature in Azure AD Connect that became available in the November 2015 build 1.0.9125.0 (listed here), which has not had much fanfare but can certainly come in handy in tricky situations. I happened to be working on a project that required the DNS domain linked to an old Office 365 tenant to be removed so that it could be used in a new tenant. Although the old tenant was no longer used for Exchange Online services, it held onto the domain in question, and Azure AD Connect was being used to synchronise objects between the on-premises Active Directory and Azure Active Directory.

Trying to remove the domain using the Office 365 Portal will reveal if there are any items that need to be remediated prior to removing the domain from the tenant, and for this customer it showed that there were many user and group objects that still had the domain used as the userPrincipalName value, and in the mail and proxyAddresses attribute values. The AuthoritativeNull literal could be used in this situation to blank out these values against users and groups (i.e. Distribution Lists) so that the domain can be released. I’ll attempt to show the steps in a test environment, and bear with me as this is a lengthy blog.

Trying to remove the domain minnelli.net listed the items needing attention, as shown in the following screenshots:

This report showed that three actions are required to remove the domain values out of the attributes:

  • userPrincipalName
  • proxyAddresses
  • mail

userPrincipalName is simple to resolve by changing the value in the on-premises Active Directory using a different domain suffix, then synchronising the changes to Azure Active Directory so that the default onmicrosoft.com or another accepted domain is set.
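
As a rough, hypothetical sketch of that bulk change (the replacement suffix is a placeholder and you’d want to scope and pilot this carefully):

# Replace the old UPN suffix with another accepted suffix already added to AD (values are placeholders)
$oldSuffix = "minnelli.net"
$newSuffix = "contoso.com"
Get-ADUser -Filter "UserPrincipalName -like '*@$oldSuffix'" -Properties UserPrincipalName | ForEach-Object {
    $newUpn = $_.UserPrincipalName.Replace("@$oldSuffix", "@$newSuffix")
    Set-ADUser -Identity $_ -UserPrincipalName $newUpn
}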

Clearing the proxyAddresses and mail attribute values is possible using the AuthoritativeNull literal in Azure AD Connect. NOTE: You will need to assess the outcome of performing these steps depending on your scenario. For my customer, we were able to perform these steps without affecting other services required from the old Office 365 tenant.

Using the Synchronization Rules Editor, locate and edit the In from AD – User Common rule. Editing the out-of-the-box rules will display a message to suggest you create an editable copy of the rule and disable the original rule which is highly recommended, so click Yes.

The rule is cloned as shown below and we need to be mindful of the Precedence value which we will get to shortly.

Select Transformations and edit the proxyAddresses attribute, set the FlowType to Expression, and set the Source to AuthoritativeNull.

I recommend setting the Precedence value in the new cloned rule to be the same as the original rule, in this case value 104. Firstly edit the original rule to a value such as 1001, and you can also notice the original rule is already set to Disabled.

Set the cloned rule Precedence value to 104.

Prior to performing a Full Synchronization run profile to process the new logic, I prefer and recommend to perform a Preview by selecting a user affected and previewing a Full Synchronization change. As can be seen below, the proxyAddresses value will be deleted.

The same process would need to be done for the mail attribute.

Once the rules are set, launch the following PowerShell command to perform a Full Import/Full Synchronization cycle in Azure AD Connect:

  • Start-ADSyncSyncCycle -PolicyType Initial

Once the cycle is completed, attempt to remove the domain again to check if any other items need remediation, or you might see a successful domain removal. I’ve seen it take up to 30 minutes or so before being able to remove the domain if all remediation tasks have been completed.

There will be other scenarios where using the AuthoritativeNull literal in a Sync Rule will come in handy. What others can you think of? Leave a description in the comments.

Configuring AWS Web Application Firewall

In a previous blog we discussed Site Delivery with AWS CloudFront CDN; one aspect that was not covered in that blog was WAF (Web Application Firewall).

What is Web Application Firewall?

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.

When to create your WAF?

Since this blog is related to CloudFront, and the fact that WAF is tightly integrated with CloudFront, our focus will be on that. Your WAF rules will run and be applied in all Edge Locations you specify during your CloudFront configuration.

It is recommended that you create your WAF before deploying your CloudFront distribution: since CloudFront takes a while to be deployed, any changes applied after its deployment will take an equally long time to be updated, even after you attach the WAF to the CloudFront distribution (this is done in “Choose Resource” during WAF configuration, shown below).

Although there is no general rule here, it is up to the organisation or administrator to apply WAF rules before or after the deployment of CloudFront distribution.

Main WAF Features

In terms of security, WAF protects your applications from the following attacks:

  • SQL Injection
  • DDoS attacks
  • Cross Site Scripting

In terms of visibility, WAF gives you the ability to monitor requests and attacks through CloudWatch integration (excluded from this blog). It gives you raw data on location, IP Addresses and so on.

How to setup WAF?

Setting up WAF can be done in a few ways: you could either use a CloudFormation template, or configure the settings on the WAF page.

Since each organisation is different, and requirements change based on applications and websites, the configuration presented in this blog is considered general practice and recommendation. However, you will still need to tailor the WAF rules according to your own needs.

WAF Conditions

For the rules to function, you need to set up filter conditions for your application or website ACL.

I already have WAF set up in my AWS account, and here’s a sample of how conditions will look.

1-waf

If you have no conditions already set up, you will see something like “There is no IP Match conditions, please create one”.

To Create a condition, have a look at the following images:

Here, we’re creating a filter that checks whether the HTTP method contains a threat after HTML decoding.

2-waf

Once you’ve selected your filters, click on “Add Filter”. The filter will be added to the list of filters, and once you’re done adding all your filters, create your condition.

3-waf

You need to follow the same procedure to create your other conditions, for example for SQL injection; a CLI sketch of what that could look like is shown below.
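
If you’d rather script the conditions, the same kind of thing can be sketched with the AWS CLI for classic WAF; the set name and the example filter below are placeholders:

aws waf get-change-token
aws waf create-sql-injection-match-set --name "SQLiMatchSet" --change-token <token-from-previous-command>
aws waf update-sql-injection-match-set --sql-injection-match-set-id <match-set-id> --change-token <new-token> --updates 'Action=INSERT,SqlInjectionMatchTuple={FieldToMatch={Type=QUERY_STRING},TextTransformation=URL_DECODE}'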

WAF Rules

When you are done with configuring conditions, you can create a rule and attach it to your web ACL. You can attach multiple rules to an ACL.

Creating a rule – Here’s where you specify the conditions you have created in a previous step.

4-waf

From the list of rules, select the rule you have created from the drop down menu, and attach it to the ACL.

5-waf

In the next steps you will have the option to choose your AWS Resource, in this case one of my CloudFront Distributions. Review and create your Web ACL.

6-waf

7-waf

Once you click on create, go to your CloudFront distribution and check its status, it should show “In progress”.

WAF Sample

Since there isn’t a single way of creating WAF rules, if you’re not sure where to begin, AWS gives you a good starting point with a CloudFormation template that will create sample WAF rules for you.

This sample WAF stack will include the following rules and conditions, found here:

  • A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
  • An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function, such as is shown in How to Import IP Address Reputation Lists to Automatically Update AWS WAF IP Blacklists and How to Use AWS WAF to Block IP Addresses That Generate Bad Requests.
  • A SQL injection rule and condition to match SQL injection-like patterns in URI, query string, and body.
  • A cross-site scripting rule and condition to match XSS-like patterns in URI and query string.
  • A size-constraint rule and condition to match requests with URI or query string >= 8192 bytes which may assist in mitigating against buffer overflow type attacks.
  • ByteHeader rules and conditions (split into two sets) to match user agents that include spiders for non–English-speaking countries that are commonly blocked in a robots.txt file, such as sogou, baidu, and etaospider, and tools that you might choose to monitor use of, such as wget and cURL. Note that the WordPress user agent is included because it is used commonly by compromised systems in reflective attacks against non–WordPress sites.
  • ByteUri rules and conditions (split into two sets) to match request strings containing install, update.php, wp-config.php, and internal functions including $password, $user_id, and $session.
  • A whitelist IP condition (empty) is included and added as an exception to the ByteURIRule2 rule as an example of how to block unwanted user agents, unless they match a list of known good IP addresses

Follow this link to create a stack in the Sydney region.

I recommend that you review the filters, conditions, and rules created with this Web ACL sample. If anything, you could easily update and edit the conditions as you desire according to your applications and websites.

Conclusion

In conclusion, there are certain aspects of WAF that need to be considered, like choosing an appropriate WAF solution and managing its availability, and you have to be sure that your WAF solution can keep up with your applications.

The best feature of WAF is that, since it is integrated with CloudFront, it can be used to protect websites even if they’re not hosted in AWS.

I hope you found this blog informative. Please feel free to add your comments below.

Thanks for reading.

Experiences with the new AWS Application Load Balancer

Originally posted on Andrew’s blog @ cloudconsultancy.info

Summary

Recently I had an opportunity to test drive the AWS Application Load Balancer, as my client had a requirement for making their websocket application fault tolerant. The implementation was a complete Windows stack and utilised ADFS 2.0 for SAML authentication; however, this should not affect other people’s implementations.

The AWS Application Load Balancer is a fairly new feature which provides layer 7 load balancing and support for HTTP/2 as well as websockets. In this blog post I will include examples of the configuration that I used, as well as some of the troubleshooting steps I needed to work through.

The application load balancer is an independent AWS resource from classic ELB and is defined as aws elbv2 with a number of different properties.

Benefits of Application Load Balancer include:

  • Content based routing, ie route /store to a different set of instances from /apiv2
  • Support for websocket
  • Support for HTTP/2 over HTTPS only (much larger throughput as it’s a single stream multiplexed meaning it’s great for mobile and other high latency apps)
  • Cheaper than the Classic Load Balancer, roughly 10% cheaper than traditional ELB pricing.
  • Cross-zone load balancing is always enabled for ALB.

Some changes that I’ve noticed:

  • The load balancing algorithm used by the Application Load Balancer is currently round robin.
  • Cross-zone load balancing is always enabled for an Application Load Balancer and is disabled by default for a Classic Load Balancer.
  • With an Application Load Balancer, the idle timeout value applies only to front-end connections and not to the LB-to-server connection, and this prevents the LB cycling the connection.
  • The Application Load Balancer is exactly that and performs at Layer 7, so if you want to perform SSL bridging, use the Classic Load Balancer with TCP and configure SSL certs on your server endpoint.
  • cookie-expiration-period value of 0 is not supported to defer session timeout to the application. I ended up having to configure the stickiness.lb_cookie.duration_seconds value. I’d suggest making this 1 minute longer than application session timeout, in my example a value of 1860.
  • The X-Forwarded-For parameter is still supported and should be utilised if you need to track client IP addresses, in particular useful if going through a proxy server.

For more detailed information from AWS see http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html.

Importing SSL Certificate into AWS – Windows

(http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html )

  1. Convert the existing PKCS#12 key into .pem format for AWS

You’ll need openssl for this, the pfx and the password for the SSL certificate.

I like to use Chocolatey as my Windows package manager (similar to yum or apt-get, but for Windows), which is a saviour for downloading packages and managing dependencies in order to support automation. But enough of that; check it out @ https://chocolatey.org/

Once choco is installed I simply execute the following from an elevated command prompt.

“choco install openssl.light”

Thereafter I run the following two commands, which break out the private and public keys (during which you’ll be prompted for the password):

openssl pkcs12 -in keyStore.pfx -out SomePrivateKey.key -nodes -nocerts

openssl pkcs12 -in keyStore.pfx -out SomePublic.cert -nokeys -clcerts

NB: I’ve found that sometimes copy and paste doesn’t work when trying to convert keys.

  2. Next you’ll need to also break out the trust chain into one contiguous file, like the following.
-----BEGIN CERTIFICATE-----

Intermediate certificate 2

-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----

Intermediate certificate 1

-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----

Optional: Root certificate

-----END CERTIFICATE-----

Save the file for future use,

thawte_trust_chain.txt

Example attached above is for a Thawte trust chain with the following properties

“thawte Primary Root CA” Thumbprint 91 c6 d6 ee 3e 8a c8 63 84 e5 48 c2 99 29 5c 75 6c 81 7b 81

With intermediate

“thawte SSL CA - G2” Thumbprint 2e a7 1c 36 7d 17 8c 84 3f d2 1d b4 fd b6 30 ba 54 a2 0d c5

Ordinarily you’ll only have a root and an intermediate CA, although sometimes there will be a second intermediate CA.

Ensure that your certificates are base 64 encoded when you export them.

  3. Finally, upload the certificate to AWS. Install the AWS CLI (v1.11.14+ to support the aws elbv2 commands), run “aws configure” to apply your access and secret keys and configure region and output format, then execute a command like the one below. Please note that this includes some of the above elements, including the trust chain and the public and private keys.
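
The upload itself is a single call; the certificate name below is a placeholder, while the file names follow on from the earlier steps:

aws iam upload-server-certificate --server-certificate-name service-example-com --certificate-body file://SomePublic.cert --private-key file://SomePrivateKey.key --certificate-chain file://thawte_trust_chain.txt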

If you get the error as below

A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to validate certificate chain. The certificate chain must start with the immediate signing certificate, followed by any intermediaries in order. The index within the chain of the invalid certificate is: 2”

Please check the contents of the original root and intermediate certificate files, as they probably still have the Bag Attributes headers from the export, i.e.:

Bag Attributes

localKeyID: 01 00 00 00

friendlyName: serviceSSL

subject=/C=AU/ST=New South Wales/L=Sydney/O=Some Company/OU=IT/CN=service.example.com

issuer=/C=US/O=thawte, Inc./CN=thawte SSL CA - G2

Bag Attributes

friendlyName: thawte

subject=/C=US/O=thawte, Inc./OU=Certification Services Division/OU=(c) 2006 thawte, Inc. - For authorized use only/CN=thawte Primary Root CA

issuer=/C=US/O=thawte, Inc./OU=Certification Services Division/OU=(c) 2006 thawte, Inc. - For authorized use only/CN=thawte Primary Root CA

AWS Application LB Configuration

Follow this gist with comments embedded. Comments provided based on some gotchas during configuration.
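
If you prefer to script it rather than follow the gist, the rough shape of the CLI calls is sketched below; every identifier is a placeholder, and ports, certificates and stickiness values should be adjusted to your own application (note the 1860-second cookie duration mentioned earlier):

aws elbv2 create-load-balancer --name my-alb --subnets subnet-11111111 subnet-22222222 --security-groups sg-33333333
aws elbv2 create-target-group --name my-targets --protocol HTTPS --port 443 --vpc-id vpc-44444444
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-55555555 Id=i-66666666
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol HTTPS --port 443 --certificates CertificateArn=<certificate-arn> --default-actions Type=forward,TargetGroupArn=<target-group-arn>
aws elbv2 modify-target-group-attributes --target-group-arn <target-group-arn> --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=1860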

You should be now good to go, the load balancer takes a little while to warm up, however will be available within multiple availability zones.

If you have issues connecting to the ALB, validate connectivity directly to the server using curl. Again Chocolatey comes in handy: “choco install curl”.

Also double check your security group registered against the ALB and confirm NACLS.

WebServer configuration

You’ll need to import the SSL certificate into the local computer certificate store. Some of the third-party issuing CAs (Ensign, Thawte, etc.) may not have their intermediate CA within the computer’s Trusted Root CAs store, especially if the server was built in a network isolated from the internet, so make sure after installing the SSL certificate on the server that the trust chain is correct.

You won’t need to update the local hosts file on the servers to point to the load-balanced address.

Implementation using CNAMEs

In large enterprises where I’ve worked there have been long lead times associated with fairly simple DNS changes, which defeats some of the agility provided by cloud computing. A pattern I’ve often seen adopted is to use multiple CNAMEs to work around such lead times. Generally you’ll have a subdomain somewhere that the Ops team has more control over, or shorter lead times for. Within the target domain (example.com) create a CNAME pointing to an address within the ops-managed domain (aws.corp.internal) and have a CNAME created within that zone to point to the ALB address, i.e.

Service.example.com -> service.aws.corp.internal -> elbarn.region.elb.amazonaws.com

With this approach I can update service.aws.corp.internal to reflect a new service which I’ve built via a new ELB and avoid the enterprise change lead times associated with a change in .example.com.

Site Delivery with AWS CloudFront CDN

Nowadays, most companies are using some sort of Content Delivery Network (CDN) to improve the performance and high availability of their sites; these include Azure CDN, CloudFlare, CloudFront, Varnish, and so on.

In this blog however, I will demonstrate how you can deliver your entire website through AWS’s CloudFront. This blog will not go through other CDN services. This blog also assumes you have knowledge of AWS services, DNS, and CDN.

What is CloudFront?

Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets. It integrates with other Amazon Web Services products to give developers and businesses an easy way to accelerate content to end users with no minimum usage commitments.

CloudFront delivers the contents of your websites through global datacentres known as “Edge Locations”.

Assuming the webserver is located in New York and you’re accessing the website from Melbourne, your latency will be greater than that of someone accessing the website from London.

CloudFront’s Edge Locations will serve the content of a website depending on location. That is, if you’re trying to access a New York based website from Melbourne, you will be directed to the closest Edge Location available for users from Australia. There are two Edge Locations in Australia, one in Melbourne and one in Sydney.

How does CloudFront deliver content?

Please note that content isn’t delivered from the cache on the first request (whether you use CloudFront or any other CDN solution). That is, for the first user who accesses a page from Melbourne (first request), the contents of the page haven’t been cached yet and they will be fetched from the webserver. The second user who accesses the website (second request) will get the contents from the Edge Location.

Here’s how:

drawing1

The main features of CloudFront are:

  • Edge Location: This is the location where content will be cached. This is separate to an AWS Region or Availability Zone
  • Origin: This is the origin of all the files that the CDN will distribute. This can be either an S3 bucket, an EC2 instance, an ELB or Route 53.
  • Distribution: The name given to the CDN, which consists of a collection of Edge Locations.
  • Web Distribution: Typically used for Websites.
  • RTMP: Typically used for Media Streaming (Not covered in this blog).

In this blog, we will be covering “Web Distribution”.

There are multiple ways to “define” your origin. You could either upload your contents to an S3 bucket, or let CloudFront cache objects from your webserver.

I advise you to keep the same naming conventions you have previously used for your website.

There’s no real difference between choosing an S3 bucket, or your webserver to deliver contents, or even an Elastic Load Balancer.

What matters, however, is that should you choose for CloudFront to cache objects from your origin, you may need to change your hostname. Alternatively, if your DNS registrar allows it, you can make an APEX DNS change.

Before we dive deep into setting up and configuring CloudFront, know that it is a fairly simple process, and we will be using the CloudFront GUI to achieve this.

Alternatively you can use other third party tools like S3 Browser, Cloudberry Explorer, or even CloudFormation if you have several websites you’d like to enable CloudFront for. These tools are excluded from this blog.

Setup CloudFront with S3

I do not recommend this approach, because S3 is designed as a storage service and not a content delivery service, and under load it will not provide you with optimum performance. If you do go this route, the steps are:

  1. Create your bucket
  2. Upload your files
  3. Make your content public (this is achieved through Permissions; simply choose grantee “Everyone”). A CLI sketch of these steps follows below.
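
As a minimal sketch of the same steps with the AWS CLI, assuming a hypothetical bucket name and a local folder of site content:

aws s3 mb s3://my-website-bucket
aws s3 sync ./site s3://my-website-bucket --acl public-read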

Configuring CloudFront with S3

As aforementioned, configuring CloudFront is very straightforward. Here are the steps for doing so.

I will explain the different settings at the end of each image.

Choose your origin (domain name, S3 bucket, ELB…). If you have an S3 bucket or an ELB already configured, they will show in the drop down menu.

You could simply follow the selected options in the image for optimal performance and configuration of your CloudFront distribution.

5-cdn

  • Origin path: This is optional and usually not needed to be specified. This is basically a directory in your bucket in which you’re telling CloudFront to request the content from.
  • Origin ID: This is automatically populated, but you can change it. Its only function is for you to distinguish origins if you have multiple origins in the same distributions.
  • Restrict Bucket Access: This is for users to access your CloudFront URL e.g. 123456.cloudfront.net rather than the S3 URL.
  • Origin Access Identity: This is required if you want your users to always access your Amazon S3 content using CloudFront URLs. You can use the same Access Identity for all your distributions. In fact, it is recommended you do so to make life simpler.
  • Grant Read Permissions on Bucket: This applies on the “Origin Access Identity” so CloudFront can access objects in your Amazon S3 bucket. This is automatically applied.
  • Viewer Protocol Policy: This is to specify how users should access your origin domain name. If you have a website that accepts both HTTP and HTTPS, then choose that. CloudFront will fetch the contents based on this viewer policy. That is, if a user typed in http://iknowtech.com.au then CloudFront will fetch content over HTTP. If HTTPS is used, then CloudFront will fetch contents over HTTPS. If your website only accepts HTTPS, then choose that option.
  • Allowed HTTP Methods: This is basically used for commerce websites or websites with login forms which require data from end users. You can keep the default of “GET, HEAD”. Nevertheless, make sure to configure your webserver to handle “DELETE” appropriately, otherwise users might be able to delete content.
  • Cached HTTP Methods: You will have an additional “Options”, if you choose the specified HTTP Methods shown above. This is to specify the methods in which you want CloudFront to do caching.

In the second part of the configuration:

6-cdn

  • Price Class: This is to specify which regions of available Edge Locations you want to “serve” your website from.
  • AWS WAF Web ACL: Web Application Firewall (WAF) is a set of ACL rules which you create to protect your website from attacks, e.g. SQL Injections etc. I highlighted that on purpose as there will be another blog for that alone.
  • CNAME: If you don’t want to use CloudFront’s URL, e.g. 123456.cloudfront.net, and instead you want to use your own domain, then specifying a CNAME is a good idea, and you can specify up to 100 CNAMEs. Nevertheless, there may be a catch. Most DNS hosting services may not allow you to edit the APEX zone of your records. If you create a CNAME for http://www.domain.com to point to 123456.cloudfront.net, then any requests coming from http://domain.com will not be going through CloudFront. And if you have a redirection set up in your webserver for any request coming from http://www.domain.com to go to http://domain.com, then there’s no point configuring CloudFront.
  • SSL Certificates: You could either use CloudFront’s certificate, and it is a wildcard certificate of *.cloudfront.net, or you can request to use your own domain’s certificate.
  • Supported HTTP Versions: What you need to know is that CloudFront always forwards requests to the origin using HTTP/1.1. This is also based on the viewer policy; most modern browsers and websites support the HTTP versions shown above. HTTP/2 is usually faster. Read more here for more info on HTTP/2 support. In theory this sounds ideal; technically, however, nothing much is happening in the backend of CloudFront.
  • Logging: Choose to have logging on. Logs are saved in an S3 bucket.
  • Bucket for Logs: Specify the bucket you want to save the logs onto.
  • Log Prefix: Choose a prefix for your logs. I like to include the domain name for each log of each domain.
  • Cookie logging: Not quite important to have it turned on.
  • Enable IPv6: You can have IPv6 enabled, however as of this writing this is still being deployed.
  • Distribution State: You can choose to deploy/create your CloudFront distribution with either in an enabled or disabled state.

Once you’ve completed the steps above, click on “Create Distribution”. It might take anywhere from 10 to 30 minutes for CloudFront to be deployed. Average waiting time is 15 minutes.

Setup and Configure your DNS Records

Once the Distribution is started, head over to “Distributions” in CloudFront, click on the Distribution ID and take note of the domain name: d2hpa1xghjdn8b.cloudfront.net.

Head over to your DNS records and add the CNAME (or CNAMEs) you have specified in earlier steps to point to d2hpa1xghjdn8b.cloudfront.net.

Do not wait until the Distribution is complete; add the DNS records while the Distribution is being deployed. This will at least give time for your DNS to propagate, since CloudFront takes anywhere between 10 and 30 minutes to be deployed.

17-dns

18-dns

If you’re delivering the website through an ELB, then you can use the ELB’s CNAME to point your site to it.

Here’s what will appear eventually once the CloudFront Distribution is complete. Notice the URLs: http://d80u8wk4w5p58.cloudfront.net/nasa_blue_marble.jpg (I may remove this link in the future).

8-cdn

You can also access it via: http://cdn.iknowtech.com.au/nasa_blue_marble.jpg (I may also remove this link in the future).

10-cdn

Configuring CloudFront with Custom Origin

Creating a CloudFront distribution based on a custom origin, that is, allowing CloudFront to cache directly from your domain, is pretty much the same as above, with a few differences, as shown below. Every other setting is the same as above.

9-cdn

The changes, as you can see, relate to the SSL protocols and the HTTP and HTTPS ports.

  • Origin SSL Protocols: This is to specify which SSL protocols CloudFront will use when establishing a connection to your origin. If you don’t have SSLv3, keep it disabled. If you do, and your origin does not support TLSv1, TLSv1.1 and TLSv1.2, then choose SSLv3.
  • Origin Protocol Policy: This is the same as Viewer Protocol Policy discussed above. If you choose “Match Viewer” then it will work with both HTTP and HTTPS. Obviously, it also depends on how your website is set up.
  • HTTP and HTTPS ports: Leave default ports.

Configure CloudFront with WordPress

If you have a WordPress site, this is probably the easiest way to configure CloudFront: through the use of plugins, you can change the hostname.

  1. Install the W3 Total Cache plugin in your WordPress page.
  2. Enable CDN and choose CloudFront. This is found in the “General Settings” tab of the W3 plugin. 11-cdn
  3. While scrolling down to CDN, you may enable other forms of “caching” found in settings.
  4. After saving your changes, click on “CDN” tab of the W3 plugin.
  5. Key in the required information. I suggest you create an IAM user with permissions to CloudFront to be used here.

Note that I used cdn2.iknowtech.com.au because I had already used cdn.iknowtech.com.au. CloudFront will detect this setting and give you an error if you try and use the same CNAME.

12-cdn

Once your settings are saved, here’s how it’ll look.

Note the URL: http://d2hpa1xghjdn8b.cloudfront.net (I may remove this link in the future). 13-cdn

You can also access it via: http://cdn2.iknowtech.com.au (I may also remove this link in the future).

14-cdn

 

 

To make sure your CDN is working, you can perform some tests using any of the following: gtmetrix.com, pingdom.com or webpagetest.org.

Here are the results from gtmetrix, tested for iknowtech.com.au

15-cdn

The same result for cdn2.iknowtech.com.au.

Notice the page load time after the second request.

16-cdn

And that’s it. That’s all you need to know to create a CloudFront distribution.

Conclusion

CloudFront is definitely a product to use if you’re looking for a CDN. It is true that there are many other CDN products out there, but CloudFront is one of the easiest and most highly available CDNs on the market.

Before you actually utilise CloudFront or any other CDN solutions, just be mindful of your hostnames. You need your primary domain or record to be cached.

I hope you found this blog informative. Please feel free to post your comments below.

Thanks for reading.

 

azure-function-twitter

How to use a Powershell Azure Function to Tweet IoT environment data

Overview

This blog post details how to use a Powershell Azure Function App to get information from a RestAPI and send a social media update.

The data can come from anywhere, and in the case of this example I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the difference in this post is outputting the manipulated data to social media (Twitter) whilst still using a TimerTrigger Powershell Azure Function App to perform the work and leverage the “serverless” Azure Functions model.

Prerequisites

The following are prerequisites for this solution:

Create a folder on your local machine for the Powershell Module, then save the module to your local machine using the Powershell command ‘Save-Module’ as per below.

Save-Module -Name InvokeTwitterAPIs -Path c:\temp\twitter

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption so you only pay for what you use, and select an appropriate location and Storage Account.

Create a Twitter App

Head over to http://dev.twitter.com and create a new Twitter App so you can interact with Twitter using their API. Give your Twitter App a name. Don’t worry about the URL too much or the need for the Callback URL. Select Create your Twitter Application.

Select the Keys and Access Tokens tab and take a note of the API Key and the API Secret. Select the Create my access token button.

Take a note of your Access Token and Access Token Secret. We’ll need these to interact with the Twitter API.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function. For my app I’m changing from the default of a 5 min schedule to hourly on the top of the hour. I did this after I’d already created the Function App as shown below. To update the schedule I edited the Function.json file and changed the schedule to “schedule”: “0 0 * * * *”

Give your Function App a name and select Create.

Configure Azure Function App Application Settings

In your Azure Function App select “Configure app settings”. Create new App Settings for your Twitter Account, Twitter Account AccessToken, AccessTokenSecret, APIKey and APISecret using the values from when you created your Twitter App earlier.

Deployment Credentials

If you haven’t already configured Deployment Credentials for your Azure Function Plan do that and take note of them so you can upload the Twitter Powershell module to your app in the next step.

Take note of your Deployment Username and FTP Hostname.

Upload the Twitter Powershell Module to the Azure Function App

Create a sub-directory under your Function App named bin and upload the Twitter Powershell Module using an FTP client. I’m using WinSCP.

From the Applications Settings option start Kudu.

Traverse the folder structure to get the path to the Twitter Powershell Module and note it.

Validating our Function App Environment

Update the code to replace the sample from the creation of the Trigger Azure Function as shown below to import the Twitter Powershell Module. Include the get-help line for the module so we can see in the logs that the module was imported and we can see the cmdlets they contain. Select Save and Run.
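
As a rough sketch (the function name and module version in the path are assumptions based on the bin folder created earlier):

# Path is an assumption – adjust the function name and module version to match your upload
Import-Module "D:\home\site\wwwroot\MyTwitterFunction\bin\InvokeTwitterAPIs\2.1\InvokeTwitterAPIs.psm1"
Get-Command -Module InvokeTwitterAPIs | ForEach-Object { Get-Help $_.Name }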

Below is my output. I can see the output from the Twitter Module.

Function Application Script

Below is my sample script. It has no error handling etc so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and sends a tweet out to Twitter.
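
As a rough outline only; the posting cmdlet and its parameters below are placeholders for whatever the InvokeTwitterAPIs module actually exposes (check the Get-Command output from the previous step), and the sensor URL and its response fields are assumptions:

# App Settings configured earlier surface as environment variables
$oauth = @{
    ApiKey            = $env:TwitterAPIKey
    ApiSecret         = $env:TwitterAPISecret
    AccessToken       = $env:TwitterAccessToken
    AccessTokenSecret = $env:TwitterAccessTokenSecret
}

# Get the latest environment reading from the sensor RestAPI (URL stored as an App Setting; fields are assumptions)
$reading = Invoke-RestMethod -Uri $env:SensorURL -Method Get

# Compose the status update
$status = "Office is currently $($reading.temperature)C with $($reading.humidity)% humidity #IoT"

# Import the module uploaded to the bin folder and post the update.
# Send-Tweet is a placeholder – use whichever cmdlet Get-Command listed in the previous step.
Import-Module "D:\home\site\wwwroot\MyTwitterFunction\bin\InvokeTwitterAPIs\2.1\InvokeTwitterAPIs.psm1"
Send-Tweet -Status $status -OAuthSettings $oauth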

Viewing the Tweet

And here is the successful tweet.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways. In this example a social media platform. The input could easily be business data from an API and the output a corporate social platform such as Yammer.

Follow Darren on Twitter @darrenjrobinson

Completing an Exchange Online Hybrid individual MoveRequest for a mailbox in a migration batch

I can’t remember for certain, however, I would say since at least Exchange Server 2010 Hybrid, there has always been the ability to complete a MoveRequest from on-premises to Exchange Online manually (via PowerShell) for a mailbox that was within a migration batch. It’s a really important feature for all customers to have, and something I have used on every enterprise migration to Exchange Online.

What are we trying to achieve here?

With enterprise customers and the potential for thousands of mailboxes to move from on-premises to Exchange Online, business analysts get their “kid in a candy store” on and sift through data to come up with relationships between mailboxes, so these mailboxes can be grouped together in migration batches for synchronised cutovers.

This is the tried and true method to ensure that not only do business units, departments or teams cut over around the same timeframe, but any specialised mailbox and shared mailbox relationships cut over too. This then keeps business processes working nicely and with minimal disruption to users.

Sidebar – As of March 2016, it is now possible to have permissions to shared mailboxes work cross-organisation from cloud to on-premises, in hybrid deployments with Exchange Server 2013 or 2016 on-premises. Cloud mailboxes can access on-premises shared mailboxes, which can streamline migrations where not all shared mailboxes need to be cut over with their full-access users.

Reference: https://blogs.technet.microsoft.com/mconeill/2016/03/20/shared-mailboxes-in-exchange-hybrid-now-work-cross-premises/

How can we achieve this?

In the past, and from what I have been doing for what feels like the last 4 years, this required one or two PowerShell cmdlets. The main reference that I’ve recently gone to is Paul Cunningham’s ExchangeServerPro blog post, which conveniently is also the main Google search result. The two lines of PowerShell are as follows:

Step 1 – Change the Suspend When Ready to Complete switch to $false or turn it off

Get-MoveRequest mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false

Step 2 – Resume the move request to complete the cutover of the mailbox

Get-MoveRequest mailbox@domain.com | Resume-MoveRequest

Often I didn’t even do step 1 mentioned above. I just resumed the move request. It all worked and got me the result I was after. This was across various migration projects with both Exchange Server 2010 and Exchange Server 2013 on-premises.

Things changed recently!?!? I’ve been working with a customer on their transition to Exchange Online for the last month and we’re now up to moving a pilot set of mailboxes to the cloud. Things were going great, as expected, nothing out of the ordinary. That is, until I wanted to cutover one initial mailbox in our first migration batch.

Why didn’t this work?

It’s been a few months, three quarters of a year, since I’ve done an Office 365 Exchange Online mail migration. I did the trusted Google and found Paul’s blog post. Had that “oh yeah, that’s it” moment and remembered the PowerShell cmdlets. Our pilot user was cut over! Wrong…

The mailbox went from a status of “synced” to “incrementalsync” and back to “synced” with no success. I ran through the PowerShell again. No bueno. Here’s a screen grab of what I was seeing.

SERIOUS FACE > Some names and identifying details have been changed to protect the privacy of individuals

And yes, that is my PowerShell colour theme. I have to get my “1337 h4x0r” going and make it green and stuff.

I see where this is going. There’s a workaround. So Lucian, hurry up and tell me!

Yes, dear reader, you would be right. After a bit of back and forth with Microsoft, there is indeed a solution. It still involves two PowerShell cmdlets, but there are two additional properties. One of those properties is actually hidden: -PreventCompletion. The other property is a date and time value normally represented like “28/11/2016 12:00:00 PM”.

Let’s cut to the chase; here are the two cmdlets:

Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -preventCompletion:$false -CompleteAfter 5
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest

After running that, the mailbox, along with another four to be sure it worked, all successfully cut over to Exchange Online. My -CompleteAfter value of “5” was meant to be 5 minutes. That could instead be set with a proper date and time value of the current date and time plus 5 minutes to do it correctly. For me, it worked with the above PowerShell.
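
If you’d rather pass a proper timestamp instead, something like this should do it, using the same mailbox placeholder as above:

Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -PreventCompletion:$false -CompleteAfter (Get-Date).AddMinutes(5)
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest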

Final words

I know it’s been some time since I moved mailboxes from on-premises to the cloud; however, it’s like riding a bike: you never forget. In this case, I knew the option was there to do it. I pestered Office 365 support for two days. I hope the awesome people there don’t hate me for it. Although, in the long run, it’s good that I did, as figuring this out and using this method of manual mailbox cutover for sync’ed migration batches is crucial to any transition to the cloud.

Enjoy!

Lucian

azure-function-powerbi

How to use a Powershell Azure Function App to get RestAPI IoT data into Power BI for Visualization

Overview

This blog post details using a Powershell Azure Function App to get IoT data from a RestAPI and update a table in Power BI with that data for visualization.

The data can come from anywhere, however in the case of this post I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the major change is to use a TimerTrigger Azure Function to perform the work and leverage the “serverless” Azure Functions model. No need for a reporting server or messing around with Windows scheduled tasks.

Prerequisites

The following are the prerequisites for this solution:

  • The Power BI Powershell Module
  • Register an application for RestAPI Access to Power BI
  • A Power BI Dataset ready for the data to go into
  • AzureADPreview Powershell Module

Create a folder on your local machine for the Powershell Modules, then save the modules to your local machine using the Powershell command ‘Save-Module’ as per below.

Save-Module -Name PowerBIPS -Path C:\temp\PowerBI
Save-Module -Name AzureADPreview -Path c:\temp\AzureAD 

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption Plan for the Hosting Plan so you only pay for what you use, and select an appropriate location and Storage Account.

Register a Power BI Application

Register a Power BI App if you haven’t already using the link and instructions in the prerequisites. Take a note of the ClientID. You’ll need this in the next step.

Configure Azure Function App Application Settings

In this example I’m using Azure Functions Application Settings for the Azure AD AccountName, Password and the Power BI ClientID. In your Azure Function App select “Configure app settings”. Create new App Settings for your UserID and Password for Azure (to access Power BI) and our PowerBI Application Client ID. Select Save.

Not shown here I’ve also placed the URL’s for the RestAPI’s that I’m calling to get the IoT environment data as Application Settings variables.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function App. The default of a 5 min schedule should be perfect. Give it a name and select Create.

Upload the Powershell Modules to the Azure Function App

Now that we have created the base of our Function App we’re going to need to upload the Powershell Modules we’ll be using that are detailed in the prerequisites. In order to upload them to your Azure Function App, go to App Service Settings => Deployment Credentials and set a Username and Password as shown below. Select Save.

Take note of your Deployment Username and FTP Hostname.

Create a sub-directory under your Function App named bin and upload the Power BI Powershell Module using an FTP client. I’m using WinSCP.

To make sure you get the correct path to the Powershell module, start Kudu from the Application Settings option.

Traverse the folder structure to get the path to the Power BI Powershell Module and note the path and the name of the psm1 file.

Now upload the Azure AD Preview Powershell Module in the same way as you did the Power BI Powershell Module.

Again using Kudu, validate the path to the Azure AD Preview Powershell Module. The file you are looking for is the “Microsoft.IdentityModel.Clients.ActiveDirectory.dll” file. My file after uploading is located in “D:\home\site\wwwroot\MyAzureFunction\bin\AzureADPreview\2.0.0.33\Microsoft.IdentityModel.Clients.ActiveDirectory.dll”.

This library is used by the Power BI Powershell Module.

Validating our Function App Environment

Update the code to replace the sample from the creation of the Trigger Azure Function as shown below to import the Power BI Powershell Module. Include the get-help line for the module so we can see in the logs that the modules were imported and we can see the cmdlets they contain. Select Save and Run.

Below is my output. I can see the output from the Power BI Module get-help command. I can see that the module was successfully loaded.

Function Application Script

Below is my sample script. It has no error handling etc so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and puts the data directly into Power BI.
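
As a rough outline only; the App Setting names, dataset and table names, sensor URLs and their response fields are all assumptions, and cmdlet parameter names can differ between PowerBIPS versions (check the Get-Command output from the earlier step):

# Import the module uploaded to the bin folder – adjust the path and version to your upload
Import-Module "D:\home\site\wwwroot\MyPowerBIFunction\bin\PowerBIPS\PowerBIPS.psm1"

# Authenticate to Power BI with the account and ClientID held in Application Settings (names are assumptions)
$securePassword = ConvertTo-SecureString $env:PowerBIUserPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($env:PowerBIUserName, $securePassword)
$authToken = Get-PBIAuthToken -ClientId $env:PowerBIClientId -Credential $credential

# Get the IoT readings from the sensor RestAPIs (URLs stored as App Settings; response fields are assumptions)
$inside  = Invoke-RestMethod -Uri $env:InsideSensorURL
$outside = Invoke-RestMethod -Uri $env:OutsideSensorURL

# Shape a row and push it into the existing dataset and table
$row = @{ DateTime = Get-Date; InsideTemp = $inside.temperature; OutsideTemp = $outside.temperature }
@($row) | Add-PBITableRows -AuthToken $authToken -DataSetName "IoTEnvironment" -TableName "Readings"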

Viewing the data in Power BI

In Power BI it is then quick and easy to select our Inside and Outside temperature readings referenced against time. This timescale is overnight so both sensors are reading quite close to each other.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways. In this example a visualization of IoT data into Power BI. The input could easily be business data from an API and the output a real time reporting dashboard.

Follow Darren on Twitter @darrenjrobinson