Calling WCF client proxies in Azure Functions

Azure Functions allows developers to write discrete units of work and run them without having to deal with hosting or application infrastructure concerns. Azure Functions is Microsoft’s answer to serverless computing on the Azure platform and, together with Azure Service Bus, Azure Logic Apps and Azure API Management (to name just a few), has become an essential part of the Azure iPaaS offering.

The problem

Integration solutions often require connecting legacy systems using deprecated protocols such as SOAP and WS-*. It’s not all REST, hypermedia and OData out there in the enterprise integration world. Development frameworks like WCF help us deliver solutions rapidly by abstracting much of the boilerplate code away from us. Often these frameworks rely on custom configuration sections that are not available when developing solutions in Azure Functions. In Azure Functions (as of today at least) we only have access to the generic appSettings and connectionStrings sections of the configuration.

How do we bridge the gap and use the old boilerplate code we are familiar with in the new world of serverless integration?

So let’s set the scene. Your organisation consumes a number of legacy B2B services exposed as SOAP web services. You want to be able to consume these services from an Azure Function, but you definitely do not want to be writing any low-level SOAP protocol code. We want to use the generated WCF client proxy so we implement the correct message contracts, transport and security protocols.

In this post we will show you how to use a generated WCF client proxy from an Azure Function.

Start by generating the WCF client proxy in a class library project using Add Service Reference, provide details of the WSDL and build the project.

[Screenshot: Add Service Reference]

Examine the generated bindings to determine the binding we need and what policies to configure in code within our Azure Function.

[Screenshot: generated bindings configuration]

In our sample service above we need to create a BasicHttpBinding and configure basic authentication.

Create an Azure Function App using an appropriate template for your requirements and follow these steps to call your WCF client proxy:

Add the System.ServiceModel NuGet package to the function via the project.json file so we can create and configure the WCF bindings in our function.
[Screenshot: project.json]
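
For reference, the project.json is along these lines; the exact package split and versions are indicative only (System.ServiceModel ships as a set of packages on NuGet):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "System.ServiceModel.Primitives": "4.3.0",
        "System.ServiceModel.Http": "4.3.0"
      }
    }
  }
}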

Add the WCF client proxy assembly to the ./bin folder of our function. Use Kudu to create the folder and then upload your assembly using the View Files panel.

[Screenshot: uploading the WCF client proxy assembly]

In your function, add references to both the System.ServiceModel assembly and your WCF client proxy assembly using the #r directive
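
For example, at the top of run.csx (the proxy assembly name here is illustrative; use whatever your class library produced):

#r "System.ServiceModel"
#r "MyLegacyService.Client.dll"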

When creating an instance of the WCF client proxy, instead of specifying the endpoint and binding in a config file, create them in code and pass them to the constructor of the client proxy.

Your function will look something like this
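
Something along these lines; a minimal sketch in which the LegacyServiceClient proxy type, its GetDataAsync operation and the app setting names are all illustrative, and the binding security mode should match whatever your generated bindings file showed:

#r "System.ServiceModel"
#r "MyLegacyService.Client.dll"

using System;
using System.Net;
using System.ServiceModel;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Recreate in code what would normally live in the <bindings> config section
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

    // Endpoint address and credentials come from the Function App's appSettings
    var endpoint = new EndpointAddress(Environment.GetEnvironmentVariable("ServiceEndpoint"));

    var client = new LegacyServiceClient(binding, endpoint);
    client.ClientCredentials.UserName.UserName = Environment.GetEnvironmentVariable("ServiceUser");
    client.ClientCredentials.UserName.Password = Environment.GetEnvironmentVariable("ServicePassword");

    var result = await client.GetDataAsync();
    return req.CreateResponse(HttpStatusCode.OK, result);
}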

Lastly, add endpoint address and client credentials to appSettings of your Azure Function App.

Test the function using the built-in test harness to check that the function executes OK.

[Screenshot: testing the function]


Conclusion

The suite of integration services available on the Azure platform is developing rapidly, and composing your future integration platform on Azure is a compelling option in a maturing iPaaS marketplace.

In this post we have seen how we can continue to deliver legacy integration solutions using emerging integration-platform-as-a-service offerings.

Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration

Last week in a customer development environment I had one of those oh shit moments where I thought I’d lost a couple of weeks of work: a couple of weeks of development around multiple Management Agents, MV Schema changes etc. Luckily for me I was just connecting to an older VM image, but it got me thinking. It would be nice to have an automated process that each night would;

  • Export each Management Agent on a FIM/MIM Sync Server
  • Export the FIM/MIM Synchronisation Server Configuration
  • Take a copy of the Extensions Folder (where I keep my PowerShell Management Agents scripts)
  • Export the FIM/MIM Service Server Configuration

And that is what this post covers.

Overview

My automated process performs the following;

  1. An Azure PowerShell Timer Function WebApp is triggered at 2330 each night
  2. The Azure Function App initiates a Remote PowerShell session to my Dev MIM Sync Server (which is also a MIM Service Server)
  3. In the Remote PowerShell session the script (a rough sketch of these commands follows below);
    1. Creates a new subfolder under c:\backup with the current date and time (dd-MM-yyyy-hh-mm)
    2. Creates further subfolders for each of the backup elements
      • MAExports
      • ServerExport
      • MAExtensions
      • PortalExport
    3. Utilises the Lithnet MIIS Automation PowerShell Module to;
      1. Enumerate each of the Management Agents on the FIM/MIM Sync Server and export each Management Agent to the MAExports folder
      2. Export the FIM/MIM Sync Server Configuration to the ServerExport folder
    4. Copies the Extensions folder and subfolder contents to the MAExtensions folder
    5. Utilises the FIM/MIM Export-FIMConfig cmdlet to export the FIM Service configuration to the PortalExport folder
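
As a rough sketch, the core of what runs in the remote session looks like this. The Lithnet MA export cmdlet name and parameters are assumptions from memory, so verify them with Get-Command -Module LithnetMIISAutomation; Export-FIMConfig and ConvertFrom-FIMResource come from the FIMAutomation snap-in:

# Create the timestamped backup folder and the per-element subfolders
$backupRoot = "C:\backup\$((Get-Date).ToString('dd-MM-yyyy-hh-mm'))"
'MAExports','ServerExport','MAExtensions','PortalExport' |
    ForEach-Object { New-Item -ItemType Directory -Path (Join-Path $backupRoot $_) -Force | Out-Null }

# Export each Management Agent via the Lithnet MIIS Automation module
# (cmdlet/parameter names assumed - check the module's help)
Import-Module LithnetMIISAutomation
Get-ManagementAgent | ForEach-Object {
    Export-ManagementAgent -Name $_.Name -Path "$backupRoot\MAExports\$($_.Name).xml"
}
# Export the Sync Server configuration to $backupRoot\ServerExport in a similar fashion
# (see the Lithnet module's help for the server export cmdlet)

# Copy the Extensions folder (where my PowerShell MA scripts live)
Copy-Item 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions' `
    -Destination "$backupRoot\MAExtensions" -Recurse

# Export the FIM/MIM Service (Portal) configuration
Add-PSSnapin FIMAutomation
Export-FIMConfig -Uri 'http://localhost:5725' -PolicyConfig -PortalConfig -MessageSize 9999999 |
    ConvertFrom-FIMResource -File "$backupRoot\PortalExport\PortalExport.xml"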

Implementing the FIM/MIM Backup Process

The majority of the setup to get this to work I’ve covered in other posts, particularly around Azure PowerShell Function Apps and Remote PowerShell into a FIM/MIM Sync Server.

Pre-requisites

  • I created a C:\Backup Folder on my FIM/MIM Server. This is where the backups will be placed (you can change the path in the script).
  • I installed the Lithnet MIIS Automation PowerShell Module on my MIM Sync Server
  • I configured my MIM Sync Server to accept Remote PowerShell Sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file and uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 2330 every night using the following CRON configuration

0 30 23 * * *

  • I set the Azure Function App timezone to my timezone so that the nightly backup happened at the correct time relative to my timezone. I got my timezone index from here. I set the following variable in my Azure Function Application Settings to my timezone name, AUS Eastern Standard Time.

    WEBSITE_TIME_ZONE

The Function App Script

With all the pre-requisites met, the only thing left is the Function App script itself. Here it is. Update lines 2, 3 & 6 if your variables and password key file are different. The path to your password keyfile on line 6 will likely be different anyway.

Update line 25 if you want the backups to go somewhere else (maybe a DFS Share).
If your MIM Service Server is not on the same host as your MIM Sync Server change line 59 for the hostname. You’ll need to get the FIM/MIM Automation PS Modules onto your MIM Sync Server too. Details on how to achieve that are here.

Running the Function App I have limited output, but enough to see it run. The first part of the script runs very quickly; the Export-FIMConfig is what takes the majority of the time. That said, it’s less than a minute to get a nice point-in-time backup that is auto-magically executed nightly. Sorted.

 

Summary

The script itself can be run standalone, and you could implement it as a Scheduled Task on your FIM/MIM Server. However, I’m using Azure Functions for a number of things, and having something that is easily portable, repeatable and centralised with other functions (pun not intended) keeps things organised.

I now have a daily backup of the configurations associated with my development environment. I’m sure this will save me some time in the near future.

Follow Darren on Twitter @darrenjrobinson


Integrating Microsoft Flow with Azure Functions for Non-IT People

Microsoft Flow (Flow) creates automated workflows between various apps and services so that users can get notifications, collect data and more. It is similar to Azure Logic Apps (Logic Apps), but has a different target audience: marketing, sales and other non-IT people. This document provides a high-level comparison between Flow, Logic Apps and Azure Functions.

Flow contains a comprehensive number of pre-defined workflows called templates, so we can simply choose one of them, provide the necessary information and use it. If there is no template suitable for our purpose, we can create a new one from scratch using pre-defined triggers and actions. If there is no suitable pre-defined trigger or action, we can use a simple HTTP trigger built with Azure Functions. In this post, we are going to have a look at how to use Azure Functions, an HTTP Trigger in particular, to integrate with Flow.

As a Marketing Staff, I Want to …

Let’s say there is someone from a marketing department. They want to search all Twitter posts with a hashtag, #ausopen for example, and have those posts fed into their marketing Slack channel. This can be easily accomplished by using a pre-defined template.

We can easily set the hashtag they want to follow and the Slack channel to post to, like:

This is all set! Too easy! Now, we are on the Free plan, so this Flow runs every five minutes. If we want to run the flow more frequently, we should upgrade to a paid plan like Flow Plan 1 (runs every 3 mins) or Flow Plan 2 (runs every minute). Once the flow runs, the marketing channel in Slack will receive all tweets like:

We’ve so far created a Flow item as an example.

As a …, I Want to Handle those Tweets in a Different Way

Probably the marketing staff need more sophisticated analysis, storing those tweets into a database, or want to do something else that the pre-defined actions/triggers don’t support out-of-the-box. In this case we can introduce an HTTP Trigger function to do so. Let’s create an HTTP Trigger function.
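
A minimal sketch of such a function in C# script follows; it just logs and echoes back whatever Flow posts to it (the exact function name and payload shape are illustrative):

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Log the raw payload so we can see exactly how Flow passes the tweet to us
    string payload = await req.Content.ReadAsStringAsync();
    log.Info($"Payload received from Flow: {payload}");

    // Hand the payload back so the next action in Flow can keep working with it
    return req.CreateResponse(HttpStatusCode.OK, payload);
}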

Of course, we should implement more complex logic in the function. However, this is just an example, so we only log how Flow passes the data to the Azure Function for now. When the function is ready like the above, we know its endpoint URL, like https://my-function-app.azurewebsites.net/api/TwitterWebhoook?code=XXXXXX.

Copy this endpoint URL for Flow. Now we need to modify the existing Flow item like:

When a new tweet with the hashtag #ausopen is found, the entire tweet object is passed to Azure Functions through the POST method, then the tweet is posted to the Slack channel. Wait for up to five minutes (we’re on the Free Plan!).

The Slack channel has finally been updated.

This is the log from Flow:

And this is the log from Azure Functions:

So far, we have integrated Azure Functions (HTTP Trigger) with Microsoft Flow so that we can do more complex jobs through it. The code used in this post was very simple but, depending on the complexity of your requirements, the function can handle jobs in a more sophisticated way.

Testing Precompiled Azure Functions

Azure Functions has recently added a new feature that allows precompiled assemblies to be used to run functions. This gives us great confidence with regard to unit testing. In this post, we are walking through how to unit test functions with ease, just like the tests we write every day.

The sample code used in this post can be found HERE.

Function without Dependency

We’re not digging into precompiled functions too much here as they are covered in the documentation. Let’s have a quick look at the HTTP trigger function code:
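
A representative sketch (class and function names illustrative):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

public static class MyHttpTrigger
{
    public static Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");
        return Task.FromResult(req.CreateResponse(HttpStatusCode.OK, "Hello World"));
    }
}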

Nothing special. Now, let’s write test code for this function using xUnit and FluentAssertions.
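
Something like the following; the TraceWriter stub and the HttpConfiguration wiring are only there to satisfy the function’s signature and CreateResponse:

using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Hosting;
using FluentAssertions;
using Microsoft.Azure.WebJobs.Host;
using Xunit;

public class MyHttpTriggerTests
{
    // Minimal TraceWriter stub; the function only needs something it can log to
    private class FakeTraceWriter : TraceWriter
    {
        public FakeTraceWriter() : base(TraceLevel.Verbose) { }
        public override void Trace(TraceEvent traceEvent) { }
    }

    [Fact]
    public async Task Given_Request_Run_Should_Return_OK()
    {
        var req = new HttpRequestMessage();
        // CreateResponse needs an HttpConfiguration attached to the request
        req.Properties[HttpPropertyKeys.HttpConfigurationKey] = new HttpConfiguration();

        var res = await MyHttpTrigger.Run(req, new FakeTraceWriter());

        res.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}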

How does it look? It’s the same unit test code as what we write every day. Let’s move on.

Function with Dependency

As I wrote in Managing Dependencies in Azure Functions the other day, dependency management is a bit tricky for Azure Functions due to its static nature. Therefore, we should introduce the Service Locator Pattern for dependency management (or injection). Here’s the sample function code:
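
A sketch of the idea (IMyService and the function name are illustrative; the IServiceLocator/ServiceLocator types are defined just below):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

public interface IMyService
{
    string GetValue();
}

public static class MyServiceLocatorFunction
{
    // Swappable at run time, which is what makes the static function testable
    public static IServiceLocator Locator = new ServiceLocator();

    public static Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        var value = Locator.GetInstance<IMyService>().GetValue();
        return Task.FromResult(req.CreateResponse(HttpStatusCode.OK, value));
    }
}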

As we can see in the code above, the value is retrieved from the service locator instance. Of course, this is just a simple implementation of the service locator pattern (if we need a more sophisticated one, we should consider an IoC container library like Autofac). And here’s the poor man’s service locator:
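
A minimal sketch of such a locator, nothing more than a dictionary of registered instances:

using System;
using System.Collections.Generic;

public interface IServiceLocator
{
    T GetInstance<T>();
    void Register<T>(T instance);
}

public class ServiceLocator : IServiceLocator
{
    private readonly Dictionary<Type, object> _instances = new Dictionary<Type, object>();

    public void Register<T>(T instance)
    {
        _instances[typeof(T)] = instance;
    }

    public T GetInstance<T>()
    {
        return (T)_instances[typeof(T)];
    }
}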

Let’s see the test code for the function with dependencies. With the service locator, we can inject mocked objects for unit testing, which is convenient for developers. For mocking, we use Moq in the following test code.
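
Along these lines, using the sketched types from above:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Hosting;
using FluentAssertions;
using Moq;
using Xunit;

public class MyServiceLocatorFunctionTests
{
    [Fact]
    public async Task Given_MockedDependency_Run_Should_Return_MockedValue()
    {
        // Mock the dependency and register it with the locator the function uses
        var mocked = new Mock<IMyService>();
        mocked.Setup(p => p.GetValue()).Returns("mocked value");

        var locator = new ServiceLocator();
        locator.Register<IMyService>(mocked.Object);
        MyServiceLocatorFunction.Locator = locator;

        var req = new HttpRequestMessage();
        req.Properties[HttpPropertyKeys.HttpConfigurationKey] = new HttpConfiguration();

        var res = await MyServiceLocatorFunction.Run(req, null);

        res.StatusCode.Should().Be(HttpStatusCode.OK);
        (await res.Content.ReadAsStringAsync()).Should().Contain("mocked value");
    }
}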

We create a mocked instance and inject it into the service locator. Then the injected value (or instance) is consumed within the function. How different is this from everyday testing? There’s no difference at all. In other words, implementing a service locator gives us the same development experience on Azure Functions, from the testing point of view.

I wrote another article about testing a few months ago, using ScriptCs. That used to be one approach, when Azure Functions didn’t support precompiled assemblies. Now we have precompiled functions supported, so I hope this post will be useful for designing functions with better testability.

How to create an Azure Function App to Simultaneously Start|Stop all Virtual Machines in a Resource Group

Just on a year ago I wrote this blog post that detailed a method to “Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group”. It’s a simple script that I use quite a lot and I’ve received a lot of positive feedback on it.

One year on though and there are a few enhancements I’ve been wanting to make to it. Namely;

  • host the script in an environment that is in a known state. Often I’m authenticated to different Azure Subscriptions: my personal, my employer’s and my customers’.
  • prioritise the order in which the virtual machines start up|shut down
  • allow for a delay between starting each VM (to account for environments where the VMs have roles with cross dependencies; e.g. a Domain Controller, a SQL Server, Application Servers). You want the DC to be up and running before the SQL Server, and so forth
  • and if I do all those, the most important;
    • secure it so not just anyone can start|stop my environments at their whim

Overview

This blog post is the first, and executes the first part: implementing the script in an environment that is in a known state, aka implementing it as an Azure Function App. This won’t be a perfect implementation as you will see, but it will set the foundation for the other enhancements. Subsequent posts (as I make time to develop the enhancements) will add the new functionality. This post covers;

  • Creating the Azure Function App
  • Creating the foundation for automating management of Virtual Machines in Azure using Azure Function Apps
  • Starting | Stopping all Virtual Machines in an Azure Resource Group

Create a New Azure Function App

First up we are going to need a Function App. Through your Azure Resource Manager Portal create a new Function App.

For mine I’ve created a new Resource Group and a new Storage Account as this solution will flesh out over time and I’d like to keep everything organised.

Now that we have the Azure App Plan set up, create a new PowerShell HTTP Trigger Function App.

Give it a name and hit Create.


Create Deployment Credentials

In order to get some of the dependencies into the Azure Function we need to create deployment credentials so we can upload them. Head to the Function App Settings and choose Go to App Service Settings.

Create a login and give it a password. Record the FTP/Deployment username and the FTP hostname along with your password as you’ll need this in the next step.

Upload our PowerShell Modules and Dependencies

Just as my original PowerShell script did, I’m using the brilliant Invoke-Parallel PowerShell script from Rambling Cookie Monster. Download it from that link and save it to your local machine.

Connect to your Azure Function App with your favourite FTP client, using the credentials you created earlier. I’m using WinSCP. Create a new sub-directory under /site/wwwroot/ named “bin” as shown below.

Upload the Invoke-Parallel.ps1 file from wherever you extracted it to on your local machine to the bin folder you just created in the Function App.

We are also going to need the AzureRM PowerShell modules. Download those via PowerShell to your local machine (e.g. Save-Module -Name AzureRM -Path c:\temp\azurerm). There are a lot of modules obviously and you’re not going to need them all. At a minimum for this solution you’ll need;

  • AzureRM
  • AzureRM.profile
  • AzureRM.Compute

Upload them under the bin directory also as shown below.

Test that our script dependencies are accessible

Now that we have our dependent modules uploaded, let’s test that we can load and utilise them. Below are the commands to load the Invoke-Parallel script and test that it has loaded by getting the help.

# Load the Invoke-Parallel Powershell Script
. "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1"

# See if it is loaded by getting some output
Get-Help Invoke-Parallel -Full

Put those lines into the code section, hit Save and Run and select Logs to see the output. If successful you’ll see the help. If you don’t you probably have a problem with the path to where you put the Invoke-Parallel script. You can use the Kudu Console from the Function App Settings to get a command line and verify your path.

Mine worked successfully. Now to test that our AzureRM module loads. Update the function to load the AzureRM.Profile PSM as per below and test that you have your path correct.

# Import the AzureRM Powershell Module
import-module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1'
Get-Help AzureRM

Success. Fantastic.

Create an Azure Service Principal

In order to automate the access and control of the Azure Virtual Machines we are going to need to connect to Azure using a Service Principal with the necessary permissions to manage the Virtual Machines.

The following script does just that. You only need to run it as part of the setup for the Azure Function so that we have an account we can use for our automation tasks. Update line 6 for your naming and the password you want to use. I’m assigning the Service Principal the “DevTest Labs User” Azure Role (line 17) as that allows the ability to manage the Virtual Machines. You can find a list of the available roles here.
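
In essence it does something like this; the display name, URIs and password are placeholders, and in newer AzureRM versions -Password expects a SecureString:

# Login and select the subscription the VMs live in first (Login-AzureRmAccount)
$password = 'YourStrongPasswordHere'

# Create the AAD application and its Service Principal
$app = New-AzureRmADApplication -DisplayName 'VMAutomation' -HomePage 'https://vmautomation.local' `
    -IdentifierUris 'https://vmautomation.local' -Password $password
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Give it just enough rights to manage the Virtual Machines
Start-Sleep -Seconds 15 # allow the Service Principal to replicate before assigning the role
New-AzureRmRoleAssignment -RoleDefinitionName 'DevTest Labs User' -ServicePrincipalName $app.ApplicationId

# The two values to record for Application Settings
$app.ApplicationId
(Get-AzureRmContext).Tenant.Id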

Take note of the key outputs from this script. You will need to note the;

  • ApplicationID
  • TenantID

I’m also securing the credential that has the permissions to Start|Stop the Virtual Machines using the example detailed here in Tao’s post.

For reference, here is an example to generate the keyfile. Update your path in line 5 if required and make sure the password you supply in line 18 matches the password you supplied in the script (line 6) when creating the Service Principal.
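
The pattern is roughly as follows (paths are placeholders):

# Generate a random AES key and save it - this keyfile gets uploaded to the Function App
$keyPath = 'C:\temp\PassKey.key'
$aesKey = New-Object Byte[] 32
[Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($aesKey)
$aesKey | Out-File $keyPath

# Encrypt the Service Principal password with the key; the resulting string is what
# goes into the AzureAutomationPWD Application Setting
$securePwd = Read-Host 'Service Principal password' -AsSecureString
ConvertFrom-SecureString -SecureString $securePwd -Key (Get-Content $keyPath)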

Take note of the password encryption string from the end of the script to pair with the ApplicationID and TenantID from the previous steps. You’ll need these shortly in Application Settings.

Additional Dependencies

I created another sub-directory under the function app site named ‘keys’ again using WinSCP. Upload the passkey file created above into that directory.

Whilst we’re there I also created a “logs” directory for any erroneous output (aka logfiles created when you don’t specify them) from the invoke-parallel script.

Application Variables

Using the identity information you have created and generated, we will populate variables in the Function App’s Application Settings that we can then leverage in our Function App. Go to your Azure Function App, Application Settings and add an application setting (with the respective values you gathered in the previous steps) for;

  • AzureAutomationPWD
  • AzureAutomationAppID
  • AzureAutomationTennatID (bad speed typing there)

Don’t forget to click Save up the top of the Application Settings screen.


The Function App Script

Below is the sample script for your testing purposes. If you plan to use something similar in a production environment, you’ll want to add more logging and error handling.
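
In outline it looks something like this. Paths and module versions are indicative, v1 PowerShell HTTP functions expose the request and response as $req and $res, and the runspaces may also need -ImportVariables (or their own login) so they inherit the Azure context:

# Read the request body (mode and resourcegroup)
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$mode = $requestBody.mode
$resourceGroup = $requestBody.resourcegroup

# Load the dependencies uploaded to the bin folder
. "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1"
Import-Module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1'
Import-Module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.Compute\2.4.0\AzureRM.Compute.psm1'

# Decrypt the Service Principal credential from Application Settings plus the keyfile
$keyPath = 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\keys\PassKey.key'
$securePwd = $env:AzureAutomationPWD | ConvertTo-SecureString -Key (Get-Content $keyPath)
$credential = New-Object System.Management.Automation.PSCredential ($env:AzureAutomationAppID, $securePwd)

# Authenticate as the Service Principal
Login-AzureRmAccount -ServicePrincipal -TenantId $env:AzureAutomationTennatID -Credential $credential

# Start or stop all VMs in the Resource Group in parallel (30 second runspace timeout)
$vms = Get-AzureRmVM -ResourceGroupName $resourceGroup
if ($mode -eq 'start') {
    $vms | Invoke-Parallel -ImportVariables -RunspaceTimeout 30 -ScriptBlock {
        Start-AzureRmVM -Name $_.Name -ResourceGroupName $_.ResourceGroupName
    }
}
else {
    $vms | Invoke-Parallel -ImportVariables -RunspaceTimeout 30 -ScriptBlock {
        Stop-AzureRmVM -Name $_.Name -ResourceGroupName $_.ResourceGroupName -Force
    }
}

Out-File -Encoding Ascii -FilePath $res -InputObject "Completed: $mode $resourceGroup"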

Testing the Function

Select the Test option from the right-hand side pane and update the request body with what the Function takes (mode and resourcegroup) as below. Select Run and watch the logs. You will need to select Expand to get more screen real estate for them.

You will see the VMs enumerate and then the script starting them all up. My script has a 30 second timeout for the Invoke-Parallel runspace as the VMs will take longer than 30 seconds to start up. And you pay for use, so we want to keep this lean. Increase the timeout if you have more VMs or latency that doesn’t see all your VMs’ state transitioning.

Checking in the Azure Portal I can see my VMs all starting up (too fast on the screenshot for the spfarm-mim host).


Sample Remote PowerShell Invoke Script

Below is a sample PowerShell script that remotely calls the Azure Function, providing the info the Function takes (mode and resourcegroup), the same as we did in the Test request body in the Azure Function Portal. This time to stop the VMs.
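
Something like this (the Function URL, key and resource group name are placeholders):

# Call the Azure Function to stop all VMs in the Resource Group
$funcUri = 'https://myfunctionapp.azurewebsites.net/api/RG-Start-Stop-VirtualMachines?code=YOUR_FUNCTION_KEY'
$body = @{ mode = 'stop'; resourcegroup = 'MyResourceGroup' } | ConvertTo-Json
Invoke-RestMethod -Uri $funcUri -Method Post -Body $body -ContentType 'application/json'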

Looking in the Azure Portal and we can see all the VMs shutting down.

 

Summary

A foundational implementation of an Azure Function App to perform orchestration of Azure Virtual Machines.

The Function App is rudimentary in that the script exits (as described with the runspace timeout) after 30 seconds, which is prior to the VMs fully returning after starting|stopping. This is because the Function App would time out after 5 mins anyway.

Now to work out the enhancements to it.

Finally, yes I have renewed/changed the Function Key so no-one else can initiate my Function 🙂

Follow Darren Robinson on Twitter

Is Azure Functions over Web API Beneficial?

Whenever I meet clients and give a talk about Azure Functions, they are immediately interested in replacing their existing Web API features with Azure Functions. In this post, I’d like to discuss:

  • Can Azure Functions replace Web API?
  • Is it worth doing?

It would be a good idea to have a read through this article, Serverless Architectures, before starting.

HTTP Trigger Function == Web API Action

One of the characteristics of Serverless Architecture is being “event-driven”. In other words, all functions written in Azure Functions are triggered by events. Those events of course include HTTP requests. From this HTTP request point of view, both an HTTP trigger function and a Web API action work exactly the same way. Let’s compare the two code samples:
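
Roughly, the comparison is between something like these two (a simplified sketch; names are illustrative):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Azure.WebJobs.Host;

// A Web API action: take a request, process it, return a response
public class SampleController : ApiController
{
    [HttpGet]
    [Route("api/sample")]
    public IHttpActionResult Get()
    {
        return Ok("Hello World");
    }
}

// The equivalent Azure Functions HTTP trigger: same request-in, response-out shape
public static class SampleHttpTrigger
{
    public static Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        return Task.FromResult(req.CreateResponse(HttpStatusCode.OK, "Hello World"));
    }
}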

How do they look? Pretty similar to each other. Both take an HTTP request, process it and return a response. Therefore, with minor modification, it seems that a Web API can be easily migrated to Azure Functions.

HTTP Trigger Function != Web API Action

However, life is not easy. There are some major differences we should know before migration:

Functions are always static methods

Even though Azure Functions is an extension of Azure WebJobs, each function has a static modifier by design, unlike Azure WebJobs, which can be written without the static modifier.

Actions of Web API, by the way, don’t have the static modifier. This results in a significant architectural change during the migration, especially with dependency injection (DI). We will touch on this later.

Functions always receive an HttpRequestMessage instance as a parameter

Within the HTTP request/response pipeline, a Web API controller internally creates an HttpContext instance to handle data like headers, cookies, sessions, querystrings and the request body (of course querystrings and the request body can be handled in a different way). The HttpContext instance works as an internal property so any action can directly access it. As a result, each action only passes the necessary details as its parameters.

On the other hand, each function has a different HTTP request/response pipeline from Web API, which passes an HttpRequestMessage instance to the function as a parameter. The HttpRequestMessage instance only handles headers, querystrings and the request body. It doesn’t look after cookies or sessions. This is the huge difference between Web API and Azure Functions in terms of statelessness.

Functions define HTTP verbs and routes in function.json

In Web API, we put decorators like HttpGet, HttpPost, HttpPut, HttpPatch and HttpDelete on each action to declare which HTTP verbs take which action, combining them with the Route decorator.

On the other hand, each function has a definition of HTTP verbs and routes in function.json. With this definition, different functions having the same route URI can handle requests based on the HTTP verb.
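
For example, a function.json that lets one function handle GET and POST on a custom route looks like this (route and parameter names illustrative):

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get", "post" ],
      "route": "products/{id}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}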

Functions define base endpoint URI in host.json

Other than the host part of the URI, e.g. https://api.myservice.com, the base URI is usually defined at the controller level of Web API by adding the Route decorator. This is dead simple.

However, as there’s no controller in Azure Functions, it is defined in host.json. The default value is api, but we can remove it or redefine it by modifying host.json.
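
For instance, removing the prefix altogether looks like this in host.json:

{
  "http": {
    "routePrefix": ""
  }
}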

While function.json can be managed at the function level through the GUI or editor, unfortunately it’s not possible to edit host.json directly in the Function App. There’s a workaround using the Azure App Service Editor to modify host.json, by the way.

Functions should consider service locator pattern for dependency management

There are many good IoC container libraries for Web API to manage dependencies. However, as discussed in my previous post, Managing Dependencies in Azure Functions, the Service Locator Pattern should be considered for DI in Azure Functions, and actually this is the only way to deal with dependencies for now. This is because every Azure Function has the static modifier, which prevents us from using the same approach as in Web API.

We know different opinions exist out there against the service locator pattern for Azure Functions, but this is beyond our topic, so we will discuss it later in another post.

Is Azure Functions over Web API Beneficial?

So far, we have discussed what is the same and what is different between Web API and Azure Functions HTTP Triggers. Back to the initial question: is it really worth migrating Web API to Azure Functions? Does your situation fall under any of the below?

  • My Web API is designed for a microservices architecture: Then it’s good to go for migration to Azure Functions.
  • My Web API takes long to respond: Then consider Azure Functions using an empty instance in an App Service Plan because it costs nothing more. A Consumption Plan (or Dynamic Service Plan) would cost too much in this case.
  • My Web API is refactored to use queues: Then calculate the price carefully: not only the price for Azure Functions but also the price for Azure Service Bus Queue/Topic and Azure Storage Queue. In addition, check the number of executions, as each Web API call is refactored to one HTTP Trigger function plus at least one Queue Trigger function (two executions in total, at least). Based on the calculations, we can make a decision to stay or move.
  • My Web API needs a significant amount of effort for refactoring: Then it’s better to stay until it’s restructured and suitable for a microservices architecture.
  • My Web API is written in ASP.NET Core: Then stay there, do not even think of migration, until Azure Functions supports ASP.NET Core.

To sum up, unless your Web API requires a significant amount of refactoring or is written in ASP.NET Core, it surely is worth considering migration to Azure Functions: a much easier to use and more cost-effective home for your Web API.

Debugging Azure Functions in Our Local Box

Because of the nature of Azure Functions (Serverless Architecture), it’s a bit tricky to run it on a local machine for debugging purposes.

There is an approach to the issue in this post, Testing Azure Functions in Emulated Environment with ScriptCs. According to the article, we can use ScriptCs for local unit testing. However, the question of debugging still remains, because testing and debugging are different stories. Fortunately, Microsoft has recently released Visual Studio Tools for Azure Functions. It’s still at a preview stage, but worth having a look. In this post, I’m going to walk through how to debug Azure Functions within Visual Studio.

Azure Functions Project & Templates

After we install the tooling, we can create an Azure Functions project.

That gives us the same development experience. Pretty straightforward. Once we create the new project, we find nothing but a couple of .json files: appsettings.json and host.json. The appsettings.json is only used for our local development, not for production, to hook up the actual Azure Functions in the cloud. We are going to touch on this later in this post.

Now let’s create a function in C# code. Right-click on the project and add a new function.

Then we can see a list of templates to start with. We just select the HttpTrigger function in C#.

Now we have a fresh new function.

As we can see, we have a couple of other .json files for settings. function.json defines input and output, and project.json defines the list of NuGet packages to import, the same as .NET Core projects do.

That’s everything we’ve got now. How do we debug the function then? Let’s move on.

Debugging Functions – HTTP Trigger

Open the run.csx file. Set a break point wherever we want.

Now, it’s time for debugging! Just punch the F5 key. If we haven’t installed the Azure Functions CLI, we will be asked to install it.

We can manually install the CLI through npm by typing:
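
At the time of writing the CLI was published as the azure-functions-cli package (it was later renamed azure-functions-core-tools), so the install command is:

npm install -g azure-functions-cli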

Once the CLI is installed, a command prompt console opens. In the console, it shows various bits of useful information:

  • Functions in debugging mode only take port 7071. If another application running locally has already taken the port, the function would be in trouble.
  • The endpoint of the function is always http://localhost:7071/api/Function_Name, which is the same format as the actual Azure Functions in the cloud. We can’t change it.

Now it’s waiting for requests to come in. As it’s basically an HTTP request, we can send a request through our web browser, Postman or even curl.
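
For example, with curl (assuming the default HttpTriggerCSharp function name and a name field in the request body):

curl -X POST http://localhost:7071/api/HttpTriggerCSharp -H "Content-Type: application/json" -d "{ \"name\": \"Azure Functions\" }"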

When any of the requests above is executed, it hits the break point within Visual Studio.

And the CLI console prints out the log like:

Super cool, isn’t it? Now, let’s do a queue trigger function.

Debugging Functions – Queue Trigger

Let’s create another function called QueueTriggerCSharp. Unlike HTTP-triggered functions, this doesn’t have an endpoint URL (at least not publicly exposed) as it relies on Azure Storage Queue (or Azure Service Bus Queue). We have the Azure Storage Emulator that runs on our dev machine. With this emulator, we can debug our queue-triggered functions.

In order to run this function, we need to set up both appsettings.json and function.json.

Here’s the appsettings.json file. We simply assign the UseDevelopmentStorage=true value to AzureWebJobsStorage for now. Then open function.json and assign AzureWebJobsStorage to the connection key.
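
A minimal pair might look like the following (the queue name is illustrative):

appsettings.json:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}

function.json:

{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "myqueue-items",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}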

We’re all set. Hit the F5 key and see how it goes.

It seems nothing has happened. But when we look at the console, the queue-triggered function is certainly up and running. How can we pass a queue value then? We have to use the CLI command here. If the PATH environment variable doesn’t include func.exe, we have to use the full path to run it.
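
From memory the preview CLI exposed a run command along these lines; double-check the exact syntax with func help:

func run QueueTriggerCSharp --content "Hello Queue"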

Now we can see the break point hit in Visual Studio.

So far, we’ve briefly walked through how to debug Azure Functions in Visual Studio. There are, however, some known issues. One of them is that, due to the nature of the .csx format, IntelliSense doesn’t work as expected. Other than that, it works great! So, if your organisation has been hesitant about using Azure Functions due to the lack of a debugging feature, now is the time to play around with it!

How to use a Powershell Azure Function to Tweet IoT environment data

Overview

This blog post details how to use a PowerShell Azure Function App to get information from a REST API and send a social media update.

The data can come from anywhere, and in the case of this example I’m getting the data from WioLink IoT sensors. This builds upon my previous post here that details using PowerShell to get environmental information and put it in Power BI. Essentially the difference in this post is outputting the manipulated data to social media (Twitter) whilst still using a TimerTrigger PowerShell Azure Function App to perform the work and leverage the “serverless” Azure Functions model.

Prerequisites

The following are prerequisites for this solution;

Create a folder on your local machine for the PowerShell module, then save the module to your local machine using the PowerShell command ‘Save-Module’ as per below.

Save-Module -Name InvokeTwitterAPIs -Path c:\temp\twitter

Create a Function App Plan

If you don’t already have a Function App Plan, create one by searching for Function App in the Azure Management Portal. Give it a Name, select Consumption so you only pay for what you use, and select an appropriate location and Storage Account.

Create a Twitter App

Head over to http://dev.twitter.com and create a new Twitter App so you can interact with Twitter using their API. Give your Twitter App a name. Don’t worry about the URL too much or the need for the Callback URL. Select Create your Twitter Application.

Select the Keys and Access Tokens tab and take a note of the API Key and the API Secret. Select the Create my access token button.

Take a note of your Access Token and Access Token Secret. We’ll need these to interact with the Twitter API.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure PowerShell Function. For my app I’m changing from the default of a 5 min schedule to hourly on the top of the hour. I did this after I’d already created the Function App as shown below. To update the schedule I edited the function.json file and changed the schedule to “schedule”: “0 0 * * * *”.

Give your Function App a name and select Create.

Configure Azure Function App Application Settings

In your Azure Function App select “Configure app settings”. Create new App Settings for your Twitter Account, Twitter Account AccessToken, AccessTokenSecret, APIKey and APISecret using the values from when you created your Twitter App earlier.

Deployment Credentials

If you haven’t already configured Deployment Credentials for your Azure Function Plan, do that and take note of them so you can upload the Twitter PowerShell module to your app in the next step.

Take note of your Deployment Username and FTP Hostname.

Upload the Twitter Powershell Module to the Azure Function App

Create a sub-directory under your Function App named bin and upload the Twitter PowerShell module using an FTP client. I’m using WinSCP.

From the Application Settings option start Kudu.

Traverse the folder structure to get the path to the Twitter PowerShell module and note it.


Validating our Function App Environment

Update the code, replacing the sample from the creation of the Timer Trigger Azure Function, as shown below, to import the Twitter PowerShell module. Include the Get-Help lines for the module so we can see in the logs that the module was imported and the cmdlets it contains. Select Save and Run.
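
Something like the following; the MyTweetFunction path and module version are placeholders for whatever you noted in Kudu, and Invoke-TwitterRestMethod is the module’s main cmdlet as far as I recall, so check the Get-Command output:

# Import the InvokeTwitterAPIs module (adjust the path/version to your app)
Import-Module 'D:\home\site\wwwroot\MyTweetFunction\bin\InvokeTwitterAPIs\2.1\InvokeTwitterAPIs.psm1'

# Confirm it loaded by writing its cmdlets and help to the logs
Get-Command -Module InvokeTwitterAPIs
Get-Help Invoke-TwitterRestMethod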

Below is my output. I can see the output from the Twitter Module.

Function Application Script

Below is my sample script. It has no error handling etc. so it isn’t production ready, but it gives a working example of getting data in from an API (in this case IoT sensors) and sending a tweet out to Twitter.
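
In skeleton form it is along these lines; the sensor URL, the app setting names and the Invoke-TwitterRestMethod parameters are assumptions to verify against the module’s help:

# Import the Twitter module uploaded earlier (path/version as noted in Kudu)
Import-Module 'D:\home\site\wwwroot\MyTweetFunction\bin\InvokeTwitterAPIs\2.1\InvokeTwitterAPIs.psm1'

# Get the environment data from the IoT sensor REST API (placeholder URL/token)
$data = Invoke-RestMethod -Uri "https://us.wio.seeed.io/v1/node/GroveTempHumD0/temperature?access_token=$($env:WioAccessToken)"

# Build the OAuth settings from the Application Settings created earlier
$oauth = @{
    ApiKey            = $env:TwitterAPIKey
    ApiSecret         = $env:TwitterAPISecret
    AccessToken       = $env:TwitterAccessToken
    AccessTokenSecret = $env:TwitterAccessTokenSecret
}

# Compose and send the tweet
$status = "The temperature at my place is currently $($data.temperature) degrees"
Invoke-TwitterRestMethod -ResourceURL 'https://api.twitter.com/1.1/statuses/update.json' `
    -RestVerb 'POST' -Parameters @{ status = $status } -OAuthSettings $oauth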

Viewing the Tweet

And here is the successful tweet.

Summary

This shows how easy it is to utilise PowerShell and Azure Function Apps to get data and transform it for use in other ways; in this example, a social media platform. The input could easily be business data from an API and the output a corporate social platform such as Yammer.

Follow Darren on Twitter @darrenjrobinson

Azure Functions: Send email using SendGrid

siliconvalve

Prior to Azure Functions announcing their General Availability (GA) I had previously used SendGrid as an output binding in order to send email messages.

Since GA, however, the ability to use SendGrid remains undocumented (I assume to give the Functions team time to test and document the binding properly) and the old approach I was using no longer seems valid.

As I needed to use this feature, I spent some time digging into getting it working with the GA release of Azure Functions (version ~1). Thankfully, as Functions is an abstraction over WebJobs, I had plenty of information on how to do it right now, thanks to the WebJobs documentation and extensibility :).

Here’s how you can get this working too:

1. Register your SendGrid API key in Application Settings: you must utilise the documented approach of setting your API key in an App Setting called “AzureWebJobsSendGridApiKey”. Without this your…

View original post 151 more words

Managing Dependencies in Azure Functions

Just before the Connect(); event, Azure Functions became GA. Now more and more developers and IT pros are interested in it. One of the main attractions of Azure Functions is that, as developers using C# code, we can directly import existing private assemblies into Functions. In other words, we can easily migrate our existing applications to Azure Functions with minimal changes!

However, as we all know, migration is not that easy, especially if a huge architectural change is required. One of the biggest challenges for migration is probably “managing dependencies”. In this post, we are going to have a look at which approach would be suitable for us to migrate applications smoothly.

We can find the source code used in this post here: https://github.com/devkimchi/Azure-Functions-Dependency-Injections-Sample

Azure Function Structure

Let’s see the basic function structure. Here’s the function code we get when we create a manual trigger function:
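
It looks like this:

using System;

public static void Run(string input, TraceWriter log)
{
    log.Info($"C# manually triggered function called with input: {input}");
}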

What can we see here? That’s right. It’s a static method! That simply means our existing IoC container won’t fit here if we use an IoC container for dependency management. So, what can we do? We now need to consider a service locator pattern.

Autofac and CommonServiceLocator

If we use Autofac for our application’s dependency management, we need to create a service locator object using Autofac.Extras.CommonServiceLocator. Let’s see some code bits:
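
Something like this sketch, where IMyService/MyService are illustrative stand-ins for your own dependencies:

using Autofac;
using Autofac.Extras.CommonServiceLocator;
using Microsoft.Practices.ServiceLocation;

public interface IMyService
{
    string GetValue();
}

public class MyService : IMyService
{
    public string GetValue()
    {
        return "Hello World";
    }
}

public static class Locator
{
    public static void Initialise()
    {
        var builder = new ContainerBuilder();

        // Register dependencies exactly as we normally would with Autofac
        builder.RegisterType<MyService>().As<IMyService>();

        // Wrap the container and set it as the current service locator
        var locator = new AutofacServiceLocator(builder.Build());
        ServiceLocator.SetLocatorProvider(() => locator);
    }
}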

First of all, just as we normally do with Autofac, we add dependencies using builder.RegisterType(). Then we create an AutofacServiceLocator instance and set it as the current service locator. That’s it! Let’s create a function.

Azure Functions with Service Locator

We don’t need to follow the whole set of steps, but as a best practice that the Azure Functions team suggests, we’re putting private assemblies into the Shared folder using Kudu.

And create a .csx file within the Shared folder to create the service locator instance.

Now we have a static locator instance. We’re all set. Let’s create a function using this instance.
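
A sketch of the function, assuming the Shared file is called locator.csx (both the file name and the service interface are illustrative):

#load "..\Shared\locator.csx"

using Microsoft.Practices.ServiceLocation;

public static void Run(string input, TraceWriter log)
{
    // Resolve the dependency through the locator set up in the Shared folder
    Locator.Initialise();
    var service = ServiceLocator.Current.GetInstance<IMyService>();
    log.Info($"Value from the service: {service.GetValue()}");
}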

If we run this function, we get the result as expected.

This also works in an async function:

So far, we have had a brief look at how the service locator pattern works for our Azure Functions to manage dependencies. Migration to Azure Functions from our existing applications wouldn’t be that easy, of course. However, we can still use existing private assemblies as long as we introduce the service locator pattern for dependency management. The service locator pattern might be old-fashioned, but it surely works. There might be a better solution for managing dependencies. If you guys find anything better, let’s discuss it here!