Once you start publishing content to Azure Cloud Services it becomes increasingly critical to have insights into what is going on with your Web or Worker Roles without the need to manually connect to the hosts and inspect local logs.
Logging locally to file is an option but results in a couple of challenges: there is limited local persistent disk space on an Azure Role and local logging makes it hard to get an aggregated view of what’s happening across multiple Instances servicing a single Cloud Service.
In this post we’ll take a look at how we can use the built-in capabilities of log4net’s TraceAppender together with the standard Azure Diagnostics infrastructure.
Baseline
If you’ve been hunting for details on how to log with log4net on Azure you will no doubt have come across a lot of posts online (particularly on Stack Overflow) that claim logging isn’t really reliable and that there appear to be issues using log4net.
The good news is that those posts are out-of-date and the vanilla setup works quite happily.
Just for good measure the contents of this post were pulled together using:
- log4net [1.2.13] NuGet package
- Azure Diagnostics 2.4.0.0 as shipped in the Azure SDK 2.4
The Setup
You should add the two references above to the Web Application or other project you are planning on deploying to Azure. This solution should also have an Azure Cloud Service project added (this is the Azure deployment container for your actual solution).
You should open the primary configuration file for your solution (web.config or app.config) and add the following:
[code language="xml"]
<system.diagnostics>
<trace>
<listeners>
<add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.4.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
<filter type="" />
</add>
</listeners>
</trace>
</system.diagnostics>
[/code]
This will attach the Azure Diagnostics Trace Listener on deployment of your solution to an Azure Role Instance.
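Once the listener is attached, anything written via the standard System.Diagnostics tracing API will flow through to Azure Diagnostics, even before log4net enters the picture. A quick sanity-check sketch (class name is illustrative):

[code language="csharp"]
using System.Diagnostics;

public static class TraceSanityCheck
{
    public static void Run()
    {
        // With the DiagnosticMonitorTraceListener registered in config,
        // these calls are picked up by Azure Diagnostics and eventually
        // persisted to your diagnostics storage account.
        Trace.TraceInformation("Hello from my Role Instance");
        Trace.TraceError("Something went wrong!");
    }
}
[/code]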
You will also need to define the log4net configuration (you may already have this in your project) and ensure that you add a TraceAppender as shown in the example below.
[code language="xml"]
<log4net>
<appender name="TraceAppender" type="log4net.Appender.TraceAppender">
<layout type="log4net.Layout.PatternLayout">
<!-- can be any pattern you like -->
<conversionPattern value="%logger - %message" />
</layout>
</appender>
<!-- does not have to be at the root level -->
<root>
<level value="ALL" />
<appender-ref ref="TraceAppender" />
</root>
</log4net>
[/code]
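To round things out, here is a minimal sketch (class and method names are illustrative) of initialising log4net and writing log entries from role code. With the TraceAppender above in place, these messages are handed to System.Diagnostics tracing and from there to the Azure Diagnostics listener:

[code language="csharp"]
using log4net;
using log4net.Config;

public class LoggingDemo
{
    // One static logger per type is the usual log4net convention.
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingDemo));

    public void Initialise()
    {
        // Reads the <log4net> section from web.config / app.config.
        // Assumes the log4net section is registered under <configSections>.
        XmlConfigurator.Configure();

        Log.Info("Role instance initialised");
        Log.Error("Example error entry");
    }
}
[/code]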
So far so good… now for the key bit!
Tie it all together
The key thing to understand is the relationship between the Azure Diagnostics level settings and the log4net log level. If you have a mismatched level then there is a chance you will not get all the logging information you expect to see.
Azure Diagnostics
Firstly we need to define the level we wish to capture using the Azure Diagnostics. This can be done prior to deployment in Visual Studio, or it can be modified for an existing running Cloud Service using Server Explorer.
Prior to deployment (and useful for development debugging when using the local Azure emulator environment) you can use the properties of the Role in Visual Studio to set Diagnostics levels as shown below.
The above example is shown for local development use – you can have the build framework swap out the logging storage account location when deploying to Azure in order to use a proper Azure Storage account.
The important things here are that Diagnostics is enabled and that the level is set appropriately. If you get either of these wrong you will see no messages logged to Azure Diagnostics from log4net. My recommendation is to deploy and run the solution for a while and see what volume of logs you capture.
You can update this setting post-deployment so you have some freedom to tweak it without needing to redeploy your solution.
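For reference, the Visual Studio settings ultimately drive the Role’s diagnostics.wadcfg file. A minimal fragment (values are illustrative) that transfers all trace output every minute looks roughly like this:

[code language="xml"]
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
                                configurationChangePollInterval="PT1M"
                                overallQuotaInMB="4096">
  <!-- scheduledTransferLogLevelFilter is the Azure Diagnostics capture level -->
  <Logs bufferQuotaInMB="0"
        scheduledTransferLogLevelFilter="Verbose"
        scheduledTransferPeriod="PT1M" />
</DiagnosticMonitorConfiguration>
[/code]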
log4net
The only key setting with log4net is the level you define in your configuration. Unlike the Azure Diagnostics settings, changing this level does require a redeployment, so it’s worth experimenting with an appropriate level for your Azure environment prior to doing a “final” deployment.
But where are my logs?
Once deployed in Azure you will find a Table called ‘WADLogsTable’ is created in the specified Storage Account and your logging entries will begin showing up here. Individual Role Instances can be identified by the RoleInstance column in the table as shown below.
Note that this is not a real-time logging setup – the Diagnostics Trace Listener has some smarts baked into it to batch up log entries and write them to the table on a periodic basis.
You can view your log entries using a variety of tools: Visual Studio’s Server Explorer, or one of the many free storage browsing tools on the market such as Azure Storage Explorer.
You can store up to 500TB of data in an individual Storage Account (at time of writing) so you don’t need to worry about logging too much information – the challenge of course is then finding the key information you need. I’d recommend performing periodic maintenance on the table and purging older entries.
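As a starting point for that maintenance, here is a hedged sketch using the Azure Storage client library (class name and retention cut-off are illustrative) that deletes WADLogsTable entries older than a given date. It relies on the convention that the table’s PartitionKey is a zero-prefixed, zero-padded .NET tick count:

[code language="csharp"]
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class WadLogsPurger
{
    public static void PurgeOlderThan(string connectionString, DateTime cutoffUtc)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("WADLogsTable");

        // WADLogsTable partition keys are "0" + a 19-digit tick count, so a
        // lexicographic comparison selects everything before the cutoff.
        string cutoffKey = "0" + cutoffUtc.Ticks.ToString("d19");
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.LessThan, cutoffKey));

        foreach (var entity in table.ExecuteQuery(query))
        {
            table.Execute(TableOperation.Delete(entity));
        }
    }
}
[/code]

For large tables you would want to batch the deletes per partition rather than issue one operation per entity.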
So there you have it – how you can use standard log4net and the Azure Diagnostics tooling to redirect your existing log4net logging to Azure Diagnostics. Hope this helps!
I’ve noticed that the “Level” value is always 5, no matter what level I log at. Is there a way to fix this?
Phillip – I believe this is the solution to your query: http://stackoverflow.com/questions/11802663/log4net-traceappender-only-logs-messages-with-level-verbose-when-using-windows
Thank you Simon. I was confused since that stackoverflow post actually points to _this_ blog post, saying this has been fixed! I’ll give it a go.
True, but the link isn’t in the Answer so I’d recommend giving the Answer a whirl and see how you go :).
Thanks for the post. I have a quick question on the above.
After the errors/logs are pushed to the storage account, is there a way I can create an alarm/alert on top of the errors found, instead of manually checking the storage table with filters etc.?
Krishna – there is no native way today to achieve an alerting mechanism, but you could build a solution that achieves what you are looking for. You could use something like a worker role to periodically poll the logs, or alternatively place all log messages of a certain severity onto a Service Bus Queue that you attach a worker role to. The worker role would own the creation and delivery of any notifications.
It is very hard to get specific errors from WADLogsTable because filtering takes a long time. Is there any other way to get diagnostics information?
We have not configured any storage account or access keys related to storage.
So the question is: which storage account are all those logs going to be inserted into?
I have been following your steps to create logs. In my local environment I can create logs in my Dev storage, but when I deploy the same to Azure it’s not creating any logs. If you have faced the same issue, I request you to please help me with this.