The distributed world
The cloud revolution has revived the importance of distributed computing in today's enterprise market. Distributing compute and storage workloads across multiple decoupled resources helps organisations optimise their capital and operational expenditure.
While there are clear benefits to moving to the cloud, it's important to understand the ground rules of the platform. Running business-critical services on commodity hardware with a service SLA of three nines (99.9%) rather than five nines (99.999%) calls for some precautions. A key mitigation is to follow the application hosting recommendations published by the cloud platform provider.
Microsoft's published guidance on cloud development recommendations is a perfect cheat sheet.
Among the many design patterns and recommendations for designing and developing cloud applications, choosing the right approach to asynchronous communication between software services plays a key role in determining the reliability, scalability and efficiency of your application.
Why employ a Queue?
Queuing is an effective solution for enabling asynchronous communication between software services. The following are a few benefits of employing a queuing model:
- Minimal dependency on service availability – As queues act as a buffer between software components, the availability of one service does not impact another; they can function in a disconnected fashion.
- High reliability – Queues use transactions to manage the messages stored in them. In case of a failure, the transaction can be rolled back to recover the message.
- Load balancing – Queues can be used to balance work between software services. Microsoft recommends the Queue-Based Load Leveling pattern as an ideal implementation of this.
Microsoft Azure provides two queuing solutions which can be used to enable asynchronous communication between software services:
- Azure Storage Queues – an early Azure feature, part of the Azure Storage service, that offers REST-based, reliable, persistent messaging.
- Azure Service Bus Queues – introduced as part of the Azure Service Bus service to support additional features such as publish/subscribe and topics.
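To make the difference in programming models concrete, here is a minimal sketch of sending and receiving a single message with each offering. It assumes the Python SDKs (azure-storage-queue and azure-servicebus), a queue named orders that already exists in both services, and connection strings supplied through environment variables; the names are illustrative only.

```python
import os

from azure.storage.queue import QueueClient
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Azure Storage Queue: REST-based, simple pull model.
storage_queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"], queue_name="orders")
storage_queue.send_message("order-42")
for msg in storage_queue.receive_messages():
    print("storage queue:", msg.content)
    storage_queue.delete_message(msg)  # remove the message once it is processed

# Azure Service Bus Queue: brokered messaging with richer delivery semantics.
with ServiceBusClient.from_connection_string(
        os.environ["SERVICE_BUS_CONNECTION_STRING"]) as sb_client:
    with sb_client.get_queue_sender("orders") as sender:
        sender.send_messages(ServiceBusMessage("order-42"))
    with sb_client.get_queue_receiver("orders", max_wait_time=5) as receiver:
        for msg in receiver:
            print("service bus queue:", str(msg))
            receiver.complete_message(msg)  # settle the message so it is not redelivered
```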
Picking the right queuing technology plays a significant role in the efficiency of a distributed cloud application. In the rest of this post I will cover a few important factors you should consider when choosing one.
What is the size of messages being transferred?
The maximum message size supported by Azure Storage Queues is 64 KB, while Azure Service Bus Queues support messages up to 256 KB. This becomes an important factor especially when the message format is verbose (such as XML). An ideal pattern for transferring larger chunks of data is to use Azure Storage Blobs as a transient store: the data is stored as a blob and a link to the blob is passed to the consuming service via the queue.
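A rough sketch of that pattern (sometimes called the claim-check pattern), assuming the azure-storage-blob and azure-storage-queue Python SDKs plus a pre-created payloads container and work-items queue; all names are placeholders:

```python
import json
import os
import uuid

from azure.storage.blob import BlobServiceClient
from azure.storage.queue import QueueClient

conn_str = os.environ["STORAGE_CONNECTION_STRING"]
large_xml_payload = "<order>...</order>" * 10_000  # stand-in for a body larger than 64 KB

# Producer: park the oversized payload in blob storage...
blob_service = BlobServiceClient.from_connection_string(conn_str)
blob_name = f"payload-{uuid.uuid4()}.xml"
blob_service.get_blob_client(container="payloads", blob=blob_name).upload_blob(large_xml_payload)

# ...and send only a small reference through the queue (well under the 64 KB limit).
queue = QueueClient.from_connection_string(conn_str, queue_name="work-items")
queue.send_message(json.dumps({"container": "payloads", "blob": blob_name}))

# Consumer: read the reference, then fetch the actual payload from blob storage.
for msg in queue.receive_messages():
    ref = json.loads(msg.content)
    payload = blob_service.get_blob_client(ref["container"], ref["blob"]).download_blob().readall()
    queue.delete_message(msg)
```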
Does your ‘message consuming service’ go offline?
This is mostly applicable to batch processing systems which are designed to be dormant or offline periodically. In such a scenario the maximum size of the queue becomes an important factor when choosing a queuing technology. Azure Storage Queues can grow to a maximum size of 200 TB, while Azure Service Bus Queues can only hold a maximum of 80 GB of data.
Another factor which impacts the choice of technology is the message expiration duration. In the case of batch processing systems it is likely that messages are only consumed once every few days or weeks. The maximum message expiry period for Azure Storage Queues is 7 days, after which the messages cannot be recovered. For Azure Service Bus Queues the message expiry duration is unlimited.
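Both SDKs let the sender control expiry explicitly at send time; a small sketch, again assuming the Python SDKs and a placeholder batch-input queue:

```python
import os
from datetime import timedelta

from azure.storage.queue import QueueClient
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Storage Queue: time_to_live is given in seconds and is capped at 7 days.
storage_queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"], queue_name="batch-input")
storage_queue.send_message("nightly-item", time_to_live=7 * 24 * 3600)

# Service Bus Queue: time_to_live takes a timedelta and can be as long as the batch cycle needs.
with ServiceBusClient.from_connection_string(
        os.environ["SERVICE_BUS_CONNECTION_STRING"]) as client:
    with client.get_queue_sender("batch-input") as sender:
        sender.send_messages(
            ServiceBusMessage("monthly-item", time_to_live=timedelta(days=60)))
```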
Does the order of messages matter?
Although all queues are expected to follow FIFO (first in, first out) ordering, it is not guaranteed in the case of Azure Storage Queues. Azure Service Bus Queues, however, guarantee FIFO ordering of messages at all times.
Does your messaging infrastructure require auditing?
Server-side logging of queue operations is only supported on Azure Storage Queues. A custom implementation is required to capture queuing events if Azure Service Bus Queues are used.
What is the preferred programming model for your applications?
Messages in a queue can be consumed in one of two ways: push (publish/subscribe) or pull (polling). Azure Service Bus Queues support both push and pull models, while Azure Storage Queues support only the pull model.
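In practice this means a Storage Queue consumer has to poll. A sketch of a simple polling loop with back-off, assuming the azure-storage-queue Python SDK and a hypothetical process() handler:

```python
import os
import time

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"], queue_name="work-items")

idle_delay = 1  # seconds to sleep when the queue turns out to be empty

while True:
    handled = 0
    for msg in queue.receive_messages(messages_per_page=16):
        process(msg.content)       # hypothetical handler for the message body
        queue.delete_message(msg)  # delete only after successful processing
        handled += 1
    if handled == 0:
        time.sleep(idle_delay)
        idle_delay = min(idle_delay * 2, 60)  # back off while the queue stays empty
    else:
        idle_delay = 1
```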
Does your application require features like dead letter handling, grouping, scheduling, forwarding or support for transactions?
Azure Service Bus Queues support advanced features such as dead-letter queues, dead-letter events, message grouping, message forwarding, duplicate detection, at-most-once delivery and transactions. These features are not supported by Azure Storage Queues.
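A rough sketch of dead-letter handling with the azure-servicebus Python SDK (the orders queue and the is_malformed() check are placeholders):

```python
import os

from azure.servicebus import ServiceBusClient, ServiceBusSubQueue

conn_str = os.environ["SERVICE_BUS_CONNECTION_STRING"]

with ServiceBusClient.from_connection_string(conn_str) as client:
    # Normal consumer: explicitly dead-letter messages it cannot handle.
    with client.get_queue_receiver("orders", max_wait_time=5) as receiver:
        for msg in receiver:
            if is_malformed(msg):  # hypothetical validation
                receiver.dead_letter_message(
                    msg,
                    reason="validation-failed",
                    error_description="Body did not match the expected schema")
            else:
                receiver.complete_message(msg)

    # A separate worker drains the dead-letter sub-queue for inspection or repair.
    with client.get_queue_receiver(
            "orders", sub_queue=ServiceBusSubQueue.DEAD_LETTER, max_wait_time=5) as dlq:
        for msg in dlq:
            print("dead-lettered:", str(msg))
            dlq.complete_message(msg)
```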
Queue design patterns
Here are a few useful design patterns which can be used to leverage the potential of Azure queues in a distributed application hosted in the cloud (a small sketch illustrating the first two follows the list):
- Queue-Based Load Leveling Pattern
- Competing Consumers Pattern
- Pipes and Filters Pattern
- Priority Queue Pattern
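As a rough illustration of the first two patterns working together, the sketch below runs several competing consumers against one Storage Queue; the jobs queue, worker count and processing logic are all placeholder assumptions:

```python
import os
import threading
import time

from azure.storage.queue import QueueClient

CONN_STR = os.environ["STORAGE_CONNECTION_STRING"]

def worker(worker_id: int) -> None:
    """A competing consumer: each worker polls the same queue, and the
    visibility timeout ensures a message is handled by only one worker at a time."""
    queue = QueueClient.from_connection_string(CONN_STR, queue_name="jobs")
    while True:
        handled = 0
        for msg in queue.receive_messages(visibility_timeout=60):
            print(f"worker {worker_id} handling {msg.content}")
            queue.delete_message(msg)
            handled += 1
        if handled == 0:
            time.sleep(2)  # load levelling: workers simply idle when producers are quiet

# Scale out by adding more competing consumers.
for i in range(3):
    threading.Thread(target=worker, args=(i,), daemon=True).start()

time.sleep(30)  # keep the demo alive briefly; daemon threads exit with the main thread
```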
Detailed comparison
The following table compares the features of Azure Service Bus Queues and Azure Storage Queues in detail.
| Feature | Azure Service Bus Queues | Azure Storage Queues |
| --- | --- | --- |
| **Provisioning** | | |
| API support | Yes | Yes |
| PowerShell cmdlet support | Yes | Yes |
| Local (Australia) availability | Yes | Yes |
| **Security** | | |
| Encryption | No | No |
| Authentication | Symmetric key | Symmetric key |
| Access control | RBAC via ACS | Delegated access via SAS tokens |
| Auditing | No | Yes |
| Identity provider federation | Yes | No |
| **Scale** | | |
| Max queues per account | 10,000 (per service namespace, can be increased) | Unlimited |
| Max queue size | 1 GB to 80 GB | 200 TB |
| Max message size | 256 KB | 64 KB |
| Max message expiration duration | Unlimited | 7 days |
| Max concurrent connections | Unlimited | Unlimited |
| Max messages returned per call | 5,000 | – |
| **Poison messages** | | |
| Dead-letter handling | Yes | No |
| Dead-letter events | Yes | No |
| **Consumption patterns** | | |
| One-way messaging | Yes | Yes |
| Request-response | Yes | Yes |
| Broadcast messaging | Yes | No |
| Publish-subscribe | Yes | No |
| **Batch processing** | | |
| Message grouping | Yes | No |
| **Scheduling** | | |
| Message scheduling | Yes | No |
| **Transactions** | | |
| Transaction support | Yes | No |
| **Delivery** | | |
| Assured FIFO | Yes | No |
| Delivery guarantee | At-Least-Once, At-Most-Once | At-Least-Once |
| Receive behaviour | Blocking with/without timeout, non-blocking | Non-blocking |
| Receive mode | Peek & Lease, Receive & Delete | Peek & Lease |
| Lease/lock duration | 60 seconds (default) | 30 seconds (default) |
| Auto forwarding | Yes | No |
| Duplicate detection | Yes | No |
| Peek message | Yes | Yes |
| **Monitoring** | | |
| Server-side logs | No | Yes |
| Storage metrics | Yes | Yes |
| State management | Yes | No |
| **Management** | | |
| Purge queue | No | Yes |
| Management protocol | REST over HTTPS | REST over HTTP/HTTPS |
| Runtime protocol | REST over HTTPS | REST over HTTP/HTTPS |
| **Development** | | |
| .NET managed API | Yes | Yes |
| Native C++ API | No | Yes |
| Java API | Yes | Yes |
| PHP API | Yes | Yes |
| Node.js API | Yes | Yes |
| Queue naming rules | Yes | Yes |
| **Performance** | | |
| Maximum throughput | Up to 2,000 messages per second (benchmark with 1 KB messages) | Up to 2,000 messages per second (benchmark with 1 KB messages) |
| Average latency | 20–25 ms | 10 ms |
| Throttling behaviour | Reject with exception/HTTP 503 | Reject with HTTP 503 |
Useful links
Azure Queues and Service Bus Queues – compared and contrasted.