Is there any Message Queue supporting messages based on priority?

I have a requirement where the consumer needs to consume the highest-priority messages from a queue first.
Has anyone worked with such an open source queue?
It would also be good if it supported batch fetching of messages.

I have implemented ActiveMQ, and it supports message priority for consumers on a queue, so it should fulfil your requirement. Check:
http://activemq.apache.org/how-can-i-support-priority-queues.html
http://www.christianposta.com/blog/?p=289
As far as batch message fetching is concerned, JMS doesn't have any method for fetching a batch of messages from a queue. You have to use a multi-threaded approach: run several consumers, retrieve the messages and group them on your side before delivering them to the client, or loop through the messages with one consumer.
For multi-threaded consumers, make sure to set the prefetch policy to 0 for the consumer, either on the connection or on the queue.
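For reference, here is a minimal JMS sketch of both points, assuming an ActiveMQ broker at tcp://localhost:61616 and a queue name chosen purely for illustration; note that, per the first link, the broker may also need prioritizedMessages enabled in its destination policy for priorities to be fully honoured.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PriorityQueueExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // hypothetical broker
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Producer side: send with JMS priority 9 (0 = lowest, 9 = highest, 4 = default).
        MessageProducer producer = session.createProducer(session.createQueue("ORDERS.QUEUE"));
        producer.send(session.createTextMessage("urgent order"), DeliveryMode.PERSISTENT, 9, 0);

        // Consumer side: the destination option sets prefetch to 0 for this consumer,
        // as recommended above when several competing consumers share the queue.
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("ORDERS.QUEUE?consumer.prefetchSize=0"));
        Message received = consumer.receive(1000); // wait up to one second

        connection.close();
    }
}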

Related

Jobs pushing to queue, but not processing

I am using AWS SQS and I am running into two issues.
Sometimes messages are present in the queue but I am not able to read them.
When I fetch, I get back an empty array, as if there were no messages in the queue at all.
When I delete a message from the queue like this:
sqs.delete_message({queue_url: queue_url, receipt_handle: receipt_handle})
=> Aws::EmptyStructure
and then check the queue in the AWS console, the message is still present even after I refresh the page more than 10 times.
Can you help me understand why this happens?
1. You may need to implement Long Polling.
SQS is a distributed system. By default, when you read from a queue, AWS returns a response from only a small subset of its servers. That's why you sometimes receive an empty array. This is known as Short Polling.
When you implement Long Polling, AWS waits until it has queried all of its servers before responding.
When you're calling ReceiveMessage API, set the parameter WaitTimeSeconds > 0.
2. Visibility Timeout may be too short.
The Visibility Timeout controls how long a message currently being read by one poller is invisible to other pollers. If the visibility timeout is too short, then other pollers may start reading the message before your first poller has processed and deleted it.
SQS allows multiple pollers to read the same message. From the docs:
The ReceiptHandle is associated with a specific instance of receiving a message. If you receive a message more than once, the ReceiptHandle is different each time you receive a message. When you use the DeleteMessage action, you must provide the most recently received ReceiptHandle for the message (otherwise, the request succeeds, but the message might not be deleted).
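As a rough sketch of both points using the AWS SDK for Java (v1; the queue URL below is hypothetical): WaitTimeSeconds turns the receive into a long poll, and the delete uses the receipt handle from that same receive.

import java.util.List;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class LongPollingConsumer {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // hypothetical

        // WaitTimeSeconds > 0 means long polling: SQS queries all of its servers
        // before answering, so spurious empty responses largely disappear.
        ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
                .withWaitTimeSeconds(20)
                .withMaxNumberOfMessages(10);

        List<Message> messages = sqs.receiveMessage(request).getMessages();
        for (Message m : messages) {
            // ... process the message ...
            // Delete with the receipt handle from *this* receive; an older handle may
            // appear to succeed without actually deleting the message.
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}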

msmq\storage keeps filling up with multicast queue

I want to create a simple publish/subscribe setup where my publisher keeps broadcasting messages whether there are 0, 1 or more subscribers, and subscribers come and go as they need to and read the latest messages. I don't want older messages to be read by the subscribers. For example, if the publisher comes online and starts publishing, let's say it publishes 100 messages while there are currently no subscribers, I want those messages to disappear. If subscriber 1 then comes online and the 101st message is published, that will be the first message seen by subscriber 1. This appears to be how multicast MSMQ works, but the problem I am running into is that while my publisher is running, \System32\msmq\storage rapidly fills up with 4 MB files. They have auto-incremented names, in my case usually r000001a.mq, r000001b.mq, or something similar.
I don't know how to manage how these files are created, there are no messages in my outgoing multicast queue, and these files show up whether or not I have any subscribers listening.
The only way I can clear these files is by restarting the message queuing service.
The code I'm using to publish these messages is:
using (var queue = new msmq.MessageQueue("FormatName:MULTICAST=234.1.1.2:8001"))
{
    var message = new msmq.Message();
    message.BodyStream = snsData.ToJsonStream();
    message.Label = snsData.GetMessageType();
    queue.Send(message);
}
Is there any way I can programmatically control how these .mq files get created? They will rapidly use up the allowed queue storage.
Thank you,
R*.MQ files are used to store express messages. It's just for efficiency, not recovery, as they are purged on a service restart as you are finding out. I would use Performance Monitor to find out which queue the messages are in - they have to be in a queue somewhere. Once you know the queue, you can work backwards - if it's a custom queue, check your code; if it's a system queue, then that would be interesting.

ActiveMQ: start dropping messages from a queue after the memory limit is exceeded

In our project I need to push messages to ActiveMQ and keep them persistent. When I send a new message and the memory limit is exceeded, the oldest message in the queue should be dropped/removed from the queue or replaced with the new one.
I do not want to clear the whole queue; the queue works as a fail-safe message backlog for our product, so I need to keep the last X messages in the queue.
I have searched Google and had no luck so far.
Here are my policy settings (XML):
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb">
        <messageEvictionStrategy>
          <oldestMessageEvictionStrategy/>
        </messageEvictionStrategy>
        <pendingMessageLimitStrategy>
          <constantPendingMessageLimitStrategy limit="100"/>
        </pendingMessageLimitStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</policyMap>
</destinationPolicy>
The eviction policy objects only apply to Topics; you cannot use them on Queues, because the service contract of a Queue is that it stores all messages until they are either consumed or their lifetime expires via a set TTL value. The broker can store Queue messages on disk and thereby remove them from memory, but for Topics the contract is looser, and the eviction policies allow messages that are sitting in memory waiting to be dispatched to a Topic consumer to be dropped.
You can only control the lifetime of messages in the Queue via a TTL value.
You cannot remove persistent messages from disk unless you delete or consume them. You can enable producerFlowControl to throttle the producer so that it only accepts a new message after an old message is consumed from the queue, or, as Tim suggested, set a TTL on the messages.
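As a minimal sketch of the TTL approach (plain JMS against a hypothetical local broker and queue name), the producer-side time-to-live bounds the backlog by age rather than by message count:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BacklogProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // hypothetical broker
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        MessageProducer producer = session.createProducer(session.createQueue("BACKLOG.QUEUE"));
        // Every message sent by this producer expires after 30 minutes; the broker
        // removes expired messages from the queue (and from its store) on its own.
        producer.setTimeToLive(30 * 60 * 1000L);
        producer.send(session.createTextMessage("backlog entry"));

        connection.close();
    }
}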

Looking to implement a write timeout when there is a delay in writing a message to a queue

We are working on a billing invoice system. As part of processing a request, we need to make an asynchronous call by placing a message on a queue. We run at 20 TPS and have an SLA of 12 seconds for the entire transaction. Occasionally, we have observed that when the MQ server becomes very slow but is still operational, just writing the message to the queue takes a long time. We want to handle this scenario and have the system throw an exception when writing the message to the queue exceeds a predefined limit.
In simple words, we want to implement a write timeout when there is a delay in writing a message to the queue. Any help is appreciated.
We are aware of how to specify a timeout for receiving a response, but we cannot find any way to specify a timeout while writing a message to the queue.
We have found some suggestions about revalidating the destination, but in our case we already know the destination is operational; our system only becomes slow during the response.
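One generic, client-side option (not a broker feature) is to hand the blocking send to a worker thread and give up after a deadline. A rough JMS sketch, with the caveat that the underlying write may still complete in the background after the timeout fires:

import java.util.concurrent.*;
import javax.jms.Message;
import javax.jms.MessageProducer;

public class TimedSender {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Throws if the send has not returned within timeoutMillis.
    public void sendWithTimeout(MessageProducer producer, Message message, long timeoutMillis)
            throws Exception {
        Future<?> future = executor.submit(() -> {
            producer.send(message); // the actual blocking write to the queue
            return null;
        });
        try {
            future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // best effort; the send itself may still go through
            throw new RuntimeException("Writing to the queue exceeded " + timeoutMillis + " ms", e);
        }
    }
}

Because the message may or may not have reached the queue when the timeout fires, the caller should treat a timeout as "outcome unknown" and compensate accordingly (retry with an idempotent message, or flag the transaction for follow-up).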

Amazon SQS End of Queue Detection

I was wondering if there was a best practice for notifying the end of an sqs queue. I am spawning a bunch of generic workers to consume data from a queue and I want to notify them that they can stop processing once they detect no more messages in the queue. Does sqs provide this type of feature?
By looking at the right_aws Ruby gem source code for SQS, I found that there is an ApproximateNumberOfMessages attribute on a queue, which you can request using a standard API call.
You can find more information including examples here:
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/APIReference/Query_QueryGetQueueAttributes.html
For more information on how to do this using the right_aws gem in ruby look at:
https://github.com/rightscale/right_aws/blob/master/lib/sqs/right_sqs_gen2_interface.rb#L187
https://github.com/rightscale/right_aws/blob/master/lib/sqs/right_sqs_gen2_interface.rb#L389
Do you mean "is there a way for the producer to notify consumers that it has finished sending messages?" If so, then no, there isn't. If a consumer calls "ReceiveMessage" and gets nothing back, or "ApproximateNumberOfMessages" returns zero, that's not a guarantee that no more messages will be sent or even that there are no messages in flight. And the producer can't send any kind of "end of stream" message because only one consumer will receive it, and it might arrive out of order. Even if you used a separate notification mechanism such as an SNS topic to notify all consumers, there's no guarantee that the SNS notification won't arrive before all the messages have been delivered.
But if you just want your pool of workers to back off when there are no messages left in the queue, then consider setting the "ReceiveMessageWaitTimeSeconds" property on your queue to its maximum value of 20 seconds. When there are no more messages to process, a ReceiveMessage call will block for up to 20s to see if a message arrives instead of returning immediately.
You could have whatever's managing your thread pool query ApproximateNumberOfMessages to regularly scale/up down your thread pool if you're concerned about releasing resources. If you do, then beware that the number you get back is Approximate, and you should always assume there may be one or more messages left on the queue even if ApproximateNumberOfMessages returns zero.
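For the scale-down decision, a small sketch using the AWS SDK for Java (v1; the queue URL is hypothetical) that reads ApproximateNumberOfMessages; remember the value is approximate, so treat zero as "probably idle" rather than "definitely empty":

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.GetQueueAttributesRequest;

public class QueueDepthCheck {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"; // hypothetical

        GetQueueAttributesRequest request = new GetQueueAttributesRequest(queueUrl)
                .withAttributeNames("ApproximateNumberOfMessages");
        String depth = sqs.getQueueAttributes(request).getAttributes()
                .get("ApproximateNumberOfMessages");

        // Keep at least one long-polling worker alive even when this reports zero,
        // since messages may still be in flight or not yet visible.
        System.out.println("Approximate queue depth: " + depth);
    }
}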
