In our project I need to push messages to ActiveMQ and keep them persistent. When I send a new message and the memory limit is exceeded, the oldest message in the queue should be dropped/removed from the queue or replaced with the new one.
I do not want to clear the whole queue; the queue works as a fail-safe message backlog for our product, so I need to keep the last X messages in the queue.
I have tried searching on Google with no luck so far.
Here are my policy settings (XML):
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb">
        <messageEvictionStrategy>
          <oldestMessageEvictionStrategy/>
        </messageEvictionStrategy>
        <pendingMessageLimitStrategy>
          <constantPendingMessageLimitStrategy limit="100"/>
        </pendingMessageLimitStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
The eviction policy objects only apply to Topics; you cannot use them on Queues, because the service contract of a Queue is that it stores all messages until they are either consumed or their lifetime expires via a set TTL value. The broker can store Queue messages on disk and thereby remove them from memory, but for Topics the contract is looser, and the eviction policies allow messages that are in memory waiting to be dispatched to a Topic consumer to be dropped.
You can only control the lifetime of messages in the Queue via a TTL value.
You cannot remove persistent messages from disk unless they are deleted or consumed. You can enable producerFlowControl to throttle the producer so that the broker accepts a new message only after an old message has been consumed from the queue, or, as Tim suggested, set a TTL on the messages.
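For illustration, a TTL can also be enforced broker-side (so producers don't each have to set one) with the timeStampingBrokerPlugin in activemq.xml. This is a sketch only; the 300000 ms (5 minute) values are placeholders, not recommendations:

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <plugins>
    <!-- zeroExpirationOverride applies a TTL (in ms) to messages sent
         without one; ttlCeiling caps any TTL the producer did set. -->
    <timeStampingBrokerPlugin zeroExpirationOverride="300000" ttlCeiling="300000"/>
  </plugins>
</broker>
```

With this in place, even messages sent with no expiration will be expired and discarded by the broker after the configured interval, which bounds how far back the backlog can grow in time (though not to an exact message count).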
Related
I want to create an SQS queue in code whenever it is required to send messages, and delete it after all the messages are consumed.
I just wanted to know if there is some delay required between creating an SQS queue using Java code and then sending messages to it.
Thanks.
Virendra Agarwal
You'll have to try it and make observations. SQS is a distributed system, so there is a possibility that a queue might not be immediately usable, though I did not find a direct documentation reference for this.
Note the following:
If you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html
This means your names will always need to be different, but it also implies something about the internals of SQS -- deleting a queue is not an instantaneous process. The same might be true of creation, though that is not necessarily the case.
Also, there is no way to know with absolute certainty that a queue is truly empty. A long poll that returns no messages is a strong indication that there are no messages remaining, as long as there are also no messages in flight (consumed but not deleted; these will return to visibility if the consumer explicitly resets their visibility, or improperly handles an exception and the visibility timeout expires before the message is deleted).
However, GetQueueAttributes does not provide a fail-safe way of assuring that a queue is truly empty, because many of the counter attributes are the approximate number of messages (visible, in flight, etc.). Again, this is related to the distributed architecture of SQS. Certain rare, internal failures could potentially cause messages to be stranded internally, only to appear later. The significance of this depends on the importance of the messages and the life cycle of the queue, and the risks of any such issue seem -- to me -- increased when a queue does not have an indefinite lifetime (i.e. when the plan is to delete the queue once it is "empty"). This is not to imply that SQS is unreliable, only to make the point that any and all systems eventually behave unexpectedly, however rare or unlikely that may be.
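To make the caveat concrete, here is a sketch of a "probably empty" check built from the GetQueueAttributes counters. The attribute names are the real SQS ones, but the input here is just a plain map (as if parsed from a GetQueueAttributes response); note the method deliberately promises "probably", never "certainly":

```java
import java.util.Map;

public class QueueEmptyCheck {
    // Attribute names as returned by SQS GetQueueAttributes.
    static final String VISIBLE = "ApproximateNumberOfMessages";
    static final String IN_FLIGHT = "ApproximateNumberOfMessagesNotVisible";
    static final String DELAYED = "ApproximateNumberOfMessagesDelayed";

    // True only when all three approximate counters read zero.
    // Because the counters are approximate, this is a strong hint,
    // not a guarantee, that the queue is empty.
    static boolean probablyEmpty(Map<String, String> attrs) {
        return Long.parseLong(attrs.getOrDefault(VISIBLE, "0")) == 0
            && Long.parseLong(attrs.getOrDefault(IN_FLIGHT, "0")) == 0
            && Long.parseLong(attrs.getOrDefault(DELAYED, "0")) == 0;
    }
}
```

Checking all three counters matters: a queue with zero visible messages can still have messages in flight or delayed, and any of them may return to visibility later.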
The approximate maximum number of in-flight messages for an SQS Standard queue is 120,000. When this limit is reached, an OverLimit error is returned.
But no error is returned for FIFO queues in that case (the limit there being 20,000 in-flight messages).
Why is that so?
I don't think there's going to be an objective answer here, other than "it was an architectural decision."
The in-flight limit is something you should essentially never encounter -- it's only applicable to messages that have been delivered to consumers, not deleted, and not past visibility timeout.
The OverLimit error is only applicable to receiving messages -- not sending them. You can still send messages to either type of queue when it's in this state, you just can't receive them.
Presumably, FIFO treats this as an ordinary "no messages available" situation so that the consumer can continue long polling as normal rather than seeing an exception, which would increase the workload on the FIFO queue -- which has a 300 transactions per second limit that does not apply to non-FIFO queues. The 300 trx/sec limit covers any combination of send, receive, and/or delete, with each transaction batching up to 10 messages, and appears to be a limit related to the overhead required for coordinating exactly-once, in-order delivery. You would not want consumers that see exceptions to continuously retry, increasing the workload (and reducing the throughput) on the FIFO queue when something has already gone awry (as evidenced by 20K messages in flight).
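To put numbers on the budget just described: 300 batched transactions per second at up to 10 messages each gives a ceiling of 3,000 messages per second, and every empty or retried receive spends from that same budget. A trivial sketch of the arithmetic:

```java
public class FifoThroughput {
    // Ceiling = transactions/sec * messages per batched transaction.
    static int maxMessagesPerSecond(int txPerSecond, int batchSize) {
        return txPerSecond * batchSize;
    }

    public static void main(String[] args) {
        // 300 batched tx/sec * 10 messages/batch = 3000 msgs/sec ceiling,
        // shared across sends, receives, and deletes.
        System.out.println(maxMessagesPerSecond(300, 10)); // prints 3000
    }
}
```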
I want to create a simple publish/subscribe setup where my publisher keeps broadcasting messages whether there are 0, 1, or more subscribers, and subscribers come and go as they need and read the latest messages. I don't want older messages to be read by the subscribers. For example, if the publisher comes online and starts publishing, let's say it publishes 100 messages while there are currently no subscribers, I want those messages to disappear. If subscriber 1 then comes online and the 101st message is published, that will be the first message seen by subscriber 1. This appears to be how multicast MSMQ works, but the problem I am running into is that while my publisher is running, \System32\msmq\storage rapidly fills up with 4 MB files with auto-incremented names, in my case usually r000001a.mq, r000001b.mq, or similar.
I don't know how to manage how these files are created, there are no messages in my outgoing multicast queue, and these files show up whether or not I have any subscribers listening.
The only way I can clear these files is by restarting the message queuing service.
The code I'm using to publish these messages is:
using (var queue = new msmq.MessageQueue("FormatName:MULTICAST=234.1.1.2:8001"))
{
    var message = new msmq.Message();
    message.BodyStream = snsData.ToJsonStream();
    message.Label = snsData.GetMessageType();
    queue.Send(message);
}
Is there any way I can programmatically control how these .mq files get created? They rapidly use up the allowable queue storage.
Thank you,
R*.MQ files are used to store express messages. This is just for efficiency, not recovery, as they are purged on a service restart, as you are finding out. I would use Performance Monitor to find out which queue the messages are in -- they have to be in a queue somewhere. Once you know the queue, you can work backwards: if it's a custom queue, check your code; if it's a system queue, then that would be interesting.
Basically I want to create a data buffer that a client could occasionally subscribe to: get all data from the recent past, keep listening for real-time data, then unsubscribe after some time, and repeat.
I'm thinking of using a RabbitMQ queue with a TTL so that messages expire. The idea is for a client to occasionally subscribe to and unsubscribe from it. When the client subscribes to the queue, it should fetch all available messages on the queue, then stay on the channel to have real-time data pushed to it.
Is this a good way to go about this? I know how to pub/sub on RabbitMQ; how do I make it push all the data on the queue every time a client subscribes?
It depends on how much data you are talking about. The drawback to your method is that the queue could fill up with a large amount of data, if the data rate is high and the TTL is set for a long time. You also have to keep the queue alive. And you must have one queue alive from the start for every possible subscriber.
I would suggest the Recent History Exchange perhaps modifying it so that it holds more messages.
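To make the suggested behavior concrete without reproducing the exchange itself, here is a toy in-memory model of what a recent-history exchange does: a bounded buffer that replays the last N messages to each new subscriber and then streams live messages to it. This assumes nothing about the RabbitMQ API; it only illustrates the semantics:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Consumer;

// Toy model of a "recent history" exchange: keeps the last `capacity`
// messages and replays them to each subscriber at subscription time.
public class RecentHistoryBuffer {
    private final int capacity;
    private final Deque<String> history = new ArrayDeque<>();
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    public RecentHistoryBuffer(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void publish(String msg) {
        if (history.size() == capacity) {
            history.removeFirst();      // drop the oldest message
        }
        history.addLast(msg);
        for (Consumer<String> s : subscribers) {
            s.accept(msg);              // live delivery to current subscribers
        }
    }

    public synchronized void subscribe(Consumer<String> subscriber) {
        for (String msg : history) {
            subscriber.accept(msg);     // replay recent history first
        }
        subscribers.add(subscriber);    // then stream new messages
    }
}
```

The drawback noted above shows up here too: `capacity` bounds memory by message count, whereas a TTL bounds it by age; with a high data rate either can be the wrong knob, so pick the one that matches what "recent" means for your clients.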
I was wondering if there was a best practice for notifying the end of an sqs queue. I am spawning a bunch of generic workers to consume data from a queue and I want to notify them that they can stop processing once they detect no more messages in the queue. Does sqs provide this type of feature?
By looking at the right_aws Ruby gem source code for SQS, I found that there is an ApproximateNumberOfMessages attribute on a queue, which you can request using a standard API call.
You can find more information including examples here:
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/APIReference/Query_QueryGetQueueAttributes.html
For more information on how to do this using the right_aws gem in ruby look at:
https://github.com/rightscale/right_aws/blob/master/lib/sqs/right_sqs_gen2_interface.rb#L187
https://github.com/rightscale/right_aws/blob/master/lib/sqs/right_sqs_gen2_interface.rb#L389
Do you mean "is there a way for the producer to notify consumers that it has finished sending messages?" If so, then no, there isn't. If a consumer calls ReceiveMessage and gets nothing back, or ApproximateNumberOfMessages returns zero, that's not a guarantee that no more messages will be sent, or even that there are no messages in flight. And the producer can't send any kind of "end of stream" message, because only one consumer would receive it, and it might arrive out of order. Even if you used a separate notification mechanism, such as an SNS topic, to notify all consumers, there's no guarantee that the SNS notification won't arrive before all the messages have been delivered.
But if you just want your pool of workers to back off when there are no messages left in the queue, then consider setting the "ReceiveMessageWaitTimeSeconds" property on your queue to its maximum value of 20 seconds. When there are no more messages to process, a ReceiveMessage call will block for up to 20s to see if a message arrives instead of returning immediately.
You could have whatever's managing your thread pool query ApproximateNumberOfMessages regularly to scale your thread pool up/down if you're concerned about releasing resources. If you do, beware that the number you get back is approximate, and you should always assume there may be one or more messages left on the queue even if ApproximateNumberOfMessages returns zero.
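Putting the two suggestions together, a worker loop might look like the sketch below. `QueueClient` is a hypothetical interface standing in for your SQS client (its `receive` is assumed to be a 20-second long poll, per ReceiveMessageWaitTimeSeconds), and the stop-after-N-consecutive-empty-polls policy is my assumption, not an SQS feature:

```java
import java.util.Optional;

public class Worker {
    // Hypothetical stand-in for an SQS client; receive() is assumed to
    // long-poll (ReceiveMessageWaitTimeSeconds=20) and return at most one message.
    interface QueueClient {
        Optional<String> receive();
        void delete(String msg);
    }

    // Process messages until `maxEmptyPolls` consecutive long polls return
    // nothing, then exit. Returns how many messages were processed.
    static int drain(QueueClient queue, int maxEmptyPolls) {
        int processed = 0;
        int emptyPolls = 0;
        while (emptyPolls < maxEmptyPolls) {
            Optional<String> msg = queue.receive();
            if (msg.isPresent()) {
                emptyPolls = 0;
                // ... handle the message here ...
                queue.delete(msg.get());
                processed++;
            } else {
                emptyPolls++;   // the 20s long poll itself is the back-off
            }
        }
        return processed;
    }
}
```

Because the long poll blocks for up to 20 seconds, an idle worker makes very few API calls, and requiring several consecutive empty polls before exiting reduces the chance of shutting down while messages are merely in flight.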