I would really appreciate some help with resolving my issue.
I'm using RabbitMQ, and there are a lot of generated queues (with names like amq.gen-pMJVWygd3iLb_buXp1oUyw) which are durable and live forever.
The problem is that these queues are bound to the core.timeout exchange, but there is also a queue that is supposed to handle core.timeout.
So I'm stuck at this point and can't find where these queues are generated.
Based on your clarifications, the problem seems to be that your code is letting RabbitMQ create durable queues automatically when binding to an exchange.
Try debugging your MQ class to see where queue creation is being triggered for the core.timeout exchange.
Check the RabbitMQ docs for more info.
Hope this helps.
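As a rough illustration (a sketch with the raw RabbitMQ Java client, not taken from your code; only the exchange name comes from your description), the difference usually comes down to how the server-named queue is declared before it is bound:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class QueueDeclareDemo {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory(); // assumes a local broker
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {

                // Server-named, non-durable, exclusive, auto-delete queue:
                // it disappears when the connection closes.
                String tempQueue = channel.queueDeclare().getQueue();

                // Server-named but DURABLE, non-exclusive, non-auto-delete queue:
                // it also gets an amq.gen-* name, but it lives forever,
                // which matches the symptom you describe.
                String stickyQueue = channel.queueDeclare("", true, false, false, null).getQueue();

                channel.queueBind(tempQueue, "core.timeout", "");
                channel.queueBind(stickyQueue, "core.timeout", "");
            }
        }
    }

If your code (or a library you use) takes the second path when subscribing to core.timeout, you will accumulate durable amq.gen-* queues exactly like the ones you are seeing.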
Currently we're using Hangfire for scheduling and running long lived tasks. We need these tasks to be able to be retried in the event of an ungraceful shutdown, which Hangfire handles for us.
We're looking to try and move to a producer/consumer model and I've built a basic prototype with Masstransit and AWS SQS, but I have some concerns about how to handle the event of a task being processed during an ungraceful shutdown.
I understand that eventually the SQS visibility timeout will expire and the queued item will be picked up for processing again, but setting that timeout isn't trivial as the length of tasks can be quite varied and I'd prefer if the task could immediately resume/retry processing when the application starts up again.
I got to reading about Job Consumers and they seemed better suited to this type of scenario, but all the examples I've seen use RabbitMQ. I'm wondering whether it's possible/appropriate to do this using SQS, or if there's a better approach?
Thank you for taking the time to read this question :)
MassTransit will extend the visibility timeout as long as the consumer is still running.
I believe SQS has an upper-limit of something like 12 hours, but you should look it up and find out.
Job Consumers have significantly greater requirements (sagas, temporary queues, etc.) and SQS is really annoying about not having auto-delete/expiring queues, so I'd stick to a regular consumer if you can swing it.
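For what it's worth, the visibility-timeout extension MassTransit performs boils down to the plain SQS ChangeMessageVisibility call. A minimal sketch with the AWS SDK for Java v2 (the queue URL and timeout values here are made up, and this is an illustration of the mechanism, not MassTransit's actual code):

    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.ChangeMessageVisibilityRequest;
    import software.amazon.awssdk.services.sqs.model.Message;
    import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

    public class VisibilityExtensionDemo {
        public static void main(String[] args) {
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/long-tasks"; // hypothetical
            try (SqsClient sqs = SqsClient.create()) {
                ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .maxNumberOfMessages(1)
                        .build();
                for (Message message : sqs.receiveMessage(receive).messages()) {
                    // While the task is still running, push the visibility timeout out again
                    // so the message is not redelivered to another consumer.
                    sqs.changeMessageVisibility(ChangeMessageVisibilityRequest.builder()
                            .queueUrl(queueUrl)
                            .receiptHandle(message.receiptHandle())
                            .visibilityTimeout(300) // seconds; SQS caps the total extension at 12 hours
                            .build());
                }
            }
        }
    }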
I'm not clear on how to create a pull queue in GCP from an outside application. I've found documentation about pulling messages but not about creating queues.
Can somebody point me to some information about it?
Best Regards
Creating queues from outside of an AppEngine App is currently not available.
Queue management features are coming in the new Cloud Tasks API, which is available in Alpha. You can request to join the Alpha here
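For reference, once the Cloud Tasks API is available to you, creating a queue from outside App Engine looks roughly like the following sketch (assuming the google-cloud-tasks Java client; the project, location, and queue names are placeholders, and pull-specific settings depend on which API version you get access to):

    import com.google.cloud.tasks.v2.CloudTasksClient;
    import com.google.cloud.tasks.v2.LocationName;
    import com.google.cloud.tasks.v2.Queue;
    import com.google.cloud.tasks.v2.QueueName;

    public class CreateQueueDemo {
        public static void main(String[] args) throws Exception {
            try (CloudTasksClient client = CloudTasksClient.create()) {
                // Placeholders: replace with your own project, location, and queue id.
                LocationName parent = LocationName.of("my-project", "us-central1");
                Queue queue = Queue.newBuilder()
                        .setName(QueueName.of("my-project", "us-central1", "my-queue").toString())
                        .build();
                Queue created = client.createQueue(parent, queue);
                System.out.println("Created queue: " + created.getName());
            }
        }
    }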
I need to handle a time-consuming and error-prone task (e.g., invoking a SOAP endpoint that will trigger the delivery of an SMS) whenever a given endpoint of my REST API is invoked, but I'd prefer not to make my users wait for that before sending a response back. Spring AMQP is already part of my stack, so I thought about leveraging it to establish a "work queue" and have a number of worker processes consuming from the queue and taking care of the "work units". I have, however, the following requirements:
A work unit is guaranteed to be delivered, and delivered to exactly one worker.
Should a work unit fail to be completed for any reason, it must be placed back in the queue so that another worker can pick it up later.
Work units survive server reboots and crashes. This is mandatory because I won't be using a DB of any kind to store them.
I know RabbitMQ and Spring AMQP can be configured in such a way that ensures these three requirements, but I've only ever used it to achieve RPC so I don't know much about anything other than that. Is there any example I might follow? What are some of the pitfalls to watch out for?
When creating queues, RabbitMQ gives you two options: transient or durable. With a durable queue and persistent messages, a message stays on the queue until a consumer acknowledges it, and it won't expire unless you give the queue a TTL. For starters you can enable the rabbitmq management plugin and play around a little.
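Here is a minimal sketch of that setup with Spring AMQP (the queue and bean names are made up, not from your project): a durable queue, persistent messages, and a listener whose exceptions cause the message to be requeued so another worker can pick it up:

    import org.springframework.amqp.core.Queue;
    import org.springframework.amqp.core.QueueBuilder;
    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.stereotype.Component;

    @Configuration
    class WorkQueueConfig {

        // Durable queue: the queue itself survives broker restarts.
        @Bean
        Queue workQueue() {
            return QueueBuilder.durable("sms.work").build();
        }
    }

    @Component
    class SmsWorker {

        // Each message is delivered to exactly one listener. With the default container
        // settings, a thrown exception rejects the message and RabbitMQ requeues it,
        // so another worker can pick it up later.
        @RabbitListener(queues = "sms.work")
        public void handle(String workUnit) {
            invokeSoapEndpoint(workUnit); // hypothetical call that may fail
        }

        private void invokeSoapEndpoint(String workUnit) {
            // placeholder for the slow, error-prone SOAP call
        }
    }

    @Component
    class WorkSubmitter {

        private final RabbitTemplate rabbitTemplate;

        WorkSubmitter(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        void submit(String workUnit) {
            // Spring AMQP marks messages as persistent by default, so they survive
            // a broker restart as long as the queue itself is durable.
            rabbitTemplate.convertAndSend("sms.work", workUnit);
        }
    }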
But if you really want to guarantee the safety of your messages against hard resets or hardware problems, I guess you need to use a RabbitMQ cluster.
See RabbitMQ Clustering; you can find the high availability topic on the right side of the page.
This guy explains how to cluster.
By the way, I like beanstalkd too. You can make it write messages to disk, and they will be safe except for disk failures.
I have done a lot of searching and I am aware of grails-executor and the JMS plugin. I am looking for advice on the best way to implement a long-running (as long as the application is running) service that runs in the background and accepts input on a blocking queue. It seems that there are two ways to satisfy my requirements: 1. JMS (which feels overly heavy-handed), and 2. a service running on a thread that watches the queue; when something is added to it, it processes it and then waits for the next item. This service needs to have GORM capability so that it can create/save objects.

My preference is to start up some type of service on a thread and use a blocking queue. Can anyone suggest the best way to do this? Should I just implement a class that gets called when Grails bootstraps and have that class use grails-executor to create a thread that just runs in the background? If anyone has used the JMS plugin in Grails, is it sufficiently lightweight that I should reconsider my position on this? Any and all advice is greatly appreciated. I am really NOT tied to any one solution, so all ideas will be considered and very much appreciated.
Thanks in advance!
I use the quartz plugin for a lot of similar "queue watching" functionality.
You could use Spring Integration instead. With Quartz you have to develop your enqueuing logic yourself, but with Spring Integration everything is already built for you.
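A minimal sketch of what that looks like with Spring Integration's Java config (channel and handler names here are made up): a queue channel that buffers work items and a polled service activator that processes them on a background thread:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.Poller;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.integration.channel.QueueChannel;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.messaging.MessageHandler;

    @Configuration
    @EnableIntegration
    public class BackgroundQueueConfig {

        // In-memory queue channel that buffers incoming work items.
        @Bean
        public QueueChannel workChannel() {
            return new QueueChannel();
        }

        // Polls the channel and processes each item on a background thread.
        @Bean
        @ServiceActivator(inputChannel = "workChannel", poller = @Poller(fixedDelay = "1000"))
        public MessageHandler workHandler() {
            return message -> {
                // process the payload here, e.g. create/save domain objects
                System.out.println("Processing: " + message.getPayload());
            };
        }
    }

Anything that needs work done just sends a message to workChannel, and the poller picks it up for as long as the application is running.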
I am working on a Rails application where customer refunds are handed to a Sweatshop worker. If a refund fails (because we cannot reach the payment processor at that time) I want to requeue the job.
    class RefundWorker < Sweatshop::Worker
      def process_refund(job)
        # `refund` is assumed to be the call to the payment processor,
        # returning a truthy value on success
        if refund
          Transaction.find(job[:transaction]).update_attributes(:status => 'completed')
        else
          sleep 3
          RefundWorker.async_process_refund(job) # requeue the job
        end
      end
    end
Is there any better way to do this than the above? I haven't found any "delay" feature in RabbitMQ, and this is the best solution I've come up with so far. I want to avoid a busy loop while requeueing.
Have you looked at things like Ruote and Minion?
Some links here: http://delicious.com/alexisrichardson/rabbitmq+work+ruby
You could also try Celery which does not speak native Ruby but does speak HTTP+JSON.
All of the above work with RabbitMQ, so may help you.
Cheers
alexis
Have a timed-delivery service? You'd send the message to deliver as payload, wrapped up with a time-to-deliver, and the service would hold onto the message until the specified time had been reached. Nothing like that exists in the RabbitMQ server or any of the AMQP client libraries, as far as I'm aware, but it'd be a useful thing to have.
It doesn't seem like AMQP (or at least RabbitMQ) supports the idea of "delay this job." So the approach to re-queue the same job from inside the worker if it fails seems to be the best solution at this time.
I have the code working in a demo environment and it is meeting my needs so far.