I have a scenario where I need to detach a RabbitMQ consumer from the exchange on a specific event, so that it stops consuming messages from the queue, and then reattach the same consumer and start consuming again. I'm using Spring AMQP 1.6.
So far, I have read a few pages on the internet e.g. http://www.programcreek.com/java-api-examples/index.php?api=org.springframework.amqp.rabbit.core.RabbitAdmin
and http://docs.spring.io/spring-amqp/docs/1.6.0.RC1/reference/htmlsingle/#idle-containers and many others, but couldn't quite work out how to make it work.
You can simply call container.stop() to stop the consumer; start() will restart it.
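A minimal sketch of what that looks like, assuming the listener is wired through a `SimpleMessageListenerContainer` (the event-handler method names here are made up for illustration):

```java
// Sketch: pause/resume a Spring AMQP consumer by stopping and restarting
// its listener container. Assumes a SimpleMessageListenerContainer bean
// already bound to the queue; onDetachEvent/onReattachEvent are
// hypothetical hooks for whatever triggers the detach.
@Autowired
private SimpleMessageListenerContainer container;

public void onDetachEvent() {
    container.stop();   // cancels the consumers; messages pile up in the queue
}

public void onReattachEvent() {
    container.start();  // re-subscribes the consumers; delivery resumes
}
```

While the container is stopped, messages simply accumulate in the queue on the broker side, so nothing is lost between the detach and the reattach.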
Related
I need to handle a time-consuming and error-prone task (e.g., invoking a SOAP endpoint that will trigger the delivery of an SMS) whenever a given endpoint of my REST API is invoked, but I'd prefer not to make my users wait for that before sending a response back. Spring AMQP is already part of my stack, so I thought about leveraging it to establish a "work queue" and have a number of worker processes consuming from the queue and taking care of the "work units". I have, however, the following requirements:
A work unit is guaranteed to be delivered, and delivered to exactly one worker.
Should a work unit fail to complete for any reason, it must be placed back in the queue so that another worker can pick it up later.
Work units survive server reboots and crashes. This is mandatory because I won't be using a DB of any kind to store them.
I know RabbitMQ and Spring AMQP can be configured in such a way that ensures these three requirements, but I've only ever used it to achieve RPC so I don't know much about anything other than that. Is there any example I might follow? What are some of the pitfalls to watch out for?
When creating a queue, RabbitMQ gives you two options: transient or durable. A durable queue survives a broker restart, but to make the messages themselves survive you must also publish them as persistent (delivery mode 2). A message that a consumer fails to acknowledge is returned to the queue and redelivered, and messages won't expire as long as you don't give the queue a TTL. For starters you can enable the rabbitmq management plugin and play around a little.
But if you really want to guarantee the safety of your messages against hard resets or hardware problems, I guess you need to use a RabbitMQ cluster.
See RabbitMQ Clustering; you can find the high availability topic on the right side of that page.
This guy explains how to cluster.
By the way, I like beanstalkd too. You can make it write messages to disk, and they will be safe except for disk failures.
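Concretely, the three requirements map onto a durable queue, persistent messages, and manual acknowledgements. A sketch with the plain RabbitMQ Java client (the queue name and the `process()` handler are placeholders; Spring AMQP exposes the same settings through its `Queue` bean, `MessageDeliveryMode.PERSISTENT`, and its acknowledge modes):

```java
import com.rabbitmq.client.*;

// Sketch: a durable work queue where each unit goes to exactly one worker,
// failed units are requeued, and messages survive a broker restart.
ConnectionFactory factory = new ConnectionFactory();
Connection conn = factory.newConnection();
final Channel ch = conn.createChannel();

// durable=true: the queue definition survives a broker restart
ch.queueDeclare("work", true, false, false, null);

// Producer side: delivery mode 2 (persistent) writes the message to disk
ch.basicPublish("", "work", MessageProperties.PERSISTENT_TEXT_PLAIN,
                "work unit".getBytes("UTF-8"));

// Worker side: prefetch 1, so each worker holds at most one unacked unit
ch.basicQos(1);
ch.basicConsume("work", false, new DefaultConsumer(ch) {
    @Override
    public void handleDelivery(String tag, Envelope env,
                               AMQP.BasicProperties props, byte[] body)
            throws java.io.IOException {
        try {
            process(body);                                    // hypothetical work-unit handler
            ch.basicAck(env.getDeliveryTag(), false);         // done: remove from queue
        } catch (Exception e) {
            ch.basicNack(env.getDeliveryTag(), false, true);  // failed: requeue for another worker
        }
    }
});
```

The main pitfall is forgetting one of the three: a durable queue with transient messages loses the messages on restart, and auto-ack loses any unit whose worker dies mid-task.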
I have a system that wraps RabbitMQ using Erlang and the Erlang client. We have the occasional situation where a subscriber goes down and messages queue up. We will be implementing a dead-letter queue in the near future, but I would like to implement a tool in the meantime to bind to a given queue and PULL all messages. I can then push them off somewhere else and replay them when the subscriber comes back online. However, I am having a hard time determining the best way to do this with the Rabbit tutorials/docs, mainly because the tutorials are a bit lacking for Erlang clients.
Does anybody have experience with this or something similar?
I think the best thing to do is declare the queue so it does not auto-delete. That way the queue will stay alive when the subscriber goes down. The exchange will continue to push messages to the queue, which will store them until the subscriber comes back up and starts reading again.
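If you do still want to drain the queue in the meantime, basic.get gives you a pull (rather than push) interface. The loop below is sketched with the Java client for brevity; the Erlang amqp_client does the same thing with a `#'basic.get'{}` record, and `stash()` is a placeholder for wherever you park the messages for replay:

```java
// Sketch: pull every ready message off a queue with basic.get.
// autoAck=false, so a message is only removed once it is safely stashed.
GetResponse resp;
while ((resp = ch.basicGet("stuck-queue", false)) != null) {
    stash(resp.getBody());                                 // park it for later replay
    ch.basicAck(resp.getEnvelope().getDeliveryTag(), false);
}
```

`basicGet` returns null when the queue has no ready messages, which terminates the drain cleanly.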
I'm still kind of new to the erlang/otp world, so I guess this is a pretty basic question. Nevertheless I'd like to know what's the correct way of doing the following.
Currently, I have an application with a top supervisor. The latter will supervise workers that call gen_tcp:accept (sleeping on it) and then spawn a process for each accepted connection. Note: To this question, it is irrelevant where the listen() is done.
My question is about the correct way of making these workers (the ones that sleep on gen_tcp:accept) respect the otp design principles, in such a way that they can handle system messages (to handle shutdown, trace, etc), according to what I've read here: http://www.erlang.org/doc/design_principles/spec_proc.html
So,
Is it possible to use one of the available behaviors like gen_fsm or gen_server for this? My guess would be no, because of the blocking call to gen_tcp:accept/1. Is it still possible to do it by specifying an accept timeout? If so, where should I put the accept() call?
Or should I code it from scratch (i.e. not using an existing behaviour), like the examples in the above link? In this case, I thought about a main loop that calls gen_tcp:accept/2 instead of gen_tcp:accept/1 (i.e. specifying a timeout), immediately followed by a receive block, so I can process the system messages. Is this correct/acceptable?
Thanks in advance :)
As Erlang is event driven, it is awkward to deal with code that blocks as accept/{1,2} does.
Personally, I would have a supervisor which has a gen_server for the listener, and another supervisor for the accept workers.
Hand-roll an accept worker that times out (gen_tcp:accept/2), effectively polling (the awkward part), rather than receiving a message for status.
This way, if a worker dies, it gets restarted by the supervisor above it.
If the listener dies, it restarts, but not before restarting the worker tree and supervisor that depended on that listener.
Of course, if the top supervisor dies, it gets restarted.
However, if you call supervisor:terminate_child/2 on that subtree, then you can effectively disable the listener and all acceptors for that socket. Later, supervisor:restart_child/2 can restart the whole listener+acceptor worker pool.
If you want an app to manage this for you, cowboy implements the above. Although HTTP-oriented, it easily supports a custom handler for whatever protocol is to be used instead.
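The timeout-polling accept loop described above has the same shape in any language. A minimal self-contained sketch, written in Java with java.net purely to illustrate the polling structure (the Erlang worker would call gen_tcp:accept/2 and then run a receive block for system messages; the shutdown flag here plays the role of that receive):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

// Accept with a timeout, then check for control messages, then loop.
// A worker built this way never blocks indefinitely in accept(), so it
// can always react to a shutdown request between polls.
class AcceptWorker implements Runnable {
    private final ServerSocket listener;
    private final AtomicBoolean shutdown = new AtomicBoolean(false);

    AcceptWorker(ServerSocket listener) throws IOException {
        this.listener = listener;
        listener.setSoTimeout(100);   // accept() returns at least every 100 ms
    }

    void stop() { shutdown.set(true); }   // the "system message"

    @Override
    public void run() {
        while (!shutdown.get()) {
            try (Socket conn = listener.accept()) {
                handle(conn);             // hand off the accepted connection
            } catch (SocketTimeoutException e) {
                // nobody connected: fall through and re-check the shutdown flag
            } catch (IOException e) {
                return;                   // listener closed: let a supervisor restart us
            }
        }
    }

    void handle(Socket conn) { /* per-connection work goes here */ }
}
```

The 100 ms timeout is the trade-off knob: shorter means faster reaction to control messages, at the cost of more wakeups.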
I've actually found the answer in another question: Non-blocking TCP server using OTP principles and here http://20bits.com/article/erlang-a-generalized-tcp-server
EDIT: The specific answer that was helpful to me was: https://stackoverflow.com/a/6513913/727142
You can make it as a gen_server similar to this one: https://github.com/alinpopa/qerl/blob/master/src/qerl_conn_listener.erl.
As you can see, this process does the TCP accept and also processes other messages (e.g. stop(Pid) -> gen_server:cast(Pid,{close}).)
HTH,
Alin
I am struggling to work out how I can communicate between RabbitMQ and em-websocket.
I want to place a message from a Ruby on Rails web page on a queue and have the queue handler process the message even if the browser is closed down. If the browser stays open, I want the queue handler to pass its results back to the browser as JSON.
I did find this, but the GitHub page says it is deprecated. Can anyone point me in the right direction?
From what I can gather, you've got a RabbitMQ queue, a way to add items to that queue, something to process items that get added to that queue, and you basically want to notify the browser of progress on that queue.
There are two main ways that you could approach this:
As the final action of the queue processor, publish the item/message via a messaging bus to an instance of em-websocket that's listening on that message bus.
If you can add features to RabbitMQ, then you could do the publish from within RabbitMQ, as a post-processed hook or something like that. (note, I don't know enough about RabbitMQ to say you can definitely do this).
As an alternative to #1, you could use Pusher.com or a similar service to offload the handling of the WebSocket connections. Then, from within your queue processor, you would call that service's API to publish.
In the case of using Pusher, if you publish to a channel/socket that no longer has any connections, the message just gets discarded.
Hopefully this helps. Let me know if you want any help in setting up a basic em-websocket server.
I have used Erlang for the past five months and I have liked it. Now it is time for me to write a concurrent application that will interact with the YAWS web server and the Mnesia DBMS, and run on a distributed system. Can anyone help me with a sketchy draft in Erlang?
I mean the application should have both a server end and a client end. The server should accept subscriptions from clients, forward notifications from event processes to each of the subscribers, accept messages to add events (starting the needed processes), and accept messages to cancel an event (subsequently killing the event process). The client should be able to ask the server to add an event with all its details, ask the server to cancel an event, monitor the server (to know if it goes down), and shut down the event server if needed. The events requested from the server should contain a deadline.
Spend some time browsing GitHub; you can find projects matching your description:
http://www.google.ca/search?hl=en&biw=1405&bih=653&q=site%3Agithub.com+erlang+yaws+mnesia&aq=f&aqi=&aql=&oq=