I have two instances of a cowboy server running which are connected to RabbitMQ. I am using gen_bunny as the RabbitMQ client.
I can consume messages from RabbitMQ using bunnyc:consume(). However, I have to call that function explicitly. What I want is to bind an event in cowboy so that as soon as there is a message in the queue, cowboy is notified automatically.
Is this possible using gen_bunny or another Erlang client?
I don't know about gen_bunny, but with the official Erlang client you can subscribe to a queue (see http://www.rabbitmq.com/erlang-client-user-guide.html, the "Subscribing To Queues" section).
As far as I understand, you need to send messages from the queue to clients over WebSockets. So you need to subscribe to the queue in the process that communicates with the client, and receive messages either in a `receive ... end` block or in handle_info (depending on the implementation).
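To make this concrete, here is a minimal sketch using the official amqp_client library. The module name, queue name, and default connection parameters are placeholders, and error handling is omitted; the broker is assumed to be running locally with default credentials.

```erlang
-module(queue_subscriber).
-export([subscribe_and_loop/1]).

-include_lib("amqp_client/include/amqp_client.hrl").

subscribe_and_loop(Queue) ->
    %% Open a connection and channel to a local broker.
    {ok, Connection} = amqp_connection:start(#amqp_params_network{}),
    {ok, Channel} = amqp_connection:open_channel(Connection),

    %% Subscribe: the broker will push deliveries to this process as
    %% ordinary Erlang messages -- no explicit consume calls needed.
    #'basic.consume_ok'{consumer_tag = Tag} =
        amqp_channel:subscribe(Channel, #'basic.consume'{queue = Queue}, self()),
    loop(Channel, Tag).

loop(Channel, Tag) ->
    receive
        %% Sent once when the subscription is registered.
        #'basic.consume_ok'{} ->
            loop(Channel, Tag);
        %% One of these arrives for every published message.
        {#'basic.deliver'{delivery_tag = DeliveryTag},
         #amqp_msg{payload = Payload}} ->
            io:format("got message: ~p~n", [Payload]),
            amqp_channel:cast(Channel,
                              #'basic.ack'{delivery_tag = DeliveryTag}),
            loop(Channel, Tag)
    end.
```

In a websocket handler you would run this receive logic in handle_info instead of a bare receive loop, but the subscription call is the same.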
ADDITION
I looked into the gen_bunny sources... mochi/gen_bunny depends on mochi/amqp_client, which provides amqp_channel:subscribe/3 (see https://github.com/mochi/amqp_client/blob/master/src/amqp_channel.erl#L177); you can use it for subscribing.
Got it working... after some tweaking of the bunnyc.erl source. In the init function I added a subscription call, and in the start_link function of bunnyc.erl I pass in the pid of my cowboy process, so as soon as there is a message in the queue I receive it in the websocket_info function of cowboy.
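For reference, the receiving side of this arrangement might look like the sketch below. This assumes cowboy 2.x callback shapes (older cowboy 1.x uses three-argument callbacks) and a made-up `{rabbit_msg, Payload}` message format for whatever the modified bunnyc forwards; both are assumptions, not part of the poster's actual code.

```erlang
%% Sketch of a cowboy websocket handler that receives deliveries pushed
%% to it by a modified bunnyc consumer.
-module(sensor_ws_handler).
-behaviour(cowboy_websocket).
-export([init/2, websocket_init/1, websocket_handle/2, websocket_info/2]).

init(Req, State) ->
    {cowboy_websocket, Req, State}.

websocket_init(State) ->
    %% Hypothetical: here you would hand self() to the consumer process
    %% (e.g. via the modified bunnyc:start_link) so it can push to us.
    {ok, State}.

websocket_handle({text, _Msg}, State) ->
    {ok, State}.

%% Any plain Erlang message sent to this process lands here;
%% forward the payload to the browser over the websocket.
websocket_info({rabbit_msg, Payload}, State) ->
    {reply, {text, Payload}, State};
websocket_info(_Info, State) ->
    {ok, State}.
```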
Related
We have two different applications which interact with each other by sending messages. Is it possible to have multiple listeners listening to the same queue? Perhaps we could pass some header while pushing the message to the queue, and then, based on that header, the message would be delivered to a single consumer.
No; RabbitMQ doesn't work that way; unlike JMS, there is no notion of a message selector.
Each consumer needs its own queue and you use a routing key to tell the broker which queue to route the message to.
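As an illustration of routing keys, here is a sketch using the RabbitMQ Erlang client: two queues bound to one direct exchange with different routing keys, so each consumer sees only its own messages. The exchange and queue names are made up for the example, and the channel is assumed to be already open.

```erlang
-include_lib("amqp_client/include/amqp_client.hrl").

setup(Channel) ->
    %% One direct exchange; each consumer gets its own queue bound
    %% with a distinct routing key.
    #'exchange.declare_ok'{} =
        amqp_channel:call(Channel,
                          #'exchange.declare'{exchange = <<"app.events">>,
                                              type = <<"direct">>}),
    [begin
         #'queue.declare_ok'{} =
             amqp_channel:call(Channel, #'queue.declare'{queue = Q}),
         #'queue.bind_ok'{} =
             amqp_channel:call(Channel,
                               #'queue.bind'{queue = Q,
                                             exchange = <<"app.events">>,
                                             routing_key = Key})
     end || {Q, Key} <- [{<<"consumer_a">>, <<"a">>},
                         {<<"consumer_b">>, <<"b">>}]].

publish_to_a(Channel, Payload) ->
    %% Routed only to the queue bound with key <<"a">>.
    amqp_channel:cast(Channel,
                      #'basic.publish'{exchange = <<"app.events">>,
                                       routing_key = <<"a">>},
                      #amqp_msg{payload = Payload}).
```

The same pattern works in any AMQP client; the broker, not the consumer, decides where each message goes.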
I would like to implement inter-machine communication using the Indy TCP client and server components. IdTCPServer has an event called OnExecute which is triggered when the client wants something from the server. I would like the same functionality in the other direction: the SERVER sends a request to the CLIENT, which would have the same kind of OnExecute event, just as if it were acting as a server. Is that easy to achieve? I need it because I can connect to the peer only one way (NAT).
IdTCPServer has an event called OnExecute which is triggered when the client wants something from the server.
That is not how the TIdTCPServer.OnExecute event works. It is called in a continuous loop for the lifetime of the connection, regardless of anything that the client or server do with the connection between the time it is connected and the time it is disconnected.
The typical usage of the event is to block the calling thread waiting for the client to send a packet, then reply with a packet, and then exit, letting the loop fire the event again so it can wait for the next client packet.
But this is not the only way the event can be used.
I would like the same functionality, where the SERVER sends a request to the CLIENT, which would have the same OnExecute event, just as if it were acting as a server. Is that easy to achieve?
TIdTCPClient does not implement that kind of logic natively. It merely provides a connection, but you have to write your own code to tell it when to read and write data over that connection.
For what you are asking, you will need to create your own worker thread, either by writing a TThread/TIdThread-derived class, or using the TIdThreadComponent component. When the client connects to the server, start the thread. When the client disconnects, stop the thread. Then your thread can do whatever you want with the connection, just like with the TIdTCPServer.OnExecute event.
Depending on the format of your commands/responses, you might be able to use TIdCmdTCPClient instead of TIdTCPClient. TIdCmdTCPClient runs its own thread internally, and its CommandHandlers collection parses inbound requests and generates outgoing responses. All you would have to do is populate the collection with TIdCommandHandler objects that define the parsing criteria for each request, and assign an OnCommand event handler to each one to react to each request that the server sends.
Preamble: I'm trying to put together a proposal for what I assume to be a very common use-case, and I'd like to use Amazon's SWF and SQS to accomplish my goals. There may be other services that will better match what I'm trying to do, so if you have suggestions please feel free to throw them out.
Problem: The need at its most basic is for a client (mobile device, web server, etc.) to post a message that will be processed asynchronously without a response to the client - very basic.
The intended implementation is for the client to post a message to a predetermined SQS queue. At that point, the client is done. We would also have a defined SWF workflow responsible for picking the message up off the queue and (after some manipulation) placing it in a DynamoDB table; again, all fairly straightforward.
What I can't seem to figure out, though, is how to trigger the workflow to start. From what I've been reading, a workflow isn't meant to be an indefinite process; it has a start, a middle, and an end. According to the SWF documentation, a workflow can run for no longer than a year (see "Setting Timeout Values in SWF").
So, my question is: if I assume that a workflow represents one message-processing flow, how can I start the workflow whenever a message is posted to SQS?
Caveat: I've looked into using SNS instead of SQS as well. This would allow me to run a server that could subscribe to SNS, and then start the workflow whenever a notification is posted. That is certainly one solution, but I'd like to avoid setting up a server for a single web service which I would then have to manage / scale according to the number of messages being processed. The reason I'm looking into using SQS/SWF in the first place is to have an auto-scaling system that I don't have to worry about.
Thank you in advance.
I would create a worker process that listens to the SQS queue. Upon receiving a message, it calls the SWF API to start a workflow execution. The workflow execution id should be generated from the message content, to ensure that duplicate messages do not result in duplicate workflows.
You can use AWS Lambda for this purpose. A Lambda function can be invoked by an SQS event, so you don't have to write a queue poller explicitly. The Lambda function can then call the SWF API to initiate the workflow.
I am looking for a solution to poll messages from MQTT broker. I will describe the solution briefly here.
We have a Spring-based controller class which exposes REST APIs to handle certain vehicle-related diagnostics data. Through one of these APIs, Notify3P(), I create an MQTT Java client and publish messages based on some input data to the MQTT broker on a given topic. My requirement is to notify a third-party system every time the client publishes a message on MQTT.
The 3P system is going to pick up the message from MQTT once it receives the notification. It needs to get the message from the MQTT broker through a getMessage() REST API (which we need to expose on the controller class above). The getMessage() API needs to poll MQTT for the messages that have already been published and hand them to the 3P system. The 3P system would then do some processing and send a response back to our system through another REST API, postMessage(), exposed on our controller class. postMessage() should post the message on the response topic on MQTT. I need another REST API, checkResponse(), which then polls the response topic of MQTT and sends the response back to the client.
What I have done so far: on application startup, a startup bean subscribes to the MQTT request and response topics. I then publish data to the request topic using the REST API Notify3P(). I have attached a callback to the startup bean which receives the messages. The problem comes when the 3P system needs to call my controller to poll a message from MQTT.
I am not clear on how to hold back messages on MQTT and consume them on demand. Is there a mechanism for this in MQTT? Also, once the 3P system posts messages on the response topic, how do I poll that topic to pick up the response from MQTT and send it to the clients of my controller?
I hope the problem description makes sense. If anyone has a solution, please post it. Any sample code would be of great help.
Thanks in advance!
You may have gotten the idea of MQTT a bit confused. One of the key points is that there is no polling.
You subscribe to your response topic and publish to the request topic. As soon as a response is available you will be sent it by the broker. You can't hold back messages.
It sounds like your controller also needs to talk MQTT. If it is subscribed to the response topic from the start then it will receive the messages and you can do with them what you will, no need for polling.
To achieve exactly what you want, where the third party notifies the controller to read messages from MQTT, the controller would need to be able to speak MQTT anyway; at that point you might as well do it "properly". If you don't want to integrate MQTT into your controller, then you can't do what you describe, and you will have to come up with another means of communication between the two components.
Summary - get your controller to talk MQTT if you can.
I have a gen_server speaking to several hardware sensors. This setup works fine. Now I want to extend this system with a real time visualization of the data. And I want to deliver this data to a web client over websockets.
As I might have several "listeners", e.g. several people visualizing this data on different web browsers, I am thinking of something resembling the Observer Pattern. Each time a web client asks to subscribe to a sensor, it should be added to a list of stakeholders for that sensor. As soon as new sensor data arrives, it should be pushed out to the client without delay.
I am using yaws to get websocket functionality quickly. My problem relates to the way yaws seems to work. On the server side I only "see" the client at connection time, through the A#arg.clisock value (e.g. #Port<0.2825>). In the code below I register ws_server to receive the callbacks when new data arrives from the client. After this point, yaws seems to only let me respond to messages arriving on the server side.
out(A) ->
    CallbackMod = ws_server,
    Opts = [{origin, "http://" ++ (A#arg.headers)#headers.host}],
    {websocket, CallbackMod, Opts}.
This is what the callback module looks like:
% handle_message(Incoming)
% Incoming :: {text,Msg} | {binary,Msg} | {close, Status, Reason}
handle_message({_Type, Data}) ->
    gen_server:cast(?SERVER, {websocket, Data}),
    noreply.
Nowhere, it seems, am I able to react to a message such as subscribe <sensor> and (after connection time) dynamically add this stakeholder to a list of observers.
How can I use the yaws server to push data asynchronously to the client, and add and remove the sensors I want to listen to during a session? Basically, the easiest approach would be if yaws called back handle_message/2 with the first argument being From. Otherwise I need to keep a ref on both sides and send it to the server each time; it seems backwards that I need to keep that information.
When the client starts a websocket connection Yaws will start a new gen_server process to handle all communication between client and server.
The gen_server dispatches all client-sent messages to your callback function handle_message/1, so it is not used only for the initial connection.
Also, the gen_server process is used to push data from the server to the client - use the yaws_api:websocket_send(Pid, {Type, Data}) function. The Pid parameter is the pid of the websocket gen_server. Your callback module could be changed to:
handle_message({_Type, Data}) ->
    gen_server:cast(?SERVER, {self(), Data}),
    noreply.
This gives your server the pid of the websocket gen_server to be used with yaws_api:websocket_send/2. The Data parameter contains the client request and needs to be parsed by your server to check for sensor subscription requests and associate the websocket pid with the appropriate sensor.
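A sketch of the sensor server's side of this might look as follows. The `#state{}` record, the `subscribe <sensor>` wire format, and the `{sensor_data, Sensor, Reading}` message shape are all assumptions for the example; only the `{Pid, Data}` cast and yaws_api:websocket_send/2 come from the answer above.

```erlang
%% Per-sensor observer lists, mapping sensor name to subscriber pids.
-record(state, {subs = #{} :: #{binary() => [pid()]}}).

%% A websocket gen_server asked to observe a sensor: remember its pid.
handle_cast({Pid, <<"subscribe ", Sensor/binary>>},
            State = #state{subs = Subs}) ->
    Pids = maps:get(Sensor, Subs, []),
    {noreply, State#state{subs = Subs#{Sensor => [Pid | Pids]}}};

%% New hardware reading: push it to every subscribed websocket at once.
handle_cast({sensor_data, Sensor, Reading},
            State = #state{subs = Subs}) ->
    [yaws_api:websocket_send(Pid, {text, Reading})
     || Pid <- maps:get(Sensor, Subs, [])],
    {noreply, State}.
```

A production version would also monitor the subscriber pids and drop them from the map when a websocket process exits, so that closed connections do not accumulate.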