I would like to use the plugin em-eventsource (https://github.com/AF83/em-eventsource) for server-sent events in a Rails 3.1 project. My problem is that the documentation only explains how to listen for events and receive messages, but not how to fire a specific event and send the message. I would like to produce the event in an Active Record observer. Am I right in thinking that I have to defer an operation with EventMachine to produce this event, or how else can I solve this?
And yes, it has to be Ruby on Rails. If I don't get this to work with EventMachine, I will try to bypass the whole Ruby part with Node.js.
Actually I worked on this library a little with the maintainer. I think you mixed up the client part with the server one. em-eventsource is a client library which you can use to consume a Server-Sent Events API; it's not meant to fire SSE.
On the server side, it doesn't really matter whether you are using Rails or any other stack (Node.js, PHP…) as long as the server you are running on supports streaming. The default web server shipped with Rails (WEBrick) does not, but there are many others which do: Thin, Puma, Goliath…
In order to fire SSE from Rails, you would have to use a streaming-capable server among those cited and abide by the SSE specification. It mostly comes down to, first, responding with the proper Content-Type header ("text/event-stream") so that the client (browser) knows it should hang on, and then starting to stream on the socket. That latter part is not easily possible in Rails 3 as of today (yet not impossible!); Rails 4 actually now supports streaming in an easy way, with a clean and simple internal API, so it's definitely coming.
In the meantime, you'd either:
mess with Rack's API in Rails (using EventMachine, I guess; there are some examples in the wild)
or be smart about it and make use of the streaming feature provided by Sinatra, built on top of Rack (see https://gist.github.com/1476463 for an example of a Sinatra app which can be mounted in a Rails one; a rough sketch also follows below)
or you could use an external service such as Pusher
or leverage an entirely different stack…
A good overview: http://blog.phusion.nl/2012/08/03/why-rails-4-live-streaming-is-a-big-deal/
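To make the Sinatra option concrete, here is a rough sketch, assuming Sinatra 1.3+ (for the stream helper) and a streaming-capable server such as Thin; the class name, route, and broadcast helper are made up for illustration:

require 'sinatra/base'

class SseEndpoint < Sinatra::Base
  # Keep the open connections around so other code can push to them.
  set :connections, []

  get '/stream', provides: 'text/event-stream' do
    stream(:keep_open) do |out|
      settings.connections << out
      # Clean up when the client goes away.
      out.callback { settings.connections.delete(out) }
    end
  end

  # Call this from your Rails code (e.g. an observer) to push an event.
  def self.broadcast(data)
    settings.connections.each { |out| out << "data: #{data}\n\n" }
  end
end

You could then mount it from config/routes.rb with something like `mount SseEndpoint => '/sse'`, and (in the simplest single-process setup) have your observer call `SseEndpoint.broadcast(payload)`.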
Maybe I'm wrong, but IIRC Rails can't support long polling. Rails blocks the whole server (or thread, if you have more than one running inside the server) for each request, and can't reuse it until the whole response has been sent. That's why you should set up a reverse proxy (like nginx) in front of a Rails application if you suspect there could be many concurrent connections: it buffers slow client requests and hands them to Rails only once the whole request has been received. That's just how Rack works; there's probably not much you can do about it.
I'd like my Rails app to be able to listen and publish to ActiveMQ queues.
This article gives examples of how to use a Ruby STOMP client, and a gem, activemessaging, that integrates that client into a Rails app. The functionality there seems ideal, but the activemessaging gem seems to no longer be maintained.
There are lots of resources on using RabbitMQ instead of ActiveMQ, but I'm trying to improve my Rails app's integration with an existing Java stack that's already using ActiveMQ.
So does anyone know of a gem I can use to achieve similar functionality to that of the activemessaging gem? I can't find one, so failing that:
How would I initialise a Stomp client with a persistent connection to my ActiveMQ instance inside the context of my Rails app, such that 1) the lifecycle of the client is tied to that of the Ruby process running my app, not the request-response procedure, and 2) I get to consume messages using code such as Active Record models or service objects defined in my app?
Thanks in advance.
According to the ActiveMessaging project website:
ActiveMessaging is a generic framework to ease using messaging, but is not tied to any particular messaging system - in fact, it now has support for Stomp, AMQP, beanstalk, Amazon Simple Queue Service (SQS), JMS (using StompConnect or direct on JRuby), WebSphere MQ...
So, it's an interface to simplify integration between various messaging protocols and/or providers. However, since you're using a standardized messaging protocol (i.e. STOMP), you don't really need it.
I recommend you simply use this STOMP gem which is referenced in the original article.
STOMP, as the name suggests, is a very simple protocol. You should be able to use it however you need in your application.
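For instance, connecting and publishing with the stomp gem looks roughly like this; the host, credentials, and queue name are placeholders for your ActiveMQ setup:

require 'json'
require 'stomp'

# Connection details are illustrative; point them at your ActiveMQ broker.
client = Stomp::Client.new(
  hosts: [{ host: 'localhost', port: 61613, login: 'admin', passcode: 'admin' }]
)

# Publish a JSON payload to a queue.
client.publish('/queue/my_app.events', { user_id: 42 }.to_json)

client.close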
As there's so little out there on this topic, I thought I'd share the solution I came up with. Having established that using the STOMP gem directly is the way forward, let me reiterate the key challenges:
How would I initialise a Stomp client with a persistent connection to
my activeMQ instance inside the context of my Rails app, such that
1) The lifecycle of the client is tied to that of the ruby process
running my app, not the request-response procedure, and
2) I get to consume messages using code such as Active Record models or service
objects defined in my app?
Part 1) turned out to be a bad idea. I managed to achieve this using a Rails initializer, which worked fine on my local machine. However, when I ran it in a staging environment I found that my message listeners died mysteriously. What seems to happen is that production web servers spawn the app (running the initializers), fork the process (without running them again), and kill processes at random, eventually killing the listeners without ever having replaced them.
Instead, I used the daemons gem to create a background process that's easy to start and stop. My code in lib/daemons/message_listener.rb looked something like this:
require 'daemons'

# Usage (from the daemons dir):
#   ruby message_listener.rb start
#   ruby message_listener.rb status
#   ruby message_listener.rb stop
# See https://github.com/thuehlinger/daemons for full docs.

# Require this to get your app code
require_relative '../../config/environment'

Daemons.run_proc('listener.rb') do
  client = nil

  at_exit do
    begin
      client.close
    rescue
      # Probably means there's no connection to close; do nothing to handle it.
    end
  end

  client = Stomp::Client.new(your_config_options)

  # Your message handling code using your Rails app goes here

  loop do
    # I'd expected that subscribing to a stomp queue would be blocking,
    # but it doesn't seem to be.
    sleep(0.001)
  end
end
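In case it helps, the "message handling code" placeholder might look something like this (the queue name and the Message model are hypothetical); it would sit just before the loop:

# Illustrative only: subscribe to a queue and hand each message to an
# Active Record model (available because config/environment is required above).
client.subscribe('/queue/my_app.inbound') do |message|
  begin
    payload = JSON.parse(message.body)
    Message.create!(body: payload['body'])  # hypothetical model
  rescue => e
    Rails.logger.error("Failed to process STOMP message: #{e.message}")
  end
end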
I am experimenting with WebSockets in my Ruby on Rails server. I am trying faye-websocket as described here.
Initial tests look promising (I am using a Python client and I am able to connect to the websocket), but I have a newbie question that keeps bugging me. Including my websockets library as middleware in Ruby seems to capture ALL requests from my client that are websocket connections. In that case, how do I differentiate (and reply differently to) client calls with different routing (e.g. calls to http://myserver.com/apple and http://myserver.com/pear both being websockets)?
EDIT
I found that the env variable contains the field "REQUEST_PATH", which holds the path requested by the client. I can use that variable to return the appropriate answer to each of the different client calls. Is there any more "elegant" way to do it?
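For reference, one way to keep that check tidy is to branch on the path inside the middleware itself. A rough sketch follows; the handler mapping and paths are made up, and PATH_INFO is used here, though REQUEST_PATH works the same way:

require 'faye/websocket'

class WebsocketRouter
  # Map websocket paths to handlers; the paths and replies are illustrative.
  HANDLERS = {
    '/apple' => ->(ws, data) { ws.send("apple: #{data}") },
    '/pear'  => ->(ws, data) { ws.send("pear: #{data}") }
  }.freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    return @app.call(env) unless Faye::WebSocket.websocket?(env)

    handler = HANDLERS[env['PATH_INFO']]
    return @app.call(env) unless handler

    ws = Faye::WebSocket.new(env)
    ws.on(:message) { |event| handler.call(ws, event.data) }
    ws.on(:close)   { |_event| ws = nil }
    ws.rack_response
  end
end

You would add it like any other Rack middleware, e.g. with `config.middleware.use WebsocketRouter`.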
I am building a pool of PhantomJS instances, and I am trying to make it so that each instance is autonomous (it fetches the next job to be done).
My concern is to choose between these two:
Right now I have a Rails app that can tell PhantomJS which URL needs to be parsed next. So, I could do an HTTP GET call from PhantomJS to my Rails app, and Rails would respond with a URL that is pending to be done (most likely Rails would get that from a queue).
I am thinking of building a standalone Redis server that PhantomJS would access via Webdis, so Rails would push the jobs there and the PhantomJS instances would fetch them from it directly.
I am trying to work out which would be the better decision in terms of performance: PhantomJS hitting the Rails server (so Rails needs to get the job from the queue and send it to PhantomJS), or just making PhantomJS access a Redis server directly.
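To make the second option concrete, the Rails side could be as small as this (the key name and connection details are placeholders); a PhantomJS instance would then pop jobs through Webdis with a plain HTTP call such as GET /RPOP/phantom:jobs:

require 'redis'

# Rails side: push a pending URL onto a Redis list (key name is illustrative).
redis = Redis.new(host: 'localhost', port: 6379)
redis.lpush('phantom:jobs', 'http://example.com/page-to-parse')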
Maybe I need more info, but why isn't the performance answer obvious? PhantomJS hitting the Redis server directly means less stuff to go through.
I'd consider developing whatever is easier to maintain. What's the ballpark req/minute? What sort of company (how funded / resource-strapped are you)?
There are also more OOTB solutions like IronMQ that may ease the pain.
I wanted to build a chat-like application (i.e. bidirectional message passing to multiple connected clients). I looked at the Faye gem, but it opens a new port apart from port 80.
The big problem is that if the client is behind a firewall, all access to ports other than 80 is restricted, and not all hosting sites provide the support.
The ActionController::Live component does not have any mechanism to register clients, so a message cannot be passed to the registered clients when a specific event occurs.
I am looking for a solution where the alive clients are stored in a collection (an array or something like that), and when any of the alive clients sends a message, the collection can be iterated and the message written to each of them. All of this must happen only through port 80.
Good question - having implemented something similar, let me explain how it works:
Connections
A "live" web application is not really "live" at all - it's just got a persistent request; meaning it still works exactly the same as a "normal" Rails app, except clients don't close the connection (hence why you're interested in opening another port)
The way you handle the request is where the magic happens. This is as much to do with the client-side, as it is with Rails (server-side)
Clients
When you connect to a "chat" application, your browser opens a live connection with the server. This will typically be done with server-sent events, Ajax long polling, or WebSockets
The way this works is to open the connection using the normal Rails ActionDispatch middleware, and then allow you to connect
If you've played with the ActionController::Live functionality, you'll find that it's not a typical controller action. It's actually a separate piece of technology (like Resque or Redis) which you call from another controller action. This gives you room to do cool things
Server
The way you'd handle something like this is to separate the "live" functionality from the "normal" Rails app. It's one of the current downfalls of Rails - it's probably better to implement something like Node.js with socket.io to handle the live data (with an endpoint like chat.yourapp.com), whilst using Rails to handle authentication & authorization
From a server perspective, its job is to handle incoming & outgoing requests -- not to handle persistent connections. So I guess you may want to look at ways you could "outsource" the websocket connectivity. Admittedly, my experience is slightly thin in this area, so you may do well searching the net
Solutions
We've had a lot of success using a third-party system called Pusher
This is a web socket system which allows you to open a persistent connection as a client, and integrates with Rails in a similar way to Redis (you can push to it)
This means you can host the "chat" application with Rails (http://yourapp.com/chat), send the messages to your Rails app (http://yourapp.com/chat/send), and handle the incoming chats from Pusher (or similar)
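If you go the Pusher route, the Rails side is essentially one call per chat message. A rough sketch (channel and event names are made up, and credentials would normally live in an initializer or environment variables):

require 'pusher'

# Credentials are placeholders; configure them once, e.g. in an initializer.
Pusher.app_id = ENV['PUSHER_APP_ID']
Pusher.key    = ENV['PUSHER_KEY']
Pusher.secret = ENV['PUSHER_SECRET']

# e.g. from the controller action behind http://yourapp.com/chat/send
Pusher.trigger('chat-room', 'new-message', user: 'alice', body: 'hello')

The browser subscribes to the same channel via Pusher's JavaScript client, which connects over standard web ports, so the firewall concern from the question is handled for you.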
Maybe you want to use my open-source Comet web server (https://github.com/TorstenRobitzki/Sioux). There is a Ruby web chat example. I use this to implement an interactive role-playing map with Rails (http://dungeonpilot.com).
I'm having trouble figuring out how to do this using Rails, though that is probably because I don't know the proper term for it.
I basically want to do this:
def my_action
  sleep 1
  # output something in the request, but keep it open
  print '{"progress":15}'
  sleep 3
  # output something else, keep it open
  print '{"progress":65}'
  sleep 1
  # append some more, and close the request
  print '{"success":true}'
end
However I can't figure out how to do this. I basically want to replicate a slow internet connection.
I need to do this because I am scraping websites, which takes time, where I am 'sleeping' above.
Update
I'm reading this using iOS, so I don't want a websocket server, I think.
Maybe this is exactly what you're looking for:
Infinite streaming JSON from Rails 3.1
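That link covers a Rails 3.1 approach; if moving to Rails 4's ActionController::Live is an option, the pseudocode above would translate to roughly this (the controller name and scraping helpers are made up):

class ScrapesController < ApplicationController
  include ActionController::Live

  def my_action
    response.headers['Content-Type'] = 'application/json'

    response.stream.write '{"progress":15}'
    scrape_first_site    # hypothetical slow work, standing in for sleep 1

    response.stream.write '{"progress":65}'
    scrape_second_site   # hypothetical slow work, standing in for sleep 3

    response.stream.write '{"success":true}'
  ensure
    # Always close the stream so the connection is released.
    response.stream.close
  end
end

Note that the body is three concatenated JSON objects, mirroring the pseudocode; the iOS client would need to read the stream incrementally rather than waiting for a single JSON document.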
You probably want to do some reading around HTML5 WebSockets (there are backwards-compatible hacks for older browsers), which let you push data to the client from the server.
Rails has a number of ways to implement a WebSocket server. This question gives some of the options: Best Ruby on Rails WebSocket tool
If that would work on the server side, how would you handle it on the client side?
HTTP requests normally have just one response (which may be chunked when using streaming, though I don't think that would work in your case).
I guess you would either have to look into websockets or make separate requests for each step.