I'm working in a Rails 4.2.7 application and need to use WebSockets. Unfortunately I can't upgrade to Rails 5 and use Action Cable for this.
The application uses Puma in production. I mention this because I found some gems intended for WebSockets, but they only work with the Thin app server, not with Puma.
Is there any easy solution for this?
Specifically, what I need to do is trigger an event to my clients (browsers in this case) every time my model is updated:
class MyModel < ActiveRecord::Base
  after_commit :notify_users, if: :some_condition

  def notify_users
    # Trigger a web socket event to my clients here
  end
end
I have tried https://github.com/websocket-rails/websocket-rails; the message gets sent according to the logs, but I'm not receiving it in the browser, and unfortunately it seems that the gem is not maintained anymore.
Is there a solution that might work for this situation? Thanks.
Based on this issue, it sounds like the best bet is tubesock.
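For reference, here is a minimal controller sketch along the lines of the tubesock README (the controller name and messages are invented for illustration; check the gem's docs for the current API):

```ruby
# Rack hijacking requires a server that supports it, e.g. Puma.
class NotificationsController < ApplicationController
  include Tubesock::Hijack

  def subscribe
    hijack do |tubesock|
      tubesock.onopen do
        tubesock.send_data "connected"
      end

      tubesock.onmessage do |data|
        tubesock.send_data "you said: #{data}"
      end
    end
  end
end
```

To push an event from MyModel#notify_users to every open socket you would still need some pub/sub layer (for example Redis) that each hijacked connection subscribes to; tubesock itself only manages the individual connection.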
Related
I'd like my Rails app to be able to listen and publish to ActiveMQ queues.
This article gives examples of how to use a ruby STOMP client, and a gem activemessaging that integrates that client into a Rails app. The functionality there seems ideal, but the activemessaging gem seems to no longer be maintained.
There are lots of resources on using rabbitMQ instead of ActiveMQ, but I'm trying to improve my Rails app's integration with an existing Java stack that's already using ActiveMQ.
So does anyone know of a gem I can use to achieve similar functionality to that of the activemessaging gem? I can't find one, so failing that:
How would I initialise a Stomp client with a persistent connection to my activeMQ instance inside the context of my Rails app, such that 1) The lifecycle of the client is tied to that of the ruby process running my app, not the request-response procedure, and 2) I get to consume messages using code such as Active Record models or service objects defined in my app?
Thanks in advance.
According to the ActiveMessaging project website:
ActiveMessaging is a generic framework to ease using messaging, but is not tied to any particular messaging system - in fact, it now has support for Stomp, AMQP, beanstalk, Amazon Simple Queue Service (SQS), JMS (using StompConnect or direct on JRuby), WebSphere MQ...
So, it's an interface to simplify integration between various messaging protocols and/or providers. However, since you're using a standardized messaging protocol (i.e. STOMP), you don't really need it.
I recommend you simply use this STOMP gem which is referenced in the original article.
STOMP, as the name suggests, is a very simple protocol. You should be able to use it however you need in your application.
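To illustrate, here is a rough sketch of using the stomp gem directly against an ActiveMQ broker (the host, port, credentials, and queue name are placeholders):

```ruby
require 'stomp'
require 'json'

# Connect to the broker over STOMP (ActiveMQ listens on 61613 by default).
client = Stomp::Client.new("admin", "admin", "localhost", 61613)

# Publish a message to a queue.
client.publish("/queue/orders", { id: 42 }.to_json)

# Consume messages from the same queue.
client.subscribe("/queue/orders") do |msg|
  puts "received: #{msg.body}"
end

client.join   # block the calling thread while the subscription runs
client.close
```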
As there's so little out there on this topic, I thought I'd share the solution I came up with. Having established that using the STOMP gem directly is the way forward, let me reiterate the key challenges:
How would I initialise a Stomp client with a persistent connection to
my activeMQ instance inside the context of my Rails app, such that
1) The lifecycle of the client is tied to that of the ruby process
running my app, not the request-response procedure, and
2) I get to consume messages using code such as Active Record models or service
objects defined in my app?
Part 1) turned out to be a bad idea. I managed to achieve this using a Rails initializer, which worked fine on my local machine. However, when I ran it in a staging environment, I found that my message listeners died mysteriously. What seems to happen is that production web servers spawn the app (running the initializers), fork the process (without running them again), and kill processes at random, eventually killing the listeners without ever replacing them.
Instead, I used the daemons gem to create a background process that's easy to start and stop. My code in lib/daemons/message_listener.rb looked something like this:
require 'daemons'

# Usage (from the daemons dir):
#   ruby message_listener start
#   ruby message_listener status
#   ruby message_listener stop
# See https://github.com/thuehlinger/daemons for full docs.

# Require this to get your app code
require_relative '../../config/environment'

Daemons.run_proc('listener.rb') do
  client = nil

  at_exit do
    begin
      client.close
    rescue # probably means there's no connection to close, do nothing to handle it.
    end
  end

  client = Stomp::Client.new(your_config_options)

  # Your message handling code using your rails app goes here

  loop do
    # I'd expected that subscribing to a stomp queue would be blocking,
    # but it doesn't seem to be.
    sleep(0.001)
  end
end
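For the "message handling code" placeholder above, here is a hedged sketch of what I mean. It would sit inside the Daemons.run_proc block, just before the keep-alive loop; the queue name and the ImportedEvent model are made up, but any model or service object loaded via config/environment is available:

```ruby
client.subscribe("/queue/my_app_events") do |msg|
  begin
    payload = JSON.parse(msg.body)
    # Any Active Record model or service object from the Rails app can be used here.
    ImportedEvent.create!(payload)
  rescue => e
    Rails.logger.error("Failed to handle message: #{e.message}")
  end
end
```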
I have created a simple ajax call with the following code:
controller.rb
def locations
  sleep 1.2
  some_data = [{ "name" => "chris", "age" => "14" }]
  render json: some_data
end
view.js
function getLocation() {
  $.get('/location').success(function(data) { console.log(data); });
}

$(".button").click(function() { getLocation(); });
routes.rb
get '/location' => 'controller#locations'
Note that the sleep 1.2 in the controller is just there to simulate slow work instead of doing actual background jobs or database calls.
The screenshot below is from the devtools Network tab. It shows that I clicked the button 8 times and all the subsequent calls are stalled until the previous one is finished. Is this due to Rails being single-threaded? Would it be different if the server were built with NodeJS? And how can I achieve similar concurrency with Rails for AJAX calls like these?
Thanks!!
Actually, it is not due to Rails, but to the app server you are using. Some are single-threaded, and others can be launched as multithreaded.
For instance, if you use Phusion Passenger, you can configure it to run with several threads and so improve concurrency. You should look for Rails "server" comparisons instead of trying to find a solution or a problem in the Rails "framework".
Popular servers are Thin, Unicorn, Puma, and Phusion Passenger. The default development server is called WEBrick.
There are a lot of other stackoverflow questions relating to the differences between servers so I think you should look into them.
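As a concrete example, here is roughly what a threaded Puma setup looks like (the numbers are arbitrary and should be tuned for your app):

```ruby
# config/puma.rb
workers 2        # number of forked worker processes (cluster mode)
threads 1, 16    # min and max threads per worker
```

Also note that in development you'll only see concurrent AJAX requests once you run a multi-threaded or multi-process server; depending on your Rails version, the Rack::Lock middleware in the development stack (controlled by config.allow_concurrency) may still serialize requests.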
In my application, I have used caching. This is the code I have used: in an after_filter, I call a method which includes this one line.
Rails.cache.write("properties", @properties.to_xml)
I try to read this in another action, in a before_filter, like
@hotels = Rails.cache.fetch("properties")
This all works fine on my development machine, but on the server it returns a null value. The application runs in the same development mode on the server. Can anyone please suggest the right way to do this? Thanks in advance.
It sounds like you haven't configured a backend for the cache store, so it will use ActiveSupport::Cache::MemoryStore.
From the documentation:
If you're running multiple Ruby on Rails server processes (which is the case if you're using mongrel_cluster or Phusion Passenger), then this means that Rails server process instances won't be able to share cache data with each other.
This works in development since you are likely using a single server instance, so the cache is only stored in one process. For production you need to configure an alternative shared store. I'd recommend running a memcached instance, and installing and using the Dalli Gem as per the README.
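For example, the memcached setup is roughly this (the server address and options are illustrative; follow the Dalli README for your versions):

```ruby
# Gemfile
gem 'dalli'

# config/environments/production.rb
config.cache_store = :dalli_store, 'localhost:11211',
                     { namespace: 'my_app', expires_in: 1.day, compress: true }
```

Because every app server process then talks to the same memcached instance, the value written in the after_filter is visible to whichever process handles the next request.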
I started developing a web-socket based game using the em-websocket gem.
To test the application I start the server by running
$> ruby server.rb
and then I just open two browsers going directly to the html file (no web server) and start playing.
But now I want to add a web server, some database tables, and other Ruby on Rails-based gems.
How can I achieve communication between my web-socket server and my Ruby on Rails application? Should they run on the same server as a single process? Or run on separate servers and communicate through AJAX?
I need to support authentication and other features like updating the database when a game is finished, etc.
Thanks in advance.
There is an issue created about this:
https://github.com/igrigorik/em-websocket/issues/21
Here is the deal. I also wanted to develop a websocket server and client with the Ruby on Rails framework. However, Ruby on Rails is not very friendly with EventMachine. I struggled with getting a websocket client working, so I managed to copy/cut/paste from existing libs and ended up with the following two essential pieces.
Em-Websocket server
https://gist.github.com/ffaf2a8046b795d94ba0
ROR friendly websocket client
https://gist.github.com/2416740
Put the server code in the script directory, then start it like the following from Ruby code:
# Spawn a new process and run the websocket server script.
# (Optionally silence its output by also passing :out => '/dev/null', :err => '/dev/null'.)
pid = Process.spawn("ruby", "web_socket_server.rb",
                    "--loglevel=debug", "--logfile=#{Rails.root}/log/websocket.log",
                    :chdir => "#{Rails.root}/script")
Process.detach pid # Detach the spawned process
Then your client can be used like this
ws = WebSocketClient.new("ws://127.0.0.1:8099/import")

Thread.new do
  while data = ws.receive()
    if data =~ /cancel/
      ws.send("Cancelling..")
      exit
    end
  end
end

ws.close
I wish there were a good ROR-friendly em-websocket client, but I couldn't find one yet.
Once you have the server and client working well, auth and database support shouldn't be very different from other Rails code. (I mean having the client side work with some auth/db restrictions.)
I am working on a gem that may be helpful with your current use case. The gem is called websocket-rails and has been designed from the ground up to make using WebSockets inside of a Rails application drop dead simple. It is now at a stable release.
Please let me know if you find this helpful or have any thoughts on where it may be lacking.
I would like to use the plugin em-eventsource ( https://github.com/AF83/em-eventsource ) for server-sent events in a Rails 3.1 project. My problem is that it only explains how to listen for events and receive messages, but not how to fire a specific event and send the message. I would like to produce the event in an Active Record observer. Am I right in thinking that I have to defer an operation with EventMachine to produce this event, or how else can I solve this?
And yes, it has to be Ruby on Rails. If I can't get this to work with EventMachine, I will try to bypass the whole Ruby part with node.js.
Actually, I worked on this library a little with the maintainer. I think you mixed up the client part with the server one: em-eventsource is a client library which you can use to consume a Server-Sent Events API; it's not meant to fire SSE.
On the server side, it doesn't really matter whether you are using Rails or any other stack (nodejs, php…) as long as the server you are running on supports streaming. The default web server shipped with Rails (WEBrick) does not, but there are many others which do: Thin, Puma, Goliath…
In order to fire SSE in Rails, you would have to use both a streaming-capable server among those cited, and abide by the SSE specification. It mostly comes down to, first, responding with the proper Content-Type header ("text/event-stream") so that the client (browser) knows it should hang on, and then streaming on the socket. That latter part is the one not easily possible as of today in Rails 3 (yet not impossible!); Rails 4 now supports streaming in an easy way, with a clean and simple internal API, so it's definitely coming.
In the mean time, you'd either:
mess with Rack's API in Rails (using EventMachine I guess, there are some examples in the wild)
or be smart and make use of the streaming feature provided by Sinatra, built on top of Rack (see https://gist.github.com/1476463 for an example of a Sinatra app which can be mounted in a Rails one!)
or you could use an external service such as Pusher
or leverage an entirely different stack…
A good overview: http://blog.phusion.nl/2012/08/03/why-rails-4-live-streaming-is-a-big-deal/
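For completeness, the Rails 4 streaming mentioned above looks roughly like this (the controller, event name, and payload are invented for illustration):

```ruby
class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    sse = ActionController::Live::SSE.new(response.stream, event: 'model-updated')
    sse.write({ id: 1, status: 'updated' })   # hashes are serialized to JSON for the data: field
  ensure
    sse.close if sse
  end
end
```

Remember that this still needs a streaming-capable server such as Puma or Thin in front of it.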
Maybe I'm wrong, but IIRC Rails can't support long polling. Rails blocks the whole server (or a thread, if you have more than one running inside the server) for each request and can't reuse it until the whole response has been sent. That's why you should set up a reverse proxy (like nginx) in front of the Rails application if you suspect there could be many concurrent connections: it buffers slow client requests and hands them to Rails only once the whole request has been received. It's just how Rack works; there's probably not much you can do about this.