Breaking rails MVC: Sending data from model directly to view via AJAX - ruby-on-rails

I am creating a MUD using Rails. Here is what I got so far:
Right now I am working on a combat system. My combat system will work like this:
current_user sees characters and non_player_characters in room
When current_user attacks another character, the other characters have 5 seconds to "deflect" the attack or they are hit. (Not fully implemented)
When current_user attacks an NPC, there is a 50% the NPC will deflect the attack
NPC will send attacks back to user and user will have to deflect attacks within the proper time interval (Not fully implemented).
In order to implement this combat system I decided I needed to use multithreading and timers:
def initiate_attack
  Thread.new do
    sleep(5)
    hit_target
    ActiveRecord::Base.connection.close
  end
end
def non_player_character_failed_to_deflect
  rand(100) < 50 # 50% chance, matching the intended deflect rate
end
def is_non_player_character?
  @attack.target_type == "NonPlayerCharacter"
end
def hit_target
  if is_non_player_character?
    if non_player_character_failed_to_deflect
      damage_target
    else
      puts "Deflected"
    end
  else
    "hit player"
  end
end
def damage_target
  @target.update(power_level: @target.power_level - 10)
end
This works as far as pure functionality is concerned, but the problem is I can't figure out how to get the strings back to the view so the user can see them. The user should see a message when anyone initiates an attack, and when an attack completes. I think the main issue is that by using multithreading MVC is broken, because my threads in the model are still running after control has been returned to the controller and view.
So to summarize my question:
1) How do I make it so my view is continuously updated via AJAX with data coming from the model?
For more information please visit the github page for this project:

You need a way to push data to the browser. To do that you have a few options:
Use polling or long-polling. The message_bus gem makes this very simple.
Use websockets as suggested by Justin.
Use another new technology to push events from the server, like [server-sent events](http://www.w3schools.com/html/html5_serversentevents.asp).
I would give the message_bus gem a try.
EDIT: You might as well try [Sidekiq](http://sidekiq.org/) to run the asynchronous code - I believe you'll find code using it easier to maintain in the long run, especially compared to the approach of using threads directly.
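To make the decoupling concrete, here is a plain-Ruby sketch of the pattern a message bus gives you: model code publishes combat events to a channel instead of returning strings, and a consumer (standing in for the browser's polling connection) drains them. `ToyBus` is an illustrative in-process stand-in, not the message_bus gem's real API; the gem does this across processes and over HTTP long-polling.

```ruby
# A toy in-process bus illustrating the pattern: model-side threads publish
# combat events; the view-facing side polls for them later.
class ToyBus
  def initialize
    # one FIFO queue per channel
    @channels = Hash.new { |h, k| h[k] = Queue.new }
  end

  def publish(channel, message)
    @channels[channel] << message
  end

  # Drain and return all pending messages on a channel.
  def poll(channel)
    messages = []
    messages << @channels[channel].pop until @channels[channel].empty?
    messages
  end
end

bus = ToyBus.new

# The combat thread publishes instead of trying to reach the view directly.
attacker = Thread.new do
  bus.publish("/combat", "Attack initiated")
  bus.publish("/combat", "Deflected")
end
attacker.join

bus.poll("/combat") # => ["Attack initiated", "Deflected"]
```

The browser-side half of the real gem would subscribe to the same channel name and append each message to the page.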

Related

Must a server sent event always be firing regardless of what page a user is on?

I am pretty new to SSE so feel free to let me know if I've misunderstood the purpose and there's a much better way of implementing what I want!
I have a working SSE that, every minute, updates a user's dashboard. The code looks like this:
# SitesController
def dashboard
end

def regular_update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  begin
    sse.write(NoticeTask.perform) # custom code returning the JSON
    sleep 60
  rescue ClientDisconnected
  ensure
    sse.close
  end
end
# routes
get "/dashboard(/:id)" => "sites#dashboard"
get "/site_update" => 'sites#regular_update'
// view - /dashboard
var source = new EventSource('/site_update');
source.addEventListener('notice', function(event) {
  var data = JSON.parse(event.data);
  appendNoticeAndAlert(data);
});
This works just fine. When I'm on /dashboard for a user, the right info is being updated regularly by the SSE, great!
However, I notice if I'm on any random page, like just the home page, the SSE is still running in the background. Now... obviously this makes sense, since there's nothing in the code that is otherwise limiting that... but shouldn't there be??? Like shouldn't there be a way to scope the SSE in some way? Isn't it a huge waste of resources if the user is never on the /dashboard for the SSE to be constantly working in the background, updating the /dashboard page?
Again, new to SSE, if this is fundamentally wrong, please advise as well. Thanks!
In your controller, when handling SSE you're expected to do updates in a loop; ActionController::Live::ClientDisconnected is then raised by response.stream.write once the client is gone:
def regular_update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  loop do
    sse.write(NoticeTask.perform) # custom code returning the JSON
    sleep 60
  end
rescue ClientDisconnected
  logger.info "Client is gone"
ensure
  sse.close
end
Your code disconnects the client after the first update and delay, but everything appears to work because EventSource automatically reconnects (so you're effectively getting long-polling updates).
On the client, the EventSource should be close()d once it is no longer needed. Usually this happens automatically upon navigation away from the page containing it, so:
make sure the EventSource JavaScript is only on the dashboard page, not in the JavaScript bundle (or is in the bundle, but only enabled on that specific page)
if you're using Turbolinks, you have to close() the connection manually; as a quick solution, try adding <meta name="turbolinks-visit-control" content="reload"> to the page header or disabling Turbolinks temporarily.
Also think again about whether you actually need SSE for this specific task: for plain periodic updates you can just poll a JSON action from client-side code that renders the same data. This makes the controller simpler, does not keep a connection busy for each client, has wider server compatibility, etc.
For SSE to be worthwhile, at least check whether something has actually changed and skip the message if nothing has. A better way is to use some kind of pub-sub (like Redis' SUBSCRIBE/PUBLISH, or Postgres' LISTEN/NOTIFY): emit events to a topic every time something that affects the dashboard changes, subscribe on SSE connect, and so on (you may also want to throttle updates, depending on your application). Something similar can be implemented with ActionCable (a bit overkill, but it can be handy, since it already has pub-sub integrated).
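The "only emit when something changed" idea can be sketched in plain Ruby: hash the payload and suppress writes when the digest is unchanged. `ChangeGate` is an illustrative name; inside the SSE loop you would only call sse.write when this returns a payload.

```ruby
require "digest"

# Suppresses repeated identical payloads by comparing content digests.
class ChangeGate
  def initialize
    @last_digest = nil
  end

  # Returns the payload if it differs from the previous one, else nil.
  def changed(payload)
    digest = Digest::SHA256.hexdigest(payload)
    return nil if digest == @last_digest
    @last_digest = digest
    payload
  end
end

gate = ChangeGate.new
gate.changed('{"visits": 10}') # => '{"visits": 10}'  (first payload: emit)
gate.changed('{"visits": 10}') # => nil               (unchanged: skip)
gate.changed('{"visits": 11}') # => '{"visits": 11}'  (changed: emit)
```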

How to Make the Controller wait for a Delayed Job while the rest of the App continues on?

(This question is a follow-up to How do I handle long requests for a Rails App so other users are not delayed too much? )
A user submits an answer to my Rails app and it gets checked in the back-end for up to 10 seconds. This would cause delays for all other users, so I'm trying out the delayed_job gem to move the checking to a Worker process. The Worker code returns the results back to the controller. However, the controller doesn't realize it's supposed to wait patiently for the results, so it causes an error.
How do I get the controller to wait for the results and let the rest of the app handle simple requests meanwhile?
In Javascript, one would use callbacks to call the function instead of returning a value. Should I do the same thing in Ruby and call back the controller from the Worker?
Update:
Alternatively, how can I call a controller method from the Worker? Then I could just call the relevant actions when its done.
This is the relevant code:
Controller:
def submit
  question = Question.find params[:question]
  user_answer = params[:user_answer]
  @result, @other_stuff = SubmitWorker.new.check(question, user_answer)
  render_ajax
end
submit_worker.rb:
class SubmitWorker
  def check
    # lots of code...
  end
  handle_asynchronously :check
end
Using DJ to offload the work is absolutely fine and normal, but making the controller wait for the response rather defeats the point.
You can add some form of callback to the end of your check method so that when the job finishes your user can be notified.
You can find some discussion on performing notifications in this question: push-style notifications simliar to Facebook with Rails and jQuery
Alternatively you can have your browser periodically call a controller action that checks for the results of the job - the results would ideally be an ActiveRecord object. Again you can find discussion on periodic javascript in this question: Rails 3 equivalent for periodically_call_remote
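The poll-for-results approach can be sketched in plain Ruby: the worker records its result under a job id, and a "status" action the browser polls just reads it back. The names (`enqueue_check`, `check_status`, the `RESULTS` hash) are illustrative; in Rails the store would be a database row (ideally an ActiveRecord object, as suggested above), and the worker would be a delayed_job, not a bare thread.

```ruby
require "securerandom"

RESULTS = {}

# Kick off the slow check and immediately return a job id to the browser.
def enqueue_check(question, user_answer)
  job_id = SecureRandom.hex(8)
  Thread.new do
    # stand-in for the 10-second answer-checking work
    RESULTS[job_id] = "checked: #{user_answer} for #{question}"
  end
  job_id
end

# The polled "status" action: nil means "still working, ask again later".
def check_status(job_id)
  RESULTS[job_id]
end

id = enqueue_check("Q1", "42")
sleep 0.1
check_status(id) # => "checked: 42 for Q1"
```

The browser's periodic AJAX call hits the status action with the job id until it gets a non-empty result, then renders it.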
I think what you are trying to do here is a little contradictory, because you use delayed_job when you don't want to interrupt the control flow (so your users don't have to wait until the request completes).
But if you want your controller to wait until you get the results, then you don't want to use background processes like delayed_job.
You might want to think of a different way of notifying the user after you have done your checking, while keeping the background process as it is.

Use model callbacks or observers for implementing activity feed?

I'm implementing an activity feed for a client similar to Twitter's (it's only activity that pertains to the current signed in user -- i.e. who favorite his/her post, mentions, etc..).. It won't rely on 'push' but instead, the user will have to refresh the page in order to see new activity (for now).
I've been googling & searching around SO for the past hour to find the best way to implement this, and observers keep coming up in the solutions. I also notice that many of these are using push notification. I noticed the approach R Bates took in his public activity railscast btw, which is why I'm asking this question.
What if I don't want to use push notification, would callbacks be OK, or even better? Do you think I would still need to implement other things outside Rails for scalability? (like how you may use "Pushapp" for push notifications)
Any suggestions on better solutions or light shed would be helpful.
This is for #gg_s
I'm assuming in this case you're saying I have an activity_feed table (receiver_id, sender_id, activity_type, & activity_id) (belongs to user, belongs_to activity_type (???), :polymorphic => true)
# application_controller.rb
def publish_to_user_feed(message)
  current_user.activity_feed << message
end

# favorites_controller.rb
def create
  # blah blah blah
  publish_to_user_feed "This just happened."
end
In the favorites_controller's 'create' action, "This just happened" could == "@favorite.user just favorited @favorite.post by @favorite.post.user"
Again, I hope I'm not being too pesky & am pretty sure what I'm asking is obvious, but I think this will help clear things up for me & also help future visitors.
Thanks again
For anyone that wants to know, I'm still working on this.. Just took a little break.. My main concern is how heavy it'll be on the db & other performance issues so if anyone wants to better this (using the code above), feel free :)
Solution: I don't want to overcomplicate things so I'm taking ap's advice.
Use neither.
Callbacks and observers are more complex in this case than you might think. The only automation they provide is the ability to be triggered upon model events. That's it. You are responsible for implementing the logic determining:
what just happened?
should it be reported?
what to report?
Extending this logic to support several types of activities is needlessly complex. Ditch the automation and publish activities from the controller as they happen on an as-needed basis.
Create a helper method to keep things DRY:
# application_controller.rb
def publish_to_user_feed(message)
  current_user.activity_feed << message
end
Then manually post to a user's feed when and where necessary:
# some_controller.rb
def some_action
  # perform some action
  publish_to_user_feed "This just happened."
end
Reporting directly from the controller is clear, readable, DRY, maintainable, and adheres to Rails' MVC pattern. No complex callback chains or observers to write.
As a bonus, it is trivial to perform activities without posting to activity feeds, e.g. administrative activity or user privacy settings.
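For clarity, here is a plain-Ruby sketch of what the feed object behind publish_to_user_feed might look like. `ActivityFeed` and its `recent` reader are illustrative names; in the app this would be an ActiveRecord association over the activity_feed table (receiver_id, sender_id, etc.) rather than an in-memory array.

```ruby
# Minimal feed: append messages, read back newest-first.
class ActivityFeed
  def initialize
    @items = []
  end

  # Supports the `current_user.activity_feed << message` style from above.
  def <<(message)
    @items << { message: message, created_at: Time.now }
    self
  end

  # Newest-first, like a typical activity feed page.
  def recent(limit = 20)
    @items.last(limit).reverse.map { |i| i[:message] }
  end
end

feed = ActivityFeed.new
feed << "Alice favorited your post"
feed << "Bob mentioned you"
feed.recent # => ["Bob mentioned you", "Alice favorited your post"]
```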

Providing updates during a long Rails controller action

I have an action that takes a long time. I want to be able to provide updates during the process so the user is not confused as to whether he lost the connection or something. Can I do something like this:
class HeavyLiftingController < ApplicationController
  def data_mine
    render_update :js => "alert('Just starting!')"
    # do some complicated find etc.
    render_update :js => "alert('Found the records!')"
    # do some processing ...
    render_update :js => "alert('Done processing')"
    # send @results to view
  end
end
No, you can only issue ONE render within a controller action. The render does NOTHING until the controller terminates. When data_mine terminates, there will be THREE renders, which will result in an error.
UPDATE:
You'll likely have to set up a JavaScript (jQuery) timer in the browser, then periodically send an AJAX request to the server to determine the current status of your long-running task.
For example the long running task could write a log as it progresses, and the periodic AJAX request would read that log and create some kind of status display, and return that to the browser for display.
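The log-and-poll idea can be sketched in plain Ruby: the long task appends progress lines to a shared log, and the periodic AJAX action just returns the latest line. The names (`log_progress`, `current_status`, the `PROGRESS` array) are illustrative; a real app would write to the database or a cache, not an in-memory array.

```ruby
PROGRESS = []
PROGRESS_LOCK = Mutex.new

# Called by the long-running task at each milestone.
def log_progress(message)
  PROGRESS_LOCK.synchronize { PROGRESS << message }
end

# Called by the polled status action: returns the most recent milestone.
def current_status
  PROGRESS_LOCK.synchronize { PROGRESS.last }
end

worker = Thread.new do
  log_progress "Just starting!"
  # ... complicated find ...
  log_progress "Found the records!"
  # ... processing ...
  log_progress "Done processing"
end
worker.join

current_status # => "Done processing"
```

The browser's timer fetches current_status every few seconds and updates a status display, so the user sees progress even though the action itself only renders once.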
It is impossible to handle the request that way. For each request, you have just one answer.
If your action takes a long time, then maybe it should be performed asynchronously. You could send user e-mails during the process to notify him of the progress.
I suggest that you take a look at the DelayedJob gem:
http://rubygems.org/gems/delayed_job
It will handle most of the difficult parts of dealing with async stuff for you (serializing/deserializing your objects, storage, and so on...).
Hope it helps you!

Thin EventMachine Sinatra vs. Rails

I have been looking into the possibility of backgrounding some jobs with EventMachine. In Sinatra this appears to work great but Rails 3 appears to execute all ticks before rendering a view.
When I run the following code under the thin webserver it behaves as expected. The first request returns immediately and the second request is waiting for the 3 second sleep call to finish. This is the expected behavior.
class EMSinatra < Sinatra::Base
  get "/" do
    EM.next_tick { sleep 3 }
    "Hello"
  end
end
Whereas in Rails 3 I am trying to do the same thing (running under thin):
class EmController < ApplicationController
  def index
    EM.next_tick {
      sleep(3)
    }
  end
end
In Rails the sleep call happens before rendering the view to the browser. The result is that I am waiting for 3 seconds for the initial page to render.
Does anybody know why this is happening? I am not looking for comments on whether this is a good practice or not. I am simply experimenting. Throwing small tasks into the reactor loop seems like an interesting thing to look into. Why should the client have to wait if I am going to make some non-blocking HTTP requests?
I'm not sure this is the answer you are looking for, but I did some research on this before.
Let me give you a little bit of background information:
What we wanted to achieve was that Rails already flushed parts of the template tree (e.g. the first part of the layout) even when the controller action was taking a long while to load.
The effect of this is that the user already sees something in their browser while the webserver is still doing work.
Of course the main view has to wait with rendering, because it probably needs data from the controller action.
This technique is also known as BigPipe, and Facebook wrote a nice blog post about it:
http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
Anyway, after doing some research on how to achieve this for Rails 3, I found this blog post by Yehuda Katz:
http://yehudakatz.com/2010/09/07/automatic-flushing-the-rails-3-1-plan/
So for now I think you really have to stick with waiting for the controller.
Using EM.defer instead of EM.next_tick causes the sleep to happen after the response is sent back.
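The difference can be illustrated without EventMachine: next_tick runs the block on the reactor thread (so the response waits for it), while defer hands it to a worker thread pool so the response goes out first. Here a plain Thread stands in for EventMachine's thread pool; the `defer` helper and event names are illustrative.

```ruby
# Thread-based analogy for EM.defer: run the work off the "reactor" thread.
def defer(&work)
  Thread.new(&work)
end

events = []
background = defer do
  sleep 0.2            # stand-in for the 3-second sleep
  events << :work_done
end
events << :response_sent # the "view render" happens without waiting

background.join
events # => [:response_sent, :work_done]
```

With EM.next_tick the equivalent trace would be [:work_done, :response_sent], which is exactly the 3-second page delay observed in the Rails example above.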