I'm developing a chat web app, so of course I need some kind of a "push" method to post something to the view when an event happens in the background. I've decided to stick with long-polling for the moment, because I'm fairly new to web development and don't have a lot of time to learn a better way to push new information to the view, and it seems to work just fine. So, on to my question: on the server side I have a method containing a never-ending while loop with no sleep in it, so if anything happens (e.g. the user gets a new message) I can post it to the view in real time. With a few users it works fine, but what will happen to the server if a lot of users start to use it? Will it crash?
Code sample:
def update() {
    def response = null
    while (response == null) {
        // busy-wait: checks for updates on every pass, with no sleep
        if (updateAvailable()) {       // placeholder for the real update check
            response = buildResponse() // placeholder: set up the response
        }
    }
    return response
}
As you suspect, I think the approach you're using could end up starving the container of request threads if lots of people are simultaneously using your application.
A better way to do this might be to run a JavaScript timer in the client browser that submits an AJAX request to your update() method every few seconds. That way your request thread is returned to the pool after each check. Take a look at the JavaScript setTimeout method, and at the couple of Stack Overflow questions about updating a progress bar using setTimeout.
Alternatively, you could set up an ajax 'push' (from server -> client) using the excellent Grails Atmosphere plugin.
I have a very weird situation: I have a system where a client app (Client) makes an HTTP GET call to my Rails server, and that controller does some handling and then needs to make a separate call to the Client via a different pathway (i.e. it actually goes via Rabbit to a proxy and the proxy calls the Client). I can't change the pathway for that different call and I can't change the Client at all (it's a 3rd party system).
However, the issue is that the call via the different pathway fails unless the HTTP GET from the Client has completed.
So I'm trying to figure out: is there a way to have Rails finish the HTTP GET response and then make this additional call?
I've tried:
1) after_filter: this doesn't work because the after filter is apparently still within the Request/Response cycle so the TCP/HTTP response back to the Client hasn't completed.
2) enqueuing a worker: this works, but it is not ideal because if the workers are backed up, this callback to the Client may not happen right away, and it really needs to happen right after the Client calls the Rails app
3) starting a separate thread: this may work, but it makes me nervous: adding threading explicitly in Rails could be fraught with peril.
I welcome any ideas/suggestions.
Again, in short, the goal is: process the HTTP GET call to the Rails app, return a 200 OK back to the Client to completely finish the HTTP request/response cycle, and only then run some extra code.
I can provide further details if that would help. I've found both #1 and #2 recommended elsewhere, but neither is quite what I need.
Ideally, there would be some "after_response" callback in Rails that allows some code to run but after the full request/response cycle is done.
Possibly use an around filter? Around filters let us define methods that wrap around every action Rails calls. So with an around filter on the above controller I could control the execution of every action: run code before calling the action and after calling it, and even skip calling the action entirely under certain circumstances.
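The wrap-around behaviour can be sketched in plain Ruby, without Rails; all names here are made up for illustration. The key point (and why this doesn't solve the question) is that the code after `yield` still runs inside the request/response cycle, before the response is sent:

```ruby
# plain-Ruby sketch of what an around filter does; no Rails here,
# and all names are hypothetical
def around_filter
  trace = ["before action"]
  trace << yield  # run the controller action (render happens here)
  trace << "after action: response built, but NOT yet sent to the Client"
  trace
end

trace = around_filter { "action ran" }
puts trace
```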
So what I ended up doing was using a gem that I had long ago helped with: Spawnling
It turns out this works well, although it required a tweak to get it working with Rails 3.2. It allows me to spawn a thread to make the extra, out-of-band callback to the Client while letting the normal controller process complete. And I don't have to worry about thread management or ActiveRecord connection management; Spawnling handles that.
It's still not ideal, but pretty close. And it's slightly better than enqueuing a Resque/Sidekiq worker as there's no risk of worker backlog causing an unexpected delay.
I still wish there was an "after_response_sent" callback or something, but I guess this is too unusual a request.
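A minimal plain-Ruby sketch of the idea (Spawnling essentially wraps this, plus the ActiveRecord connection handling; the strings and helper structure here are made up):

```ruby
# sketch: spawn the out-of-band callback in a background thread so the
# controller can finish its response immediately
def handle_request
  response = "200 OK"        # the normal controller response
  worker = Thread.new do
    # hypothetical out-of-band callback to the Client
    sleep 0.01               # simulated callback latency
    "callback sent"
  end
  [response, worker]
end

response, worker = handle_request
puts response       # available right away, before the callback finishes
puts worker.value   # the background callback's result (joins the thread)
```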
Hey, how do I make a notification system like Facebook or Diaspora in Rails?
I tried building an activity feed, but that wasn't what I wanted; I want exactly the same feature those websites have.
I have a simple app where there are two types of users, buyers and sellers.
I want to notify the seller whenever a buyer comments on one of their products.
What you are looking for here is a server-push implementation: when some notification/action happens on the server, it pushes a notification out to the client. The difference from #manju's answer is that his solution is based on the client's browser calling the server periodically to check for new notifications.
There are two main ways to do this.
1 - Using a third-party SaaS solution (the easy way, but it costs money ;))
Firebase allows you to send push notifications to clients.
Pusher is another provider offering the same kind of functionality.
Read their documentation; each of them normally has a gem you can easily integrate into your Rails app.
2 - Implement your own push server
You can implement your own push server with Rails and integrate it into your app.
Faye is one option.
But the more exciting thing is that Rails 5 will ship with Action Cable, which tries to solve the same problem: action cable gem
There are also articles showing Action Cable with Rails 4 apps (you don't have to wait until Rails 5 comes out), but I haven't used it personally yet.
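Stripped of the transport details, the push idea in all of these options is just a publish/subscribe hookup: the seller subscribes, and the comment event publishes to them. A minimal in-process sketch (class and method names are made up; Action Cable, Faye, and Pusher add the actual network transport on top of this pattern):

```ruby
# minimal in-process publish/subscribe sketch; names are hypothetical
class NotificationHub
  def initialize
    @subscribers = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(user, &handler)
    @subscribers[user] << handler   # remember who wants to be notified
  end

  def publish(user, message)
    @subscribers[user].each { |h| h.call(message) }  # push to each handler
  end
end

hub = NotificationHub.new
inbox = []
hub.subscribe("seller_1") { |msg| inbox << msg }

# when a buyer comments on one of seller_1's products:
hub.publish("seller_1", "A buyer commented on your product")
puts inbox.first
```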
HTH
Facebook does it using comet techniques.
Here are some of the helpful links
Link1
Link2
Link3
Here is the theory of how Facebook does it:
Facebook works by long-polling the server for any changes.
The Facebook page makes an AJAX request to the server, and that AJAX request has a long timeout.
On the server side, the API handler constantly polls the DB server to see whether anything has changed, by repeatedly checking the activity-log table in the database. If a change is found it returns the result; until then it keeps polling the DB.
Once the AJAX request completes, the client immediately issues it again.
Here is a code snippet - Client side
function doPoll() {
    $.get("events.php", {}, function(result) {
        $.each(result.events, function(index, event) { // iterate over the events
            // do something with your event
        });
        doPoll(); // this effectively causes the poll to run again
                  // as soon as the response comes back
    }, 'json');
}

$(document).ready(function() {
    $.ajaxSetup({
        timeout: 1000 * 60 // set a global AJAX timeout of a minute
    });
    doPoll(); // do the first poll
});
Here is a code-snippet in server side:
while (!has_event_happened()) {
    sleep(5);
}
echo json_encode(get_events());
You can find this explained in much more detail here.
You can adapt this approach to your needs.
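The same server-side long-poll loop in Ruby terms; `event_store` stands in for the activity-log table, and the intervals are shortened so the sketch runs quickly:

```ruby
require "json"

# long-poll loop mirroring the PHP snippet above: wait until an event
# appears (or we give up), then return the events as JSON
def long_poll(event_store, max_checks: 50, interval: 0.01)
  max_checks.times do
    break unless event_store.empty?   # an event arrived: stop waiting
    sleep interval                    # nothing yet: wait, then poll again
  end
  JSON.generate(event_store)
end

events = []
Thread.new { sleep 0.02; events << "new message" }  # simulated event
puts long_poll(events)
```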
Is it possible (I assume it is a common need) to cancel a request, and with it a method that the request already started on the server?
Say I ask my Rails app to run a video search on YouTube or Vimeo (the search is implemented on the server, which makes further requests directly to Vimeo or YouTube), but then I decide to cancel the search so I can start a new one, without wasting resources on already-useless search results. All requests are made via AJAX.
I think I should:
either define a global variable (e.g. cancel_req = 4939498348953): each time I want to cancel, I set it to the specific ID known to the method I want to cancel, and otherwise leave it nil. Then at certain points in the code I just check this variable (but what if the 3rd-party API call is blocking with a very long duration because it returns a lot of data?)
or introduce Redis pub/sub (which I think is a bit over-engineered for this task)
But both methods sound to me like workarounds. Is there a better way to cancel these long-running methods, like requests to 3rd-party APIs, in my own solution?
Update:
Is it somehow possible in Rails with callbacks? Maybe with yield?
Update2:
The workflow is the following:
client -> webserver -> rails-app-server (controller/helper) --->
                                                                |
                                                           foreignAPI   <-- break execution somewhere here
                                                                |            (but of course still handle all other
                                                                |             client requests, so: not exiting the app)
client <- webserver <- rails-app-server (controller/helper) <---
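The global-flag idea (option 1 above) amounts to checking a shared cancellation flag between units of work. A plain-Ruby sketch with made-up names; note it can only cancel *between* chunks, never inside a single blocking 3rd-party call:

```ruby
# shared cancellation flags, keyed by search id; the Mutex guards
# access from other threads (e.g. the controller handling the cancel)
CANCELLED = {}
LOCK = Mutex.new

def run_search(id)
  results = []
  5.times do |page|
    # the flag is only consulted between chunks of work -- a single
    # blocking 3rd-party API call cannot be interrupted this way
    return :cancelled if LOCK.synchronize { CANCELLED[id] }
    results << "page #{page}"   # stand-in for one chunk of API work
  end
  results
end

LOCK.synchronize { CANCELLED[42] = true }   # the client cancelled search 42
p run_search(42)
```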
It seems like your problem is simply blocking calls. Consider delayed_job to run asynchronous tasks--in your case, these long API calls. You can set the configuration Delayed::Worker.max_run_time = 10.seconds (or whatever time) as the limit you're willing to wait, and there are callbacks for pretty much every event you can think of.
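If delayed_job isn't an option, the same bounded-run-time idea can be sketched with Ruby's stdlib Timeout module; the slow 3rd-party API call is simulated here with a sleep:

```ruby
require "timeout"

# cap how long we're willing to wait for a 3rd-party search;
# the sleep stands in for the slow external API call
def search_with_limit(query, limit_seconds)
  Timeout.timeout(limit_seconds) do
    sleep 0.01                  # simulated API latency
    "results for #{query}"
  end
rescue Timeout::Error
  nil                           # treat an over-limit search as cancelled
end

puts search_with_limit("cats", 1)     # fast enough: returns results
p search_with_limit("dogs", 0.001)    # too slow: cancelled, returns nil
```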
Today I discovered the Servlet 3.0 asynchronous facility. I have read about it and I think I understand the concept.
I was wondering: would it make any difference on "standard" controller actions, or should it be reserved for web services or computationally expensive processes?
In other words, is it a bad idea to use it on all of one's controller actions, without considering the computation time of each action method beforehand?
If it is, could you explain to me why?
Thank you in advance.
No, this would be a bad idea.
On a controller action, you get a request and you want to serve a response as soon as possible. You should use asynchronous processing only for things that can be delayed.
If a user is requesting a page on your website, you can't respond with an empty page and then push an update to it later. I would use this feature only for AJAX requests, and not even all of them. You have to decide what makes sense to run asynchronously and what doesn't.
You should read the Grails documentation for Asynchronous Request Handling
In general for controller actions that execute quickly there is little benefit in handling requests asynchronously. However, for long running controller actions it is extremely beneficial.
The reason being that with an asynchronous / non-blocking response, the one thread == one request == one response relationship is broken. The container can keep a client response open and active, and at the same time return the thread back to the container to deal with another request, improving scalability.
Hopefully this should be clear enough, but please ask if something is not clear.
I am trying to use jQuery's .ajax functionality to make a progress bar.
A request is submitted via .ajax, which starts a long-running process. Once submitted, another .ajax request is called on an interval to check the progress of this process. A progress meter is then updated using this information.
However, the progress .ajax call only returns once the long-running process has completed. It's as if it's being blocked by the initial request.
The weird thing is that this process works fine on dev but is failing on the deployment server. I am running on IIS using ASP.NET MVC.
Update: Apparently it is browser-related, because it works fine on IE 7 but does not work on IE 8. This is strange because IE 8 allows up to 6 connections on broadband whereas IE 7 only allows 2 requests per domain.
Update2: I think it's a local issue because it appears to be working fine on another IE 8 machine.
The server will only run one page at a time from each user. When you send the requests to get the progress status, they will be queued.
The solution is to make the page that returns the status sessionless, using EnableSessionState="false" in the @ Page directive. That way it isn't associated with any user's session, so the request isn't queued.
This of course means that you can't use session state to communicate the progress state from the thread running the process to the thread getting the status. You have to use a different way of keeping track of running processes and send some identifier along with the requests that gets the status so that you know which user it came from.
Some browsers (in particular, IE) only allow two requests to the same domain at the same time. If there are any other requests happening at the same time, you might run into this limitation. One way around it is to have a few different aliases for the domain (some sites use "www1.example.com" and "www2.example.com", etc).
You should be able to use Firebug or Fiddler to determine how many requests are in progress, etc.
Create an asynchronous handler (IHttpAsyncHandler) for your second AJAX request.
Pass any parameters required via the .ashx query string, because the HttpContext won't have what you need; you'll barely have access to anything beyond the Application object.
Behind the scenes ASP.NET will give you a thread from the CLR thread pool rather than the application pool, so you'll get an extra performance gain with IHttpAsyncHandler.