I need to create an API for my Rails project, and the API's main purpose is to start actions that will run in the background for some time (using ActiveJob).
I need the API client to be able to monitor the background job using long-polling. The client makes a request and, if the job is still running, the request waits up to some timeout before returning the result, or returns earlier in case the job has terminated in the meantime.
If I were to design a full-stack application, I'd use ActionCable to notify the client when the job terminates. But here, I don't want to expose any ActionCable WebSocket API publicly, and I want to stick with plain HTTP.
Is it possible to use ActionCable to wait for messages on a channel in a Rails controller (server-side)? Could such a controller run into a connection count limit, especially with PostgreSQL connections (which I won't be using while I'm waiting on Redis for ActionCable messages)?
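For reference, one shape this could take is subscribing to Redis directly in the controller instead of going through ActionCable. This is only a minimal sketch, assuming redis-rb's subscribe_with_timeout and a hypothetical JobResult model that the job writes when it finishes; note that the worker thread stays occupied for the whole wait, a concern the answer further down warns about:

```ruby
class JobStatusController < ApplicationController
  WAIT_TIMEOUT = 25 # seconds; keep below any proxy/load-balancer timeout

  def show
    # Fast path: the job already finished and persisted its result.
    if (result = JobResult.find_by(job_id: params[:id]))
      return render json: { status: "done", result: result.payload }
    end

    payload = nil
    # A subscribed Redis connection can't issue other commands,
    # so use a dedicated connection rather than a shared pool.
    redis = Redis.new
    begin
      redis.subscribe_with_timeout(WAIT_TIMEOUT, "job:#{params[:id]}") do |on|
        on.message do |_channel, message|
          payload = message
          redis.unsubscribe # got our answer; stop blocking
        end
      end
    rescue Redis::TimeoutError
      # redis-rb raises this when the wait lapses without a message;
      # fall through and report "still running".
    end

    if payload
      render json: { status: "done", result: payload }
    else
      render json: { status: "running" }, status: :accepted
    end
  ensure
    redis&.close
  end
end
```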
Related
I would like to be able to call a Ruby method on a Rails app while an earlier long-running request is still in flight, so the two run in parallel. I know that using a background-processing gem would make this possible; however, is there a development server that handles more than one request at a time?
I'm building an API using Rails where requests come in and need to be executed by a cluster of workers running on a different server (these workers call remote APIs, parse the data, etc.). I'm going to use Sidekiq or Resque to handle the queueing/processing of that.
My issue is that the client needs to wait while this is happening, and the controller needs to return the response once it's complete. How would I handle this in the controller? We're using a Redis backend, so I was thinking of subscribing to a pub/sub channel and waiting for the worker to publish a status message. The controller would wait for a set time period, then return a 'check back later' response if it doesn't receive a message in time. What would be the best way to implement that, or is there a better solution?
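For the publishing side of that idea, the worker can write its result somewhere durable and then publish on the channel the controller is waiting on. A rough sketch with Sidekiq, where call_remote_api and JobResult are placeholders for the actual remote work and storage:

```ruby
class RemoteApiWorker
  include Sidekiq::Worker

  def perform(job_id, url)
    data = call_remote_api(url) # placeholder for the real remote call + parsing

    # Persist first: pub/sub is fire-and-forget, and a client that wasn't
    # subscribed at this exact moment must still be able to fetch the result.
    JobResult.create!(job_id: job_id, payload: data.to_json)

    Sidekiq.redis { |conn| conn.publish("job:#{job_id}", data.to_json) }
  end
end
```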
Do not make your clients wait! There are a lot of issues if you make the controller block on a long-running job:
Other programs may assume the request timed out (proxies, browsers, scripts, etc.)
It makes your API endpoints an easy vector for denial of service
It requires you to put more engineering work into web servers (since a Rails process can't handle another web request while it's handling the blocking call)
Part of the reason for using Sidekiq or Resque is to avoid controllers that do heavy lifting during the HTTP request.
Instead, background jobs should report their status to the database, and the web server should query the database and return the latest status to the client.
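A sketch of that pattern, where Task, its status/result columns, and do_the_work are assumptions for illustration, not anything Sidekiq or Resque provides:

```ruby
class ProcessTaskJob < ApplicationJob
  queue_as :default

  def perform(task)
    task.update!(status: "running")
    task.update!(status: "done", result: do_the_work(task)) # do_the_work is a placeholder
  rescue StandardError => e
    task.update!(status: "failed", error_message: e.message)
    raise # let the queue's retry/failure handling see the error
  end
end

class TasksController < ApplicationController
  # Cheap, non-blocking endpoint the client polls for status.
  def show
    task = Task.find(params[:id])
    render json: task.slice(:status, :result, :error_message)
  end
end
```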
If clients need more immediate feedback, you can:
make clients constantly poll
POST a request to the client (if the API consumer is another web server)
use another protocol/mechanism (e.g. WebSockets).
I am implementing a JSON API using Rails. I wish to make requests to another web service using Delayed Job, to prevent them from blocking my Rails app. So far so good. I have a function defined in my model which does an HTTP POST to this other web service.
However, the other web service is an asynchronous API with callbacks. Hence I also want to receive the callbacks from this API within my delayed job.
Is this possible? Can I have an HTTP listener in my delayed job whose port number I can control or know within my code?
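In principle yes: the job runs in a normal Ruby process, so it can open a socket on a port you choose, as long as that host and port are reachable from the remote service and no two concurrent jobs fight over the same port. A sketch using the classic Delayed Job struct style and WEBrick, where the endpoint URL, host, port, and field names are all made up:

```ruby
require "webrick"
require "net/http"

AsyncApiCall = Struct.new(:payload) do
  def perform
    callback_port = 9393 # made up; must be reachable from the remote service
    result = nil

    # One-shot listener for the callback, started before firing the request.
    server = WEBrick::HTTPServer.new(
      Port: callback_port,
      AccessLog: [],
      Logger: WEBrick::Log.new(File::NULL)
    )
    server.mount_proc("/callback") do |req, res|
      result = req.body
      res.status = 200
      server.shutdown
    end
    listener = Thread.new { server.start }

    Net::HTTP.post_form(
      URI("https://other-service.example.com/jobs"), # made-up endpoint
      payload: payload,
      callback_url: "http://worker-host.example.com:#{callback_port}/callback"
    )

    listener.join(120)                 # wait up to 2 minutes for the callback
    server.shutdown if listener.alive? # timed out: stop the listener ourselves
    # ... persist or act on `result` here ...
  end
end

# Delayed::Job.enqueue AsyncApiCall.new(some_payload)
```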
I have a Rack Faye application on a Thin server, and I have some logic on every Faye client handshake/subscribe/disconnect.
This logic requires storing data in a DB, doing some calculations, and publishing messages back to some channels.
Where and how should one implement such logic in extensions without blocking Faye's main thread?
Author of Faye here. It depends on which DB you're talking to, but in general you should use a non-blocking database client (i.e. one based on EventMachine's TCP stack). This means the extension will return quickly (assuming you're not waiting on the result of the DB call to affect the incoming/outgoing message), so Faye can continue processing messages while the DB call is in progress.
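As a concrete illustration of that advice, here is a sketch of a server-side extension that records subscriptions through em-hiredis (one EventMachine-based client; em-pg-client would be the PostgreSQL analogue). The write is fire-and-forget, so the callback runs immediately and the reactor never blocks:

```ruby
require "faye"
require "em-hiredis"

class SubscriptionRecorder
  def incoming(message, callback)
    if message["channel"] == "/meta/subscribe"
      # em-hiredis returns a deferrable; we don't wait on it, so the
      # reactor is free to keep processing other messages.
      redis.sadd("subscribers:#{message['subscription']}", message["clientId"])
    end
    callback.call(message) # pass the message through right away
  end

  private

  # Lazily opened inside the running reactor; reused across messages.
  def redis
    @redis ||= EM::Hiredis.connect
  end
end

faye = Faye::RackAdapter.new(mount: "/faye", timeout: 25)
faye.add_extension(SubscriptionRecorder.new)
run faye # e.g. in config.ru under Thin
```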
I want to use something like EventMachine websockets to push status updates to the client as they happen.
My application crawls around a section of a website, screen-scraping relevant details for a user's search. I want to push any screen-scraping captures to the client as they happen. I also want to persist these captures to the database, and I want the job to complete even if the user closes the browser.
At the moment, the job is initiated from the client (browser) and placed on a Resque queue that completes it. The client polls the database and displays the results.
I want to have a play around with WebSockets, but I don't think I can get the same behaviour. It is more important that the results are persisted and the job completes than that the pushes happen in real time.
Am I wrong in the assumption that this cannot be done?
Have you looked at Faye? See Messaging with Faye (RailsCasts). You can keep using the Resque queue to get the job completed, and push a message to the subscriber (your web client) as and when you find results.
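In the RailsCast's style, the Resque job would persist each capture and then POST it to the Faye server over plain HTTP, so the push is fire-and-forget and the job still completes if nobody is watching. A sketch, where Capture, scrape_page, and the channel name are assumptions:

```ruby
require "net/http"
require "json"

class ScrapeJob
  @queue = :scraping

  def self.perform(search_id, url)
    data = scrape_page(url) # placeholder for the actual screen scraping

    # Persist first, so results survive even if the browser is closed.
    Capture.create!(search_id: search_id, data: data)

    # Then push to any browser still subscribed to this search's channel.
    message = { channel: "/searches/#{search_id}", data: data }
    begin
      Net::HTTP.post_form(URI("http://localhost:9292/faye"), message: message.to_json)
    rescue Errno::ECONNREFUSED
      # The Faye server being down must not fail the job;
      # the database row remains the source of truth.
    end
  end
end
```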