I've built a CRUD app that allows clients to scrape links. When the client clicks a button, Rails goes to the controller and runs the script (I can see all the activity in the terminal), but there is no feedback on the frontend. Also, the user can't visit any other pages on the website while the script is running.
The script can take a long time, so I want the client to be able to click a button, be redirected to another page, and have the process start. The user can leave the page if he wants.
I would also like some sort of way to send an email to the user after it's completed.
My backend would be able to run many tasks at once, right?
You need a background worker. The idea is to initiate the work when the user clicks the button, then let the background worker do the hard work while the user continues browsing.
At the end, the user is notified (by email or other means) and the job result is accessible.
Of course, several workers can run at the same time.
Have a look at Sidekiq, Resque, or delayed_job.
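A minimal sketch of that flow with Sidekiq might look like this (the worker, scraper, and mailer names are all hypothetical):

    # app/workers/scrape_links_worker.rb
    class ScrapeLinksWorker
      include Sidekiq::Worker

      def perform(user_id, url)
        results = LinkScraper.scrape(url)   # the long-running work
        UserMailer.scrape_finished(user_id, results).deliver_now
      end
    end

    # In the controller: enqueue the job and redirect immediately,
    # so the request returns and the user can keep browsing.
    def create
      ScrapeLinksWorker.perform_async(current_user.id, params[:url])
      redirect_to dashboard_path, notice: "Scrape started; we'll email you when it's done."
    end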
You can try EventMachine, if you are running a server that supports it (Thin, for example):

    EM.next_tick { p "hello, it's next tick!" }

The string will be printed asynchronously.
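For work that takes longer than a tick, EM.defer runs the block on EventMachine's thread pool so the reactor is not blocked. A rough sketch (LinkScraper and UserMailer are hypothetical):

    # Runs the block off the reactor thread, so the request isn't blocked.
    EM.defer do
      results = LinkScraper.scrape(url)   # long-running work
      UserMailer.scrape_finished(user.id, results).deliver_now
    end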
I'm moving imports of excel files (often large, often small) into the background using Sidekiq, and alerting the user when the importing is complete using Pusher.
In the current 'synchronous' flow (a Rails app), clicking the button kicks off the excel import and then redirects to a dashboard page, which receives a notification from Pusher when the import is done.
The problem is that the Sidekiq-Pusher flow sometimes finishes faster than the redirect to the dashboard page: my JavaScript subscriber isn't initialized in time to receive the message being published from within the background process, so the client gets nothing.
Does Pusher offer a way to delay publishing a message until there is a subscriber? Or does Pusher offer a way to 'stockpile' messages until the subscriber springs to life to consume them? Or maybe there is a simpler solution here I have not thought of?
FYI, I don't want the background job to sleep for a few seconds to make sure the client is ready, and I don't want to use Pusher to trigger a refresh (i.e. save something in a DB, then refresh to display it).
I am happy to provide code samples if desired.
EDIT:
I'm certainly open to using something else besides Pusher if something else can solve my problem.
Invoke the worker a second or two in the future, so that the user has already been redirected to the landing page before the message is published:

    TestProcess.perform_at(2.seconds.from_now, parameters)
It will work as expected. Hope it helps :)
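Putting it together, a rough sketch of the delayed flow, assuming Sidekiq and the pusher gem (the worker, model, and channel names are hypothetical):

    class ExcelImportWorker
      include Sidekiq::Worker

      def perform(import_id)
        import = Import.find(import_id)
        import.process!   # the actual excel import
        Pusher.trigger("import-#{import_id}", 'completed', id: import_id)
      end
    end

    # Enqueue with a small delay so the dashboard page (and its
    # JavaScript subscriber) has loaded before the event is published:
    ExcelImportWorker.perform_at(2.seconds.from_now, import.id)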
I am building a website, and I have an administrator page. The admin will have to run a reporting task, meaning the task will iterate over all the records, fetch information, and generate a PDF file. This will be heavy on the app and the database.
What is the usual approach for this? Should I have a button that calls a method of a class, or should I have a rake task? I've heard that HTTP GET requests have a time limit, and if the report generation takes longer than that, the request is killed.
I would like to use send_data(...) so the user is given a nice download pop-up box when the report is done. Or would it be better to use a mailer and email it?
Thanks
We have similar functionality in our Rails apps at my job.
We have one URL/action that initiates the request to generate the PDF file and returns right away, saying the request was started successfully.
Then we have another action that we poll with AJAX; it returns whether or not the report is complete, and when it is, it gives the user the PDF.
The actual generation is done by a Sidekiq worker which is not subject to the webserver timeout.
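A rough sketch of that pattern (the controller, worker, Report model, and ReportPdf builder are all hypothetical):

    class ReportsController < ApplicationController
      # POST /reports -- kick off generation and return right away
      def create
        report = Report.create!(status: 'pending')
        GeneratePdfWorker.perform_async(report.id)
        render json: { id: report.id, status: report.status }
      end

      # GET /reports/:id/status -- polled via AJAX until complete
      def status
        report = Report.find(params[:id])
        render json: { status: report.status }
      end

      # GET /reports/:id/download -- once complete, stream the file
      def download
        report = Report.find(params[:id])
        send_data report.pdf, filename: 'report.pdf', type: 'application/pdf'
      end
    end

    class GeneratePdfWorker
      include Sidekiq::Worker

      def perform(report_id)
        report = Report.find(report_id)
        pdf = ReportPdf.new(report).render   # hypothetical PDF builder
        report.update!(status: 'complete', pdf: pdf)
      end
    end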
I am building an application which will send status requests to users (via email & sms) on a regular basis. I want to execute the service each hour which will:
Query the database for all requests that need to be sent (based on some logic)
Send the requests through Amazon's Simple Email Service (this is already working)
Write a record of the status request notification back to the data store
I am considering wrapping this series of operations into a single controller with an endpoint that can be called remotely to kick off the process within the Rails app.
Longer term, I will break this process out into an app that can be run independently of my rails app, but for now I'm just trying to keep it simple.
My first inclination is to build the following:
Controller with the following elements:
A method which will orchestrate the steps outlined above (and can be called externally)
A call to the status_request model which will bring back a collection of requests needing to be sent
A loop to iterate through the pending requests, which will:
Make a call to my AWS Simple Email Service module to actually send the email, and
Make a call to the status_request model to log the request back to the database
Model:
A method on my status_request model which will bring back a collection of requests that need to be sent
A method in my status_request model which will log that a notification was sent
Since this will behave as a service that gets called periodically from an outside scheduler, I don't think I'll need a view for this operation. (I will, of course, need views to show users and admins what requests have been sent, but that's for later...)
As someone new to Rails, I'm asking for review of this approach and any suggestions you may have.
Thanks!
Instead of a controller, which, as Jeff pointed out, exposes a security risk, you may just want to expose a rake task and use cron to invoke it on an hourly basis.
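A rough sketch of that approach (the task, scope, mailer, and column names are hypothetical; the mailer stands in for your existing SES module):

    # lib/tasks/status_requests.rake
    namespace :status_requests do
      desc 'Send all pending status request notifications'
      task send_pending: :environment do
        StatusRequest.pending.find_each do |request|
          SesMailer.status_request(request).deliver_now   # send via your SES module
          request.update!(notified_at: Time.current)      # log it back to the database
        end
      end
    end

    # crontab entry to invoke it hourly:
    # 0 * * * * cd /path/to/app && bin/rake status_requests:send_pending RAILS_ENV=production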
If you are still interested in building a controller, look at the Devise gem and its single-access token (token_authenticatable) for securing the methods you are exposing.
You may also want to look at delayed_job or Resque to offload the status_request call and the AWS SES loop to a background worker process.
You may want a separate controller and view for the log file so you can review progress on demand.
And if you want to get real fancy use Amazon SNS to send you alerts when the service reaches some unacceptable level of failures, backlog, etc.
Since you are trying to invoke this from an outside process, your approach should work. You could also have a worker process that processes tasks as they come in.
You will need routes to expose your service, and you may want to also make security decisions. How will the service that invokes your application authenticate so all others can't hit it at will?
Another consideration is how many emails you are sending. If there are enough, bear in mind that this sort of loop is going to be extremely top-heavy and may affect users of the current system if it's a web application.
In the end, there are many ways to do this. I would focus on the performance/usage you expect as well as security. There's never one perfect way to solve a problem like this; your solution just needs to account for the constraints it will be operating within.
Resque and Redis might be helpful to you in scheduling and performing operations. They are simple and super fast; [here](http://railscasts.com/episodes/271-resque) is a simple tutorial on the same.
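For example, a minimal Resque job for this might look like the following (the class, queue, model, and mailer names are hypothetical):

    class SendStatusRequest
      @queue = :status_requests

      def self.perform(request_id)
        request = StatusRequest.find(request_id)
        SesMailer.status_request(request).deliver_now
        request.update!(notified_at: Time.current)
      end
    end

    # Enqueue; Resque stores the job in Redis until a worker picks it up:
    Resque.enqueue(SendStatusRequest, request.id)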
I have an action that executes a "possibly" long-running task. Only "possibly", because it makes a request to a remote server, and network latency can block the user interface and introduce a small delay for the user.
My question is not about "how to send long tasks to the background", but about how to push a notification to the user. My idea is that the user clicks the button, it fires a task in the background, and the web interface is unblocked so the user can do whatever he wants; when the task is done, he receives a flash message. I can do it with AJAX, polling the server via a specific action that gives me the status of my task, for example, but is there any pattern to do it event-based? Kudos for answers with proofs of concept or prototypes.
No proof of concept here, but you could use something like spawn or delayed_job to fire off your Rails task and unblock the interface, and then communicate back to the client with node.js or something similar. Depending on what you want to do, however, long-polling may be more practical than setting up more server software.
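With delayed_job, for instance, unblocking the interface can be as small as swapping the direct call for the delay proxy (RemoteFetcher is hypothetical; a worker started with rake jobs:work performs the call in the background):

    # Returns immediately instead of blocking the request:
    RemoteFetcher.delay.fetch(params[:url], current_user.id)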
I want to use something like EventMachine websockets to push status updates to the client as they happen.
My application crawls around a section of a website, screen-scraping relevant details of a user's search. I want to push any screen-scraping captures to the client as they happen. I also want to persist these changes to the database, and I want the job to complete even if the user closes the browser.
At the moment, the job is initiated from the client (browser) and placed on a Resque queue that completes it. The client polls the database and displays the results.
I want to have a play around with websockets, but I don't think I can get the same behaviour. It is more important that the results are persisted and the job completes than that the pushes happen in real time.
Am I wrong in the assumption that this cannot be done?
Have you looked at Faye? See Messaging with Faye (RailsCasts). You can keep using the Resque queue to get the job completed, and push the message to the subscriber (your web client) as and when you find the results.
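A rough sketch of publishing to a Faye server from the Resque job over HTTP, along the lines of that RailsCast (the URL and channel are hypothetical):

    require 'net/http'
    require 'json'

    # Called from the worker each time a capture is scraped:
    message = { channel: '/scrape_results', data: capture.as_json }
    Net::HTTP.post_form(URI.parse('http://localhost:9292/faye'),
                        message: message.to_json)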