I'm trying to start a background process that executes some code. When I start the process, my app freezes until the background task is done. I'm using SuckerPunch to work around exactly this freezing, yet the app still waits for the process to end. Do I have the wrong expectations? How can I solve this?
app/controller/mycontroller:
MyJob.perform_async(data_array)
app/jobs/myjob:
class MyJob
  include SuckerPunch::Job
  workers 1

  def perform(data)
    # my code, which takes around 20 seconds to execute
  end
end
The idea is to have a page that kicks off a parallel process and at the same time shows a loading .gif or a JS game, because the wait is so long.
So that's basically the idea: enter a controller that launches a parallel process that takes a long time, and meanwhile display a page with something to entertain the user. By AJAX, the page constantly checks whether the parallel process is done, and once it is, displays a button to continue to the next page.
Consider that it is done this way because the process itself takes over 10 minutes, so the server's timeout kills the process before it completes. With this idea, the request finishes when wait.html.erb is displayed, while the parallel process stays alive.
Also consider that I haven't built anything yet, because I don't know whether the idea is even possible. So my question is: is it possible, and how would it be done?
You could simply use a background-processing worker (Sidekiq in this example):
class HardWorker
  include Sidekiq::Worker

  def perform(user_id)
    VeryLongProcess.run
    notify(user_id)
  end
end
And the notify method could either push a notification to the user (using WebSockets, for example) or just store a flag in Redis or another DB indicating that the job has completed. That flag could in turn be read by your AJAX call.
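The flag-in-a-store approach can be sketched like this. A plain in-memory Hash guarded by a Mutex stands in for Redis so the sketch is self-contained; with redis-rb you would use `redis.set`/`redis.get` instead. `mark_done` and `job_done?` are hypothetical helper names, not part of the original answer.

```ruby
# In-memory stand-in for Redis (assumption: real code would use redis-rb).
JOB_STATUS = {}
JOB_STATUS_LOCK = Mutex.new

# Called by the worker's notify step once VeryLongProcess.run finishes.
def mark_done(user_id)
  JOB_STATUS_LOCK.synchronize { JOB_STATUS[user_id] = "done" }
end

# Polled by the AJAX endpoint; returns true once the job has completed.
def job_done?(user_id)
  JOB_STATUS_LOCK.synchronize { JOB_STATUS[user_id] == "done" }
end
```

The AJAX handler keeps polling `job_done?` and, once it returns true, the page swaps the loading .gif for the continue button.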
As described here, Passenger will fork my process, and I'll need to revive my background thread. I do that, and it usually works, but sometimes my process gets terminated before my background thread is finished. All I'm doing on that background thread is queuing a bunch of data so I can send it more efficiently in a bigger chunk. I just asked a similar question, but I have a new idea: I'm wondering if I can register for a callback similar to this one:
PhusionPassenger.on_event(:starting_worker_process) do |forked|...
but instead of on the :starting_worker_process event, I want to get notified that my process is about to be terminated so I can quickly flush my buffer and get out. Is there such an event?
Kernel provides #at_exit which can be used for this.
at_exit do
# Cleanup
end
From Passenger's source code, it looks like there is an event called :stopping_worker_process. I haven't tested this, though.
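A hedged sketch combining both answers: always register the flush via Kernel#at_exit, and additionally via the :stopping_worker_process event when running under Passenger. As noted above, that event name comes from reading Passenger's source and is untested, so treat it as an assumption; since both hooks may fire in the same process, the flush block should be idempotent.

```ruby
# Register a shutdown hook. :stopping_worker_process is an assumption taken
# from Passenger's source (untested, per the answer above); Kernel#at_exit
# is the portable fallback. Make the flush idempotent, since both hooks
# may fire in the same process.
def register_shutdown_flush(&flush)
  if defined?(PhusionPassenger)
    PhusionPassenger.on_event(:stopping_worker_process, &flush)
  end
  at_exit(&flush)
end
```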
I am creating a user-defined thread library. I use the round-robin scheduling algorithm and the context-switching method, but I don't know what to do when a thread finishes its execution before its allotted time slot ends; the program terminates. I actually want to reschedule all the threads by calling the schedule function when the current thread terminates.
I found two ways to overcome this problem.
By calling explicitly thread_exit function at the end of the function that is being executed by the current thread.
By changing the stack contents such that the thread_exit function gets executed after the current function gets terminated.
But I am unable to figure out how to apply these solutions. Any help would be appreciated.
It sounds like you have a bit of a design flaw. If I'm understanding you correctly, you're trying to implement a solution where you have threads that can be allocated to perform some task and after the task is complete, the thread goes idle waiting for the next task.
If that's true, I think I would design something like a daemon process or service with a queue for incoming tasks, a pool of threads responsible for executing the tasks, and a controller that listens for new tasks.
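The queue-plus-pool design above can be sketched as follows. Ruby is used here for brevity, though the question concerns a C-level thread library, so treat this as the shape of the design rather than an implementation; the `WorkerPool` name and sentinel-object shutdown are my own choices.

```ruby
# Fixed pool of threads pulling tasks from a shared queue. Workers block
# (go idle) on Queue#pop between tasks; a sentinel object shuts them down.
class WorkerPool
  STOP = Object.new  # sentinel: tells a worker to exit its loop

  def initialize(size)
    @queue = Queue.new
    @threads = Array.new(size) do
      Thread.new do
        loop do
          task = @queue.pop              # blocks while the worker is idle
          break if task.equal?(STOP)
          task.call
        end
      end
    end
  end

  def submit(&task)
    @queue << task
  end

  def shutdown
    @threads.size.times { @queue << STOP }
    @threads.each(&:join)
  end
end
```

Usage: `pool = WorkerPool.new(4); pool.submit { do_work }; pool.shutdown`.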
I've got a Sinatra/Rails app with an action that starts some long process.
Ordinarily I would set up a queue for background jobs. But this case is so simple, and the background process starts so rarely, that a queue is overhead.
So how could I run background process without queue?
get "/build_logs/:project" do
  LogBuilder.new(params[:project]).generate
  "done"
end
I've tried to make it as a new Thread or Process fork, but it didn't help.
I have had success with this (simplified) in Sinatra:
get '/start_process' do
  @@pid = Process.spawn('external_command_to_run')
end
This returns the process ID, which you can use to terminate the process later if you need to. Also, this is on Linux; it will not work on Windows.
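One caveat worth adding (my assumption, not part of the original answer): without Process.detach or a wait, the finished child lingers as a zombie until the Sinatra process exits. A minimal sketch, with a short Ruby one-liner standing in for the external command:

```ruby
require "rbconfig"

# Spawn the external command (a short Ruby one-liner stands in here),
# then detach so the child is reaped automatically when it exits.
pid = Process.spawn(RbConfig.ruby, "-e", "sleep 0.1")
waiter = Process.detach(pid)   # a Thread that waits on the child
# ... the request handler can return immediately; `pid` can be stored
# (e.g. in @@pid) so you can Process.kill it later if needed ...
```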
I have a backgroundrb scheduled task that takes quite a long time to run. However, the process seems to end after only 2.5 minutes.
My background.yml file:
:schedules:
  :named_worker:
    :task_name:
      :trigger_args: 0 0 12 * * * *
      :data: input_data
I have zero activity on the server when the process is running. (Meaning I am the only one on the server watching the log files do their thing until the process suddenly stops.)
Any ideas?
There's not much information here that allows us to get to the bottom of the problem.
Because backgroundrb operates in the background, it can be quite hard to monitor/debug.
Here are some ideas I use:
- Write a unit test for the worker code itself to make sure there are no problems there.
- Put "puts" statements at multiple points in the code so you can at least see some progress while the worker is running.
- Wrap the entire worker in a begin..rescue..end block so you can catch any error that might be occurring and cutting the process short.
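The last two tips can be sketched together like this; `do_long_task` is a hypothetical stand-in for the real worker body:

```ruby
# Progress markers plus a guarded body, so an error that would otherwise
# silently kill the worker is logged first. `do_long_task` is hypothetical.
def perform_with_logging(data)
  puts "worker started with #{data.inspect}"
  begin
    do_long_task(data)
  rescue => e
    puts "worker failed: #{e.class}: #{e.message}"
    puts e.backtrace.join("\n")
    raise   # re-raise after logging so the failure is still visible
  end
  puts "worker finished"
end
```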
Thanks, Andrew. Those debugging tips helped, especially the begin..rescue..end block.
It was still a pain to debug, though. In the end it wasn't backgroundrb cutting the job short after 2.5 minutes: a network connection was being opened but never closed properly. Once that was found and closed, everything worked great.