Rails - Delayed job stops running

I'm developing this application which I have deployed to OpenShift.
I have "moved" the actual user registration process to a delayed job because there are a lot of stuff taking placing during this. Every two days (or so), the delayed job process stops running.
In the logs I find this:
Error while reserving job: closed MySQL connection
I tried starting it with the following command:
RAILS_ENV=production bin/delayed_job -m start
but the problem still exists.
Any ideas?

Try adding this to your database.yml
reconnect: true
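For example, a production entry with that flag might look like this (the adapter and credentials are just placeholders):
production:
  adapter: mysql2
  database: myapp_production
  username: myapp
  password: secret
  host: localhost
  reconnect: true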
I am not sure if this will fix your problem, but it's worth trying.
Also, have a look at the MySQL documentation about lost connections.

I just had this problem (not using OpenShift). After trying the command you posted, I still had the problem. Then I restarted delayed_job like so:
RAILS_ENV=production bin/delayed_job stop
RAILS_ENV=production bin/delayed_job start
and the problem went away. In my case the problem was that delayed_job was looking for a method that no longer exists and simply needed to be restarted. Maybe this helps.
I also tried Vimsha's answer in development (not in production) and it didn't affect the result for me.


Rails Production server/console not reloading module

I am using DelayedJob for a long running task in my app. I have the job defined in a class MyJob saved in app/jobs/my_job.rb.
All was well, but I added some code to the file and restarted the server, and the changes are not being picked up. Before, it was saving one field; now it should be saving two. I also added a logger.debug line to help me, but nothing is coming up in the logs and the models aren't being saved with the new field.
This is in 'production' (though still using the Thin webserver).
Development works.
Since the folder is in the autoload path (at least if I'm not mistaken), I didn't think I needed to do anything special. But since it is not working, something must be off. Help? Let me know if you need more info.
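For reference, a minimal sketch of that kind of job class (the model and field names here are made up, not the actual code):
# app/jobs/my_job.rb
class MyJob < Struct.new(:record_id)
  def perform
    record = SomeModel.find(record_id)
    Rails.logger.debug "MyJob running for record #{record_id}"
    # save both fields now, not just one
    record.update_attributes(:first_field => "a", :second_field => "b")
  end
end
# enqueued from elsewhere in the app:
Delayed::Job.enqueue MyJob.new(some_record.id)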
Well, it turns out I had a worker that was off the radar.
I run my server in a Docker container (I should have mentioned that), so even though I did a
RAILS_ENV=production script/delayed_job restart
in both the container and the host (the container has the app from the host as a volume), a worker I probably started some other time continued on. I saw it in the logs when I went back, so I did a
kill {pid}
and that solved my problem. So Flavio was right, I just had to kill the worker myself because the script wasn't picking it up.
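In case it helps anyone else, this is the sort of thing that surfaces a stray worker (run it both on the host and inside the container):
ps aux | grep delayed_job
kill {pid}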

Clear worker cache in delayed jobs in production

I am using delayed jobs in my Rails application. It works fine, but an issue has come up on the production server. I created a class in lib and call its method from a controller to generate a CSV file through a delayed job. It worked fine when I ran the delayed jobs on the local and production servers, but then I made some changes to the class's file naming convention and restarted the delayed jobs locally and then on production. Now when I call that method through a delayed job, it sometimes follows the latest changes I made to the class and sometimes uses the old file naming logic.
What could be the issue?
Delayed job has a hidden "feature": it ignores any changes to your app and keeps using old settings, env variables, email templates, etc., because each worker process loads the app code once at boot and keeps running with that copy. You can clear every cache and restart your server, and a leftover worker will still hold onto code and data that no longer exist anywhere in your app's codebase.
delayed_job - Performs not up to date code?
Also be aware that DJ's "restart" does not always kill and restart all the workers, so you need to hunt them down and kill them all manually with
ps aux | grep delay
See: Rails + Delayed Job => email view template does not get updated
I have not yet found a "clear delayed job cache" function. If it exists, someone please post it here.
In my case, I had just spent almost 4 hours trying everything to delete failing delayed_jobs on Heroku. In case you got here trying to kill a zombie delayed_job worker on Heroku: the advice above won't work.
You can't do ps aux like you would on a regular server, nor can you run rake jobs:clear, and if you check via the Rails console you'll see the jobs there, but not in the database, so there's nothing you can do there either.
What I did was put the app in maintenance mode, make a deployment that completely uninstalled the delayed_job gem and all its references, and then make another deployment reverting that change. That cleared the zombie cache and did the trick.
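Roughly, the sequence was (the app name and branch are placeholders):
heroku maintenance:on -a myapp
# remove the delayed_job gem and all references, commit, deploy:
git push heroku master
# revert that commit and deploy again:
git push heroku master
heroku maintenance:off -a myapp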
I had a similar issue in Dokku. My solution was to remove the worker=1 entry from my DOKKU_SCALE file (so all it contained was web=1) and also to remove the worker: bundle exec rake jobs:work line from my Procfile.
I pushed that to my production server, reversed the changes above, pushed again and it was fixed.
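For clarity, these are the two lines that were temporarily removed and then restored:
# removed from DOKKU_SCALE:
worker=1
# removed from Procfile:
worker: bundle exec rake jobs:work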

Heroku Scheduler not creating log

I recently set up the Scheduler add-on and created my rake task, 'rake cron_jobs:my_task'.
When I test it with
'heroku run rake cron_jobs:my_task', it works fine.
The scheduler also claims it ran when it was supposed to, and is scheduled to run again, but there's no logging associated with the process the way https://devcenter.heroku.com/articles/scheduler#inspecting-output says there should be.
'heroku ps' shows no scheduled dynos, 'heroku logs --ps scheduler.1' has no output.
What am I missing?
Actually I was trying to solve this myself, and did not find the answer anywhere, so here it is if someone else is struggling with this:
heroku logs --tail --ps scheduler
--tail is important to keep streaming the logs.
My best guess: the heroku ps and heroku logs commands only give you status logs for currently running processes/dynos.
So after the scheduled rake task is done, you can't reach the logs through the command line.
You can access the history of your logs by using one of the logging add-ons. Most of them offer a free tier too.
They are all based on log drains, which you could also use directly if you want to build it yourself.
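If you do go the direct route, adding a drain is a single CLI command (the endpoint here is a placeholder):
heroku drains:add syslog+tls://logs.example.com:6514 -a myapp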
Here is what I do for that:
Simply include puts statements in the task itself so you know when the job started running and when it finished.
You can also include puts statements in the executed job itself.
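A minimal version of that, using the task name from the question (the body is a placeholder):
# lib/tasks/cron_jobs.rake
namespace :cron_jobs do
  desc "Scheduled task with start/finish markers in the log"
  task :my_task => :environment do
    puts "[cron_jobs:my_task] started at #{Time.now}"
    # ... the actual work goes here ...
    puts "[cron_jobs:my_task] finished at #{Time.now}"
  end
end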
I'm using the Papertrail add-on, which is a very powerful logging tool: you can search for and find any particular log entry at a specific time. You can also add an alert for when your scheduled job starts to run.
I had a similar problem (using the newer Heroku PGBackups Service) and found an unexpected explanation for it.
The rake task rake pgbackups-archive was not run by Heroku Scheduler, but it worked when I ran it manually from the command line.
In my case, the issue was caused by the different time zone used by the Heroku interface (which does not appear to be CET). My rake task, which should have run at a specific time daily, effectively never ran: I kept changing that time throughout the day for testing, and I always missed the specified time in Heroku's timezone.
You can try running the task every ten minutes and see if it works.

Delayed Jobs on Rails 2: any better way to run workers?

I finally got the DelayedJobs plugin working for Rails 2, and it does indeed work fine...as long as I run:
rake jobs:work
Just like the readme says, to be fair.
BUT, this doesn't fit my requirements... what kind of background task requires you to keep a shell open with a command running? That'd be like having to run script/server for my Rails app without the -d option, so it stops as soon as I close my shell.
Is there ANY way to keep the workers processing in the background, in daemon mode, or whatever?
I had a ray of hope when I saw this line in the readme: "You can also run by writing a simple script/job_runner, and invoking it externally." But that just does the exact same thing the rake task does; you just call it a different way.
What I want:
I want to start my rails app, then start whatever will process the workers, and have BOTH of them run invisibly in the background, without the need for me to babysit it and keep the shell that started it running.
(My server is something I SSH into, so I don't want to keep that SSH session running 24/7, especially since I like to turn off my local computer now and again.)
Is there any way to accomplish this?
You can make any *nix command run on the background by appending an & to its end:
rake jobs:work &
Just make sure you exit the shell (or use the disown command) to detach the process from your login session... Otherwise, if your session disconnects, the processes you own will be killed with it.
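A common variant that survives logging out and keeps the output is nohup (the path and log file are just examples):
cd /path/to/app && nohup rake jobs:work RAILS_ENV=production >> log/delayed_job.log 2>&1 &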
Perhaps Beanstalkd and Stalker?
Beanstalk is a fast and easy way to queue background tasks. Stalker provides a nice wrapper interface for creating these jobs.
See the RailsCast on it for more information.
Edit:
You could also run that rake task as a cron job, which would mean the server runs it periodically without you needing to be logged in.
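For example, a crontab entry along these lines (the path and schedule are placeholders); note that newer versions of delayed_job also ship a jobs:workoff task, which processes the queue and exits and so fits cron better than the long-running jobs:work:
*/10 * * * * cd /path/to/app && RAILS_ENV=production rake jobs:workoff >> log/cron_jobs.log 2>&1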
Use the collectiveidea fork of delayed_job... It's more actively developed and has support for running the jobs in a daemon without any extra messing about.
My capistrano script calls
RAILS_ENV=production script/delayed_job start
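In a Capistrano 2 deploy.rb that usually looks something like this (a sketch, not the poster's actual recipe):
# config/deploy.rb
namespace :delayed_job do
  desc "Restart the delayed_job workers after each deploy"
  task :restart, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=production script/delayed_job restart"
  end
end
after "deploy:restart", "delayed_job:restart"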

Thinking Sphinx delta indexing fails in production

Here's what I've determined:
Delta indexing works fine in development
Delta indexing does not work when I push to the production server, and no action is logged in searchd.log
I'm running Phusion Passenger, and, as recommended in the basic troubleshooting guide, have confirmed that:
www-data has permission to run indexing rake tasks (ran them from command line manually)
the path to indexer and searchd are correct (/usr/local/bin)
there are no errors in production.log
What on earth could I possibly be missing? I'm running Ruby Enterprise 1.8.6, Rails 2.3.4, Sphinx 0.9.8.1, and Thinking Sphinx 1.2.11.
Thanks!
Last night as I slept it hit me. Unsurprisingly, it was a stupid issue involving bad configuration, though I am rather surprised that it produced the results it did. I guess I don't know much about Thinking Sphinx internals.
Recently I migrated servers. sphinx.yml looked like this:
production:
  bin_path: '/usr/local/bin'
  host: mysql.mysite.com
On the new server, MySQL was just a local service, but I had forgotten to remove that line. Interestingly, manual rake reindexing still worked just fine. I'm intrigued that Thinking Sphinx didn't throw an error when trying to reload the deltas, since mysql.mysite.com no longer exists, even though that was clearly the source of the issue.
Thanks for all your help, and sorry to have brought up such a silly problem.
Are there any clues in Apache/Nginx's error log?
Here's the next troubleshooting step I would take. Open up the file for the delta indexing strategy you are using (presumably lib/thinking_sphinx/deltas/default_delta.rb). Find the line where it actually generates the indexing command. In mine (v1.1.6) it's line 20:
output = `#{config.bin_path}indexer --config #{config.config_file} #{rotate} #{delta_index_name model}`
Change that so you can log the command itself, and maybe log the output as well:
command = "#{config.bin_path}indexer --config #{config.config_file} #{rotate} #{delta_index_name model}"
RAILS_DEFAULT_LOGGER.info(command)
output = `#{command}`
RAILS_DEFAULT_LOGGER.info(output)
Deploy that to production and tail the log while modifying a delta-indexed model. Hopefully that will actually show you the problem. Of course maybe the problem is elsewhere in the code and you won't even get to this point, but this is where I would start.
I was having this problem and found the "bin_path" solution mentioned above. When it didn't seem to work, it took me a while to realize that I'd pasted in the example code for "production" when I was testing in the "staging" environment. Problem solved!
This was after making sure that the rake tasks that configure, index, and start Sphinx all run as the same user as your Passenger instance. If you log into the server as root to run these tasks, they will work in the console but not via Passenger.
I had the same problem. It works on the command line, not in the app.
Turns out that we still had a slave database that we were using for the indexing, but the slave wasn't getting updated.
As above, we faced the same issues on two machines. On the first one we had an issue with MySQL, which showed up in the apache2 log; it only seemed to affect the local OSX machine.
The second time, when we deployed to an Ubuntu server, we had the same issue. rails c in production was fine, no errors, bla bla bla.
It ended up being a permissions problem. We couldn't figure this out at first because there were no problems starting Sphinx, although I guess I was doing so as root.
Using capistrano and passenger, we did this:
Created a passenger user and added it to the www-data group
Changed the user in deploy.rb to passenger
Manually changed all the files under /current to be owned by that user
Logged in as passenger user.
Ran rake ts:rebuild RAILS_ENV="production"
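On the shell side that amounts to roughly the following (the paths are assumptions based on the steps above):
# add the passenger user to the web server group
usermod -a -G www-data passenger
# give it ownership of the current release
chown -R passenger:www-data /var/www/myapp/current
# then, as the passenger user, rebuild the index
su - passenger
cd /var/www/myapp/current
rake ts:rebuild RAILS_ENV="production"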
Worked a treat for us...
Good luck
