I'm using Rails 3.2.21 with the whenever gem. This is my crontab:
# Begin Whenever generated tasks for: abc
0 * * * * /bin/bash -l -c 'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake backup:perform --silent'
Here's the output when the scheduled job is run:
deployer@localhost:~$ ps aux | grep rake
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash -l -c
'cd /home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake
backup:perform --silent'
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash -l -c cd
/home/deployer/abc/releases/20141201171336 && RAILS_ENV=production bundle exec rake
backup:perform --silent
deployer 25631 69.2 4.4 409680 90072 ? Sl 12:00 0:06 ruby /home/deployer/abc/
shared/bundle/ruby/1.9.1/bin/rake backup:perform --silent
deployer 25704 0.0 0.0 11720 2012 pts/0 S+ 12:00 0:00 grep --color=auto rake
Notice that the top 2 processes are actually similar. Are they running the same job twice concurrently? How do I prevent that?
deployer 25593 0.0 0.0 4448 764 ? Ss 12:00 0:00 /bin/sh -c /bin/bash …
deployer 25594 0.0 0.1 12436 3040 ? S 12:00 0:00 /bin/bash …
Notice that the top 2 processes are actually similar. Are they running the same job twice concurrently?
No, they aren't. The first is the /bin/sh that started the second, i.e. the crontab command /bin/bash …. Most probably /bin/sh is simply waiting for /bin/bash … to terminate and does nothing further until it has finished; you can verify this with, e.g., strace -p 25593.
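You can also see the parent/child relationship directly with ps (PIDs taken from the listing above; adjust to whatever your own listing shows):
ps -o pid,ppid,stat,cmd -p 25593,25594
The PPID column of 25594 should point at 25593, confirming a shell/sub-shell pair rather than two independent jobs.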
Check your config/schedule.rb for a duplicate entry; if you find one, remove it and redeploy.
If there is no duplicate entry in schedule.rb, then you need to remove or comment out the duplicate directly in the crontab.
To delete or comment out cron jobs, take a look at https://help.1and1.com/hosting-c37630/scripts-and-programming-languages-c85099/cron-jobs-c37727/delete-a-cron-job-a757264.html or http://www.esrl.noaa.gov/gmd/dv/hats/cats/stations/qnxman/crontab.html
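For reference, a minimal config/schedule.rb that generates exactly one crontab entry like the one above could look like this (task name taken from the crontab; a sketch, not necessarily your actual file):
set :environment, 'production'

every :hour do              # produces the "0 * * * *" schedule shown above
  rake 'backup:perform'
end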
I have this Dockerfile (I am using miniconda only because I would like to schedule some Python scripts, but it's a debian:jessie Docker image):
FROM continuumio/miniconda:4.2.12
RUN mkdir -p /workspace
WORKDIR /workspace
ADD volume .
RUN apt-get update
RUN apt-get install -y cron
ENTRYPOINT ["/bin/sh", "/workspace/conf/entrypoint.sh"]
The script entrypoint.sh that keeps the container alive is this one:
#!/usr/bin/env bash
echo ">>> Configuring cron"
service cron start
touch /var/log/cron.log
mv /workspace/conf/root /var/spool/cron/crontabs/root
chmod +x /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root
echo ">>> Done!"
tail -f /var/log/cron.log
From the Docker documentation about supervisor (https://docs.docker.com/engine/admin/using_supervisord/) it looks like that could be an option, as could the bash-script approach (as in my example); that's why I decided to go for the bash script and to skip supervisor.
And the content of the cron file /workspace/conf/root is this:
* * * * * root echo "Hello world: $(date +%H:%M:%S)" >> /var/log/cron.log 2>&1
(with an empty line \n at the bottom)
I cannot see Hello world: $(date +%H:%M:%S) being appended to /var/log/cron.log every minute, yet as far as I can tell all the cron/crontab settings are correct.
When I check the logs of the container I can see:
>>> Configuring cron
[ ok ] Starting periodic command scheduler: cron.
>>> Done!
Also, when logging into the running container I can see the cron daemon running:
root@2330ced4daa9:/workspace# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4336 1580 ? Ss+ 13:06 0:00 /bin/sh /workspace/conf/entrypoint.sh
root 14 0.0 0.0 27592 2096 ? Ss 13:06 0:00 /usr/sbin/cron
root 36 0.0 0.0 5956 740 ? S+ 13:06 0:00 tail -f /var/log/cron.log
root 108 0.5 0.1 21948 3692 ? Ss 13:14 0:00 bash
root 114 0.0 0.1 19188 2416 ? R+ 13:14 0:00 ps aux
What am I doing wrong?
Are you sure the cron file has the right permissions?
chmod 0644 /var/spool/cron/crontabs/root
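If permissions are the culprit, resetting them and re-registering the file should make the job appear (a sketch using the paths from the question):
chmod 0644 /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root    # re-register so cron reloads the file
crontab -l                               # confirm the entry is now installed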
I've set up monit to monitor my sunspot_solr process, and it seems to work at first. If I restart the monit service with sudo service monit restart, my sunspot process starts:
ps aux | grep sunspot
root 4086 0.0 0.0 9940 1820 ? Ss 12:41 0:00 bash ./solr start -f -s /ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/solr
root 4137 45.1 4.8 1480560 185632 ? Sl 12:41 0:09 java -server -Xss256k -Xms512m -Xmx512m -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -XX:CMSFullGCsBeforeCompaction=1 -XX:CMSTriggerPermRatio=80 -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/server/logs/solr_gc.log -Djetty.port=8983 -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=UTC -Djetty.home=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/server -Dsolr.solr.home=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr/solr -Dsolr.install.dir=/ebs/staging/shared/bundle/ruby/2.3.0/gems/sunspot_solr-2.2.4/solr -jar start.jar --module=http
ubuntu 4192 0.0 0.0 10460 936 pts/3 S+ 12:41 0:00 grep --color=auto sunspot
However, I'm also running tail -f /var/logs/monit.log and see this at the same time:
[CST Mar 3 12:42:54] error : 'sunspot_solr' process is not running
[CST Mar 3 12:42:54] info : 'sunspot_solr' trying to restart
[CST Mar 3 12:42:54] info : 'sunspot_solr' start: /usr/bin/sudo
[CST Mar 3 12:43:25] error : 'sunspot_solr' failed to start
Plus, to check whether monit can actually restart the sunspot_solr process, I run sudo kill -9 <the pid>, and monit can't restart sunspot_solr:
[CST Mar 3 12:44:25] error : 'sunspot_solr' process is not running
[CST Mar 3 12:44:25] info : 'sunspot_solr' trying to restart
[CST Mar 3 12:44:25] info : 'sunspot_solr' start: /usr/bin/sudo
[CST Mar 3 12:44:55] error : 'sunspot_solr' failed to start
Obviously something is wrong with my monit-solr_sunspot.conf file, but after messing around with it for a few hours now, I'm stumped:
check process sunspot_solr with pidfile /ebs/staging/shared/pids/sunspot-solr.pid
start program = "/usr/bin/sudo -H -u root /bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr start -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
stop program = "/usr/bin/sudo -H -u root /bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr stop -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
I've adapted this monit script to suit my needs (Sample sunspot-solr.monit), but am still having no luck!
UPDATE
I've gotten monit to successfully restart sunspot_solr if I kill it; however, it still logs the error that it failed to restart in monit.log.
I think monit runs as root, so you may not want to use sudo: it can prompt for a password, and monit doesn't need it anyway.
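For example, the start/stop lines from your config could drop sudo entirely (paths copied from the question; an untested sketch):
start program = "/bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr start -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"
stop program = "/bin/bash -l -c 'cd /ebs/staging/releases/20160226191542; bundle exec sunspot-solr stop -- -p 8983 -d /ebs/staging/shared/solr/data --pid-dir=/ebs/staging/shared/pids'"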
I have configured a rake task with the whenever gem to run every 5 minutes. The task takes about 10 seconds to run locally, but on the server it never seems to finish; it stays in the process list.
ps -ef | grep some_rake_task
shows the following output:
user 22628 22626 0 09:00 ? 00:00:00 /bin/sh -c /bin/bash -l -c 'cd app_direc && rake some_rake_task'
user 22630 22628 0 09:00 ? 00:00:00 /bin/bash -l -c cd app_direc && rake some_rake_task
user 22933 22630 0 09:00 ? 00:00:00 rake some_rake_task
user 22934 22933 0 09:00 ? 00:00:04 ruby /opt/user/app/shared/bundle/ruby/2.1.0/bin/rake some_rake_task
user 25261 25260 0 08:00 ? 00:00:00 /bin/sh -c /bin/bash -l -c 'cd app_direc && rake some_rake_task'
user 25263 25261 0 08:00 ? 00:00:00 /bin/bash -l -c cd app_direc && rake some_rake_task
user 25570 25263 0 08:00 ? 00:00:00 rake some_rake_task
user 25571 25570 0 08:00 ? 00:00:04 ruby /opt/user/app/shared/bundle/ruby/2.1.0/bin/rake some_rake_task
user 26570 26569 0 07:00 ? 00:00:00 /bin/sh -c /bin/bash -l -c 'cd app_direc && rake some_rake_task'
user 26573 26570 0 07:00 ? 00:00:00 /bin/bash -l -c cd app_direc && rake some_rake_task
user 26879 26573 0 07:00 ? 00:00:00 rake some_rake_task
user 26880 26879 0 07:00 ? 00:00:04 ruby /opt/user/app/shared/bundle/ruby/2.1.0/bin/rake some_rake_task
user 30915 30691 0 09:16 pts/2 00:00:00 grep --color=auto some_rake_task
I tried to kill the processes manually, but every task initiated by cron stays alive and never ends. Any suggestions, please?
My rake task code:
namespace :some_rake_task do
  desc "Send the email with content"
  task :send => :environment do
    User.find_in_batches do |users|   # find_in_batches yields arrays of users
      users.each do |user|
        SendMailer.newsletter_email(user).deliver
      end
    end
  end
end
But there are fewer than 1000 users.
Crond detaches child processes, so when you kill the crond process, spawned tasks will continue their work.
Judging by the output of ps -ef, there is no <defunct> next to the rake task processes, so I conclude that they are alive and running.
Your rake tasks either enter an endless loop or simply take much more time than you expect. The second case can be caused by a slow network connection (for example if you call some external API), a poorly optimized SQL query or algorithm, a database that has grown too large, or something along those lines.
Advice: fix your rake task or crontab entry so that a new run is skipped if the same task is already running, as sketched below.
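One common way to do that is a non-blocking lock inside the task itself; here is a sketch (the lock file path and the skip message are assumptions, adjust to your app):
namespace :some_rake_task do
  desc "Send the email with content, skipping the run if a previous one is still going"
  task :send => :environment do
    # Lock file location is an assumption; any writable path works.
    lock = File.open(Rails.root.join("tmp", "some_rake_task.lock"), File::RDWR | File::CREAT)
    unless lock.flock(File::LOCK_EX | File::LOCK_NB)
      puts "Previous run still in progress, skipping"
      next
    end
    begin
      User.find_in_batches do |users|
        users.each { |user| SendMailer.newsletter_email(user).deliver }
      end
    ensure
      lock.flock(File::LOCK_UN)
      lock.close
    end
  end
end
Alternatively, the crontab entry itself can be wrapped in flock(1), which gives the same guarantee without touching the task.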
I'm stumped on this one. I am using the capistrano and whenever gems to manage my builds to prod. Cron is being set up correctly on prod. When I look at crontab -e I see...
# Begin Whenever generated tasks for: nso
10 * * * * /bin/bash -l -c 'cd /home/user/nso/releases/20140130161552 && RAILS_ENV=production bundle exec rake send_reminder_emails --silent >> /var/log/syslog 2>&1'
...this looks correct. In /var/log/syslog I see...
Feb 3 16:10:01 vweb-nso CRON[32186]: (user) CMD (/bin/bash -l -c 'cd /home/user/nso/releases/20140130161552 && RAILS_ENV=production bundle exec rake send_reminder_emails --silent >> /var/log/syslog 2>&1')
Feb 3 16:10:01 vweb-nso postfix/pickup[31636]: 8A67B805C7: uid=1001 from=<user>
Feb 3 16:10:01 vweb-nso postfix/cleanup[32191]: 8A67B805C7: message-id=<20140203211001.8A67B805C7@vweb-nso>
Feb 3 16:10:01 vweb-nso postfix/qmgr[31637]: 8A67B805C7: from=<user@abtech.edu>, size=712, nrcpt=1 (queue active)
Feb 3 16:10:01 vweb-nso postfix/local[32193]: 8A67B805C7: to=<user@abtech.edu>, orig_to=<user>, relay=local, delay=0.03, delays=0.02/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Feb 3 16:10:01 vweb-nso postfix/qmgr[31637]: 8A67B805C7: removed
Everything looks OK there, no?
Additionally, I can manually run the command. At the command prompt, if I run:
/bin/bash -l -c 'cd /home/user/nso/releases/20140130161552 && RAILS_ENV=production bundle exec rake send_reminder_emails --silent >> /var/log/syslog 2>&1'
...I get my reminder emails as I should. What am I missing?
So this ended up being my issue...
Rails cron whenever, bundle: command not found
Once I added env :PATH, ENV['PATH'] to the top of config/schedule.rb, everything worked as advertised!
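In other words, the top of config/schedule.rb ends up looking something like this (the output and job lines are just my reading of the crontab entry above, shown for context):
# config/schedule.rb
env :PATH, ENV['PATH']          # the fix: make cron's PATH match the deploy user's

set :output, '/var/log/syslog'  # produces the ">> /var/log/syslog 2>&1" redirection

every '10 * * * *' do
  rake 'send_reminder_emails'
end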
So I have delayed_job installed on a production app. It runs fine via rake jobs:work. But when I try to start the script via capistrano:
run "if [ -d #{current_path} ]; then cd #{current_path} && RAILS_ENV=#{rails_env} script/delayed_job start -n 2; fi"
It starts without errors. But if I check script/delayed_job status, it tells me that no instances are running. Any suggestions?
Edit
Looks like there is something running (via sudo ps aux | grep delayed):
1000 7952 0.0 0.1 112312 832 pts/0 S+ 16:17 0:00 grep delayed
Output when I run the script:
/path/to/latest/release/config/initializers/bypass_ssl_verification_for_open_uri.rb:2: warning: already initialized constant VERIFY_PEER
Check the permissions on your shared/tmp/pid folder.
Delayed Job won't run unless the user that capistrano runs as has permission to write the PID file into that folder.
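A quick way to check (the release path here is just the placeholder from the warning above; the pids directory under the app root is an assumption based on delayed_job's default):
ls -ld /path/to/latest/release/tmp/pids    # is it owned/writable by the deploy user?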
This is how I start the delayed_job daemon using capistrano; maybe this will also work for you:
require "delayed/recipes"
%w[start stop restart].each do |command|
after "deploy:#{command}", "delayed_job:#{command}"
end
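(This snippet goes in config/deploy.rb; delayed/recipes ships with the delayed_job gem and defines the delayed_job:start/stop/restart tasks that the hooks above invoke.)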
The output of ps aux | grep delayed only shows the grep process itself, so DJ is not running on your machine. Maybe this has something to do with your if-clause. You could try removing it and checking again with ps aux | grep whether it starts correctly; the output should then look something like this:
username 9989 0.0 0.0 7640 892 pts/0 S+ 10:54 0:00 grep delayed
username 10048 0.0 9.4 288244 99156 ? Sl Jan22 2:16 delayed_job
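For example, the capistrano call from the question without the guard would simply be (a sketch, same variables as above):
run "cd #{current_path} && RAILS_ENV=#{rails_env} script/delayed_job start -n 2"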