I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP.
The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly.
This is the cron job:
PATH=/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb
This is datadump.rb:
#!/usr/bin/ruby
require 'yaml'
require 'logger'
require 'rubygems'
require 'net/ssh'
require 'net/sftp'
APP = '/home/deploy/apps/myapp/current'
LOGFILE = '/home/deploy/log/data.log'
TIMESTAMP = '%Y%m%d-%H%M'
TABLES = 'table1 table2'
log = Logger.new(LOGFILE, 5, 10 * 1024)
dump = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz"
ftpconfig = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml'))
config = YAML::load(open(APP + '/config/database.yml'))['production']
cmd = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"
log.info 'Getting ready to create a backup'
`#{cmd}`
# Strongspace
log.info 'Backup created, starting the transfer to Strongspace'
Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh|
ssh.sftp.connect do |sftp|
sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle|
sftp.write(handle, open("#{dump}").read)
end
end
end
log.info 'Finished transferring backup to Strongspace'
log.info 'Removing local file'
cmd = "rm -f #{dump}"
log.debug "Executing: #{cmd}"
`#{cmd}`
log.info 'Local file removed'
I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment.
What could be causing the cron job to fail?
When scripts run correctly when invoked interactively but fail under cron, the cause is usually the environment settings in place ... for example the PATH, as already mentioned by @Ted Percival, but it may be other environment variables as well.
This is because cron will not invoke .bash_profile, .bashrc or /etc/profile before executing.
The best way to avoid this is to ensure any scripts invoked by cron make no assumptions about the environment they execute in. Overcoming this can be as simple as including a few lines in your script to make sure the environment is set up properly. For example, in my case I keep all the significant settings in /etc/profile (on RHEL), so I include the following line in any shell script to be run under cron:
source /etc/profile
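Since the job in the question is a Ruby script rather than a shell script, a rough equivalent is to pin down the environment at the top of datadump.rb itself (a minimal sketch; the PATH value and directory are only examples, adjust to your system):

# Cron provides a very sparse environment; set what the script needs explicitly.
ENV['PATH'] = '/usr/local/bin:/usr/bin:/bin'
Dir.chdir('/home/deploy')  # run from a known, writable working directory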
Looks like your PATH is missing a few directories, most importantly /bin (for /bin/rm). Here's what my system's /etc/crontab uses:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
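Applied to the crontab from the question, that would look like this:

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb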
Are you sure the temporary file is being created correctly when running as a cron job? The working directory for your script will be either the directory specified in the HOME environment variable or the one given in the /etc/passwd entry for the user that installed the cron job. If deploy does not have write permission for the directory the script is executing in, you could specify an absolute path for the dump file to fix the problem.
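For example (a sketch; /home/deploy/backups is just an assumed directory, use any location the deploy user can write to):

BACKUP_DIR = '/home/deploy/backups'  # assumed location; must be writable by deploy
dump = File.join(BACKUP_DIR, "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz")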
Is cron emailing you the job's output?
If not, redirect the output of the cron job to a log file.
Make sure to redirect STDERR to the log as well.
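For example (the log path here is just an illustration):

10 3 * * * ruby /home/deploy/bin/datadump.rb >> /home/deploy/log/cron.log 2>&1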
Related
I have a Rails runner task that I want to run from cron, but of course cron runs as root, so the environment isn't set up in a way that lets RVM work properly. I've tried a number of things and none have worked so far. The crontab entry is:
* 0 * * * root cd /home/deploy/rails_apps/supercharger/current/ && /usr/local/rvm/wrappers/ruby-1.9.3-p484/ruby bundle exec rails runner -e production "Charger.start"
Apologies for the super long command line. Anyhow, the error I'm getting from this is:
ruby: No such file or directory -- bundle (LoadError)
So ruby is being found in the RVM directory, but again, the environment is wrong.
I tried rvm alias delete [alias_name] and it seemed to do something, but darn if I know where the wrapper it generated went. I looked in /usr/local/rvm/wrappers and didn't see one with the name I had specified.
This seems like a common problem -- common enough that the whenever gem exists. The runner command I'm using is so simple, it seemed like a slam dunk to just put this entry in the crontab and go, but not so much...
Any help with this is appreciated.
It sounds like you could use a third-party tool to tether your Rails app to cron: Whenever. You already know about it, but it seems you never tried it. This gem includes a simple DSL that could be applied in your case like:
every :day do # Or specify another period, or something else; see the README
  runner "Charger.start"
end
Once you've defined your schedule, you'll need to write it into the crontab with the whenever command-line utility. See the README and whenever --help for details.
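For example (running whenever with no arguments only prints the generated crontab so you can inspect it; --update-crontab actually writes it):

cd /home/deploy/rails_apps/supercharger/current
whenever                    # print the crontab entries it would generate
whenever --update-crontab   # write them into your crontab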
It should not cause any performance impact at runtime, since all it does is convert your schedule into crontab format at deploy time or when you run the command explicitly. Once the server is running, whenever isn't needed; everything is done by cron after that.
If you don't want the extra gem, you can at least check what command it issues for executing your task (the whenever call shown above prints exactly that) and copy it into your crontab yourself. Still, an automated way of adding a cron task is easier to maintain and to deploy. Sure, tossing a line into the crontab by hand is easier, just for you and just this once. Then it starts to get repetitive and tiring, not to mention confusing for other developers who will have to set up something similar on their own machines.
You can run the cron task as a different user than root. Even in your example the task begins with
* 0 * * * root cd
Here root is the user that runs the command. You can edit the job with crontab -e -u username instead (note that per-user crontabs edited this way don't include the user field).
If you insist on running the cron task as root, or running it as another user does not work for some reason, you can switch users with su. For example:
su - username -c 'bundle exec rails runner -e production "Charger.start"'
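Dropped into the crontab entry from the question, that might look something like this (an untested sketch; "deploy" is an assumed username based on the path):

* 0 * * * root su - deploy -c 'cd /home/deploy/rails_apps/supercharger/current && bundle exec rails runner -e production "Charger.start"'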
I'm trying to output text to the console during tests, so I can see what's happening and keep a history of the test runs, but nothing seems to work: neither printf nor $stdout.write.
Should I just use a text log file and be done with it, or is it possible to output to the Jenkins console?
As explained in https://content.pivotal.io/blog/what-happened-to-stdout-on-ci and https://github.com/ci-reporter/ci_reporter#environment-variables, you need to set CI_CAPTURE=off
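For example, in the shell build step of the Jenkins job, you could set it before running the suite (a sketch assuming the specs are run via rake spec):

export CI_CAPTURE=off
rake spec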
Here is a copy of my jenkins config for running RSpec tests of a Rails app in Jenkins:
[ -d jenkins ] && rm -rf jenkins
mkdir jenkins
cp ~/configs/yourApp/default-db-config config/database.yml
rake db:migrate
rake db:test:prepare
export RAILS_ENV=test
export SPEC_OPTS="--no-drb --format documentation --format html --out jenkins/rspec.html"
rake spec
First it deletes any previous test history from the workspace if it exists.
Next it creates a jenkins directory in the workspace for storing the test output.
Then it sets up the app for testing with a working DB config (I don't store DB config files in my git repo).
Finally it migrates the dev DB if required, prepares the test DB, sets the RAILS_ENV to test and runs the tests with the specified SPEC_OPTS.
The important bits are as follows:
--format documentation ... this sends sensible output of test progress to your console log. Write your tests properly and this will be infinitely more useful than any puts commands you might have considered using.
--format html ... this outputs HTML files of the test results to the jenkins directory created earlier and specified in the --out attribute. Add the following to your job description to show those results on the main page of this job:
<iframe src='http://jenkins.your-domain.com/job/your-tests/ws/jenkins/rspec.html' width="100%" height="600" frameborder="0"/>
Hopefully that should get you up and running with a more useful jenkins test job for RSpec.
Not sure if Jenkins will change this (I don't think so), but in RSpec you can write
puts response.body
or
puts "My mommy made me mash my M&M's"
or whatever else you want, and it will show up in the console output/results.
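For instance, a minimal (made-up) spec just to show where the output ends up:

# spec/debug_output_spec.rb -- an example spec, names are placeholders
describe 'debug output' do
  it 'prints to the build console' do
    puts "My mommy made me mash my M&M's"  # appears in the console/test output
  end
end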
I deployed (with Capistrano) a Ruby on Rails project on an AWS micro server.
I'm on Ruby 1.9.2-p290 and Rails 3.2.6, and I also use Bundler.
I wrote a rake task in /opt/rails-project/lib/tasks/tasks.rake:
namespace :myclass do
  task "my-task" => :environment do
    # do the stuff, which works nicely if I run the command manually
  end
end
This is how I call it in my crontab:
*/3 * * * * cd /opt/rails-project/current && /opt/rails-project/shared/bundle/ruby/1.9.1/gems/rake-0.9.2.2/bin/rake myclass:my-task RAILS_ENV=production >> ~/logs-my-task.txt
The file ~/logs-my-task.txt is created and updated every 3 minutes as expected, but it only contains the release info from Capistrano and nothing from my rake task.
As the comment in my rake task says, if I run the command directly on the server via SSH, the task does its job...
I've searched the web all day and night and cannot figure it out.
I tried removing the HTTP basic auth from Rails, but the problem is the same.
Hope you have an idea.
Thanks for the help!
Try to put this part
cd /opt/rails-project/current && /opt/rails-project/shared/bundle/ruby/1.9.1/gems/rake-0.9.2.2/bin/rake myclass:my-task RAILS_ENV=production >> ~/logs-my-task.txt
into a file, somescript.sh, and give it execute permissions:
chmod +x somescript.sh
and try to run it manually:
/path/to/somescript.sh
If it works, try to put it into crontab:
*/3 * * * * /path/to/somescript.sh
It often helps to put complex commands into a script and run that script from crontab.
Next step: ensure that your PATH environment variable is the same for your shell and for cron. You can set it inside the crontab or inside your script.
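A sketch of what somescript.sh could look like (the PATH value is only an example; copy whatever echo $PATH prints in your interactive shell, and note that stderr is redirected too so errors end up in the log):

#!/bin/sh
# somescript.sh -- make cron's environment match the interactive shell's
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

cd /opt/rails-project/current && \
  /opt/rails-project/shared/bundle/ruby/1.9.1/gems/rake-0.9.2.2/bin/rake myclass:my-task RAILS_ENV=production >> ~/logs-my-task.txt 2>&1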
After I used a shell script as recommended by denis.peplin and launched it manually, I got the problem described here: Ruby on Rails and Rake problems: uninitialized constant Rake::DSL.
I included the following line in my Rakefile and left my crontab as it was before:
require 'rake/dsl_definition'
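For context, the top of the Rakefile would then look something like this (a sketch; MyApp is a placeholder for your application's module name, and the only point is that the require comes before the rest of the Rakefile is loaded):

# Rakefile
require 'rake/dsl_definition'  # works around "uninitialized constant Rake::DSL"
require File.expand_path('../config/application', __FILE__)

MyApp::Application.load_tasks  # use your app's actual module name here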
I am using the whenever gem with Rails 3. On my production server (Ubuntu), the runner task does not run. I tried setting :job_template so that it gets the -l -i flags, as mentioned in this GitHub ticket, but that does not solve the problem.
The problem on this particular production Ubuntu box is that the Ruby path is not in the output of echo $PATH:
echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
Whereas the ruby path is /var/rails/myapp/shared/bundle/ruby/1.8/bin
So if I manually edit the crontab file and add PATH=/var/rails/myapp/shared/bundle/ruby/1.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games to it, the runner task is executed correctly.
However every time I do a deploy, I need to manually edit the crontab file to add the PATH statement to it.
Is there any way in whenever to add this PATH line in crontab file, so that there would not be any need to do this manually after every deploy?
Thanks
I am not using RVM, and adding the code below to config/schedule.rb (the place where you write your whenever configuration) worked for me.
env :PATH, ENV['PATH']
I think if you add /var/rails/myapp/shared/bundle/ruby/1.8/bin to the PATH of whatever user cron is running under on the server, it should be picked up. Or, you could add it in the whenever schedule.rb:
env :PATH, "$PATH:/var/rails/myapp/shared/bundle/ruby/1.8/bin"
That should do the trick, but I haven't tested it.
The answer from idlefingers looks mostly correct, but based on the comment from ami, I would change that line to the following:
env :PATH, "#{ENV["PATH"]}:/var/rails/myapp/shared/bundle/ruby/1.8/bin"
Notice the string interpolation for the environment key for PATH. I have not tested this, but based on ami's comment, this should fully expand the path string in the crontab file as expected.
Add the PATH statement to the top of the crontab file, before the line that starts
# BEGIN Whenever generated tasks for:
and you shouldn't have to edit your crontab file manually after every deploy.
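The resulting crontab would look roughly like this (a sketch; the identifier after "for:" depends on your setup, and whenever only rewrites what sits between its BEGIN/END markers):

PATH=/var/rails/myapp/shared/bundle/ruby/1.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

# BEGIN Whenever generated tasks for: myapp
# ... entries managed by whenever go here ...
# END Whenever generated tasks for: myapp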
I have a Rails application with a daemon that checks a mailbox for any new emails. I am using the Fetcher plugin for this task. The daemon file looks like this:
#!/usr/bin/env ruby
require File.dirname(__FILE__) + '/../config/environment.rb'
class MailFetcherDaemon < Daemon::Base
  @config = YAML.load_file("#{RAILS_ROOT}/config/mail.yml")
  @config = @config['production'].to_options
  @sleep_time = @config.delete(:sleep_time) || 20

  def self.start
    puts "Starting MailFetcherDaemon"
    # Add your own receiver object below
    @fetcher = Fetcher.create({:receiver => MailProcessor}.merge(@config))
...
So I have it grab the new emails, parse them and create a resource from the parsed data. But when it tries to save the resource an exception is thrown. This is because the script is automatically assigned the development environment. So it is using my development database configuration instead of the production environment (which is the config that I want).
I have tried starting the script with:
rails-root$ RAILS_ENV=production; script/mail_fetcher start
but to no avail. It seems like when I load the environment.rb file it just defaults to the development environment and loads development.rb and the development database configuration from database.yml.
Thoughts? Suggestions?
Thanks
This is working in my app; the only difference I see is that there is no semicolon:
RAILS_ENV=production script/mail_fetcher start
So when you say
RAILS_ENV=production; script/mail_fetcher start
do you mean
#!/bin/bash
export RAILS_ENV=production
cd /path/to/rails_root
./script/mail_fetcher start
You might try adding this to your script:
ENV['RAILS_ENV'] = "production"
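If you go that route, the assignment needs to happen before environment.rb is required, since that is when Rails picks its environment. A minimal sketch of the top of the daemon script:

#!/usr/bin/env ruby
ENV['RAILS_ENV'] = 'production'  # must come before the Rails environment is loaded
require File.dirname(__FILE__) + '/../config/environment.rb'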
Alternatively, it might work to add it to the command line.
#!/bin/bash
cd /path/to/rails_root
./script/mail_fetcher start RAILS_ENV=production