Tail production log with Capistrano - how to stop it

I found this nifty code snippet on several sites, allowing me to analyze the production log via Capistrano:
desc "tail production log files"
task :tail_logs, :roles => :app do
run "tail -f #{shared_path}/log/production.log" do |channel, stream, data|
puts # for an extra line break before the host name
puts "#{channel[:host]}: #{data}"
break if stream == :err
end
end
It works perfectly well. However, when I'm finished reading the logs and hit Ctrl+C, it produces a nasty error on my console. Not that this is a huge problem, but I find it annoying. What can I do so that no error is produced and the task/log viewing just quietly ends?
Also, I'm not that familiar with how to analyze logs - is this really the best way to have a quick look at the most recent events in your (remote production) log, or is there a better way? I know there are a gazillion tools for log analysis, but I want a dead-simple solution to see the last couple of requests, not something bulky and complicated. I'm not sure this Capistrano solution is really optimal, though. What's the solution most people use?

Try trap("INT") { puts 'Interupted'; exit 0; } like this:
desc "tail production log files"
task :tail_logs, :roles => :app do
trap("INT") { puts 'Interupted'; exit 0; }
run "tail -f #{shared_path}/log/production.log" do |channel, stream, data|
puts # for an extra line break before the host name
puts "#{channel[:host]}: #{data}"
break if stream == :err
end
end
I hope this helps.

This was pretty easy to find on a blog, but here is some code for Capistrano 3:
namespace :logs do
  desc "tail rails logs"
  task :tail_rails do
    on roles(:app) do
      execute "tail -f #{shared_path}/log/#{fetch(:rails_env)}.log"
    end
  end
end
I had issues with the rails_env variable, so I just replaced it with a hard-coded environment name, but it might be worth it to you to get it working, so I left it in.
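If fetch(:rails_env) comes back empty for you, one option (a sketch, not part of the original answer) is to give the variable a default in config/deploy.rb so the task above works unchanged; stage files can still override it:

# config/deploy.rb -- sketch, assuming Capistrano 3
set :rails_env, fetch(:rails_env, 'production')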

I made one small change to Jeznet's great answer. If you run capistrano-ext with multiple environments like we do, you can have the RAILS_ENV automatically specified for you:
run "tail -f #{shared_path}/log/#{rails_env}.log" do |channel, stream, data|

I had a problem with the trap("INT") part. While it makes the script exit without errors, the tail processes were still running on the remote machines. I fixed it with this line:
trap("INT") { puts 'Interrupted'; run "killall -u myusername tail"; exit 0; }
Not elegant, but it works for me.
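An alternative worth trying (a sketch, not from the original answers) is to request a pseudo-tty for the remote command. With a pty allocated, the remote tail is normally terminated when the SSH connection is torn down on Ctrl+C, so the killall should not be needed:

task :tail_logs, :roles => :app do
  trap("INT") { puts 'Interrupted'; exit 0; }
  # :pty => true asks Capistrano 2.x to allocate a pseudo-tty for the command
  run "tail -f #{shared_path}/log/production.log", :pty => true do |channel, stream, data|
    puts "#{channel[:host]}: #{data}"
    break if stream == :err
  end
end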

I use the capistrano-rails-tail-log gem and everything works fine:
https://github.com/ayamomiji/capistrano-rails-tail-log

Related

Execute bash script inside deploy.rb using Capistrano

I am learning (by doing) Rails and Capistrano.
How can I execute a script inside deploy.rb?
I came across run(command), exec(command), execute: or run:.
I don't have to specify :db or :web, so I have the following backbone:
task :myTask do
  on roles(:app) do
    execute "bash myScript.sh"
    puts "#{:server} reports: #{myTask}"
  end
end
Is this correct?
Is the ssh part of the whole process, or do I have to ssh in the command?
How do people develop deploy.rb without running cap deploy every time they make a change?
Thank you!
Ruby allows you to run a shell command using backticks, for example:
output = `pwd`
puts "output is #{output}"
See more: https://ruby-doc.org/core-1.9.3/Kernel.html#method-i-60
This is what worked for me:
role :app, 'user@domain1.com'

on roles(:app) do
  within 'remote_path' do
    execute 'bash', 'myScript.sh'
  end
end
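To tie this back to the question, a complete task could look something like the sketch below (the namespace, task name and script path are illustrative, assuming Capistrano 3):

namespace :utils do
  desc "Run myScript.sh on the app servers"
  task :run_my_script do
    on roles(:app) do
      within current_path do            # current_path points at the deployed release on the remote host
        execute :bash, 'myScript.sh'    # runs over SSH; no explicit ssh call is needed
      end
    end
  end
end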

crontab didn't work in Rails rake task

I have a rake task in my Rails application, and when I execute this command from my Rails app path /home/hxh/Share/ruby/sport/:
rake get_sportdata
This works fine.
Now I want to use crontab to run this rake task on a schedule, so I added this entry:
* * * * * cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1
But this doesn't work. I get this in the cron.log file:
Job `cron.daily' terminated
I want to know where the error is.
Does the "cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1" can work in your terminal?
But use crontab in Rails normally is not a good idea. It will load Rails environment every time and slow down your performance.
I think whenever or rufus-scheduler are all good. For example, use rufus-scheduler is very easy. In config\initializers\schedule_task.rb
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new(:thread_name => "Check Resources Health")
scheduler.every '1d', :first_at => Time.now do |job|
  puts "###########RM Schedule Job - Check Resources Health: #{job.job_id}##########"
  begin
    HealthChecker.perform
  rescue Exception => e
    puts e.message
    puts e.backtrace
    raise "Error in RM Scheduler - Check Resources Health " + e.message
  end
end
And implement "perform" (or some other class method) in the class that does the work; here that class is "HealthChecker". Very easy and no extra effort. Hope it helps.
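For illustration, a minimal HealthChecker might look like the sketch below (the actual checks are up to you; the logging line is just a placeholder):

class HealthChecker
  # class method invoked by the rufus-scheduler job above
  def self.perform
    # placeholder check -- replace with real resource health checks
    Rails.logger.info "Resources checked at #{Time.now}"
  end
end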
So that you can test better and get a handle on whether it works, I suggest:
1. Write a shell script in [app root]/script which sets up the right environment variables to point to Ruby (if necessary) and has the call to rake, e.g. something like script/get-sportdata.sh.
2. Test the script as root, e.g. first do sudo -s.
3. Call this script from cron, e.g. * cd [...] && script/get-sportdata.sh. If necessary, test that line as root too.
That's been my recipe for success when running rake tasks from cron on Ubuntu. The cron environment is a bit different from the usual shell setup, so limiting your actual cron jobs to simple commands that run a particular script is a good way to divide the configuration into smaller parts which can be individually tested.

Capistrano Deploy Timeout

Alright, so, I've looked around on the web and it doesn't really look like a lot of other people have this issue, but maybe something else is wrong with what we're doing.
I've managed to distill it down to what I think is a useful test case:
config/deploy.rb:
## Excerpt
task :big_delay, :roles => :web do
  run "sleep 480"
  run "echo Meow Meow Meow"
end
And stupid_script.sh:
#!/bin/sh
ssh foo 'sleep 480; echo Meow Meow Meow'
Where foo is the name of the same server we deploy to.
When I run both of these it should connect to the other box, do nothing for 8 minutes, then spit out some useless text and complete.
The stupid_script works, and the cap task fails.
I can see the remote command finish with ps xf, but cap doesn't seem to notice anymore.
If the sleep is 20 instead of 480, the cap task works fine.
Obviously this task is super useless, but we do have expensive things running on deploy that trigger this, and I made this test case to rule out any blame on ssh.
Another data point: if we ssh into the box, put the code there, and then run cap deploy from there, it works fine.
So... there seems to be some weird interplay going on between ssh and Capistrano.
Thoughts?
Set ClientAliveInterval and ClientAliveCountMax in /etc/ssh/sshd_config on the server as choover suggests. I had the exact same issue with "assets:precompile" on my deploy until I made that change.
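For example, something along these lines in /etc/ssh/sshd_config (the values are only illustrative; pick ones that comfortably cover your longest deploy steps), followed by reloading sshd:

# send a keepalive probe when the client has been silent for 30 seconds,
# and only disconnect after 20 unanswered probes (example values)
ClientAliveInterval 30
ClientAliveCountMax 20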

Capistrano tasks not performing within the given scope.

I have built some Capistrano tasks which I need to run within the defined :app role. This is what I have so far:
desc "Stop unicorn"
task :stop, :roles => :app do
logger.info "Stopping unicorn server(s).."
run "touch #{unicorn_pid}"
pid = capture("cat #{unicorn_pid}").to_i
run "kill -s QUIT #{pid}" if pid > 0
end
As far as I know, this should run the given commands on the servers given in the :app role, right? But the fact of the matter is that it's running the commands on the servers in the :db role.
Can anyone give some insight into this problem? Or is there a way to force Capistrano to adhere to the :roles flag?
Thanks in advance
// Emil
Using capture will cause that command to be run only on the first matching server.
From the documentation:
The capture helper will execute the given command on the first matching server, and will return the output of the command as a string.
https://github.com/capistrano/capistrano/wiki/2.x-DSL-Action-Inspection-Capture
Unfortunately I am facing a similar issue. The find_servers solution may work, but it's hacky and runs N x N times, where N is the number of servers you have.
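For reference, the find_servers approach mentioned above looks roughly like the sketch below (assuming Capistrano 2.x; unicorn_pid is the variable from the question). Scoping each call with :hosts keeps the commands on one server at a time:

task :stop, :roles => :app do
  find_servers(:roles => :app).each do |server|
    # read and signal the pid per host, so every :app server is covered
    pid = capture("cat #{unicorn_pid}", :hosts => server.host).to_i
    run "kill -s QUIT #{pid}", :hosts => server.host if pid > 0
  end
end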

Start or ensure that Delayed Job runs when an application/server restarts

We have to use delayed_job (or some other background-job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot-levels on the server. This means that the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a capistrano recipe that is only run once per deployment).
Currently, the best way I can think of to ensure the delayed_job daemon is always running, is to add an initializer to our Rails application that checks if the daemon is running. If it's not running, then the initializer starts the daemon, otherwise, it just leaves it be.
The question, therefore, is how do we detect from inside a script that the delayed_job daemon is running? (We should be able to start the daemon fairly easily, but I don't know how to detect whether one is already active.)
Anyone have any ideas?
Regards,
Bernie
Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:
# config/initializers/delayed_job.rb
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def process_is_dead?
  begin
    pid = File.read(DELAYED_JOB_PID_PATH).strip
    Process.kill(0, pid.to_i)
    false
  rescue
    true
  end
end

if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?
  start_delayed_job
end
Some more cleanup ideas: The "begin" is not needed. You should rescue "no such process" in order not to fire new processes when something else goes wrong. Rescue "no such file or directory" as well to simplify the condition.
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def daemon_is_running?
  pid = File.read(DELAYED_JOB_PID_PATH).strip
  Process.kill(0, pid.to_i)
  true
rescue Errno::ENOENT, Errno::ESRCH # file or process not found
  false
end

start_delayed_job unless daemon_is_running?
Keep in mind that this code won't work if you start more than one worker. And check out the "-m" argument of script/delayed_job which spawns a monitor process along with the daemon(s).
Check for the existence of the daemon's PID file (File.exist? ...). If it's there, assume it's running; otherwise, start it up.
Thank you for the solution provided in the question (and the answer that inspired it :-) ). It works for me, even with multiple workers (Rails 3.2.9, Ruby 1.9.3p327).
It worries me that I might forget to restart delayed_job after making some changes to lib, for example, and end up debugging for hours before realizing that.
I added the following to my script/rails file in order to allow the code provided in the question to execute every time we start rails but not every time a worker starts:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
begin
File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
puts "delayed_job ready."
A little drawback I'm facing with this, though, is that it also gets called with rails generate, for example. I did not spend much time looking for a solution to that, but suggestions are welcome :-)
Note that if you're using unicorn, you might want to add the same code to config/unicorn.rb before the before_fork call.
-- EDITED:
After playing around a little more with the solutions above, I ended up doing the following:
I created a file script/start_delayed_job.rb with the content:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
def kill_delayed(path)
begin
pid = File.read(path).strip
Process.kill(0, pid.to_i)
false
rescue
true
end
end
kill_delayed(dj_pid_path)
begin
File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
# spawn delayed
env = ARGV[1]
puts "spawing delayed job in the same env: #{env}"
# edited, next line has been replaced with the following on in order to ensure delayed job is running in the same environment as the one that spawned it
#Process.spawn("ruby script/delayed_job start")
system({ "RAILS_ENV" => env}, "ruby script/delayed_job start")
puts "delayed_job ready."
Now I can require this file anywhere I want, including 'script/rails' and 'config/unicorn.rb' by doing:
# in top of script/rails
START_DELAYED_PATH = File.expand_path('../start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"
# in config/unicorn.rb, before before_fork, different expand_path
START_DELAYED_PATH = File.expand_path('../../script/start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"
not great, but works
disclaimer: I say not great because this causes a periodic restart, which for many will not be desirable. And simply trying to start can cause problems because the implementation of DJ can lock up the queue if duplicate instances are created.
You could schedule cron tasks that run periodically to start the job(s) in question. Since DJ treats start commands as no-ops when the job is already running, it just works. This approach also takes care of the case where DJ dies for some reason other than a host restart.
# crontab example
0 * * * * /bin/bash -l -c 'cd /var/your-app/releases/20151207224034 && RAILS_ENV=production bundle exec script/delayed_job --queue=default -i=1 restart'
If you are using a gem like whenever, this is pretty straightforward:
every 1.hour do
  script "delayed_job --queue=default -i=1 restart"
  script "delayed_job --queue=lowpri -i=2 restart"
end
