Upstart script using foreman export is using wrong ruby version - ruby-on-rails

I have just recently deployed my application to the production server, but it looks like none of the processes I have added to my Procfile (foreman) are started at all. Details:
I am using Rails 4.0.12, foreman 0.78.0, sidekiq 3.4.2 and clockwork 1.2.0. I am using Capistrano, and I have defined a task to export the Procfile as an upstart service on Ubuntu 14.04. But when I start the service, no background jobs are processed. When I take a look at the log files of that upstart service, I just see the following:
Your ruby version is 1.9.3, but your Gemfile specified 2.1.2
The application is running, and I can see on the sidekiq dashboard that nothing is processed. Based on the error message, it looks like I am executing my Procfile wrong somehow. I have tried multiple execution scenarios, but nothing seems to work. My Procfile currently looks like this:
worker: rbenv sudo bundle exec sidekiq -C config/sidekiq.yml -e production
clock: rbenv sudo bundle exec clockwork config/clock.rb
For example, one part of the exported upstart script looks like this:
start on starting aaa-clock
stop on stopping aaa-clock
respawn
env PORT=5100
setuid da_admin
chdir /var/www/aaa/releases/20150728172635
exec rbenv sudo bundle exec clockwork config/clock.rb
If I try the last two commands alone in bash, it works, but when I start the service with "sudo service aaa start" or "rbenv sudo service aaa start", it doesn't work.
The part of deploy.rb where I export my upstart service:
namespace :foreman do
  desc "Export the Procfile to Ubuntu's upstart scripts"
  task :export do
    on roles(:app) do
      within release_path do
        execute :rbenv, "sudo bundle exec foreman export upstart /etc/init -a #{fetch(:application)} -u #{fetch(:user)} -l #{current_path}/log -f #{release_path}/Procfile"
      end
    end
  end

  desc "Start the application services"
  task :start do
    on roles(:app) do
      execute :rbenv, "sudo service #{fetch(:application)} start"
    end
  end

  desc "Stop the application services"
  task :stop do
    on roles(:app) do
      execute :rbenv, "sudo service #{fetch(:application)} stop"
    end
  end

  desc "Restart the application services"
  task :restart do
    on roles(:app) do
      execute :rbenv, "sudo service #{fetch(:application)} restart"
    end
  end
end
Does anybody have any idea what could be wrong? I suspect this is some mistake in the environment configuration. Thank you in advance for your time.
EDIT:
In the end, the problem was the environment of the upstart script. Similar problems that pointed me in the right direction were:
foreman issue
foreman another issue
I had to create a .env file configuring various environment variables. Now it at least starts (other bugs arose, but they are not related to this issue).
Example of the .env file in the root of project directory:
PATH=/home/user/.rbenv/versions/2.1.2/bin:/usr/bin
RAILS_ENV=production
HOME=/home/user
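One way to sanity-check the fix (app name, user and paths below are taken from the question's examples, so adjust them to your setup) is to re-export after adding the .env file and confirm the generated jobs now carry the variables:

cd /var/www/aaa/current
rbenv sudo bundle exec foreman export upstart /etc/init -a aaa -u da_admin -f Procfile
grep -R PATH /etc/init/aaa*   # the exported jobs should now include the .env variables
sudo service aaa restart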

Related

How to start Delayed Job with Ubuntu?

I use Delayed Job as the queuing backend for Active Job in my Rails 5 app, but I have no idea how to start the worker on Ubuntu 14.04 after startup. Should I wrap rails jobs:work in a Bash script? How would I have it start automatically? Or is it preferable to use bin/delayed_job?
How do I start delayed job on boot?
It does not really matter which OS you're on (as long as it is not Windows :D).
To start processing, the command is:
bundle exec rake jobs:work
To restart delayed_job, the command is:
RAILS_ENV=production script/delayed_job restart
Check out the gem's README for more info.
EDIT
(according to the comment)
You can create a bash script in the user's home directory, e.g. start_delayed_jon.sh, along these lines:
#!/bin/bash
cd /path/to/your/project/directory/
RAILS_ENV=development bundle exec rake jobs:work
and run it in /etc/rc.local:
su -s /bin/bash - deploy /path/to/your/project/directory/start_delayed_jon.sh
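Note that rake jobs:work runs in the foreground and never returns, so if you call the script from /etc/rc.local you will probably want to background it so the boot sequence can finish (a sketch; distributions vary in how rc.local is set up):

# /etc/rc.local -- the trailing & keeps boot from blocking on the worker
su -s /bin/bash - deploy /path/to/your/project/directory/start_delayed_jon.sh &
exit 0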
Using the Whenever gem, you can set up a cron job that runs it on reboot. In your schedule.rb file:
every :reboot do
  rake 'start_delayed_jobs'
end
Then in your rake file:
desc 'Start delayed jobs'
task :start_delayed_jobs do
  system("#{Rails.root}/bin/delayed_job start")
end
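For the every :reboot entry to take effect, the schedule still has to be written to the crontab on the server, e.g.:

bundle exec whenever --update-crontab
crontab -l   # should now contain an @reboot line invoking the rake task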
If you are using the gem 'delayed_job_active_record':
To start delayed jobs on your local Ubuntu system, simply run:
./bin/delayed_job start
and to restart
./bin/delayed_job restart
If we are in development mode, we would use the below rake task instead.
bundle exec rake jobs:work
for production:
RAILS_ENV=production script/delayed_job -n2 restart
or
RAILS_ENV=production bin/delayed_job -n2 restart
-n2 is the number of delayed_job workers you want to restart; to start them, use the start command instead of restart.
documentation: https://github.com/collectiveidea/delayed_job#restarting-delayed_job
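To confirm the daemons actually came up, you can check the pid files they write (tmp/pids is the default location; with -n2 there should be two numbered files):

ls tmp/pids/                               # e.g. delayed_job.0.pid, delayed_job.1.pid
ps -fp $(cat tmp/pids/delayed_job.0.pid)   # shows the running worker process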

bundler: not executable: script/delayed_job

I'm trying to run the following command on my remote server (either via capistrano or ssh):
bundle exec RAILS_ENV=production script/delayed_job start
But I'm getting this error message: bundler: not executable: script/delayed_job
Never saw this before, and google had nothing for me. Any idea what might be the problem?
Maybe it does not have permission to run? Try running this command:
chmod +x script/delayed_job
and then executing the file again.
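If you want to see the change, the execute bit is visible with ls:

ls -l script/delayed_job   # before: -rw-r--r--; after chmod +x: -rwxr-xr-x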
I am not sure if it is a fundamental misunderstanding of the capistrano rbenv gem on my part or some issue with the gem itself, but I had a similar issue with delayed_job, where the bin/delayed_job file just would not get the executable permission when copied to the server by capistrano. So I wrote a task which I run before invoking the delayed_job:restart task.
Note - Adding this answer because the earlier one is from 2014, and also because I wanted to show how to add the task, so the permission change can happen during deployment itself.
I created a task in the lib/capistrano/tasks folder (in the delayed_job namespace):
namespace :delayed_job do
  desc 'Ensure that bin/delayed_job has the permission to be executable. Ideally, this should not have been needed.'
  task :ensure_delayed_job_executable do
    on roles(delayed_job_roles) do
      within release_path do
        execute :chmod, :'u+x', :'bin/delayed_job'
      end
    end
  end
end

after 'deploy:publishing', 'deploy:restart'

namespace :deploy do
  task :restart do
    invoke 'delayed_job:ensure_delayed_job_executable'
    invoke 'delayed_job:restart'
  end
end

Capistrano-unicorn gem getting wrong environment set

I've been using this gem for a while and just took the dive and tried deploying an actual staging environment to my staging server, and I ran into issues. Unicorn starts with the unicorn_rails command and -E production, despite all the settings being correct as far as I can tell.
I noticed in deploy.rb that my unicorn_bin variable was set to unicorn_rails. I took this setting out of my deploy.rb. However, unicorn:duplicate still executes the unicorn_rails command, when the default should be unicorn.
My vars are all set to staging in deploy/staging.rb, as outlined in the multistage setup wiki document, but I noticed -E is still getting set to production.
Relevant info:
Here's my output from my unicorn.log file after a deploy:
executing ["/var/www/apps/myapp/shared/bundle/ruby/2.0.0/bin/unicorn_rails", "-c", "/var/www/apps/bundio/current/config/unicorn.rb", "-E", "production", "-D", {12=>#<Kgio::UNIXServer:/tmp/bundio.socket>, 13=>#<Kgio::TCPServer:fd 13>}] (in /var/www/apps/bundio/current)
Here's the output from cap -T (defaults to staging)
# Environments
rails_env "staging"
unicorn_env "staging"
unicorn_rack_env "staging"
# Execution
unicorn_user nil
unicorn_bundle "/usr/local/rvm/gems/ruby-2.0.0-p247#global/bin/bundle"
unicorn_bin "unicorn"
unicorn_options ""
unicorn_restart_sleep_time 2
# Relative paths
app_subdir ""
unicorn_config_rel_path "config"
unicorn_config_filename "unicorn.rb"
unicorn_config_rel_file_path "config/unicorn.rb"
unicorn_config_stage_rel_file_path "config/unicorn/staging.rb"
# Absolute paths
app_path "/var/www/apps/myapp/current"
unicorn_pid "/var/www/apps/myapp/shared/pids/unicorn.myapp.pid"
bundle_gemfile "/var/www/apps/myapp/current/Gemfile"
unicorn_config_path "/var/www/apps/myapp/current/config"
unicorn_config_file_path "/var/www/apps/myapp/current/config/unicorn.rb"
unicorn_config_stage_file_path "/var/www/apps/myapp/current/config/unicorn/staging.rb"
Another curiosity: the unicorn_rails -E flag should reference the Rails environment, whereas unicorn -E should reference the rack env. The rack env should only take the values development and deployment, but it gets set to production, which is a bit strange (see the unicorn docs for the settings of the RACK_ENV variable).
Any insight into this would be much appreciated. On my staging server, I've also set RAILS_ENV to staging. I've set up the usual Rails bits for another environment, like adding staging.rb to my environments folder, adding a staging section to database.yml, etc.
The important lines in lib/capistrano-unicorn/config.rb concerning unicorn_rack_env:
_cset(:unicorn_env) { fetch(:rails_env, 'production') }
_cset(:unicorn_rack_env) do
  # Following recommendations from http://unicorn.bogomips.org/unicorn_1.html
  fetch(:rails_env) == 'development' ? 'development' : 'deployment'
end
Thanks in advance.
OK, after a long time of not having the correct environment, I have discovered the issue!
Basically, my init scripts were running BEFORE capistrano-unicorn did its thing.
So, make sure that your init.d or upstart scripts that manage Unicorn and its workers are taken into account when capistrano-unicorn runs its restart / reload / duplication tasks.
I did not think to look at these scripts when I had to debug the stale pid file / already running / unable to listen on socket errors. But it makes sense: upstart starts Unicorn when it is not running, and then capistrano-unicorn also attempts to start Unicorn.
I have now combined these capistrano tasks and hooks with Monit and a Unicorn init script.
Capistrano tasks:
namespace :monit do
  desc 'wait 20 seconds'
  task :wait_20_seconds do
    sleep 20
  end

  task :monitor_all, :roles => :app do
    sudo "monit monitor all"
  end

  task :unmonitor_all, :roles => :app do
    sudo "monit unmonitor all"
  end

  desc 'monitor unicorn in the monit rc file'
  task :monitor_unicorn, :roles => :app do
    sudo "monit monitor unicorn"
  end

  desc 'unmonitor unicorn in the monit rc file'
  task :unmonitor_unicorn, :roles => :app do
    sudo "monit unmonitor unicorn"
  end
end
Capistrano hooks:
after 'deploy:restart', 'unicorn:duplicate' # app preloaded. check https://github.com/sosedoff/capistrano-unicorn section for zero downtime
before 'deploy', "monit:unmonitor_unicorn"
before 'deploy:migrations', "monit:unmonitor_unicorn"
after 'deploy', 'monit:wait_20_seconds'
after "deploy:migrations", "monit:wait_20_seconds"
after 'monit:wait_20_seconds', 'monit:monitor_unicorn'
I use Monit to monitor my unicorn process:
Within /etc/monit/monitrc:
check process unicorn
  with pidfile /var/www/apps/my_app/shared/pids/mypid.pid
  start program = "/usr/bin/sudo service unicorn start"
  stop program = "/usr/bin/sudo service unicorn stop"
Within your init script, you will start the unicorn process with something like:
unicorn_rails -c /var/www/apps/my_app/current/config/unicorn.rb -E staging -D
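Wrapped in a minimal init-script skeleton (paths, app name and environment are the question's placeholders, not a canonical script), that might look like:

#!/bin/sh
# /etc/init.d/unicorn -- illustrative skeleton only
APP_ROOT=/var/www/apps/my_app/current
PID=/var/www/apps/my_app/shared/pids/mypid.pid
case "$1" in
start)
  cd $APP_ROOT && bundle exec unicorn_rails -c $APP_ROOT/config/unicorn.rb -E staging -D
  ;;
stop)
  kill -QUIT `cat $PID`   # QUIT asks unicorn to shut down gracefully
  ;;
esac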
Make sure the -E flag is set to the correct environment. The capistrano-unicorn gem has directives using :set within deploy.rb which allow you to specify the environment for that unicorn process.
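For example, in config/deploy/staging.rb (variable names as shown in the cap -T listing above; a sketch, not verified against this setup):

set :rails_env, 'staging'
set :unicorn_env, 'staging'
set :unicorn_rack_env, 'deployment'  # unicorn's RACK_ENV only distinguishes development vs deployment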

Bluepill - installed in user RVM - project specific gemset - how to run with sudo without password?

I have Bluepill setup to monitor my delayed_job processes.
On my production server, I use RVM installed in the user's home folder (username is deploy). My app's gems are installed in its own project-specific gemset. So, the bluepill gem and its corresponding binary are installed within the ~/.rvm/.... folder.
When I deploy my app using capistrano, I want bluepill to be stopped and started, so my DJs get restarted. I am looking at the instructions for the capistrano recipe here.
I think my RVM-compliant bluepill tasks have to look like the following:
# Bluepill related tasks
after 'deploy:start', 'bluepill:start'
after 'deploy:stop', 'bluepill:quit'
after 'deploy:restart', 'bluepill:quit', 'bluepill:start'

namespace :bluepill do
  desc 'Stop processes that bluepill is monitoring and quit bluepill'
  task :quit, :roles => [:app] do
    run "cd #{current_path}; sudo bluepill #{application}_#{rails_env} stop"
    run "cd #{current_path}; sudo bluepill #{application}_#{rails_env} quit"
    sleep 5
  end

  desc 'Load bluepill configuration and start it'
  task :start, :roles => [:app] do
    run "cd #{current_path}; sudo bluepill load #{current_path}/config/server/#{rails_env}/delayed_job.bluepill"
  end

  desc 'Prints bluepills monitored processes statuses'
  task :status, :roles => [:app] do
    run "cd #{current_path}; sudo bluepill #{application}_#{rails_env} status"
  end
end
I haven't tested the above yet.
What I am wondering is: what should I put in my sudoers file to allow the deploy user to run just these bluepill-related commands as root without a password? On this page they have mentioned this:
deploy ALL=(ALL) NOPASSWD: /usr/local/bin/bluepill
But the path to the bluepill binary would be different in my case, and it would differ between projects because of the project-specific gemsets. Should I list each of the binary paths, or is there a better way of handling this?
Use wrappers and aliases:
namespace :bluepill do
  task :setup do
    run "rvm alias create #{application} #{rvm_ruby_name_evaluated}"
    run "rvm wrappers #{application} --no-links bluepill"
  end
end
After this task runs, bluepill is available via #{rvm_path}/wrappers/#{application}/bluepill, which will always be the same even if you change the Ruby version, so it can be added to sudoers as a stable path:
deploy ALL=(ALL) NOPASSWD: /home/my_user/.rvm/wrappers/my_app/bluepill
and then the tasks can use:
sudo #{rvm_path}/wrappers/#{application}/bluepill ...
It is important to note here that the wrapper takes care of loading the RVM environment, which would otherwise be lost by the invocation of sudo ... but this is just a detail ;)
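Putting it together, the start task from the question would then call the wrapper instead of the bare binary, something like:

desc 'Load bluepill configuration and start it'
task :start, :roles => [:app] do
  run "cd #{current_path}; sudo #{rvm_path}/wrappers/#{application}/bluepill load #{current_path}/config/server/#{rails_env}/delayed_job.bluepill"
end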

How to enter rails console on production via capistrano?

I want to enter the rails console on the production server from my local machine via capistrano.
I found some gists, e.g. https://gist.github.com/813291, and when I enter the console via
cap production console
I get the following result:
192-168-0-100:foldername username $ cap console RAILS_ENV=production
* executing `console'
* executing "cd /var/www/myapp/current && rails console production"
servers: ["www.example.de"]
[www.example.de] executing command
[www.example.de] rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3' -c 'cd /var/www/myapp/current && rails console production'
/var/www/myapp/releases/20120305102218/app/controllers/users_controller.rb:3: warning: already initialized constant VERIFY_PEER
Loading production environment (Rails 3.2.1)
Switch to inspect mode.
and that's it... Now I can enter some text, but nothing happens...
Does anybody have an idea how to get that working, or another solution for my problem?
I've added my own tasks for this kind of thing:
namespace :rails do
  desc "Remote console"
  task :console, :roles => :app do
    run_interactively "bundle exec rails console #{rails_env}"
  end

  desc "Remote dbconsole"
  task :dbconsole, :roles => :app do
    run_interactively "bundle exec rails dbconsole #{rails_env}"
  end
end

def run_interactively(command)
  server ||= find_servers_for_task(current_task).first
  exec %Q(ssh #{user}@#{myproductionhost} -t '#{command}')
end
I now say cap rails:console and get a console.
For Capistrano 3 you can add these lines to your config/deploy.rb:
namespace :rails do
  desc 'Open a rails console `cap [staging] rails:console [server_index default: 0]`'
  task :console do
    server = roles(:app)[ARGV[2].to_i]
    puts "Opening a console on: #{server.hostname}...."
    cmd = "ssh #{server.user}@#{server.hostname} -t 'cd #{fetch(:deploy_to)}/current && RAILS_ENV=#{fetch(:rails_env)} bundle exec rails console'"
    puts cmd
    exec cmd
  end
end
To open the first server in the servers list:
cap [staging] rails:console
To open the second server in the servers list:
cap [staging] rails:console 1
Reference: Opening a Rails console with Capistrano 3
exec is needed to replace the current process, otherwise you will not be able to interact with the rails console.
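That is worth spelling out: run/system spawn a child process and mediate its IO, while Kernel#exec replaces the current Capistrano process with ssh, so your terminal's TTY is handed straight to the remote shell (host and path below are placeholders):

exec "ssh deploy@example.com -t 'cd /var/www/app/current && bundle exec rails console'"
puts "never reached -- exec does not return on success"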
A simple Capistrano 3 solution may be:
namespace :rails do
  desc "Run the console on a remote server."
  task :console do
    on roles(:app) do |h|
      execute_interactively "RAILS_ENV=#{fetch(:rails_env)} bundle exec rails console", h.user
    end
  end

  def execute_interactively(command, user)
    info "Connecting with #{user}@#{host}"
    cmd = "ssh #{user}@#{host} -p 22 -t 'cd #{fetch(:deploy_to)}/current && #{command}'"
    exec cmd
  end
end
Then you can call it, say on staging, with: cap staging rails:console. Have fun!
For Capistrano > 3.5 and rbenv. Working in 2021
namespace :rails do
  desc "Open the rails console on one of the remote servers"
  task :console do |current_task|
    on roles(:app) do |server|
      server ||= find_servers_for_task(current_task).first
      exec %Q[ssh -l #{server.user||fetch(:user)} #{server.hostname} -p #{server.port || 22} -t 'export PATH="$HOME/.rbenv/bin:$PATH"; eval "$(rbenv init -)"; cd #{release_path}; bin/rails console -e production']
    end
  end
end
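Invocation is the same as the other variants (assuming a production stage is defined):

cap production rails:console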
I have fiddled with that approach as well, but then, rather than building my own interactive SSH shell client, I just went with this snippet I found that simply uses good old SSH. This might not be suitable if you have some weird SSH gateway proxying going on, but for logging into a box and performing some operations, it works like a charm.
In my experience, capistrano isn't built to work very well with interactive terminals.
If you have to execute things in multiple terminals, I'd suggest iTerm2, which has a "send to all windows" function that works very well for me:
http://www.iterm2.com/#/section/home
I have a somewhat difficult environment, which is in flux... so bash -lc isn't really an option right now. My solution is similar to @Rocco's, but a bit more refined.
# run a command in the `current` directory of `deploy_to`
def run_interactively(command)
  # select a random server to run on
  server = find_servers_for_task(current_task).sample
  # cobble together a shell environment
  app_env = fetch("default_environment", {}).map{ |k, v| "#{k}=\"#{v}\"" }.join(' ')
  # import the default environment, cd to the currently deployed app, run the command
  command = %Q(ssh -tt -i #{ssh_options[:keys]} #{user}@#{server} "env #{app_env} bash -c 'cd #{deploy_to}/current; #{command}'")
  puts command
  exec command
end

namespace :rails do
  desc "rails console on a sidekiq worker"
  task :console, role: :sidekiq_normal do
    run_interactively "bundle exec rails console #{rails_env}"
  end
end
For a Rails console in Capistrano 3 see this gist: https://gist.github.com/joost/9343156
I have just used the capistrano-rails-console gem to open a rails console, and it works fine.
