"missing argument -c" when restarting Unicorn via Mina - ruby-on-rails

I am trying to deploy my app to my server using Mina and I need the server to be restarted automatically. But unfortunately this doesn't work and I don't know why. Here is what I am trying:
require 'mina/bundler'
require 'mina/rvm'
require 'mina/rails'
require 'mina/git'
...
set :unicorn_conf, "#{shared_path}/config/unicorn.rb"
set :unicorn_pid, "#{deploy_to}/current/tmp/pids/unicorn.pid"
...
task :environment do
  invoke :'rvm:use[ruby-2.2.3]'
end
task deploy: :environment do
  deploy do
    # Put things that prepare the empty release folder here.
    # Commands queued here will be run on a new release directory.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'
    invoke :restart_server
  end
end
task :restart_server do
  if File.exists? unicorn_pid
    queue 'kill `cat #{unicorn_pid}`'
  end
  queue 'bundle exec unicorn -c #{deploy_to}/#{unicorn_conf} -E production -D'
  puts "bundle exec unicorn -c #{deploy_to}/#{:unicorn_conf} -E production -D"
end
I added the last puts statement just for debugging, and it prints exactly the string I want. But I still get this error:
/home/webuser/tmpcms/tmp/build-145333668721611/vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.1/bin/unicorn:110:in `block in <top (required)>': missing argument: -c (OptionParser::MissingArgument)
from /home/webuser/tmpcms/tmp/build-145333668721611/vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.1/bin/unicorn:10:in `new'
from /home/webuser/tmpcms/tmp/build-145333668721611/vendor/bundle/ruby/2.2.0/gems/unicorn-5.0.1/bin/unicorn:10:in `<top (required)>'
from /home/webuser/tmpcms/tmp/build-145333668721611/vendor/bundle/ruby/2.2.0/bin/unicorn:23:in `load'
from /home/webuser/tmpcms/tmp/build-145333668721611/vendor/bundle/ruby/2.2.0/bin/unicorn:23:in `<main>'
! bash: line 209: log: command not found
! ERROR: Deploy failed.
I don't know what causes it. Can you help me with it?
UPD: It seems to be something with the variable substitution and the fetch function, but I still can't see what is wrong. Here is what I've tested:
task :restart_server => :environment do
  if File.exists? unicorn_pid
    queue 'kill `cat #{unicorn_pid}`'
  end
  queue 'cd /home/webuser/tmpcms/current; pwd; bundle exec unicorn -c #{deploy_to}/#{unicorn_conf} -E production -D'
end
The bundle exec part doesn't work: it never executes and prints Connection closed instead. The same thing happens if I replace the path in the cd command with cd #{deploy_to} or with cd #{fetch(:deploy_to)}.

Oh my, the answer was so easy.
I forgot that I was using single quotes instead of double quotes, so the variables weren't interpolated where needed.
Simply replacing the single quotes with double quotes solved it.
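For reference, a minimal sketch of the corrected task; the only change from the question's version is the quoting. With single quotes, the #{...} fragments were sent to the remote shell verbatim, and since an unquoted word starting with # begins a comment in bash, everything after -c was stripped away, which is exactly the "missing argument: -c" unicorn reported.

task :restart_server do
  if File.exists? unicorn_pid
    # Double quotes: #{unicorn_pid} is interpolated before the command is queued.
    queue "kill `cat #{unicorn_pid}`"
  end
  queue "bundle exec unicorn -c #{deploy_to}/#{unicorn_conf} -E production -D"
end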

Related

How can we run a rake task of one application through another application (both running on the same server)?

Issue:
I have two applications A & B, both running on the same server. I have a script in the filesystem:
cd /data/B/
bundle exec rake -T
When I run the script through the rails console of application A, it errors out because the console has loaded A's gems and the rake task fails, e.g.:
system("sh ~/test.sh")
rake aborted!
LoadError: cannot load such file -- log4r
/home/kumolus/api/config/application.rb:9:in `require'
/home/kumolus/api/config/application.rb:9:in `<top (required)>'
/home/kumolus/api/Rakefile:1:in `require'
/home/kumolus/api/Rakefile:1:in `<top (required)>'
(See full trace by running task with --trace)
When I run the script from the unix command line (irrespective of my pwd), it works:
cd ~
sh test.sh #works
cd /data/A #my application A's dir
sh ~/test.sh #also works
I need it to work through rails. Any help? Thanks!
Thanks guys for your help,
I found a solution: it's due to Bundler's default behaviour.
I used the with_clean_env method with a block, and the script gets executed.
Thanks Team
Cheers
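For anyone hitting the same wall, a minimal sketch of that fix. Bundler.with_clean_env restores the environment variables (BUNDLE_GEMFILE, RUBYOPT, and so on) to what they were before Bundler loaded, so the subshell resolves gems against B's Gemfile instead of A's:

# Run the script with A's Bundler environment stripped, so the
# `bundle exec rake -T` inside test.sh picks up B's Gemfile.
Bundler.with_clean_env do
  system("sh ~/test.sh")
end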
I think cd'ing to the folder should be sufficient to switch the environment within the shell process which is doing the cd'ing. Try this.
Go into project B and do
which bundle
in the command line, and copy the result somewhere.
Then do
which rake
and copy the result of that.
Then, switch to project A and start a rails console. Then try this:
`cd /path/to/projectB; bundle exec rake -T`
If that doesn't work, try this:
`cd /path/to/projectB; <result of doing "which bundle" earlier> exec rake -T`
Then if it still doesn't work, try this:
`cd /path/to/projectB; <result of doing "which bundle" earlier> exec <result of doing "which rake" earlier> -T`
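To make the last variant concrete, a hypothetical filled-in version (the RVM-style paths are invented placeholders, not output from the original post):

# Hypothetical paths standing in for the `which bundle` / `which rake` output:
`cd /path/to/projectB; /usr/local/rvm/gems/ruby-2.1.3/bin/bundle exec /usr/local/rvm/gems/ruby-2.1.3/bin/rake -T`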

Gemfile not found when running Cron job with Capistrano 3 and whenever gem

My cron job works fine on my local machine after running whenever -w. After deploying to my VPS, I get this error: release 20150415044915 doesn't exist. Any idea?
I looked at my crontab -e, and the path looks fine, where 20150502114703 is the correct release:
0 1 * * 1 /bin/bash -l -c 'cd /home/hey_production/releases/20150502114703 && bin/rails runner ....
Error Log:
/usr/local/rvm/gems/ruby-2.1.3/gems/bundler-1.7.3/lib/bundler/definition.rb:22:in `build': /home/hey_production/releases/20150415044915/Gemfile not found (Bundler::GemfileNotFound)
from /usr/local/rvm/gems/ruby-2.1.3/gems/bundler-1.7.3/lib/bundler.rb:154:in `definition'
from /usr/local/rvm/gems/ruby-2.1.3/gems/bundler-1.7.3/lib/bundler.rb:117:in `setup'
from /usr/local/rvm/gems/ruby-2.1.3/gems/bundler-1.7.3/lib/bundler/setup.rb:17:in `<top (required)>'
from /usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:135:in `require'
from /usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:135:in `rescue in require'
from /usr/local/rvm/rubies/ruby-2.1.3/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:144:in `require'
from bin/rails:14:in `<main>'
Basically, the environment variable that tells cron where to look for a Gemfile is missing, so you need to add that variable to the environment at the time cron runs the job.
You can do that in your schedule.rb:
env BUNDLE_GEMFILE, ENV["/home/hey_production/current/Gemfile"]
or directly inside the crontab file, via crontab -e (before the cron entries):
BUNDLE_GEMFILE="/home/hey_production/current/Gemfile"
Hope it helps.
EDIT
Forgot the symbol above in schedule.rb
The line in schedule.rb should be like this.
env :BUNDLE_GEMFILE, ENV["/#{path}/Gemfile"]
or
env :BUNDLE_GEMFILE, ENV["/home/hey_production/current/Gemfile"]
As a follow-up on the previous answer, set the env variable in whenever inside schedule.rb, before the schedule blocks, like:
def production?
  @environment == 'production'
end

set :output, { :error => '/home/current/log/cron_error.log', :standard => '/home/current/log/cron.log' }

every 2.hours, roles: [:utility] do
  runner "/home/current/lib/cron_jobs/launch_pending_emails.rb"
end
and inside your environment-specific deploy file (e.g. staging.rb), set the env:
set :whenever_roles, [:utility]
set :whenever_environment, defer { stage }
set(:whenever_command) { "STAGE=#{stage} bundle exec whenever" }
require 'whenever/capistrano'
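Putting the pieces together, a minimal config/schedule.rb sketch (the paths and the job are placeholders, not taken from the original posts). Pointing BUNDLE_GEMFILE at the current symlink rather than a dated release path means the crontab keeps working after old releases are purged:

# config/schedule.rb -- a hedged sketch; paths and job names are assumptions.
env :BUNDLE_GEMFILE, '/home/hey_production/current/Gemfile'

set :output, '/home/hey_production/shared/log/cron.log'

# Mirrors the `0 1 * * 1` entry from the question (Mondays at 1 am).
every :monday, at: '1:00 am' do
  runner 'SomeModel.weekly_task' # hypothetical job
end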

Issue with running bundle using Capistrano

I've seen this issue in a few other questions/Github issues, but they haven't been able to shed enough light to lead to the solution.
The error:
bash: bundle: command not found
SSHKit::Runner::ExecuteError: Exception while executing as my-user#my-IP-address: cd /path-to-my-app/current ; bundle exec unicorn -D -c config/unicorn.rb -E production exit status: 127
cd /path-to-my-app/current ; bundle exec unicorn -D -c config/unicorn.rb -E production stdout: Nothing written
cd /path-to-my-app/current ; bundle exec unicorn -D -c config/unicorn.rb -E production stderr: bash: bundle: command not found
I'm using the latest version of Capistrano on my rails app. Specifically, it's a Rails 4.2.0 app with the gems capistrano-rails and capistrano-rvm.
I have a pretty standard config/unicorn.rb file, which is how I start unicorn while logged into my server. I first kill the current PID with:
kill -9 PID
Then I start unicorn with:
bundle exec unicorn -D -c /path-to-my/config/unicorn.rb -E production
This works great, but obviously I need Capistrano to do it, so I essentially have those tasks in my deploy.rb file; however, I get the error mentioned above. Here are the two tasks in deploy.rb:
namespace :deploy do
  namespace :unicorn do
    task :restart do
      on roles(:app), in: :sequence, wait: 5 do
        execute "kill -s USR2 `cat /path-to-my/tmp/pids/app-name.pid`"
      end
    end

    desc 'Start unicorn'
    task :start do
      on roles(:app), in: :sequence, wait: 5 do
        execute "cd #{current_path} ; bundle exec unicorn -D -c config/unicorn.rb -E production"
      end
    end
  end
end
I have a similar task to restart DelayedJob, which throws the same error. The command is execute "cd #{current_path} ; RAILS_ENV=production bin/delayed_job -n2 restart". Like I mentioned above, when logged into the server with my user (same user I use for Capistrano), all of these tasks work as expected.
Every other built-in task works great with Capistrano, like precompiling assets, migrating my database, etc. It is the custom tasks I've added that are getting the error.
Let me know if I can add anything to help solve the problem.
Look carefully at the output of Capistrano. Commands such as bundle exec rake assets:precompile initialize the required Ruby version:
cd /<app path>/20150301211440 && ( RAILS_ENV=staging /usr/local/rvm/bin/rvm 2.1.5 do bundle exec rake assets:precompile )
It seems your custom commands don't. So you are probably using the system Ruby by default, and it doesn't have the bundler gem.
Try specifying the Ruby version via RVM in your commands too. I think that should fix your problem.
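With the capistrano-rvm gem the asker already has, one hedged way to do that: SSHKit only applies its command map (which capistrano-rvm uses to prepend the RVM wrapper) when the command is passed as a symbol, so a raw "cd ... ; bundle ..." string is executed verbatim by the login shell and misses the mapping. A sketch of the start task rewritten accordingly:

desc 'Start unicorn'
task :start do
  on roles(:app), in: :sequence, wait: 5 do
    # `within` replaces the inline `cd`; passing :bundle as a symbol lets
    # SSHKit's command map (populated by capistrano-rvm/capistrano-bundler)
    # expand it to the RVM-wrapped bundle binary.
    within current_path do
      execute :bundle, :exec, :unicorn, '-D', '-c', 'config/unicorn.rb', '-E', 'production'
    end
  end
end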

How to enter rails console on production via capistrano?

I want to enter the rails console on production server from my local machine via capistrano.
I found some gists, e.g. https://gist.github.com/813291 and when I enter console via
cap production console
I get the following result
192-168-0-100:foldername username $ cap console RAILS_ENV=production
* executing `console'
* executing "cd /var/www/myapp/current && rails console production"
servers: ["www.example.de"]
[www.example.de] executing command
[www.example.de] rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3' -c 'cd /var/www/myapp/current && rails console production'
/var/www/myapp/releases/20120305102218/app/controllers/users_controller.rb:3: warning: already initialized constant VERIFY_PEER
Loading production environment (Rails 3.2.1)
Switch to inspect mode.
and that's it... Now I can enter some text, but nothing happens...
Has anybody an idea how to get that work or another solution for my problem?
I've added my own tasks for this kind of thing:
namespace :rails do
  desc "Remote console"
  task :console, :roles => :app do
    run_interactively "bundle exec rails console #{rails_env}"
  end

  desc "Remote dbconsole"
  task :dbconsole, :roles => :app do
    run_interactively "bundle exec rails dbconsole #{rails_env}"
  end
end

def run_interactively(command)
  server ||= find_servers_for_task(current_task).first
  exec %Q(ssh #{user}@#{myproductionhost} -t '#{command}')
end
I now say cap rails:console and get a console.
For Capistrano 3 you can add these lines in your config/deploy.rb:
namespace :rails do
  desc 'Open a rails console `cap [staging] rails:console [server_index default: 0]`'
  task :console do
    server = roles(:app)[ARGV[2].to_i]
    puts "Opening a console on: #{server.hostname}...."
    cmd = "ssh #{server.user}@#{server.hostname} -t 'cd #{fetch(:deploy_to)}/current && RAILS_ENV=#{fetch(:rails_env)} bundle exec rails console'"
    puts cmd
    exec cmd
  end
end
To open the first server in the servers list:
cap [staging] rails:console
To open the second server in the servers list:
cap [staging] rails:console 1
Reference: Opening a Rails console with Capistrano 3
exec is needed to replace the current process, otherwise you will not be able to interact with the rails console.
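A tiny illustration of that point (the host is a placeholder): Kernel#exec replaces the current process image, so the terminal's stdin and stdout end up wired directly to ssh, which is exactly what an interactive console needs:

# Hypothetical host, for illustration only.
exec "ssh deploy@app.example.com -t 'cd /var/www/app/current && bundle exec rails console'"
puts "never reached -- exec replaced the Rake process with ssh"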
A simple Capistrano 3 solution may be:
namespace :rails do
  desc "Run the console on a remote server."
  task :console do
    on roles(:app) do |h|
      execute_interactively "RAILS_ENV=#{fetch(:rails_env)} bundle exec rails console", h.user
    end
  end

  def execute_interactively(command, user)
    info "Connecting with #{user}@#{host}"
    cmd = "ssh #{user}@#{host} -p 22 -t 'cd #{fetch(:deploy_to)}/current && #{command}'"
    exec cmd
  end
end
Then you can call it say, on staging, with: cap staging rails:console. Have fun!
For Capistrano > 3.5 and rbenv, working in 2021:
namespace :rails do
  desc "Open the rails console on one of the remote servers"
  task :console do |current_task|
    on roles(:app) do |server|
      server ||= find_servers_for_task(current_task).first
      exec %Q[ssh -l #{server.user || fetch(:user)} #{server.hostname} -p #{server.port || 22} -t 'export PATH="$HOME/.rbenv/bin:$PATH"; eval "$(rbenv init -)"; cd #{release_path}; bin/rails console -e production']
    end
  end
end
I have fiddled with that approach as well, but then resorted to avoiding building my own interactive SSH shell client and just went with this snippet I found that simply uses good old SSH. This might not be suitable if you have some weird SSH gateway proxying going on, but for logging into a box and performing some operations, it works like a charm.
In my experience, capistrano isn't built to work very well with interactive terminals.
If you have to execute things in multiple terminals, I'd suggest iterm, which has a "send to all windows" function that works very well for me:
http://www.iterm2.com/#/section/home
I have a somewhat difficult environment, which is in flux... so bash -lc isn't really an option right now. My solution is similar to @Rocco's, but it's a bit more refined.
# run a command in the `current` directory of `deploy_to`
def run_interactively(command)
  # select a random server to run on
  server = find_servers_for_task(current_task).sample
  # cobble together a shell environment
  app_env = fetch("default_environment", {}).map { |k, v| "#{k}=\"#{v}\"" }.join(' ')
  # Import the default environment, cd to the currently deployed app, run the command
  command = %Q(ssh -tt -i #{ssh_options[:keys]} #{user}@#{server} "env #{app_env} bash -c 'cd #{deploy_to}/current; #{command}'")
  puts command
  exec command
end

namespace :rails do
  desc "rails console on a sidekiq worker"
  task :console, role: :sidekiq_normal do
    run_interactively "bundle exec rails console #{rails_env}"
  end
end
For a Rails console in Capistrano 3 see this gist: https://gist.github.com/joost/9343156
I have just used the capistrano-rails-console gem to open a rails console, and it works fine.

Delayed job won't start using Capistrano

I cannot start the delayed_job process using a Capistrano recipe. Here's the error I am getting:
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `mkdir': File exists - /my_app/server/releases/20101120001612/tmp/pids (Errno::EEXIST)
Here's the Capistrano code (note: I have tried both the start and restart commands):
after "deploy:restart", "delayed_job:start"
task :start, :roles => :app do
run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -n 2 start"
end
More detailed errors from the deployment logs:
executing command
[err :: my_server] /usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `mkdir': File exists - /my_app/server/releases/20101120001612/tmp/pids (Errno::EEXIST)
[err :: my_server] from /usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `daemonize'
[err :: my_server] from script/delayed_job:5:in `<main>'
command finished
failed: "sh -c 'cd /my_app/server/current; RAILS_ENV=production script/delayed_job -n 3 restart'" on myserevr
This is a Rails 3 app (v3.0.3)
I was seeing the same problem.
It turned out I was missing the ~/apps/application_name/shared/pids directory.
Creating it made the problem go away.
There's no need to set up a custom dj_pids directory.
I also got this error and found a couple of issues:
Ensure you have a shared/pids folder.
Ensure you have the correct hooks set up.
Your deploy.rb script should contain:
require "delayed/recipes"
after "deploy:stop", "delayed_job:stop"
after "deploy:start", "delayed_job:start"
after "deploy:restart", "delayed_job:restart"
I'd copied the hooks from an old post and they appear to be incorrect now. These are from the actual delayed_job recipe file comments.
I believe cap deploy:setup should create the pids folder, but I set things up a different way and it was not created. app/current/tmp/pids links to app/shared/pids, and this broken link was causing the false "directory exists" error.
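Building on that, one way to guarantee the folder exists on every deploy is a small hook of your own (a sketch in the same Capistrano 2 style as the recipes above; the task name is made up):

# Make sure the shared pids directory exists before delayed_job daemonizes.
namespace :delayed_job do
  task :ensure_pids_dir, :roles => :app do
    run "mkdir -p #{shared_path}/pids"
  end
end

before "delayed_job:start",   "delayed_job:ensure_pids_dir"
before "delayed_job:restart", "delayed_job:ensure_pids_dir"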
This is how I fixed the issue: I passed an explicit pids directory using --pid-dir. Not sure if this is perfect, but it worked.
task :restart, :roles => :app do
  run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -n #{dj_proc_count} --pid-dir=#{app_root}/shared/dj_pids restart"
end
Add the creation of this directory before starting:
after "deploy:restart", "delayed_job:start"
task :start, :roles => :app do
run "mkdir #{current_path}/tmp/pids"
run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -n 2 start"
end
I had the same issue. It turned out that there was an existing
application_name/shared/pids/delayed_job.main.pid
file with incorrect owner permissions, which was causing the deployment to fail. Fixing that file's permissions solved the issue for me.
Since the creation of the directories is cheap and fast, use the following callback:
before 'deploy', 'deploy:setup'
This will ensure that structure is always there before each deploy.
