Rubber gem having issue with graphite server

I am having an issue with rubber: whenever I run cap rubber:bootstrap on a staging instance, it always gets stuck on this error:
* executing "sudo -p 'sudo password: ' bash -l -c 'cd /mnt/localstore-production/releases/20120519213905 && RUBBER_ENV=production RAILS_ENV=production ./script/rubber config --force --file=\"role/graphite_server\"'"
servers: ["production.localstore.com"]
[production.localstore.com] executing command
** [out :: production.localstore.com] Instance not found for host: ip-10-2-118-252
** [out :: production.localstore.com]
command finished in 5849ms
failed: "/bin/bash -l -c 'sudo -p '\\''sudo password: '\\'' bash -l -c '\\''cd /mnt/localstore-production/releases/20120519213905 && RUBBER_ENV=production RAILS_ENV=production ./script/rubber config --force --file=\"role/graphite_server\"'\\'''" on production.localstore.com

The issue was actually that I had changed the static IP of the instance from the AWS Console, which is why the host info of the instance had changed.
So I ran cap rubber:refresh to refresh everything and then bootstrapped the instance, and that solved my problem.
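For reference, the recovery sequence was just the two commands below; as I understand it, rubber keeps a local copy of the instance metadata, and rubber:refresh re-syncs it from AWS, so the order matters:
cap rubber:refresh
cap rubber:bootstrap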

Related

Rails Rubber deployment timeout

I am trying to deploy an Amazon EC2 instance using rubber. On running
cap rubber:create_staging I get the following timeout:
* 2014-04-06 20:14:09 executing `rubber:postgresql:setup_apt_sources'
servers: ["production.foo.com"]
** sftp upload #<StringIO:0x0000000420a748> -> /tmp/configure_postgresql_repository
[production.foo.com] /tmp/configure_postgresql_repository
[production.foo.com] done
* sftp upload complete
* executing "sudo -p 'sudo password: ' bash -l /tmp/configure_postgresql_repository"
servers: ["production.foo.com"]
[production.foo.com] executing command
command finished in 1161ms
* executing "sudo -p 'sudo password: ' bash -l -c 'apt-get -q update'"
servers: ["production.foo.com"]
connection failed for: production.foo.com (Timeout::Error: execution expired)
I tried increasing the timeout to 60 seconds, but it didn't work.
Any suggestions?
Looks like the user you are connecting to the instance as via Rubber/Capistrano doesn't have passwordless sudo access. Try creating the file /etc/sudoers.d/90-user-you-use-for-rubber with the following content:
user-you-use-for-rubber ALL=(ALL) NOPASSWD:ALL
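Keep the file's mode at 0440 and validate it with visudo -c before logging out, since a sudoers syntax error can lock you out of sudo entirely. If you want to confirm the fix from Capistrano before retrying the deploy, a throwaway task along these lines works (a sketch; the task name is made up):
task :check_passwordless_sudo do
  # "sudo -n" refuses to prompt, so this fails fast if a password is still required
  run "sudo -n true && echo 'passwordless sudo OK'"
end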

Errors in Ubuntu Terminal when deploying my Rails app

I have built a Ruby on Rails application on Ubuntu. My application is on GitHub. Normally I transfer the latest changes from GitHub to my website, www.mywebsite.com, by running in Terminal:
cap production deploy:migrations
And everything works fine. I can watch the progress of the transfer in Terminal, like:
* executing "rm -rf /var/www/apps/myapp/releases/20130720123042/public/assets &&\\\n mkdir -p /var/www/apps/myapp/releases/20130720123042/public &&\\\n mkdir -p /var/www/apps/myapp/shared/assets &&\\\n ln -s /var/www/apps/myapp/shared/assets /var/www/apps/myapp/releases/20130720123042/public/assets"
servers: ["92.51.243.6"]
[92.51.243.6] executing command
command finished in 626ms
* executing `bundle:install'
* executing "cd /var/www/apps/myapp/releases/20130720123042 && bundle install --gemfile /var/www/apps/myapp/releases/20130720123042/Gemfile --path /var/www/apps/myapp/shared/bundle --deployment --quiet --without development test"
servers: ["92.51.243.6"]
[92.51.243.6] executing command
command finished in 1595ms
* executing "chmod -R g+w /var/www/apps/myapp/releases/20130720123042"
servers: ["92.51.243.6"]
[92.51.243.6] executing command
command finished in 645ms
Now the process doesn't complete as it used to, and I get this message:
* executing "cd /var/www/apps/myapp/releases/20130720123042 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile"
servers: ["92.51.243.6"]
[92.51.243.6] executing command
** [out :: 92.51.243.6] rake aborted!
** [out :: 92.51.243.6] Received wrong number of arguments. [nil]
** [out :: 92.51.243.6] /var/www/apps/myapp/shared/bundle/ruby/1.8/gems/omniauth-1.1.0/lib/omniauth/strategy.rb:136:in `initialize'
more stuff here like the line above..
failed: "rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell 'ree-1.8.7-2012.02' -c 'cd /var/www/apps/myapp/releases/20130720123042 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile'" on 92.51.243.6
mypc@ubuntu:~/myapp$
Any idea how I can fix this problem? Thanks.
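One way to get a fuller picture of this kind of failure is to re-run the failing step by hand on the server, adding rake's --trace flag for the complete backtrace (the release path below is copied from the log above):
cd /var/www/apps/myapp/releases/20130720123042
bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile --trace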

During cap deploy:cold - command not found for /etc/init.d/unicorn

I'm very close to having my first Rails app live on a Linode VPS, but I keep getting a strange error message near the end of cap deploy:cold. I've been following RailsCasts episode 335 to deploy my Rails app to a VPS using nginx, Unicorn, PostgreSQL, rbenv and more (unfortunately for me, from a Windows machine). I'm hosting on a Linode Ubuntu 10.04 LTS profile. Near the end of the deploy I get this error message:
* 2013-04-24 13:08:13 executing `deploy:start'
* executing "sudo -p 'sudo password: ' /etc/init.d/unicorn_wheretoski start"
servers: ["xxx.xx.xxx.242"]
[xxx.xx.xxx.242] executing command
** [out :: xxx.xx.xxx.242]
** [out :: xxx.xx.xxx.242] sudo: /etc/init.d/unicorn_wheretoski: command not found
** [out :: xxx.xx.xxx.242]
command finished in 309ms
failed: "env PATH=$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH sh -c 'sudo -p '\\''
sudo password: '\\'' /etc/init.d/unicorn_wheretoski start'" on xxx.xx.xxx.242
When I go to the server, the file is there:
:~/apps/wheretoski/current$ ls /etc/init.d/unicorn_wheretoski
/etc/init.d/unicorn_wheretoski
From deploy.rb
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      sudo "/etc/init.d/unicorn_#{application} #{command}"
    end
  end
  ......
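(That loop simply defines deploy:start, deploy:stop and deploy:restart, each shelling out to the init script, so deploy:start is effectively running sudo /etc/init.d/unicorn_wheretoski start – exactly the command that fails above.)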
And from unicorn_init.sh
#!/bin/sh
set -e
# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/deployer/apps/wheretoski/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
AS_USER=deployer
set -u
OLD_PIN="$PID.oldbin"

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}

run () {
  if [ "$(id -un)" = "$AS_USER" ]; then
    eval $1
  else
    su -c "$1" - $AS_USER
  fi
}

case "$1" in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  run "$CMD"
  ;;
I then head over to the VPS and try to execute the various commands and I get an error when executing the following:
deployer@li543-242:~/apps/wheretoski/current$ bundle exec unicorn -D -c $/home/apps/wheretoski/current/config/unicorn.rb -E production
/home/deployer/.rbenv/versions/1.9.3-p125/lib/ruby/gems/1.9.1/gems/bundler-1.3.5/lib/bundler/rubygems_integration.rb:214:in `block in replace_gem': unicorn is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
from /home/deployer/.rbenv/versions/1.9.3-p125/bin/unicorn:22:in `<main>'
Here is what I get for echo $PATH on the VPS:
/home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deployer/.rbenv/versions/1.9.3-p125/bin
I have tried the unicorn gem both under the production group and as part of the main gems; both produced this same error message. When I open the Gemfile.lock in the current folder on the server, unicorn only shows up under the dependencies, not under the specs.
Thanks for any help!!
Alright, there were a couple of issues here.
1 - I had different versions of Bundler on my local machine and the server.
2 - Developing on a Windows machine: I had to put the unicorn gem under a production group in my Gemfile, and for whatever reason the Gemfile.lock was not generated correctly as a result. I had a buddy with a Mac pull my code, move unicorn to the main section of the Gemfile, and bundle install it. This created a good Gemfile.lock, which is now in use on the server.
Not sure if this will be helpful to others or not; it was quite a weird error.
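In case it helps anyone else, the Gemfile arrangement that ended up working looks roughly like this (a sketch of my setup, not a general prescription):
# Gemfile
gem 'unicorn'   # in the main section, so it lands in Gemfile.lock's specs

group :development, :test do
  # only genuinely environment-specific gems live in groups
end
After changing it, regenerate Gemfile.lock with bundle install on a machine whose platform matches the server (or at least not Windows), and commit the lockfile.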

How to setup a server for deployment and to do a cold deploy with Capistrano?

What is the proper way to run deploy:setup and to do a cold deploy with Capistrano?
Using this deploy.rb, Capistrano v2.14.2, and Vagrant to virtualize my server, this is my scenario:
when running deploy:setup, Capistrano makes use of root privileges to prepare the directory structure for deployment:
$ cap deploy:setup
* 2013-02-28 14:50:21 executing `deploy:setup'
* executing "sudo -p 'sudo password: ' mkdir -p /home/vagrant/example /home/vagrant/example/releases /home/vagrant/example/shared /home/vagrant/example/shared/system /home/vagrant/example/shared/log /home/vagrant/example/shared/pids"
servers: ["example.com"]
[example.com] executing command
command finished in 29ms
* executing "sudo -p 'sudo password: ' chmod g+w /home/vagrant/example /home/vagrant/example/releases /home/vagrant/example/shared /home/vagrant/example/shared/system /home/vagrant/example/shared/log /home/vagrant/example/shared/pids"
servers: ["example.com"]
[example.com] executing command
command finished in 11ms
yet upon deploy:cold, Capistrano attempts to check out the code (from git in this case) and write as the vagrant user – the user specified in deploy.rb:
$ cap deploy:cold
* 2013-02-28 14:50:47 executing `deploy:cold'
* 2013-02-28 14:50:47 executing `deploy:update'
** transaction: start
* 2013-02-28 14:50:47 executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote git@github.com:mariusbutuc/realtime-faye.git master"
command finished in 2360ms
* executing "if [ -d /home/vagrant/example/shared/cached-copy ]; then cd /home/vagrant/example/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard a7c05516bc31c2c18f89057c02f84bfad83a6b59 && git clean -q -d -x -f; else git clone -q git#github.com:mariusbutuc/realtime-faye.git /home/vagrant/example/shared/cached-copy && cd /home/vagrant/example/shared/cached-copy && git checkout -q -b deploy a7c05516bc31c2c18f89057c02f84bfad83a6b59; fi"
servers: ["example.com"]
[example.com] executing command
** [example.com :: out] fatal: could not create work tree dir '/home/vagrant/example/shared/cached-copy'.: Permission denied
command finished in 26ms
*** [deploy:update_code] rolling back
* executing "rm -rf /home/vagrant/example/releases/20130228195049; true"
servers: ["example.com"]
[example.com] executing command
command finished in 7ms
failed: "sh -c 'if [ -d /home/vagrant/example/shared/cached-copy ]; then cd /home/vagrant/example/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard a7c05516bc31c2c18f89057c02f84bfad83a6b59 && git clean -q -d -x -f; else git clone -q git#github.com:mariusbutuc/realtime-faye.git /home/vagrant/example/shared/cached-copy && cd /home/vagrant/example/shared/cached-copy && git checkout -q -b deploy a7c05516bc31c2c18f89057c02f84bfad83a6b59; fi'" on example.com
Of course, the deploy:check report bears no surprises: the vagrant user cannot write to the directories created during deploy:setup, since the two users belong to different groups – root:root versus vagrant:vagrant:
$ cap deploy:check
[...]
The following dependencies failed. Please check them and try again:
--> You do not have permissions to write to `/home/vagrant/example'. (example.com)
--> You do not have permissions to write to `/home/vagrant/example/releases'. (example.com)
--> `/home/vagrant/example/shared' is not writable (example.com)
What is the reasoning behind this, and which prerequisite is still unsatisfied, so that the deployment can get past this issue?
The deploy:setup task probably should not be using sudo to create the app directory, since that is likely causing it to be owned by root.
You can turn that off in your deploy.rb file with:
set :use_sudo, false
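In context, a minimal deploy.rb for this scenario would look something like this (a sketch using the names from the question):
set :user, 'vagrant'
set :use_sudo, false                     # run setup commands as the deploy user, not via sudo
set :deploy_to, '/home/vagrant/example'
With use_sudo disabled, deploy:setup creates the directory tree as vagrant, so everything under deploy_to is writable by the deploying user from the start.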
Since there is no group setting in Capistrano, my workaround is to add one, for example:
set :user, 'vagrant'
set :group, 'vagrant'
and then create a task to "fix" the ownership after running deploy:setup:
after "deploy:setup", :setup_ownership
task :setup_ownership do
run "#{sudo} chown -R #{user}:#{group} #{deploy_to} && chmod -R g+s #{deploy_to}"
end
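(The chmod -R g+s part sets the setgid bit on the directories, so files created in them later inherit the vagrant group instead of the creating user's primary group.)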
But the only thing better than fixing an issue is not having it in the first place, so Stuart's answer is both wiser and more elegant.

ferret_server start problem during deploy

When I do cap deploy, everything works fine except ferret_server: while restarting the server, it tries to stop ferret_server in production mode and then start it again, but this fails due to a permission problem. Here is the output from my deploy:
** transaction: commit
executing `deploy:restart'
triggering before callbacks for `deploy:restart'
executing `ferret:stop'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production stop || true"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
executing "chown www-data -R /home/sj/reelinfo/current/"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
executing "touch /home/sj/reelinfo/current/tmp/restart.txt"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
triggering after callbacks for `deploy:restart'
executing `ferret:start'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production start"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
failed: "sh -c \"cd /home/sj/reelinfo/current; script/ferret_server -e production start\"" on 67.23.28.171
I've had this problem as well; the issue is that script/ferret_server did not have executable permissions.
I added the following deploy task to handle the permissions:
before "deploy:restart", "correct_ferret_server_permissions"
task :correct_ferret_server_permissions do
run "chmod a+x #{current_path}/script/ferret_server
end
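If the app is in git, you can also fix this at the source so every future checkout carries the executable bit: run git update-index --chmod=+x script/ferret_server locally and commit; the deploy task then becomes unnecessary.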
