I want to deploy a Ruby on Rails web site and am using Capistrano for this purpose. After I filled in the deploy.rb file and ran cap deploy:setup, this is what I got:
C:\Sites\blog>cap deploy:setup
  * 2012-10-31 15:39:22 executing `deploy:setup'
  * executing "mkdir -p /var/www/blog /var/www/blog/releases /var/www/blog/shared /var/www/blog/shared/system /var/www/blog/shared/log /var/www/blog/shared/pids"
    servers: ["188.121.54.128"]
Password:
    [188.121.54.128] executing command
 ** [out :: 188.121.54.128] This account is currently not available.
    command finished in 153ms
failed: "sh -c 'mkdir -p /var/www/blog /var/www/blog/releases /var/www/blog/shared /var/www/blog/shared/system /var/www/blog/shared/log /var/www/blog/shared/pids'" on 188.121.54.128
C:\Sites\blog>
Any suggestions on what might be going wrong?
It seems that you cannot even log in to your remote server. Capistrano needs SSH access, so try to SSH into the server using the same credentials as in your cap configuration. If you can't, contact your hosting provider and ask them to give you SSH access to the machine.
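For what it's worth, "This account is currently not available." is the message printed by /usr/sbin/nologin, so the deploy user's login shell is most likely set to nologin or false in /etc/passwd. A minimal sketch of the check, using a made-up passwd entry (the user name and fields below are hypothetical):

```ruby
# Inspect an /etc/passwd-style entry to see whether the account can
# actually log in. The entry here is hypothetical example data.
entry = "deploy:x:1001:1001:Deploy user:/home/deploy:/usr/sbin/nologin"
user, _pw, _uid, _gid, _gecos, _home, shell = entry.split(":")

# Shells that refuse interactive/SSH logins
disabled_shells = %w[/usr/sbin/nologin /sbin/nologin /bin/false]

if disabled_shells.include?(shell)
  puts "#{user}: login disabled (shell is #{shell})"
else
  puts "#{user}: login allowed (shell is #{shell})"
end
```

On a real server you would run `getent passwd deploy` and look at the last field; if it is one of the disabled shells, only the hosting provider (or root) can change it.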
I am getting the following error while creating an Elastic Beanstalk environment:
Command failed on instance. Return code: 1 Output: (TRUNCATED)... ^
/var/app/ondeck/config/environment.rb:5:in `<top (required)>'
/opt/rubies/ruby-2.4.3/bin/bundle:23:in `load'
/opt/rubies/ruby-2.4.3/bin/bundle:23:in `<main>'
Tasks: TOP => db:migrate => environment
(See full trace by running task with --trace.)
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/12_db_migration.sh failed.
For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the /var/log/eb-activity.log file, I found the following errors:
Tasks: TOP => db:migrate => environment (See full trace by running
task with --trace) (Executor::NonZeroExitStatus)
AppDeployStage0/AppDeployPreHook/12_db_migration.sh] : Activity failed.
AppDeployStage0/AppDeployPreHook] : Activity failed.
AppDeployStage0] : Activity failed.
Application update - Command CMD-AppDeploy failed
I encountered this same problem when using Elastic Beanstalk with an external Amazon RDS database. Basically, the problem is that the Elastic Beanstalk pre-deployment scripts will attempt to migrate the database before it even exists.
I discovered two ways to solve this.
The first way is to set the RAILS_SKIP_MIGRATIONS=true environment variable on your app configuration. This should allow you to at least get the codebase deployed. After that, you can use eb ssh to shell into the app, browse to the /var/app/current/ folder, and manually run bundle exec rails db:create and bundle exec rails db:migrate.
Another way to solve the problem is to create an app pre-deploy shell script hook file in the /opt/elasticbeanstalk/hooks/appdeploy/pre/ folder.
I used the /opt/elasticbeanstalk/hooks/appdeploy/pre/12_db_migration.sh file as reference, and here's what I came up with.
Create a file in your project called /.ebextensions/0001_rails_db_create.config, with the following contents:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/11_create_db.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -xe

      EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
      EB_APP_STAGING_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_staging_dir)
      EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config container -k app_user)
      EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)

      . $EB_SUPPORT_DIR/envvars

      RAKE_TASK="db:create"

      . $EB_SCRIPT_DIR/use-app-ruby.sh

      cd $EB_APP_STAGING_DIR
      if su -s /bin/bash -c "bundle exec $EB_SCRIPT_DIR/check-for-rake-task.rb $RAKE_TASK" $EB_APP_USER; then
        if [ "$RAILS_SKIP_DB_CREATE" = "true" ]; then
          echo "Skipping database creation (RAILS_SKIP_DB_CREATE=true)."
        else
          su -s /bin/bash -c "leader_only bundle exec rake db:create" $EB_APP_USER
        fi
      else
        echo "No $RAKE_TASK task in Rakefile, skipping database creation."
      fi
Commit that file to your git repo and then run eb deploy.
This should create the shell script hook file which will create the rails db if it doesn't exist. The database migration shell script hook file should run immediately afterwards, since its name starts with the number 12.
Once this script is in place, if you ever want to bypass it, you can set the RAILS_SKIP_DB_CREATE=true environment variable on your app.
I'm following #335 Deploying to a VPS, and when I run cap deploy:cold, everything goes fine except that at the end it reports:
executing 'deploy:start'
executing "/etc/init.d/unicorn_just4magic start"
servers: ["106.XXX.XXX.XXX"]
[106.XXX.XXX.XXX] executing command
out :: 106.XXX.XXX.XXX sh: /etc/init.d/unicorn_just4magic: Permission denied
command finished in 502ms
failed: "env PATH=$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH sh -c '/etc/init.d/unicorn_just4magic start'" on
106.XXX.XXX.XXX
I can run rails server manually on the VPS with no problem at all. But when using cap to deploy, I get the above error, and when I visit my site I get a "Sorry, something went wrong" page.
UPDATE:
deploy.rb is here, and here is the start/restart part
%w[start stop restart].each do |command|
  desc "#{command} unicorn server"
  task command, roles: :app, except: { no_release: true } do
    run "/etc/init.d/unicorn_#{application} #{command}"
  end
end
UPDATE2:
Now the "Permission denied" message no longer appears, but I get another error:
sudo: /etc/init.d/unicorn_just4magic: command not found
I found Capistrano deploy:start with unicorn and During cap deploy:cold - command not found for /etc/init.d/unicorn. I changed the line separators of the shell script, removed Gemfile.lock from git, and set :bundle_flags, ''. I still get the error.
I solved it by giving the local file config/unicorn_init.sh executable permissions: run chmod +x config/unicorn_init.sh, push it to your git repo, and cap deploy to the server. It worked like a charm for me. It seems that fiddling with the permissions directly on the server does not play well.
Also, if you can't seem to find the file as you describe ("command not found"), try running cap deploy:setup once more with the new permissions and go from there. It might be that the symlink is not created correctly due to the permission problem.
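To catch this before deploying, you can check and fix the executable bit locally. A small sketch, using a temp file as a stand-in for config/unicorn_init.sh (the ensure_executable helper is my own, not part of Capistrano):

```ruby
require "tempfile"

# Add the executable bit to a script if it is missing — the
# programmatic equivalent of `chmod +x config/unicorn_init.sh`.
def ensure_executable(path)
  mode = File.stat(path).mode & 0o777
  File.chmod(mode | 0o111, path) unless File.executable?(path)
  File.executable?(path)
end

# Demo on a temp file standing in for the real init script:
script = Tempfile.new(["unicorn_init", ".sh"])
script.write("#!/bin/sh\necho 'unicorn stub'\n")
script.close
puts ensure_executable(script.path)
```

If your workstation can't set the bit at all (e.g. on Windows), `git update-index --chmod=+x config/unicorn_init.sh` records it in the repo directly.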
Hope that helps!
By default a Unix user has write permissions on its own /home/user/ directory.
The file unicorn_just4magic is not under the home directory or any other directory your user is allowed to write to, thus you get the "Permission denied" error.
To solve the issue you can either:
- move unicorn_just4magic somewhere under your home directory (you can set this in your unicorn config file), or
- give your user ownership of the script under /etc/init.d/:
$ chown your_username /etc/init.d/unicorn_just4magic
When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server.
Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password.
I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server.
$ cap deploy:setup
"no seed data"
triggering start callbacks for `deploy:setup'
* 13:42:18 == Currently executing `multistage:ensure'
*** Defaulting to `development'
* 13:42:18 == Currently executing `development'
* 13:42:18 == Currently executing `deploy:setup'
triggering before callbacks for `deploy:setup'
* 13:42:18 == Currently executing `db:configure_mongoid'
* executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
servers: ["dev1.noob.com", "176.9.24.217"]
Password:
I found the issue: there were staging.rb and development.rb files which were overriding my cap script credentials whenever I tried to deploy the application in a different environment.
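In a Capistrano 2 multistage setup, the stage file is loaded after deploy.rb, so anything set there silently wins. A hypothetical illustration of how this causes an unexpected password prompt (the user name and key path below are made up):

```ruby
# config/deploy.rb — the credentials you think are in effect:
set :user, "deploy"
ssh_options[:keys] = ["~/.ssh/id_rsa"]

# config/deploy/staging.rb — loaded after deploy.rb for the staging
# stage, so these settings override the ones above and Capistrano
# falls back to password authentication:
set :user, "other_user"
ssh_options[:keys] = ["~/.ssh/old_key"]
```

Checking each stage file for stray :user or ssh_options settings is usually the quickest way to track this down.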
I'm using Rails 3 and Ruby 1.9.2, and I'm trying to deploy using Capistrano. When I run cap deploy:check, Capistrano tells me that it can't find git on my deployment server (see below).
Any thoughts on what I'm doing wrong?
Here's my setup.
- I have a git repo on GitHub
- I have a laptop with an updated local copy of the GitHub repo
- I have a local "production" server (192.168.0.103) where the production app should be deployed
- I'm running all commands from the local repo on my laptop (not the production server)
If I run cap deploy:setup, my deploy.rb file successfully adds the "releases" and "shared" directories on my production server (aka 192.168.0.103).
If I run the cap deploy:check command, it fails with the error message of
`git' could not be found in the path (192.168.0.103).
What is strange (to me at least) is that git is definitely installed on 192.168.0.103 and the command that's used to see if git is there (which git) works when I ssh into 192.168.0.103.
So, obviously I'm doing something wrong (maybe in the deploy.rb file?)
Here's a sanitized version of the deploy.rb file
default_run_options[:pty] = true
set :application, "myapp"
set :repository, "git@github.com:xxxxxxx/myapp.git"
set :user, "abcde" #username that's used to ssh into 192.168.0.103
set :scm, :git
set :scm_passphrase, "xxxxxxxx"
set :branch, "master"
set :deploy_via, :remote_cache
set :deploy_to, "/Users/abcde/www"
role :web, "192.168.0.103"
role :app, "192.168.0.103"
Here's the output of cap deploy:check
* executing `deploy:check'
* executing "test -d /Users/abcde/www/releases"
servers: ["192.168.0.103"]
Password:
[192.168.0.103] executing command
command finished
* executing "test -w /Users/abcde/www"
servers: ["192.168.0.103"]
[192.168.0.103] executing command
command finished
* executing "test -w /Users/abcde/www/releases"
servers: ["192.168.0.103"]
[192.168.0.103] executing command
command finished
* executing "which git"
servers: ["192.168.0.103"]
[192.168.0.103] executing command
command finished
The following dependencies failed. Please check them and try again:
--> `git' could not be found in the path (192.168.0.103)
Okay, I think I figured it out.
I was basically having the same problem as described here: http://groups.google.com/group/capistrano/browse_thread/thread/50af1daed0b7a393
Here's a choice excerpt:
I try to deploy an application on a shared environment on which I installed git. I have added the path to bashrc, but this would work only in an interactive bash. When cap is logging in, it will not be running bash. If I run deploy:check it fails with --> `git' could not be found in the path (example.com). If I set :scm_command, "/home/user/opt/bin/git" the problem is solved for the deploy:check command, but when I run deploy:cold, it fails because it tries to run /home/user/opt/bin/git locally, and I can't even put git in there, because I use Windows on my PC.
Adding :scm_command, "path/to/my/git" fixed the issue, although I'm not 100% sure that this is the correct approach to take.
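Capistrano 2 also lets you point at different git binaries locally and remotely, which addresses exactly the local-vs-remote mismatch the excerpt describes (the paths here are illustrative):

```ruby
set :scm_command, "/home/user/opt/bin/git"  # git path on the deploy server
set :local_scm_command, "git"               # git path on your workstation
```

With both set, deploy:check uses :scm_command over SSH while local operations use :local_scm_command, so a Windows workstation does not need the server's git path.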
I would recommend using:
default_run_options[:env] = {'PATH' => '/home/user/opt/bin:$PATH'}
(Note that the PATH entry must be the directory containing git, not the git binary itself.) This adjusts the PATH environment variable (and you can add more variables if needed), so that not only the "Capistrano can't find the SCM" problem gets solved, but also any other similar problem caused by Capistrano not running an interactive bash (and therefore not executing .bashrc etc.).
I recently changed machines and had a few rough spots updating Rails. The server itself stayed as it was. Everything seemed to be fine, but not Capistrano. When I make changes, update SVN, and run
cap deploy
the correct new version of the repository is placed on the server. The logging in the terminal running Capistrano shows nothing out of the ordinary, but evidently no restart actually takes place, because the server continues running the old code. Running
cap deploy:restart
produces
Dans-iMac:rebuild apple$ cap deploy:restart
* executing `deploy:restart'
* executing `accelerator:smf_restart'
* executing `accelerator:smf_stop'
* executing "sudo -p 'sudo password: ' svcadm disable /network/mongrel/urbanistica-production"
servers: ["www.urbanisti.ca"]
Password:
[www.urbanisti.ca] executing command
command finished
* executing `accelerator:smf_start'
* executing "sudo -p 'sudo password: ' svcadm enable -r /network/mongrel/urbanistica-production"
servers: ["www.urbanisti.ca"]
[www.urbanisti.ca] executing command
command finished
* executing `accelerator:restart_apache'
* executing "sudo -p 'sudo password: ' svcadm refresh svc:/network/http:cswapache2"
servers: ["www.urbanisti.ca"]
[www.urbanisti.ca] executing command
command finished
But no evident change takes place. What might be going on? The Mongrel log on the server shows no changes whatever: it's still running the older version that predates the update.
The problem would seem to be in your custom (or at least non-built-in) restart task. The accelerator:smf_restart task, and the associated smf_stop and smf_start tasks being called, are not part of the standard Capistrano suite. Did you write those tasks yourself, or are they from a Capistrano extension? If so, which extension?
If you can post a link to that extension, or post your Capfile if you wrote the tasks yourself, that would help people figure out more specifically what's going wrong.