Getting Mixlib::ShellOut::ShellCommandFailed - ruby-on-rails

I get the following error while starting one of the instances on OpsWorks. Does anyone have any ideas about this error?
This is the log output printed just before the error itself (added at sethvargo's request):
[2014-08-13T17:27:08+00:00] INFO: Processing directory[/srv/www/instance/shared/cached-copy] action delete (opsworks_delayed_job::deploy line 48)
[2014-08-13T17:27:08+00:00] INFO: Processing ruby_block[change HOME to /home/deploy for source checkout] action run (opsworks_delayed_job::deploy line 56)
[2014-08-13T17:27:08+00:00] INFO: ruby_block[change HOME to /home/deploy for source checkout] called
[2014-08-13T17:27:08+00:00] INFO: Processing deploy[/srv/www/instance] action deploy (opsworks_delayed_job::deploy line 65)
[2014-08-13T17:27:09+00:00] INFO: deploy[/srv/www/instance] cloning repo git@github.com:xx/xx.git to /srv/www/instance/shared/cached-copy
[2014-08-13T17:27:17+00:00] INFO: deploy[/srv/www/instance] checked out branch: master onto: deploy reference: 714153bbb6a37f0484526cf4da3eda4fcd8df977
[2014-08-13T17:27:17+00:00] INFO: deploy[/srv/www/instance] synchronizing git submodules
[2014-08-13T17:27:17+00:00] INFO: deploy[/srv/www/instance] enabling git submodules
[2014-08-13T17:27:18+00:00] INFO: deploy[/srv/www/instance] set user to deploy
[2014-08-13T17:27:18+00:00] INFO: deploy[/srv/www/instance] set group to www-data
[2014-08-13T17:27:22+00:00] INFO: deploy[/srv/www/instance] copied the cached checkout to /srv/www/instance/releases/20140813172708
[2014-08-13T17:27:23+00:00] INFO: deploy[/srv/www/instance] set user to deploy
[2014-08-13T17:27:23+00:00] INFO: deploy[/srv/www/instance] set group to www-data
[2014-08-13T17:27:23+00:00] INFO: deploy[/srv/www/instance] running callback before_migrate
[2014-08-13T17:27:23+00:00] INFO: deploy[/srv/www/instance] created directories before symlinking: tmp,public,config
[2014-08-13T17:27:23+00:00] INFO: deploy[/srv/www/instance] linked shared paths into current release: system => public/system, pids => tmp/pids, log => log
[2014-08-13T17:27:23+00:00] INFO: deploy[/srv/www/instance] made pre-migration symlinks
[2014-08-13T17:27:24+00:00] INFO: deploy[/srv/www/instance] set user to deploy
[2014-08-13T17:27:24+00:00] INFO: deploy[/srv/www/instance] set group to www-data
[2014-08-13T17:27:24+00:00] INFO: Gemfile detected. Running bundle install.
[2014-08-13T17:27:24+00:00] INFO: sudo su - deploy -c 'cd /srv/www/instance/releases/20140813172708 && /usr/local/bin/bundle install --path /home/deploy/.bundler/instance --without=test development'
Here is the error:
================================================================================
Error executing action `deploy` on resource 'deploy[/srv/www/instance]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '127'
Cookbook Trace:
---------------
/var/lib/aws/opsworks/cache.stage2/cookbooks/opsworks_commons/libraries/shellout.rb:8:in `shellout'
/var/lib/aws/opsworks/cache.stage2/cookbooks/rails/libraries/rails_configuration.rb:41:in `bundle'
/var/lib/aws/opsworks/cache.stage2/cookbooks/deploy/definitions/opsworks_deploy.rb:103:in `block (3 levels) in from_file'
Resource Declaration:
---------------------
# In /var/lib/aws/opsworks/cache.stage2/cookbooks/deploy/definitions/opsworks_deploy.rb
65: deploy deploy[:deploy_to] do
66: provider Chef::Provider::Deploy.const_get(deploy[:chef_provider])
67: keep_releases deploy[:keep_releases]
68: repository deploy[:scm][:repository]
69: user deploy[:user]
70: group deploy[:group]
71: revision deploy[:scm][:revision]
72: migrate deploy[:migrate]
73: migration_command deploy[:migrate_command]
74: environment deploy[:environment].to_hash
75: create_dirs_before_symlink( deploy[:create_dirs_before_symlink] )
76: symlink_before_migrate( deploy[:symlink_before_migrate] )
77: action deploy[:action]
78:
79: if deploy[:application_type] == 'rails'
80: restart_command "sleep #{deploy[:sleep_before_restart]} && #{node[:opsworks][:rails_stack][:restart_command]}"
81: end
82:

Credit to Seth Vargo: the problem was that the bundler gem was not being installed by OpsWorks, so the bundle install command exited with status 127 ("command not found"). The Chef version was 11.10. We had to add the bundler gem manually to the default Chef setup file.
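A minimal sketch of verifying that on a booted instance, using the same Ruby paths that appear in the log above (illustrative commands, not OpsWorks output; the durable fix is the recipe change described above):

/usr/local/bin/gem list bundler          # is bundler visible to the Ruby that OpsWorks uses?
sudo /usr/local/bin/gem install bundler  # if not, install it with that same gem binary
/usr/local/bin/bundle --version          # the exit 127 should now be gone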

I faced the same issue while booting an instance under OpsWorks.
After debugging, I found the cause: the Chef version wasn't specified anywhere in the Stack or Layer settings. As a result, a default version of Chef was picked up when running the recipes, and it did not have bundler installed by default. So when the recipe tried to run bundle install, it exited with an error.
The simple solution is to pin the Bundler version explicitly, alongside any other custom JSON, under the stack or layer settings:
{
  <other settings>
  "opsworks_bundler": {
    "manage_package": "true",
    "version": "1.16.3"
  }
}

Related

Gitlab upgrade issue in Docker

I am trying to upgrade my GitLab CE instance, which is running in a Docker container, from version 11.9.1 to 14.2.1. I am following the required upgrade path from the official GitLab documentation, which is:
11.9.1->11.11.8->12.0.12->12.1.17->12.10.14->13.0.4->13.1.11->13.8.8->13.12.9->14.0.7->14.2.1
The last version that works is 14.0.7; I am also able to run the latest 14.1.x version. But during the migration to 14.2.x the following error appears and some migrations do not complete.
There was an error running gitlab-ctl reconfigure:
rails_migration[gitlab-rails] (gitlab::database_migrations line 51) had an error: Mixlib::ShellOut::ShellCommandFailed: bash[migrate gitlab-rails database] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/gitlab/resources/rails_migration.rb line 16) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash" "/tmp/chef-script20210903-28-4vm0c2" ----
STDOUT: rake aborted!
StandardError: An error has occurred, all later migrations canceled:
Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active': {:job_class_name=>"CopyColumnUsingBackgroundMigrationJob", :table_name=>"ci_job_artifacts", :column_name=>"id", :job_arguments=>[["id", "job_id"], ["id_convert_to_bigint", "job_id_convert_to_bigint"]]}
Finalize it manualy by running
sudo gitlab-rake gitlab:background_migrations:finalize[CopyColumnUsingBackgroundMigrationJob,ci_job_artifacts,id,'[["id"\, "job_id"]\, ["id_convert_to_bigint"\, "job_id_convert_to_bigint"]]']
For more information, check the documentation
https://docs.gitlab.com/ee/user/admin_area/monitoring/background_migrations.html#database-migrations-failing-because-of-batched-background-migration-not-finished
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database/migration_helpers.rb:1129:in `ensure_batched_background_migration_is_finished'
/opt/gitlab/embedded/service/gitlab-rails/db/post_migrate/20210706212710_finalize_ci_job_artifacts_bigint_conversion.rb:14:in `up'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:61:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Caused by:
Expected batched background migration for the given configuration to be marked as 'finished', but it is 'active': {:job_class_name=>"CopyColumnUsingBackgroundMigrationJob", :table_name=>"ci_job_artifacts", :column_name=>"id", :job_arguments=>[["id", "job_id"], ["id_convert_to_bigint", "job_id_convert_to_bigint"]]}
Finalize it manualy by running
sudo gitlab-rake gitlab:background_migrations:finalize[CopyColumnUsingBackgroundMigrationJob,ci_job_artifacts,id,'[["id"\, "job_id"]\, ["id_convert_to_bigint"\, "job_id_convert_to_bigint"]]']
For more information, check the documentation
https://docs.gitlab.com/ee/user/admin_area/monitoring/background_migrations.html#database-migrations-failing-because-of-batched-background-migration-not-finished
/opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/database/migration_helpers.rb:1129:in `ensure_batched_background_migration_is_finished'
/opt/gitlab/embedded/service/gitlab-rails/db/post_migrate/20210706212710_finalize_ci_job_artifacts_bigint_conversion.rb:14:in `up'
/opt/gitlab/embedded/service/gitlab-rails/lib/tasks/gitlab/db.rake:61:in `block (3 levels) in <top (required)>'
/opt/gitlab/embedded/bin/bundle:23:in `load'
/opt/gitlab/embedded/bin/bundle:23:in `<main>'
Tasks: TOP => db:migrate
(See full trace by running task with --trace)
== 20210706212710 FinalizeCiJobArtifactsBigintConversion: migrating ===========
STDERR:
---- End output of "bash" "/tmp/chef-script20210903-28-4vm0c2" ----
Ran "bash" "/tmp/chef-script20210903-28-4vm0c2" returned 1
Running handlers:
Running handlers complete
Chef Infra Client failed. 11 resources updated in 16 seconds
Thank you for using GitLab Docker Image!
Current version: gitlab-ce=14.2.0-ce.0
Configure GitLab for your system by editing /etc/gitlab/gitlab.rb file
And restart this container to reload settings.
To do it use docker exec:
docker exec -it gitlab editor /etc/gitlab/gitlab.rb
docker restart gitlab
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
If this container fails to start due to permission problems try to fix it by executing:
docker exec -it gitlab update-permissions
docker restart gitlab
Cleaning stale PIDs & sockets
Preparing services...
Starting services...
Configuring GitLab...
/opt/gitlab/embedded/bin/runsvdir-start: line 24: ulimit: pending signals: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system
I have tried executing the migrations by hand, along with all the other fixes the logs propose, but none of them worked.
I use Ubuntu 20.04 LTS and Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.1.
OK, it turned out that I had to upgrade to 14.0.5 first and wait for some background migrations to complete (you can see them under Menu > Admin > Monitoring > Background Migrations).
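For anyone who wants to check those migrations from a shell instead of the admin UI, one way on an Omnibus install is to query the tracking table directly (a sketch: it assumes the bundled gitlab-psql client and the batched_background_migrations table that 14.x uses; status is stored as an integer):

sudo gitlab-psql -c "SELECT id, job_class_name, table_name, status FROM batched_background_migrations;"

Once everything has finished, the gitlab-rake finalize command quoted in the error output above should succeed.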

Booting up Vagrant from Jenkins throws Permission Denied

I am trying to boot up a Vagrant VM from Jenkins. I gave the Vagrantfile path in the 'Boot up Vagrant VM' build step.
When the job runs, I get the following error:
Failed to iterate on remote directory vagrant_projs
[ vagrant ]: Executing command :[vagrant, up] in folder /Users/abc/Desktop/vagrant_projs
[vagrant_projs] $ vagrant up
[ vagrant ]: Error starting up vagrant, caught IOException, message: Cannot run program "vagrant" (in directory "/Users/abc/Desktop/vagrant_projs"): error=13, Permission denied
[ vagrant ]: [Ljava.lang.StackTraceElement;@5e144dc9
Build step 'Boot up Vagrant VM' marked build as failure
Finished: FAILURE
I thought it might be because of the Vagrantfile's permissions, so I manually ran chmod 777 on it. Even after that, it gives me the same error.
UPDATE:
I moved the folder containing the Vagrantfile from Desktop to my user folder, and I got a different stack trace.
[workspace] $ vagrant -v
Vagrant 1.7.2
[ vagrant ]: Executing command :[vagrant, up] in folder /Users/abalan15/vagrant_projs
[vagrant_projs] $ vagrant up
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/machine.rb:313:in `unlink': Permission denied - /Users/abalan15/vagrant_projs/.vagrant/machines/default/virtualbox/id (Errno::EACCES)
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/machine.rb:313:in `delete'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/machine.rb:313:in `id='
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/machine.rb:142:in `initialize'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/vagrantfile.rb:75:in `new'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/vagrantfile.rb:75:in `machine'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:614:in `machine'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:168:in `block in with_target_vms'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:192:in `call'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:192:in `block in with_target_vms'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:174:in `each'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:174:in `with_target_vms'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/up/command.rb:74:in `block in execute'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:277:in `block (2 levels) in batch'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:275:in `tap'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:275:in `block in batch'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:274:in `synchronize'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:274:in `batch'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/up/command.rb:58:in `execute'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/cli.rb:42:in `execute'
from /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:301:in `cli'
from /opt/vagrant/bin/../embedded/gems/gems/vagrant-1.7.2/bin/vagrant:174:in `<main>'
Build step 'Boot up Vagrant VM' marked build as failure
Finished: FAILURE
At last!
When Jenkins (via the Vagrant plugin) boots a Vagrant VM, it touches both .vagrant.d and the folder containing the Vagrantfile, so the jenkins user must be given permission on both folders.
On a Mac: right-click each folder, choose Get Info, and at the bottom (under permissions) click the + symbol and add jenkins to 'Users & Groups'.
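If you prefer a terminal over Finder, something like the following should be equivalent (illustrative commands: they assume the Jenkins user is literally named jenkins and reuse the paths from the log above):

# let the jenkins user write Vagrant's state files (the unlink that failed above)
sudo chown -R jenkins /Users/abalan15/vagrant_projs/.vagrant
# or grant access via an ACL without changing ownership (macOS):
sudo chmod -R +a "jenkins allow read,write,delete,add_file,add_subdirectory,file_inherit,directory_inherit" /Users/abalan15/vagrant_projs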
A few things to check when dealing with permission problems in Jenkins (a quick way to check both is sketched below):
What user name is running the container (Tomcat) or the Jenkins native binary?
What user name is the jenkins process / JNLP / slave using (if this isn't on the master Jenkins node)?
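Illustrative commands (run the first on the node; add the second as a shell build step inside a job):

# which account owns the Jenkins or Tomcat process?
ps aux | grep -i -e jenkins -e tomcat | grep -v grep
# inside a job, a shell build step reveals the effective user:
whoami && id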

How to run gerrit cookbook inside docker containers?

I'm running the community gerrit cookbook in Docker using chef-solo.
If I run the cookbook as a build step in a Dockerfile, it throws an error (see the log). But if I run the image, go inside the container, and run the same command, it works fine.
Any idea what's going on?
It complains about sudo, yet continues and creates the symbolic links. 'target_mode = nil' should not be the problem, since it prints the same message when I run the command inside the container, where everything works. It ends up complaining about the init.d script, which does not make sense.
chef-solo as a build step:
RUN chef-solo --log_level debug -c /resources/solo.rb -j /resources/node.json
Logs:
[ :08+01:00] INFO: Processing ruby_block[gerrit-init] action run (gerrit::default line 225)
sudo: sorry, you must have a tty to run sudo
[ :08+01:00] INFO: /opt/gerrit/war/gerrit-2.7.war exist....initailizing gerrit
[ :08+01:00] INFO: ruby_block[gerrit-init] called
[ :08+01:00] INFO: Processing link[/etc/init.d/gerrit] action create (gerrit::default line 240)
[ :08+01:00] DEBUG: link[/etc/init.d/gerrit] created symbolic link from /etc/init.d/gerrit -> /opt/gerrit/install/bin/gerrit.sh
[ :08+01:00] INFO: link[/etc/init.d/gerrit] created
[ :08+01:00] DEBUG: found target_mode == nil, so no mode was specified on resource, not managing mode
[ :08+01:00] DEBUG: found target_uid == nil, so no owner was specified on resource, not managing owner
[ :08+01:00] DEBUG: found target_gid == nil, so no group was specified on resource, not managing group
[ :08+01:00] INFO: Processing link[/etc/rc3.d/S90gerrit] action create (gerrit::default line 244)
[ :08+01:00] DEBUG: link[/etc/rc3.d/S90gerrit] created symbolic link from /etc/rc3.d/S90gerrit -> ../init.d/gerrit
[ :08+01:00] INFO: link[/etc/rc3.d/S90gerrit] created
[ :08+01:00] DEBUG: found target_mode == nil, so no mode was specified on resource, not managing mode
[ :08+01:00] DEBUG: found target_uid == nil, so no owner was specified on resource, not managing owner
[ :08+01:00] DEBUG: found target_gid == nil, so no group was specified on resource, not managing group
[ :08+01:00] INFO: Processing service[gerrit] action enable (gerrit::default line 248)
[ :08+01:00] DEBUG: service[gerrit] supports status, running
================================================================================
Error executing action `enable` on resource 'service[gerrit]'
================================================================================
Chef::Exceptions::Service
-------------------------
service[gerrit]: unable to locate the init.d script!
Resource Declaration:
---------------------
# In /var/chef/cookbooks/gerrit/recipes/default.rb
248: service 'gerrit' do
249: supports :status => false, :restart => true, :reload => true
250: action [ :enable, :start ]
251: end
252:
Compiled Resource:
------------------
# Declared in /var/chef/cookbooks/gerrit/recipes/default.rb:248:in `from_file'
service("gerrit") do
action [:enable, :start]
supports {:status=>true, :restart=>true, :reload=>true}
retries 0
retry_delay 2
guard_interpreter :default
service_name "gerrit"
pattern "gerrit"
cookbook_name :gerrit
recipe_name "default"
end
Containers are not virtual machines: they run a single process and do not have a process manager running. This explains why chef-solo has trouble creating service resources.
I would suggest reading about some of the emerging support that Chef is designing for containers:
https://docs.getchef.com/containers.html
https://github.com/opscode/chef-init
I don't pretend it makes a lot of sense at first read. I am yet to be convinced that Chef is the best way to build a container.
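To illustrate the single-process point: in a container you would normally skip the init.d script entirely and run Gerrit as the foreground process, for example (a sketch using the war path from the log above; daemon and --console-log are standard Gerrit options, but the site directory here is an assumption):

# Dockerfile: run Gerrit as PID 1 instead of enabling a service
CMD ["java", "-jar", "/opt/gerrit/war/gerrit-2.7.war", "daemon", "--console-log", "-d", "/opt/gerrit/install"]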
The actual error was sudo: sorry, you must have a tty to run sudo. For security reasons, sudoers configurations often set Defaults requiretty, and no terminal is allocated during a Docker build, so sudo refuses to run.
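If the build genuinely needs sudo, the usual workaround is to relax that setting before the chef-solo step (this assumes a RHEL/CentOS-style image whose /etc/sudoers contains a Defaults requiretty line):

# Dockerfile: comment out requiretty so sudo works without a TTY during the build
RUN sed -i '/^Defaults.*requiretty/s/^/#/' /etc/sudoers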
By default Docker runs as root, so there is no need for sudo. The cookbook I was running created a 'gerrit' user, which was what forced the sudo calls. I removed the user and ran everything as root. Solved!

Gitlab API Access Connection timed out

I just installed GitLab and I get an error during the gitlab-shell self check.
The command returns:
root@git:/home/git/gitlab# sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
Checking Environment ...
Git configured for git user? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version >= 1.9.3 ? ... OK (1.9.3)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
Satellites access is drwxr-x---? ... yes
update hook up-to-date? ... yes
update hooks in repos are links: ...
Thibaud / thibaud-dauce ... repository is empty
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection timed out - connect(2) (Errno::ETIMEDOUT)
from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
from /home/git/gitlab-shell/lib/gitlab_net.rb:76:in `get'
from /home/git/gitlab-shell/lib/gitlab_net.rb:43:in `check'
from /home/git/gitlab-shell/bin/check:11:in `<main>'
gitlab-shell self-check failed
Try fixing it:
Make sure GitLab is running;
Check the gitlab-shell configuration file:
sudo -u git -H editor /home/git/gitlab-shell/config.yml
Please fix the error above and rerun the checks.
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking LDAP ...
LDAP is disabled in config/gitlab.yml
Checking LDAP ... Finished
Checking GitLab ...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
Database contains orphaned UsersGroups? ... no
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
projects have namespace: ...
Thibaud / thibaud-dauce ... yes
Projects have satellites? ...
Thibaud / thibaud-dauce ... can't create, repository is empty
Redis version >= 2.0.0? ... yes
Your git bin path is "/usr/bin/git"
Git version >= 1.7.10 ? ... yes (1.7.10)
Checking GitLab ... Finished
Of course, GitLab is running:
root@git:/home/git/gitlab# service gitlab status
The GitLab Unicorn web server with pid 1543 is running.
The GitLab Sidekiq job dispatcher with pid 1736 is running.
GitLab and all its components are up and running.
And my config file:
root@git:/home/git/gitlab# sudo -u git -H cat /home/git/gitlab-shell/config.yml
# GitLab user. git by default
user: git
# Url to gitlab instance. Used for api calls. Should end with a slash.
gitlab_url: "http://git.thibaud-dauce.fr/"
http_settings:
#  user: someone
#  password: somepass
#  ca_file: /etc/ssl/cert.pem
#  ca_path: /etc/pki/tls/certs
  self_signed_cert: false

# Repositories path
# Give the canonicalized absolute pathname,
# REPOS_PATH MUST NOT CONTAIN ANY SYMLINK!!!
# Check twice that none of the components is a symlink, including "/home".
repos_path: "/home/git/repositories"

# File used as authorized_keys for gitlab user
auth_file: "/home/git/.ssh/authorized_keys"

# Redis settings used for pushing commit notices to gitlab
redis:
  bin: /usr/bin/redis-cli
  host: 89.234.146.59
  port: 6379
  # socket: /tmp/redis.socket # Only define this if you want to use sockets
  namespace: resque:gitlab
# Log file.
# Default is gitlab-shell.log in the root directory.
# log_file: "/home/git/gitlab-shell/gitlab-shell.log"
# Log level. INFO by default
log_level: INFO
# Audit usernames.
# Set to true to see real usernames in the logs instead of key ids, which is easier to follow, but
# incurs an extra API call on every gitlab-shell command.
audit_usernames: false
I already tried replacing host: 127.0.0.1 with host: 89.234.146.59 in the Redis config.
I also tried adding 89.234.146.59 git.thibaud-dauce.fr to /etc/hosts.
I have a server running 32-bit Debian 7 with GitLab in an LXC container; Ruby is version 2.0.0. I get the same error when I try to push a repo (but I can create one online with the web app).
Do you have any ideas? I have really looked everywhere and found no solution...
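A quick way to reproduce the failing check outside gitlab-shell (a sketch: /api/v3/internal/check is the path that gitlab-shell 1.9.x queries, and the URL comes from the gitlab_url setting above; run this on the GitLab host itself):

sudo -u git -H curl -v http://git.thibaud-dauce.fr/api/v3/internal/check
# a connect timeout here (rather than any HTTP response) points at DNS or
# routing between this host and git.thibaud-dauce.fr, not at GitLab itself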

Capistrano error: Host does not exist / svn: Connection closed unexpectedly

I'm running from a Windows Vista machine using:
The latest rails (as of Jan 2. 2010),
Capistrano 2.5.10,
Subversive plugin, and
TortoiseSVN
So far, I have:
created the remote repository,
created ssh keys, and
edited the TortoiseSVN config file.
(I'm not sure if I’ve left out anything.)
Here’s the error message I get when I try to deploy using Capistrano:
* executing `deploy:cold'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
executing locally: "svn info svn+ssh://mydomain.com/home/45454/data/svn/repository/ -rHEAD"
checking for svn... yes
Unable to open connection:
Host does not exist
svn: Connection closed unexpectedly
*** [deploy:update_code] rolling back
* executing "rm -rf /home/45454/containers/rails/wcn/releases/20091230175413; true"
servers: ["mydomain.com"]
[mydomain.com] executing command
command finished
C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/recipes/deploy/scm/subversion.rb:58:in `query_revision': tried to run `svn info svn+ssh://mydomain.com/home/45454/data/svn/repository/ -rHEAD' and got unexpected result "" (RuntimeError)
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/recipes/deploy/scm/base.rb:35:in `send'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/recipes/deploy/scm/base.rb:35:in `method_missing'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/recipes/deploy/scm/base.rb:63:in `local'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/recipes/deploy/scm/base.rb:35:in `method_missing'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/recipes/deploy.rb:38:in `load'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/configuration/variables.rb:87:in `call'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/configuration/variables.rb:87:in `fetch'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/configuration/variables.rb:110:in `protect'
... 38 levels...
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/lib/capistrano/cli/execute.rb:14:in `execute'
from C:/Ruby/lib/ruby/gems/1.8/gems/capistrano-2.5.10/bin/cap:4
from C:/Ruby/bin/cap:19:in `load'
from C:/Ruby/bin/cap:19
Any ideas on what I should try next?
It looks like when svn goes to check out the code, it either can't resolve the hostname defined by :repository or it can't ssh into "mydomain.com". ("Unable to open connection: Host does not exist" is a PuTTY/Plink message, so the failure is happening in the ssh tunnel that svn+ssh spawns.)
executing locally: "svn info svn+ssh://mydomain.com/home/45454/data/svn/repository/ -rHEAD"
checking for svn... yes
Unable to open connection:
Host does not exist
svn: Connection closed unexpectedly
If you're deploying to your own Windows machine, try just using a local svn reference.
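To narrow it down, it can also help to run by hand exactly what Capistrano runs (illustrative commands reusing the host from the log; on Windows with TortoiseSVN, the svn+ssh tunnel typically goes through TortoisePlink, so its host and key settings are what actually get exercised):

# does the exact command Capistrano runs work from your own shell?
svn info svn+ssh://mydomain.com/home/45454/data/svn/repository/ -rHEAD
# does the underlying ssh connection resolve and authenticate at all?
ssh mydomain.com exit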
