During the initial deployment, cap deploy hangs at the clone step. It prints this output and then sits there indefinitely, without exiting or giving any kind of error:
** [50.18.125.107 :: out] Cloning into 'home/torquebox/apps/releases/20120808033824'...
It is similar to this question, except that I can execute the command manually; I just can't automate it with Capistrano.
Server setup:
Ubuntu 12.04 LTS on EC2
TorqueBox server, jruby, java6, postgresql, mysql, apache2, tomcat7
Dev machine:
OS X Lion
Using the SSH keys on my dev machine to access GitHub via forward_agent
Application:
JRuby on Rails
github repo
deploy.rb in a gist
The facts and what I have tried:
cap deploy:setup works fine and does create the directory structure.
If I SSH in manually and execute the commands, the clone works correctly.
I've tried cap with verbose (-v) and debug (-d) and neither gave me any more info.
I tried to SSH into GitHub using the forwarded agent on the remote machine to deal with the known-hosts bug, but that worked fine too.
I checked the environment variables and realized that not everything was getting loaded because it's not an interactive shell, so I added the extra PATH directories and other environment variables that are normally set by login scripts. I even edited the sshd_config file to allow user environments in non-interactive sessions.
I tried executing the commands manually via cap shell but I'm seeing the same behavior.
The clone operation does create the correct target directory and put a .git directory in it, but the repo seems empty.
I've tried with the remote_cache option on and off, and see the same behavior either way.
I tried using the EC2 DNS name for my server rather than the Elastic IP because of this post, but that didn't work.
So I'm stuck. I'd really appreciate any suggestions of where to look next to try to figure this out. Let me know if any more info would be helpful.
Thanks!!
Will
The issue is probably that SSH agent forwarding is broken in JRuby; it was fixed as of JRuby 1.7.0.pre2.
see: http://jira.codehaus.org/browse/JRUBY-6181
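For reference, agent forwarding in a Capistrano 2 deploy.rb is usually enabled like this (a minimal sketch; the repository URL is a placeholder, not taken from the question):

# deploy.rb (Capistrano 2)
set :scm, :git
set :repository, "git@github.com:example/app.git"  # placeholder
# Forward the local ssh-agent so the server-side clone can
# authenticate to GitHub with the keys on the dev machine:
ssh_options[:forward_agent] = true

On an affected JRuby, the forwarded agent cannot sign the authentication request, so the remote git clone can block waiting for credentials rather than failing, which would match the hang described above.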
I am using Capistrano 3. In the past I could deploy to my server successfully.
Now the server has been migrated and has new parameters:
SSH Access (I updated SSH credentials and made sure that I can connect without password using authorized_keys)
Deploy Dir (I updated staging.rb accordingly along with SSH Credentials)
cap could now connect to my new server, so authentication seemed fine.
Problems with current directory
However, I got an error when running cap staging deploy:
SSHKit::Command::Failed: if test ! -d /var/www/my-project/subdomains/dev/current; then echo "Directory does not exist '/var/www/my-project/subdomains/dev/current'" 1>&2; false; fi exit status: 1
I checked and, curiously, the current directory was still there (migrated along with the rest). I deleted the current directory, thinking it would be recreated on deploy.
On the next deploy I got the same error. So I did some googling and ended up adding the following hook:
# Had to insert this hook after migrating the server
# Maybe this can be removed after the first successful deployment
after 'deploy:set_current_revision', 'deploy:symlink:release'
I don't think this is a very clean approach, but from then on the current directory was created and I got a little further with cap staging deploy.
Normally, whenever I set up Capistrano, I am amazed at how painlessly it works, but since moving to another server I keep running into issues.
I wonder:
Is there a new way to configure the environment in deploy.rb or staging.rb/production.rb respectively?
Do I have to delete existing shared files (e.g. bundler, tmp, pids, etc.) or the current directory when moving to a new environment?
I managed to fix my deploy, though I am not sure which of the steps I took were really required.
I documented the solution in this SO Post: Bundler in deployment mode does not find Gems
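For anyone hitting the same thing: in Capistrano 3 the current symlink and the shared layout come from settings like these (a minimal sketch with example paths, not my actual config):

# deploy.rb (Capistrano 3)
set :deploy_to, '/var/www/my-project/subdomains/dev'
# Directories that live in shared/ and are symlinked into each release:
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'public/system')
# Note: capistrano/bundler installs gems into shared/bundle by default,
# so a stale bundle directory carried over from the old server can break
# bundler in deployment mode until it is wiped.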
I'm getting the following error while deploying my Rails app to an Ubuntu server. I have set up SSH keys correctly and I can SSH to the server, but the deploy fails when I run
cap production deploy
This is the error message:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing on host xxxxxx.xxxxxxx.xxx: agent could not sign data with requested identity
I can't figure out what I am doing wrong, since I have deployed before and just need to update the app with my latest changes. I have not changed my deploy.rb, Capfile, or deploy/production.rb files since I last deployed.
I solved a similar issue by just issuing ssh-add. It seems my current environment hadn't properly picked up the keys, and re-adding them fixed the issue.
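For example (the key path is an assumption; use whichever key your deploy relies on):

ssh-add ~/.ssh/id_rsa   # re-add the key to the running agent
ssh-add -l              # list loaded identities to verify it took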
I had the same error.
ssh-copy-id user@ipaddress
Helped me to solve this.
I had the same issue, but in my case I had to delete the file .ssh/known_hosts from my local machine.
After upgrading Rails from 4.1.x to 4.2, I started getting similar errors when trying to bundle. I fixed it by removing the shared bundle directory. Here are the steps I took:
SSH into the server
cd /my/app/shared/bundle/ruby
rm -rf 2.1.0 (or whatever version directory is there)
Re-run the deploy: cap production deploy
You may, at this point, hit a memory snag (I did while deploying to a DigitalOcean droplet). The fix for that is to create and enable a swap file on the droplet.
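If you hit that, the usual workaround looks roughly like this (a sketch; the 1G size is an assumption, size it to your droplet):

sudo fallocate -l 1G /swapfile   # allocate the swap file
sudo chmod 600 /swapfile         # restrict permissions
sudo mkswap /swapfile            # format it as swap
sudo swapon /swapfile            # enable it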
I have a simple setup with a load-balanced application and want to run some commands to set up cron (for instance, using the whenever gem). However, none of my commands seem to get run on the remote server.
# .elasticbeanstalk/Production.config
container_commands:
  20_update_crontab:
    command: whenever --update-crontab app
    leader_only: true
I even tried:
# .elasticbeanstalk/Production.config
commands:
  update_crontab:
    command: whenever --update-crontab app
Is there something I am missing? These should run with git aws.push, correct?
When I check the logs, I don't really get any information saying it was trying to run:
$ eb logs | grep whenever
Using whenever (0.9.2)
The descriptions on this page are pretty good; I just can't figure out why it isn't running:
http://www.infoq.com/news/2012/11/elastic-beanstalk-config-files
Elastic Beanstalk configuration files should be placed in .ebextensions/. It looks like yours are in .elasticbeanstalk/.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
Create a configuration file with the extension .config (e.g., myapp.config) and place it in an .ebextensions top-level directory of your source bundle.
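So the same container_commands block from the question should work once it lives under .ebextensions/ (the filename is arbitrary, as long as it ends in .config):

# .ebextensions/cron.config
container_commands:
  20_update_crontab:
    command: whenever --update-crontab app
    leader_only: true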
I had a similar problem: I was trying to install libjpeg for my EC2 instance using the configuration files and it was never installed. I tried everything and was never able to find a "good solution", but I did solve it.
Solution: set up a completely new Elastic Beanstalk application and deploy the same app again. After I did this, it worked from the start with the new EB app.
Check out my other answer here: https://stackoverflow.com/a/23109410/2335675
Hope it helps.
I am using Capistrano 3 to deploy my app to the production server.
My server has a system-wide install of RVM. There is nothing extraordinary about the deploy script.
When I run cap production deploy, the script prints success messages and it seems the deploy went through without a problem.
However, when I check the server, the latest release folder is not updated; only the repo folder is.
This was much easier with Capistrano 2. The commands to create the symlinks etc. are all shown as executed in the console log while deploying, yet on the server nothing actually happens.
Am I missing something about the Capistrano 3 changes?
Ask if you need more information.
Capistrano 3 changed the symlink task. If you overrode it or called it explicitly (e.g. deploy:create_symlink), you may want to audit your code.
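For reference, the renames look roughly like this (my understanding of the mapping; verify against your Capistrano version):

# Capistrano 2:
#   deploy:create_symlink        updates the `current` symlink
# Capistrano 3:
#   deploy:symlink:release       links `current` to the new release
#   deploy:symlink:linked_dirs   links shared directories into the release
#   deploy:symlink:linked_files  links shared files into the release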
http://site.com/users/password/new is returning a 404 in production mode but not in development. I am deploying via capistrano and it looks like it's copying the entire site over properly. I tried running the console in production mode on the server and couldn't find anything. Has anyone seen this before?
Since this path works in development and fails in production I would focus on the differences between your environments.
A common issue is that people commit their changes locally but do not push them to (e.g.) GitHub before deploying with Capistrano. Can you SSH into your server, go to the current path, and run rake routes there? Then check for differences.
Once you've confirmed that the routes on the server are up to date, try checking the production log while accessing /users/password/new. It should be in shared/log/production.log under your deploy path. You could SSH in and use tail -f production.log to follow the log while you try to access the path.
On a side note, it seems that you are using Devise. There have been similar issues for the user root path. See for example this question. Perhaps this will shed some light on your problem.