Capistrano 3 deploy fails after migrating the server

I am using Capistrano 3. In the past I could deploy to my server successfully. Now the server has been migrated and has new parameters:
SSH access (I updated the SSH credentials and made sure I can connect without a password using authorized_keys)
Deploy directory (I updated staging.rb accordingly, along with the SSH credentials; see the sketch below)
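For illustration, the staging.rb entries I updated look roughly like this (the host and user are placeholders; the path matches my deploy dir):
# config/deploy/staging.rb -- host and user are illustrative
server 'new-server.example.com', user: 'deployer', roles: %w[app db web]
set :deploy_to, '/var/www/my-project/subdomains/dev'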
cap could now connect to my new server, so authentication seemed fine.
Problems with the current directory
However, I got an error when running cap staging deploy:
SSHKit::Command::Failed: if test ! -d /var/www/my-project/subdomains/dev/current; then echo "Directory does not exist '/var/www/my-project/subdomains/dev/current'" 1>&2; false; fi exit status: 1
I checked, and curiously the current directory was still there (it had migrated along with the rest). I deleted the current directory because, I thought, it would be recreated on deploy.
On the next deploy I got the same error, so after some googling I ended up adding the following hook:
# Had to insert this hook after migrating the server
# Maybe this can be removed after the first successful deployment
after 'deploy:set_current_revision', 'deploy:symlink:release'
I don't think this is a very clean approach, but from then on the current directory was created and I got a little farther with cap staging deploy.
Normally when I set up Capistrano I am amazed at how painlessly it works, but since moving to another server I keep running into issues.
I wonder:
Is there a new way to configure the environment in deploy.rb or staging.rb/production.rb, respectively?
Do I have to delete existing shared files (e.g. bundler, tmp, pids, etc.) or the current directory when I move to a new environment?

I managed to fix my deploy, though I am not sure which of the steps I took were really required.
I documented the solution in this SO post: Bundler in deployment mode does not find Gems

Related

No Releases Folder Generated in Deployment Capistrano Rails

I am using Capistrano 3.4 and Rails 4.2.
Initially I could deploy my application with cap production deploy and everything worked great.
Then suddenly, whenever I ran cap production deploy, no errors at all were thrown, but my current folder was not being updated with the newest changes.
I then rm -rf'd my entire releases folder to start from scratch and ran cap production deploy; now no releases folder is generated at all, but still no errors are thrown. Help!
In case anyone else stumbles upon the same problem: I was using a VPS, made an image of my dev server, and created my production server from it.
I made all the changes, but I forgot to update config/deploy/production.rb with the IP address of the new server, so nothing was happening.
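In other words, the server line in config/deploy/production.rb still pointed at the old machine and needed updating to something like this (IP and user are placeholders):
# config/deploy/production.rb -- point the host at the new server
server '203.0.113.10', user: 'deploy', roles: %w[app db web]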
Before the first deploy to a server, you should always run
cap production deploy:check
It creates all the needed folders and checks access rights and required dependencies.
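Among other things, deploy:check verifies that every entry in :linked_files and :linked_dirs actually exists under shared/. A typical configuration it would validate (the file and directory names are illustrative):
set :linked_files, %w[config/database.yml]
set :linked_dirs,  %w[log tmp/pids tmp/cache public/system]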

Capistrano 3 SSHKit::Runner::ExecuteError: Exception while executing on host [hostname]: agent could not sign data with requested identity

I'm getting the following error while deploying my Rails app to an Ubuntu server. I have correctly set up SSH keys and I can SSH to the server, but the deploy fails when I run
cap production deploy
This is the error message:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing on host xxxxxx.xxxxxxx.xxx: agent could not sign data with requested identity
I can't figure out what I am doing wrong, since I have deployed before and just need to push the changes I have made. I have not changed my deploy.rb, Capfile, or deploy/production.rb files since I last deployed.
I solved a similar issue by simply issuing ssh-add. It seems my current environment hadn't properly picked up the keys, and re-adding them fixed the issue.
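If you want Capistrano to lean on the agent explicitly, the relevant Capistrano 3 setting is :ssh_options; a minimal sketch (the values are illustrative):
set :ssh_options, {
  forward_agent: true,          # offer the local agent's keys to the server
  auth_methods:  %w[publickey]  # fail fast instead of falling back to passwords
}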
I had the same error.
ssh-copy-id user@ipaddress
helped me to solve this.
I had the same issue, but in my case I had to delete the .ssh/known_hosts file on my local machine.
After upgrading Rails from 4.1.x to 4.2, I started getting similar errors when trying to bundle. I fixed it by removing the shared bundle directory. Here are the steps I took (they can also be scripted; see the sketch below):
SSH into the server
cd /my/app/shared/bundle/ruby
rm -rf 2.1.0 (or whatever "version" directory is there)
Re-run the deploy: cap production deploy
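For the scripted variant, a one-off Capistrano task along these lines would do the same cleanup (a sketch assuming the default shared/bundle layout; the task name is made up):
namespace :bundler do
  desc 'Wipe the shared bundle so the next deploy re-installs all gems'
  task :clear do
    on roles(:app) do
      # shared_path is provided by Capistrano; every ruby version dir underneath goes
      execute :rm, '-rf', "#{shared_path}/bundle/ruby"
    end
  end
end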
You may, at this point, hit a memory snag (I did while deploying to a DigitalOcean droplet). The fix for that is to create and enable a swap file on the droplet.

Capistrano 3 not updating the releases

I am using Capistrano 3 to deploy my app to the production server.
My server has a system-wide install of RVM. There is nothing extraordinary about the deploy script.
However, when I run cap production deploy, the deploy script prints success messages and it seems the deploy went through without a problem.
But when I check, the latest release folder is not updated; only the repo folder is updated.
This used to be much easier with Capistrano 2. The respective commands to create symlinks etc. are all shown as executed in the console log while deploying, yet on the server nothing is actually done.
Am I missing something about the Capistrano 3 changes?
Ask if you need more information.
Capistrano 3 changed the symlink task; if you overrode it or called it explicitly, e.g. deploy:create_symlink, you may want to audit your code.
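For example, a hook written against the old Capistrano 2 task name has to be renamed for Capistrano 3 (the hook points shown are illustrative):
# Capistrano 2 (this task no longer exists in v3):
#   after 'deploy:update_code', 'deploy:create_symlink'
# Capistrano 3 equivalent:
after 'deploy:set_current_revision', 'deploy:symlink:release'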

From manual pull on server to Capistrano

I've always deployed my apps through SSH by manually logging in and running git pull origin master, running migrations and pre-compiling assets.
Now I've become more interested in Capistrano, so I gave it a try: I set up a recipe with the repository pointing to GitHub and deploy_to set to /home/myusername/apps/greatapp.
The current app on the server is already hooked up to Git too, so I didn't see why I had to specify the GitHub URL in the recipe again, but I ran cap deploy, which was successful.
The changes didn't apply, so out of curiosity I browsed to the app folder on the server and found that Capistrano had created the folders shared, releases, and current. The latter contained the app, so now I have two copies: one in /home/myusername/apps/greatapp and another in /home/myusername/apps/greatapp/current.
Is this how it should be? Do I have to migrate user uploads to current and destroy the old app?
Does Capistrano pull the repo on my localhost and then upload it through SSH, or does it run the pull on the server? In other words, can someone outline how the deployment works?
Does Capistrano run assets:precompile?
/releases/ holds previous versions in case you want to do a cap deploy:rollback.
/current/, as you rightly pointed out, is the current version of your app.
/shared/ is for files and folders that you want to persist between deployments; they typically get symlinked into your /current/ folder as part of your recipe.
Capistrano connects to your server in a shell and then executes the git commands on the server.
Capistrano should automatically put anything in public/system (the Rails convention for stored user-uploaded files) into the shared directory and set up the necessary symlinks.
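In Capistrano 3 terms that persistence is driven by :linked_dirs; a minimal sketch for user uploads, assuming the stock public/system convention:
set :linked_dirs, fetch(:linked_dirs, []).push('public/system')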
If you put in the GitHub URL, it actually fetches from your GitHub repo. Read https://help.github.com/articles/deploying-with-capistrano for more info.
It does, by default.
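For what it's worth, under Capistrano 3 that behavior comes from the capistrano-rails gem rather than from core; the Capfile lines that enable it are:
require 'capistrano/rails/assets'      # runs assets:precompile on deploy
require 'capistrano/rails/migrations'  # runs db:migrate on deploy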

Cap deploy hangs at initial clone

During the initial deployment, cap deploy hangs at the initial clone. It gives this output and sits there forever, without exiting or giving any kind of error:
** [50.18.125.107 :: out] Cloning into 'home/torquebox/apps/releases/20120808033824'...
It is similar to this question, except that I'm able to execute the command manually; I just can't automate it with Capistrano.
Server setup:
Ubuntu 12.04 LTS on EC2
TorqueBox server, jruby, java6, postgresql, mysql, apache2, tomcat7
Dev machine:
OSX Lion
Using the ssh keys on my dev machine to access github via forward_agent
Application:
JRuby on Rails
github repo
deploy.rb in a gist
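For context, the agent forwarding mentioned above comes down to one line in a Capistrano 2 deploy.rb (the typical form; the gist itself may differ):
ssh_options[:forward_agent] = true  # pass my local SSH agent through to the server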
The facts and what I have tried:
cap deploy:setup works fine and does create the directory structure.
If I SSH in manually and execute the commands, the clone works correctly.
I've tried cap with verbose (-v) and debug (-d) and neither gave me any more info.
I tried to SSH into GitHub using the forward_agent on the remote machine to deal with the known hosts bug, but that worked fine too.
I checked the environment variables and realized that not everything was getting loaded because it's not an interactive shell, so I added the additional PATH dirs and other environment variables that are usually loaded by login scripts. I even edited the sshd_config file to allow user environments in non-interactive scripts.
I tried executing the commands manually via cap shell but I'm seeing the same behavior.
The clone operation does create the correct target directory and puts a .git directory in it, but the repo seems empty.
I've tried with the remote-cache option on and off, and see the same behavior either way.
I tried using the EC2 DNS name for my server rather than the elastic IP, because of this post, but that didn't work.
So I'm stuck. I'd really appreciate any suggestions of where to look next to try to figure this out. Let me know if any more info would be helpful.
Thanks!!
Will
The issue is probably that SSH agent forwarding is broken in JRuby. It was fixed as of JRuby 1.7.0.pre2.
see: http://jira.codehaus.org/browse/JRUBY-6181
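A quick way to confirm which JRuby Capistrano itself runs under (the JRUBY_VERSION constant only exists on JRuby):
puts JRUBY_VERSION  # want 1.7.0.pre2 or later for working agent forwarding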
