I'm aware of the keep_releases option in Capistrano, and I have it set in our deploy script. The problem I'm having is, I think, more related to permissions: when I run cap deploy:cleanup I get a permission denied error while it tries to delete directories inside tmp/cache. I'm using fragment caching, which is why there are lots of files inside tmp/cache.
Can someone shed any light on how to fix this? At the moment I have to manually delete the folders on the server in order to clean up the releases directory.
It looks like I just have to pass use_sudo
cap production deploy:cleanup -s use_sudo=true
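If you would rather make this the default than pass a flag every time, a minimal deploy.rb sketch (Capistrano 2 style settings) is:
# config/deploy.rb -- sketch, Capistrano 2 style settings
set :use_sudo, true        # deploy:cleanup will then run its rm through sudo
set :keep_releases, 5      # how many releases cleanup keeps around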
I would try changing the owner of the directory to the user you use for Capistrano:
sudo chown -R capistrano_user /path/to/www/app
I hope you are all well!
For several days now I have been trying to deploy a simple rails app to Railway.app, and failing catastrophically and repeatedly to get it to run.
Here is the github repo:
https://github.com/CaffieneSage/blogApp-rails-
The error I am getting is during the deploy step specifically:
bundler: not executable: bin/rails
I have successfully deployed apps to Heroku in the past, so I suspect there is something simple that I am missing. I have tried re-rolling and deploying the default Rails app to simplify things. I have made sure to use Postgres instead of SQLite3 as the db. I have spun up an instance of Postgres on Railway and tried to set my environment variables to point to it. I have had a go within the CLI as well.
Thanks in advance for any advice you may have to offer!
This is my first post on stack overflow, please go easy on me ;]
The issue likely stems from the bin/rails script not having its executable bit set.
You can see the file permissions using ls:
ls -l bin/
All of the files will be listed with permissions like:
-rw-r--r--
These need to have the executable bit set, so you can run something like:
chmod +x bin/*
After which all the files should have this permission set:
-rwxr-xr-x
Don't forget to commit the change; Git records the executable bit, so the fix will carry over to the deploy.
Read more on file permissions: https://en.wikipedia.org/wiki/File-system_permissions
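If you want to double-check which scripts are still missing the bit before you push, a quick plain-Ruby check (no Rails required) is:
# prints any bin/ script that does not have the executable bit set
Dir.glob("bin/*").reject { |f| File.executable?(f) }.each { |f| puts "not executable: #{f}" }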
I have a simple setup with a load-balanced application and want to run some commands for cron setups (for instance, using the whenever gem). However, none of my commands seem to get run on the remote server.
# .elasticbeanstalk/Production.config
container_commands:
  20_update_crontab:
    command: whenever --update-crontab app
    leader_only: true
Even tried:
# .elasticbeanstalk/Production.config
commands:
  update_crontab:
    command: whenever --update-crontab app
Is there something I am missing? These should run with git aws.push, correct?
When I check the logs, I don't really get any information saying it was trying to run:
$ eb logs | grep whenever
Using whenever (0.9.2)
The descriptions on this page are pretty good, just can't figure out why it isn't running.
http://www.infoq.com/news/2012/11/elastic-beanstalk-config-files
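For anyone not familiar with the gem, the crontab entries come from config/schedule.rb; a minimal schedule (the rake task name here is just a placeholder) looks like this:
# config/schedule.rb -- minimal whenever schedule; the rake task name is a placeholder
every 1.day, at: "4:30 am" do
  rake "app:cleanup"
end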
Elastic Beanstalk configuration files should be placed in .ebextensions/. It looks like yours are in .elasticbeanstalk/.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
Create a configuration file with the extension .config (e.g., myapp.config) and place it in an .ebextensions top-level directory of your source bundle.
I had a similar problem where I was trying to install libjpeg for my EC2 instance using the configuration files and it was never installed. I tried everything and was never able to find a "good" solution, but I did get it working.
Solution:
Set up a completely new Elastic Beanstalk application and deploy the same app to it. After I did this, it worked from the start with the new EB app.
Check out my other answer here: https://stackoverflow.com/a/23109410/2335675
Hope it helps.
I'm using carrierwave in a rails app to upload files. It works fine on my development environment, but on my production VM (Ubuntu), I'm getting this error:
An Errno::EACCES occurred in users#update:
Permission denied - /home/yards/apps/yardsapp/releases/20130616143623/public/uploads/tmp/20130616-1438-14186-3184
/usr/local/lib/ruby/1.9.1/fileutils.rb:244:in `mkdir'
I'm pretty sure I understand what is going on, but I can't seem to figure out a fix. My capistrano deploy.rb is set up with the user as root. So when it creates the new release folder on a deploy, the access rights are for root (I think).
Then when I try to upload a file, I get that error because nginx is trying to execute a mkdir as www-data.
I could chown the folder after the deploy and it works... but then the next deploy creates another new directory owned by root again.
At least I think this is what is going on. Does anyone have any ideas on how I should be doing this?
Run your deployment as www-data. You might need to adjust the authorized_keys file for the www-data user as well to be able to connect.
The fastest way would be to copy over the authorized_keys file of whatever user you are using at the moment (assuming you are root):
mkdir $WWW_DATA_HOME/.ssh
cp ~/.ssh/authorized_keys $WWW_DATA_HOME/.ssh/authorized_keys
chown www-data:www-data $WWW_DATA_HOME/.ssh/authorized_keys
You might also need to change the www-data user's shell so that you can log in as it:
chsh -s /bin/bash www-data
Now you should be able to do
ssh www-data@your-host.tld
and log in.
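Once that works, point Capistrano at that user; a minimal deploy.rb sketch (Capistrano 2 style) would be:
# config/deploy.rb -- connect and deploy as www-data
set :user, "www-data"
set :use_sudo, false   # www-data owns the app directories, so sudo is not needed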
What this came down to was an improper Capistrano configuration. I followed the Capistrano docs properly this time (and made a 'deployer' user, the same idea as the www-data user suggested above) and now Capistrano works like a charm. I also upgraded to Capistrano 3.
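For anyone landing here later, the relevant pieces of a Capistrano 3 setup look roughly like this (the names and paths are illustrative, not my exact config):
# config/deploy.rb -- Capistrano 3 sketch; names and paths are illustrative
set :application, "myapp"
set :deploy_to, "/home/deployer/apps/myapp"
# keep uploads and logs outside each release so they survive deploys and keep a single owner
append :linked_dirs, "public/uploads", "log", "tmp/cache"

# config/deploy/production.rb
server "example.com", user: "deployer", roles: %w[app web db]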
When I run my cap deploy, it complains that it can't access the log file:
Rails Error: Unable to access log file. Please ensure that
/var/superduperapp/releases/20120329011558/log/production.log exists
and is chmod 0666. The log level has been raised to WARN and the
output directed to STDERR until the problem is fixed.
It seems that I have to manually create a log folder. Is there a way to do this with Capistrano so whoever is deploying it doesn't have to remember to create the folder each time they do a new deploy?
These folders should be created by Capistrano when you run cap deploy:setup; have you run it? To check that everything is fine you can run cap deploy:check before it.
You can create a custom task to create this directory and launch it as the first task:
task :create_log_share do
  run "mkdir -p #{shared_path}/log"
end

before 'deploy:update', :create_log_share
This directory does not need to be created each time you deploy; once is enough, since the shared directory never changes.
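For reference, the directories that cap deploy:setup creates under shared/ come from the shared_children setting; this sketch shows the Capistrano 2 default, which already includes log:
# config/deploy.rb -- dirs created under shared/ by cap deploy:setup (Capistrano 2 default)
set :shared_children, %w(public/system log tmp/pids)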
This is my first deployment. I did a cap deploy:setup which worked fine.
Then, when I try to execute cap deploy:update, I run into error messages, something along the lines of:
rm: cannot remove `/var/www/app_name/current': Is a directory
Here is my capfile and directory permissions.
http://pastie.org/1189919
In general, what is the best practice as far as the deployment user and permissions are concerned? Should I use root or create a different user? If a different user, what exact permissions does it need?
Thanks
Did you create the directories within /var/www/app_name, or were they created by capistrano?
Regardless, the issue you have is that /var/www/app_name/current should not be a directory; it should be a symlink to the current release within /var/www/app_name/releases/. The failure happens when Capistrano has finished creating the new release folder within /var/www/app_name/releases/ and is trying to symlink /var/www/app_name/current to it.
You might be able to fix your issues by renaming /var/www/app_name/current (so you have a backup if things go wrong), and creating a symlink from /var/www/app_name/current to the most recent release within /var/www/app_name/releases/, and then doing a cap deploy. (Delete your backup of current if this works).
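If you would rather script that rename-and-relink than do it by hand over SSH, a one-off Capistrano 2 task sketch (run it once, then deploy normally) could look like:
# one-off task (sketch): back up the stray `current` directory and
# point a symlink at the most recent existing release
task :fix_current do
  run "mv #{current_path} #{current_path}.bak"
  run "ln -nfs #{current_release} #{current_path}"
end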
As far as best practice goes: whatever you do, do not use root. Instead, set up a user (or use an existing user) that only has permissions to the required directories (I didn't read your scripts closely, but probably just /var/www/app_name).
To deploy a new release you should invoke cap deploy or cap deploy:migrations, not cap deploy:update.
I have also had errors like these. A totally normal task, updating the source code and restarting the server, always seems to hit problems at various points of the simple script.
Sometimes it complains that a hash value at GitHub doesn't match some expected value, sometimes it won't update a directory because it already exists, but mostly it fails wanting to create things that already exist.
Is there no way to force Capistrano, and thus the shell commands, to just DO IT? I would at least appreciate it asking me what to do when it hits this type of error instead of just failing and rolling back, especially when it's a simple file operation.
I end up having to delete things manually on the server so that the Capistrano script will run without failing. This is obviously not the way forward.