I am getting the following error while creating an Elastic Beanstalk environment:
Command failed on instance. Return code: 1 Output: (TRUNCATED)... ^
/var/app/ondeck/config/environment.rb:5:in `<top (required)>'
/opt/rubies/ruby-2.4.3/bin/bundle:23:in `load'
/opt/rubies/ruby-2.4.3/bin/bundle:23:in `<main>' Tasks: TOP =>
db:migrate => environment (See full trace by running task with
--trace). Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/12_db_migration.sh failed.
For more detail, check /var/log/eb-activity.log using console or EB
CLI.
In the /var/log/eb-activity.log file, I found the following errors:
Tasks: TOP => db:migrate => environment (See full trace by running
task with --trace) (Executor::NonZeroExitStatus)
AppDeployStage0/AppDeployPreHook/12_db_migration.sh] : Activity failed.
AppDeployStage0/AppDeployPreHook] : Activity failed.
AppDeployStage0] : Activity failed.
Application update - Command CMD-AppDeploy failed
I encountered this same problem when using Elastic Beanstalk with an external Amazon RDS database. Basically, the problem is that the Elastic Beanstalk pre-deployment scripts will attempt to migrate the database before it even exists.
I discovered two ways to solve this.
The first way is to set the RAILS_SKIP_MIGRATIONS=true environment variable on your app configuration. This should allow you to at least get the codebase deployed. After that, you can use eb ssh to shell into the app, browse to the /var/app/current/ folder, and manually run bundle exec rails db:create and bundle exec rails db:migrate.
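If you go the manual route, the commands look roughly like this (a sketch; the environment name is a placeholder, and on older Rails versions you would use rake instead of rails for the db tasks):
eb ssh your-env-name
cd /var/app/current
bundle exec rails db:create
bundle exec rails db:migrate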
Another way to solve the problem is to create an app pre-deploy shell script hook file in the /opt/elasticbeanstalk/hooks/appdeploy/pre/ folder.
I used the /opt/elasticbeanstalk/hooks/appdeploy/pre/12_db_migration.sh file as reference, and here's what I came up with.
Create a file in your project called /.ebextensions/0001_rails_db_create.config, with the following contents:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/11_create_db.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -xe

      EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
      EB_APP_STAGING_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_staging_dir)
      EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config container -k app_user)
      EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)

      . $EB_SUPPORT_DIR/envvars

      RAKE_TASK="db:create"

      . $EB_SCRIPT_DIR/use-app-ruby.sh

      cd $EB_APP_STAGING_DIR
      if su -s /bin/bash -c "bundle exec $EB_SCRIPT_DIR/check-for-rake-task.rb $RAKE_TASK" $EB_APP_USER; then
        if [ "$RAILS_SKIP_DB_CREATE" = "true" ]; then
          echo "Skipping database creation (RAILS_SKIP_DB_CREATE=true)."
        else
          su -s /bin/bash -c "leader_only bundle exec rake db:create" $EB_APP_USER
        fi
      else
        echo "No $RAKE_TASK task in Rakefile, skipping database creation."
      fi
Commit that file to your git repo and then run eb deploy.
This should create the shell script hook file which will create the rails db if it doesn't exist. The database migration shell script hook file should run immediately afterwards, since its name starts with the number 12.
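The pre-deploy hooks run in lexical filename order, so if you want to confirm the ordering on a running instance you can list the hook directory (the environment name below is just a placeholder):
eb ssh your-env-name
ls /opt/elasticbeanstalk/hooks/appdeploy/pre/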
Once this script is in place, if you ever want to bypass it, you can set the RAILS_SKIP_DB_CREATE=true environment variable on your app.
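For example, with the EB CLI that is a one-liner (you can also set the variable in the Elastic Beanstalk console under the environment's software configuration):
eb setenv RAILS_SKIP_DB_CREATE=true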
Scope of problem
Rails 4.2.11
Ansible 2.1.1.0
Ubuntu 14
Ubuntu user: deploy
I have a Rails app which I deploy to an Ubuntu server with an Ansible script.
I have a problem understanding why the Rails app creates log files with root permissions when the Ansible script executes rake tasks.
In my example it is running rake db:migrate, but the same behaviour occurs with rake assets:precompile.
You can see in the screenshot below that the application is deployed as the user 'deploy', but after running the rake task it creates two log files owned by root. After a restart of the web server it crashes with a permission denied error, so I have to change the ownership back to deploy:deploy manually.
The structure of the Rails logger also looks suspicious. Note the #dev=IO:<STDERR> value. I checked another project, and there I see something like #dev=#<File:/var/www/.../log/production.log>.
I tried to explore the Rails 4 source code, but so far I have had no luck working out from there what is happening. My only idea is that Rails might raise an exception when creating the log file, so STDERR becomes the output.
Please help if you have had a similar problem, or point out where I should look.
The rake task runs as the root user.
To answer the question fully, you would need to add the complete Ansible script (or the bundle script) so we can see whether you manually add "sudo" commands or anything unusual.
There are several places where the user can be defined in Ansible.
Read the become section of the Ansible documentation: https://docs.ansible.com/ansible/latest/user_guide/become.html
Or give this a try:
- name: Run db:migrate
  shell: ... rake cmd ...
  become: yes
  become_user: deploy
  become_method: su
Change the become_method to suit your needs, e.g. become_flags: '-s /bin/sh', or use sudo instead.
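For instance, a sudo-based variant might look like the following sketch; the chdir path, the exact rake command, and the RAILS_ENV value are assumptions you would adapt to your deployment:
- name: Run db:migrate as the deploy user
  shell: bundle exec rake db:migrate
  args:
    chdir: /var/www/myapp/current   # assumed application path
  environment:
    RAILS_ENV: production           # assumed Rails environment
  become: yes
  become_user: deploy
  become_method: sudo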
I am running a Rails 3.2.18 application on AWS. This application is deployed using Capistrano, including starting Resque workers for the application.
My problem is that AWS can occasionally restart instances with little to no warning, or an instance can be restarted from the AWS console. When this happens, Resque is not started as it is during our normal deployment process.
I have tried to create a shell script in /etc/init.d to start Resque on boot, but this script continues to prompt for a password, and I'm not sure what I'm missing. The essence of the start script is:
/bin/su -l deploy-user -c "/www/apps/deploy-user/current && bundle exec cap <environment> resque:scheduler:start resque:start"
Obviously the above command works as expected when run as the "deploy" user from the bash prompt, but when run via sudo /etc/init.d/resque start, it prompts for a password upon running the first Capistrano command.
Is there something glaring that I am missing? Or perhaps is there a better way to accomplish this?
You should run su with the -c parameter to specify the commands, cd into the release directory first, and enclose all of the commands within double quotes:
/bin/su -l deploy-user -c "cd /www/apps/deploy-user/current && bundle exec cap <environment> resque:scheduler:start resque:start"
Of course, you have other alternatives, like /etc/rc.local.
But if you're going to use an init.d script, I'd suggest creating it properly (at least start/stop actions and default runlevels). Otherwise I'd go with /etc/rc.local, or even with a cron job for the deploy user:
@reboot cd /www/apps/deploy-user/current && bundle exec cap <environment> resque:scheduler:start resque:start
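If you do go with a proper init.d script, a minimal skeleton might look like this (a sketch only; the app path and user come from the question, while the LSB header values and the stop task names are assumptions to adapt):
#!/bin/bash
### BEGIN INIT INFO
# Provides:          resque
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start/stop Resque workers via Capistrano
### END INIT INFO

APP_DIR=/www/apps/deploy-user/current   # path taken from the question

case "$1" in
  start)
    /bin/su -l deploy-user -c "cd $APP_DIR && bundle exec cap <environment> resque:scheduler:start resque:start"
    ;;
  stop)
    # assumed stop tasks mirroring the start tasks
    /bin/su -l deploy-user -c "cd $APP_DIR && bundle exec cap <environment> resque:scheduler:stop resque:stop"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac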
I would like to run Heroku toolbelt commands on the server in a scheduled way:
e.g.
heroku maintenance:on
# do some other stuff
heroku maintenance:off
As far as I can tell, there does not seem to be a way to do this, not even a workaround.
You can use Heroku Scheduler and configure the following command:
curl -s https://s3.amazonaws.com/assets.heroku.com/heroku-client/heroku-client.tgz \
| tar xz && ./heroku-client/bin/heroku maintenance:on -a your-app-name-here
For this to work, you need to add a Config Variable named HEROKU_API_KEY and set its value to the "API Key" value from your Accounts page.
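Setting the variable from your own machine could look like this (the key value is a placeholder taken from your account page), and you can schedule a matching job that turns maintenance back off:
heroku config:set HEROKU_API_KEY=<your-api-key> -a your-app-name-here
curl -s https://s3.amazonaws.com/assets.heroku.com/heroku-client/heroku-client.tgz \
| tar xz && ./heroku-client/bin/heroku maintenance:off -a your-app-name-here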
I'm following #335 Deploying to a VPS, and when I run cap deploy:cold, everything goes fine except that at the end it reports:
executing 'deploy:start'
executing "/etc/init.d/unicorn_just4magic start"
servers: ["106.XXX.XXX.XXX"]
[106.XXX.XXX.XXX] executing command
out :: 106.XXX.XXX.XXX sh: /etc/init.d/unicorn_just4magic: Permission denied
command finished in 502ms
failed: "env PATH=$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH sh -c '/etc/init.d/unicorn_just4magic start'" on
106.XXX.XXX.XXX
I can run the Rails server manually on the VPS with no problem at all.
But when using cap to deploy I get the above error, and when I visit my site I get a "Sorry, something went wrong" page.
UPDATE:
My deploy.rb is here, and this is the start/stop/restart part:
%w[start stop restart].each do |command|
  desc "#{command} unicorn server"
  task command, roles: :app, except: {no_release: true} do
    run "/etc/init.d/unicorn_#{application} #{command}"
  end
end
UPDATE2:
Now the permission denied error no longer appears, but I get another problem:
sudo: /etc/init.d/unicorn_just4magic: command not found
I found "Capistrano deploy:start with unicorn" and "During cap deploy:cold - command not found for /etc/init.d/unicorn".
I changed the line separator of the shell script, removed the Gemfile.lock from git, and set :bundle_flags, ''. I still get the error.
I solved it by giving the local file config/unicorn_init.sh executable permissions, by running chmod +x config/unicorn_init.sh on it. Push it to your git repo, cap deploy to the server, and it worked like a charm for me.
Fiddling with the permissions directly on the server does not seem to play well.
Also, if you can't seem to find the file as you describe ("command not found"), try running cap deploy:setup once more with the new permissions and go from there. It might be that the symlink is not created correctly due to the permission problem.
Hope that helps!
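For the executable bit to survive the trip through git (for example if you work on Windows or core.fileMode is off), something along these lines should work; the commit message is just an example:
chmod +x config/unicorn_init.sh
git update-index --chmod=+x config/unicorn_init.sh
git commit -m "Make unicorn init script executable"
cap deploy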
By default, a Unix user has permissions on its /home/user/ directory.
The file unicorn_just4magic is not under the home directory or any other directory your user is allowed to write to, which is why you get the "Permission denied" error.
To solve the issue you can:
- Move unicorn_just4magic somewhere under your home directory (you can set this in your unicorn config file)
OR
- give your user ownership of the script in /etc/init.d/:
$ chown your_username /etc/init.d/unicorn_just4magic
I want to deploy a Ruby on Rails web site and am using Capistrano for this purpose.
After I filled in the deploy.rb file and ran cap deploy:setup, this is what I got:
C:\Sites\blog>cap deploy:setup
  * 2012-10-31 15:39:22 executing `deploy:setup'
  * executing "mkdir -p /var/www/blog /var/www/blog/releases /var/www/blog/shared /var/www/blog/shared/system /var/www/blog/shared/log /var/www/blog/shared/pids"
    servers: ["188.121.54.128"]
Password:
    [188.121.54.128] executing command
 ** [out :: 188.121.54.128] This account is currently not available.
    command finished in 153ms
failed: "sh -c 'mkdir -p /var/www/blog /var/www/blog/releases /var/www/blog/shared /var/www/blog/shared/system /var/www/blog/shared/log /var/www/blog/shared/pids'" on 188.121.54.128
C:\Sites\blog>
Any suggestions on what might be going wrong?
It seems that you cannot even log in to your remote server. Capistrano needs SSH access, so try to SSH into your server using the same credentials as in your cap configuration; if you can't, contact your hosting provider and ask them to give you SSH access to the machine.
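A quick way to check is to try the login yourself from the same machine (the user name is a placeholder; use whatever :user is set to in your deploy.rb):
ssh your_deploy_user@188.121.54.128
If you see the same "This account is currently not available." message there, the account's login shell on the server is typically set to nologin, which is exactly the kind of thing the hosting provider or server admin has to change.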