Upstart task generated by foreman doesn't find file?

I used foreman to export my Procfile to an upstart task.
Procfile:
web: bundle exec rails server
websocket: bundle exec rails runner websocket-server/em_websocket.rb
One of the upstart tasks (they are very alike and fail with the same error):
start on starting app-web
stop on stopping app-web
respawn
env PORT=5000
setuid app
chdir /var/www/app
exec bundle exec rails server
And the error (I got it via dmesg):
[35207.676836] init: Failed to spawn app-websocket-1 main process: unable to execute: No such file or directory
[35207.679577] init: Failed to spawn app-web-1 main process: unable to execute: No such file or directory
When I switch to the app user, I am actually able to run bundle exec rails server from the given directory.
Is there any way to pin down the error a little more? I didn't find any related logs in /var/log/upstart/.

If you installed Ruby via RVM, it may be that the init job runs before the RVM scripts have been loaded. Did you try using an absolute path to the bundle binary? Run
whereis bundle
to find it.
RVM was apparently not initialized or is not available in the upstart environment. Luckily, RVM has wrappers for this case: https://rvm.io/integration/init-d
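A sketch of that wrapper approach, assuming a multi-user RVM install under /usr/local/rvm (the ruby/gemset name and the paths are placeholders, adjust them to your setup). First generate a wrapper once:
rvm wrapper ruby-2.0.0@app bootup bundle
Then point the exec line of the exported upstart job at the generated wrapper instead of plain bundle:
exec /usr/local/rvm/bin/bootup_bundle exec rails server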

You can run bundle in another way.
Instead of:
web: bundle exec rails server
You need to run:
web: bash -c '~/.rvm/bin/rvm default do bundle exec rails server'
Note: ~/.rvm/bin/rvm can be replaced with the actual path of the rvm installation on your server.
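Applied to the Procfile from the question, both entries would then look like this (again, adjust the rvm path to your installation):
web: bash -c '~/.rvm/bin/rvm default do bundle exec rails server'
websocket: bash -c '~/.rvm/bin/rvm default do bundle exec rails runner websocket-server/em_websocket.rb'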

Upstart commands require sudo privileges for the underlying user. Have you considered defining some form of passwordless sudo for your app user so it can restart the Rails application service?
e.g. on Ubuntu, creating a new sudoers definition under /etc/sudoers.d/:
username ALL=(ALL) NOPASSWD:ALL
Once defined, 'username' should be able to control the Rails app via sudo service appname stop|start|restart.
Here is an explanation of why the user needs the sudo privileges. My Capistrano deployment contains a foreman export definition as below:
namespace :foreman do
  desc 'Export the Procfile to Ubuntu upstart scripts'
  task :export do
    on roles(:app) do |host|
      log_path = shared_path.join('log')
      within release_path do
        execute :mv, ".env .envbkup"
        execute :echo, "'RACK_ENV=#{fetch(:deploy_env)}' >> .env"
        execute :echo, "'RAILS_ENV=#{fetch(:deploy_env)}' >> .env"
        execute :bundle, "exec foreman export upstart #{shared_path}/init -a #{fetch(:application)} -u #{host.user} -l #{log_path}"
        execute :rm, ".env"
        execute :mv, ".envbkup .env"
        as :root do
          execute :cp, "#{shared_path}/init/* /etc/init/"
        end
      end
    end
  end
end
This Capistrano task is invoked from an 'after' hook in deploy_env.rb.
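The hook itself is a one-liner; mine looks roughly like the following (the trigger event is illustrative and may differ in your setup):
after 'deploy:publishing', 'foreman:export'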

Related

Rails migration on ECS

I am trying to figure out how to run rake db:migrate on my ECS service, but only on one machine after deployment.
Does anyone have experience with that?
Thanks
You can do it via an Amazon ECS one-off task.
Build a Docker image with rake db:migrate as the CMD in your Dockerfile.
Create a task definition. You can choose one task per host when creating the task definition and set the desired task count to 1.
Run a one-off ECS task inside your cluster. Make sure to run it outside a service. Once the task has completed, the container will stop automatically.
You can write a script to do this before your deployment. After that, you can define your other tasks as usual.
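For example, such a script could kick off the migration task with the AWS CLI before the service update; the cluster and task-definition names below are placeholders:
aws ecs run-task \
  --cluster my-cluster \
  --task-definition myapp-migrate:1 \
  --count 1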
You can also refer to the container lifecycle in Amazon ECS here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html. However, this is the default behavior of Docker.
Let me know if it works for you.
I built a custom shell script to run when my Docker containers start (the CMD command in Docker):
#!/bin/sh
web_env=${WEB_ENV:-1}
rails_env=${RAILS_ENV:-staging}
rails_host=${HOST:-'https://www.example.com'}
echo "*****************RAILS_ENV is $RAILS_ENV default to $rails_env"
echo "***************** WEB_ENV is $WEB_ENV default to $web_env"
######## Rails migration ################################################
echo "Start rails migration"
echo "cd /home/app/example && bundle exec rake db:migrate RAILS_ENV=$rails_env"
cd /home/app/example
bundle exec rake db:migrate RAILS_ENV=$rails_env
echo "Complete migration"
if [ "$web_env" = "1" ]; then
######## Generate webapp.conf##########################################
web_app=/etc/nginx/sites-enabled/webapp.conf
replace_rails_env="s~{{rails_env}}~${rails_env}~g"
replace_rails_host="s~{{rails_host}}~${rails_host}~g"
# sed -i: edit files in place, saving a backup with the specified extension.
# If a zero-length extension is given, no backup will be saved.
# macOS requires the backup extension to be specified, so we use -i.back
# on both Linux and macOS.
sed -i.back -e "$replace_rails_env" -e "$replace_rails_host" "$web_app"
rm "${web_app}.back" # leaving webapp.conf.back in sites-enabled makes nginx fail.
# sed -i.back $replace_rails_host $web_app
# sed -i.back $replace_rails_server_name $web_app
######## Enable Web app ################################################
echo "Web app: enable nginx + passenger: rm -f /etc/service/nginx/down"
rm -f /etc/service/nginx/down
else
######## Create Daemon for background process ##########################
echo "Sidekiq service enable: /etc/service/sidekiq/run "
mkdir /etc/service/sidekiq
touch /etc/service/sidekiq/run
chmod +x /etc/service/sidekiq/run
echo "#!/bin/sh" > /etc/service/sidekiq/run
echo "cd /home/app/example && bundle exec sidekiq -e ${rails_env}" >> /etc/service/sidekiq/run
fi
echo "######## Custom Service setup properly"
What I did was build a Docker image that can run either as a web server (nginx + Passenger) or as a Sidekiq background process. The script decides whether it is the web or the Sidekiq role via the ENV variable WEB_ENV, and the Rails migration always gets executed.
This way I can be sure the migrations are always up to date. I think this works perfectly for a single task.
I am using a Passenger Docker image that is designed to be easy to customize, but if you use another Rails app server you can learn from the design of the Passenger image and apply it to your own Docker setup.
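To illustrate, the same image can then be started in either role just by setting WEB_ENV (the image name below is a placeholder):
# Web container: keeps nginx + Passenger enabled
docker run -d -e WEB_ENV=1 -e RAILS_ENV=production myorg/example-app
# Worker container: skips nginx and creates the Sidekiq runit service instead
docker run -d -e WEB_ENV=0 -e RAILS_ENV=production myorg/example-app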
For example, you can try something like:
In your Dockerfile:
CMD ["/start.sh"]
Then you create a start.sh where you put the commands which you want to execute:
start.sh
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate
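A minimal Dockerfile sketch for wiring this up (the paths are just an example; remember to make the script executable):
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]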

Best practice for rails docker-compose db:create db:migrate

I have a simple Rails application with a docker-compose.yml file.
It consists of two containers: a db container with PostgreSQL and a web container with the Rails app.
In the Dockerfile for the web part I have these lines in CMD:
CMD RAILS_ENV=production rake db:create db:migrate && \
bundle exec rails s -p 3000 -b '0.0.0.0' --environment=production
So with rake db:create db:migrate I create the database if it is the first run of the db container, and then run the migrations.
But if it is only an update of the web part, I only need to run db:migrate, and db:create (as it should) gives me the error:
ERROR: database "myapp_production" already exists
STATEMENT: CREATE DATABASE "myapp_production" ENCODING = 'unicode'
Everything is working fine, but I think there is a better way.
What is a best way to handle this situation?
I have the same development stack and here is what I'm doing.
Here is the Dockerfile for postgres which I extend:
FROM postgres:9.4
ADD db/init.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
CMD ["postgres"]
From the docker postgres documentation:
If you would like to do additional initialization in an image derived
from this one, add one or more *.sql or *.sh scripts under
/docker-entrypoint-initdb.d (creating the directory if necessary).
After the entrypoint calls initdb to create the default postgres user
and database, it will run any *.sql files and source any *.sh scripts
found in that directory to do further initialization before starting
the service.
My init.sql:
CREATE USER database_user;
CREATE DATABASE database_production;
GRANT ALL PRIVILEGES ON DATABASE database_production TO database_user;
After that my RUN command in the web container points to the run.sh script:
#!/usr/bin/env bash
echo "Bundling gems"
bundle install --jobs 8 --retry 3
echo "Clearing logs"
bin/rake log:clear
echo "Run migrations"
bundle exec rake db:migrate
echo "Seed database"
bundle exec rake db:seed
echo "Removing contents of tmp dirs"
bin/rake tmp:clear
echo "Starting app server ..."
bundle exec rails s -p 3000 -b '0.0.0.0'
That's it. My database is created in the db container, and the web app only runs migrations.
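For reference, a docker-compose.yml sketch of this layout could look like the following; the service names, build paths and port are assumptions, not taken from the setup above:
version: '3'
services:
  db:
    build: ./db          # the postgres Dockerfile shown above
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: ./run.sh    # the run.sh script shown above
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  pgdata: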

How to update ENV variable on Passenger standalone restart

I'm using Capistrano to deploy my application. The application runs on Passenger standalone. When I redeploy the application, Passenger still uses the Gemfile from the old release because the BUNDLE_GEMFILE environment variable has not been updated.
Where I should put the updated path to Gemfile so that Passenger would pick it up on restart?
The server startup command is in monit and I just call monit scripts from Capistrano tasks except for restart where I just touch the restart.txt.
namespace :deploy do
  task :stop do
    run("sudo /usr/bin/monit stop my_app_#{rails_env}")
  end
  task :restart do
    run("cd #{current_path} && touch tmp/restart.txt")
  end
  task :start do
    run("sudo /usr/bin/monit start my_app_#{rails_env}")
  end
end
The startup command in monit is:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
I already tried to add the BUNDLE_GEMFILE into the startup command like this:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
But it didn't work since the path /home/app_user/current is a symlink to a release path and that release path was picked up instead.
Simple solution.
Define the Gemfile to be used in the server start command. For example:
BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 9999 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.9999.pid /home/app_user/current
The earlier solution (setting the BUNDLE_GEMFILE env variable in .profile) is not good. When you are deploying a new version of your application and there is a new gem in the bundle, the migrations etc. will fail because they will still use the Gemfile defined in the env variable.

Can you run a rails console or rake command in the elastic beanstalk environment?

I have set up a RoR environment on AWS Elastic Beanstalk. I am able to ssh into my EC2 instance.
My home directory is /home/ec2-user, which is effectively empty.
If I move up a directory, there is also a /home/webapp directory that I do not have access to.
Is there a way to run a rake command or rails console on my elastic beanstalk instance?
If I type rails console I get Usage: rails new APP_PATH [options]
If I type RAILS_ENV=production bundle exec rails console, I get "Could not locate Gemfile"
For rails, jump to /var/app/current then, as @juanpastas said, run RAILS_ENV=production bundle exec rails c
Don't know why, but since EBS runs everything as root, this worked for me:
sudo su
bundle exec rails c production
None of these solutions mentioned here worked for me, so I cooked up a little script that I put in script/aws-console.
You can run it from the /var/app/current directory as root:
eb ssh
cd /var/app/current
sudo script/aws-console
My script can be found as a Gist here.
None of the other answers worked for me, so I went looking. This is working for me now on an Elastic Beanstalk 64bit Amazon Linux 2016.03 v2.1.2 Ruby 2.2 (Puma) stack:
cd /var/app/current
sudo su
rake rails:update:bin
bundle exec rails console
That returns the expected console:
Loading production environment (Rails 4.2.6)
irb(main):001:0>
For Ruby 2.7:
if you don't need environment variables:
BUNDLE_PATH=/var/app/current/vendor/bundle/ bundle exec rails c
It looks like environment variables are not loaded automatically anymore, which might prevent rails console from starting.
I solved it by creating this .ebextensions file:
# Simply call `sudo /var/app/scripts/rails_c`
commands:
  create_script_dir:
    command: "mkdir -p /var/app/scripts"
    ignoreErrors: true

files:
  "/var/app/scripts/export_envvars":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/opt/elasticbeanstalk/.rbenv/shims/ruby
      if __FILE__ == $0
        require 'json'

        env_file = '/var/app/scripts/envvars'
        env_vars = JSON.parse(`/opt/elasticbeanstalk/bin/get-config environment`)
        str = ''
        env_vars.each do |key, value|
          new_key = key.gsub(/\s/, '_')
          str << "export #{new_key}=\"#{value}\"\n"
        end
        File.open(env_file, 'w') { |f| f.write(str) }
      end

  "/var/app/scripts/rails_c":
    mode: "000755"
    owner: root
    group: root
    content: |
      . ~/.bashrc
      /var/app/scripts/export_envvars
      . /var/app/scripts/envvars
      cd /var/app/current
      /opt/elasticbeanstalk/.rbenv/shims/bundle exec rails c
Create a .ebextensions file named setvars.config and add these lines to it:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "export \(.key)=\"\(.value)\""' > /etc/profile.d/sh.local
packages:
  yum:
    jq: []
Then deploy your code again and it should work.
reference: https://aws.amazon.com/ar/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
For Ruby 2.7:
As someone said, if you don't need env vars, run the following
BUNDLE_PATH=/var/app/current/vendor/bundle/ bundle exec rails c
However, if you need ENV, I recommend doing this as per AWS doc:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-linux2/
tl;dr
On Amazon Linux 2, all environment properties are centralised into a single file called /opt/elasticbeanstalk/deployment/env. No user can access these outside the app. So they recommend adding a post-deploy hook script that basically creates a copy.
#!/bin/bash
#Create a copy of the environment variable file.
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_var
#Set permissions to the custom_env_var file so this file can be accessed by any user on the instance. You can restrict permissions as per your requirements.
chmod 644 /opt/elasticbeanstalk/deployment/custom_env_var
#Remove duplicate files upon deployment.
rm -f /opt/elasticbeanstalk/deployment/*.bak
If for some reason you don't want to run as root, do the following to pass env vars from root into the new user's environment:
sudo -u <user> -E env "PATH=$PATH" bash -c 'cd /var/app/current/ && <wtv you want to run>'
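For instance, to open a Rails console as the webapp user while keeping root's environment (the user name and command are just one way to fill in the placeholders above):
sudo -u webapp -E env "PATH=$PATH" bash -c 'cd /var/app/current/ && bundle exec rails c'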
I like to create an eb_console file at the root of my rails app, then chmod u+x it. It contains the following:
ssh -t ec2-user@YOUR_EC2_STATION.compute.amazonaws.com 'cd /var/app/current && bin/rails c'
This way, I just have to run:
./eb_console
like I would have run heroku run bundle exec rails c.
#!/bin/sh

shell_join () {
  ruby -r shellwords -e 'puts Shellwords.join(ARGV)' "$@"
}

command_str () {
  printf 'set -e; . /etc/profile.d/eb_envvars.sh; . /etc/profile.d/use-app-ruby.sh; set -x; exec %s\n' "$(shell_join "$@")"
}

exec sudo su webapp -c "$(command_str "$@")"
Put the above file somewhere in your source code, deploy, eb ssh into the eb instance, cd /var/app/current, and then execute path/to/above/script bin/rails with whatever arguments you usually use.
The reasons why I have written the above script:
When using sudo, it drops some environment variables which might actually be needed for your Rails app, so the script manually loads the profiles which the Elastic Beanstalk platform provides.
The current Beanstalk Ruby platform assumes you run the Rails application as the webapp user, a user you cannot log in as, so it is wise to run your command as this user.
For the latest ruby version, please use the following command:
BUNDLE_PATH=/opt/rubies/ruby-2.6.3/lib/ruby/gems/2.6.0/ bundle exec rails c production
Running it with sudo is not needed.
add an eb extension shortcut:
# .ebextensions/irb.config
files:
"/home/ec2-user/irb":
mode: "000777"
owner: root
group: root
content: |
sudo su - -c 'cd /var/app/current; bundle exec rails c'
then:
$ eb ssh
$ ./irb
irb(main):001:0>
None of these were working for me, including the aws-console script. I finally ended up creating a script directory in /var/app/current and then creating a rails file in that directory, as outlined by this answer on another SO question.
eb ssh myEnv
cd /var/app/current
sudo mkdir script
sudo vim script/rails
Add this to the file and save:
#!/usr/bin/env ruby
# This command will automatically be run when you run "rails" with Rails 3 gems installed from the root of your application.
APP_PATH = File.expand_path('../../config/application', __FILE__)
require File.expand_path('../../config/boot', __FILE__)
require 'rails/commands'
Then make it executable and run it:
sudo chmod +x script/rails
sudo script/rails console
And it worked.
You have to find the folder with your Gemfile :p.
To do that, I would take a look at your web server config; there should be a setting that tells you where your app directory is.
Maybe you know where your app is.
But in case you don't know, I would give a try to:
grep -i your_app_name /etc/apache/*
grep -i your_app_name /etc/apache/sites-enabled/*
To search files containing your_app_name in Apache config.
Or if you are using nginx, replace apache above by nginx.
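For nginx that would be something along the lines of:
grep -i your_app_name /etc/nginx/nginx.conf /etc/nginx/sites-enabled/*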
After you find the application folder, cd into it and run RAILS_ENV=production bundle exec rails c.
Make sure that your application is configured to run in production in the Apache or nginx configuration.

ROR, Redis, Resque, God & Cron on Ubuntu Server - Boot

I have made several jobs that god takes care of in my ruby application. However, when the server reboots the jobs stop. I want to avoid this, so I've made this script on my server. It looks like this:
my_app.sh
#!/bin/bash
# god tasks
#
case $1 in
  start)
    /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god
    /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god start
    /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque.god
    /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque_schedule.god
    ;;
esac
exit 0
If I log in manually and write
"/etc/init.d/my_app start"
it gives me
Sending 'start' command
No matching task or group
Sending 'load' command with action 'leave'
The following tasks were affected:
resque-0
resque-1
resque-2
resque-3
resque-4
Sending 'load' command with action 'leave'
The following tasks were affected:
resque_scheduler
And everything works; it does what I want it to do, i.e. it runs the jobs.
I have tried several ways to start this script on boot (Linux 10.4.4 LTS): rc.local, rc-default, and now my latest attempt is crontab.
The script must be run under my user and not root (it can't find the ruby installation if I run it as root).
Because of this I've configured the crontab under my user account:
@reboot /etc/init.d/my_app start
Sadly this doesn't work... I don't know what I'm doing wrong. And this should probably not be necessary; I mean, shouldn't this happen automatically when booting up the ruby application?
I'm using Passenger on this server; I don't know if that has something to do with it.
The solution below, with the changes I made to the sh script:
my_app.sh
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god"
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god start"
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque.god"
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque_schedule.god"
Forget the cronjob.
Centos/Fedora:
sudo chmod a+x /etc/init.d/my_app
sudo chkconfig --add my_app
sudo chkconfig my_app on
Ubuntu/Debian:
sudo update-rc.d my_app defaults
Both of these symlink the script to /etc/rc1.d, /etc/rc2.d, etc., and make the script available to run on boot for those runlevels.
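Note that on Debian/Ubuntu, update-rc.d expects the init script to carry an LSB header; a minimal sketch to add near the top of /etc/init.d/my_app (the runlevels and description are just typical defaults):
### BEGIN INIT INFO
# Provides:          my_app
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start god tasks for my_app at boot
### END INIT INFO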
