Daemonizing Sidekiq with an Upstart script is not working - ruby-on-rails

I'm trying to daemonize sidekiq using two upstart scripts following this example.
Basically, the workers service starts a fixed number of Sidekiq instances.
The problem is that the sidekiq script fails at the line where I start Sidekiq. I've tried running the command directly in bash and it works fine.
I tried all of the different commented-out lines and none of them works.
So my question is what am I doing wrong? Where can I see the error messages?
This is my modified sidekiq script:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
respawn
respawn limit 15 5
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# TERM and USR1 are sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM USR1
instance $index
script
exec /bin/bash <<EOT
# use syslog for logging
# exec &> /dev/kmsg
# pull in system rbenv
# export HOME=/home/deploy
# source /etc/profile.d/rbenv.sh
cd /home/rails
touch /root/sidekick_has_started
sidekiq -i ${index} -e production
# exec sidekiq -i ${index} -e production
# exec /usr/local/rvm/gems/ruby-2.0.0-p353/gems/sidekiq-3.1.3/bin/sidekiq -i ${index} -e production
touch /root/sidekick_has_started_2
EOT
end script

You are right, the RVM environment needs to be loaded. Try this:
.....
.....
script
exec /bin/bash <<EOT
#export HOME=/home/deploy
source /usr/local/rvm/environments/ruby-2.0.0-p353@global
cd /home/rails
exec sidekiq -i ${index} -e production
.....
.....
Does it work?
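For completeness, the full stanza would presumably look like this once the elided parts are filled back in (a sketch only, reusing the RVM environment path from the snippet above):
script
exec /bin/bash <<EOT
  # load the RVM environment so the sidekiq executable is on the PATH
  source /usr/local/rvm/environments/ruby-2.0.0-p353@global
  cd /home/rails
  exec sidekiq -i ${index} -e production
EOT
end script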

Related

How do I run delayed_jobs in Kubernetes?

I can log into console from one of the pods (on kubernetes) and run this command:
RAILS_ENV=production bin/delayed_job start
The jobs are run correctly doing that. However when the pods are deleted or restarted, the jobs stop running.
I also tried adding the command above in an initializer file (eg config/initializers/delayed_jobs_runner.rb), but I get a recursive loop when starting the app.
Another thing I tried to do is create a new file called my-jobs.yaml with this
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
      - name: job
        image: gcr.io/test-app-123/somename:latest
        command: ["/bin/bash", "-l", "-c"]
        args: ["RAILS_ENV=production bundle exec rake jobs:work"]
      restartPolicy: Never
  backoffLimit: 4
I then do kubectl apply -f my-jobs.yaml, but the jobs aren't running.
Any idea how to run delayed_jobs correctly in kubernetes?
EDIT: Here's my Dockerfile:
FROM gcr.io/google_appengine/ruby
# Install 2.5.1 if not already preinstalled by the base image
RUN cd /rbenv/plugins/ruby-build && \
git pull && \
rbenv install -s 2.5.1 && \
rbenv global 2.5.1 && \
gem install -q --no-rdoc --no-ri bundler
# --version 1.11.2
ENV RBENV_VERSION 2.5.1
# Copy the application files.
COPY . /app/
# Install required gems.
RUN bundle install --deployment && rbenv rehash
# Set environment variables.
ENV RACK_ENV=production \
RAILS_ENV=production \
RAILS_SERVE_STATIC_FILES=true
# Run asset pipeline.
RUN bundle exec rake assets:precompile
CMD ["setup.sh"]
# Reset entrypoint to override base image.
ENTRYPOINT ["/bin/bash"]
################### setup.sh ############################
cd /app && RAILS_ENV=production bundle exec script/delayed_job -n 2 start
bundle exec foreman start --formation "$FORMATION"
#########################################################
Running multiple processes in one Docker container is problematic because you cannot easily observe the lifetime of a particular process: every container needs one "main" process, and when it exits, the container exits too.
Looking at the delayed_job README on GitHub (https://github.com/collectiveidea/delayed_job#user-content-running-jobs), I would strongly suggest changing your start command slightly so the worker runs in the foreground. Right now you are starting the Kubernetes Job with daemonized workers, so the Job ends immediately: a Docker container's lifetime is tied to its "main" foreground process, and when you only run background processes, the main process exits immediately and so does your container.
Change your command to:
RAILS_ENV=production script/delayed_job run
This starts the worker in the foreground, so your Kubernetes Job won't exit. Note also that Kubernetes Jobs are not intended for such open-ended tasks (a Job should have a start and an end), so I would suggest using a ReplicaSet for this instead (see the sketch below).
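For illustration, a minimal manifest along those lines might look roughly like the following. This is a sketch only: it reuses the image and command from the question, and a Deployment is shown because it manages a ReplicaSet for you (a bare ReplicaSet would look almost identical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: delayed-job-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: delayed-job-worker
  template:
    metadata:
      labels:
        app: delayed-job-worker
    spec:
      containers:
      - name: worker
        image: gcr.io/test-app-123/somename:latest   # image from the question
        command: ["/bin/bash", "-l", "-c"]
        # run the worker in the foreground so the container keeps running
        args: ["cd /app && RAILS_ENV=production bundle exec script/delayed_job run"]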
Now I am doing this:
this_pid=$$
(while [[ $(ps -ef | grep delayed_job | grep -v -e grep -e tail | head -c1 | wc -c) -ne 0 ]]; do sleep 10; done; kill -- -$this_pid) &
after starting multiple workers. After this I tail -f the logs so that they go to the container's standard output. I am quite crazy, so I am also running logrotate to keep the logs in check. The Rails environment is pretty big anyway, so the container needs to be pretty big, and we need to be able to run many jobs without running many pods to do so. This seems to be efficient, and the container will stop and be restarted if the workers die for some reason.
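Put together, an entrypoint built around this idea might look roughly like the following. It is a sketch only; the worker count, log path, and app directory are placeholders rather than values from the question.
#!/bin/bash
set -e
cd /app  # hypothetical app directory

# start the delayed_job workers in the background (they daemonize themselves)
RAILS_ENV=production bin/delayed_job -n 2 start

# watchdog: once no delayed_job process is left, kill this process group
# so the container exits and Kubernetes restarts the pod
this_pid=$$
(while [[ $(ps -ef | grep delayed_job | grep -v -e grep -e tail | head -c1 | wc -c) -ne 0 ]]; do
  sleep 10
done; kill -- -$this_pid) &

# stream the worker logs so they appear as the container's stdout
exec tail -f log/delayed_job.log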

Cron jobs not working on AWS with the whenever gem (Rails)

I know this is a very common question, but I feel like I need a specific answer to help find out where I am going wrong.
I loaded the whenever gem to manage cron jobs. It works fine in development, but since I loaded the app into AWS I can't seem to get it to work.
When I SSH into the instance I can run crontab -l and it lists the whenever tasks, but it just doesn't seem to actually execute them. I also can't find any log files to read into why it's not firing.
This is what I pulled from the eb activity log:
+++ GEM_ROOT=/opt/rubies/ruby-2.3.6/lib/ruby/gems/2.3.0
++ (( 0 != 0 ))
+ cd /var/app/current
+ su -c 'bundle exec whenever --update-cron'
[write] crontab file updated
+ su -c 'crontab -l'
# Begin Whenever generated tasks for: /var/app/current/config/schedule.rb at: 2018-01-10 06:08:24 +0000
0 * * * * /bin/bash -l -c 'cd /var/app/current && bundle exec bin/rails runner -e production '\''Trendi.home'\'' >> app/views/pages/cron.html.erb 2>&1'
# End Whenever generated tasks for: /var/app/current/config/schedule.rb at: 2018-01-10 06:08:24 +0000
[2018-01-10T06:08:24.705Z] INFO [15603] - [Application update app-a3a0-180109_230627#16/AppDeployStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
This is my config file from ebextensions folder
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/01_cron.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Using similar syntax as the appdeploy pre hooks that is managed by AWS
      set -xe
      EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
      EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)
      EB_DEPLOY_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)
      . $EB_SUPPORT_DIR/envvars
      . $EB_SCRIPT_DIR/use-app-ruby.sh
      cd $EB_DEPLOY_DIR
      su -c "bundle exec whenever --update-cron"
      su -c "crontab -l"
My Schedule.rb
ENV['RAILS_ENV'] = "production"
set :output, "app/views/pages/cron.html.erb"
every 1.hour, at: ":00" do # 1.minute 1.day 1.week 1.month 1.year is also supported
  runner "Trendi.home", :environment => 'production'
end
And my task, which is stored in /lib/:
module Trendi
  def self.home
    # executed task code here
  end
end
You could try a simpler command in your crontab and change it to fire every minute, e.g.:
/bin/echo "test" >> /home/username/testcron.log
That way you can quickly rule out whether the cron job itself is the culprit (see the sample entry at the end of this answer). Keep in mind to use full paths for each command, so in your case you might want to change the "bundle" command to use its full path. You can find the path with the "which" command.
Also, are you sure you are correctly escaping here?
-e production '\''Trendi.home'\''
Wouldn't this be more adequate?
-e production "Trendi.home"
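For a quick sanity check, a hypothetical test entry (added with crontab -e, separate from the whenever-generated one; the log path is just an example) could be:
* * * * * /bin/echo "test" >> /home/username/testcron.log 2>&1
If that file grows every minute, cron itself is working and the problem lies in the whenever-generated command (paths, environment, or escaping).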

Rails migration on ECS

I am trying to figure out how to run rake db:migrate on my ECS service but only on one machine after deployment.
Anyone has experience with that?
Thanks
You may do it via an Amazon ECS one-off task:
Build a Docker image with rake db:migrate as the CMD in your Dockerfile.
Create a task definition. You may choose one task per host while creating the task definition, and set the desired task count to 1.
Run a one-off ECS task inside your cluster. Make sure to run it outside of a service; once the task completes, the container will stop automatically.
You can write a script to do this before your deployment. After that, you can define your other tasks as usual.
You can also refer to the container lifecycle in Amazon ECS here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html. However, this is just Docker's default behavior.
Let me know if it works for you.
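As an illustration, kicking off such a one-off migration task from a deploy script might look something like this; the cluster and task-definition names are hypothetical:
# run the migration task once, outside of any ECS service
aws ecs run-task \
  --cluster my-cluster \
  --task-definition rails-db-migrate \
  --count 1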
I built a custom shell script to run when my docker containers start ( CMD command in docker ):
#!/bin/sh
web_env=${WEB_ENV:-1}
rails_env=${RAILS_ENV:-staging}
rails_host=${HOST:-'https://www.example.com'}
echo "*****************RAILS_ENV is $RAILS_ENV default to $rails_env"
echo "***************** WEB_ENV is $WEB_ENV default to $web_env"
######## Rails migration ################################################
echo "Start rails migration"
echo "cd /home/app/example && bundle exec rake db:migrate RAILS_ENV=$rails_env"
cd /home/app/example
bundle exec rake db:migrate RAILS_ENV=$rails_env
echo "Complete migration"
if [ "$web_env" = "1" ]; then
######## Generate webapp.conf##########################################
web_app=/etc/nginx/sites-enabled/webapp.conf
replace_rails_env="s~{{rails_env}}~${rails_env}~g"
replace_rails_host="s~{{rails_host}}~${rails_host}~g"
# sed: -i may not be used with stdin in MacOsX
# Edit files in-place, saving backups with the specified extension.
# If a zero-length extension is given, no backup will be saved.
# we use -i.back as backup file for linux and
# In Macosx require the backup to be specified.
sed -i.back -e $replace_rails_env -e $replace_rails_host $web_app
rm "${web_app}.back" # remove webapp.conf.back cause nginx to fail.
# sed -i.back $replace_rails_host $web_app
# sed -i.back $replace_rails_server_name $web_app
######## Enable Web app ################################################
echo "Web app: enable nginx + passenger: rm -f /etc/service/nginx/down"
rm -f /etc/service/nginx/down
else
######## Create Daemon for background process ##########################
echo "Sidekiq service enable: /etc/service/sidekiq/run "
mkdir /etc/service/sidekiq
touch /etc/service/sidekiq/run
chmod +x /etc/service/sidekiq/run
echo "#!/bin/sh" > /etc/service/sidekiq/run
echo "cd /home/app/example && bundle exec sidekiq -e ${rails_env}" >> /etc/service/sidekiq/run
fi
echo ######## Custom Service setup properly"
What I did was build a single Docker image that can run either as a web server (Nginx + Passenger) or as a Sidekiq background process. The script decides whether it is web or Sidekiq via the WEB_ENV environment variable, and the Rails migration always gets executed.
This way I can be sure the migrations are always up to date. I think this will work perfectly for a single task.
I am using a Passenger Docker image that has been designed to be very easy to customize, but if you use another Rails app server you can learn from the Passenger image's design and apply the same ideas to your own Docker setup.
For example, you can try something like:
In your Dockerfile:
CMD ["/start.sh"]
Then you create a start.sh where you put the commands which you want to execute:
start.sh
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate
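As written, this start.sh would exit right after the migration and the container would stop with it, so in practice the script would presumably end by exec'ing the actual app server. A hypothetical final line (the puma command is only an example, not taken from the answer):
# hand control over to the app server so the container keeps running (example command)
exec bundle exec puma -C config/puma.rb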

Puma Upstart not loading ENV variables

I've deployed an app in production in an Ubuntu Server VM. It uses Puma, so I've followed this guide: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
to configure it there (it is currently working properly on heroku, we are looking to migrate it to this new server).
This is my /etc/init/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
for i in `cat $PUMA_CONF`; do
app=`echo $i | cut -d , -f 1`
logger -t "puma-manager" "Starting $app"
start puma app=$app
done
end script
And my /etc/init/puma.conf
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid user
setgid user
respawn
respawn limit 3 30
instance ${app}
script
# source ENV variables manually as Upstart doesn't, eg:
. /etc/server-vars
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /etc/profile.d/rvm.sh
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
cd $app
logger -t puma "Starting server: $app"
exec bundle exec puma -C config/puma.rb
EOT
end script
It works properly BUT it is not setting the ENV variables I specify in:
/etc/server-vars
I don't want to put all the ENV vars directly into this script because there are many of them, and that limits the reusability of the script.
The solution for me was to use "set -a" before sourcing the environment file. Here's the documentation describing what set -a does: The Set Builtin
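In short, set -a marks every variable assigned after it for export, so the plain KEY=value lines sourced from the environment file become visible to child processes such as Puma. A minimal illustration, assuming a hypothetical /etc/server-vars containing SECRET_TOKEN=abc123:
. /etc/server-vars                          # sets SECRET_TOKEN in the shell, but does not export it
ruby -e 'puts ENV["SECRET_TOKEN"].inspect'  # => nil
set -a                                      # auto-export every variable assigned from here on
. /etc/server-vars
set +a
ruby -e 'puts ENV["SECRET_TOKEN"].inspect'  # => "abc123"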
Try 'set -a' before sourcing your environment file as you can see in the following example:
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma-manager.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
start on runlevel [2345]
stop on runlevel [06]
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
set -a
. /etc/environment
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
logger -t puma "Starting server: $app"
cd $app
exec bundle exec puma -C /home/deploy/brilliant/config/puma.rb
EOT
end script

Puma restart fails on reboot using EC2 + Rails + Nginx + Capistrano

I have successfully used capistrano to deploy my rails app to Ubuntu EC2. Everything works great on deploy. Rails app name is deseov12
My issue is that Puma does not start on boot, which will be necessary because production EC2 instances will be instantiated on demand.
Puma will start when deploying via Capistrano, it will also start when running
cap production puma:start
on local machine.
It will also start on the server after a reboot if I run the following commands:
su - deploy
[enter password]
cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )
I have followed directions from the Puma jungle tool to make Puma start on boot by using upstart as follows:
Contents of /etc/puma.conf
/home/deploy/deseov12/current
Contents of /etc/init/puma.conf and /home/deploy/puma.conf
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user $
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /etc/profile.d/rvm.sh
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
cd $app
logger -t puma "Starting server: $app"
exec bundle exec puma -C current/config/puma.rb
EOT
end script
Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?
#
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
for i in `cat $PUMA_CONF`; do
app=`echo $i | cut -d , -f 1`
logger -t "puma-manager" "Starting $app"
start puma app=$app
done
end script
Contents of /home/deploy/deseov12/shared/puma.rb
#!/usr/bin/env puma
directory '/home/deploy/deseov12/current'
rackup "/home/deploy/deseov12/current/config.ru"
environment 'production'
pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$
threads 0,8
bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'
workers 0
activate_control_app
prune_bundler
on_restart do
puts 'Refreshing Gemfile'
ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
end
However, I have not been able to make Puma start up automatically after a server reboot. It just does not start.
I would certainly appreciate some help
EDIT: I just noticed something that could be a clue:
when running the following command as deploy user:
sudo start puma app=/home/deploy/deseov12/current
ps aux will show a puma process for a few seconds before it disappears.
deploy 4312 103 7.7 183396 78488 ? Rsl 03:42 0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332]
this puma process is different from the working process launched by capistrano:
deploy 5489 10.0 12.4 858088 126716 ? Sl 03:45 0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332]
This is finally solved after a lot of research. It turns out the issue was threefold:
1) the proper environment was not being set when running the upstart script
2) when using Capistrano, the actual production puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in the /current/ directory
3) the puma server was not being daemonized properly
To solve these issues:
1) this line should be added near the top of /etc/init/puma.conf and /home/deploy/puma.conf:
env RACK_ENV="production"
2) and 3) this line
exec bundle exec puma -C current/config/puma.rb
should be replaced with this one:
exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
After doing this, the puma server starts properly on reboot or when a new instance is spun up. Hope this helps someone avoid hours of troubleshooting.
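Putting the three fixes together, the relevant part of /etc/init/puma.conf ends up looking roughly like this (a sketch assembled from the lines above, with the rbenv/rvm detection block from the original script elided):
env RACK_ENV="production"
script
exec /bin/bash <<'EOT'
  export HOME="$(eval echo ~$(id -un))"
  # ... rbenv/rvm detection block from the original script ...
  cd $app
  logger -t puma "Starting server: $app"
  exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
EOT
end script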
