ROR, Redis, Resque, God & Cron on Ubuntu Server - Boot

I have made several jobs that god takes care of in my Ruby application. However, when the server reboots, the jobs stop. I want to avoid this, so I've made this script on my server. It looks like this:
my_app.sh
#!/bin/bash
# god tasks
#
case $1 in
start)
/usr/local/rvm/gems/ruby-1.9.3-p194/bin/god
/usr/local/rvm/gems/ruby-1.9.3-p194/bin/god start
/usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque.god
/usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque_schedule.god
;;
esac
exit 0
If I log in manually and write
"/etc/init.d/my_app start"
it gives me
Sending 'start' command
No matching task or group
Sending 'load' command with action 'leave'
The following tasks were affected:
resque-0
resque-1
resque-2
resque-3
resque-4
Sending 'load' command with action 'leave'
The following tasks were affected:
resque_scheduler
And everything works; it does what I want it to do, i.e. the jobs run.
I have tried several ways to start this script on boot (Ubuntu 10.04.4 LTS): rc.local, rc-default, and now my latest attempt is crontab.
The script must be run under my user and not root (it can't find the Ruby installation if I run it as root).
Because of this I've configured the crontab under my user account:
@reboot /etc/init.d/my_app start
Sadly this doesn't work... I don't know what I'm doing wrong, and this should probably not be necessary. I mean, shouldn't you be able to do this automatically when booting up the Ruby application?
I'm using Passenger on this server; I don't know if that has something to do with it.
Here is the solution, with the changes I made to the shell script:
my_app.sh
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god"
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god start"
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque.god"
bash -c "source /usr/local/rvm/scripts/rvm && /usr/local/rvm/gems/ruby-1.9.3-p194/bin/god load /usr/local/Linux/apache2/www/hej.se/ruby/config/resque_schedule.god"

Forget the cronjob.
Centos/Fedora:
sudo chmod a+x /etc/init.d/my_app
sudo chkconfig --add my_app
sudo chkconfig my_app on
Ubuntu/Debian:
sudo update-rc.d my_app defaults
Both of these symlink the script to /etc/rc1.d, /etc/rc2.d, etc., and make the script available to run on boot for those runlevels.
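On Ubuntu, update-rc.d will typically also warn unless the script carries an LSB header describing its runlevels and dependencies. A minimal sketch (the runlevels and description here are assumptions) would sit at the top of /etc/init.d/my_app:
#!/bin/bash
### BEGIN INIT INFO
# Provides:          my_app
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start god and load the Resque tasks at boot
### END INIT INFO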

Related

How do I run delayed_jobs in Kubernetes?

I can log into the console from one of the pods (on Kubernetes) and run this command:
RAILS_ENV=production bin/delayed_job start
The jobs run correctly when I do that. However, when the pods are deleted or restarted, the jobs stop running.
I also tried adding the command above in an initializer file (e.g. config/initializers/delayed_jobs_runner.rb), but I get a recursive loop when starting the app.
Another thing I tried was to create a new file called my-jobs.yaml with this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
      - name: job
        image: gcr.io/test-app-123/somename:latest
        command: ["/bin/bash", "-l", "-c"]
        args: ["RAILS_ENV=production bundle exec rake jobs:work"]
      restartPolicy: Never
  backoffLimit: 4
I then do kubectl apply -f my-jobs.yaml, but the jobs aren't running.
Any idea how to run delayed_jobs correctly in kubernetes?
EDIT: Here's my Dockerfile:
FROM gcr.io/google_appengine/ruby
# Install 2.5.1 if not already preinstalled by the base image
RUN cd /rbenv/plugins/ruby-build && \
git pull && \
rbenv install -s 2.5.1 && \
rbenv global 2.5.1 && \
gem install -q --no-rdoc --no-ri bundler
# --version 1.11.2
ENV RBENV_VERSION 2.5.1
# Copy the application files.
COPY . /app/
# Install required gems.
RUN bundle install --deployment && rbenv rehash
# Set environment variables.
ENV RACK_ENV=production \
RAILS_ENV=production \
RAILS_SERVE_STATIC_FILES=true
# Run asset pipeline.
RUN bundle exec rake assets:precompile
CMD ["setup.sh"]
# Reset entrypoint to override base image.
ENTRYPOINT ["/bin/bash"]
################### setup.sh ############################
cd /app && RAILS_ENV=production bundle exec script/delayed_job -n 2 start
bundle exec foreman start --formation "$FORMATION"
#########################################################
Running multiple processes in one Docker container is problematic, as you cannot easily observe the lifetime of a particular process: every container needs one process that is the "main" one, and when it exits, the container exits too.
Looking at GitHub (https://github.com/collectiveidea/delayed_job#user-content-running-jobs), I would strongly suggest changing your start command slightly so the worker runs in the foreground. Right now you are starting the Kubernetes Job with daemons, so the Job ends immediately: a Docker container's lifetime is directly tied to the lifetime of its "main" foreground process, and when you only run background processes, your main process exits immediately and so does your container.
Change your command to:
RAILS_ENV=production script/delayed_job run
This starts the worker in the foreground, so your Kubernetes Job won't exit. Please note also that Kubernetes Jobs are not intended for such indefinite tasks (a Job should have a start and an end), so I would suggest using a ReplicaSet for that.
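In practice a ReplicaSet is usually written as a Deployment (which manages ReplicaSets for you). A minimal sketch, reusing the image and command from the question but with an assumed resource name, and assuming the app lives at /app as in the Dockerfile, could be applied like this:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: delayed-job-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: delayed-job-worker
  template:
    metadata:
      labels:
        app: delayed-job-worker
    spec:
      containers:
      - name: worker
        image: gcr.io/test-app-123/somename:latest
        command: ["/bin/bash", "-l", "-c"]
        # Run the worker in the foreground so the pod stays alive.
        args: ["cd /app && RAILS_ENV=production bundle exec script/delayed_job run"]
EOF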
Now I am doing this:
this_pid=$$
(while [[ $(ps -ef | grep delayed_job | grep -v -e grep -e tail | head -c1 | wc -c) -ne 0 ]]; do sleep 10; done; kill -- -$this_pid) &
after starting multiple workers. After this I tail -f the logs so that they go to the standard output of the container. I am quite crazy, so I am also running logrotate to keep the logs in check. The Rails environment is pretty big anyway, so the container needs to be pretty big, and we need to be able to run many jobs; I don't want many pods running to do that. This seems to be efficient, and it will stop and restart the container if the workers die for some reason.
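Put together, the container entrypoint for this approach might look roughly like the sketch below; the paths, worker count, and log file name are assumptions rather than details from the post above:
#!/bin/bash
# Sketch of an entrypoint that starts daemonized workers, watches them,
# and keeps the container alive by tailing the logs.
cd /app

# Start the delayed_job workers as daemons.
RAILS_ENV=production bin/delayed_job -n 2 start

# Watchdog: while at least one delayed_job process is still alive, sleep;
# once they are all gone, kill this script's process group so the container
# exits and Kubernetes restarts the pod.
this_pid=$$
(while [[ $(ps -ef | grep delayed_job | grep -v -e grep -e tail | head -c1 | wc -c) -ne 0 ]]; do
  sleep 10
done; kill -- -$this_pid) &

# Keep the main process in the foreground and surface worker logs on stdout.
exec tail -f log/delayed_job.log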

Rails migration on ECS

I am trying to figure out how to run rake db:migrate on my ECS service but only on one machine after deployment.
Anyone has experience with that?
Thanks
You may do it via an Amazon ECS one-off task.
Build a Docker image with rake db:migrate as the CMD in your Dockerfile.
Create a task definition. You may choose one task per host while creating the task definition, and set the desired task count to 1.
Run a one-off ECS task inside your cluster. Make sure to run it outside of a service. Once the task has completed, the container will stop automatically.
You can write a script to do this before your deployment. After that, you can define your other tasks as usual.
You can also refer to the container lifecycle in Amazon ECS here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html. However, this is the default behavior of Docker.
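For example, the deploy script could kick off the one-off task with the AWS CLI and wait for it to finish; the cluster and task definition names below are placeholders:
# Run the migration task once, outside of any service.
TASK_ARN=$(aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-app-migrate \
  --count 1 \
  --query 'tasks[0].taskArn' --output text)

# Block until the migration container has stopped.
aws ecs wait tasks-stopped --cluster my-cluster --tasks "$TASK_ARN"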
Let me know if it works for you.
I built a custom shell script to run when my Docker containers start (the CMD command in Docker):
#!/bin/sh
web_env=${WEB_ENV:-1}
rails_env=${RAILS_ENV:-staging}
rails_host=${HOST:-'https://www.example.com'}
echo "*****************RAILS_ENV is $RAILS_ENV default to $rails_env"
echo "***************** WEB_ENV is $WEB_ENV default to $web_env"
######## Rails migration ################################################
echo "Start rails migration"
echo "cd /home/app/example && bundle exec rake db:migrate RAILS_ENV=$rails_env"
cd /home/app/example
bundle exec rake db:migrate RAILS_ENV=$rails_env
echo "Complete migration"
if [ "$web_env" = "1" ]; then
######## Generate webapp.conf##########################################
web_app=/etc/nginx/sites-enabled/webapp.conf
replace_rails_env="s~{{rails_env}}~${rails_env}~g"
replace_rails_host="s~{{rails_host}}~${rails_host}~g"
# sed: -i may not be used with stdin in MacOsX
# Edit files in-place, saving backups with the specified extension.
# If a zero-length extension is given, no backup will be saved.
# we use -i.back as backup file for linux and
# In Macosx require the backup to be specified.
sed -i.back -e $replace_rails_env -e $replace_rails_host $web_app
rm "${web_app}.back" # remove webapp.conf.back cause nginx to fail.
# sed -i.back $replace_rails_host $web_app
# sed -i.back $replace_rails_server_name $web_app
######## Enable Web app ################################################
echo "Web app: enable nginx + passenger: rm -f /etc/service/nginx/down"
rm -f /etc/service/nginx/down
else
######## Create Daemon for background process ##########################
echo "Sidekiq service enable: /etc/service/sidekiq/run "
mkdir /etc/service/sidekiq
touch /etc/service/sidekiq/run
chmod +x /etc/service/sidekiq/run
echo "#!/bin/sh" > /etc/service/sidekiq/run
echo "cd /home/app/example && bundle exec sidekiq -e ${rails_env}" >> /etc/service/sidekiq/run
fi
echo "######## Custom Service setup properly"
What I did was build a Docker image that can run either as a web server (Nginx + Passenger) or as a Sidekiq background process. The script decides whether it is a web or Sidekiq container via the WEB_ENV environment variable, and the Rails migration is always executed.
This way I can be sure the migrations are always up to date. I think this will work perfectly for a single task.
I am using a Passenger Docker image that has been designed to be very easy to customize, but if you use another Rails app server you can learn from the Passenger image's design and apply it to your own Docker setup.
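A rough sketch of how the two variants might be launched from the same image (the image name and environment values here are assumptions):
# Web variant: runs migrations, then enables Nginx + Passenger.
docker run -d -e WEB_ENV=1 -e RAILS_ENV=production my-app-image

# Worker variant: same image, but the startup script creates the Sidekiq service instead.
docker run -d -e WEB_ENV=0 -e RAILS_ENV=production my-app-image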
For example, you can try something like:
In your Dockerfile:
CMD ["/start.sh"]
Then you create a start.sh where you put the commands which you want to execute:
start.sh
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate
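In a container, start.sh would typically go on to exec the actual app server once the migration has finished; the server command below is only an assumption:
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate

# Hand over PID 1 to the app server after the migration
# (replace with whatever server the image actually runs).
exec bundle exec puma -C config/puma.rb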

Unable to restart rails Delayed Job on system reboot using cron

I'm using Ubuntu 14.04.4 LTS
Crontab:
SHELL=/bin/bash
@reboot ~/Projects/MyAPI/startworkers.sh;
startup script:
# /Projects/MyAPI/startworkers.sh
#!/bin/bash
source /home/server-linux/.bashrc
cd ~/Projects/LucyAPI
# Start background workers
bin/delayed_job --pool=tracking:2 --pool=emailverify:6 start
I expect there to be 6 delayed jobs running after reboot. However, none of them start. If I manually execute startworkers.sh, everything works as expected.
What am I doing wrong?
I think you might need /bin/bash as part of the crontab entry, and also the absolute path to the user's home directory. Crontab example:
@reboot /bin/bash -l -c '/home/your_user_name/Projects/MyAPI/startworkers.sh'
I would also strongly recommend using the whenever gem to manage your crontab. You can find it here: Whenever Gem

Manage sidekiq with init.d script using RVM

I'm using the init.d script provided (the init.d script from the Sidekiq GitHub repo), but I am on an Ubuntu system with RVM installed system-wide.
I cannot seem to figure out how to cd into my app directory and issue the command without there being some complaining in the log and nothing actually starting.
Question: What should the startup command for sidekiq look like in my init.d script when I am using RVM? My user is named ubuntu. Currently I have this in my init.d script:
START_CMD="$BUNDLE exec $SIDEKIQ"
# where bundle is /usr/local/rvm/gems/ruby-1.9.3-p385/bin/bundle
# and sidekiq is sidekiq
# I've also tried with the following args: -e $APP_ENV -P $PID_FILE -C $APP_CONFIG/sidekiq.yml -d -L $LOG_FILE"
RETVAL=0
start() {
status
if [ $? -eq 1 ]; then
[ `id -u` == '0' ] || (echo "$SIDEKIQ runs as root only .."; exit 5)
[ -d $APP_DIR ] || (echo "$APP_DIR not found!.. Exiting"; exit 6)
cd $APP_DIR
echo "Starting $SIDEKIQ message processor .. "
echo "in dir `pwd`"
su - ubuntu -c "$START_CMD >> $LOG_FILE 2>&1 &"
RETVAL=$?
#Sleeping for 8 seconds for process to be precisely visible in process table - See status ()
sleep 8
[ $RETVAL -eq 0 ] && touch $LOCK_FILE
return $RETVAL
else
echo "$SIDEKIQ message processor is already running .. "
fi
}
My sidekiq.log gives me this error: Could not locate Gemfile. However, I print the working directory, and according to the echoed pwd I am most definitely in my app's directory at the time this command is executed.
When I take out the su - ubuntu -c [command here], I get this error:
/usr/bin/env: ruby_noexec_wrapper: No such file or directory
My solution is to just start the process manually. When I manually cd into my app directory and issue this command:
bundle exec sidekiq -d -L log/sidekiq.log -P tmp/pids/sidekiq.pid
things go as planned, and then
sudo /etc/init.d/sidekiq status
tells me things are up and running.
Also, sudo /etc/init.d/sidekiq stop and status work as expected.
I wrote a blog post a few months ago on my experience writing an init.d script for Sidekiq; however, I was using rbenv rather than RVM.
https://cdyer.co.uk/blog/init-script-for-sidekiq-with-rbenv/
I think you should be able to use something almost identical except for modifying the username and app dir variables.
use wrappers:
BUNDLER=/usr/local/rvm/wrappers/ruby-1.9.3-p385/bundle
in case the bundler wrapper is not available, generate it with:
rvm wrapper ruby-1.9.3-p385 --no-links bundle # OR:
rvm wrapper ruby-1.9.3-p385 --no-links --all
you can use aliases to make it easier:
rvm alias create my_app 1.9.3-p385
and then use it like this:
BUNDLER=/usr/local/rvm/wrappers/my_app/bundle
this way you will not have to change the script when the application's Ruby changes; just update the alias. There is a good description/integration for this in rvm-capistrano => https://github.com/wayneeseguin/rvm-capistrano/#create-application-alias-and-wrappers
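Putting the wrapper into the script from the question, the start command might then look like the sketch below, reusing the variables the stock init.d script already defines:
# Use the RVM wrapper (here via the my_app alias) instead of the bare bundle binary,
# so the correct ruby is picked up without having to source RVM in the init script.
BUNDLE=/usr/local/rvm/wrappers/my_app/bundle
SIDEKIQ=sidekiq
START_CMD="$BUNDLE exec $SIDEKIQ -e $APP_ENV -P $PID_FILE -C $APP_CONFIG/sidekiq.yml -d -L $LOG_FILE"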

Tiny tiny rss monit

Help!
I want to set up a monitoring service on my Debian server that will monitor, and start when needed, the updater for Tiny Tiny RSS. The problem is that it is a PHP foreground process normally run in a screen session under a non-root user.
I can run it as:
php ./update_daemon2.php
or, better, putting it in the background in order to run it from a different account:
sudo -u tinyrssuser php ./update_daemon2.php -daemon > /dev/null & disown $!
I have installed monit, but I can't seem to find a way to have it detect if it is running.
I would prefer to stick with monit, but it is not necessary.
Any ideas would be appreciated.
Found the answer at:
http://510x.se/notes/posts/Install_Tiny_Tiny_RSS_on_Debian/
But use this script instead, under /etc/init.d/:
http://mylostnotes.blogspot.co.il/2013/03/tiny-tiny-rss-initd-script.html
Make sure to set the user and group.
Create an upstart script /etc/init/ttrss.conf:
description "TT-RSS Feed Updater"
author "The Epyon Avenger <epyon_avenger on TT-RSS forums>"
env USER=www-data
env TTRSSDIR=/var/www/ttrss
start on started mysql
stop on stopping mysql
respawn
exec start-stop-daemon --start --make-pidfile --pidfile /var/run/ttrss.pid --chdir $TTRSSDIR --chuid $USER --group $USER --exec /usr/bin/php ./update_daemon2.php >> /var/log/ttrss/ttrss.log 2>&1
Start the script:
sudo start --system ttrss
Add the following lines to your monit conf:
check process ttrss with pidfile /var/run/ttrss.pid
start program = "/sbin/start ttrss"
stop program = "/sbin/stop ttrss"
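After adding that, reload monit so it picks up the new check, and confirm the ttrss process shows up, for example:
sudo monit reload
sudo monit summary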
