AWS - Elastic Beanstalk Sidekiq Not Starting Automatically - ruby-on-rails

Hey, I'm running a Rails application on an AWS Elastic Beanstalk instance and have run into a couple of issues using Sidekiq for background jobs. I'm using Sidekiq to send welcome and password-reset emails.
The application is running Rails 4.2.3. The Elastic Beanstalk instance is running 64bit Amazon Linux 2015.03 v2.0.1 running Ruby 2.2 (Puma). I followed a tutorial to set up a Redis node and Sidekiq on Elastic Beanstalk, and placed the sidekiq.config from the accompanying gist inside my .ebextensions folder. I did not use any of the other recommended config files from the gist.
Everything seemed to go off without a hitch. The application deployed correctly and there were no errors in any of the logs. However, when I created a new user, triggering a welcome email, the email failed to send. I checked the Sidekiq web application and it appeared that my jobs were scheduled but never executed. On further inspection I determined that Sidekiq wasn't actually running. I connected to the EC2 instance via SSH and manually ran bundle exec sidekiq and the email sent.
For some reason Sidekiq isn't being started when the application is deployed.
Here are the contents of my sidekiq.config:
# Sidekiq interaction and startup script
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/50_restart_sidekiq":
    mode: "000755"
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/containerfiles/envvars
      PIDFILE=$EB_CONFIG_APP_PIDS/sidekiq.pid

      cd $EB_CONFIG_APP_CURRENT

      if [ -f $PIDFILE ]
      then
        SIDEKIQ_LIVES=$(/bin/ps -o pid= -p `cat $PIDFILE`)
        if [ -z $SIDEKIQ_LIVES ]
        then
          rm -rf $PIDFILE
        else
          kill -TERM `cat $PIDFILE`
          sleep 10
          rm -rf $PIDFILE
        fi
      fi

      BUNDLE=/usr/local/bin/bundle
      SIDEKIQ=/usr/local/bin/sidekiq

      $BUNDLE exec $SIDEKIQ \
        -e production \
        -P /var/app/containerfiles/pids/sidekiq.pid \
        -C /var/app/current/config/sidekiq.yml \
        -L /var/app/containerfiles/logs/sidekiq.log \
        -d

  "/opt/elasticbeanstalk/hooks/appdeploy/pre/03_mute_sidekiq":
    mode: "000755"
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/containerfiles/envvars
      PIDFILE=$EB_CONFIG_APP_PIDS/sidekiq.pid

      if [ -f $PIDFILE ]
      then
        SIDEKIQ_LIVES=$(/bin/ps -o pid= -p `cat $PIDFILE`)
        if [ -z $SIDEKIQ_LIVES ]
        then
          rm -rf $PIDFILE
        else
          kill -USR1 `cat $PIDFILE`
          sleep 10
        fi
      fi
What am I missing? Are there other files I need to add to my .ebextensions folder? Why isn't Sidekiq being started?
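One way to check on the instance whether the post-deploy hook was actually created and executed (the paths assume the Amazon Linux layout used above):
# Was the hook file written out by .ebextensions?
ls -l /opt/elasticbeanstalk/hooks/appdeploy/post/
# Did it run during the last deployment? eb-activity.log records each hook.
grep -i sidekiq /var/log/eb-activity.log | tail
# Is Sidekiq alive, and does the pidfile match a live process?
cat /var/app/containerfiles/pids/sidekiq.pid
ps aux | grep '[s]idekiq'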

Related

Make rails and sidekiq work together from different Docker containers (but can't use docker-compose)

I'm moving a Rails app from Heroku to a Linux server and deploying it using CapRover. It's an app that depends heavily on background jobs, which I run with Sidekiq.
I've managed to make it work by running both the rails server (bundle exec rails server -b 0.0.0.0 -p80 &) and sidekiq (bundle exec sidekiq &) from a script that launches both in the CMD of the Dockerfile.
But I guess it would be much better (separation of concerns) if the rails server was in one Docker container and sidekiq in another one. But I can't figure out how to connect them. How do I tell my rails app that sidekiq lives in another container?
Because I use CapRover I'm limited to Dockerfiles to deploy my images, so I can't use docker-compose.
Is there a way to tell Rails that it should use a certain Sidekiq found in a certain Docker container? CapRover uses Docker Swarm, if that is of any help.
Am I thinking about this the wrong way?
My setup, currently, is as follows:
1 Docker container with rails server + sidekiq
1 Docker container with the postgres DB
1 Docker container with the Redis DB
My desired setup would be:
1 Docker container with rails server
1 Docker container with sidekiq
1 Docker container with postgres DB
1 Docker container with Redis DB
Is that even possible with my current limitations?
My rails + sidekiq Dockerfile is as follows:
FROM ruby:2.6.4-alpine
#
RUN apk update && apk add nodejs yarn postgresql-client postgresql-dev tzdata build-base ffmpeg
RUN apk add --no-cache --upgrade bash
RUN mkdir /myapp
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install --deployment --without development test
COPY . /myapp
RUN yarn
RUN bundle exec rake yarn:install
# Set production environment
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
# Assets, to fix missing secret key issue during building
RUN SECRET_KEY_BASE=dumb bundle exec rails assets:precompile
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 80
COPY start_rails_and_sidekiq.sh /myapp/start_rails_and_sidekiq.sh
RUN chmod +x /myapp/start_rails_and_sidekiq.sh
# Start the main process.
WORKDIR /myapp
CMD ./start_rails_and_sidekiq.sh
the start_rails_and_sidekiq.sh looks like this:
#!/bin/bash

# Start the first process
bundle exec rails server -b 0.0.0.0 -p80 &
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start Rails server: $status"
  exit $status
fi

# Start the second process
bundle exec sidekiq &
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start Sidekiq: $status"
  exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
  ps aux | grep puma | grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux | grep sidekiq | grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done
I'm totally lost!
Thanks in advance!
Method 1
According to CapRover docs, it seems that it is possible to run Docker Compose on CapRover, but I haven't tried it myself (yet).
Method 2
Although this CapRover example is for a different web app, the Internal Access principle is the same:
You can simply add a srv-captain-- prefix to the name of the container if you want to access it from another container.
However, isn't this method how you told your Rails web app where to find the PostgreSQL DB container? Or are you accessing it through an external subdomain name?
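For what it's worth, the Rails app never talks to the Sidekiq process directly: the web container enqueues jobs into Redis and the Sidekiq container pulls them from there, so both containers just need to point at the same Redis. A minimal sketch, assuming the Redis app in CapRover is named redis (Sidekiq reads REDIS_URL by default):
# Both containers are built from the same image; only the CMD differs.
# CapRover exposes the hypothetical "redis" app internally as srv-captain--redis.
export REDIS_URL=redis://srv-captain--redis:6379/0

# CMD of the web container:
bundle exec rails server -b 0.0.0.0 -p80

# CMD of the Sidekiq container:
bundle exec sidekiq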

PUT Elasticsearch Ingest Pipeline by default

We currently use Elasticsearch to store Spring Boot app logs that are shipped by Filebeat, and use Kibana to visualise them.
Our entire architecture is dockerized inside a docker-compose file. Currently, when we start the stack, we have to wait for Elasticsearch to start, then PUT our ingest pipeline, then restart Filebeat, and only then do our logs show up properly ingested in Kibana.
I'm quite new to this, but I was wondering: is there no way to have Elasticsearch save ingest pipelines so that you do not have to load them every single time? I read about mounting volumes or running custom scripts that wait for ES and PUT the pipeline when ready, but all of this seems very cumbersome for a use case that seems like it should be the default.
We used a similar approach to ozlevka, by running a script during the build process of our custom Elasticsearch image.
This is our script:
#!/bin/bash
# This script sets up the Elasticsearch docker instance with the correct pipelines and templates

baseUrl='localhost:9200'
contentType='Content-Type:application/json'

# filebeat
ingestUrl=$baseUrl'/_ingest/pipeline/our-pipeline?pretty'
payload='/usr/share/elasticsearch/config/our-pipeline.json'

/usr/share/elasticsearch/bin/elasticsearch -p /tmp/pid > /dev/null &

# wait until Elasticsearch is up, then PUT the pipeline
# you can get logs if you change /dev/null to /dev/stderr
while [[ "$(curl -s -o /dev/null -w '%{http_code}' -XPUT $ingestUrl -H$contentType -d@$payload)" != "200" ]]; do
  echo "Waiting for Elasticsearch to start and posting pipeline..."
  sleep 5
done

kill -SIGTERM $(cat /tmp/pid)
rm /tmp/pid
echo -e "\n\n\nCompleted Elasticsearch Setup, refer to logs for details"
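Note that ingest pipelines are stored in the cluster state, which Elasticsearch persists under its data path. So if you keep a volume mounted on the data directory, the PUT only ever has to happen once; a minimal sketch (volume name and image tag are just examples):
# Persist the cluster state (and data) across restarts, so an ingest
# pipeline survives once it has been PUT a single time.
docker run -d --name elasticsearch \
  -e discovery.type=single-node \
  -v esdata:/usr/share/elasticsearch/data \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:6.5.4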
I suggest using a start script in the Filebeat container.
The script pings Elasticsearch until it is ready, then creates the pipeline and starts Filebeat.
#!/usr/bin/env bash
set -e

START_FILE=/tmp/.es_start_file

http () {
  local path="${1}"
  curl -XGET -s -k --fail http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}${path}
}

pipeline() {
  curl -XPUT -s -k --fail http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$PIPELINE_NAME -d @pipeline.json
}

while true; do
  if [ -f "${START_FILE}" ]; then
    pipeline
    /usr/bin/filebeat -c filebeat.yaml &
    exit 0
  else
    echo 'Waiting for elasticsearch cluster to become green'
    if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
      touch ${START_FILE}
    fi
  fi
done
This method works well for docker-compose and Docker Swarm. For Kubernetes, it is preferable to create a readiness probe.

Rails migration on ECS

I am trying to figure out how to run rake db:migrate on my ECS service but only on one machine after deployment.
Does anyone have experience with that?
Thanks
You may do it via an Amazon ECS one-off task.
Build a Docker image with rake db:migrate as the CMD in your Dockerfile.
Create a task definition. You may choose one task per host while creating the task definition, with the desired task count set to "1".
Run a one-off ECS task inside your cluster. Make sure to run it outside of a service; once the task completes, the container will stop automatically.
You can write a script to do this before your deployment. After that, you can define your other tasks as usual.
You can also refer to the container lifecycle in Amazon ECS here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html. However, this is the default behavior of Docker.
Let me know if it works for you.
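For reference, the one-off invocation with the AWS CLI could look roughly like this (the cluster and task definition names are hypothetical):
# Run the migration task once, outside any service; ECS stops the
# container when the CMD (rake db:migrate) exits.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition myapp-migrate \
  --count 1

# Optionally block until the one-off task has stopped
aws ecs wait tasks-stopped --cluster my-cluster --tasks <task-arn>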
I built a custom shell script to run when my Docker containers start (the CMD command in Docker):
#!/bin/sh
web_env=${WEB_ENV:-1}
rails_env=${RAILS_ENV:-staging}
rails_host=${HOST:-'https://www.example.com'}

echo "***************** RAILS_ENV is $RAILS_ENV, defaulting to $rails_env"
echo "***************** WEB_ENV is $WEB_ENV, defaulting to $web_env"

######## Rails migration ################################################
echo "Start rails migration"
echo "cd /home/app/example && bundle exec rake db:migrate RAILS_ENV=$rails_env"
cd /home/app/example
bundle exec rake db:migrate RAILS_ENV=$rails_env
echo "Complete migration"

if [ "$web_env" = "1" ]; then
  ######## Generate webapp.conf ##########################################
  web_app=/etc/nginx/sites-enabled/webapp.conf
  replace_rails_env="s~{{rails_env}}~${rails_env}~g"
  replace_rails_host="s~{{rails_host}}~${rails_host}~g"
  # sed: -i may not be used with stdin on macOS.
  # Edit files in-place, saving backups with the specified extension.
  # If a zero-length extension is given, no backup will be saved.
  # We use -i.back as the backup file for Linux; macOS requires the
  # backup extension to be specified.
  sed -i.back -e $replace_rails_env -e $replace_rails_host $web_app
  rm "${web_app}.back" # a leftover webapp.conf.back causes nginx to fail
  # sed -i.back $replace_rails_host $web_app
  # sed -i.back $replace_rails_server_name $web_app

  ######## Enable Web app ################################################
  echo "Web app: enable nginx + passenger: rm -f /etc/service/nginx/down"
  rm -f /etc/service/nginx/down
else
  ######## Create Daemon for background process ##########################
  echo "Sidekiq service enable: /etc/service/sidekiq/run"
  mkdir /etc/service/sidekiq
  touch /etc/service/sidekiq/run
  chmod +x /etc/service/sidekiq/run
  echo "#!/bin/sh" > /etc/service/sidekiq/run
  echo "cd /home/app/example && bundle exec sidekiq -e ${rails_env}" >> /etc/service/sidekiq/run
fi

echo "######## Custom Service setup properly"
What I did was build a single Docker image that can run either as the web server (Nginx + Passenger) or as the Sidekiq background process. The script decides between web and Sidekiq via the WEB_ENV environment variable, and the Rails migration always gets executed.
This way I can be sure the migrations are always up to date. I think this works perfectly for a single task.
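Launching the two roles from the same image then looks something like this (the image name myapp is hypothetical):
# One image, two roles: WEB_ENV selects Nginx + Passenger or Sidekiq
docker run -d -e WEB_ENV=1 -e RAILS_ENV=production myapp   # web server
docker run -d -e WEB_ENV=0 -e RAILS_ENV=production myapp   # sidekiq daemon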
I am using a Passenger Docker image that has been designed to be very easy to customize, but if you use another Rails app server you can learn from the design of the Passenger image and apply it to your own Docker setup.
For example, you can try something like:
In your Dockerfile:
CMD ["/start.sh"]
Then you create a start.sh in which you put the commands you want to execute:
start.sh
#! /usr/bin/env bash
echo "Migrating the database..."
rake db:migrate

Start Unicorn with Runit and User's RVM

I'm deploying my Rails app servers with Chef, and have just swapped to RVM from a source install of Ruby (because I was having issues with my deploy user).
Now I have my deploy sorted, assets compiled, and Bundler has installed all my gems.
The problem I have is supervising Unicorn with Runit.
RVM is not installed as the root user; only my deploy user has it, as follows:
$ rvm list
rvm rubies
=* ruby-2.0.0-p247 [ x86_64 ]
I can manually start Unicorn successfully from my deploy user. However, it won't start as part of runit.
My run file looks like this. I have also tried the solution in this SO question, unsuccessfully.
#!/bin/bash
cd /var/www/html/deploy/production/current
exec 2>&1
exec chpst -u deploy:deploy /home/deploy/.rvm/gems/ruby-2.0.0-p247/bin/unicorn -E production -c config/unicorn_production.rb
If I run it manually, I get this error:
/usr/bin/env: ruby_noexec_wrapper: No such file or directory
I created a small script (gist here) which does run as root. However, if I call this from runit, I see the workers start but I get two processes for runit and I can't stop or restart the service:
Output of ps:
1001 29062 1 0 00:08 ? 00:00:00 unicorn master -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
1001 29065 29062 9 00:08 ? 00:00:12 unicorn worker[0] -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
root 29076 920 0 00:08 ? 00:00:00 su - deploy -c cd /var/www/html/deploy/production/current; export GEM_HOME=/home/deploy/.rvm/gems/ruby-2.0.0-p247; /home/deploy/.rvm/gems/ruby-2.0.0-p247/bin/unicorn -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
1001 29083 29076 0 00:08 ? 00:00:00 -su -c cd /var/www/html/deploy/production/current; export GEM_HOME=/home/deploy/.rvm/gems/ruby-2.0.0-p247; /home/deploy/.rvm/gems/ruby-2.0.0-p247/bin/unicorn -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
What should I do here? Move back to monit which worked nicely?
Your run file is doing it wrong: you are using the binary without setting up the environment. For that purpose you should use wrappers:
rvm wrapper ruby-2.0.0-p247 --no-links unicorn
To simplify the script, use an alias so that it does not need to be changed when you decide which Ruby should be used:
rvm alias create my_app_unicorn ruby-2.0.0-p247
And change the script to:
#!/bin/bash
cd /var/www/html/deploy/production/current
exec 2>&1
exec chpst -u deploy:deploy /home/deploy/.rvm/wrappers/my_app_unicorn/unicorn -E production -c config/unicorn_production.rb
This will ensure the proper environment is used for executing Unicorn, and any time you want to change the Ruby used to run it, just recreate the alias against the new Ruby.
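Once the wrapper is in place, the service can be managed through runit as usual (assuming the service directory under /etc/service is named unicorn):
# Restart and inspect the runit-supervised service
sudo sv restart unicorn
sudo sv status unicorn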

Manage sidekiq with init.d script using RVM

I'm using the init.d script provided (the init.d script from the Sidekiq GitHub repo), but I am on an Ubuntu system with RVM installed system-wide.
I cannot seem to figure out how to cd into my app directory and issue the command without there being some complaining in the log and nothing actually starting.
Question: What should the startup command for sidekiq look like in my init.d script when I am using RVM? My user is named ubuntu. Currently I have this in my init.d script:
START_CMD="$BUNDLE exec $SIDEKIQ"
# where bundle is /usr/local/rvm/gems/ruby-1.9.3-p385/bin/bundle
# and sidekiq is sidekiq
# I've also tried with the following args: -e $APP_ENV -P $PID_FILE -C $APP_CONFIG/sidekiq.yml -d -L $LOG_FILE"
RETVAL=0

start() {
  status
  if [ $? -eq 1 ]; then
    [ `id -u` == '0' ] || (echo "$SIDEKIQ runs as root only .."; exit 5)
    [ -d $APP_DIR ] || (echo "$APP_DIR not found!.. Exiting"; exit 6)
    cd $APP_DIR
    echo "Starting $SIDEKIQ message processor .. "
    echo "in dir `pwd`"
    su - ubuntu -c "$START_CMD >> $LOG_FILE 2>&1 &"
    RETVAL=$?
    # Sleeping for 8 seconds for the process to be precisely visible in the process table - see status()
    sleep 8
    [ $RETVAL -eq 0 ] && touch $LOCK_FILE
    return $RETVAL
  else
    echo "$SIDEKIQ message processor is already running .. "
  fi
}
My sidekiq.log gives me this error: Could not locate Gemfile. However, I print the working directory, and according to the echoed pwd I am most definitely in my app's current directory at the time this command is executed.
When I take out the su - ubuntu -c [command here], I get this error:
/usr/bin/env: ruby_noexec_wrapper: No such file or directory
My solution is to just start the process manually. When I manually cd into my app directory and issue this command:
bundle exec sidekiq -d -L log/sidekiq.log -P tmp/pids/sidekiq.pid
things go as planned, and then
sudo /etc/init.d/sidekiq status
tells me things are up and running.
Also, sudo /etc/init.d/sidekiq stop and status work as expected.
I wrote a blog post a few months ago on my experience writing an init.d script for Sidekiq, however I was using rbenv rather than RVM.
https://cdyer.co.uk/blog/init-script-for-sidekiq-with-rbenv/
I think you should be able to use something almost identical except for modifying the username and app dir variables.
Use wrappers:
BUNDLER=/usr/local/rvm/wrappers/ruby-1.9.3-p385/bundle
In case the bundler wrapper is not available, generate it with:
rvm wrapper ruby-1.9.3-p385 --no-links bundle # OR:
rvm wrapper ruby-1.9.3-p385 --no-links --all
You can use aliases to make it easier:
rvm alias create my_app 1.9.3-p385
and then use it like this:
BUNDLER=/usr/local/rvm/wrappers/my_app/bundle
This way you will not have to change the script when the application's Ruby changes; just update the alias. There is a good description of this integration in rvm-capistrano: https://github.com/wayneeseguin/rvm-capistrano/#create-application-alias-and-wrappers
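With the wrapper and alias in place, the START_CMD from the question would then become something like this (the arguments are the ones already listed above):
# The wrapper sets up the RVM environment, so no login shell is needed
BUNDLER=/usr/local/rvm/wrappers/my_app/bundle
START_CMD="$BUNDLER exec sidekiq -e $APP_ENV -P $PID_FILE -C $APP_CONFIG/sidekiq.yml -L $LOG_FILE"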
