How to start Sidekiq worker on Ubuntu VPS (DigitalOcean) - ruby-on-rails

I already set up Redis, Sidekiq and the Rails app, and I can access it from //url/sidekiq, but how do I start the Sidekiq worker on a VPS? On my local machine I do:
bundle exec sidekiq -q carrierwave,5 -q default
What should I do on a VPS hosting?
Thanks

Looks like this is a duplicate of this question: how to detach sidekiq process once started in terminal
You have to run the following command from your Rails root:
bundle exec sidekiq -d -L sidekiq.log -q mailers,5 -q default -e production
This will detach the process so you can quit the ssh session and the command will keep running in the background, logging the output to the sidekiq.log file.
Take care to choose an appropriate location for the log file, and don't forget to set up a logrotate rule for it.
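For reference, a logrotate rule could look like the sketch below; the log path and rotation schedule are assumptions, so adjust them to wherever you write sidekiq.log. copytruncate matters here because a detached Sidekiq won't reopen the file after rotation.
# /etc/logrotate.d/sidekiq -- sketch; the path below is an assumption
/var/www/myapp/shared/log/sidekiq.log {
  daily
  rotate 14
  missingok
  notifempty
  compress
  delaycompress
  copytruncate
}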

Related

How to run sidekiq in background - what is the best approach with a Rails app running on Nginx

I'm using Sidekiq 6.0.1.
I'm trying to run it in the background; here is the command I'm using:
bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e development
This is showing
ERROR: Daemonization mode was removed in Sidekiq 6.0, please use a proper process supervisor to start and manage your services
ERROR: Logfile redirection was removed in Sidekiq 6.0, Sidekiq will only log to STDOUT
My application is of Ruby on Rails and deployed using the Nginx web server.
What would be the best approach to run Sidekiq in the background so my Rails application can run the workers?
If you are running on Linux, learn to use systemd.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
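To give an idea of what that looks like, here is a minimal systemd unit sketch; the paths, user and environment are assumptions, and the wiki page above has the maintained example:
# /etc/systemd/system/sidekiq.service -- sketch; adjust WorkingDirectory,
# the bundle path (check `which bundle`) and the User/Group for your deploy.
[Unit]
Description=sidekiq
After=syslog.target network.target

[Service]
Type=simple
WorkingDirectory=/var/www/myapp/current
ExecStart=/usr/local/bin/bundle exec sidekiq -e production
User=deploy
Group=deploy
UMask=0002
RestartSec=1
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start it with sudo systemctl enable sidekiq && sudo systemctl start sidekiq; logs then go to journald (journalctl -u sidekiq), so no -L flag is needed.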

How to run Rails migrations and seeding in Amazon Elastic Beanstalk single container Docker environment

I'm working on deploying a Rails application to Elastic Beanstalk using docker and so far everything has worked out. I'm at the point where the application needs to run migrations and seeding of the database, and I'm having trouble figuring out exactly how I need to proceed. It appears that any commands in the /.ebextensions folder run in the context of the host machine and not the docker container. Is that correct?
I'm fine with running a command to execute migrations inside of the docker container after startup, but how do I ensure that the migrations only run on a single instance? Is there an environment variable or some other way I can tell what machine is the leader from within the docker container?
Update: I posted a question in the Amazon Elastic Beanstalk forums asking how to run "commands from Docker host on the container" on 6 Aug 2015. You can follow the conversation there as well, as it is useful.
I'm not sure the solution you have proposed is going to work. It appears that the current process for EB Docker deployment runs container commands before the new docker container is running, which means that you can't use docker exec on it. I suspect that your commands will execute against the old container which is not yet taken out of service.
After much trial and error I got this working through using container commands with a shell script.
container_commands:
  01_migrate_db:
    command: ".ebextensions/scripts/migrate_db.sh"
    leader_only: true
And the script:
if [ "${PROCESS}" = "WEB" ]; then
. /opt/elasticbeanstalk/hooks/common.sh
EB_SUPPORT_FILES=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)
EB_CONFIG_DOCKER_ENV_ARGS=()
while read -r ENV_VAR; do
EB_CONFIG_DOCKER_ENV_ARGS+=(--env "$ENV_VAR")
done < <($EB_SUPPORT_FILES/generate_env)
echo "Running migrations for aws_beanstalk/staging-app"
docker run --rm "${EB_CONFIG_DOCKER_ENV_ARGS[#]}" aws_beanstalk/staging-app bundle exec rake db:migrate || echo "The Migrations failed to run."
fi
true
I wrap the whole script in a check to ensure that migrations don't run on background workers.
I then build the ENV in exactly the same way that EB does when starting the new container so that the correct environment is in place for the migrations.
Finally I run the command against the new container which has been created but is not yet running - aws_beanstalk/staging-app. It exits at the end of the migration and the --rm removes the container automatically.
Update: This solution, though it seemed correct at first, doesn't work as intended, for reasons best explained in nmott's answer below. I'll leave it here for posterity.
I was able to get this working using container_commands via the .ebextensions directory config files. Learn more about container commands in the AWS Elastic Beanstalk documentation. And I quote:
The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. They also have access to environment variables such as your AWS security credentials.
Additionally, you can use leader_only. One instance is chosen to be the leader in an Auto Scaling group. If the leader_only value is set to true, the command runs only on the instance that is marked as the leader.
So, applying that knowledge ... the container_commands.config will be ...
# .ebextensions/container_commands.config
container_commands:
  01_migrate_db:
    command: docker exec `docker ps -l -q -f 'status=running'` rake db:migrate RAILS_ENV=production
    leader_only: true
    ignoreErrors: false
  02_seed_db:
    command: docker exec `docker ps -l -q -f 'status=running'` rake db:seed RAILS_ENV=production
    leader_only: true
    ignoreErrors: false
That runs the migrations first and then seeds the database. We use docker exec [OPTIONS] CONTAINER_ID COMMAND [ARG...], which runs the appended COMMAND [ARG...] in the context of the existing container (not the host). And we get CONTAINER_ID by running docker ps -l -q -f 'status=running', which returns the ID of the most recently created running container.
Use .ebextensions/01-environment.config:
container_commands:
  01_write_leader_marker:
    command: touch /tmp/is_leader
    leader_only: true
Now add the /tmp directory to the volumes in your Dockerfile / Dockerrun.aws.json.
Then wrap all initialization commands, such as the DB migration, in a shell script that first checks whether the file /tmp/is_leader exists and runs them only in that case, as in the sketch below.
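A sketch of what such a script could look like; the marker path matches the container command above, and the rake tasks are assumptions about what you need to run:
#!/bin/bash
# Run one-off initialization only on the EB leader instance.
# Assumes /tmp from the host (where 01_write_leader_marker wrote the marker)
# is mounted into the container as a volume.
if [ -f /tmp/is_leader ]; then
  bundle exec rake db:migrate
  bundle exec rake db:seed
fi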
Solution 1: run migrations when you start the server
At the company I work for, we use the literal equivalent of this line to start the production server:
bundle exec rake db:migrate && bundle exec puma -C /app/config/puma.rb
https://github.com/equivalent/docker_rails_aws_elasticbeanstalk_demmo_app/blob/master/puppies/script/start_server.sh
https://github.com/equivalent/docker_rails_aws_elasticbeanstalk_demmo_app/blob/master/puppies/Dockerfile
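A minimal sketch of what such a start script boils down to; the Puma config path matches the line above, while set -e and exec are my additions so that a failed migration aborts the boot and Puma replaces the shell as the container's main process:
#!/bin/bash
set -e
bundle exec rake db:migrate
exec bundle exec puma -C /app/config/puma.rb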
And yes, this is a load-balanced environment (3-12 instances depending on load), and yes, they all execute this script. (We load-balance by introducing one instance at a time during deployment.)
The thing is, the first deployment batch (first instance up) will execute bundle exec rake db:migrate and run the migrations (meaning it will apply the DB changes),
and then once that is done it will start the server with bundle exec puma -C /app/config/puma.rb.
The second deployment batch (2nd instance) will also run bundle exec rake db:migrate, but it will not do anything (as there are no pending migrations).
It will just continue to the second part of the script, bundle exec puma -C /app/config/puma.rb.
So honestly I don't think this is the perfect solution, but it is pragmatic and works for our team.
I don't believe there is any generic "best practice" out there for running Rails migrations on EB, as some
application teams don't want to run the migrations right after deployment while others (like our team)
do want to run them straight after deployment.
Solution 2: background worker environment to run migrations
If you have a worker like Delayed Job, Sidekiq or Resque in its own EB environment, you can configure it to run
the migrations:
bundle exec rake db:migrate && bundle exec sidekiq
So first you deploy the worker, and once the worker is deployed, you deploy the web server, which will not run the migrations,
e.g.: just bundle exec puma
Solution 3: hooks
I agree that using EB hooks is OK for this, but
honestly I use EB hooks only for more complex
devops stuff (like pulling SSL certificates for the Nginx web server), not for running migrations.
Anyway, hooks were already covered in this SO question, so I'll not repeat the solution. I will just reference this article that will help you understand them:
https://blog.eq8.eu/article/aws-elasticbeanstalk-hooks.html
Conclusion
It's really up to you to figure out what is best for your application. But honestly, EB is a really simple tool
(compared to tools like Ansible or Kubernetes). No matter what you implement, as long as it works it's OK :)
One more helpful link for EB for Rails developers:
talk: AWS Elastic Beanstalk & Docker for Rails developers

how to start sidekiq automatically on localhost

I have a Rails app which uses a Procfile to start Sidekiq automatically on Heroku. I'd like it to start Sidekiq automatically on localhost (I currently just run 'bundle exec sidekiq' in a separate window). Here's my Procfile:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq
How would I do this? I do have foreman installed locally
You can create a Procfile.dev file that's meant to be used in development. To use it, just run 'foreman start -f Procfile.dev' from the terminal. Passing the -f option allows you to set the path of the Procfile to use.
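For example, a Procfile.dev mirroring the Procfile above might be as simple as this (the port is an assumption; in development you probably don't need $PORT):
web: bundle exec unicorn -p 3000 -c ./config/unicorn.rb
worker: bundle exec sidekiq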
By the way, it would probably be good to gitignore your Procfile.dev as well. That way, others on your team can have their own Procfile.dev.
Hope that helps!

Unable to bind to port 80, but running on the current shell works without any issues

I get the following error while trying to run "cap production unicorn:start"
F, [2013-07-12T04:36:18.134045 #28998] FATAL -- : error adding listener addr=0.0.0.0:80
/home/ec2-user/apps/foo_prod/shared/bundle/ruby/2.0.0/gems/unicorn-4.6.3/lib/unicorn/socket_helper.rb:147:in `initialize': Permission denied - bind(2) (Errno::EACCES)
Running the following command manually does work without any issues. What could be the problem here?
rvmsudo unicorn_rails -c config/unicorn/production.rb -D --env production
You need root access to bind to lower ports like port 80. The rvmsudo command executes in the root context, and therefore it works.
The Capistrano task executes in a normal user context (probably deploy), depending on your configuration. You should give the Capistrano deploy user sudo rights and make sure your cap task uses sudo to start Unicorn.
The answer by @Iuri G. gives you the reason and a possible solution.
I have another suggestion: unless you have an extremely compelling reason to run Unicorn on port 80, change it to a higher port (>1024), like 3000. This will solve your problem.
If the application is exposed to the public, it is too easy to overwhelm Unicorn and make your application unavailable to end users. In such a case, do put Unicorn behind a proxy (like Nginx). The proxy will listen on port 80 and Unicorn on a higher port.
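For illustration, a stripped-down Nginx server block for that setup might look like the following; the port, server_name and upstream name are assumptions, and a Unix socket works just as well:
upstream unicorn_app {
  server 127.0.0.1:3000 fail_timeout=0;
}

server {
  listen 80;
  server_name example.com;

  location / {
    # Pass the original host and client IP through to Unicorn.
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://unicorn_app;
  }
}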
In my development environment, using RubyMine, I ran into this recently.
I used SSH to redirect port 80 to 8080.
sudo ssh -t -L 80:127.0.0.1:8080 user@0.0.0.0
I assume you are running Ubuntu as production server. On your server you need to edit your sudoers file:
First type select-editor and select nano (or another editor you feel comfortable with).
Then at the bottom of the file, before the include line, add this line:
%deployer ALL=(ALL)NOPASSWD:/path/to/your/unicorn_rails
You need to replace deployer with the user name you are using with Capistrano, and replace /path/to/your/unicorn_rails with its correct path. This will allow your deployer user to "sudo unicorn_rails" without being prompted for a password.
Finally edit your unicorn:start Capistrano task, and add rvmsudo ahead of the command line that starts Unicorn:
rvmsudo unicorn_rails -c config/unicorn/production.rb -D --env production
If it does not work you can try this instead
bundle exec sudo unicorn_rails -c config/unicorn/production.rb -D --env production
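For example, a Capistrano v2-style task could look roughly like this sketch; current_path and the :app role are the usual Capistrano defaults, and the unicorn_rails line is the one from above:
namespace :unicorn do
  desc "Start unicorn via rvmsudo"
  task :start, :roles => :app do
    # Runs on the app servers as the deploy user configured in deploy.rb.
    run "cd #{current_path} && rvmsudo unicorn_rails -c config/unicorn/production.rb -D --env production"
  end
end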

Unicorn init script - not starting at boot

I'm very new to system administration and have no idea how init.d works. So maybe I'm doing something wrong here.
I'm trying to start Unicorn on boot, but somehow it just fails to start every time. I'm able to manually do a start/stop/restart by simply running service app_name start. I can't seem to understand why Unicorn doesn't start at boot if manually starting and stopping the service works. Some user permission issue, maybe?
My unicorn init script and the unicorn config files are available here https://gist.github.com/1956543
I'm setting up a development environment on Ubuntu 11.1 running inside a VM.
UPDATE - Could this be because of the VM? I'm currently sharing the entire codebase (folder) with the VM, which also happens to contain the Unicorn config needed to start Unicorn.
Any help would be greatly appreciated !
Thanks
To get Unicorn to run when your system boots, you need to associate the init.d script with the default set of "runlevels", which are the modes that Ubuntu enters as it boots.
There are several different runlevels, but you probably just want the default set. To install Unicorn here, run:
sudo update-rc.d <your service name> defaults
For more information, check out the update-rc.d man page.
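Note that on Ubuntu, update-rc.d expects the init script to carry an LSB header (newer releases read the start/stop runlevels from it and warn if it is missing), so make sure yours has one. A minimal sketch, with a placeholder service name:
### BEGIN INIT INFO
# Provides:          unicorn_app_name
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts the Unicorn app server
### END INIT INFO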
You can configure a cron job to start the Unicorn server on reboot:
crontab -e
and add
@reboot /bin/bash -l -c 'service unicorn_<your service name> start >> /<path to log file>/cron.log 2>&1'
