Sidekiq consumes too much memory

I am using Sidekiq with God in my Rails app, served by Passenger and Nginx.
I see many Sidekiq processes (30-50) running, which consume about 1000 MB of RAM.
Processes like:
sidekiq 3.4.1 my_app_name [0 of 1 busy] - about 30 processes.
ruby /home/myuser/.rvm/ruby-2.1.5/bin/sidekiq --environment ... - about 20 processes.
How can I tell Sidekiq not to run so many processes?
My Sidekiq config (config/sidekiq.yml):
---
:concurrency: 1
:queues:
- default
- mailer
And my God config for Sidekiq:
num_workers = 1
num_workers.times do |num|
  God.watch do |w|
    ...
    w.start = "bundle exec sidekiq --environment #{rails_env} --config #{rails_root}/config/sidekiq.yml --daemon --logfile #{w.log}"
  end
end

The problem is the "--daemon" (or "-d") flag, which makes Sidekiq fork into the background. There is no need to daemonize when God is supervising the process; just remove this parameter.
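With the flag removed, the start line from the God config above becomes (a sketch; rails_env, rails_root and the elided settings are exactly as in the question):

w.start = "bundle exec sidekiq --environment #{rails_env} --config #{rails_root}/config/sidekiq.yml --logfile #{w.log}"

Sidekiq then stays in the foreground, so God supervises the process it actually started instead of losing track of it after the daemonizing fork.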

Related

Sidekiq deployment fails with "key not found: "MY_APP_DATABASE_PASSWORD"

This is my first question to the community.
Background:
I am trying to deploy Sidekiq on my own Debian Jessie server for a Rails 5.0.6 app that runs under Phusion Passenger as user "deploy". I have Redis 3.2.6 installed and tested OK. I've opted for a systemd unit to start Sidekiq as a system service.
Here is the configuration:
[Unit]
Description=sidekiq
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/var/www/my_app/code
ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
User=deploy
Group=deploy
UMask=0002
# if we crash, restart
RestartSec=4
#Restart=on-failure
Restart=always
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
Here is config/sidekiq.yml:
---
:verbose: true
:concurrency: 4
:pidfile: tmp/pids/sidekiq.pid
:queues:
  - [critical, 2]
  - default
  - low
production:
  :concurrency: 15
And finally config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end
How it fails
I've been trying to solve the following error found in /var/log/syslog:
Dec 18 00:13:39 jjflo systemd[1]: Started sidekiq.
Dec 18 00:13:48 jjflo sidekiq[8159]: Cannot load `Rails.application.database_configuration`:
Dec 18 00:13:48 jjflo sidekiq[8159]: key not found: "MY_APP_DATABASE_PASSWORD"
which ends up in a cycle of Sidekiq failing and systemd restarting it...
Yet another try
I have tried the following, and it works:
cd /var/www/my_app/code
su - deploy
/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
Could someone help me connect the dots, please ?
Environment variables were indeed the problem. Since I was using
ExecStart=/bin/bash -lc 'bundle...
where -l starts a login shell, the deploy user's .bashrc was read, but Debian's default .bashrc returns early for non-interactive shells, so anything below that check never runs. I had to move the export lines to the top of the file instead of the bottom where they used to be, or at least above this line of .bashrc:
case $- in
This post helped me a lot.
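An alternative that avoids depending on .bashrc ordering is to hand the variables to systemd directly, so they exist no matter what shell (if any) ExecStart uses. A minimal sketch, assuming the values live in a root-readable file (the path and value here are hypothetical):

[Service]
# inline definition...
Environment=SITE=127.0.0.1
# ...or one KEY=value pair per line in a separate file
EnvironmentFile=/etc/sidekiq.env

After editing the unit, systemctl daemon-reload followed by systemctl restart sidekiq picks up the change.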

Heroku not starting workers

My Heroku app is not starting any workers. I scale the worker first:
heroku ps:scale resque=1 -a test-eagle
Scaling dynos... done, now running resque at 1:Free
Then when I check the workers, I see:
heroku ps:workers -a test-eagle
<app> is running 0 workers
What could be wrong here? This is how my Procfile looks:
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1
Or is it because this is a free app, which can only run one web dyno and no other dynos?
Edit:
When I check with heroku ps -a <appname>, I see that the worker crashes right after starting: worker.1: crashed. This happens without doing anything in the application itself.
UPDATE: Well, I have a "free" app running that happens to run Puma, too. So, I updated Procfile as follows:
web: bundle exec puma -C config/puma.rb
resque: env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1
After that, I pushed the app to Heroku and ran heroku ps:scale, as you specified. It worked as follows:
D:\Bitnami\rubystack-2.2.5-3\projects\kctapp>heroku ps -a kctapp
=== web (Free): bundle exec puma -C config/puma.rb (1)
web.1: up 2016/06/06 19:38:24 -0400 (~ 1s ago)
D:\Bitnami\rubystack-2.2.5-3\projects\kctapp>heroku ps:scale resque=1 -a kctapp
Scaling dynos... done, now running resque at 1:Free
D:\Bitnami\rubystack-2.2.5-3\projects\kctapp>heroku ps -a kctapp
=== web (Free): bundle exec puma -C config/puma.rb (1)
web.1: up 2016/06/06 19:38:24 -0400 (~ 51s ago)
=== resque (Free): env TERM_CHILD=1 bundle exec rake resque:work QUEUE=* COUNT=1 (1)
resque.1: crashed 2016/06/06 19:39:18 -0400 (~ -3s ago)
Note that it did crash. But I don't have any code running there, so that could be why. Also note that I use heroku ps, since heroku ps:workers throws an error saying it is deprecated.
This is my config/puma.rb, if that helps:
workers Integer(ENV['WEB_CONCURRENCY'] || 4)
threads_count = Integer(ENV['MAX_THREADS'] || 8)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 5000
environment ENV['RACK_ENV'] || 'development'
Edit: I had missed the scale command...
See scaling in Heroku here. The options that I see are web, worker, rake or console, not resque. I tried your command and it didn't recognize "that formation", which I'm curious about.
Checking a free app, there is no option to add a worker dyno. Checking a hobby app, you can add workers to it. With professional apps, you can mix and match dyno types between web and worker using 1X, 2X, and Performance dynos.
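Whatever the dyno type, the reason for a crashed process usually shows up in the app's log stream. A quick way to look, using the app name from the question:

heroku logs --tail -a kctapp

The lines around resque.1 starting up should contain the exception (a missing gem, a Redis connection error, and so on) that made the dyno crash.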

Sidekiq - Prevent worker from being executed in specific machine

I'm working on a Rails project that uses Sidekiq. Our Sidekiq implementation has two workers (WorkerA, which reads queue_a, and WorkerB, which reads queue_b). One of them has to run on the same server as the Rails app, and the other on a different server (or servers). How can I prevent WorkerB from being executed on the first server, and vice versa? Can a Sidekiq process be configured to run only specific workers?
EDIT:
The Redis server is on the same machine as the Rails app.
Use a hostname-specific queue. In config/sidekiq.yml:
---
:verbose: false
:concurrency: 25
:queues:
- default
- <%= `hostname`.strip %>
In your worker:
class ImageUploadProcessor
  include Sidekiq::Worker
  sidekiq_options queue: `hostname`.strip

  def perform(filename)
    # process image
  end
end
More detail on my blog:
http://www.mikeperham.com/2013/11/13/advanced-sidekiq-host-specific-queues/
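As a side note, the queue can also be overridden per job at enqueue time with Sidekiq's ad-hoc options, which is handy when one machine needs to hand a job to another specific host (a sketch; "web-01" stands in for a real hostname):

# enqueue this one job onto the queue named after a specific host,
# overriding the class-level sidekiq_options for this call only
ImageUploadProcessor.set(queue: "web-01").perform_async("avatar.png")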
Well, here is the way to start Sidekiq with options:
nohup bundle exec sidekiq -q queue_a queue_b -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &
You can start Sidekiq with specific queues only. For example, run
nohup bundle exec sidekiq -q queue_a -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &
to execute only WorkerA.
To run different workers on different servers, do something like this:
system "nohup bundle exec sidekiq -q #{workers_string} -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &"
def workers_string
if <on server A> # using ENV or serverip to distinguish
"queue_a"
elsif <on server B>
"queue_b queue_b ..."
end
end
#or you can set the workers_strings into config file on different servers
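To make the <on server A> placeholders concrete, one option is an environment variable exported on each machine (a sketch; SERVER_ROLE is a hypothetical variable you would export yourself, not something Sidekiq defines):

def workers_string
  # SERVER_ROLE would be set in each server's shell profile or unit file,
  # e.g. "app" on the Rails box and "worker" on the job box
  case ENV["SERVER_ROLE"]
  when "app"    then "queue_a"
  when "worker" then "queue_b"
  end
end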

Thin processes die without message

I have two Thin servers running for a Rails app. I start them up with bundle exec thin start. Here is my app.yml:
chdir: /[root]/current
environment: production
address: 0.0.0.0
port: 3001
timeout: 30
log: /[root]/log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
threadpool_size: 20
servers: 2
daemonize: true
After a few hours, usually one of the two servers is gone (e.g., I only see one with htop or with pgrep -lf thin). Even worse, sometimes both of them are gone after 10 hours or so, which results in a 500 error in the browser. Furthermore, when I start 3 or 4 servers, 2 of the 4 processes die within 1 minute on average.
I don't see error messages in my Rails production.log, nor in the thin.[port] log files specified in app.yml.
Is there a way to keep the Thin servers running?
Are you sure you can run your server with bundle exec -C app.yml start?
Try bundle exec thin -C app.yml start instead (note the thin between bundle exec and the options).
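One way to find out why they die: since daemonize: true detaches Thin from the terminal and hides startup errors, temporarily run a single instance in the foreground and watch the console (a sketch; both keys already appear in the app.yml above):

# temporary debugging overrides in app.yml
daemonize: false
servers: 1

Once the crash reason is visible, revert the two keys.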

Unicorn & Heroku - is a different configuration for each dyno possible?

I'm currently running my app on 2 Heroku dynos. From what I've looked up so far, I need to add something like this to config/unicorn.rb:
worker_processes 3
timeout 30

@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake " + \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
end
I have a few different background jobs to process, some need to be run single-threaded and some concurrently. The problem with this configuration is that on both Unicorn instances it will spawn exactly the same resque worker (same queues etc).
It would greatly simplify everything if I could change the type of queues each worker processes - or even have one instance running a resque worker and the other a sidekiq worker.
Is this possible?
Perhaps you are confusing Unicorn worker_processes with Heroku workers?
You can use your Procfile to start a Heroku worker for each queue and a Unicorn process for handling web requests.
Try this setup:
/config/unicorn.rb
worker_processes 4 # amount of unicorn workers to spin up
timeout 30 # restarts workers that hang for 30 seconds
/Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
sidekiq: bundle exec sidekiq
resque: env TERM_CHILD=1 bundle exec rake resque:work QUEUES=scrape,geocode,distance,mailer
Process type names in a Procfile must be unique, so the two background workers get distinct names here instead of both being called worker.
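With distinct process type names, each one can then be scaled independently (standard heroku ps:scale usage; the dyno counts are just an example):

heroku ps:scale web=1 sidekiq=1 resque=1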
