I ran htop on my production server to see what was eating my RAM. A lot of sidekiq processes are running; is this normal?
Press Shift-H. htop shows individual threads as processes by default. There is only one actual sidekiq process.
You seem to have configured it to have 25 workers.
By default, one sidekiq process creates 25 threads.
If that's crushing your machine with I/O, you can adjust it down:
sidekiq -c 10
https://github.com/mperham/sidekiq/wiki/Advanced-Options
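A minimal sketch (not from the original answer) of why htop shows so many entries: every thread inside a single Ruby process gets its own line unless you press Shift-H, yet they all share one PID.

# Stand-ins for Sidekiq's worker threads; they all share the parent's PID.
threads = Array.new(5) { Thread.new { sleep 30 } }

puts "parent PID: #{Process.pid}"   # the only PID `ps aux` will report
threads.each(&:join)
# While this sleeps, plain htop lists one entry per thread, but pressing
# Shift-H (or running ps aux) collapses them back to a single ruby process.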
If you are not using JRuby, then it's likely these are all separate processes that consume memory.
Related
Can someone describe to me how sidekiq workers relate to PIDs in the OS?
I have Ubuntu 14.04 on AWS, and I configured my QA environment in sidekiq.yml to use 2 workers, as shown here.
But what I see in my OS is the following:
So 2 workers equals 6 processes. Is that correct? How do I limit the number of processes for Sidekiq, or what should I do to reduce memory usage?
Sometimes my server runs out of memory and a shutdown occurs.
Any help appreciated.
Those are threads in the htop listing, not processes; press Shift-H in htop to collapse the per-thread view and see the actual process count.
I'm under the impression that free dynos will spin down after a while.
What happens to a script that currently runs alongside my main Ruby server and fires off a PhantomJS scraper every now and again?
Do I need a dedicated worker process for this or will Heroku Scheduler do just fine alongside a paid dyno?
I've no issue paying for it, but development always takes a hot second and their workers are a little pricey.
Thanks in advance.
If you want to periodically run a script, Heroku Scheduler is really the ideal way to do this. It'll use one-off dynos, which DO count towards your free dyno allocation each month, but they only run for the duration of the task and stop afterwards.
This is much cheaper than, say, a dedicated worker dyno that is up 24x7; a one-off dyno (powered by Heroku Scheduler) only runs for a few minutes per day.
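For example, Heroku Scheduler is normally pointed at a rake task. A minimal sketch, assuming a Rails app (drop :environment otherwise); the task name and the phantomjs command are placeholders, not from the question:

# lib/tasks/scraper.rake
namespace :scraper do
  desc "Fire off the PhantomJS scrape once, then let the one-off dyno exit"
  task run: :environment do
    # Placeholder for whatever script currently drives PhantomJS
    system("phantomjs scrape.js") || abort("scrape failed")
  end
end

In the Scheduler add-on you would then schedule rake scraper:run at the desired frequency; the one-off dyno boots, runs the task, and stops.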
I have a Rails app which uses Sidekiq for background processing. To deploy this application I use Capistrano, an Ubuntu server, and Apache with Passenger. To start and restart Sidekiq I use the capistrano-sidekiq gem.
My problem is that while Sidekiq is running, the amount of memory (RAM) it uses keeps growing. And when Sidekiq has finished all its jobs (workers), it keeps holding a large amount of RAM and does not release it.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
ubuntu 2035 67.6 45.4 3630724 1838232 ? Sl 10:03 133:59 sidekiq 3.5.0 my_app [0 of 25 busy]
How do I make Sidekiq release the memory it used after the workers have finished their work?
Sidekiq uses threads to execute jobs.
And threads share the same memory as the parent process.
So if one job uses a lot of memory, the Sidekiq process's memory usage will grow and won't be released by Ruby.
Resque uses a different technique: it executes every job in a separate process, so when the job is done, the job's process exits and the memory is released.
One way to prevent your Sidekiq process from using too much memory is to use Resque's forking method.
You could have your job's main method executed in another process and wait until that new process exits. For example:
class Job
  include Process

  def perform
    # Run the heavy work in a forked child process...
    pid = fork do
      # your code
    end
    # ...and wait for the child to exit; its memory is returned to the OS.
    waitpid(pid)
  end
end
A couple of days ago I noticed a strange thing: from time to time the server stops processing requests for a while. In the top output it looks like this:
ten Unicorn workers process requests;
then, for some reason, they stop doing anything. I mean, all ten workers have 'sleeping' status;
for ten to fifteen seconds they sleep;
and then suddenly all ten workers start processing requests at the same time (lots of them were queued during those 10s);
I have the following setup:
nginx, unicorn 4.6.2, postgres, redis for sessions and cache, MRI ruby 2.0.0p353.
My first thought was to blame redis (because if redis doesn't serve sessions, all processes will wait for it), but that doesn't seem to be the case, because while the unicorn workers freeze, redis keeps serving other processes that do background jobs.
I don't understand the reason for this strange behaviour.
If someone has thoughts on the matter I would gladly check them. If you need additional information, just tell me what to do and I'll try to provide it.
UPDATE:
Unicorn config
strace on unicorn worker
strace on unicorn master
strace on nginx
It turned out (with the help of strace on the worker processes) that the workers were trying to write logs to disk. The disk was heavily loaded and the processes were blocked on those writes.
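One possible mitigation, which is my own suggestion rather than part of the original fix, is to keep application logging off the contended disk, for example by routing it to syslog (assuming a Rails app; "my_app" is a placeholder program name):

# config/environments/production.rb
require "syslog/logger"

Rails.application.configure do
  # Send log lines to the local syslog daemon instead of a file on the
  # overloaded disk, so slow writes no longer block Unicorn workers.
  config.logger = ActiveSupport::TaggedLogging.new(Syslog::Logger.new("my_app"))
end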
My Rails application has a number of tasks which are offloaded into background processes, such as image resizing and uploading to S3. I'm using delayed_job to manage these processes.
These processes, particularly thumbnailing PDFs (using Ghostscript) and resizing images (using ImageMagick), are CPU intensive and often consume 100% CPU time. Since these jobs are running on the same (RedHat Linux) server as the web application itself, as well as the DB, they can lead to our web application being unresponsive.
One solution is to get another server on which to run only the background jobs. I guess this would be the optimal solution? However, since this isn't something I can do immediately, I wonder whether it would be possible to somehow make the background jobs run at a lower operating-system priority, and hence consume fewer CPU cycles in doing their work?
Thoughts appreciated.
If I'm not mistaken, delayed_job uses worker processes that will handle all the background jobs. It should be easily possible to alter the OS scheduling priority of the process when you start it.
So instead of, for example:
ruby script/delayed_job -e production -n 2 start
try:
nice -n 15 ruby script/delayed_job -e production -n 2 start
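If you would rather not change how the workers are started, another option, a sketch of my own rather than something from the original answer, is to drop the priority from inside the job itself with Ruby's Process.setpriority (the job class below is hypothetical):

class ResizeImageJob
  def perform
    # Lower the scheduling priority of this worker process (pid 0 = self);
    # 15 matches the nice -n 15 value above. Note that an unprivileged
    # process cannot raise its priority back later, and the change applies
    # to the whole delayed_job worker, not just this one job.
    Process.setpriority(Process::PRIO_PROCESS, 0, 15)
    # ... CPU-heavy ImageMagick / Ghostscript work here ...
  end
end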