Is it normal that Sidekiq eats 25% of my RAM even when there are no jobs running (0/10 busy)?
I'm using jemalloc as suggested online; consumption seems to have decreased a bit, but not by much.
RAM usage is a function of your app code and the gems you load. Use a profiler like derailed_benchmarks to profile RAM usage in your app. Lowering the concurrency from 10 to 5 might help a little.
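If you want to give derailed_benchmarks a try, a minimal setup looks roughly like this (the task names come from the gem's README; treat it as a sketch rather than a recipe):

```ruby
# Gemfile — minimal sketch for memory profiling with derailed_benchmarks
group :development do
  gem 'derailed_benchmarks'
end

# Then from a shell:
#   bundle exec derailed bundle:mem              # RAM each gem costs at require time
#   bundle exec derailed exec perf:mem_over_time # watch process RSS while the app handles requests
```

The bundle:mem output is usually the quickest way to see which gems account for the baseline RSS you observe with zero jobs running.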
I am using Sidekiq with my Rails app on Heroku, mainly for processing mail (ActiveJob). I have been happily using Sidekiq 3.x for the past year or so. Recently we got more traffic on our application, and as we saw memory usage nearing the allotted maximum of 512 MB, we decided to update to Sidekiq 4.0.1.
I was expecting to see a great reduction in memory usage on the Sidekiq dyno, but instead observed exactly the opposite! I eventually had to upgrade the dyno to 1 GB of memory.
Now I really want to investigate what is causing this increase in memory usage, but I don't know exactly where to start. The only other change I made was updating the gems that were considered leaky, according to this: https://github.com/ASoftCo/leaky-gems
Does anyone have good advice on how to track the memory usage of the Heroku dyno that is running Sidekiq? I have Sidekiq running with the default concurrency of 25, connected to the Redis Cloud add-on provided by Heroku.
Reduce the concurrency. More concurrency == more memory usage.
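For reference, the thread count can be set in Sidekiq's YAML config or with the -c flag on the command line; a sketch with placeholder values (the default is 25):

```yaml
# config/sidekiq.yml — placeholder values
:concurrency: 10
:queues:
  - default
  - mailers
```

The equivalent one-off form is bundle exec sidekiq -c 10. Fewer threads also means fewer ActiveRecord connections held open by the worker dyno.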
Heroku recently announced that cedar-10 will no longer be supported after November of this year. Switching to cedar-14 led to an increase in memory consumption until I experienced R14 "Memory quota exceeded" errors and had to restart the dynos. The same increase in memory usage occurred with Unicorn before I started using the unicorn_worker_killer gem. Is there a known issue with cedar-14 and unicorn/unicorn_worker_killer? I didn't find anything.
Here is a nice link for your 'problem': http://blog.codeship.com/debugging-a-memory-leak-on-heroku/
It describes the continuous increase in memory over time perfectly. The same 'problem' happens with Puma; there is also a Puma Worker Killer gem.
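If you end up using the Puma Worker Killer gem mentioned above, its configuration is small; a sketch along the lines of its README, with made-up thresholds:

```ruby
# config/initializers/puma_worker_killer.rb — illustrative thresholds only
PumaWorkerKiller.config do |config|
  config.ram           = 512   # total RAM available to the dyno, in MB
  config.frequency     = 10    # how often to check, in seconds
  config.percent_usage = 0.98  # kill the largest worker once usage exceeds 98% of ram
end
PumaWorkerKiller.start
```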
One thing to note is that you can tune your garbage collector configuration to be more aggressive. Just be careful: a single bad setting can mess up pretty much everything.
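On MRI 2.1+ that tuning is done through environment variables; the variable names below are real Ruby GC knobs, but the values are placeholders you would have to benchmark for your own app:

```sh
# Illustrative only — measure before and after; a bad setting can make things worse
heroku config:set RUBY_GC_HEAP_GROWTH_FACTOR=1.03 RUBY_GC_HEAP_GROWTH_MAX_SLOTS=300000
```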
There is, at the moment, no magic solution for this problem. We encounter it in production too; however, the memory usage sometimes stabilizes just below the limit where swapping starts.
As an immediate action, we chose to reduce the number of workers per dyno to 2 and to scale the number of dynos dynamically with HireFire.
There are a lot of tools that can help; here is a list we use every day to track expensive queries and allocations:
https://github.com/brynary/rack-bug/
https://github.com/binarylogic/memorylogic
Good luck; it's not a simple problem to solve, and I don't think there is a universally true solution for it right now.
I'm developing an API on Appfog and want to know what to focus on: more memory with fewer instances, or more instances with less memory each.
Appfog gives you 2 GB of RAM for free, and up to 16 instances if each instance gets 128 MB of RAM.
My application uses PHP, MySQL and Memcachier.
I want to launch it soon, and I'd like to know which configuration is best for my server.
What is the benefit of more RAM versus more instances?
Thanks for helping :)
Best Regards,
Johnny
You want as many instances as your app can run without running out of memory :). More instances mean better performance and uptime. However, if an instance runs out of memory it will be shut down, leaving your app running with fewer instances until they all collapse. You can diagnose this problem with the af apps and af logs <appname> --all commands. If the app is regularly running at < 100% of its instances, then the instance memory budget may be too low. When there are down instances, the logs command may reveal "memory limit reached" errors.
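Concretely, the two checks mentioned above look like this (keep the placeholder <appname> as your own app's name):

```sh
af apps                   # is the app regularly showing < 100% of its instances running?
af logs <appname> --all   # scan for "memory limit reached" style errors
```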
Memory Recommendations
Here are some memory recommendations to start out with: WordPress with several installed plugins will need > 512 MB to be stable. For lean custom PHP apps, 128 MB is usually sufficient but should be watched. If an app uses a framework, try 256 MB. These memory limits may seem high, but what matters is really the peak memory usage, not the average usage.
Load Test
Load testing with Siege can help find a memory/instance balance by determining whether your app peaks over the memory limit. Scale the app down to 1 instance and run Siege with 5, 10, and 15 concurrent connections, progressively increasing by 5 until the app falls over. If the app does fall over, bump the memory up and try again.
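A typical Siege run for this kind of test might look like the following, where -c is the number of concurrent users and -t the duration (URL and numbers are placeholders):

```sh
siege -c 5  -t 1M https://your-app.example.com/
siege -c 10 -t 1M https://your-app.example.com/
siege -c 15 -t 1M https://your-app.example.com/
```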
I am running a Redmine instance with Passenger and Nginx. With only a handful of issues in the database, Redmine consumes over 80 MB of RAM.
Can anyone share tips for reducing Redmine's memory usage? The Redmine instance is used by 3 people, and I am willing to sacrifice speed.
There isn't really any low-hanging fruit. And if there were, we would have already included and activated it by default.
80 MB RSS (as opposed to virtual size, which can be much more) is actually pretty good. In normal operation, Redmine will use between 70 and 120 MB RSS per process (depending on the deployment model; rather less on Passenger).
As andrea suggested, you can reduce your overall memory footprint by about one third when you use REE (Ruby Enterprise Edition, which is also free). But this saving can only be achieved when you run more than one process (each requiring the memory above). REE achieves this saving by optimizing Ruby for a technique called copy-on-write, so that additional application processes take less memory.
So I'm sorry, your (hypothetical) 128 MB vServer will probably not suffice. You might be able to squeeze a minimal installation into 256 MB, but it only starts to be anything but a complete pain in the ass at 512 MB (including the database).
That's because of how Rails applications work, in contrast to things like PHP: they require a running application server instance. Each instance is typically able to answer one request at a time, using about the same amount of memory all the time. So your memory consumption is roughly proportional to the number of application processes you run, independent of actual load. But if you tune your system properly, you can get quite a number of requests per second out of one process.
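Given that memory scales with the number of processes, the most direct lever for a 3-user Redmine that can afford to be slow is capping how many Passenger processes stay resident; a sketch for the Nginx integration (directive values are illustrative):

```nginx
# nginx.conf, http block — illustrative values for a tiny Redmine install
passenger_max_pool_size 1;      # keep at most one application process in memory
passenger_pool_idle_time 300;   # shut it down after 5 idle minutes, at the cost of slow cold starts
```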
Maybe I am replying very late, but I got stuck on the same issue, and I found a link on optimizing Ruby/Rails memory usage which worked for me:
http://community.webfaction.com/questions/2476/how-can-i-reduce-my-rubyrails-memory-usage-when-running-redmine
It may be helpful for someone else.
I'm using collectiveidea's delayed_job with my Ruby on Rails app (v2.3.8), and running about 40 background jobs with it on an 8GB RAM Slicehost machine (Ubuntu 10.04 LTS, Apache 2).
Let's say I ssh into my server with no workers running. When I do free -m, I see I'm generally using about 1 GB of RAM out of 8. Then, after starting the workers and waiting about a minute for them to be utilized by the code, I'm up to about 4 GB. If I come back in an hour or two, I'll be at 8 GB and into swap, and my website will be generating 502 errors.
So far I've just been killing the workers and restarting them, but I'd rather fix the root of the problem. Any thoughts? Is this a memory leak? Or, as a friend suggested, do I need to figure out a way to run garbage collection?
Actually, Delayed::Job 3.0 leaks memory in Ruby 1.9.2 if your models have serialized attributes. (I'm in the process of researching a solution.)
Here's someone who seems to have solved it: http://spacevatican.org/2012/1/26/memory-leak-in-yaml-on-ruby-1-9-2
And here's the corresponding issue on Delayed::Job: https://github.com/collectiveidea/delayed_job/issues/336
Just about every time someone asks about this, the problem is in their code. Try using one of the available profiling tools to find where your job is leaking (https://github.com/wycats/ruby-prof or similar).
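A minimal ruby-prof harness around a single job run might look like this (the job class and its perform call are placeholders for whatever your delayed_job workers actually execute):

```ruby
require 'ruby-prof'

RubyProf.start
MyLeakyJob.new.perform                     # hypothetical: run one suspect job by hand
result = RubyProf.stop

# Flat report; the top entries point at the code paths worth inspecting for leaks
RubyProf::FlatPrinter.new(result).print($stdout)
```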
Triggering GC at the end of each job will reduce your max memory usage at the cost of thrashing your throughput. It won't stop Ruby from bloating to the max size required by any individual job, however, since Ruby can't free memory back to the OS. I don't recommend taking this approach.
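For completeness, if you still want to experiment with per-job GC despite that caveat, delayed_job's job hooks give you a place to put it; a sketch assuming a custom Struct-based job with the after hook described in the delayed_job README:

```ruby
# Hypothetical job — delayed_job calls the after hook once perform has finished
class NewsletterJob < Struct.new(:recipient_ids)
  def perform
    # ... do the actual work ...
  end

  def after(job)
    GC.start   # trades throughput for a lower memory ceiling, per the caveat above
  end
end
```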