I have functionality that uses multi-threading to download files, and FastAPI is not releasing memory after the tasks are done. The API uses 1 worker. If I repeat the tasks, memory keeps growing and is never released. Does anyone know how to find out what the problem is?
Related
I have a Ruby on Rails app where we validate records from huge Excel files (200k records) in the background via Sidekiq. We also use Docker, and hence a separate container for Sidekiq. When Sidekiq is up, memory usage is approximately 120MB, but as the validation worker begins, it climbs to around 500MB (and that's after a lot of optimisation).
The issue is that even after the job is processed, memory usage stays at 500MB and is never freed, leaving no room for new jobs to be added.
I manually trigger garbage collection with GC.start after every 10k records and again after the job is complete, but it still doesn't help.
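For context, the pattern described above would look roughly like the sketch below (ValidationWorker, load_all_rows and validate_row are illustrative names, not the actual code): all 200k records are loaded up front, then validated in slices of 10k with a manual GC.start in between.

    require 'sidekiq'

    # Illustrative sketch of the setup described above, not the real worker.
    class ValidationWorker
      include Sidekiq::Worker

      def perform(file_path)
        rows = load_all_rows(file_path)      # ~200k records live in memory from here on
        rows.each_slice(10_000) do |batch|
          batch.each { |row| validate_row(row) }
          GC.start                           # manual GC after every 10k records
        end
        GC.start                             # and once more when the job completes
      end
    end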
This is most likely not related to Sidekiq, but to how Ruby allocates memory from, and releases it back to, the OS.
Most likely the memory cannot be released because of fragmentation. Besides optimizing your program (process the data in chunks instead of reading it all into memory), you could try to tune the allocator or switch to a different allocator.
A lot has been written about this specific Ruby memory issue; I really like this post by Nate Berkopec, which goes into all the details: https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html
The simple "solution" is:
Use jemalloc or, if not possible, set MALLOC_ARENA_MAX=2.
The more complex solution would be to try to optimize your program further so that it does not load that much data in the first place (see the sketch below).
I was able to cut memory usage in a project from 12GB to under 3GB by switching to jemalloc. That project dealt with a lot of imports/exports and was written quite poorly, so it was an easy win.
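To make the "process data chunkwise" advice concrete, here is a minimal sketch; stream_rows and validate_row are hypothetical (gems such as roo or creek expose similar row-by-row streaming APIs), and the point is simply that only one batch is live at a time:

    # Hypothetical chunkwise variant: rows are streamed from the file, so the
    # whole spreadsheet never sits in memory and the heap stays small, which
    # also gives the allocator less opportunity to fragment.
    def validate_in_chunks(file_path, batch_size: 10_000)
      stream_rows(file_path).each_slice(batch_size) do |batch|
        batch.each { |row| validate_row(row) }
      end
    end

    # The allocator-side fix is configuration rather than Ruby code: run the
    # Sidekiq container with MALLOC_ARENA_MAX=2 in its environment, or use a
    # Ruby built with jemalloc (./configure --with-jemalloc) or started with
    # jemalloc via LD_PRELOAD.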
I'm using Puma server and DelayedJob.
It seems that the memory taken by each job isn't released, and the process slowly bloats until I have to restart my dyno (Heroku).
Any reason why the dyno won't return to the memory usage it had before the job was performed?
Is there any way to force it to be released? I tried calling GC but it doesn't seem to help.
You may have one of the following problems, or actually all of them:
Number 1. This is not an actual problem, but a misconception about how Ruby releases memory back to the operating system. Short answer: it doesn't. Long answer: Ruby manages an internal list of free objects. Whenever your program needs to allocate new objects, it takes them from this free list. If there are no objects left there, Ruby allocates new memory from the operating system. When objects are garbage collected, they go back onto the free list, so Ruby still holds the allocated memory. To illustrate: imagine your program normally uses 100 MB. If at some point it allocates 1 GB, it will hold on to that memory until you restart it.
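A quick way to observe this on MRI is GC.stat (a minimal sketch; the exact numbers will vary, and newer Rubies do hand some completely empty pages back):

    # Allocate a lot, drop the references, collect, and compare heap pages:
    # most of the pages Ruby grabbed from the OS are typically still held as
    # free slots rather than returned.
    before = GC.stat(:heap_allocated_pages)
    big = Array.new(2_000_000) { 'x' * 50 }   # force the heap to grow
    big = nil
    GC.start
    after = GC.stat(:heap_allocated_pages)
    puts "pages before: #{before}, after GC: #{after}, free slots: #{GC.stat(:heap_free_slots)}"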
There are some good resources to learn more about it here and here.
What you should do is to increase your dyno size and monitor your memory usage over time. It should stabilize at some level. This will show you your normal memory usage.
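One low-tech way to do that monitoring from inside the app is a small helper like the hypothetical one below (it shells out to ps, so Linux/macOS only), called at the end of each job:

    # Hypothetical helper: log resident set size so you can watch where
    # memory plateaus over time.
    def log_rss(tag)
      rss_kb = `ps -o rss= -p #{Process.pid}`.to_i
      Rails.logger.info("[memory] #{tag}: #{(rss_kb / 1024.0).round(1)} MB RSS")
    end

    # e.g. log_rss("after report job") as the last line of the job's perform.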
Number 2. You can have an actual memory leak, either in your code or in some gem. Check out this repository; it contains information about well-known memory leaks and other memory issues in popular gems. delayed_job is actually listed there.
Number 3. You may have unoptimized code that uses more memory than needed, and you should investigate your memory usage and try to decrease it. If you are processing large files, you should probably do it in smaller batches, etc.
I have a server running with 4GB of RAM...
Each uploaded picture gets watermarked, so I decided to do it in a background process. However, when there are a lot of picture-upload requests, the server runs into high memory usage, and the memory is not deallocated afterwards.
My Questions:
- Why does the Sidekiq worker terminate?
- Is RMagick leaking memory?
- How do I handle this situation?
You've provided nowhere near the amount of detail needed for us to give you informed advice, but I'll try something general: how many Sidekiq workers are you running? Consider reducing the number, then queue up tons of requests to simulate a heavy load; keep doing that until you have few enough workers that Sidekiq can comfortably handle the worst load (or until you confirm that the issue looks the same even with only 1 Sidekiq worker!).
Once you've done that, you'll have a better feel for the contours of the problem: something that looks like an RMagick memory-leak issue when your server is overloaded may look different (or give you more ideas on how to address it) once you reduce the workload.
Also take a look at this similar SO question about RMagick and memory leaks; it might be worth forcing garbage collection to limit how much damage any given leak can cause, as in the sketch below.
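A hedged sketch of that idea for an RMagick-based worker (WatermarkWorker and the paths are illustrative): destroy! every image so ImageMagick's native buffers are freed promptly, and run GC.start occasionally so the Ruby-side wrappers don't pile up.

    require 'sidekiq'
    require 'rmagick'

    class WatermarkWorker
      include Sidekiq::Worker

      def perform(photo_path, watermark_path, output_path)
        image     = Magick::Image.read(photo_path).first
        watermark = Magick::Image.read(watermark_path).first
        result    = image.composite(watermark, Magick::SouthEastGravity,
                                    Magick::OverCompositeOp)
        result.write(output_path)
      ensure
        # Free ImageMagick's native memory explicitly instead of waiting for GC.
        [image, watermark, result].compact.each(&:destroy!)
        GC.start
      end
    end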
I have a JRuby on Rails app which uses several web services (WS) to gather data. The app processes the data and displays it to the user; the user makes changes, and the data is then sent back to the WS.
The problem here is that I store everything in the cache (session-based), which uses the memory store. But from time to time, with no clear reason (to me at least), this error pops up:
ActionView::Template::Error (GC overhead limit exceeded)
I read what I could find about it, and apparently this means that the garbage collector spends too much time trying to free memory without making any real progress. My guess is that since everything is cached in memory, the GC tries to free it, can't, and throws this error.
So here are the questions.
How can I get around this?
If I switch from the memory store to Redis, and my assumptions are correct, will this problem still appear?
Will the GC try to free Redis's memory area? (Might be a stupid question, but... please help nonetheless :) )
Thank you.
Redis is a separate process, so your app's garbage collector won't touch it. However, it is possible that Redis is using up memory that would otherwise be available to the app, or that you are actually using an in-process memory cache and not Redis, which is a different type of memory store.
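If the follow-up is how to actually move the Rails cache off the in-process memory store, here is a hedged sketch of the configuration change (it assumes Rails 5.2+, which ships :redis_cache_store, and a REDIS_URL environment variable):

    # config/environments/production.rb
    Rails.application.configure do
      config.cache_store = :redis_cache_store, {
        url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/1'),
        expires_in: 1.hour  # keep cached WS payloads from piling up indefinitely
      }
    end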
I am using Lucene.Net-2.3.2.1 in my project, which also supports a multithreaded environment. The Lucene indexing service runs as a Windows service. The problem is that while the service is running, its memory usage gradually increases: after a few hours Task Manager shows around 150 MB, whereas it started at 13 MB, so memory keeps growing. Using the dotTrace profiler I identified some methods and objects in Lucene.Net that drive the increase. The call tree in the dotTrace output shows that memory held by Index()- and Segment()-related functions keeps growing for as long as the service runs, so eventually it will crash the system.
Please help me figure out how to recover my application from this memory leak.
Increasing memory usage doesn't necessarily imply a memory leak. Memory leaks in .NET are not that common, but there are a few things you should check:
Events. Make sure that all event listeners are detached from the publisher as soon as they are no longer used. Failing to do so will keep the listeners alive as long as the publisher is alive.
If the code uses any disposable resources that hold handles to native code, be sure to call Dispose on them as soon as they are no longer needed.
A blocking finalizer will prevent other finalizable objects from being garbage collected, so make sure finalizers don't do any more than they have to (and in many cases they are probably not needed anyway).
If you want to examine which objects are being kept alive, as well as why they are not collected, I recommend using WinDbg + SOS.