Why does Dart become very slow after some time? - memory

We have some problems with Dart. It seems that after some period of time the garbage collector can't clear the memory in the VM, so the application hangs. Has anyone run into this issue? Are there any memory limits?

You should reuse your objects instead of creating new ones. Consider the object pool pattern (a small sketch follows the links below):
http://en.wikipedia.org/wiki/Object_pool_pattern
Be careful about the canvas and its proper destruction.
More articles on GC performance:
http://blog.tojicode.com/2012/03/javascript-memory-optimization-and.html
http://qt-project.org/doc/qt-5/qtquick-performance.html
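
To make the pooling advice concrete, here is a minimal sketch of the object pool idea. It is written in C# purely for illustration, and the Particle / ParticlePool names are invented for this example; the same pattern carries over directly to Dart: keep released objects on a free list and hand them back out instead of allocating fresh ones on every frame.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical short-lived object that the app would otherwise churn through.
class Particle
{
    public double X, Y;
    public void Reset() { X = 0; Y = 0; }
}

// Minimal object pool: hand out recycled instances instead of creating
// garbage with a fresh allocation on every request.
class ParticlePool
{
    private readonly Stack<Particle> _free = new Stack<Particle>();

    public Particle Acquire()
    {
        return _free.Count > 0 ? _free.Pop() : new Particle();
    }

    public void Release(Particle p)
    {
        p.Reset();      // clear state so stale data never leaks between uses
        _free.Push(p);  // keep the instance for the next Acquire call
    }
}

class PoolDemo
{
    static void Main()
    {
        var pool = new ParticlePool();
        var p = pool.Acquire();                    // first call allocates
        pool.Release(p);
        var q = pool.Acquire();                    // second call reuses the same instance
        Console.WriteLine(ReferenceEquals(p, q));  // True
    }
}
```

The point is simply that a steady-state workload stops producing garbage, so the GC has far less work to do.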

Are there any memory limits?
Yes. The Dart VM apparently runs with maximum heap sizes that can be configured at launch time; see:
How to run a dart program with big memory?
(The following applies to all garbage-collected languages ...)
If your application starts to run out of space (i.e. the heap is slowly filling with objects that the GC can't remove), then you may get into a nasty situation where the GC runs more and more frequently and manages to reclaim less and less memory each time. Eventually you run out of memory, but before that happens the application gets really slow.
The solution is typically to do one or both of the following:
Find out what is causing the memory to run out. It is typically not that you are allocating too many objects; rather, the usual cause is that the unwanted objects are all still reachable via some data structure that your application has built (see the sketch after this list).
Set the "quick death" tuning option for the GC .... if available. For example, Java garbage collectors can be configured to measure the time spent garbage collecting. (The GC overhead.) When the GC overhead exceeds a preset ratio, the Java virtual machine throws an OutOfMemoryError to "pull the plug".

Related

DelayedJob doesn't release memory

I'm using Puma server and DelayedJob.
It seems that the memory taken by each job isn't released, and I slowly get memory bloat that forces me to restart my dyno (Heroku).
Any reason why the dyno won't return to the memory usage figure it had before the job was performed?
Any way to force the memory to be released? I tried calling the GC but it doesn't seem to help.
You may have one of the following problems, or indeed all of them:
Number 1. This is not an actual problem, but a misconception about how Ruby releases memory to the operating system. Short answer: it doesn't. Long answer: Ruby manages an internal list of free objects. Whenever your program needs to allocate new objects, it takes them from this free list. If there are no more objects there, Ruby allocates new memory from the operating system. When objects are garbage collected, they go back to the free list. So Ruby still holds the memory it allocated. To illustrate: imagine your program normally uses 100 MB. If at some point it allocates 1 GB, it will hold on to that memory until you restart it.
There are some good resources to learn more about this here and here.
What you should do is to increase your dyno size and monitor your memory usage over time. It should stabilize at some level. This will show you your normal memory usage.
Number 2. You may have an actual memory leak, either in your code or in some gem. Check out this repository; it contains information about well-known memory leaks and other memory issues in popular gems. delayed_job is actually listed there.
Number 3. You may have unoptimized code that uses more memory than needed; investigate its memory usage and try to reduce it. If you are processing large files, for example, consider doing it in smaller batches.

Delphi XE4 64 bit out of memory

We have a multi-threaded client-server project, and we recently moved the server-side applications to a 64-bit architecture. That solved many problems, and our application now runs steadily for a week under heavy load. But after that period, the application on the server crashes with an "Out of memory" error. At that point, free memory is still available in large quantities, so it seems the problem is memory fragmentation. Is there any way to defragment memory, perhaps some tool? Or could there be other reasons for "Out of memory" in a situation like this?
Memory allocation:
Total amount of memory: 96 GB
Physical: 48 GB
Virtual: 48 GB
Free physical memory at the time of the collapse: 3 GB
Free virtual memory at the time of the collapse: 45 GB
Maximum memory allocated per thread: 1 GB
As you have surmised, your problem is fragmentation. There's nothing you can do to defragment memory: any tool that attempted it would need a complete map of every pointer in your program. Note that even .NET's garbage collector can't accomplish this for the large object heap; I have crashed a 32-bit .NET application with only 100 MB actually in use.
Instead, what you need to do is avoid the fragmentation in the first place. Generally that means object pools, saving old objects for reuse rather than freeing them and later reallocating them.
Another option (if the service isn't time-critical and doesn't need to be online 100% of the time) is to restart your service every 24 hours or so (either via the task scheduler or from within your own program).
If from within your own program, you can do it in one of two ways, depending on what type of service you have (i.e. whether two instances of your service may exist at the very same, brief, moment):
1) Launch a second instance of your service from within the currently running service, and then terminate.
2) Launch a tiny helper program that waits, for example, 5 seconds and then (re)starts your service, then terminate the currently running instance.
But the best way would be to avoid the fragmentation in the first place, as Loren Pechtel writes.

Should a process always consume the same amount of memory if executed in the same way?

Hi folks and thanks for your time in advance.
I'm currently extending our C# test framework to monitor the memory consumed by our application. The intention is that a bug is potentially raised if memory consumption jumps significantly on a new build, as resources are always tight.
I'm using System.Diagnostics.Process.GetProcessesByName and then checking the PrivateMemorySize64 value.
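For reference, a minimal sketch of that measurement, assuming a target process named "MyApp" (a placeholder name for this example); note that the API is Process.GetProcessesByName, which returns an array of matches.

```csharp
using System;
using System.Diagnostics;

class MemoryProbe
{
    static void Main()
    {
        // "MyApp" is a placeholder process name for this example.
        Process[] candidates = Process.GetProcessesByName("MyApp");
        if (candidates.Length == 0)
        {
            Console.WriteLine("Process not found.");
            return;
        }

        Process target = candidates[0];
        target.Refresh();  // make sure cached process information is up to date
        Console.WriteLine($"Private bytes: {target.PrivateMemorySize64 / (1024 * 1024)} MB");
    }
}
```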
While developing the new test, and using the same build of the application for consistency, I've seen it consume differing amounts of memory despite supposedly executing exactly the same code.
So my question is: once an application has launched, fully loaded and, in this case, sitting in its idle state, hence in an identical state from run to run, can I expect the private bytes consumed to be identical from run to run?
I need to confirm that I can expect memory usage to be consistent, as any degree of variance starts to reduce the effectiveness of the test: a degree of tolerance would need to be introduced, something I'd like to avoid.
So...
1) Should the memory usage be 100% consistent, presuming the application is behaving consistently? This was my expectation.
or
2) Is there any degree of variance in the private byte usage reported by Windows, or in the memory it allocates when requested by an app?
Currently, if the answer is that memory consumed should be consistent, as I was expecting, then the issue lies in our app actually requesting a differing amount of memory.
Many thanks
H
Almost everything in .NET uses the runtime's garbage collector, and when exactly it runs and how much memory it frees depends on a lot of factors, many of which are out of your hands. For example, when another program needs a lot of memory, and you have a lot of collectable memory at hand, the GC might decide to free it now, whereas when your program is the only one running, the GC heuristics might decide it's more efficient to let collectable memory accumulate a bit longer. So, short answer: No, memory usage is not going to be 100% consistent.
OTOH, if you have really big differences between runs (say, a few megabytes on one run vs. half a gigabyte on another), you should get suspicious.
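As a rough illustration of how much any memory figure depends on when the GC last ran, here is a small C# sketch. It samples the managed heap via GC.GetTotalMemory rather than private bytes, so it is not the same counter the question measures, but it shows why a snapshot taken without controlling for collections is inherently noisy.

```csharp
using System;

class GcVarianceDemo
{
    static void Main()
    {
        // Create a pile of short-lived garbage.
        for (int i = 0; i < 100_000; i++)
        {
            _ = new byte[1024];
        }

        // Managed heap size without forcing a collection: includes whatever
        // dead objects the GC simply hasn't gotten around to yet.
        long before = GC.GetTotalMemory(forceFullCollection: false);

        // The same measurement after a full collection: usually much smaller,
        // and much more repeatable between runs.
        long after = GC.GetTotalMemory(forceFullCollection: true);

        Console.WriteLine($"Without collection:    {before / 1024} KB");
        Console.WriteLine($"After full collection: {after / 1024} KB");
    }
}
```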
If the program is deterministic (like all embedded programs should be), then yes. In an OS environment you are very unlikely to get the same figures due to memory fragmentation and numerous other factors.
Update:
Just noticed this is a C# app, so no, but the numbers should be relatively close (+/- 10% or less).

How to use AQTime's memory allocation profiler in a program that uses a large amount of memory?

I'm finding AQTime hard to use because it interferes with the original program too much. If I have a program that uses, for example, 300 MB of RAM, I can use AQTime's allocation profiler without a problem and find out where most of the memory is being used. However, I notice that under AQTime the original program uses more like 1 GB while it's being profiled.
Right now I'm trying to reduce memory usage in a program which is using 1.4 GB of memory. If I run it under AQTime, the original program uses all of the 2 GB address space and crashes. I could of course invent a smaller set of test data and estimate how memory usage will scale with the full data set, but the reason I'm using a profiler in the first place is to avoid that sort of guesswork.
I already have AQTime set to 'Collect stack information - None' and all the check boxes to do with checking memory integrity are switched off, and I've tried restricting the area being profiled to just a few classes but this doesn't seem to improve anything. Is there a way to use AQTime that produces a smaller overhead? Or failing that, what other approaches are there to get a good idea of the memory being used?
The app is written in Delphi 2010 and I'm using AQTime 6.
NB: On top of the increased memory usage, running under AQTime slows the app down an awful lot, making the whole exercise not just impractical but nearly impossible :-P
AFAIK the allocation profiler will track memory block allocations regardless of profiling areas; profiling areas are used to track class instantiation. Of course, memory-profiling an application that allocates a large amount of memory is an issue. You could try the LARGE_ADDRESS_AWARE flag and the /3GB boot switch, or use a 64-bit system (as long as you have at least 4 GB of memory, or more). You can also take a snapshot of the application state before it crashes to see where the memory is allocated. Profiling takes time anyway; you may have to let it run for a while.

"Lucene.net-2.3.2.1” memory leakages problem

I am using Lucene.Net-2.3.2.1 in my project. My project also supports a multithreaded environment, and the Lucene indexing service runs as a Windows service. The problem is that while the service is running, its memory usage gradually increases: after some hours it shows 150 MB in Task Manager whereas it started at 13 MB, so memory keeps growing. Using the dotTrace profiler I identified some methods and objects in Lucene.Net that account for the increase. The call tree in one of my dotTrace snapshots shows that memory held by Index()- and Segment()-related functions keeps growing for as long as the service runs, so eventually it will crash the system.
Please help me work out how to recover my application from this memory leak.
Increasing memory usage doesn't necessarily imply a memory leak. Memory leaks in .NET are not that common, but there are a few things you should check:
Events. Make sure that all event listeners are detached from the publisher as soon as they are no longer used; failing to do so will keep the listeners alive for as long as the publisher is alive (a short sketch follows at the end of this answer).
If the code uses any disposable resource that holds handles to native code, be sure to call Dispose on these as soon as they are no longer needed.
A blocking finalizer will prevent other finalizable objects from being garbage collected, so make sure finalizers don't do any more than they have to (and in many cases they are probably not needed anyway).
If you want to examine which objects are being kept alive, as well as why they are not collected, I recommend using WinDbg + SOS.
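
To illustrate the first check, here is a small, self-contained C# sketch (the Publisher and Listener names are invented for this example) showing how an undetached event handler keeps a listener reachable, and how unsubscribing fixes it.

```csharp
using System;

class Publisher
{
    public event EventHandler Updated;
    public void RaiseUpdated() => Updated?.Invoke(this, EventArgs.Empty);
}

class Listener
{
    private readonly byte[] _buffer = new byte[1024 * 1024];  // pretend this is expensive state

    public void Subscribe(Publisher p)   => p.Updated += OnUpdated;
    public void Unsubscribe(Publisher p) => p.Updated -= OnUpdated;

    private void OnUpdated(object sender, EventArgs e) { /* react to the event */ }
}

class EventLeakDemo
{
    static void Main()
    {
        var publisher = new Publisher();  // long-lived, e.g. owned by the service itself

        for (int i = 0; i < 100; i++)
        {
            var listener = new Listener();
            listener.Subscribe(publisher);
            // Without the next line, the publisher's event delegate still references
            // the listener, so all 100 listeners (and their buffers) stay alive
            // for as long as the publisher does.
            listener.Unsubscribe(publisher);
        }

        GC.Collect();
        Console.WriteLine("Listeners that unsubscribed are now collectable.");
    }
}
```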
