How to profile a Dart app?

I'm trying the demo of start, which is a pretty simple web site built on Dart.
When I run it, the initial memory usage is 10 MB, but when I visit the home page and refresh it again and again, the memory grows quickly until it reaches 78 MB and never comes back down.
I want to find out what is using the memory and whether there is a memory leak, but I don't know how to do it. Is there any tool that can help me profile a Dart app?

It has already been pointed out in the comments that there are ways to get a CPU profile from the VM on Linux (https://code.google.com/p/dart/wiki/Profiling).
As far as I understand, what you are really looking for is a heap or memory profile. While it is possible to print an object histogram when the program terminates (see below), we do not have any convenient way to get an object histogram while your server is running. We do hope to be able to add this capability over the next few months.
To print the object histogram when the Dart script exits, pass the flag --print_object_histogram to the Dart VM. This will print the averages of the live objects at the end of each major GC over the life of the program. This can be fine for getting a quick overview, but it is not ideal for tracking down and identifying real problems.
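For example, assuming the site is started directly with the standalone Dart VM and the entry point is a file such as server.dart (the file name here is only a placeholder), the invocation would look something like

dart --print_object_histogram server.dart

The histogram is then printed when the VM exits; whether the flag is accepted depends on the VM version and build.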

Related

Are there memory limitations when outputting to a ScrolledText widget?

I am fairly new to Python and to GUI programming, and have been learning the Tkinter package to further my development.
I have written a simple data logger that sends a command to a device via a serial or TCP connection, and then reads the response back, displaying it in a ScrolledText widget. In addition, I have a button that allows me to save the contents of the ScrolledText widget into a text file.
I was testing my software by sending a looped command, with a 0.5 second delay between commands. The aim was to test the durability of the logger so it may later be deployed to automatically monitor and log the output of the devices it is connected to.
After 30-40 minutes, I find that the program crashes on my Windows 7 system, and I suspect that it may be caused by a memory issue. The crash is a rather nondescript "pythonw.exe has stopped working" message. When I monitor the process using Windows Task Manager, the memory used by pythonw.exe increases each time a response is read, and eventually reaches nearly 2 GB.
It may be that I need to rethink my logic and have the software log to the disk in 'real time', while the ScrolledText box overwrites the oldest data after x-number of lines... However, for my own education, I was wondering if there was a better way to manage the memory used by ScrolledText?
Thanks in advance!
In general, no, there are no memory limitations with writing to a scrolled text widget. Internally, the text is stored in an efficient b-tree (efficient, unless all the data is a single line, since the b-tree leaves are lines). There might be a limit of some sort, but it would likely be in the millions of lines or so.

Getting details on application RAM usage

According to Process Explorer / Task Manager, my application has a private working set size of around 190 MB even while not performing any specific task, which is far more than I would expect it to need. Using FastMM I have validated that none of this is an actual memory leak in the traditional sense.
I have also read the related discussion going on here, which suggests using FastMM's LogMemoryManagerStateToFile();. However, the generated output states "21299K Allocated, 49086K Overhead", which combined (about 70 MB) is far less than what Task Manager suggests.
Is there any way I can find out what causes the huge difference? Might 190 MB even be an expected value for an application with ~15 forms? Also, is having 70% overhead "bad", and is there any way of reducing that number?
You can use VMMap from Sysinternals to get a complete overview of the virtual memory address space your process is using. This should allow you to work out the difference you are seeing between Task Manager and FastMM.
I doubt that FastMM reports, or even can report, sections like Mapped File, Shareable, and Page Table, while those sections do occupy private working set.
DDDebug can give you insight into memory allocation by objects in your app, and you can monitor changes live.
Test the trial version or check out the introductory video on the website.

Process memory management queries

Here is a picture summarizing my understanding of the process memory layout as organized by the kernel. I would like to understand:
1) When do segmentation and paging take place? During compilation, or right after the program is executed?
2) At any given instant, is it by any means possible to access the physical address of a given entity (variable, object) in my process?
I found little information in the "Understanding the Kernel" book, or maybe the explanation is beyond my understanding; I'm not sure. Maybe someone can help me with this.
@Keen Learner,
1) Segmentation and paging take place right after the program is executed, not during compilation. A segmentation fault occurs only when some part of the code tries to access protected memory, or memory that is not part of its process/virtual memory block. Paging happens because we cannot keep all of a process's pages in main memory at the same time; the appropriate page is brought in or swapped out as needed during execution of the process.
2) As far as I know, there is no mechanism/means to access the physical address of a variable, because everything we work with is a virtual address, and converting it to a physical address is the job of the MMU.
Hope I have cleared your doubts :-)

Memory not freed in MATLAB?

I am running a script that animates a plot (a simulation of water flow). After a while, I kill the loop by pressing Ctrl-C.
After doing this several times I get the error:
??? Error: Out of memory.
And after I start receiving that error, every call to my script will generate it.
Now, it happens before anything inside the function that I am calling is executed, i.e., even if I add the line a = 1 as the first line of the function, I still get the error and no printout, so the code inside the function doesn't even get executed.
What could be causing this?
There are several possible reasons.
Most likely your script creates some variables that are filling up the memory. Run
clear all
before restarting the script, so that all the variables are cleared, or change your script to a function (which will automatically erase all temporary variables after the function returns). Note that clear all also clears all loaded functions, so your next execution of the script has to load them again, which will slow it down by a (usually tiny) bit. It may be sufficient to call clear only.
Maybe you're animating by plotting several plots over one another (without clearing the axes first). Thus you might run out of Java heap space. You can close the open figures individually, or run
close all
You can also increase the amount of Java memory MATLAB uses on your system (see instructions here) - note that the limit is generally rather low, annoyingly so if you want to keep tons of figures.
Especially if you're running an older version of Windows, you may get your memory fragmented. Matlab needs contiguous blocks of free space to assign variables. To check for memory fragmentation, run
memory
and look at the number for the maximum possible variable size. If this is much smaller than the size available for all arrays, it's time to restart Matlab (I guess if you use a Windows version that would require a reboot to fix the problem, you may want to look into getting a new computer with Win7).
You can also try the pack command, e.g.:
close all;
clear all;
pack;
to clear memory. Although, after a recent MathWorks seminar I asked one of the MathWorks gurus, and he also confirmed @Andrew Janke's comment regarding memory fragmentation. Usually quitting and restarting MATLAB sorts this out for me (on XP).
clear all and close all are the straightforward ways to free memory, and they are known to all non-beginners.
The main issue is that after you have done some large data processing and cleared/closed everything off, there is still significant memory used by MATLAB.
This is currently a major problem with MATLAB, and to my knowledge there is no solution other than restarting MATLAB, which is a pity.
It sounds like you are not clearing any of your variables. You should either provide a way to stop the loop without hitting Ctrl-C (write a simple GUI with a "Stop" button and your display) and then clean up your workspace in the script, or clear your variables at the start of the script.
Are you intentionally storing all the data (or some large component) on each iteration of your loop?

How to Test for Memory Leaks?

We have an application with hundreds of possible user actions, and we are thinking about how to enhance memory leak testing.
Currently, here's the way it happens: when manually testing the software, if it appears that our application consumes too much memory, we use a memory tool, find the cause, and fix it. It's a rather slow and inefficient process: the problems are discovered late, and it relies on the good will of one developer.
How can we improve that?
Internally check that some actions (like "close file") do recover some memory and log it?
Assert on memory state inside our unit tests (but it seems this would be a tedious task) ?
Manually regularly check it from time to time?
Include that check each time a new user story is implemented?
Which language?
I'd use a tool such as Valgrind, try to fully exercise the program and see what it reports.
First line of defense:
a checklist of common memory-allocation-related errors for developers
coding guidelines
Second line of defense:
code reviews
static code analysis (as part of the build process)
memory profiling tools
If you work with an unmanaged language (like C/C++), you can efficiently discover most memory leaks by hijacking the memory management functions. For example, you can track all memory allocations/deallocations.
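As a rough illustration (a minimal sketch of that approach, not code from the answer; the counter and the LeakReport helper are just for demonstration), in C++ you can replace the global operator new/delete to keep a running balance of live allocations and report it at shutdown:

// Minimal allocation-tracking sketch (illustrative only).
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

// Running balance of allocations that have not been freed yet.
static std::atomic<long> g_live_allocations{0};

void* operator new(std::size_t size) {
    void* p = std::malloc(size);
    if (p == nullptr) throw std::bad_alloc();
    ++g_live_allocations;
    return p;
}

void operator delete(void* p) noexcept {
    if (p != nullptr) {
        --g_live_allocations;
        std::free(p);
    }
}

// Printed when the program exits; a value other than 0 hints at leaked allocations.
struct LeakReport {
    ~LeakReport() {
        std::printf("live allocations at exit: %ld\n", g_live_allocations.load());
    }
} g_leak_report;

int main() {
    int* leaked = new int(42);   // intentionally never deleted
    (void)leaked;
    int* freed = new int(7);
    delete freed;
    return 0;                    // the report prints 1 because of the leaked int
}

A real implementation would also record the call site of each allocation so leaks can be attributed, which is essentially what the memory profiling tools mentioned above automate for you.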
It seems to me that the core of the problem is not so much finding memory leaks as knowing when to test for them. You say you have lots of user actions, but you don't say what sequences of user actions are meaningful. If you can generate meaningful sequences at random, I'd argue hard for random testing. On random tests you would measure
Code coverage (with gcov or valgrind)
Memory usage (with valgrind)
Coverage of the user actions themselves
By "coverage of user actions" I mean statements like the following:
For every pair of actions A and B, if there is a meaningful sequence of actions in which A is immediately followed by B, then we have tested such a sequence.
If that's not true, then you can ask for what fraction of pairs A and B it is true.
If you have the CPU cycles to afford it, you would probably also benefit from running valgrind or another memory-checking tool either before every commit to your source-code repository or during a nightly build.
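For example (a hedged sketch; the binary name is just a placeholder), a nightly-build step could run the test suite under Valgrind's Memcheck and fail the build when errors are reported:

valgrind --leak-check=full --error-exitcode=1 ./your_test_binary

--error-exitcode makes Valgrind return a non-zero exit status when it finds errors, which most build systems treat as a failure.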
Automate!
In my company we have programmed an endless action path for our application. The Java garbage collector should clean up all unused maps, lists, and the like. So we start the application on the endless action path and watch whether the memory usage keeps growing.
To check which fields are not being released, you can use JProfiler for Java.
Replace new and delete with your custom versions and log every act of allocation/deallocation.
Speaking generally (not about testing, but rather about fighting the issue at its origin), smart pointers help to avoid this problem. Fortunately, the C++11 standard provides new, convenient smart pointer classes (shared_ptr, unique_ptr).
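A small sketch of the difference (an illustrative example, not taken from the answer; the Sample type and process() function are hypothetical): the raw-pointer version leaks whenever an exception skips the delete, while the smart pointer version releases the memory on every path:

#include <memory>
#include <stdexcept>
#include <vector>

struct Sample { std::vector<double> readings; };

void process(bool fail) {
    if (fail) throw std::runtime_error("parse error");
}

// Leaks if process() throws: the delete is never reached.
void leaky(bool fail) {
    Sample* s = new Sample();
    process(fail);
    delete s;
}

// No leak on any path: the smart pointers free the objects when they go out of scope.
void safe(bool fail) {
    std::unique_ptr<Sample> s(new Sample());                      // sole owner (C++11)
    std::shared_ptr<Sample> shared = std::make_shared<Sample>();  // shared ownership
    s->readings.push_back(1.0);
    shared->readings.push_back(2.0);
    process(fail);
    // no explicit delete needed
}

int main() {
    try { leaky(true); } catch (const std::exception&) {}  // leaks one Sample
    try { safe(true);  } catch (const std::exception&) {}  // leaks nothing
    return 0;
}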
