Does a MonoDroid application drain battery faster? - xamarin.android

A MonoDroid application runs on the Mono runtime, but the Dalvik VM is also loaded, correct?
And a MonoDroid application uses the Java libraries through C# wrapper libraries, so calling a method requires two calls?
App -> C# -> Java
Does a MonoDroid application therefore drain more battery?

To a large extent, I think this borders on micro-optimization. Yes, there is some additional overhead in method calls due to JNI, but this should be fairly trivial in the grand scheme of things (as opposed to, say, XML processing, or image manipulation, or...). Furthermore, all of RAM will need to be powered anyway (that's how DRAM works, and I doubt they're using SRAM for RAM in these devices), so the fact that two VMs are loaded into memory shouldn't cause any additional battery use either.
CPU time will be a determining factor, but I highly doubt that JNI will be a significant contributor (lacking profiling data that suggests otherwise).
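For a sense of where that JNI overhead actually lives: each bridged Android API call made through a Mono for Android wrapper ultimately boils down to an ordinary JNI invocation. A minimal C-level sketch, for illustration only (the env, text_view and text handles are assumed to come from elsewhere, and in practice the generated wrappers do this for you):

    /* Illustrative only: roughly what one bridged call, e.g. TextView.setText(),
     * costs at the JNI level. */
    #include <jni.h>

    void set_text(JNIEnv *env, jobject text_view, jstring text)
    {
        jclass cls = (*env)->GetObjectClass(env, text_view);
        jmethodID mid = (*env)->GetMethodID(env, cls, "setText",
                                            "(Ljava/lang/CharSequence;)V");
        /* the real work happens inside Dalvik; the JNI hop itself adds only a
         * small, roughly constant per-call overhead */
        (*env)->CallVoidMethod(env, text_view, mid, text);
    }

The class and method lookups are typically cached, so the recurring cost is just the call hop, which is why it rarely shows up next to real work like XML parsing or image manipulation.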

Related

Windows to embedded port: data and code memory size

I am in the process of porting a Windows 7 library to an embedded platform. In order to do so, my employer has asked me for the amount of memory (and CPU, but let us concentrate on the memory for now) that my system will need once ported, so he can size the board to my needs.
I had a look on the Internet and there does not seem to be much information about this question, hence my questions:
1. In order to get a rough idea of the memory footprint of the code in flash memory (code only, without memory for data), I read on the Internet that I should sum the sizes of all the DLLs I use. It seems that all compilers and platforms give a different size for the code footprint, but overall the size of the code (without data) is often very close. Can you confirm?
2. In order to deal with the memory required by the data only (heap + stack but no code), I had a look at Task Manager (and Process Explorer). It seems the overall amount of data I use is given by the 'peak working set'. I have a few questions about it though:
2.a. Does the 'working set' include the heap + stack memory or does it correspond to the heap only?
2.b. Does the 'working set' include the size of the code as well (as I am on Windows 7, the code is also stored in RAM and not in flash as on embedded systems), or does it only correspond to the data?
2.c. It seems the 'peak working set' reflects the maximum amount of physical memory that was actually in RAM from the time the program was started, but it does not reflect the size the program could take afterwards (if I happen to allocate memory at runtime - which would be bad ;) - the peak value would go on increasing). Do you confirm?
2.d. Hence, do you also confirm that if I do not allocate memory at runtime, the 'peak working set' should roughly be the maximum amount of RAM my embedded system will need, give or take some difference due to the difference in systems technology?
Thanks,
Antoine.
Unless you are intending to run your application on Windows Embedded, looking at the code and data usage in Windows is not going to be much of an indicator of anything useful!
1) DLLs are libraries - not all the code within them will be utilised by your code. Most embedded systems are statically linked, and the linker will link only the modules that are actually referenced by your code. So taking the sum of the DLL dependencies is likely to lead to a gross overestimation of the memory requirement.
2) Windows memory management is profligate with memory use - because it can be, and doing so generally improves the performance of typical desktop systems. For example, a thread stack in Windows is typically of the order of 2MB - you may seldom use that much, but Windows gives it to you in any case because it can, and doing so errs on the side of safety. A thread stack in an embedded system will typically range from a few tens of bytes to a few tens of kilobytes - it depends on your application.
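To make that contrast concrete, here is a hedged sketch of how an embedded stack gets sized explicitly. FreeRTOS is used only as an example RTOS; the task and its 512-word stack depth are invented for illustration:

    /* Embedded thread stacks are sized explicitly by the developer, in contrast
     * to the ~2MB Windows hands out by default. */
    #include "FreeRTOS.h"
    #include "task.h"

    static void sensor_task(void *params)
    {
        (void)params;
        for (;;) {
            /* poll a sensor, push a reading somewhere, sleep until next period */
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    void start_sensor_task(void)
    {
        /* 512 words of stack (2KB on a 32-bit MCU), chosen up front and usually
         * verified later with uxTaskGetStackHighWaterMark() */
        xTaskCreate(sensor_task, "sensor", 512, NULL, tskIDLE_PRIORITY + 1, NULL);
    }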
Windows Task Manager shows what Windows allocates to your process; that may not relate to what your process needs. Also, your application is using Windows services - all the memory used for kernel and device services will not show up as part of your process, but your embedded system may still need those.
If you do use your Windows prototype code to assess the embedded system requirements, then your best place to start is by getting the linker to generate a map file, which will give a detailed description of memory usage in terms of statically allocated data and code size.
Code size depends not only on the performance of the compiler, but also on the efficiency of the instruction set. Some architectures achieve higher code density than others. Windows application code size is never a good indicator of embedded code size because its execution environment is likely to be so much different. For example, a pre-emptive multitasking RTOS kernel on a 32-bit ARM can be implemented in less than 10KB of code, a file system in perhaps another 10, a network stack in anything from 10 to 30KB, and USB in another 10. As you can see, this is a different world to desktop code.
Data memory usage is perhaps more easily determined; but you do that through analysis of your application rather than by observing what Windows does. There is the data your application instantiates directly, and then there is data instantiated by libraries and device drivers you might call - in Windows the latter is likely to be relatively large and out of your control. Typical embedded systems libraries for things such as network stacks, USB, file systems, etc. are far smaller and far more deterministic in both performance and size.
Your better bet is to describe your application in terms of its general purpose, performance requirements, real-time constraints, and its hardware requirements (display, networking, I/O, mass storage etc.), and then look at comparable solutions or at the libraries you will need to implement your solution; most embedded systems are "bare board" and do not have the services you find in Windows unless you write them or use third-party solutions - Windows is seldom a comparable solution to an embedded system.
If it is just a library rather than an application, then build it for a likely target using a Windows-hosted GCC cross-compiler and see how big it ends up. You don't need hardware for that, or even to spend any money.
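Following the map file suggestion above, here is a minimal sketch of the kind of experiment that gives you real numbers. The arm-none-eabi toolchain name and the symbols below are assumptions for illustration, not part of the original question:

    /* Build with a cross-compiler and ask the linker for a map file, e.g.:
     *   arm-none-eabi-gcc -Os -ffunction-sections -Wl,--gc-sections
     *       -Wl,-Map=app.map main.c -o app.elf
     * The map file (or `arm-none-eabi-size app.elf`) then reports the statically
     * allocated sections:
     *   .text/.rodata - code and constants (flash)
     *   .data         - initialised data   (flash image + RAM copy)
     *   .bss          - zero-initialised   (RAM only)
     */
    #include <stdint.h>

    static const uint8_t lookup_table[256] = { 1 }; /* .rodata: counts against flash */
    static uint8_t       rx_buffer[1024];           /* .bss:    counts against RAM   */
    static uint32_t      boot_count = 1;            /* .data:   counts against both  */

    int main(void)
    {
        rx_buffer[0] = lookup_table[boot_count & 0xFFu];
        return rx_buffer[0];
    }

Heap and stack still have to be budgeted separately (they are usually fixed reservations in the linker script), which is where analysing the application, as the answer suggests, comes in.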

How to cap memory usage of Haskell threads

In a Haskell program compiled with GHC, is it possible to programmatically guard against excessive memory usage? That is, have it notify the program when memory usage reaches a specified limit, preferably indicating the offending thread.
For example, suppose I want to write a server, hosting a scripting language interpreter, that users can connect to. It's Turing-complete, so programs could theoretically use unlimited memory or time. Suppose each client is handled with a separate thread. If a client writes an infinite loop that consumes memory very quickly, I want to ensure that the thread consumes no more than, say, 1 MB of memory, before being alerted with an exception. I do not want other users to be affected when that happens.
This is probably possible using separate processes and ulimit, but:
I would rather keep it in one program, to avoid the complexity of inter-process communication.
I need to support both Linux and Windows, so I would prefer to keep it platform-agnostic if possible.
Edward Z. Yang and David Mazières have developed an extension to GHC that supports dynamic resource limits, and discuss it at http://ezyang.com/rlimits.html. They also provide a version of GHC 7.8 that supports this.
Unfortunately, their work was not included in GHC upstream.
This may not be exactly what you want, but as documented here you have a GHC compile option: -Ksize. Update: oops, sorry, -K is for stack overflows. Still, you can check that link.
In your example, you may need to modify the source of the scripting language interpreter and make some tweaks to its memory management module(s). Of course, if it has some managed memory allocation features, the interpreter can complain about excessive use of its memory quota through an API callback to your host application.
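Since that advice is generic, here is a hedged sketch of the allocator-hook idea in C, the shape most embeddable interpreters expose (Lua's lua_Alloc is similar). The callback type, the quota_state struct and the limit are all invented for illustration; a Haskell-hosted interpreter would need the equivalent on its own allocation path:

    /* Host-supplied allocator that enforces a per-interpreter memory quota and
     * notifies the host instead of growing past it. In this sketch callers pass
     * osize == 0 for fresh allocations and nsize == 0 for frees. */
    #include <stddef.h>
    #include <stdlib.h>

    typedef void (*quota_exceeded_cb)(void *host_ctx, size_t used, size_t limit);

    typedef struct {
        size_t            used;
        size_t            limit;        /* e.g. 1 MB per client */
        quota_exceeded_cb on_exceeded;  /* host's "complain" callback */
        void             *host_ctx;
    } quota_state;

    void *quota_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        quota_state *q = ud;

        if (nsize == 0) {                         /* free */
            q->used -= osize;
            free(ptr);
            return NULL;
        }
        if (q->used - osize + nsize > q->limit) { /* would exceed the quota */
            q->on_exceeded(q->host_ctx, q->used, q->limit);
            return NULL;                          /* interpreter sees allocation failure */
        }
        q->used = q->used - osize + nsize;
        return realloc(ptr, nsize);
    }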

Cachegrind under Xen

I have an application written in C++ that someone else has written in a way that's supposed to take maximal advantage of CPU caches. This application runs on a guest Ubuntu OS that is using paravirtualization. I ran cachegrind and got very low cache miss rates.
Since my OS is virtualized, can I be sure that these values are in fact correct in showing that the CPU cache is being used well by my application?
Cachegrind is a simulator. A real CPU may actually perform differently (e.g. your real CPU may have a different cache hierarchy to cachegrind's, different cache sizes, a different replacement policy, and so forth). You would need to watch the real CPU's performance counters to know for sure how well your program is really performing on real hardware with respect to the cache.
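To make the "simulator" point concrete, here is a toy C example of the kind of access-pattern difference cachegrind's D1 miss rate will surface. The build and valgrind commands in the comment are the standard ones; the array size is arbitrary:

    /* Build and run:
     *   gcc -O2 stride.c -o stride
     *   valgrind --tool=cachegrind ./stride
     * Row-major traversal of a row-major array touches consecutive cache lines;
     * column-major traversal of the same array misses far more often. */
    #include <stdio.h>

    #define N 2048
    static double grid[N][N];

    int main(void)
    {
        double sum = 0.0;

        /* cache-friendly: contiguous accesses, low D1 miss rate */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += grid[i][j];

        /* cache-hostile: stride of N*sizeof(double), far higher miss rate */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += grid[i][j];

        printf("%f\n", sum);
        return 0;
    }

Whichever guest OS you run under, the miss rates cachegrind prints describe its modelled cache, not the physical one; only hardware performance counters tell you about the real machine.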

What is meaning of small footprint in terms of programming?

I have heard that many libraries such as JXTA and PjSIP have small footprints. Does this refer to low resource consumption or to something else?
Footprint designates the size occupied by your application in the computer's RAM.
Footprint can have different meanings when speaking about memory consumption.
In my experience, memory footprint often doesn't include memory allocated on the heap (dynamic memory), or resources loaded from disk, etc. This is because dynamic allocations are not constant and may vary depending on how the application or module is used. When reporting a "low footprint" or "high footprint", a constant or peak measure of the required space is usually wanted.
If, for example, dynamic memory were included in the footprint report of an image editor, the footprint would depend entirely on the size of the image the user loads into the application.
In the context of a third-party library, the library author can optimize the static memory footprint of the library by ensuring that you never link more code into your application binary than absolutely needed. A common method for doing this in, for instance, C is to distribute library functions across separate .c files. This is because most C linkers will link all the code from a .c file into your application, not just the function you call. So if you put a single function in the .c file, that's all the linker will incorporate into your application when you call it. If you put five functions in the .c file, the linker will probably link all of them into your app even if you only use one of them.
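A hedged sketch of that convention; the library name, file names and function below are made up for illustration:

    /* mylib_open.c -- one public function per translation unit, so a traditional
     * static linker that pulls in archive members object-by-object only drags in
     * what the application actually calls.
     *
     * Build the library (standard tools):
     *   cc -c mylib_open.c mylib_read.c mylib_close.c
     *   ar rcs libmylib.a mylib_open.o mylib_read.o mylib_close.o
     * An application that only calls mylib_open() links only mylib_open.o. */
    int mylib_open(const char *name)
    {
        (void)name;   /* real implementation omitted */
        return 0;
    }

Modern toolchains can get a similar effect within a single file using -ffunction-sections plus --gc-sections, but the one-function-per-file layout works with essentially any linker.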
All this being said, the general (academic) definition of footprint includes all kinds of memory/storage aspects.
From the Wikipedia Memory footprint article:
Memory footprint refers to the amount of main memory that a program uses or references while running.
This includes all sorts of active memory regions like code segment containing (mostly) program instructions (and occasionally constants), data segment (both initialized and uninitialized), heap memory, call stack, plus memory required to hold any additional data structures, such as symbol tables, debugging data structures, open files, shared libraries mapped to the current process, etc., that the program ever needs while executing and will be loaded at least once during the entire run.
Generally it's the amount of memory it takes up - the 'footprint' it leaves in memory when running. However, it can also refer to how much space it takes up on your hard drive - although these days that's less of an issue.
If you're writing an app and have memory limitations, consider running a profiler to keep track of how much your program is using.
It does refer to resources. Particularly memory. It requires a smaller amount of memory when running.
yes, resources such as memory or disk
Footprint in computing, i.e. for computer programs or machines, refers to the device memory occupied by a program, process, code, etc.

Memory profiling tool for Delphi?

I set up a project and ran it, and looked at it in Process Explorer, and it turns out it's using about 5x more RAM than I would have guessed, just to start up. Now if my program's going too slowly, I hook it up to a profiler and have it tell me what's using all my cycles. Is there any similar tool I can hook it up to and have it tell me what's using all my RAM?
AQTime can help with that too.
What figures are you using from Process Explorer?
"Memory Use" in Windows is not a straightforward topic. Almost every application incorporates some form of memory manager that attempts to satisfy the memory needs of the application, about which the operating system has surprisingly little knowledge - the OS knows what memory the applications memory manager is using, but that is not always the same thing as what your application is actually using.
A simple way to see this is to watch the memory use reported by Task Manager: start up a Delphi application and note its "memory use" in Task Manager. Then minimise that application to the taskbar and you should see the memory use fall. Even restoring the application again won't bring the memory use back up to the previous level.
In crude terms, when you minimise the application the memory manager takes that as a cue that it should return any unnecessarily "used" memory back to the OS. That is, memory that the memory manager is using to efficiently service your application but which your application itself is not actually using.
The memory manager should also return this memory to the system if the system requires it, due to low memory conditions for example. The minimise to taskbar "trick" is simply a sensible optimisation - since a minimised app is typically not actively in use it's an opportune time to do such "housekeeping" automatically.
(This is not "a bad thing", it's just something to be aware of when considering "memory use")
To make matters worse, in addition to memory that the memory manager is using but which your application is not, there is also the question of "commit charge", which won't necessarily show up as memory used by either your application OR its memory manager!
In a Delphi application (from Delphi 2006 onward) the memory manager is FastMM, and that has a built-in tool that will show you what your application's memory use is like from "the inside" (or at least it used to have such a tool - I've not used it in a while).
IIRC, using it was a question of simply adding a unit to your project and creating a form at runtime (via some "debug only" menu item on the Help menu, or whatever mechanism you choose) that would then give you a "map" of your memory usage.
If you are using a version of Delphi earlier than 2006 you can still use FastMM - it's free and open source. Just download it from SourceForge.
AQTime has been an amazing profiling tool for us. It works amazingly well and has allowed us to pinpoint bottlenecks in places where we never thought there were any, while sometimes showing us there was no bottleneck where we were sure there was one.
It is, along with Finalbuilder, Araxis Merge, and TestComplete, an indispensable tool!
In addition to the others: before I switched to D2006+ (and started using FastMM) I used AQTime's free MemProof. It has some issues, but it is workable.
