I am learning about isolates in Dart/Flutter. Almost every document says that isolates don't share memory with each other, but none of them say what the maximum memory of an isolate is. Is it limited by the app's maximum memory, or does each isolate have a separate memory space that doesn't depend on the total memory allocated to the application?
Thanks for any help.
Update
I found information at Dart Glossary: "Dart supports concurrent execution by way of isolates, which you can think of as processes without the overhead. Each isolate has its own memory and code, which can’t be affected by any other isolate"
See https://github.com/dart-lang/sdk/issues/34886
You can use --old_gen_heap_size to set the memory limit in megabytes.
You can specify such options by setting the DART_VM_OPTIONS environment variable, for example
DART_VM_OPTIONS="--old_gen_heap_size=2048 --observe"
The memory limit seems to apply to the whole VM instance, not per isolate.
To list all available options, use
dart --help --verbose
or
dart -h -v
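Putting the pieces above together, a minimal invocation sketch (the flag name comes from the linked issue; whether it is honored, and whether VM flags can be passed directly, varies by Dart SDK version):

```shell
# Limit the VM's old-generation heap to 2048 MB via the environment
# variable read by the standalone Dart VM at startup:
DART_VM_OPTIONS="--old_gen_heap_size=2048" dart run bin/main.dart

# Some SDK versions also accept VM flags directly on the command line:
dart --old_gen_heap_size=2048 bin/main.dart
```

Note again that this bounds the whole VM instance, so all isolates in that process share the budget.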
Related
I created my Docker image (Python Flask).
How can I calculate what limits to set for memory and CPU?
Are there tools that run performance tests on Docker with different limits and then recommend the best limits to set?
With an application already running inside a container, you can use docker stats to see its current CPU and memory utilization. While there is little harm in setting CPU limits too low (it will just slow the app down, but it will still run), be careful to keep memory limits above the worst-case scenario. When an app attempts to exceed its memory limit, it will be killed and usually restarted by a restart policy or orchestration tool. If the limit is set too low, you may find your app in a restart loop.
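As a concrete sketch of that workflow (the image name here is a placeholder; the limit values are examples, not recommendations):

```shell
# One-shot snapshot of CPU and memory usage of running containers:
docker stats --no-stream

# Then run with limits set somewhat above the worst case you observed,
# e.g. a hard memory cap of 512 MB and at most 1.5 CPUs:
docker run --memory=512m --cpus=1.5 my-flask-image
```

If the container is OOM-killed under these limits, raise --memory rather than relying on restarts.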
This is more about the consumption of your specific Flask application; you can probably use the resource module in Python to measure it.
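For example, a minimal sketch of reading the process's peak memory usage with the resource module (Unix-only; the units of ru_maxrss differ by platform):

```python
import resource

# ru_maxrss is the peak resident set size of this process so far:
# kilobytes on Linux, bytes on macOS.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"peak RSS: {usage.ru_maxrss}")
```

Exercising your app's heaviest endpoint and then reading this value gives a rough worst case to size the container's memory limit against.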
IntelliJ IDEA memory settings can be customized by editing the -Xmx option in idea64.vmoptions. Say I set
-Xmx2g
and then open 5 different projects at the same time. Can IDEA eventually consume up to ~10 GB of memory, or will it be limited to 2 GB plus some overhead?
In the memory usage monitor in the lower right corner of the IDEA window, I see a different allocated-memory value for each project. On the other hand, these values seem to correlate over time.
It's the same single process running in the same JVM, so the limit applies across all the windows/projects.
I have an MPI/Pthreads program in which each MPI process runs on a separate compute node. Within each MPI process, a certain number of Pthreads (1-8) are launched. However, no matter how many Pthreads are launched within an MPI process, the overall performance is pretty much the same. I suspect all the Pthreads are running on the same CPU core. How can I assign threads to different CPU cores?
Each compute node has 8 cores (two quad-core Nehalem processors).
Open MPI 1.4
Linux x86_64
Questions like this are often dependent on the problem at hand. Most likely, you are running into a resource lock issue (where the threads are competing for a lock) -- this would look like only one core was doing any work, because only one thread can (effectively) do any work at any given time.
Setting CPU affinity for a particular thread is not a good solution. You should allow the OS scheduler to determine the optimal physical core assignment for a given pthread.
Look at your code and try to figure out where you are locking where you shouldn't be, or if you've come up with a correct parallel solution to the problem at hand. You should also test a version of the program using only pthreads (not MPI) and see if scaling is achieved.
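The scaling test suggested above can be sketched as follows (a Python stand-in for the pthreads-only C version; the task and problem sizes are arbitrary). Processes are used rather than threads so the workers can genuinely run in parallel:

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """A CPU-bound task with no shared state and no locks."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(workers: int, chunks: int = 8, n: int = 200_000) -> float:
    """Run `chunks` tasks across `workers` workers; return wall time."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(burn, [n] * chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    t1 = timed(1)
    tn = timed(os.cpu_count() or 1)
    print(f"1 worker: {t1:.2f}s, {os.cpu_count()} workers: {tn:.2f}s")
```

If the N-worker run is not noticeably faster than the 1-worker run, the work is being serialized somewhere: a shared lock, or all workers landing on one core.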
I am writing a concurrent program and I need to know the number of cores in the system, so the program knows how many processes to spawn.
Is there a command to get this from Erlang code?
Thanks.
You can use
erlang:system_info(logical_processors_available)
to get the number of cores that can be used by the erlang runtime system.
There is also:
erlang:system_info(schedulers_online)
which tells you how many scheduler threads are actually running.
To get the number of available cores, use the logical_processors flag to erlang:system_info/1:
1> erlang:system_info(logical_processors).
8
There are two companion flags to this one: logical_processors_online shows how many are in use, and logical_processors_available shows how many are available (it returns unknown when all available logical processors are online).
To know how to parallelize your code, you should rely on schedulers_online which will return the number of actual Erlang schedulers that are available in your current VM instance:
1> erlang:system_info(schedulers_online).
8
Note however that parallelizing on this value alone might not be enough. Sometimes you have other processes running that need some CPU time and sometimes your algorithm would benefit from even more parallelism (waiting on IO for example). A rule of thumb is to use the value obtained from schedulers_online as a multiplier for parallelism, but always test with different multiples to see what works best for your application.
How this information is exposed will be very operating system specific (unless you happen to be writing an operating system of course).
You didn't say what operating system you're working on. In the case of Linux, you can get the data from /proc/cpuinfo, however there are subtleties with the meaning of hyperthreading and the issue of multiple cores on the same die using a shared L2 cache (effectively you've got a NUMA architecture).
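For comparison, here is how the same question is answered portably from Python (a sketch; sched_getaffinity is Linux-specific, which is why it is guarded):

```python
import os

# Logical CPUs the OS reports (counts hyperthreads as CPUs):
print(os.cpu_count())

# On Linux, the set of CPUs *this process* may actually run on can be
# smaller (cgroups, taskset/affinity); sched_getaffinity reflects that:
if hasattr(os, "sched_getaffinity"):
    print(len(os.sched_getaffinity(0)))
```

The distinction mirrors the one above: the hardware may expose more logical processors than your process is actually allowed to use.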
I am running ejabberd 2.1.10 server on Linux (Erlang R14B 03).
I am creating XMPP connections using a tool in batches and sending message randomly.
ejabberd is accepting most of the connections.
Even though connections are increasing continuously,
the value of erlang:memory(total) is observed to stay within a range.
But if I check the memory usage of the ejabberd process using the top command, I can see that it is increasing continuously.
I can see that the difference between the value of erlang:memory(total) and the memory usage shown by top keeps growing.
Please let me know the reason for this difference.
Is it because of a memory leak? Is there any way I can debug this issue?
If it is not a memory leak, what is the additional memory (the difference between erlang:memory(total) and top) used for?
A memory leak in either the Erlang VM itself or in the non-Erlang parts of ejabberd would have the effect you describe.
ejabberd contains some NIFs - there are 10 ".c" files in ejabberd-2.1.10.
Was your ejabberd configured with "--enable-nif"?
If so, try comparing with a version built using "--disable-nif", to see if it has different memory usage behaviour.
Other possibilities for debugging include using Valgrind for detecting and locating the leak. (I haven't tried using it on the Erlang VM; there may be a number of false positives, but with a bit of luck the leak will stand out, either by size or by source.)
A final note: the Erlang process's heap may have become fragmented. The gaps among allocations count towards the OS process's size, and it doesn't look like they are included in erlang:memory(total).