How to run a Dart program with more memory?

I'm trying to read a very big file (more than 1 GB) in a Dart program, but it throws an out-of-memory exception.
How do I configure the command line to make it run with more memory, just like
-Xmx1G
in Java?

The VM has a flag to increase the heap size: --old_gen_heap_size. The value is in megabytes, so --old_gen_heap_size=1024 would set it to 1 GB.
This flag is among the developer flags and is not considered stable; it could change or go away.
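For example, a sketch of the invocation (main.dart is a placeholder for your entry point; the value is in megabytes, per the example above):

# allow the old generation to grow to 4 GB
dart --old_gen_heap_size=4096 main.dart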

Related

Rascal MPL: how to increase heap size?

I just stumbled upon this old post that mentions Java heap space and how to change its parameters in Eclipse (in the eclipse.ini file). How do I set these in the new VSCode environment?
The Eclipse environment is quite different from the VSCode environment. Everything ran in a single JVM there, so it was difficult for plugin writers to programmatically increase the heap size; this led to writing manual pages on the topic.
In VSCode we have a JVM for every Rascal process:
for every terminal REPL there is one JVM
for the Rascal LSP server there is one JVM
for the generic DSL-parametrized LSP server there is one JVM
for every deployed DSL server written in Rascal there is one JVM
And the extension code starts these JVMs, so we can control how much memory they receive. The latest release does this by assessing the total available memory on your machine and allocating a generous amount to every process.
So there is no longer a configuration option for the user, as we had to add in the Eclipse situation.

NetLogo v4.0.5 memory issues and I've tried everything I could think of

I am running a NetLogo model in v4.0.5 and the model uses too much memory and then quits. I have tried to change the memory limits per the instructions in the user manual, to no avail; the program doesn't even open when I increase the memory. I can't run it through RNetLogo because it no longer supports version 4. I know this topic has been touched on before, but the previous responses have not resolved my issue. I've changed the output to table instead of spreadsheet as well. I'd like to up the memory to at least 3 GB. Any help would be greatly appreciated!
By default, NetLogo 4.0 (which dates all the way back to 2007!) runs in 32-bit mode on Mac OS X, which limits your heap size to 2G.
You have two choices:
Choice 1: Upgrade to NetLogo 5.0 or later. These versions run in 64-bit mode by default.
Choice 2: Launch NetLogo 4.0 from the command line, instead of using the provided app bundle. Info.plist will be bypassed, so you specify the heap size you want on the command line instead. These commands seem to work on my Mac:
export JAVA_HOME=$(/usr/libexec/java_home -v 1.6)
cd /Applications/NetLogo\ 4.0.5
java -server -d64 -Xmx4096M -jar NetLogo.jar
After launching NetLogo this way, in the System tab of the About NetLogo dialog I see:
Java HotSpot(TM) 64-Bit Server VM 1.6.0_65 (Apple Inc.; 1.6.0_65-b14-466.1-11M4716)
operating system: Mac OS X 10.10.3 (x86_64 processor)
Java heap: used = 8 MB, free = 176 MB, max = 3640 MB
note "64-Bit Server" and the higher-than-default heap max value.
It might also be possible to somehow edit the app bundle to launch in 64-bit mode; I don't know.
Before you add more memory, I'd double-check the program for nested loops. It's so easy in NetLogo to make 4 or 5 layers of nested loops without even realizing it, and this can really slow the program down. Are you sure you've completely optimized your program?

Using massif on a process which is "killed 9"

I'm trying to do memory profiling for a program which consumes too much memory and gets killed by the OS (FreeBSD) with signal 9. That happens on some specific data, so profiling it on another (e.g. smaller) data set would not help much. When the program is killed with signal 9, massif doesn't generate any output at all. What can be done in this situation to get the memory profiled?
If you have a recent Valgrind version (>= 3.7.0), Valgrind has an embedded gdbserver, so it can be used together with GDB. Before your application starts to run under Valgrind, you can set breakpoints. When a breakpoint is encountered, GDB monitor commands are available to invoke Valgrind tool-specific functionality. For example, with Massif you can trigger the production of a report; with Memcheck you can do a leak search, examine validity bits, and so on.
It is also possible to trigger these monitor commands from the shell command line, using the Valgrind vgdb utility.
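For example, a minimal sketch of the vgdb route (myprog and the snapshot file name are placeholders): run the program under Massif with the embedded gdbserver enabled, then request a snapshot from a second shell before the OS kills the process:

# shell 1: run under Massif with the gdbserver enabled
valgrind --tool=massif --vgdb=yes ./myprog

# shell 2: ask Massif for a heap snapshot while the program is still alive
# (add --pid=<pid> if several Valgrind processes are running)
vgdb snapshot massif.snapshot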

Weka GUI - Not enough memory, won't load?

This same installation of Weka has loaded for me in the past. I am simply trying to load the Weka GUI (double click on the icon) and I get the following error. How can I fix it?
OutOfMemory
Not enough memory. Please load a smaller dataset or use a larger heap size.
- initial JVM size: 122.4 MB
- total memory used: 165.3 MB
- max. memory avail.: 227.6 MB
Note:
The Java heap size can be specified with the -Xmx option.
etc..
I am not loading Weka from the command line, so how can I stop this from occurring?
Just writing an answer here for Ubuntu users.
If you apt-get install weka, you will have a script installed at /usr/bin/weka.
The first few lines look like below:
#!/bin/bash
. /usr/lib/java-wrappers/java-wrappers.sh
# default options
CLASS="weka.gui.GUIChooser"
MEMORY="256m"
GUI=""
Just modify the line that starts with MEMORY so that you have a larger upper bound:
MEMORY="2048m"
I'm not sure why you were able to use it before but not now. However, you can specify a larger heap size by changing the RunWeka.ini configuration file. On a Windows machine it should be in the Weka folder of your Program Files directory. You could try a line specifying, for example,
maxheap=200m
There might already be such an option in that file that you can simply change to a larger number.
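If editing the file doesn't take effect, you can also start Weka from a command line with an explicit heap size (run this from the folder that contains weka.jar; the 2g value is just an example):

java -Xmx2g -jar weka.jar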
Here is how to do it on Mac:
right-click on the main Weka file (the one that opens the GUI) and select "Show Package Contents";
open the Info.plist file with any text editor;
change the -Xmx option.
Voilà!
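For reference, a sketch of what the relevant fragment may look like; the exact key name (e.g. JVMOptions vs. the older Java/VMOptions) depends on the launcher your Weka version uses, so treat this as an assumption and simply look for the existing -Xmx entry in your file:

<key>JVMOptions</key>
<array>
    <string>-Xmx2048m</string>
</array>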

PermGen problems with Lift and Jetty

I'm developing on the standard Lift platform (maven and jetty). I'm repeatedly (once every couple of days) getting this:
Exception in thread "7048009#qtp-3179125-12" java.lang.OutOfMemoryError: PermGen space
2009-09-15 19:41:38.629::WARN: handle failed
java.lang.OutOfMemoryError: PermGen space
This is in my dev environment. It's not a problem because I can keep restarting the server. In deployment I'm not having these problems so it's not a real issue. I'm just curious.
I don't know too much about the JVM. I think I'm correct in thinking that permanent generation memory is for things like classes and interned strings? What I remember is a bit mixed up with the .NET memory model...
Any reason why this is happening? Are the defaults just crazily low? Is it to do with all the auxiliary objects that Scala has to create for Function objects and similar FP things? Every time I restart Jetty with newly written code (every few minutes) I imagine it re-loads classes etc. But even so, it can't be that many, can it? And shouldn't the JVM be able to deal with a large number of classes?
Cheers
Joe
From this post:
This exception occurs for one simple reason:
the permgen space is where class properties, such as methods, fields, annotations, and also static variables, etc., are stored in the Java VM, but this space has the particularity of not being cleaned by the garbage collector.
So if your webapp uses or creates a lot of classes (I'm thinking of dynamic generation of classes), chances are you have met this problem.
Here are some solutions that helped me get rid of this exception :
-XX:+CMSClassUnloadingEnabled: this setting enables garbage collection in the permgen space
-XX:+CMSPermGenSweepingEnabled: allows the garbage collector to remove even classes from memory
-XX:PermSize=64M -XX:MaxPermSize=128M: raises the amount of memory allocated to the permgen space
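Put together, a launch command with all three settings might look like this (myapp.jar is a placeholder; these flags apply to pre-Java-8 HotSpot VMs, since PermGen was removed in Java 8):

# the CMS class-unloading flags only have an effect when the CMS collector
# is in use, hence -XX:+UseConcMarkSweepGC
java -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:PermSize=64M -XX:MaxPermSize=128M -jar myapp.jar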
Maybe this could help.
Edit July 2012 (almost 3 years later):
Ondra Žižka comments (and I have updated the answer above):
JVM 1.6.0_27 says: Please use
CMSClassUnloadingEnabled (whether class unloading is enabled when using CMS GC)
in place of CMSPermGenSweepingEnabled in the future.
See the full Hotspot JVM Options - The complete reference for more.
If you see this when running mvn jetty:run, set MAVEN_OPTS.
For Linux:
export MAVEN_OPTS="-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
For Windows:
set "MAVEN_OPTS=-XX:+CMSClassUnloadingEnabled -XX:PermSize=256M -XX:MaxPermSize=512M"
mvn jetty:run
Should be fine now. If not, increase -XX:MaxPermSize.
You can also put these permanently to your environment.
For Linux, append the export line to ~/.bashrc
For Windows, press Win + Pause/Break, and go to Advanced > Environment Variables.
See also http://support.microsoft.com/kb/310519.
This is because of the reloading of classes, as you suggested. If you are using lots of libraries etc., the sum of classes will grow rapidly with each restart. Try monitoring your Jetty instance with VisualVM to get an overview of memory consumption when reloading.
The mailing list (http://groups.google.com/group/liftweb/) is the official support forum for Lift, and where you'll be able to get a better answer. I don't know the particulars of your dev setup (you don't go into much detail), but I assume you're reloading your war in Jetty without actually restarting it. Lift doesn't perform dynamic class generation (as suggested by VonC above), but Scala compiles each closure as a separate class. If you're adding and removing closures to your code over the course of several days, it's possible that too many classes are being loaded and never unloaded, taking up perm space. I'd suggest you enable the JVM options mentioned by VonC above and see if they help.
The permanent generation is where the JVM puts stuff that will probably not be (garbage) collected like custom classloaders.
Depending on what you are deploying, the perm gen setting can be low. Some application/container combinations contain memory leaks, so when an app gets undeployed, things like class loaders are sometimes not collected, which fills the perm space and generates the error you are seeing.
Unfortunately, currently the best option in this case is to increase the perm space with the following JVM flag (example for a 192 MB perm size):
-XX:MaxPermSize=192M (or 256M)
The other option is to make sure that neither the container nor the framework leaks memory.
