I am setting ANT_OPTS in the environment to "-Xms256m -Xmx1024m". After setting this, I am no longer able to run Ant builds from the command prompt. It throws the following error:
"Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine."
Although I have enough physical memory available (more than 2048m) to allot 1024m for ANT_OPTS, it still throws the above error. Can there be any other reason why I cannot set Xmx to 1024m?
Anyway, here is how to fix it:
Go to Start->Control Panel->System->Advanced(tab)->Environment Variables->System Variables->New:
Variable name: _JAVA_OPTIONS
Variable value: -Xmx512M
or
set _JAVA_OPTIONS=-Xmx512M
or
Change the Ant call as shown below (note that <exec> needs an executable attribute; "ant" here is illustrative):
<exec executable="ant">
    <arg value="-J-Xmx512m" />
</exec>
Then build the files again using Ant.
It worked for me.
You don't mention what OS you're running. If you're on Windows (especially 32-bit) I often see problems allocating more than, say, 800MB as heap, regardless of how much actual memory you have available. This isn't really Windows bashing: the Windows JVM wants to allocate all of its heap in a contiguous chunk and if it can't it fails to start.
I think "Java maximum memory on Windows XP" does a good job of explaining the problem and how you might try to solve it.
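As a quick sanity check (not part of the original answers, just a common diagnostic), you can ask the JVM for the same heap directly; if this command fails with the same message, Ant will fail too:
java -Xmx1024m -version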
Whatever you set as the minimum heap (-Xms), the JVM will try to allocate at startup. It seems that on your machine (a 32-bit machine, I assume) the JVM is unable to allocate it, so JVM startup fails. Try setting -Xms to 128m or less. It should work.
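For example, from the same command prompt (the values here are illustrative; adjust to what your machine can actually reserve):
set ANT_OPTS=-Xms128m -Xmx768m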
We are running TeamCity connected to TFS. On our TeamCity server there are two Java processes: one running TeamCity and one connecting to TFS. For the TeamCity process, I am able to increase the amount of RAM by updating the TEAMCITY_SERVER_MEM_OPTS environment variable.
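For reference, a minimal sketch of that setting on Windows (the value is an example, not taken from the poster's server); TeamCity reads the variable at startup, so it has to be set before the server starts:
set TEAMCITY_SERVER_MEM_OPTS=-Xmx2048m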
For the other process, the one connecting to TFS, I am not able to increase the RAM. I retrieved the command line for this process and noticed that only 1 GB of memory is allocated to it, as can be seen in the following command line:
C:\TeamCity\jre\bin\java -Dfile.encoding=UTF-8 -Dcom.microsoft.tfs.jni.native.base-directory=C:\TeamCity\webapps\ROOT\WEB-INF\plugins\tfs\tfsSdk\14.119.2\native -Xmx1024M
The real issue is that the second Java process is using 100% CPU; I am hoping that increasing the memory available to it will alleviate the problem.
I'm baffled. I have a VM running Ubuntu 14.04. I've followed the procedures here: http://clang.llvm.org/docs/LibASTMatchersTutorial.html and am at the step to run ninja, which builds llvm and clang. Now, my VM is no slouch: I gave it 6GB of RAM, 4 CPUs, and a 20GB swap file. The biggest problem comes at link time - it seems to start a large number of ld processes, each using at least 3-4GB of virtual memory and, at some point, a lot of CPU each. The swap file grew to over 12GB and the processes are all IO bound, but I don't know if they are doing something useful or thrashing. All I know is the disk is getting hammered and the jobs run forever. I've actually just dropped the VM's CPU count to 1, to see if it might be more efficient with less parallelism, as I surmised the issue may be thrashing.
I suppose my disk could be slow... Any ideas? Should I be using make instead of ninja? My expertise is not Linux (although I'm getting there :-) ), so I'm following the tutorial, but perhaps it is not the recommended "best" way to build the clang/llvm programs.
I have been there. It happens with the latest svn release (but not if you get clang 3.8 or older releases). What is happening is that, because a lot of debug information is generated for each compilation unit during development, the file sizes become very large.
The solution is to turn off all the debug info that is attached by default. You are probably not going to debug clang, so you won't need it. So instead of just doing this:
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON
What you should do is this:
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
All the other steps remain the same. I have not tested this with ninja, but I have verified it with make on Ubuntu (in this tutorial I modified the same thing in step 7). It should work just as well.
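If the parallel ld processes are still what exhausts memory, LLVM's CMake build can also cap the number of concurrent link jobs when the Ninja generator is used (this option is my addition, not part of the tutorial):
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release -DLLVM_PARALLEL_LINK_JOBS=1
With -DLLVM_PARALLEL_LINK_JOBS=1 Ninja still compiles in parallel but links one binary at a time, which keeps the memory-hungry ld processes from piling up.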
I am running a PHP script as a process in a Linux shell. With different tools (top, xdebug, ...) I can see the dynamic memory consumption (heap memory) of this very complex script continuously rising.
How can I find out exactly which line of code, which variable, or which place is causing this behavior? Where is the memory leak in the PHP script?
Additional information:
Linux version 2.6.30-gentoo-r4
PHP Version 5.2.10-pl0-gentoo
I can modify the script
I can use xdebug
Please give a reason for closing this question.
Try this around suspect areas:
echo memory_get_usage() . PHP_EOL; // bytes in use before the suspect block
// Suspect code here
echo memory_get_usage() . PHP_EOL; // a steadily growing difference points at this block
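Since you can use Xdebug, a function trace with memory deltas can narrow it down further. A rough sketch for Xdebug 2.x (the output directory and script name are placeholders):
php -d xdebug.auto_trace=1 -d xdebug.show_mem_delta=1 -d xdebug.trace_output_dir=/tmp yourscript.php
The trace file written to /tmp shows the memory delta for each function call, so the calls that keep gaining memory stand out.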
When starting Mahout I got an error message like this:
root@fazil-VPCEB45FG:/usr/local/mahout/bin# ./mahout
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I have installed openjdk0.6. Is OpenJDK supported by Mahout?
There's no OpenJDK 0.6 -- you mean 1.6. Yes, Java 6 is supported, as you can see in the project documentation. This does not seem to have anything to do with Mahout, as it's an error from the JVM itself. The error itself states the problem: you requested a heap that's too large. So I'd go see what heap you requested in the Hadoop config and check it. This is the kind of info you should post in a question.
It's exactly what it says in the error message:
Could not reserve enough space for object heap
Check your Hadoop config files, hadoop-env.sh and mapred-site.xml, for any properties where you have allocated memory to the JVM through the Xmx parameter, and lower the values if you don't have enough physical memory.
If you have plenty of RAM and you run Java on a 64-bit OS, you need to add the -d64 Java option to enforce 64-bit mode (it is not done by default in some cases).
Edit: for standalone mode (your case) just use a proper Xmx value, plus -d64 if it is a 64-bit OS.
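As an illustration of the kind of setting to look for (placeholder values, not the asker's actual config), hadoop-env.sh contains the daemon heap size, and mapred-site.xml may set mapred.child.java.opts (for example -Xmx512m) for task JVMs:
# hadoop-env.sh (placeholder value; lower it if physical memory is tight)
export HADOOP_HEAPSIZE=512        # heap size in MB used by the Hadoop daemons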
The JAVA_HEAP_MAX parameter in the mahout script you're running should be lowered. It was 3 GB in the Mahout version I downloaded.
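For example, in bin/mahout the relevant line looks roughly like this (a sketch; the exact default differs between versions), and you can simply lower the value:
JAVA_HEAP_MAX=-Xmx512m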
I have been developing a website using Grails and demoing it using Cloudfoundry. Grails and Cloudfoundry are awesome! They are easy to use, with support from Grails plugins and tools in STS. My app uses MySQL, MongoDB, Spring Security, and more. I have only used it with one user logged in, and I periodically get java.lang.OutOfMemoryError: PermGen space. I have increased the memory to 1G using the Grails plugin, and I tried to set JAVA_OPTS to increase the memory, but this did not work. I am going to examine where the memory is being used, but it seems that one user and a tiny set of demo data should not be pushing the memory limits. Does Cloudfoundry support apps that require larger memory?
After reading this post I set MaxPermSize to 512M and I no longer have out of memory errors. I'm using the Grails command line on Windows and I cannot get more than one JAVA_OPTS setting applied; only the first in the list is used.
grails cf-env-add JAVA_OPTS "-XX:MaxPermSize=512m -Xms512M -Xmx512M"
This one setting has added stability to my demo site.
The original poster has an answer in the original question:
After reading this post I set MaxPermSize to 512M and I no longer have out of memory errors. I'm using the Grails command line on Windows and I cannot get more than one JAVA_OPTS setting applied; only the first in the list is used. grails cf-env-add JAVA_OPTS "-XX:MaxPermSize=512m -Xms512M -Xmx512M" This one setting has added stability to my demo site.
It's PermGen space, and in this case it happens in development mode only (or when you're using Tomcat and redeploy your app a few times); it's not a CloudFoundry issue.
See the official FAQ entry: "OMG I get OutOfMemoryErrors or PermGen Space errors when running Grails in development mode. What do I do?"
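For the development-mode case, a common remedy (a sketch assuming you start Grails locally from the command line; the values are examples, not the FAQ's exact wording) is to raise PermGen via GRAILS_OPTS before running grails:
set GRAILS_OPTS=-XX:MaxPermSize=512m -Xmx512m
On Linux, use export GRAILS_OPTS="-XX:MaxPermSize=512m -Xmx512m" instead; the Grails startup script picks the variable up.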