When starting Mahout I got an error message like this:
root@fazil-VPCEB45FG:/usr/local/mahout/bin# ./mahout
hadoop binary is not in PATH,HADOOP_HOME/bin,HADOOP_PREFIX/bin, running locally
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I have installed openjdk0.6. Is OpenJDK supported by Mahout?
There's no OpenJDK 0.6 -- you mean 1.6. Yes, Java 6 is supported, as you can see in the project documentation. This does not seem to have anything to do with Mahout, as it's an error from the JVM itself. The error itself states the problem: you requested a heap that's too large. So I'd go see what heap size you requested in the Hadoop config and check it. This is the kind of info you should post in a question.
It's exactly what it says in the error message:
Could not reserve enough space for object heap
Check your Hadoop config files, hadoop-env.sh and mapred-site.xml, for any properties where you have allocated memory to the JVM through the Xmx parameter, and lower the values if you don't have enough physical memory.
If you have plenty of RAM and you run Java on a 64-bit OS, you need to add the -d64 Java option to enforce 64-bit mode (it's not done by default in some cases).
Edit: for standalone mode (your case), just use a proper Xmx value, and -d64 if it is a 64-bit OS.
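As a rough sketch (the values are illustrative assumptions, not from this answer), the heap settings in hadoop-env.sh typically look like this and can be lowered, with -d64 added only on a 64-bit OS:

# hadoop-env.sh -- illustrative: lower the heap if physical memory is tight
export HADOOP_HEAPSIZE=512
export HADOOP_OPTS="$HADOOP_OPTS -Xmx512m"
# On a 64-bit OS with plenty of RAM, force 64-bit mode instead:
# export HADOOP_OPTS="$HADOOP_OPTS -d64"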
The JAVA_HEAP_MAX parameter in the mahout script you're running should be lowered. It was 3GB in the Mahout version I downloaded.
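For instance (a sketch; the exact line differs by Mahout version), lower the value near the top of bin/mahout:

# bin/mahout -- illustrative: shrink the default heap to something the machine can actually reserve
JAVA_HEAP_MAX=-Xmx512m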
Related
What is the process to set memory limit for Java 11 inside Docker?
Which JDK to use for production environment?
Thanks
As Java 11 (10+) can automatically detect the container's memory, you can set a memory limit on your container and it should work as intended:
docker run -m 512m ....
As for the choice of JDK, you can either use the Oracle JDK, which is licensed, or the open-source OpenJDK.
More details in this article: https://www.docker.com/blog/improved-docker-container-integration-with-java-10/
You can set the maximum heap size using the HotSpot JVM -Xmx option (see the Java 11 options documentation).
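For example, a minimal sketch (the image name and the percentage are illustrative assumptions, not from the original answer), combining a container memory limit with an explicit cap on the heap relative to that limit:

# Limit the container to 512 MB; Java 10+ detects this limit automatically
docker run -m 512m my-java11-image

# Optionally cap the heap as a fraction of the container memory (Java 10+)
docker run -m 512m my-java11-image java -XX:MaxRAMPercentage=75.0 -jar app.jar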
You may use AdoptOpenJDK for production, as its builds are kept up to date.
My server is running some Java processes (Wowza Media Server).
One day it failed with an "out of memory: java heap" error.
I want Zabbix to detect this issue and send a notification email.
If anyone knows about this, please help or just give me an idea.
Thanks a lot.
I tried to find a command line to get the Java heap size,
java -XX:+PrintFlagsFinal -version | grep HeapSize
but this is not what I want. I want to get the value of the heap memory at the time I run the command.
You can use JMX to monitor JVM metrics (CPU, threads, memory).
JMX monitoring has native support in Zabbix in the form of a Zabbix daemon called "Zabbix Java gateway", introduced in Zabbix 2.0.
You can see the documentation here.
As mentioned, you can use JMX. For the JMX interface, the item key is:
jmx[java.lang:type=Memory,HeapMemoryUsage.committed]
Does your host in Zabbix have a JMX interface configured? You can see how it should look in the documentation link mentioned above, in the section "Configuring JMX interface".
P.S. Mostly it is the server IP and port 6969.
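As a rough sketch (the port and the choice to disable authentication/SSL are illustrative assumptions, not from the original answer), the monitored Java process has to be started with remote JMX enabled so the Zabbix Java gateway can query it:

# Illustrative flags to expose JMX to the Zabbix Java gateway
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=6969 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar your-application.jar

With that in place, an item using the key shown above can drive a trigger that sends the notification email.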
Installed Neo4j CE 3.3.0 on Windows with 8GB RAM. I referred to https://medium.com/@david.allen_3172/using-nlp-in-neo4j-ac40bc92196f for installation of the OpenNLP and APOC packages.
Plugins were copied to the plugins folder (graphaware-nlp-3.3.0.51.1, graphaware-server-enterprise-all-3.3.0.51 and nlp-opennlp-3.3.0.51.1).
The configuration settings were added to the neo4j.conf file as given in https://github.com/graphaware/neo4j-nlp
When I restart the Neo4j server, it takes a lot of time and then gives me the following error message:
Caused by: java.lang.OutOfMemoryError: Java heap space
Exception in thread "GraphAware Starter" java.lang.RuntimeException: Error while initializing model of class: class opennlp.tools.namefind.TokenNameFinderModel
    at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.loadModel(OpenNLPPipeline.java:504)
    at com.graphaware.nlp.processor.opennlp.OpenNLPPipeline.lambda$loadNamedEntitiesFinders$2(OpenNLPPipeline.java:162)
    at java.util.HashMap$EntrySpliterator.forEachRemaining(Unknown
Without the NLP plugins, Neo4j starts fine. Any help is appreciated on the minimum requirements of RAM/hardware.
The language models require memory to be loaded, so I would suggest using at least 4GB or more. Furthermore, you should use the Enterprise version, not CE.
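As a minimal sketch (the values are illustrative assumptions, not from this answer), the heap can be raised in neo4j.conf, which in Neo4j 3.x uses the dbms.memory.heap settings:

# neo4j.conf -- illustrative heap sizing for loading the OpenNLP models
dbms.memory.heap.initial_size=2g
dbms.memory.heap.max_size=4g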
I'm baffled. I have a VM running Ubuntu 14.04. I've followed the procedures here: http://clang.llvm.org/docs/LibASTMatchersTutorial.html and am at the step to run ninja, which builds llvm and clang. Now, my VM is no slouch: I gave it 6GB of RAM, 4 CPUs and a 20GB swap file. The biggest problem comes at link time - it seems to start a large number of ld processes, each using at least 3-4GB of virtual memory, and at some point a lot of CPU each. The swap file grew to over 12GB and the processes are all IO bound, but I don't know if they are doing something useful or thrashing. All I know is the disk is getting hammered and the jobs run forever. I've actually just dropped the VM's CPU count to 1 to see if it might be more efficient with less parallelism, as I surmised the issue may be thrashing.
I suppose my disk could be slow... Any ideas? Should I be using make instead of ninja? My expertise is not Linux (although I'm getting there :-) ), so I'm following the tutorial, but perhaps it is not the recommended "best" way to build the clang/llvm programs.
I have been there. It's happening with the latest SVN release (but not if you get clang 3.8 or older releases). What is happening is that, since a lot of debug information is generated for each compilation unit during development, the file sizes become big.
The solution is to turn off all the debug info that's attached by default. You probably are not going to debug clang, so you won't need it. So instead of just doing this:
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON
What you should do is
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
All the other steps remain the same. Now, I have not tested this with ninja, but I have verified it with make on Ubuntu (this tutorial; I modified the same thing in step 7). This should work as well.
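For reference, a minimal sketch of the whole sequence (the build directory name is illustrative; only the cmake flags above come from this answer):

# Configure a Release build from a separate build directory, then build with ninja
mkdir -p build && cd build
cmake -G Ninja ../llvm -DLLVM_BUILD_TESTS=ON -DCMAKE_BUILD_TYPE=Release
ninja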
I am setting ANT_OPTS in the environment to "-Xms256m -Xmx1024m". After setting this, I am not able to run Ant files from the command prompt. It throws me an error of:
"Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine."
Although I have enough physical memory available (more than 2048m free) to allot 1024m for ANT_OPTS, it still throws the above error. Can there be any other reason why I cannot set Xmx to 1024m?
Anyway, here is how to fix it:
Go to Start->Control Panel->System->Advanced(tab)->Environment Variables->System Variables->New:
Variable name: _JAVA_OPTIONS
Variable value: -Xmx512M
or
set _JAVA_OPTIONS=-Xmx512M
or
Change the Ant call as shown below:
<exec>
    <!-- -J-Xmx512m is forwarded to the JVM of the launched Java tool -->
    <arg value="-J-Xmx512m" />
</exec>
Then build the files again using Ant.
It worked for me.
You don't mention what OS you're running. If you're on Windows (especially 32-bit), I often see problems allocating more than, say, 800MB as heap, regardless of how much actual memory you have available. This isn't really Windows bashing: the Windows JVM wants to allocate all of its heap in a contiguous chunk, and if it can't, it fails to start.
I think Java maximum memory on Windows XP does a good job of explaining the problem and how you might try to solve it.
Whatever you set initially as the minimum heap, the JVM will try to allocate at start-up. It seems that on your machine (a 32-bit machine, I assume) the JVM is unable to allocate it, and JVM start-up fails. Try setting -Xms to 128m or less. It should work.
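For example, a minimal sketch on the Windows command prompt (the values are illustrative assumptions):

:: Lower the initial heap so the JVM can start; keep a moderate maximum
set ANT_OPTS=-Xms128m -Xmx512m
ant -f build.xml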