Tomcat - How to limit the maximum memory Tomcat will use

I am running Tomcat on a small VPS (256MB/512MB) and I want to explicitly limit the amount of memory Tomcat uses.
I understand that I can configure this somehow by passing in the Java maximum heap and initial heap size arguments:
-Xmx256m
-Xms128m
But I can't find where to put this in the configuration of Tomcat 6 on Ubuntu.
Thanks in advance,
Gav

On Ubuntu, the correct way to customize Tomcat variables is by editing the file
/etc/default/tomcat5.5
(or /etc/default/tomcat6 if you have a newer version running)
Inside that file, set the JAVA_OPTS variable as described in the other replies here,
for example
JAVA_OPTS="-Xmx512m"
to set a maximum memory of 512 MB.
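For the 256MB/512MB VPS in the question, a minimal sketch of that file might look like this (heap values taken from the question; -Djava.awt.headless=true is a common but optional extra on servers without a display):
# /etc/default/tomcat6
JAVA_OPTS="-Djava.awt.headless=true -Xms128m -Xmx256m"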

Set JAVA_OPTS in your init script,
export JAVA_OPTS="-Djava.awt.headless=true -server -Xms48m -Xmx1024M -XX:MaxPermSize=512m"

You can add this to the JAVA_OPTS variable in the bin/catalina.sh startup script.
JAVA_OPTS="-Xms128m -Xmx256m"

Related

Jacoco agent (output=file) not creating/writing to .exec file (not using any Maven plugins) [duplicate]

In a shell script, I have set the JAVA_OPTS environment variable (to enable remote debugging and increase memory), and then I execute the jar file as follows:
export JAVA_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=8001,server=y,suspend=n -Xms512m -Xmx512m"
java -jar analyse.jar $*
But it seems there is no effect of the JAVA_OPTS env variable as I cannot connect to remote-debugging and I see no change in memory for the JVM.
What could be the problem?
PS: I cannot use those settings in the java -jar analyse.jar $* command because I process command line arguments in the application.
You can set _JAVA_OPTIONS instead of JAVA_OPTS. The JVM reads it directly on startup, so it takes effect without any change to the java command line.
I don't know of any JVM that actually checks the JAVA_OPTS environment variable. It is usually consumed by scripts that launch the JVM, which simply append it to the java command line.
The key thing to understand here is that arguments to java that come before the -jar analyse.jar bit will only affect the JVM and won't be passed along to your program. So, modifying the java line in your script to:
java $JAVA_OPTS -jar analyse.jar $*
Should "just work".
In the past 12 years some changes were made.
Environment variable JAVA_OPTS was and is NOT a standardized option. It is evaluated by some shell script wrappers for Java-based tools; the answer from ZoogieZork shows an example of how this works.
The environment variable _JAVA_OPTIONS mentioned by HEX is nowadays deprecated/undocumented.
Starting with Java 9, the recommended way to do what you wanted is the variable JDK_JAVA_OPTIONS, see Using the JDK_JAVA_OPTIONS Launcher Environment Variable in the Oracle Java 9 documentation, and this comprehensive answer What is the difference between JDK_JAVA_OPTIONS and JAVA_TOOL_OPTIONS when using Java 11?.
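For example, with Java 9 or later:
export JDK_JAVA_OPTIONS="-Xms512m -Xmx512m"
java -jar analyse.jar "$@"
The launcher confirms the pickup by printing "NOTE: Picked up JDK_JAVA_OPTIONS: ..." on startup.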

Spark Cloudera - Worker Memory Setting [duplicate]

I am configuring an Apache Spark cluster.
When I run the cluster with 1 master and 3 slaves, I see this on the master monitor page:
Memory
2.0 GB (512.0 MB Used)
2.0 GB (512.0 MB Used)
6.0 GB (512.0 MB Used)
I want to increase the used memory for the workers but I could not find the right config for this. I have changed spark-env.sh as below:
export SPARK_WORKER_MEMORY=6g
export SPARK_MEM=6g
export SPARK_DAEMON_MEMORY=6g
export SPARK_JAVA_OPTS="-Dspark.executor.memory=6g"
export JAVA_OPTS="-Xms6G -Xmx6G"
But the used memory is still the same. What should I do to change used memory?
In Spark 1.0.0+, when using spark-shell or spark-submit, use the --executor-memory option. E.g.
spark-shell --executor-memory 8G ...
0.9.0 and under:
When you start a job or start the shell, change the memory. We had to modify the spark-shell script so that it would carry command line arguments through as arguments for the underlying java application. In particular:
OPTIONS="$@"
...
$FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"
Then we can run our spark shell as follows:
spark-shell -Dspark.executor.memory=6g
When configuring it for a standalone jar, I set the system property programmatically before creating the spark context and pass the value in as a command line argument (I can make it shorter than the long-winded system props then).
System.setProperty("spark.executor.memory", valueFromCommandLine)
As for changing the default cluster-wide, sorry, not entirely sure how to do it properly.
One final point - I'm a little worried by the fact you have 2 nodes with 2GB and one with 6GB. The memory you can use will be limited to the smallest node - so here 2GB.
In Spark 1.1.1, to set the max memory for executors, write this in conf/spark-env.sh:
export SPARK_EXECUTOR_MEMORY=2G
If you have not used the config file yet, copy the template file
cp conf/spark-env.sh.template conf/spark-env.sh
Then make the change and restart the workers so they pick up the new value; the Spark launch scripts source conf/spark-env.sh themselves on startup.
In my case, I use the IPython notebook server to connect to Spark, and I want to increase the memory for the executor.
This is what I do:
from pyspark import SparkContext
from pyspark.conf import SparkConf
conf = SparkConf()
conf.setMaster(CLUSTER_URL).setAppName('ipython-notebook').set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
According to the Spark documentation, you can change the memory per node with the command line argument --executor-memory when submitting your application. E.g.
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://master.node:7077 \
--executor-memory 8G \
--total-executor-cores 100 \
/path/to/examples.jar \
1000
I've tested it, and it works.
The default configuration for a worker is to allocate the total host memory minus 1 GB. The configuration parameter to manually adjust that value is SPARK_WORKER_MEMORY, as in your question:
export SPARK_WORKER_MEMORY=6g
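The two variables play different roles, so a minimal conf/spark-env.sh sketch (values assumed for illustration) is:
# total memory a worker may hand out to executors on this node
export SPARK_WORKER_MEMORY=6g
# memory each executor requests; must fit within the worker's allowance
export SPARK_EXECUTOR_MEMORY=2g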

azk - How to increase a VM memory in azk?

I am trying to increase the memory of the VM in azk. Is there an environment variable to do that? Can someone help me, please?
azk (http://azk.io/)
The amount of memory must be set before starting azk agent. So, be sure the agent is down and run:
export AZK_VM_MEMORY=[memory size in MB]
azk agent start
To make the setting persistent across terminal sessions, put the export command into your .profile, .bashrc or .zshrc file (depending on the shell you are using).
Note: by default, azk uses 1/6 of the total memory (or 512MB, whichever is greater) for the VM
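For example, to give the VM 1 GB (a value chosen purely for illustration), with the agent down:
export AZK_VM_MEMORY=1024
azk agent start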

How can I run Neo4j with larger heap size, specify -server and correct GC strategy

As someone who has never really messed with the JVM much, how can I ensure my Neo4j instances are running with all of the recommended JVM settings, e.g. heap size, server mode, and -XX:+UseConcMarkSweepGC?
Should these be set inside a config file? Can I set them dynamically at runtime? Are they set at a system level? Can I have different settings when running two instances of neo4j on the same machine?
It is a bit fuzzy at what point all of these things get set.
I am running neo4j inside a docker container so that is something to consider as well.
My Dockerfile is as follows. I am starting neo4j with the console command:
FROM dockerfile/java:oracle-java8
# INSTALL OS DEPENDENCIES AND NEO4J
ADD /files/neo4j-enterprise-2.1.3-unix.tar.gz /opt/neo
RUN rm /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
ADD /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
#RUN mv -f /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
EXPOSE 7474
CMD ["console"]
ENTRYPOINT ["/opt/neo/neo4j-enterprise-2.1.3/bin/neo4j"]
Ok, so you are using the Neo4j server script. In this case you should configure the low-level JVM properties in neo4j.properties, which should also live in the conf directory. Basically do the same thing for neo4j.properties as you already do for neo4j-server.properties: create the properties file in your Docker context and configure the properties you want to add. Then in the Dockerfile use:
ADD /files/neo4j.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j.properties
The syntax in the properties files is the following (from the documentation):
# initial heap size (in MB)
wrapper.java.initmemory=<value>
# maximum heap size (in MB)
wrapper.java.maxmemory=<value>
# additional literal JVM parameter, where N is a number for each
wrapper.java.additional.N=<value>
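A filled-in sketch for a container with a few GB to spare (numbers assumed, not a recommendation):
# conf/neo4j.properties
wrapper.java.initmemory=512
wrapper.java.maxmemory=2048
wrapper.java.additional.1=-server
wrapper.java.additional.2=-XX:+UseConcMarkSweepGC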
See also http://docs.neo4j.org/chunked/stable/server-performance.html.
One way to test whether the settings are applied is to run jinfo <pid> in the Docker container, where <pid> is the process id of the Neo4j JVM. To enter the container, you can either change the entrypoint to /bin/bash at the command line when you run the container, or you can use nsenter. The latter would be my choice.
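For example, inside the container (jps ships with the JDK):
jps -l        # lists running JVMs with their pids
jinfo 1234    # 1234 stands in for the Neo4j pid reported by jps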

How to run `play` in a 512M vps -- it reports `Could not reserve enough space for object heap`?

I'm running play2 on a 512M vps.
It can create a new app:
play new test
But can't start that test project:
cd test
play
It reports such an error:
[freewind#289144 test]$ play
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
[freewind#289144 test]$
After some research, I found that play2 invokes play-2.0/framework/build, which carries its own memory settings. I tried to modify the play-2.0/play shell, from:
java ${DEBUG_PARAM} -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled \
  -XX:MaxPermSize=384M -Dfile.encoding=UTF8 -Dplay.version="${PLAY_VERSION}" \
  -Dsbt.ivy.home=`dirname $0`/../repository -Dplay.home=`dirname $0` \
  -Dsbt.boot.properties=`dirname $0`/sbt/sbt.boot.properties \
  -jar `dirname $0`/sbt/sbt-launch.jar "$@"
We can see that Xms is 512M; the VPS doesn't have enough memory for that.
So I change it to:
java ${DEBUG_PARAM} -Xms112M -Xmx300M -Xss1M \
  -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=84M -Dfile.encoding=UTF8 \
  ...
This time, the error message changed:
Error occurred during initialization of VM
Cannot create VM thread. Out of system resources.
What should I do?
Assuming you're running the Sun Hotspot VM, run it like this:
_JAVA_OPTIONS="-Xmx384m" play <your commands>
And you'll get what you need. When the VM launches, it includes the contents of the _JAVA_OPTIONS environment variable along with any other command-line Java options you specify. You'll know it was picked up because you'll see the following message on your console:
Picked up _JAVA_OPTIONS: -Xmx384m
The shell command above defines the variable only for execution of the rest of the shell command. If you wanted to make it more durable, you could say something like
export _JAVA_OPTIONS="-Xmx384m"
and put that in .bash_profile, or .profile, etc.
The _JAVA_OPTIONS environment variable is poorly documented, and I'm not sure how widely it is supported, but I'm pretty sure it works on Linux and the BSDs (like Mac OS), and... I don't know what else.
I faced the same issue, but I found the reason and the solution: it is a Java parameter in play. I did a simple check:
java -Xms512M -Xmx1024M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M -version
This does not work, but
java -Xms512M -Xmx1024M -Xss1M -XX:+CMSClassUnloadingEnabled
does work, and
java -Xms512M -Xmx512M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M
works too.
You have to modify build.bat as I did: either lower the maximum memory size or lower the maximum permanent generation size.
I build and develop locally. I then run "play dist" to create a distribution which contains a start script. I deploy to my 512MB VPS using Fabric and do not have any memory issues.
Another way is to use the following command (it works when you don't use play dist but have the framework installed on the server as well; maybe it works with the standalone package too, but I have not tested it):
play "start 6000" -Xms64m -Xmx128m -server
the "start 6000" will start the server listening on port 6000.
play stage && target/start -Xmx384m
