I am trying to increase the memory of the azk VM. Is there an environment variable to do that? Can someone help me, please?
azk (http://azk.io/)
The amount of memory must be set before starting the azk agent. So, be sure the agent is down and run:
export AZK_VM_MEMORY=[memory size in MB]
azk agent start
To make that configuration persistent between terminal sessions, you can put the export command into your .profile, .bashrc, or .zshrc file (depending on the shell you are using).
Note: by default, azk uses 1/6 of the total memory (or 512 MB, whichever is greater) for the VM.
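For example, a minimal sketch giving the VM 2048 MB (the size is illustrative; adjust it to your host):
# stop the agent, set the size, and start it again
azk agent stop
export AZK_VM_MEMORY=2048
azk agent start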
Related
Can anyone tell me how to set a U-Boot environment variable from a normal user-space terminal? That is, once the kernel image has loaded, I need to open a terminal and change the U-Boot environment variable so that the change is reflected on the next reboot.
Look at /boot/uEnv.txt or /boot/boot.txt, depending on your distribution.
For the latter, you might need to run mkscr after modifying it.
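A sketch of the boot.txt route, assuming an Arch Linux ARM-style setup where /boot ships a mkscr helper (paths and helper name vary by distribution):
# edit the boot script source, e.g. to add or change a variable
sudo nano /boot/boot.txt
# regenerate boot.scr so U-Boot picks up the change on the next reboot
cd /boot && sudo ./mkscr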
I'm trying to increase the C-stack size in RStudio Server 0.99 on CentOS 6 by editing the /etc/rstudio/rserver.conf file as follows:
rsession-stack-limit-mb=20
But "rstudio-server verify-installation" returns this message:
The option 'rsession-stack-limit-mb' is deprecated and will be discarded.
If I put this setting in /etc/rstudio/rsession.conf instead, I get this message:
unrecognised option 'rsession-stack-limit-mb'
Can someone help me find the right configuration?
Thanks in advance
Diego
I guess you use the free version of RStudio Server. According to https://github.com/rstudio/rstudio/blob/master/src/cpp/server/ServerOptions.cpp, it seems you need the commercial version if you'd like to manage memory limits in RStudio Server.
Alternatively, you can use the ulimit command on CentOS, e.g., ulimit -s 20000. Then run R from the Linux command line or in batch mode.
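A minimal sketch of that route (20000 is in KB and just an illustrative value; the script name is hypothetical):
# raise the stack limit for the current shell, then start R in that shell
ulimit -s 20000
R CMD BATCH my_script.R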
I am configuring an Apache Spark cluster.
When I run the cluster with 1 master and 3 slaves, I see this on the master monitor page:
Memory
2.0 GB (512.0 MB Used)
2.0 GB (512.0 MB Used)
6.0 GB (512.0 MB Used)
I want to increase the memory used by the workers, but I could not find the right config for this. I have changed spark-env.sh as below:
export SPARK_WORKER_MEMORY=6g
export SPARK_MEM=6g
export SPARK_DAEMON_MEMORY=6g
export SPARK_JAVA_OPTS="-Dspark.executor.memory=6g"
export JAVA_OPTS="-Xms6G -Xmx6G"
But the used memory is still the same. What should I do to change the used memory?
When using Spark 1.0.0+ with spark-shell or spark-submit, use the --executor-memory option. E.g.
spark-shell --executor-memory 8G ...
0.9.0 and under:
When you start a job or start the shell, change the memory. We had to modify the spark-shell script so that it would carry command-line arguments through as arguments for the underlying Java application. In particular:
OPTIONS="$@"
...
$FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"
Then we can run our spark shell as follows:
spark-shell -Dspark.executor.memory=6g
When configuring it for a standalone jar, I set the system property programmatically before creating the Spark context and pass the value in as a command-line argument (I can make it shorter than the long-winded system props then).
System.setProperty("spark.executor.memory", valueFromCommandLine)
As for changing the default cluster wide, sorry, not entirely sure how to do it properly.
One final point - I'm a little worried by the fact you have 2 nodes with 2GB and one with 6GB. The memory you can use will be limited to the smallest node - so here 2GB.
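An addition not in the original answer: from Spark 1.0 on, a default for applications submitted from a given machine can be set in conf/spark-defaults.conf, e.g.:
# conf/spark-defaults.conf (read by spark-submit)
spark.executor.memory   6g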
In Spark 1.1.1, to set the max memory of workers, write this in conf/spark-env.sh:
export SPARK_EXECUTOR_MEMORY=2G
If you have not used the config file yet, copy the template file
cp conf/spark-env.sh.template conf/spark-env.sh
Then make the change, and don't forget to source it:
source conf/spark-env.sh
In my case, I use an IPython notebook server to connect to Spark, and I want to increase the memory for the executor.
This is what I do:
from pyspark import SparkContext
from pyspark.conf import SparkConf
conf = SparkConf()
conf.setMaster(CLUSTER_URL).setAppName('ipython-notebook').set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
According to the Spark documentation, you can change the memory per node with the command-line argument --executor-memory while submitting your application. E.g.
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://master.node:7077 \
--executor-memory 8G \
--total-executor-cores 100 \
/path/to/examples.jar \
1000
I've tested it and it works.
The default configuration for a worker is to allocate the host memory minus 1 GB to each worker. The configuration parameter to manually adjust that value is SPARK_WORKER_MEMORY, as in your question:
export SPARK_WORKER_MEMORY=6g
As someone who has never really messed with the JVM much, how can I ensure my Neo4j instances are running with all of the recommended JVM settings, e.g. heap size, server mode, and -XX:+UseConcMarkSweepGC?
Should these be set inside a config file? Can I set them dynamically at runtime? Are they set at a system level? Can I have different settings when running two instances of Neo4j on the same machine?
It is a bit fuzzy at what point all of these things get set.
I am running Neo4j inside a Docker container, so that is something to consider as well.
My Dockerfile is as follows; I am starting Neo4j with the console command:
FROM dockerfile/java:oracle-java8
# INSTALL OS DEPENDENCIES AND NEO4J
ADD /files/neo4j-enterprise-2.1.3-unix.tar.gz /opt/neo
RUN rm /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
ADD /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
#RUN mv -f /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
EXPOSE 7474
CMD ["console"]
ENTRYPOINT ["/opt/neo/neo4j-enterprise-2.1.3/bin/neo4j"]
OK, so you are using the Neo4j server script. In that case you should configure the low-level JVM settings in neo4j-wrapper.conf (neo4j.properties holds database tuning, not JVM flags), which also lives in the conf directory. Basically, do the same thing for neo4j-wrapper.conf as you already do for neo4j-server.properties: create the file in your Docker context and configure the properties you want to add. Then in the Dockerfile use:
ADD /files/neo4j-wrapper.conf /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-wrapper.conf
The syntax in that file is the following (from the documentation):
# initial heap size (in MB)
wrapper.java.initmemory=<value>
# maximum heap size (in MB)
wrapper.java.maxmemory=<value>
# additional literal JVM parameter, where N is a number for each
wrapper.java.additional.N=<value>
See also http://docs.neo4j.org/chunked/stable/server-performance.html.
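For instance, a hypothetical neo4j-wrapper.conf fragment setting a 4 GB heap plus the flags mentioned in the question (the values are illustrative, not a recommendation):
# initial and maximum heap size, in MB
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
# extra literal JVM flags, numbered sequentially
wrapper.java.additional.1=-server
wrapper.java.additional.2=-XX:+UseConcMarkSweepGC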
One way to test whether the settings are applied is to run jinfo <pid> in the Docker container, where <pid> is the process ID of the Neo4j JVM. To enter the container, you can either change the entrypoint to /bin/bash at the command line when you run the container, or you can use nsenter. The latter would be my choice.
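A sketch of that check from inside the container (the pgrep pattern is an assumption about how the process shows up on your system):
# find the Neo4j JVM's pid, then print its VM flags and look for the heap size
PID=$(pgrep -f neo4j)
jinfo -flags $PID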
I am running Tomcat on a small VPS (256MB/512MB) and I want to explicitly limit the amount of memory Tomcat uses.
I understand that I can configure this somehow by passing in the Java maximum heap and initial heap size arguments:
-Xmx256m
-Xms128m
But I can't find where to put this in the configuration of Tomcat 6 on Ubuntu.
Thanks in advance,
Gav
On Ubuntu, the correct way to customize Tomcat variables is by editing the file
/etc/default/tomcat5.5
(or /etc/default/tomcat6 if you have a newer version running)
Inside that file, set the JAVA_OPTS variable as described in the other replies here,
for example
JAVA_OPTS="-Xmx512m"
to set a maximum memory of 512 MB.
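For instance, a sketch for the tomcat6 package (the service name is an assumption about your install):
# append the options and restart Tomcat so they take effect
echo 'JAVA_OPTS="${JAVA_OPTS} -Xms128m -Xmx256m"' | sudo tee -a /etc/default/tomcat6
sudo service tomcat6 restart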
Set JAVA_OPTS in your init script:
export JAVA_OPTS="-Djava.awt.headless=true -server -Xms48m -Xmx1024M -XX:MaxPermSize=512m"
You can add this to the JAVA_OPTS variable in the bin/catalina.sh startup script.
JAVA_OPTS="-Xms128m -Xmx256m"