Kubernetes throwing OOM for pods running a JVM - docker

I am running Docker containers containing a JVM (java8u31). These containers are deployed as pods in a Kubernetes cluster. I often get OOMs for the pods: Kubernetes kills the pods and restarts them. I am having trouble finding the root cause of these OOMs as I am new to Kubernetes.
Here are the JVM parameters
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Xms700M -Xmx1000M -XX:MaxRAM=1536M -XX:MaxMetaspaceSize=250M
These containers are deployed as a stateful set, and the following is the resource allocation:
resources:
  requests:
    memory: "1.5G"
    cpu: 1
  limits:
    memory: "1.5G"
    cpu: 1
so the total memory allocated to the container matches MaxRAM.
If I use -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/etc/opt/jmx/java_pid%p.hprof, it doesn't help, because the pod is killed, recreated and restarted as soon as there is an OOM, so everything within the pod is lost.
The only way to get a thread or heap dump would be to SSH into the pod, which I also cannot do usefully because the pod is recreated after an OOM, so I don't get the memory footprint at the time of the OOM. SSHing in after an OOM is not much help.
I also profiled the code using VisualVM and jhat, but couldn't find a substantial memory footprint that would point to excessive memory consumption by the threads running within the JVM or to a probable leak.
Any help is appreciated to resolve the OOM thrown by Kubernetes.

When your application in a pod reaches the memory limit that you set via resources.limits.memory or the namespace limit, Kubernetes kills and restarts the pod.
The Kubernetes part of limiting resources is described in the following articles:
Kubernetes best practices: Resource requests and limits
Resource Quotas
Admission control plugin: ResourceQuota
Assign Memory Resources to Containers and Pods
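Before tuning anything, it is worth confirming that the restarts really are cgroup-level OOM kills rather than application-level OutOfMemoryErrors. A minimal check (the pod name below is a placeholder, and the output is abridged):
$ kubectl describe pod my-app-0
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137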
The memory consumed by a Java application is not limited to the size of the heap, which you set by specifying the options:
-Xms<size> Specifies the initial heap size.
-Xmx<size> Specifies the maximum heap size.
A Java application also needs some additional memory for metaspace, class space and thread stacks, and the JVM itself needs even more memory to do its tasks, such as garbage collection, JIT optimization, off-heap allocations and JNI code.
It is hard to predict the total memory usage of the JVM with reasonable precision, so the best way is to measure it on the real deployment under a typical load.
I would recommend setting the Kubernetes pod limit to double the Xmx size, checking that you no longer get OOMs, and then gradually decreasing it to the point where OOMs start again. The final value should sit somewhere between these two points.
You can get more precise value from memory usage statistics in a monitoring system like Prometheus.
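If a full monitoring stack is not available, a rough point-in-time reading can be taken with the metrics API, assuming metrics-server is installed in the cluster (pod name and values are illustrative):
$ kubectl top pod my-app-0
NAME       CPU(cores)   MEMORY(bytes)
my-app-0   120m         1100Mi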
On the other hand, you can try to limit Java memory usage by specifying a number of the available options, like the following:
-Xms<heap size>[g|m|k] -Xmx<heap size>[g|m|k]
-XX:MaxMetaspaceSize=<metaspace size>[g|m|k]
-Xmn<young size>[g|m|k]
-XX:SurvivorRatio=<ratio>
More details on that can be found in these articles:
Properly limiting the JVM’s memory usage (Xmx isn’t enough)
Why does my Java process consume more memory than Xmx
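As a sketch only (the exact numbers depend on your application and need to be validated under load), those flags could be combined for a container with a 1.5G limit roughly like this, leaving headroom for metaspace, thread stacks and other native allocations:
$ docker run -m 1536M openjdk:8u131 java \
    -Xms512m -Xmx1g -Xss512k \
    -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=128m \
    -XshowSettings:vm -version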
The second way to limit JVM memory usage is to calculate the heap size based on the amount of RAM (or MaxRAM). There is a good explanation of how it works in this article:
The default sizes are based on the amount of memory on a machine, which can be set with the -XX:MaxRAM=N flag.
Normally, that value is calculated by the JVM by inspecting the amount of memory on the machine.
However, the JVM limits MaxRAM to 1 GB for the client compiler, 4 GB for 32-bit server compilers, and 128 GB for 64-bit compilers.
The maximum heap size is one-quarter of MaxRAM.
This is why the default heap size can vary: if the physical memory on a machine is less than MaxRAM, the default heap size is one-quarter of that.
But even if hundreds of gigabytes of RAM are available, the most the JVM will use by default is 32 GB: one-quarter of 128 GB. The default maximum heap calculation is actually this:
Default Xmx = MaxRAM / MaxRAMFraction
Hence, the default maximum heap can also be set by adjusting the value of the -XX:MaxRAMFraction=N flag, which defaults to 4.
Finally, just to keep things interesting, the -XX:ErgoHeapSizeLimit=N flag can also be set to a maximum default value that the JVM should use.
That value is 0 by default (meaning to ignore it); otherwise, that limit is used if it is smaller than MaxRAM / MaxRAMFraction.
The initial heap size choice is similar, though it has fewer complications. The initial heap size value is determined like this:
Default Xms = MaxRAM / InitialRAMFraction
As can be concluded from the default minimum heap sizes, the default value of the InitialRAMFraction flag is 64.
The one caveat here occurs if that value is less than 5 MB, or, strictly speaking, less than the values specified by -XX:OldSize=N (which defaults to 4 MB) plus -XX:NewSize=N (which defaults to 1 MB).
In that case, the sum of the old and new sizes is used as the initial heap size.
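Applied to the flags in the question, the default calculation would give Default Xmx = MaxRAM / MaxRAMFraction = 1536M / 4 = 384M; the explicit -Xmx1000M overrides that default. You can check what the JVM would pick on its own with something like the following (the reported estimate will be close to, but usually slightly below, a quarter of MaxRAM due to alignment):
$ docker run -m 1536M openjdk:8u131 java -XX:MaxRAM=1536M -XshowSettings:vm -version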
This article gives you a good point to start tuning your JVM for web-oriented application:
Java VM Options You Should Always Use in Production

If you are able to run on Java 11 (or 10) instead of 8, the memory limit options have been much improved (plus the JVM is cgroups-aware). Just use -XX:MaxRAMPercentage (range 0.0, 100.0):
$ docker run -m 1GB openjdk:11 java -XshowSettings:vm -XX:MaxRAMPercentage=80 -version
VM settings:
Max. Heap Size (Estimated): 792.69M
Using VM: OpenJDK 64-Bit Server VM
openjdk version "11.0.1" 2018-10-16
OpenJDK Runtime Environment (build 11.0.1+13-Debian-2)
OpenJDK 64-Bit Server VM (build 11.0.1+13-Debian-2, mixed mode, sharing)
That way, you can easily specify 80% of available container memory for the heap, which wasn't possible with the old options.
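If you would rather not change the image's entrypoint, the same flag can be injected through the JAVA_TOOL_OPTIONS environment variable, which the JVM picks up automatically (the 80.0 value is just an example):
$ docker run -m 1GB -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=80.0" \
    openjdk:11 java -XshowSettings:vm -version
In Kubernetes the same variable can be set in the pod spec's env section, so the heap scales with whatever memory limit the pod is given.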

Thanks @VAS for your comments. Thanks for the Kubernetes links.
After a few tests I think it is not a good idea to specify Xmx if you are using -XX:+UseCGroupMemoryLimitForHeap, since Xmx overrides it. I am still doing some more tests and profiling.
Since my requirement is running a JVM inside a Docker container, I did a few tests as mentioned in the posts by @Eugene. Considering that every app running inside a JVM needs heap plus some native memory, I think we need to specify -XX:+UnlockExperimentalVMOptions, -XX:+UseCGroupMemoryLimitForHeap, -XX:MaxRAMFraction=1 (considering only the JVM is running inside the container; at the same time this is risky) and -XX:MaxRAM (I think we should specify this if MaxRAMFraction is 1, so that you leave some memory for native allocations).
Few tests:
As per the docker configuration below, the container is allocated 1 GB, assuming you only have the JVM running inside it. Given docker's 1G allocation, and since I also want to leave some memory for process/native usage, I think I should use -XX:MaxRAM=700M so that I have about 300 MB for native memory.
$ docker run -m 1GB openjdk:8u131 java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XX:MaxRAM=700M -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 622.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
Now, specifying -XX:MaxRAMFraction=1 might get the container killed:
references: https://twitter.com/csanchez/status/940228501222936576?lang=en
Is -XX:MaxRAMFraction=1 safe for production in a containerized environment?
The following would be better; please note that I have removed MaxRAM since MaxRAMFraction > 1:
$ docker run -m 1GB openjdk:8u131 java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 455.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
This leaves the remaining ~500M for native memory, which could e.g. be used for Metaspace by specifying -XX:MaxMetaspaceSize:
$ docker run -m 1GB openjdk:8u131 java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -XX:MaxMetaspaceSize=200M -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 455.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
Logically, and also per the references above, it makes sense to specify -XX:MaxRAMFraction > 1. This also depends on the application profiling done.
I am still doing some more tests and will update these results or post them. Thanks

I recently came across a similar issue:
java 11.0.11+9 + Kubernetes running Docker containers in a pod
similar config to the OP's:
resources:
  requests:
    memory: "1G"
    cpu: 400m
  limits:
    memory: "1G"
with -XX:MaxRAMPercentage=60.0
Our service uploads and downloads a lot of data, so direct memory is used, and in this issue I found that MaxDirectMemorySize defaults to the heap size. So if we calculate the memory usage, it could go beyond the 1G limit (1G * 0.6 * 2). In this case we increased the memory to 1.5G and changed -XX:MaxRAMPercentage=35.0 so we have enough space for the heap plus direct memory and even for some OS-related tasks. Be cautious when you set MaxRAMPercentage or Xmx in a container environment.
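If the workload genuinely needs large direct buffers, another option is to cap them explicitly instead of relying on the default (which tracks the heap size). A hedged example with illustrative values only:
$ docker run -m 1536M openjdk:11 java \
    -XX:MaxRAMPercentage=35.0 -XX:MaxDirectMemorySize=256m \
    -XshowSettings:vm -version
That way heap + direct memory have a known upper bound that can be compared against the container limit.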

Related

Nifi 1.6.0 memory leak

We're running Docker containers of NiFi 1.6.0 in production and have come across a memory leak.
Once started, the app runs just fine; however, after a period of 4-5 days, the memory consumption on the host keeps increasing. When checked in the NiFi cluster UI, the JVM heap usage is hardly around 30%, but the memory at the OS level goes to 80-90%.
On running the docker stats command, we found that the NiFi docker container is consuming the memory.
After collecting the JMX metrics, we found that the RSS memory keeps growing. What could be the potential cause of this? In the JVM tab of the cluster dialog, young GC also seems to be happening in a timely manner, with old GC counts shown as 0.
How do we go about identifying what's causing the RSS memory to grow?
You need to replicate that in a non-docker environment, because with docker, the reported memory is known to rise.
As I explained in "Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container", docker has some bugs (like issue 10824 and issue 15020) which prevent an accurate report of the memory consumed by a Java process within a Docker container.
That is why a plugin like signalfx/docker-collectd-plugin mentions (two weeks ago) in its PR (Pull Request) 35 to "deduct the cache figure from the memory usage percentage metric":
Currently the calculation for memory usage of a container/cgroup being returned to SignalFX includes the Linux page cache.
This is generally considered to be incorrect, and may lead people to chase phantom memory leaks in their application.
For a demonstration on why the current calculation is incorrect, you can run the following to see how I/O usage influences the overall memory usage in a cgroup:
docker run --rm -ti alpine
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
dd if=/dev/zero of=/tmp/myfile bs=1M count=100
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You should see that the usage_in_bytes value rises by 100MB just from creating a 100MB file. That file hasn't been loaded into anonymous memory by an application, but because it's now in the page cache, the container memory usage is appearing to be higher.
Deducting the cache figure in memory.stat from the usage_in_bytes shows that the genuine use of anonymous memory hasn't risen.
The signalFX metric now differs from what is seen when you run docker stats which uses the calculation I have here.
It seems like knowing the page cache use for a container could be useful (though I am struggling to think of when), but knowing it as part of an overall percentage usage of the cgroup isn't useful, since it then disguises your actual RSS memory use.
In a garbage-collected application with a max heap size as large as, or larger than, the cgroup memory limit (e.g. the -Xmx parameter for Java, or .NET Core in server mode), the tendency will be for the percentage to get close to 100% and then just hover there, assuming the runtime can see the cgroup memory limit properly.
If you are using the Smart Agent, I would recommend using the docker-container-stats monitor (to which I will make the same modification to exclude cache memory).
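As a rough illustration of the subtraction described above (cgroup v1 paths, as in the quoted commands; "mycontainer" is a placeholder name):
$ docker exec mycontainer sh -c '
    usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
    cache=$(grep "^cache " /sys/fs/cgroup/memory/memory.stat | cut -d" " -f2)
    echo "usage minus page cache: $((usage - cache)) bytes"'
The resulting figure is much closer to the RSS you actually care about than usage_in_bytes on its own.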
Yes, the NiFi docker container has memory issues; it shoots up after a while and restarts on its own. On the other hand, the non-docker deployment works absolutely fine.
Details:
Docker:
Run it with a 3gb heap size and immediately after startup it consumes around 2gb. Run some processors, and the machine's fan runs heavily and it restarts after a while.
Non-Docker:
Run it with a 3gb heap size and it takes 900mb and runs smoothly (observed via jconsole).

Spring boot is consuming too much RAM

I have created some services in Spring Boot. I have 11 fat jars and I deploy them in Docker containers. My concern is that every jar is consuming between 1 and 1.5 GB of RAM without any load. I check the RAM by running:
docker stats containername
At first I thought it was the Java base image and I tried to change to one that uses Alpine, but nothing changed, so I think the only problem is my jar. Is there a way to change the amount of RAM the jar is using? Or is this behavior normal because every jar has an embedded Tomcat? Or maybe it is better to put some jars together, deploy them as a WAR and use only one Tomcat for a group of "jars"? Can someone share his/her experience?
Thanks in advance.
This is how Java behaves in general. The JVM takes as much memory as you give it, and it will perform a process called garbage collection (What is the garbage collector in Java) to free up space once it decides it should do so.
However, if you don't tell your JVM how much memory it can use, it will use the system defaults, which depend on your system's memory and the number of cores you have. You can verify this using the following command (How is the default Java heap size determined):
java -XX:+PrintFlagsFinal -version | grep HeapSize
On my machine, that's an initial heap size of 256MiB and a maximum heap size of 4GiB. However, that doesn't mean your application needs that much.
A good way of measuring your memory is by using a monitoring tool like jvisualvm. Additionally, you could use actuator's /health endpoint to see the heap memory usage as well.
Your heap memory usage will normally have a sawtooth pattern (Why a sawtooth shaped graph), where the memory is gradually being used, and eventually freed by the garbage collector.
The memory that is left over after a garbage collection is usually objects that cannot be destroyed because they're still in use. You could see this as your working memory. Now, to configure your -Xmx you'll have to see how your application behaves after trying it out:
Configure it below your normal memory usage and your application will go out of memory, throwing an OutOfMemoryError.
Configure it too low but above your minimal memory usage, and you will see a huge performance hit, due to the garbage collector continuously having to free memory.
Configure it too high and you'll reserve memory you won't need in most of the cases, so wasting too much resources.
From the screenshot above, you can see that my application reserves about 1GiB of memory for heap usage, while it only uses about 30MiB after a garbage collection. That means it has a far too high -Xmx value, so we could change it to different values and see how the application behaves.
People often prefer to work in powers of 2 (even though there is no limitation, as seen in jvm heap setting pattern). In my case, I need at least 30MiB, since that's the amount of memory my application uses at all times. So that means I could try -Xmx32m, see how it performs, and adjust if it goes out of memory or performs worse.
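If a GUI profiler is awkward to attach inside a container, a lightweight alternative is to watch GC statistics from the command line, assuming the image ships the full JDK and that 1234 stands in for the Java process id (both assumptions):
$ jstat -gcutil 1234 5s
The E and O columns show eden and old generation utilisation as percentages, which gives a similar sawtooth view to the one described above.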
You can set the memory usage of the docker container using -e JAVA_OPTS="-Xmx64M -Xms64M".
Dockerfile:
FROM openjdk:8-jre-alpine
VOLUME /var/lib/mysql
ADD /build/libs/application.jar app.jar
ENTRYPOINT exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
image run:
docker run -d --name container-name -p 9100:9100 -e JAVA_OPTS="-Xmx512M -Xms512M" imagename:tag
Here I set 512MB memory usage. You can set 1g or whatever your requirement is. After running with this, check your memory usage; it will be at most 512MB.
After taking a look at the openjdk DockerHub image documentation, it seems that you can set the default heap size by setting -XX:MaxRAM=...:
RAM limit is supported by Windows Server containers, but currently JVM
cannot detect it. To prevent excessive memory allocations,
-XX:MaxRAM=... option must be specified with the value that is not bigger than a containers RAM limit.
From the Oracle docs:
Default Heap Size: Unless the initial and maximum heap sizes are specified on the command line, they are calculated based on the amount of memory on the machine.

Kubernetes pods restart issue anomaly

My Java microservices are running in k8s cluster hosted on AWS EC2 instances.
I have around 30 microservices (a good mix of Node.js and Java 8) running in a K8s cluster. I am facing a challenge where my Java application pods get restarted unexpectedly, which leads to an increase in the application 5xx count.
To debug this, I started a New Relic agent in the pod along with the application and found the following graph:
where I can see that I have an Xmx value of 6GB and my usage peaks at 5.2GB.
This clearly shows that the JVM is not crossing the Xmx value.
But when I describe the pod and look at the last state, it says "Reason: Error" with "Exit Code: 137".
Then on further investigation I found that my pod's average memory usage is close to its limit all the time (allocated 9Gi, using ~9Gi). I am not able to understand why memory usage is so high in the pod even though I have only one process running (the JVM), and that too is restricted to a 6GB Xmx.
When I log in to my worker nodes and check the status of the docker containers, I can see the last container of that application in the Exited state, with "Container exits with non-zero exit code 137".
I can see the worker node kernel logs, which show the kernel terminating my process running inside the container.
I can see I have a lot of free memory on my worker node.
I am not sure why my pods get restarted again and again; is this K8s behaviour or something suspicious in my infrastructure? This forces me to move my application from containers back to VMs, as it leads to an increase in the 5xx count.
EDIT: I am getting OOM even after increasing memory to 12GB.
I am not sure why the pod is getting killed because of OOM, though the JVM Xmx is only 6 GB.
Need help!
Some older Java versions (prior to the Java 8u131 release) don't recognize that they are running in a container. So even if you specify a maximum heap size for the JVM with -Xmx, the JVM will set the maximum heap size based on the host's total memory instead of the memory available to the container, and then when a process tries to allocate memory over its limit (defined in the pod/deployment spec), your container gets OOMKilled.
These problems might not pop up when running your Java apps in a K8s cluster locally, because the difference between the pod memory limit and the total local machine memory isn't big. But when you run it in production on nodes with more memory available, the JVM may go over your container memory limit and be OOMKilled.
Starting from Java 8(u131 release) it is possible to make JVM be “container-aware” so that it recognizes constraints set by container control groups (cgroups).
For Java 8 (from the u131 release) and Java 9 you can set these experimental flags for the JVM:
-XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap
They will set the heap size based on your container cgroup memory limit, which is defined as "resources: limits" in the container definition part of the pod/deployment spec.
There can still be cases of the JVM's off-heap memory usage increasing in Java 8, so you might want to monitor that, but overall these experimental flags should handle that as well.
From Java 10 these experimental flags are the new default and are enabled/disabled by using this flag:
-XX:+UseContainerSupport
-XX:-UseContainerSupport
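A quick way to sanity-check which maximum heap the JVM actually picks inside a memory-limited container (with container support on by default, the estimate should be roughly a quarter of the limit unless you override it):
$ docker run -m 1GB openjdk:11 java -XshowSettings:vm -version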
Since you have limited the maximum memory usage of your pod to 9Gi, it will be terminated automatically when memory usage gets to 9Gi.
In GCloud App Engine you can specify a max CPU usage threshold, e.g. 0.6, meaning that if CPU reaches 0.6 of 100% (60%), a new instance will spawn.
I have not come across such a setting, but maybe a Kubernetes Pod/Deployment has a similar configuration parameter, i.e. if the RAM of the pod reaches 0.6 of 100%, terminate the pod. In your case that would be 60% of 9GB = ~5GB. Just some food for thought.

What are the recommended settings for the jvm memory of RestHeart?

The documentation does not specify the memory needed for the JVM, and neither does the post on performance.
RESTHeart runs on Java 8, and Java 8 uses the Metaspace memory model, which usually does not need any JVM memory tuning at all.
We at softinstigate.com usually run RESTHeart with Docker on the AWS ECS service.
We configure RESTHeart threading as follows:
# Number of I/O threads created for non-blocking tasks. at least 2. suggested value: core*2
io-threads: 2
# Number of threads created for blocking tasks (such as ones involving db access). suggested value: core*16
worker-threads: 8
On AWS ECS we set a soft memory limit of 1GB for the docker container running RESTHeart, and we have never had a memory issue (even under heavy load).
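For local testing outside ECS, a comparable setup can be approximated with Docker's soft and hard memory limits (assuming the softinstigate/restheart image; the values are illustrative and the MongoDB connection settings are omitted here):
$ docker run -d --memory-reservation=1g --memory=1536m softinstigate/restheart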

JMeter OutOfMemoryError

I am facing the OutOfMemoryError below, and JMeter stops working....
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid4412.hprof ...
Heap dump file created [591747609 bytes in 71.244 secs]
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
How can it be resolved?
My system has a very good specification: 16GB RAM, 2x quad-core processors, and a 146 GB HDD.
Can anyone help me?
Your heap dump shows that you are using JMeter's default setting of 512 MB, so even though you have 16GB, you are not using it.
Replace the default JVM options in jmeter.bat with the right sizes:
set HEAP=-server -Xms768m -Xmx768m -Xss128k
set NEW=-XX:NewSize=1024m -XX:MaxNewSize=1024m
Also look at:
http://wiki.apache.org/jmeter/JMeterFAQ#JMeter_keeps_getting_.22Out_of_Memory.22_errors.__What_can_I_do.3F
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
How much memory have you allocated for the JVM? Somewhere around 512 MB?
The configuration is
java -Xms<initial heap size> -Xmx<maximum heap size>
To avoid OutOfMemoryError, these steps should be followed:
Increase the Java Heap Size:
JMeter is a Java tool and runs in a JVM. To obtain maximum capability, we need to provide maximum resources to JMeter during execution. First, we need to increase the heap size (inside the JMeter bin directory, we find jmeter.bat/sh).
HEAP=-Xms512m -Xmx512m
This means the default allocated heap size is minimum 512MB, maximum 512MB. Configure it as per your own machine configuration. Keep in mind that the OS also needs some amount of memory, so don't allocate all of the physical RAM.
Run Tests in Non-GUI Mode:
JMeter is a Java GUI application, and the GUI mode is very resource intensive (CPU/RAM). It also has a non-GUI mode; if we run JMeter in non-GUI mode, it will consume fewer resources and we can run more threads.
Disable ALL Listeners during the Test Run. They are only for debugging and use them to design the desired script.
Listeners should be disabled during load tests. Enabling them causes additional overheads, which consume valuable resources that are needed by more important elements of your test.
Use Up-to-Date Software:
Java and JMeter should be kept updated.
Decide Which Metrics You Need to Store:
Storing request and response headers, assertion results and response data can consume a lot of memory! So it is wise to try not to store these values in JMeter unless absolutely necessary.
Tweak JVM:
The following JVM arguments in JMeter startup scripts can also be added or modified:
1. Adjust the young generation size:
NEW=-XX:NewSize=128m -XX:MaxNewSize=512m
This sets the initial and maximum size of the young generation, where new objects are allocated.
2.-server - this switches JVM into “server” mode with runtime parameters optimization. In this mode, JMeter starts more slowly, but the overall throughput will be higher.
3. -d64 - While using a 64-bit OS, using this parameter can explicitly tell JVM to run in 64-bit mode.
4. -XX:+UseConcMarkSweepGC - this forces the usage of the CMS garbage collector. It will lower the overall throughput but leads to much shorter CPU intensive garbage collections.
5. -XX:+DisableExplicitGC - this prevents applications from forcing expensive garbage collections and helps avoid unexpected pauses.
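Putting several of these together, a hedged example using the JVM_ARGS mechanism mentioned elsewhere in this thread (values are illustrative, and the CMS-related flags assume a Java 8 runtime):
$ JVM_ARGS="-server -Xms4g -Xmx4g -XX:MaxNewSize=1024m -XX:+UseConcMarkSweepGC -XX:+DisableExplicitGC" \
    ./jmeter.sh -n -t testplan.jmx -l results.jtl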
For better and more elaborated understanding, this blog about 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure is helpful.
A lot of these answers are out of date. Currently using JMeter v5.1.1 r1855137:
# Set var to increase available memory
JVM_ARGS="-Xms2048m -Xmx4096m"
# Run jmeter via sh script, e.g.:
/jmeter/5.1.1/libexec/bin/jmeter.sh -n -t testfile.jmx -l results.jtl -j log.txt
You can verify that the increase in memory is available via the log.txt file, which will show the following using the values above:
INFO o.a.j.JMeter: Max memory =3817865216
I also had this problem and it did not matter how much I adjusted the configuration java -Xms<initial heap size> -Xmx<maximum heap size>, as I always ran out of memory. In the end I found out that running JMeter in GUI mode (especially with listeners) causes a bottleneck. The best way to use JMeter, especially for extended testing or running multiple slave servers, is in non-GUI mode, which looks something like this:
jmeter -n -t testplan.jmx -r
Check out this link and read the section on how to do remote testing the 'proper way': http://wiki.apache.org/jmeter/JMeterFAQ#How_to_do_remote_testing_the_.27proper_way.27.3F
Hope this helps.
You should check that you're not using a View Results Tree listener during your tests with many users.
Check the JMeter best practices to avoid this kind of issue.
Regards
Though your server has 16 GB RAM, JMeter's default heap size is 512 MB. Increase the heap size by following the steps below:
1. Open the jmeter startup script using vi or another text editor
2. Search for "HEAP"
3. Change the minimum (-Xms) and maximum (-Xmx) heap values as required
4. Save and quit (:wq in vi)
5. Start JMeter with sh jmeter.sh, jmeter.bat, or java -jar ApacheJMeter.jar
You have to change the HEAP size in the jmeter.bat file: convert 2GB or 6GB (or whichever size you want) into MB, set it in "set HEAP=-Xms512m -Xmx512m", then save and relaunch the .bat file.
In JMeter version 3.x it is mentioned in $JMETER_HOME/bin/jmeter.sh (jmeter.bat):
## Environment variables:
## JVM_ARGS - optional java args, e.g. -Dprop=val
## e.g.
## JVM_ARGS="-Xms512m -Xmx512m" jmeter.sh etc.
so in your case you can set it as high as your needs require, for example:
JVM_ARGS="-Xms1024m -Xmx1024m"
Run JMeter in non-GUI mode, increase the heap size, and add as few listeners as possible.
To run JMeter in non-GUI mode, go to the bin directory and open a command prompt in that window. Use the following command: "jmeter.bat -n -t Test.jmx -l Test.csv". Here Test.jmx is the test file I need to run in non-GUI mode and Test.csv is the file in which I need my results stored.
To increase the size of the memory, use the setting HEAP="-Xms512m -Xmx2048m". Here 512m is the initial heap and 2048m is the maximum heap I want to allot to JMeter.
Hope this helps
Adjust the heap size as mentioned in the other answers and also take some best practices into account:
When running a test (not when validating it, of course), use the non-GUI mode.
Disable any heavy listener such as View Results Tree; instead use a Simple Data Writer and analyze your data afterwards.
These 2 items alone will already greatly improve your performance and reduce your heap usage.
