Cassandra eats too much memory

I have Cassandra 2.1 and following properties set:
MAX_HEAP_SIZE="5G"
HEAP_NEWSIZE="800M"
memtable_allocation_type: heap_buffers
top utility shows that cassandra eats 14.6G virtual memory:
KiB Mem: 16433148 total, 16276592 used, 156556 free, 22920 buffers
KiB Swap: 16777212 total, 0 used, 16777212 free. 9295960 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23120 cassand+ 20 0 14.653g 5.475g 29132 S 318.8 34.9 27:07.43 java
It also dies with various OutOfMemoryError exceptions when I am accessing it from Spark.
How can I prevent these OutOfMemoryErrors and reduce memory usage?

Cassandra does tend to use a lot of memory, but this can be controlled by tuning the GC (Garbage Collection) settings.
The GC parameters are contained in the bin/cassandra.in.sh file in the JAVA_OPTS variable.
You can apply these settings in JAVA_OPTS:
-XX:+UseConcMarkSweepGC
-XX:ParallelCMSThreads=1
-XX:+CMSIncrementalMode
-XX:+CMSIncrementalPacing
-XX:CMSIncrementalDutyCycleMin=0
-XX:CMSIncrementalDutyCycle=10
Alternatively, instead of specifying MAX_HEAP_SIZE and HEAP_NEWSIZE yourself, leave them unset and let Cassandra's startup script calculate them, since it picks sensible values based on the available RAM.
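If you do keep explicit settings, here is a minimal sketch of how the flags above might be appended to JAVA_OPTS in bin/cassandra.in.sh (some packages use conf/cassandra-env.sh instead; adjust to your install):

# Hedged example: append the CMS flags to the options Cassandra already builds
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:ParallelCMSThreads=1 \
    -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing \
    -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=10"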

Related

jvm in kubernetes/docker running out of memory faster than standalone

We are moving our JDK 8u131 JVM servers to a Kubernetes/Docker environment.
We have a few JVM servers running in standalone VMs and a few running in the Kubernetes/Docker environment; both types are in production.
Under the same load, the Kubernetes/Docker JVMs run out of memory, whereas the JVMs in the VMs run fine without issues.
We used the exact same JVM parameters in the VMs and in the containers.
Any ideas how to fix this issue?
Here are the options:
Environment:
JAVA_MEM_OPTS: -Xms2048M -Xmx2048M
-XX:MaxPermSize=256M -XX:+ExitOnOutOfMemoryError -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/heapdumps/${HOSTNAME}_$(date +%Y%m%d_%H_%M_%S).hprof
JAVA_GC_OPTS: -Dnogclogging=true -XX:+PrintGC -XX:+PrintGCDetails
{Heap before GC invocations=2880 (full 625):
 PSYoungGen      total 435712K, used 249344K
  eden space 249344K, 100% used
  from space 186368K, 0% used
  to   space 228352K, 0% used
 ParOldGen       total 1398272K, used 1397679K
  object space 1398272K, 99% used
 Metaspace       used 229431K, capacity 249792K, committed 249968K, reserved 1271808K
  class space    used 24598K, capacity 27501K, committed 27544K, reserved 1048576K
2018-12-07T15:43:21.420+0000: 124733.208: [... 1647023K->1646334K(1833984K), 1.2079201 secs] [Times: user=1.98 sys=0.01, real=1.21 secs]
Heap after GC invocations=2880 (full 625):
 PSYoungGen      total 435712K, used 248654K
  eden space 249344K, 99% used
  from space 186368K, 0% used
  to   space 228352K, 0% used
 ParOldGen       total 1398272K, used 1397679K
  object space 1398272K, 99% used
 Metaspace       used 229431K, capacity 249792K, committed 249968K, reserved 1271808K
  class space    used 24598K, capacity 27501K, committed 27544K, reserved 1048576K
}
{Heap before GC invocations=2881 (full 626):
 PSYoungGen      total 435712K, used 249344K
  eden space 249344K, 100% used
  from space 186368K, 0% used
  to   space 228352K, 0% used
 ParOldGen       total 1398272K, used 1397679K
  object space 1398272K, 99% used
 Metaspace       used 229431K, capacity 249792K, committed 249968K, reserved 1271808K
  class space    used 24598K, capacity 27501K, committed 27544K, reserved 1048576K
2018-12-07T15:43:22.632+0000: 124734.420:
SERVER RESTARTS HERE
Did you set your container memory resource requests and limits? JDK 8u131 doesn't know that it is running inside a container; it still sees the host VM's resources. That could be why your JVM inside the container is killed immediately.
There's a good article from redhat back in 2017.
https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
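As a hedged illustration of that suggestion (the numbers are assumptions, not tested values), the pod spec could pin memory requests/limits with enough headroom above the 2G heap for metaspace, threads, and native allocations:

# Hypothetical Kubernetes pod spec fragment; 3Gi is an assumed headroom figure
resources:
  requests:
    memory: "3Gi"
  limits:
    memory: "3Gi"

Separately, newer JDK 8 builds (8u191+) can size themselves from the cgroup limit via -XX:+UseContainerSupport; on 8u131 the experimental equivalents are -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap.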

Dask Scheduler Memory

Our dask-scheduler process seems to balloon in memory as time goes on and executions continue. Currently we see it using 5 GB of memory, which seems high since all the data is supposedly living on the worker nodes:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31172 atoz 20 0 5486944 5.071g 7100 S 23.8 65.0 92:38.64 dask-scheduler
When starting up the scheduler we would be below 1 GB of memory use. Restarting the network with a client.restart() doesn't seem to help; only killing the scheduler process itself and restarting it frees up the memory.
What is the expected usage of memory per single task executed?
Is the scheduler really only maintaining pointers to which worker contains the future's result?
----edit----
I think my main concern here is why a client.restart() doesn't seem to release the memory being used by the scheduler process. I'm obviously not expecting it to release all memory, but to get back to a base level. We are using client.map to execute our function across a list of different inputs. After executing, doing a client restart over and over and taking snapshots of our scheduler memory we see the following growth:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
27955 atoz 20 0 670556 507212 13536 R 43.7 6.2 1:23.61 dask-scheduler
27955 atoz 20 0 827308 663772 13536 S 1.7 8.1 16:25.85 dask-scheduler
27955 atoz 20 0 859652 696408 13536 S 4.0 8.5 19:18.04 dask-scheduler
27955 atoz 20 0 1087160 923912 13536 R 62.3 11.3 20:03.15 dask-scheduler
27955 atoz 20 0 1038904 875788 13536 S 3.7 10.7 23:57.07 dask-scheduler
27955 atoz 20 0 1441060 1.163g 12976 S 4.3 14.9 35:54.45 dask-scheduler
27955 atoz 20 0 1646204 1.358g 12976 S 4.3 17.4 37:05.86 dask-scheduler
27955 atoz 20 0 1597652 1.312g 12976 S 4.7 16.8 37:40.13 dask-scheduler
I guess I was just surprised that after doing a client.restart() we don't see the memory usage go back to some baseline.
----further edits----
Some more info about what we're running, since the suggestion was that, if we are passing in large data structures, we should send them directly to the workers.
We send a dictionary as the input for each task; when JSON-dumping the dicts, most are under 1000 characters.
---- even further edits: Reproduced issue ----
We reproduced this issue again today. I killed off the scheduler and restarted it; we had about 5.4 GB of free memory. We then ran the function pasted below across 69614 dictionary objects that hold some file-based information (all of our workers are mapped to the same NFS datastore and we are using Dask as a distributed file analysis system).
Here is the function (note: squarewheels4 is a homegrown lazy file extraction and analysis package, it uses Acora and libarchive as its base for getting files out of a compressed archive and indexing the file.)
def get_mrc_failures(file_dict):
    from squarewheels4.platforms.ucs.b_series import ChassisTechSupport
    from squarewheels4.files.ucs.managed.chassis import CIMCTechSupportFile
    import re

    dimm_info_re = re.compile(r"(?P<slot>[^\|]+)\|(?P<size>\d+)\|.*\|(?P<pid>\S+)")

    return_dict = file_dict
    return_dict["return_code"] = "NOT_FILLED_OUT"
    filename = "{file_path}{file_sha1}/{file_name}".format(**file_dict)

    try:
        sw = ChassisTechSupport(filename)
    except Exception as e:
        return_dict["return_code"] = "SW_LOAD_ERROR"
        return_dict["error_msg"] = str(e)
        return return_dict

    server_dict = {}

    cimcs = sw.getlist("CIMC*.tar.gz")
    if not cimcs:
        return_dict["return_code"] = "NO_CIMCS"
        return_dict["keys_list"] = str(sw.getlist("*"))
        return return_dict

    for cimc in cimcs:
        if not isinstance(cimc, CIMCTechSupportFile):
            continue
        cimc_id = cimc.number
        server_dict[cimc_id] = {}

        # Get MRC file
        try:
            mrc = cimc["*MrcOut.txt"]
        except KeyError:
            server_dict[cimc_id]["response_code"] = "NO_MRC"
            continue

        # see if our end of file marker is there, should look like:
        # --- END OF FILE (Done!
        whole_mrc = mrc.read().splitlines()
        last_10 = whole_mrc[-10:]
        eof_line = [l for l in last_10 if b"END OF FILE" in l]
        server_dict[cimc_id]["response_code"] = "EOF_FOUND" if eof_line else "EOF_MISSING"
        if eof_line:
            continue

        # get DIMM types
        hit_inventory_line = False
        dimm_info = []
        dimm_error_lines = []
        equals_count = 0
        for line in whole_mrc:
            # regex each line... sigh
            if b"DIMM Inventory" in line:
                hit_inventory_line = True
            if not hit_inventory_line:
                continue
            if hit_inventory_line and b"=========" in line:
                equals_count += 1
                if equals_count > 2:
                    break
                continue
            if equals_count < 2:
                continue
            # we're in the dimm section and not out of it yet
            line = str(line)
            reg = dimm_info_re.match(line)
            if not reg:
                # bad :/
                dimm_error_lines.append(line)
                continue
            dimm_info.append(reg.groupdict())

        server_dict[cimc_id]["dimm_info"] = dimm_info
        server_dict[cimc_id]["dimm_error_lines"] = dimm_error_lines

    return_dict["return_code"] = "COMPLETED"
    return_dict["server_dict"] = server_dict
    return return_dict
the futures are generated like:
futures = client.map(function_name, file_list)
Once in this state, my goal was to recover and have Dask release the memory it had allocated; here were my efforts:
before cancelling futures:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21914 atoz 20 0 6257840 4.883g 2324 S 0.0 62.6 121:21.93 dask-scheduler
atoz#atoz-sched:~$ free -h
total used free shared buff/cache available
Mem: 7.8G 7.1G 248M 9.9M 415M 383M
Swap: 8.0G 4.3G 3.7G
while cancelling futures:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21914 atoz 20 0 6258864 5.261g 5144 R 60.0 67.5 122:16.38 dask-scheduler
atoz#atoz-sched:~$ free -h
total used free shared buff/cache available
Mem: 7.8G 7.5G 176M 9.4M 126M 83M
Swap: 8.0G 4.1G 3.9G
after cancelling futures:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21914 atoz 20 0 6243760 5.217g 4920 S 0.0 66.9 123:13.80 dask-scheduler
atoz#atoz-sched:~$ free -h
total used free shared buff/cache available
Mem: 7.8G 7.5G 186M 9.4M 132M 96M
Swap: 8.0G 4.1G 3.9G
after doing a client.restart()
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21914 atoz 20 0 6177424 5.228g 4912 S 2.7 67.1 123:20.04 dask-scheduler
atoz#atoz-sched:~$ free -h
total used free shared buff/cache available
Mem: 7.8G 7.5G 196M 9.4M 136M 107M
Swap: 8.0G 4.0G 4.0G
Regardless of what I ran through the distributed system, my expectation was that after cancelling the futures it would be back to at least close to normal... and after doing a client.restart() we would definitely be near our normal baseline. Am I wrong here?
--- second repro ----
Reproduced the behavior (although not total memory exhaustion) using these steps:
Here's my worker function
def get_fault_list_v2(file_dict):
    import libarchive

    return_dict = file_dict
    filename = "{file_path}{file_sha1}/{file_name}".format(**file_dict)
    with libarchive.file_reader(filename) as arc:
        for e in arc:
            pn = e.pathname
    return return_dict
I ran that across 68617 iterations/files.
Before running, we saw this much memory being utilized:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12256 atoz 20 0 1345848 1.107g 7972 S 1.7 14.2 47:15.24 dask-scheduler
atoz#atoz-sched:~$ free -h
total used free shared buff/cache available
Mem: 7.8G 3.1G 162M 22M 4.5G 4.3G
Swap: 8.0G 3.8G 4.2G
After running we saw this much:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12256 atoz 20 0 2461004 2.133g 8024 S 1.3 27.4 66:41.46 dask-scheduler
After doing a client.restart we saw:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12256 atoz 20 0 2462756 2.134g 8144 S 6.6 27.4 66:42.61 dask-scheduler
Generally a task should take up less than a kilobyte on the scheduler. There are a few things you can trip up on that result in storing significantly more, the most common of which is including data within the task graph, which is shown below.
Data included directly in a task graph is stored on the scheduler. This commonly occurs when using large data directly in calls like submit:
Bad
x = np.random.random(1000000) # some large array
future = client.submit(np.add, 1, x) # x gets sent along with the task
Good
x = np.random.random(1000000) # some large array
x = client.scatter(x) # scatter data explicitly to worker, get future back
future = client.submit(np.add, 1, x) # only send along the future
This same principle exists when using other APIs as well. For more information, I recommend providing an MCVE; it's quite hard to help otherwise.
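To make that concrete for the client.map pattern used above, here is a minimal, hedged sketch; the scheduler address, process_item, list_of_dicts, and big_lookup names are hypothetical and only illustrate where a large shared object should be scattered rather than embedded in every task:

import numpy as np
from dask.distributed import Client

client = Client("tcp://scheduler:8786")          # assumption: your scheduler address
list_of_dicts = [{"x": i} for i in range(1000)]  # small per-task inputs are fine to pass directly
big_lookup = np.random.random(1_000_000)         # a large object needed by every task

def process_item(d, lookup):
    return d["x"] + lookup.sum()

# Bad: big_lookup is serialized into all 1000 tasks and kept in the scheduler's graph
futures = client.map(process_item, list_of_dicts, lookup=big_lookup)

# Good: scatter it once and pass the resulting future; only a small reference travels in the graph
[lookup_future] = client.scatter([big_lookup], broadcast=True)
futures = client.map(process_item, list_of_dicts, lookup=lookup_future)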

Why does docker see the container is hitting the rss limit?

I'm trying to understand why the limits have decided a task needs to be killed, and how it's doing the accounting. When my GCE Docker container kills a process, it shows something like:
Task in /404daacfcf6b9e55f71b3d7cac358f0dc921a2d580eed460c2826aea8e43f05e killed as a result of limit of /404daacfcf6b9e55f71b3d7cac358f0dc921a2d580eed460c2826aea8e43f05e
memory: usage 2097152kB, limit 2097152kB, failcnt 74571
memory+swap: usage 0kB, limit 18014398509481983kB, failcnt 0
kmem: usage 0kB, limit 18014398509481983kB, failcnt 0
Memory cgroup stats for /404daacfcf6b9e55f71b3d7cac358f0dc921a2d580eed460c2826aea8e43f05e: cache:368KB rss:2096784KB rss_huge:0KB mapped_file:0KB writeback:0KB inactive_anon:16KB active_anon:2097040KB inactive_file:60KB active_file:36KB unevictable:0KB
[ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ 4343] 0 4343 5440 65 15 0 0 bash
[ 4421] 0 4421 265895 6702 77 0 0 npm
[ 4422] 0 4422 12446 2988 28 0 0 gunicorn
[ 4557] 0 4557 739241 346035 1048 0 0 gunicorn
[ 4560] 0 4560 1086 24 8 0 0 sh
[ 4561] 0 4561 5466 103 15 0 0 bash
[14594] 0 14594 387558 168790 672 0 0 node
Memory cgroup out of memory: Kill process 4557 (gunicorn) score 662 or sacrifice child
Killed process 4557 (gunicorn) total-vm:2956964kB, anon-rss:1384140kB, file-rss:0kB
Supposedly the memory hit a 2GB usage limit, and something needs to die. According to the cgroup stats, I appear to have 2GB of usage in active_anon and rss.
When I look at the table of process stats, I don't see where the 2GB is:
For rss, I see the two major processes 346035 + 168790 = 514MB?
For total_vm, I see three major processes 265895 + 739241 + 387558 = 1.4GB?
But when it decides to kill the gunicorn process, it says it had 3GB of Total VM and 1.4GB of Anon RSS. I don't see how this follows from the above numbers at all...
For most of its life, according to top, the gunicorn process appears to hum along with 555m RES and 2131m VIRT, and 22% MEM * 2.5GB box = 550MB of memory usage. (I haven't yet been able to time it properly to peek at the top values at the moment it dies...)
Can someone help me understand this?
Under what accounting, do these sum to 2GB of usage? (virtual? rss? something else?)
Is there something else besides top/ps I should use to track how much memory a process is using for the purposes of docker's killing it?
From what I know, total_vm and rss are counted in pages of 4 kB (see https://stackoverflow.com/a/43611576), not in kB.
So for PID 4557:
rss = 346035 pages, which means anon-rss: 1384140 kB (= 346035 * 4 kB)
total_vm = 739241 pages, which means total-vm: 2956964 kB (= 739241 * 4 kB)
This explains your memory usage.
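A quick sanity check of that conversion (assuming the usual 4 kB page size on x86-64):

# Values taken from the "[ pid ] ..." table above for pid 4557
PAGE_KB = 4
rss_pages = 346035
total_vm_pages = 739241

print(rss_pages * PAGE_KB)       # 1384140 -> matches "anon-rss:1384140kB"
print(total_vm_pages * PAGE_KB)  # 2956964 -> matches "total-vm:2956964kB"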

How to speed up sonarqube analysis job?

I have a Java-based application with a huge amount of source code (~1M lines). I am using Jenkins with sonar-runner-2.4 to run the analysis with code coverage and test-case counts. I upgraded the SonarQube server from 5.4 to 6.3.1. Before the upgrade, this job took 9 hours to complete the whole analysis (still a very long time, but acceptable); after upgrading to SonarQube 6.3.1, the same job takes 13 hours for the same analysis.
How do I improve the analysis time, at least back to my earlier 9 hours?
EDIT
Here are my JAVA_OPTS for the SonarQube 6.3.1 instance:
sonar.web.javaOpts=-Xmx6G -Xms2G -XX:MaxPermSize=1G -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true
Available Hardware :
$lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 1596.000
BogoMIPS: 3999.44
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 4096K
NUMA node0 CPU(s): 0-3
NUMA node1 CPU(s): 4-7
Available Memory :
$free -m
total used free shared buff/cache available
Mem: 128714 58945 66232 430 3535 68298
Swap: 32767 957 31810
sonar-project.properties for the long running job:
sonar-project.properties
As you haven't really given many details, I can't really give many details in the answer, but the simple answer is that you have to make the scan do less work.
Look at your codebase. Is your scan processing generated classes? Is it scanning test classes? Is it scanning classes that have little real business logic? If you answer "yes" to any of those, consider excluding those classes.
Look at the SonarQube plugins you're using. Are you running every possible plugin you can run? Are there some heuristics you don't need to run, or perhaps you could run less frequently?
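As a hedged sketch of what such exclusions might look like in sonar-project.properties (the patterns below are placeholders; point them at your own generated and test code):

# Hypothetical exclusions; adjust the patterns to your project layout
sonar.exclusions=**/generated/**,**/target/**
sonar.coverage.exclusions=**/dto/**
sonar.test.exclusions=src/test/**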

Why does my garbage collection log show 3.8GB as the max available heap size while I have allocated 4GB as the max heap size?

I have a 64-bit hotspot JDK version 1.7.0 installed on a 64-bit RHEL 6 machine. I use the following JVM options for my tomcat application.
CATALINA_OPTS="${CATALINA_OPTS} -Dfile.encoding=UTF8 -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=EST5EDT"
# General Heap sizing
CATALINA_OPTS="${CATALINA_OPTS} -Xms4096m -Xmx4096m -XX:NewSize=2048m -XX:MaxNewSize=2048m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+UseCompressedOops -XX:+DisableExplicitGC"
# Enable the CMS GC policy
CATALINA_OPTS="${CATALINA_OPTS} -XX:+UseConcMarkSweepGC -XX:CMSWaitDuration=15000 -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:+CMSConcurrentMTEnabled -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled"
# Verbose Garbage Collection Logging
CURRENT_DATE=`date +%Y%m%d%H%M%S`
CATALINA_OPTS="${CATALINA_OPTS} -verbose:gc -XX:+PrintGCDetails -Xloggc:${CATALINA_BASE}/logs/gc-${CURRENT_DATE}.log -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution"
When I have a Garbage Collection analysis, the GC logs show a maximum available heap of only 3.8GB instead of 4GB allocated to the JVM. Why is that?
New Generation (2048M) consists of 80% Eden (1638.4M) and two Survivor Spaces (10% or 204.8M each):
Heap
par new generation total 1887488K, used 134226K [0x00000006fae00000, 0x000000077ae00000, 0x000000077ae00000)
eden space 1677824K, 8% used [0x00000006fae00000, 0x00000007031148e0, 0x0000000761480000)
from space 209664K, 0% used [0x0000000761480000, 0x0000000761480000, 0x000000076e140000)
to space 209664K, 0% used [0x000000076e140000, 0x000000076e140000, 0x000000077ae00000)
concurrent mark-sweep generation total 2097152K, used 242K [0x000000077ae00000, 0x00000007fae00000, 0x00000007fae00000)
At any time, one of the survivor spaces is empty (see Generations).
So, the useful heap size is 1638.4 + 204.8 + 2048 = 3891.2 MB
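The same arithmetic can be checked directly against the sizes printed in the log (values in KB):

# Usable heap = Eden + one survivor space + old generation
eden = 1677824
survivor = 209664      # only one of the two survivor spaces holds data at a time
old_gen = 2097152

usable_kb = eden + survivor + old_gen
print(usable_kb)                  # 3984640 KB
print(usable_kb / 1024 / 1024)    # ~3.8 GB, which is what the GC analysis reports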
