When Cassandra is running, almost all RAM is consumed - why?

I have CentOS 6.8, Cassandra 3.9, and 32 GB of RAM. Once Cassandra is started, it begins consuming memory, and the 'cached' value keeps growing as I query from CQLSH or Apache Spark. In the process, very little memory remains for other work such as cron jobs.
Here are some details from my system
free -m
             total       used       free     shared    buffers     cached
Mem:         32240      32003        237          0         41      24010
-/+ buffers/cache:        7950      24290
Swap:         2047         25       2022
And here is the output of the top -M command:
top - 08:54:39 up 5 days, 16:24, 4 users, load average: 1.22, 1.20, 1.29
Tasks: 205 total, 2 running, 203 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.5%us, 1.2%sy, 19.8%ni, 75.3%id, 0.1%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 31.485G total, 31.271G used, 219.410M free, 42.289M buffers
Swap: 2047.996M total, 25.867M used, 2022.129M free, 23.461G cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14313 cassandr 20 0 595g 28g 22g S 144.5 91.3 300:56.34 java
You can see that only about 220 MB is free and 23.46 GB is cached.
My question is: how can I configure Cassandra so that it limits the 'cached' memory to a certain value and leaves more RAM available for other processes?
Thanks in advance.

In Linux, cached memory such as your 23 GB is generally just fine. This memory is used as filesystem cache and so on - not by Cassandra itself. Linux systems tend to use all available memory for caching, which speeds things up in many ways by avoiding disk reads.
You can still use the cached memory - just start processes and use your RAM; the kernel will free the cache immediately.
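If you want to verify this yourself, here is a quick diagnostic sketch (the drop_caches step needs root and is for illustration only, not something you should need to run routinely):
# The "-/+ buffers/cache" line of free already shows memory as applications see it:
free -m
# Purely to demonstrate that the cache is reclaimable, drop it and watch
# "cached" shrink while "free" grows:
sync && echo 3 > /proc/sys/vm/drop_caches
free -m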

You can set the heap sizes in cassandra-env.sh in the conf folder. This article should help: http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
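For example, the heap can be pinned by setting these two variables in conf/cassandra-env.sh instead of letting the script derive them from system RAM (the sizes below are purely illustrative, and note that the JVM heap is separate from the OS page cache discussed above):
# conf/cassandra-env.sh
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"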

Related

Already using Docker limits, why is the server CPU load average still high?

CPU: 8
Memory: 64G
When deploying the application with Docker Swarm, the CPU and memory are already limited:
docker service update java --reserve-cpu 2 --limit-cpu 2 --limit-memory 4G --reserve-memory 4G
Checking the server with top:
top - 11:09:26 up 889 days, 19:31, 16 users, load average: 72.50, 78.28, 55.54
Tasks: 271 total, 2 running, 183 sleeping, 0 stopped, 0 zombie
%Cpu(s): 36.7 us, 7.2 sy, 0.0 ni, 52.4 id, 0.0 wa, 0.0 hi, 3.7 si, 0.0 st
KiB Mem : 65916324 total, 27546636 free, 8884904 used, 29484784 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 56442876 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15566 root 20 0 12.563g 2.871g 17580 S 199.0 4.6 42:06.92 java
45076 root 20 0 19.018g 337692 16680 S 167.2 0.5 0:07.38 java
14692 root 20 0 1941688 122152 50868 S 4.0 0.2 617:42.71 dockerd
There is no other application running, so why is the load average so high?
Could someone help me check this? Thanks.

Understanding Docker container CPU usage

docker stats shows that the CPU usage is very high, but the top output shows that 88.3% of the CPU is idle. Inside the container is a Java httpthrift service.
docker stats :
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8a0488xxxx5 540.9% 41.99 GiB / 44 GiB 95.43% 0 B / 0 B 0 B / 35.2 MB 286
top output :
top - 07:56:58 up 2 days, 22:29, 0 users, load average: 2.88, 3.01, 3.05
Tasks: 13 total, 1 running, 12 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.2 us, 2.7 sy, 0.0 ni, 88.3 id, 0.0 wa, 0.0 hi, 0.9 si, 0.0 st
KiB Mem: 65959920 total, 47983628 used, 17976292 free, 357632 buffers
KiB Swap: 7999484 total, 0 used, 7999484 free. 2788868 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8823 root 20 0 58.950g 0.041t 21080 S 540.9 66.5 16716:32 java
How can I reduce the CPU usage and bring it under 100%?
According to the top man page:
When operating in Solaris mode (`I' toggled Off), a task's cpu usage will be divided by the total number of CPUs. After issuing this command, you'll be told the new state of this toggle.
So by pressing the key I when using top in interactive mode, you will switch to the Solaris mode and the CPU usage will be divided by the total number of CPUs (or cores).
P.S.: This option is not available on all versions of top.
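As a rough illustration of the arithmetic (assuming an 8-core host, which the question does not state): in the default Irix mode a single fully busy process can show up to 800%, so the 540.9% above would read as roughly 67.6% in Solaris mode:
echo "scale=1; 540.9 / 8" | bc    # prints 67.6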

Spark: Not enough space to cache RDD in container while there is still a lot of total storage memory

I have a 30-node cluster; each node has 32 cores and 240 GB of memory (AWS cr1.8xlarge instances). I have the following configuration:
--driver-memory 200g --driver-cores 30 --executor-memory 70g --executor-cores 8 --num-executors 90
I can see from the job tracker that I still have a lot of total storage memory left, but in one of the containers I got the following message saying Storage limit = 28.3 GB. I am wondering where this 28.3 GB comes from. My memoryFraction for storage is 0.45.
And how do I solve this 'Not enough space to cache rdd' issue? Should I use more partitions or change the default parallelism ... since I still have a lot of total storage memory unused. Thanks!
15/12/05 22:39:36 WARN storage.MemoryStore: Not enough space to cache rdd_31_310 in memory! (computed 1326.6 MB so far)
15/12/05 22:39:36 INFO storage.MemoryStore: Memory use = 9.6 GB (blocks) + 18.1 GB (scratch space shared across 4 tasks(s)) = 27.7 GB. Storage limit = 28.3 GB.
15/12/05 22:39:36 WARN storage.MemoryStore: Not enough space to cache rdd_31_136 in memory! (computed 1835.8 MB so far)
15/12/05 22:39:36 INFO storage.MemoryStore: Memory use = 9.6 GB (blocks) + 18.1 GB (scratch space shared across 5 tasks(s)) = 27.7 GB. Storage limit = 28.3 GB.
15/12/05 22:39:36 INFO executor.Executor: Finished task 136.0 in stage 12.0 (TID 85168). 1272 bytes result sent to driver
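For what it's worth, a back-of-the-envelope guess at where the 28.3 GB could come from, assuming the legacy (pre-unified) Spark memory manager with its default spark.storage.safetyFraction of 0.9 (not stated in the question): the storage limit is roughly executor heap * memoryFraction * safetyFraction.
echo "70 * 0.45 * 0.9" | bc    # 28.35, i.e. roughly the 28.3 GB in the log
The real limit is computed from the JVM's reported max heap, which is slightly below the requested 70g, which would explain 28.3 rather than 28.35.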

How to improve Nginx, Rails, Passenger memory usage?

I currently have a Rails app set up on a DigitalOcean VPS (1 GB RAM) through Cloud 66. The problem is that the VPS's memory fills up with Passenger processes.
The output of passenger-status:
# passenger-status
Version : 4.0.45
Date : 2014-09-23 09:04:37 +0000
Instance: 1762
----------- General information -----------
Max pool size : 2
Processes : 2
Requests in top-level queue : 0
----------- Application groups -----------
/var/deploy/cityspotters/web_head/current#default:
App root: /var/deploy/cityspotters/web_head/current
Requests in queue: 0
* PID: 7675 Sessions: 0 Processed: 599 Uptime: 39m 35s
CPU: 1% Memory : 151M Last used: 1m 10s ago
* PID: 7686 Sessions: 0 Processed: 477 Uptime: 39m 34s
CPU: 1% Memory : 115M Last used: 10s ago
The max_pool_size seems to be configured correctly.
The output of passenger-memory-stats:
# passenger-memory-stats
Version: 4.0.45
Date : 2014-09-23 09:10:41 +0000
------------- Apache processes -------------
*** WARNING: The Apache executable cannot be found.
Please set the APXS2 environment variable to your 'apxs2' executable's filename, or set the HTTPD environment variable to your 'httpd' or 'apache2' executable's filename.
--------- Nginx processes ---------
PID PPID VMSize Private Name
-----------------------------------
1762 1 51.8 MB 0.4 MB nginx: master process /opt/nginx/sbin/nginx
7616 1762 53.0 MB 1.8 MB nginx: worker process
### Processes: 2
### Total private dirty RSS: 2.22 MB
----- Passenger processes -----
PID VMSize Private Name
-------------------------------
7597 218.3 MB 0.3 MB PassengerWatchdog
7600 565.7 MB 1.1 MB PassengerHelperAgent
7606 230.8 MB 1.0 MB PassengerLoggingAgent
7675 652.0 MB 151.7 MB Passenger RackApp: /var/deploy/cityspotters/web_head/current
7686 652.1 MB 116.7 MB Passenger RackApp: /var/deploy/cityspotters/web_head/current
### Processes: 5
### Total private dirty RSS: 270.82 MB
.. 2 Passenger RackApp processes, OK.
But when I use the htop command (screenshot not included here), there seem to be a lot of Passenger RackApp processes. We're also running Sidekiq with the default configuration.
New Relic Server reports the memory usage as well (chart not included here).
I tried tuning Passenger settings, adding a load balancer and another server but honestly don't know what to do from here. How can I find out what's causing so much memory usage?
Update: I had to restart nginx because of some changes and it seemed to free quite a lot of memory.
Press Shift-H to hide threads in htop. Those aren't processes but threads within a process. The key column is RSS: you have two Passenger processes at 209 MB and 215 MB and one Sidekiq process at 154 MB.
Short answer: this is completely normal memory usage for a Rails app. 1 GB is simply a little small if you want multiple processes of each. I'd cut Passenger down to one process.
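For reference, a sketch of what that could look like with Passenger for Nginx (passenger_max_pool_size is the standard directive; the value is only an illustration):
# in the http block of nginx.conf
passenger_max_pool_size 1;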
Does your application create child processes? If so, then it's likely that those extra "Passenger RackApp" processes are not actually processes created by Phusion Passenger, but are in fact processes created by your own app. You should double check whether your app spawns child processes and whether you clean up those child processes correctly. Also double check whether any libraries you use, also properly clean up their child processes.
I see that you're using Sidekiq and you've configured 25 Sidekiq processes. Those are also eating a lot of memory. A Sidekiq process eats just as much memory as a Passenger RackApp process, because both of them load your entire application (including Rails) in memory. Try reducing the number of Sidekiq processes.
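A minimal sketch of scaling Sidekiq down, assuming the workers are started with the standard sidekiq CLI (the numbers are illustrative):
# one Sidekiq process with modest concurrency instead of many of them;
# every additional process loads the whole Rails app into memory again
bundle exec sidekiq -c 5 -e production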

Memory usage on Debian Linux

I am trying to figure out what is wrong with my Debian server. I am getting warnings about not having enough free memory, and top (as you can see below) says that 1.8 GB is used, but I am unable to find which application is responsible for it. Only Tomcat is running, and according to top it consumes ~25%, i.e. about 530 MB. That leaves more than 1 GB which I cannot account for!
Tasks: 54 total, 1 running, 53 sleeping, 0 stopped, 0 zombie
Cpu(s):100.0%us, 0.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2150400k total, 1877728k used, 272672k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3271 root 18 0 1559m 530m 12m S 0 25.2 1:44.31 java
1568 mysql 15 0 270m 71m 7332 S 0 3.4 0:50.79 mysqld
(Full top output here)
Linux systems always try to use as much RAM as is available for various purposes, like caching executables or even just page reads from disk. That's what you bought your fast RAM for, after all.
You can find out more about your system by doing a
cat /proc/meminfo
More info in this helpful blog post
If you find that a lot of it is used for cache, then you don't have to worry about the system. If individual processes warn you about memory issues, you'll have to check their settings for any memory-limiting options. Many server processes have those, such as PHP- or Java-based processes.
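For example, the fields that usually matter here (MemAvailable only exists on newer kernels):
grep -E 'MemTotal|MemFree|MemAvailable|Buffers|^Cached' /proc/meminfo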
Questions of this nature are also probably more at home at https://serverfault.com/
As I see it, your memory output shows NO swap space:
Swap: 0k total, 0k used, 0k free, 0k cached
Either there is no swap partition available, or the swap space is not enabled.
You can manually make a swapfile and enable it as the active swap; see "howto make swap" and the sketch below.
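A minimal sketch of that, with an illustrative size and path (needs root):
# create a 1 GB swapfile, secure it, format it and enable it
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s    # verify the swap is now active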
To test your real usage, reboot the machine, check the amount used, and retest after 1 hour.
Some processes are memory hogs, like apache or ntop.
Refer to: check memory, display sorted memory usage (see the example below).
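For instance, one common way to display sorted memory usage per process (largest resident set first), plus the interactive equivalent in top:
ps aux --sort=-rss | head -n 10
# inside top, press Shift+M to sort by %MEM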
