Decomposing Passenger memory stats - ruby-on-rails

I've been looking for info on Phusion Passenger's memory stats (passenger-memory-stats) and haven't quite found what I'm looking for. For example, below are the stats for my latest app.
---- Passenger processes -----
PID VMSize Private Name
------------------------------
4238 22.9 MB 0.3 MB PassengerWatchdog
4241 31.7 MB 0.4 MB PassengerHelperAgent
4243 42.5 MB 6.7 MB Passenger spawn server
4246 72.9 MB 0.7 MB PassengerLoggingAgent
4403 273.3 MB 31.4 MB Passenger ApplicationSpawner: /var/www/TheApp
4556 281.0 MB 34.9 MB Rack: /var/www/TheApp
### Processes: 6
### Total private dirty RSS: 74.42 MB
What I'd like to grasp in more detail:
What is the relation between VMSize and Private, and what should I look for in each?
As I understand it now, the private dirty RSS is the real memory usage; if so, do I have to care about the virtual memory sizes at all?
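On Linux you can inspect these numbers directly in /proc/<pid>/smaps, which is where the private dirty figures come from; a minimal sketch using the Rack PID from the stats above:

# Sum the kernel's Private_Dirty entries for PID 4556
grep Private_Dirty /proc/4556/smaps | awk '{ sum += $2 } END { print sum " kB" }'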

Related

Stardog on a Linux Ubuntu VM - memory capacity

We are experiencing performance problems with Stardog requests (about 500,000 ms minimum to get an answer). We followed the Debian-based systems installation described in the Stardog documentation and have a stardog service installed on our Ubuntu VM.
Azure machine: Standard D4s v3 (4 virtual processors, 16 GB memory)
Total amount of memory of the VM = 16 GiB
We tested several JVM memory settings:
-Xms4g -Xmx4g -XX:MaxDirectMemorySize=8g
-Xms8g -Xmx8g -XX:MaxDirectMemorySize=8g
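For reference, these flags are normally handed to the server via the STARDOG_SERVER_JAVA_ARGS environment variable described in the Stardog docs; where you set it depends on the install (e.g. the systemd unit or a shell profile):

export STARDOG_SERVER_JAVA_ARGS="-Xms8g -Xmx8g -XX:MaxDirectMemorySize=8g"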
We also tried upgrading to a larger VM, without success:
Azure: Standard D8s v3 - 8 virtual processors, 32 GB memory
Running systemctl status stardog on the 32 GiB machine gives:
stardog.service - Stardog Knowledge Graph
Loaded: loaded (/etc/systemd/system/stardog.service; enabled; vendor prese>
Active: active (running) since Tue 2023-01-17 15:41:40 UTC; 1min 35s ago
Docs: https://www.stardog.com/
Process: 797 ExecStart=/opt/stardog/stardog-server.sh start (code=exited, s>
Main PID: 969 (java)
Tasks: 76 (limit: 38516)
Memory: 1.9G
CGroup: /system.slice/stardog.service
└─969 java -Dstardog.home=/var/opt/stardog/ -Xmx8g -Xms8g XX:MaxD
The output of stardog-admin server status:
Access Log Enabled : true
Access Log Type : text
Audit Log Enabled : true
Audit Log Type : text
Backup Storage Directory : .backup
CPU Load : 1.88 %
Connection Timeout : 10m
Export Storage Directory : .exports
Memory Heap : 305M (Max: 8.0G)
Memory Mode : DEFAULT{Starrocks.block_cache=20, Starrocks.dict_block_cache=10, Native.starrocks=70, Heap.dict_value=50, Starrocks.txn_block_cache=5, Heap.dict_index=50, Starrocks.untracked_memory=20, Starrocks.memtable=40, Starrocks.buffer_pool=5, Native.query=30}
Memory Query Blocks : 0B (Max: 5.7G)
Memory RSS : 4.3G
Named Graph Security : false
Platform Arch : amd64
Platform OS : Linux 5.15.0-1031-azure, Java 1.8.0_352
Query All Graphs : false
Query Timeout : 1h
Security Disabled : false
Stardog Home : /var/opt/stardog
Stardog Version : 8.1.1
Strict Parsing : true
Uptime : 2 hours 18 minutes 51 seconds
Given that only the Stardog server is installed on this VM, with an 8 GB JVM heap and 20 GB of direct memory for Java, is it normal to have 1.9 G in memory (no query in progress) and 4.1 G (while a query is in progress)?
"databases.xxxx.queries.latency": {
"count": 7,
"max": 471.44218324400003,
"mean": 0.049260736982859085,
"min": 0.031328932000000004,
"p50": 0.048930366,
"p75": 0.048930366,
"p95": 0.048930366,
"p98": 0.048930366,
"p99": 0.048930366,
"p999": 0.048930366,
"stddev": 0.3961819852037625,
"m15_rate": 0.0016325388459502614,
"m1_rate": 0.0000015369791915358426,
"m5_rate": 0.0006317127755974434,
"mean_rate": 0.0032760240366080024,
"duration_units": "seconds",
"rate_units": "calls/second"
Of all your queries, the slowest took about 8 minutes to complete (max latency of roughly 471 seconds in the metrics above) while the others completed very quickly. Best to identify the slow query and profile it.
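A minimal sketch of that with the Stardog CLI (the database name and query file are placeholders, and exact arguments can vary by version):

# list running queries to find the long one, then inspect its plan
stardog-admin query list
stardog query explain mydb slow-query.sparql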

What are the memory requirements for OrientDB / Can I run it on an EC2 micro?

I would like to run OrientDB on an EC2 micro (free tier) instance. I am unable to find official OrientDB documentation that gives memory requirements; however, I found this question that says 512 MB should be fine. I am running an EC2 micro instance, which has 1 GB RAM. However, when I try to run OrientDB I get the JRE error shown below. My initial thought was that I needed to increase the JRE memory using -Xmx, but I guess it would be the shell script that would do this. Has anyone successfully run OrientDB on an EC2 micro instance, or run into this problem?
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007a04a0000, 1431699456, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 1431699456 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/jvm-14728/hs_error.log
Here are the contents of the error log:
OS:Linux
uname:Linux 4.14.47-56.37.amzn1.x86_64 #1 SMP Wed Jun 6 18:49:01 UTC 2018 x86_64
libc:glibc 2.17 NPTL 2.17
rlimit: STACK 8192k, CORE 0k, NPROC 3867, NOFILE 4096, AS infinity
load average:0.00 0.00 0.00
/proc/meminfo:
MemTotal: 1011168 kB
MemFree: 322852 kB
MemAvailable: 822144 kB
Buffers: 83188 kB
Cached: 523056 kB
SwapCached: 0 kB
Active: 254680 kB
Inactive: 369952 kB
Active(anon): 18404 kB
Inactive(anon): 48 kB
Active(file): 236276 kB
Inactive(file): 369904 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 36 kB
Writeback: 0 kB
AnonPages: 18376 kB
Mapped: 31660 kB
Shmem: 56 kB
Slab: 51040 kB
SReclaimable: 41600 kB
SUnreclaim: 9440 kB
KernelStack: 1564 kB
PageTables: 2592 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 505584 kB
Committed_AS: 834340 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 49152 kB
DirectMap2M: 999424 kB
CPU:total 1 (initial active 1) (1 cores per cpu, 1 threads per core) family 6 model 63 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, avx2, aes, erms, tsc
/proc/cpuinfo:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
stepping : 2
microcode : 0x3c
cpu MHz : 2400.043
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips : 4800.05
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
Memory: 4k page, physical 1011168k(322728k free), swap 0k(0k free)
vm_info: OpenJDK 64-Bit Server VM (24.181-b00) for linux-amd64 JRE (1.7.0_181-b00), built on Jun 5 2018 20:36:03 by "mockbuild" with gcc 4.8.5 20150623 (Red Hat 4.8.5-28)
time: Mon Aug 20 20:51:08 2018
elapsed time: 0 seconds
OrientDB can easily run in 512 MB, though your performance and throughput will not be as high. In OrientDB 3.0.x you can use the environment variable ORIENTDB_OPTS_MEMORY to set it. On the command line I can, for example, run:
cd $ORIENTDB_HOME/bin
export ORIENTDB_OPTS_MEMORY="-Xmx512m"
./server.sh
(where $ORIENTDB_HOME is where you have OrientDB installed) and I'm running with 512 MB of memory.
As an aside, if you look in $ORIENTDB_HOME/bin/server.sh you'll see there is even code to check whether the server is running on a Raspberry Pi, and those range from 256 MB to 1 GB of RAM, so the t2.micro will run just fine.
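To double-check that the cap took effect, a generic JVM sanity check (nothing OrientDB-specific) is to look for the flag on the running java command line:

ps aux | grep [j]ava    # -Xmx512m should appear among the java arguments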

When Cassandra is running almost all RAM is consumed, why?

I have CentOS 6.8, Cassandra 3.9, and 32 GB RAM. Once Cassandra starts, it begins consuming memory, and the 'Cached' value keeps climbing as I query from CQLSH or Apache Spark; in the process, very little memory remains for other work such as cron jobs.
Here are some details from my system
free -m
total used free shared buffers cached
Mem: 32240 32003 237 0 41 24010
-/+ buffers/cache: 7950 24290
Swap: 2047 25 2022
And here is the output of top -M command
top - 08:54:39 up 5 days, 16:24, 4 users, load average: 1.22, 1.20, 1.29
Tasks: 205 total, 2 running, 203 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.5%us, 1.2%sy, 19.8%ni, 75.3%id, 0.1%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 31.485G total, 31.271G used, 219.410M free, 42.289M buffers
Swap: 2047.996M total, 25.867M used, 2022.129M free, 23.461G cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14313 cassandr 20 0 595g 28g 22g S 144.5 91.3 300:56.34 java
You can see only 220 MB is left and 23.46 GB is cached.
My question is how to configure Cassandra so that it only uses 'cached' memory up to a certain value and leaves more RAM available for other processes.
Thanks in advance.
In Linux in general, cached memory like your 23 GB is perfectly fine. This memory is used for the filesystem cache and the like, not by Cassandra itself; Linux systems tend to use all available memory.
This speeds up your system in many ways, by avoiding disk reads.
You can still use the cached memory: just start processes and use your RAM, and the kernel will free it immediately.
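You can confirm this from the free -m output above: the '-/+ buffers/cache' row simply adds the reclaimable cache and buffers back onto 'free':

echo $((237 + 41 + 24010))    # free + buffers + cached = 24288 MB, matching (up to rounding) the 24290 MB shown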
You can set the heap sizes in cassandra-env.sh under the conf folder. This article should help: http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
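As a rough sketch (variable names as in the stock cassandra-env.sh; the values here are placeholders, not recommendations), you would uncomment and set the heap explicitly instead of letting it be auto-sized:

# conf/cassandra-env.sh
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"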

Spark: Not enough space to cache rdd in container while still a lot of total storage memory

I have a 30 node cluster, each node has 32 core, 240 G memory (AWS cr1.8xlarge instance). I have the following configurations:
--driver-memory 200g --driver-cores 30 --executor-memory 70g --executor-cores 8 --num-executors 90
I can see from the job tracker that I still have a lot of total storage memory left, but in one of the containers I got the following message saying Storage limit = 28.3 GB. I am wondering where this 28.3 GB comes from (see the sketch after the log below); my memoryFraction for storage is 0.45.
And how do I solve this 'Not enough space to cache rdd' issue? Should I use more partitions or change the default parallelism ... since I still have a lot of total storage memory unused. Thanks!
15/12/05 22:39:36 WARN storage.MemoryStore: Not enough space to cache rdd_31_310 in memory! (computed 1326.6 MB so far)
15/12/05 22:39:36 INFO storage.MemoryStore: Memory use = 9.6 GB (blocks) + 18.1 GB (scratch space shared across 4 tasks(s)) = 27.7 GB. Storage limit = 28.3 GB.
15/12/05 22:39:36 WARN storage.MemoryStore: Not enough space to cache rdd_31_136 in memory! (computed 1835.8 MB so far)
15/12/05 22:39:36 INFO storage.MemoryStore: Memory use = 9.6 GB (blocks) + 18.1 GB (scratch space shared across 5 tasks(s)) = 27.7 GB. Storage limit = 28.3 GB.
15/12/05 22:39:36 INFO executor.Executor: Finished task 136.0 in stage 12.0 (TID 85168). 1272 bytes result sent to driver
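For what it's worth, 28.3 GB is consistent with the per-executor storage pool under Spark 1.x's legacy memory manager: executor memory times spark.storage.memoryFraction times spark.storage.safetyFraction, assuming the default safetyFraction of 0.9:

echo "70 * 0.45 * 0.9" | bc    # = 28.35 GB, matching the reported storage limit

Note that this limit is per executor, while the 'total storage memory' in the UI is summed across all 90 executors, which is why plenty can look free overall while one container is full.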

How to improve Nginx, Rails, Passenger memory usage?

I currently have a Rails app set up on a Digital Ocean VPS (1 GB RAM) through Cloud 66. The problem is that the VPS's memory fills up with Passenger processes.
The output of passenger-status:
# passenger-status
Version : 4.0.45
Date : 2014-09-23 09:04:37 +0000
Instance: 1762
----------- General information -----------
Max pool size : 2
Processes : 2
Requests in top-level queue : 0
----------- Application groups -----------
/var/deploy/cityspotters/web_head/current#default:
App root: /var/deploy/cityspotters/web_head/current
Requests in queue: 0
* PID: 7675 Sessions: 0 Processed: 599 Uptime: 39m 35s
CPU: 1% Memory : 151M Last used: 1m 10s ago
* PID: 7686 Sessions: 0 Processed: 477 Uptime: 39m 34s
CPU: 1% Memory : 115M Last used: 10s ago
The max_pool_size seems to be configured correctly.
The output of passenger-memory-stats:
# passenger-memory-stats
Version: 4.0.45
Date : 2014-09-23 09:10:41 +0000
------------- Apache processes -------------
*** WARNING: The Apache executable cannot be found.
Please set the APXS2 environment variable to your 'apxs2' executable's filename, or set the HTTPD environment variable to your 'httpd' or 'apache2' executable's filename.
--------- Nginx processes ---------
PID PPID VMSize Private Name
-----------------------------------
1762 1 51.8 MB 0.4 MB nginx: master process /opt/nginx/sbin/nginx
7616 1762 53.0 MB 1.8 MB nginx: worker process
### Processes: 2
### Total private dirty RSS: 2.22 MB
----- Passenger processes -----
PID VMSize Private Name
-------------------------------
7597 218.3 MB 0.3 MB PassengerWatchdog
7600 565.7 MB 1.1 MB PassengerHelperAgent
7606 230.8 MB 1.0 MB PassengerLoggingAgent
7675 652.0 MB 151.7 MB Passenger RackApp: /var/deploy/cityspotters/web_head/current
7686 652.1 MB 116.7 MB Passenger RackApp: /var/deploy/cityspotters/web_head/current
### Processes: 5
### Total private dirty RSS: 270.82 MB
.. 2 Passenger RackApp processes, OK.
But when I use the htop command, there seem to be a lot of Passenger RackApp processes [htop screenshot omitted]. We're also running Sidekiq with the default configuration.
New Relic Server reports high memory usage [screenshot omitted].
I tried tuning Passenger settings and adding a load balancer and another server, but honestly I don't know what to do from here. How can I find out what's causing so much memory usage?
Update: I had to restart nginx because of some changes and it seemed to free quite a lot of memory.
Press Shift-H to hide threads in htop; those aren't processes but threads within a process. The key column is RSS: you have two Passenger processes at 209 MB and 215 MB, and one Sidekiq process at 154 MB.
Short answer: this is completely normal memory usage for a Rails app. 1 GB is simply a little small if you want multiple processes of each; I'd cut Passenger down to one process.
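In the Passenger/Nginx config that would look something like this (directive per the Passenger docs; the file path is assumed from the output above):

# /opt/nginx/conf/nginx.conf
passenger_max_pool_size 1;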
Does your application create child processes? If so, it's likely that those extra "Passenger RackApp" processes are not actually created by Phusion Passenger but by your own app. Double-check whether your app spawns child processes and whether it cleans them up correctly, and whether any libraries you use also clean up their child processes properly.
I also see that you're using Sidekiq with 25 Sidekiq processes configured. Those eat a lot of memory too: a Sidekiq process uses roughly as much memory as a Passenger RackApp process, because both load your entire application (including Rails) into memory. Try reducing the number of Sidekiq processes.
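Each Sidekiq process also defaults to 25 worker threads; a hedged sketch of dialing that back (flags per the Sidekiq CLI; the value is an example):

bundle exec sidekiq -c 5 -e production    # 5 worker threads instead of the default 25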
