Memory is used but I can't see the process that uses it (Debian) - memory

Here is my problem:
top - 11:32:47 up 22:20, 2 users, load average: 0.03, 0.72, 1.27
Tasks: 112 total, 1 running, 110 sleeping, 1 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8193844k total, 7508292k used, 685552k free, 80636k buffers
Swap: 2102456k total, 15472k used, 2086984k free, 7070220k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28555 root 20 0 57424 38m 1492 S 0 0.5 0:06.38 bash
28900 root 20 0 39488 7732 3176 T 0 0.1 0:03.12 python
28553 root 20 0 72132 5052 2600 S 0 0.1 0:00.22 sshd
28859 root 20 0 70588 3424 2584 S 0 0.0 0:00.06 sshd
29404 root 20 0 70448 3320 2600 S 0 0.0 0:00.06 sshd
28863 root 20 0 42624 2188 1472 S 0 0.0 0:00.02 sftp-server
29406 root 20 0 19176 1984 1424 S 0 0.0 0:00.00 bash
2854 root 20 0 115m 1760 488 S 0 0.0 5:37.02 rsyslogd
29410 root 20 0 19064 1400 1016 R 0 0.0 0:05.14 top
3111 ntp 20 0 22484 604 460 S 0 0.0 10:26.79 ntpd
3134 proftpd 20 0 64344 452 280 S 0 0.0 6:29.16 proftpd
2892 root 20 0 49168 356 232 S 0 0.0 0:31.58 sshd
1 root 20 0 27388 284 132 S 0 0.0 0:01.38 init
3121 root 20 0 4308 248 172 S 0 0.0 0:16.48 mdadm
As you can see, 7.5 GB of memory is used, but there is no process that appears to use it.
How can this be, and how do I fix it?
Thanks for your answer.

www.linuxatemyram.com
It's too good of a site to ruin by copy/pasting the entire contents here.

In order to see all processes you can use this command:
ps aux
and then try different options, for example a process tree (a memory-sorted example follows at the end of this answer):
ps faux
Hope that helps.
If your system starts using the swap file, then you have high memory load. Depending on the file system and the programs you use, a Linux system may allocate nearly all of your memory (largely for caches and buffers), but that doesn't mean processes are actually using it.
Lots of the Ubuntu and Debian servers we run show only 32 or 64 MB free but don't use swap.
I'm not a Linux guru, however, so please correct me if I'm wrong :)
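For example, assuming a procps-style ps (the version shipped on Debian), you can have ps itself sort the output by resident memory rather than scanning it by eye:
# all processes, largest resident set size (RSS) first; header plus the top 10
ps aux --sort=-rss | head -n 11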

I don't have a Linux box handy to experiment, but it looks like you can sort top's output with interactive commands, so you could bring the biggest memory users to the top. Check the man page and experiment.
Update: In the version of top I have (procps 3.2.7), you can hit "<" and ">" to change the field it sorts by. It doesn't actually say which field that is; you have to watch how the display changes. It's not hard once you experiment a little.
However, Arrowmaster's point (that it's probably being used for cache) is a better answer. Use "free" to see how much is being used.
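For reference, this is roughly what free -m would show for the numbers in the question (values rounded to megabytes); the "-/+ buffers/cache" row is the one that reflects what processes actually claim:
free -m
             total       used       free     shared    buffers     cached
Mem:          8001       7332        669          0         78       6904
-/+ buffers/cache:        349       7652
Swap:         2053         15       2038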

I had a similar problem. I was running Raspbian on a Pi B+ with a TP-Link USB wireless LAN stick connected. The stick caused a problem which resulted in nearly all memory being consumed at system start (around 430 of 445 MB). Just like in your case, the running processes did not account for that much memory. When I removed the stick and rebooted, everything was fine: just 50 MB of memory in use.

Related

Already using Docker limits, why is the server CPU load average still high?

CPU: 8
Memory: 64 GB
When using Docker Swarm to deploy the application, CPU and memory are already limited:
docker service update java --reserve-cpu 2 --limit-cpu 2 --limit-memory 4G --reserve-memory 4G
Using top to check the server:
top - 11:09:26 up 889 days, 19:31, 16 users, load average: 72.50, 78.28, 55.54
Tasks: 271 total, 2 running, 183 sleeping, 0 stopped, 0 zombie
%Cpu(s): 36.7 us, 7.2 sy, 0.0 ni, 52.4 id, 0.0 wa, 0.0 hi, 3.7 si, 0.0 st
KiB Mem : 65916324 total, 27546636 free, 8884904 used, 29484784 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 56442876 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15566 root 20 0 12.563g 2.871g 17580 S 199.0 4.6 42:06.92 java
45076 root 20 0 19.018g 337692 16680 S 167.2 0.5 0:07.38 java
14692 root 20 0 1941688 122152 50868 S 4.0 0.2 617:42.71 dockerd
There is no other application running, so why is the load average so high?
Could someone help me check it? Thanks.
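For what it's worth, a couple of commands to double-check that the limits were actually applied to the service, and to see per-container usage instead of host-wide top (the service name java is taken from the command above):
# show the limits/reservations recorded in the service spec
docker service inspect java --format '{{json .Spec.TaskTemplate.Resources}}'
# one-shot snapshot of per-container CPU% and memory usage
docker stats --no-stream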

Finding the memory consumption of each redis DB

The problem
One of my Python Redis clients fails with the following exception:
redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
I have checked the redis machine, and it seems to be out of memory:
free
total used free shared buffers cached
Mem: 3952 3656 295 0 1 9
-/+ buffers/cache: 3645 306
Swap: 0 0 0
top
top - 15:35:03 up 14:09, 1 user, load average: 0.06, 0.17, 0.16
Tasks: 114 total, 2 running, 112 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.2 st
KiB Mem: 4046852 total, 3746772 used, 300080 free, 1668 buffers
KiB Swap: 0 total, 0 used, 0 free. 11364 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1102 root 20 0 3678836 3.485g 736 S 1.3 90.3 10:12.53 redis-server
1332 ubuntu 20 0 41196 3096 972 S 0.0 0.1 0:00.12 zsh
676 root 20 0 10216 2292 0 S 0.0 0.1 0:00.03 dhclient
850 syslog 20 0 255836 2288 124 S 0.0 0.1 0:00.39 rsyslogd
I am using a few dozen Redis DBs in a single Redis instance. Each DB is denoted by a numeric id given to redis-cli, e.g.:
$ redis-cli -n 80
127.0.0.1:6379[80]>
How do I know how much memory each DB consumes, and what the largest keys in each DB are?
You CANNOT get the used memory for each DB. With the INFO command, you can only get the total memory used by the Redis instance. Redis records the newly allocated memory size each time it dynamically allocates some memory, but it doesn't keep such a record per DB. It also doesn't keep any record of the largest keys.
Normally, you should configure your Redis instance with maxmemory and maxmemory-policy (i.e. the eviction policy applied when maxmemory is reached).
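As a rough sketch, both the instance-wide numbers and the eviction settings can be handled from redis-cli (the 2gb value is just a placeholder; size it for your machine):
# instance-wide memory stats (used_memory, used_memory_human, ...)
redis-cli info memory | head -n 5
# cap memory and evict least-recently-used keys when the cap is hit
redis-cli config set maxmemory 2gb
redis-cli config set maxmemory-policy allkeys-lru
# persist the change to redis.conf (only works if the server was started with a config file)
redis-cli config rewrite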
You can write a small shell script like this (it shows the element count in each DB):
#!/bin/bash
max_db=501
i=0
while [ $i -lt $max_db ]
do
echo "db_number: $i"
redis-cli -n $i dbsize
i=$((i+1))
done
Example output:
db_number: 0
(integer) 71
db_number: 1
(integer) 0
db_number: 2
(integer) 1
db_number: 3
(integer) 1
db_number: 4
(integer) 0
db_number: 5
(integer) 1
db_number: 6
(integer) 28
db_number: 7
(integer) 1
I know that a database can hold just one very large key (so element count is not the same as memory), but anyway, in some cases this script can help.
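If your redis-cli is new enough to have the --bigkeys option, a variation of the same loop can at least sample each DB for its largest keys (it's a scan-based estimate, not an exact accounting):
#!/bin/bash
max_db=501
i=0
while [ $i -lt $max_db ]
do
echo "db_number: $i"
# --bigkeys scans the DB and reports the biggest key it saw for each type
redis-cli -n $i --bigkeys | grep Biggest
i=$((i+1))
done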

Find memory leaks in a Rails application

I have a web application in Ruby on Rails. We use Mongrel clusters fronted by Apache httpd to run the application. We have been facing an issue of huge memory consumption in the application. (RedHat, Ruby 1.8.7, Rails 2.3.5, RAM 8 GB)
The thing is, after we start the web server (start the Mongrel clusters), the memory usage keeps increasing. For example, if the free memory (RAM) when I started the web server was 6 GB, after 2 days the free memory becomes 3 GB, even with no traffic on the site. If the web server is not restarted for a week, the usage seems to grow until the full 8 GB of RAM is used, which causes "no memory to allocate" errors for processes like PDF generation using PrinceXML and mail sending using sendmail (I think these are memory-intensive). When the web server is restarted, the free memory goes back to 6 GB.
Is this a case of a memory leak in the Rails application? How do I check the application for memory leaks? I have found a tool for checking memory leaks, bleak_house, but when I install it as a gem as shown in this link, I get "No command bleak found" when I run 'bleak /tmp/bleak.5979.000.dump' to analyze.
I am using PrinceXML to generate PDF reports and sendmail for mail sending. This server also has an instance of Jasper Server running. Anyone please help.
Here is the result of the top command at the time of memory overload.
-bash-3.2$ top
top - 10:34:10 up 14 days, 7:40, 2 users, load average: 0.24, 0.40, 0.39
Tasks: 181 total, 1 running, 177 sleeping, 2 stopped, 1 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8173984k total, 8011564k used, 162420k free, 10044k buffers
Swap: 2096472k total, 152624k used, 1943848k free, 2012016k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
858 **nt*rsc 15 0 12748 1168 832 R 173.5 0.0 0:00.36 top
1 root 15 0 10356 108 76 S 0.0 0.0 0:17.10 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.10 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.09 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root RT -5 0 0 0 S 0.0 0.0 0:00.12 migration/1
6 root 34 19 0 0 0 S 0.0 0.0 0:00.12 ksoftirqd/1
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
8 root RT -5 0 0 0 S 0.0 0.0 0:00.70 migration/2
9 root 34 19 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/2
10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2
11 root RT -5 0 0 0 S 0.0 0.0 0:00.67 migration/3
12 root 34 19 0 0 0 S 0.0 0.0 0:00.11 ksoftirqd/3
13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3
14 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/0
15 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/1
16 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/2
I'd try using Passenger (which automatically restarts and manages Rails instances that grow too large in memory - much easier than rebooting Mongrels that have strayed beyond sane memory constraints). Also, you might have luck with the Ruby Enterprise Edition fork of 1.8.7, which backports some memory management fixes from 1.9 (like allowing the VM to shrink when it's using less memory) - that change might have worked its way back into normal 1.8.7, although I am not sure. The claim with REE is that you can reduce memory consumption by 33% for Rails applications.
Ruby processes generally tend to grow over time and need restarting; with Passenger that happens automatically for you. It has worked perfectly for me, so I can really recommend it.
http://www.modrails.com/
It also has good memory analysis tools:
http://www.modrails.com/documentation/Users%20guide%20Apache.html#_analysis_and_system_maintenance
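If I remember right, the tooling that page documents boils down to two commands (run with sudo so all the Apache and Rails processes are visible):
# pool overview: how many application instances are running and how busy they are
sudo passenger-status
# per-process private/real memory for Apache workers and Rails instances
sudo passenger-memory-stats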

Swapping problem for Rails app on slicehost

I have a Rails 2.3.8 app hosted and running on Slicehost (256 MB). I am not at all familiar with the back-end; I basically followed the steps from the Slicehost tutorials to install Apache. The memory usage being very high, I then changed my Apache conf file to reduce the MaxClients number to 10... but my slice is still swapping.
Here is the memory usage I get after just a few clicks on my site:
top - 23:57:12 up 28 min, 2 users, load average: 0.43, 0.54, 0.30
Tasks: 79 total, 1 running, 78 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 97.8%id, 0.1%wa, 0.0%hi, 0.0%si, 2.0%st
Mem: 262364k total, 258656k used, 3708k free, 260k buffers
Swap: 524280k total, 262772k used, 261508k free, 6328k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4004 web-app 20 0 178m 72m 1888 S 0 28.4 0:04.38 ruby1.8
4001 web-app 20 0 172m 61m 1932 S 0 24.2 0:02.72 ruby1.8
3941 root 20 0 164m 57m 1672 S 0 22.5 0:21.44 ruby
3990 web-app 20 0 209m 21m 1696 S 0 8.4 0:18.00 ruby1.8
3950 web-app 20 0 165m 7464 1548 S 0 2.8 0:20.40 ruby1.8
3684 mysql 20 0 224m 6504 2084 S 0 2.5 0:14.34 mysqld
3938 root 20 0 53632 3048 1036 S 1 1.2 0:01.50 starling
3839 root 20 0 243m 1456 1248 S 0 0.6 0:00.34 apache2
3897 www-data 20 0 243m 1452 1072 S 0 0.6 0:00.04 apache2
3894 www-data 20 0 243m 1368 1008 S 0 0.5 0:00.04 apache2
3895 www-data 20 0 243m 1220 960 S 0 0.5 0:00.02 apache2
3888 root 20 0 46520 1204 1100 S 0 0.5 0:02.29 ruby1.8
3866 root 20 0 17648 1184 896 S 0 0.5 0:00.08 bash
3896 www-data 20 0 243m 1180 952 S 0 0.4 0:00.00 apache2
3964 www-data 20 0 243m 1164 956 S 0 0.4 0:00.02 apache2
3892 www-data 20 0 243m 1132 956 S 0 0.4 0:00.00 apache2
3948 www-data 20 0 243m 1132 956 S 0 0.4 0:00.00 apache2
3962 www-data 20 0 243m 1132 956 S 0 0.4 0:00.02 apache2
3963 www-data 20 0 243m 1132 956 S 0 0.4 0:00.00 apache2
3965 www-data 20 0 243m 1080 888 S 0 0.4 0:00.00 apache2
3887 root 20 0 89008 960 796 S 0 0.4 0:00.00 ApplicationPool
I'm not sure what to do next... I could upgrade to a larger slice but for now I have almost no traffic on this app, so I think it's more a problem with my configuration or maybe my code?
Any concrete recommendations would be welcome!
Thanks
It looks like your Rails app is using all your available memory. I would recommend three things:
1. Upgrade the memory on your server. 256 MB is not very much for a Rails app. Going to 512 MB may alleviate your problem. If that solves it, you then need to weigh the additional cost ($18/mo) against how much time it will take to track down performance issues.
2. Profile your application to figure out which requests are consuming the most memory. This is likely going to be places where you're finding a lot of records and possibly including some associated tables too. There are a couple of tools out there to help you narrow down possible trouble areas. I've used oink, but there are definitely others. Once you figure out where the problems are, you can make some tweaks to try and reduce the memory usage.
3. Assuming you're using Passenger with Apache, you can reduce the number of concurrent requests in the Passenger config file (a sketch follows below). This might be useful for that: https://serverfault.com/questions/15350/running-ruby-on-rails-app-on-apache-passenger-to-much-memory
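A sketch of point 3, assuming mod_passenger and a Debian/Ubuntu-style Apache layout (the path and the values are placeholders, not the asker's actual config):
# limit Passenger to 2 Rails processes and shut idle ones down after 5 minutes
cat >> /etc/apache2/conf.d/passenger.conf <<'EOF'
PassengerMaxPoolSize 2
PassengerPoolIdleTime 300
EOF
# pick up the new settings
apache2ctl graceful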
In short, 256 MB is tight for a Rails application. You did not really give any specifics on how you are running Rails, but I assume you are using Apache with the Passenger module. The Passenger module can be configured in how many instances it keeps running. You have 4 Ruby instances running under the web-app account; I guess those come from Passenger. In the configuration, you can limit how many instances Passenger starts. This will reduce the memory requirements.
On the other hand, when working with only 256 MB and hosting just one Rails application, it might be better to go for another setup. The setup I used myself before was an Nginx web server and a Mongrel cluster with 2 Mongrels (on 192 MB, and the application was only for testing purposes). Basically that means that at any one time you can process 2 (and only 2) Rails requests in parallel. The setup is maybe a bit harder than Apache+Passenger, but definitely not difficult. I think that is a more performant solution if you stick with 256 MB.

Why are ruby processes at 100% CPU on passenger

I have a Rails app (2.3.5) running on a VPS with 4 cores @ 2 GHz and 4 GB memory. I am running nginx (0.7.61) and Phusion Passenger (2.2.14) on Ruby Enterprise (1.8.7-2010.01) with the max pool size set at 30. My problem is that it seems as if every Ruby process that is executing a Rails request runs at near 100% CPU. If I run top, they drop off every time the display refreshes, so they are not hung, but they are still running at 100%.
Is there any way I can bring this down? Or at least figure out what portion of code is spiking the CPU? Is this normal behavior?
Here is the TOP output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2427 psadmin 25 0 91904 76m 2696 R 100 1.9 739:05.96 Rails: /var/www/apps/main_rails_app/current
3457 psadmin 25 0 98180 82m 2532 R 100 2.0 711:21.91 Rails: /var/www/apps/main_rails_app/current
2415 psadmin 25 0 93952 77m 2708 R 99 1.9 727:49.31 Rails: /var/www/apps/main_rails_app/current
3455 psadmin 25 0 99204 83m 2528 R 69 2.0 726:04.70 Rails: /var/www/apps/main_rails_app/current
2791 psadmin 16 0 98044 81m 2492 S 31 2.0 0:10.16 Rails: /var/www/apps/main_rails_app/current
8034 psadmin 15 0 8160 3656 1772 S 1 0.1 0:35.39 nginx: worker process
8035 psadmin 15 0 8324 3696 1732 S 0 0.1 0:31.34 nginx: worker process
2588 psadmin 15 0 197m 183m 2712 S 0 4.5 1:02.16 Rails: /var/www/apps/main_rails_app/current
Thanks!
Edit: Tried strace with follow forks as mentioned below. This is the output that is dumped over and over:
sudo strace -f -p 3455
clock_gettime(CLOCK_MONOTONIC, {394577, 508326476}) = 0
select(0, [], [], [], {0, 0}) = 0 (Timeout)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
sigreturn()
Check your logs for suspicious behavior. In general, Rails does tend to eat a fair amount of CPU, though... you could also try pointing strace at the offending PIDs.
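If the raw trace just loops like the output above, a per-syscall summary is often more telling; something like this (the PID is just the one from the question) prints a table of syscall counts and time spent when you hit Ctrl-C:
# -c: count/summarize syscalls, -f: follow forked children, -p: attach to the PID
sudo strace -c -f -p 3455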
