High CPU usage by Google Assistant on RPi - google-assistant-sdk

I have installed Google Assistant on my RPi, but the board becomes hot due to constant high CPU usage:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
717 pi 20 0 240432 63868 17740 S 24,1 6,7 17:57.44 googlesamp+
996 pi 20 0 8104 3196 2740 R 1,4 0,3 0:16.61 top
Is this normal?
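A hedged sketch of how one could throttle the sample if the heat is a concern (assumes the cpulimit package is available; 717 is the PID from the top output above):

sudo apt-get install cpulimit
# Cap the googlesamples process at roughly 10% of one core; it keeps running, just slower.
sudo cpulimit --pid 717 --limit 10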

Related

already use docker limit, why server cpu load average still high

The server has 8 CPUs and 64 GB of memory. When deploying the application with Docker Swarm, I already limit its CPU and memory:
docker service update java --reserve-cpu 2 --limit-cpu 2 --limit-memory 4G --reserve-memory 4G
Checking the server with top:
top - 11:09:26 up 889 days, 19:31, 16 users, load average: 72.50, 78.28, 55.54
Tasks: 271 total, 2 running, 183 sleeping, 0 stopped, 0 zombie
%Cpu(s): 36.7 us, 7.2 sy, 0.0 ni, 52.4 id, 0.0 wa, 0.0 hi, 3.7 si, 0.0 st
KiB Mem : 65916324 total, 27546636 free, 8884904 used, 29484784 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 56442876 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15566 root 20 0 12.563g 2.871g 17580 S 199.0 4.6 42:06.92 java
45076 root 20 0 19.018g 337692 16680 S 167.2 0.5 0:07.38 java
14692 root 20 0 1941688 122152 50868 S 4.0 0.2 617:42.71 dockerd
There is no other application running, so why is the load average so high?
Could someone help me check this? Thanks.
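For reference, a sketch of double-checking what limits actually took effect and how busy the containers are ('java' is the service name from the update command above):

docker service inspect java --format '{{ json .Spec.TaskTemplate.Resources }}'   # effective limits and reservations
docker stats --no-stream                                                         # per-container CPU, summed across cores
uptime                                                                           # load average is host-wide, not per container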

understanding docker container cpu usages

docker stats shows that the CPU usage is very high, but the top output shows that 88.3% of the CPU is idle. Inside the container is a Java HTTP Thrift service.
docker stats :
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8a0488xxxx5 540.9% 41.99 GiB / 44 GiB 95.43% 0 B / 0 B 0 B / 35.2 MB 286
top output :
top - 07:56:58 up 2 days, 22:29, 0 users, load average: 2.88, 3.01, 3.05
Tasks: 13 total, 1 running, 12 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.2 us, 2.7 sy, 0.0 ni, 88.3 id, 0.0 wa, 0.0 hi, 0.9 si, 0.0 st
KiB Mem: 65959920 total, 47983628 used, 17976292 free, 357632 buffers
KiB Swap: 7999484 total, 0 used, 7999484 free. 2788868 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8823 root 20 0 58.950g 0.041t 21080 S 540.9 66.5 16716:32 java
How can I reduce the reported CPU usage and bring it under 100%?
According to the top man page:
When operating in Solaris mode (`I' toggled Off), a task's cpu usage will be divided by the total number of CPUs. After issuing this command, you'll be told the new state of this toggle.
So by pressing the I key while top is running in interactive mode, you switch to Solaris mode, and the CPU usage will be divided by the total number of CPUs (or cores).
P.S.: This option is not available on all versions of top.
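Along the same lines, the figure that docker stats reports is summed across cores, so values up to (number of cores x 100)% are expected. A rough sketch of converting it into a share of total machine capacity (assumes nproc and awk are available on the host):

CORES=$(nproc)
docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' | \
  awk -v c="$CORES" '{ gsub(/%/, "", $2); printf "%s: %.1f%% over %d cores = %.1f%% of the machine\n", $1, $2, c, $2 / c }'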

When Cassandra is running almost all RAM is consumed, why?

I have CentOS 6.8, Cassandra 3.9, and 32 GB of RAM. Once Cassandra is started, it begins consuming memory, and the 'cached' value keeps growing as I query from CQLSH or Apache Spark; in the process, very little memory remains for other work such as cron jobs.
Here are some details from my system
free -m
total used free shared buffers cached
Mem: 32240 32003 237 0 41 24010
-/+ buffers/cache: 7950 24290
Swap: 2047 25 2022
And here is the output of top -M command
top - 08:54:39 up 5 days, 16:24, 4 users, load average: 1.22, 1.20, 1.29
Tasks: 205 total, 2 running, 203 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.5%us, 1.2%sy, 19.8%ni, 75.3%id, 0.1%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 31.485G total, 31.271G used, 219.410M free, 42.289M buffers
Swap: 2047.996M total, 25.867M used, 2022.129M free, 23.461G cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14313 cassandr 20 0 595g 28g 22g S 144.5 91.3 300:56.34 java
You can see that only 220 MB is free and 23.46 GB is cached.
My question is: how do I configure Cassandra so that it uses 'cached' memory only up to a certain value and leaves more RAM available for other processes?
Thanks in advance.
In Linux, cached memory like your 23 GB is generally fine. This memory is used as filesystem cache and so on, not by Cassandra itself; Linux systems tend to use all available memory.
This speeds up your system in many ways by avoiding disk reads.
You can still use the cached memory: just start processes and use your RAM, and the kernel will free it immediately.
You can set the heap sizes in cassandra-env.sh under the conf folder. This article should help: http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
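A minimal sketch of what that looks like in conf/cassandra-env.sh (the values are examples, not recommendations; set both together, as the comments in the stock file say):

# Example values only - tune for your workload.
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"

This only caps the JVM heap; the ~23 GB of page cache is managed by the kernel and will be reclaimed on demand, as described above.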

Memory used, but I can't see the process that uses it (Debian)

Here is my problem:
top - 11:32:47 up 22:20, 2 users, load average: 0.03, 0.72, 1.27
Tasks: 112 total, 1 running, 110 sleeping, 1 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8193844k total, 7508292k used, 685552k free, 80636k buffers
Swap: 2102456k total, 15472k used, 2086984k free, 7070220k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28555 root 20 0 57424 38m 1492 S 0 0.5 0:06.38 bash
28900 root 20 0 39488 7732 3176 T 0 0.1 0:03.12 python
28553 root 20 0 72132 5052 2600 S 0 0.1 0:00.22 sshd
28859 root 20 0 70588 3424 2584 S 0 0.0 0:00.06 sshd
29404 root 20 0 70448 3320 2600 S 0 0.0 0:00.06 sshd
28863 root 20 0 42624 2188 1472 S 0 0.0 0:00.02 sftp-server
29406 root 20 0 19176 1984 1424 S 0 0.0 0:00.00 bash
2854 root 20 0 115m 1760 488 S 0 0.0 5:37.02 rsyslogd
29410 root 20 0 19064 1400 1016 R 0 0.0 0:05.14 top
3111 ntp 20 0 22484 604 460 S 0 0.0 10:26.79 ntpd
3134 proftpd 20 0 64344 452 280 S 0 0.0 6:29.16 proftpd
2892 root 20 0 49168 356 232 S 0 0.0 0:31.58 sshd
1 root 20 0 27388 284 132 S 0 0.0 0:01.38 init
3121 root 20 0 4308 248 172 S 0 0.0 0:16.48 mdadm
As you can see, 7.5 GB of memory is used, but there is no process that uses it.
How can this be, and how do I fix it?
Thanks for any answer.
www.linuxatemyram.com
It's too good of a site to ruin by copy/pasting the entire contents here.
To see all processes you can use this command:
ps aux
and then try different sort orders and views, for example the tree view:
ps faux
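For example, to put the biggest memory consumers first (a sketch assuming procps ps, which supports --sort):

ps aux --sort=-rss | head -n 10   # sort by resident memory, largest first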
Hope that helps.
If your system starts using the swap file, then you have high memory load. Depending on the filesystem and the programs you use, Linux may allocate all of your system memory, but that doesn't mean it is actually being used.
Lots of the Ubuntu and Debian servers we run show only 32 or 64 MB of free memory yet never touch swap.
I'm not a Linux guru, however, so please correct me if I'm wrong :)
I don't have a Linux box handy to experiment with, but it looks like you can sort top's output with interactive commands, so you could bring the biggest memory users to the top. Check the man page and experiment.
Update: In the version of top I have (procps 3.2.7), you can hit "<" and ">" to change the field it sorts by. It doesn't actually say which field that is; you have to watch how the display changes. It's not hard once you experiment a little.
However, Arrowmaster's point (that the memory is probably being used for cache) is a better answer. Use "free" to see how much is really being used.
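Non-interactively, a newer top (procps-ng, newer than the 3.2.7 mentioned above) can sort by a field directly; a sketch:

top -b -n 1 -o %MEM | head -n 20   # batch mode, one iteration, sorted by memory
free -m                            # the buffers/cache figures show how much is reclaimable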
I had a similar problem. I was running Raspbian on a Pi B+ with a TP-Link USB Wireless LAN stick connected. The stick caused a problem which resulted in nearly all memory being consumed on system start (around 430 of 445 MB). Just like in your case, the running processes did not consume that much memory. When I removed the stick and rebooted everything was fine, just 50 MB memory consumption.

Why are Ruby processes at 100% CPU on Passenger

I have a Rails app (2.3.5) running on a VPS with 4 cores @ 2 GHz and 4 GB of memory. I am running nginx (0.7.61) and Phusion Passenger (2.2.14) on Ruby Enterprise Edition (1.8.7-2010.01) with the max pool size set at 30. My problem is that it seems as if every Ruby process that is executing a Rails request runs at near 100% CPU. If I run top, they drop off every time the display refreshes, so they are not hung, but they are still running at 100%.
Is there any way I can bring this down? Or at least figure out what portion of the code is spiking the CPU? Is this normal behavior?
Here is the TOP output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2427 psadmin 25 0 91904 76m 2696 R 100 1.9 739:05.96 Rails: /var/www/apps/main_rails_app/current
3457 psadmin 25 0 98180 82m 2532 R 100 2.0 711:21.91 Rails: /var/www/apps/main_rails_app/current
2415 psadmin 25 0 93952 77m 2708 R 99 1.9 727:49.31 Rails: /var/www/apps/main_rails_app/current
3455 psadmin 25 0 99204 83m 2528 R 69 2.0 726:04.70 Rails: /var/www/apps/main_rails_app/current
2791 psadmin 16 0 98044 81m 2492 S 31 2.0 0:10.16 Rails: /var/www/apps/main_rails_app/current
8034 psadmin 15 0 8160 3656 1772 S 1 0.1 0:35.39 nginx: worker process
8035 psadmin 15 0 8324 3696 1732 S 0 0.1 0:31.34 nginx: worker process
2588 psadmin 15 0 197m 183m 2712 S 0 4.5 1:02.16 Rails: /var/www/apps/main_rails_app/current
Thanks!
Edit: Tried strace with follow forks as mentioned below. This is the output that is dumped over and over:
sudo strace -f -p 3455
clock_gettime(CLOCK_MONOTONIC, {394577, 508326476}) = 0
select(0, [], [], [], {0, 0}) = 0 (Timeout)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
sigreturn()
Check your logs for suspicious behavior. In general Rails does eat a fair amount of CPU, though... you could also try pointing strace at the offending PIDs.
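A sketch of the latter (assumes strace is installed; 3455 is one of the busy PIDs from the top output above, and the log path assumes the conventional Rails layout under the app directory shown there):

# Count syscalls instead of printing the raw stream; Ctrl-C detaches and prints a time summary.
sudo strace -c -f -p 3455
# Watch the Rails log for requests that line up with the CPU spikes.
tail -f /var/www/apps/main_rails_app/current/log/production.log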
