So I have a simple Go web app that I deployed as a Docker container. I am running a t2.small instance on AWS with the CoreOS AMI.
The container is very small, only using about 10MB of memory according to docker stats:
CONTAINER      CPU %   MEM USAGE / LIMIT     MEM %   NET I/O               BLOCK I/O
8e230506e99a   0.00%   11.11 MB / 2.101 GB   0.53%   49.01 MB / 16.39 MB   1.622 MB / 0 B
However, the CoreOS instance seems to be using a lot of memory:
$ free
             total       used       free     shared    buffers     cached
Mem:       2051772    1686012     365760      25388     253096    1031836
-/+ buffers/cache:     401080    1650692
Swap:            0          0          0
As you can see, it's using almost 1.7GB of its 2GB total memory, with only about 350MB free, and this seems to be slowly getting worse.
I've had the instance running for about 3 days now, and the free memory started at around 400MB after a fresh launch and starting a single Docker container.
Is this something I should worry about? Or is CoreOS supposed to use this much memory when my little Go app in a container only uses a tiny 10MB?
No, this is not something to worry about, because a lot of that memory usage is buffers and cache. The better indicators are your application's usage as reported by Docker (which is likely close to the truth for a small Go app) and the OS usage minus buffers and cache on the second line of free (which shows closer to 400 MB used).
See https://unix.stackexchange.com/a/152301/6515 for a decent explanation.
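For a programmatic version of the same check, here is a minimal Go sketch (my own illustration, not from the linked answer) that pulls the relevant fields out of /proc/meminfo; on kernels 3.14 and later, MemAvailable is the kernel's own estimate of how much memory new applications can use without swapping:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Buffers and cache are counted as "used" but are reclaimable, so
	// MemAvailable is a better signal than MemFree for "memory I can still use".
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "MemTotal:") ||
			strings.HasPrefix(line, "MemFree:") ||
			strings.HasPrefix(line, "MemAvailable:") ||
			strings.HasPrefix(line, "Buffers:") ||
			strings.HasPrefix(line, "Cached:") {
			fmt.Println(line)
		}
	}
}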
I am running Postgres with TimescaleDB on my Docker Swarm node. I set the CPU limit to 4 and the memory limit to 32G. When I check docker stats, I see this output:
CONTAINER ID   NAME                                       CPU %     MEM USAGE / LIMIT   MEM %    NET I/O          BLOCK I/O       PIDS
c6ce29c7d1a4   pg-timescale.1.6c1hql1xui8ikrrcsuwbsoow5   341.33%   20.45GiB / 32GiB    63.92%   6.84GB / 5.7GB   582GB / 172GB   133
CPU% is oscillating around 400%. The node has 6 CPUs and the load average has typically been 1-2 (1-minute load average), so in my understanding, with my limit of 4 CPUs, the maximum load should oscillate around 6. My current load is 20 (1-minute load average), and the output of the top command from inside the Postgres container shows 50-60%.
My service configuration limit:
deploy:
  resources:
    limits:
      cpus: '4'
      memory: 32G
I am confused: all the values are different, so what is the real CPU usage of Postgres, and how do I limit it? My server load is pushed to the maximum even though the Postgres limit is set to 4 CPUs. Inside the Postgres container, htop shows 6 cores and 64G of memory, so it looks like it has all the resources of the host. From docker stats, the maximum CPU is 400%, which correlates with the limit of 4 CPUs.
Load average from commands like top in Linux refers to the number of processes running or waiting to run, averaged over some time period. The CPU limits used by Docker specify the number of CPU cycles permitted over some timeframe for processes inside a cgroup. These aren't really measuring the same thing, especially when you factor in I/O waits: a process blocked on a read from disk wants to run but cannot, which increases your load measurement on the host without using any CPU cycles.
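As a rough illustration of that point, here is a Go sketch (assuming cgroup v1 mounted at the usual path inside the container; under cgroup v2 the equivalent file is cpu.max) that reads the CFS quota your cpus: '4' limit turns into. The limit is a time quota per scheduling period, not a mask that hides host cores, which is why htop inside the container still sees all 6 of them:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readInt reads a single integer value from a cgroup file.
func readInt(path string) int64 {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	v, err := strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	quota := readInt("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")   // -1 means unlimited
	period := readInt("/sys/fs/cgroup/cpu/cpu.cfs_period_us") // usually 100000 (100 ms)
	if quota > 0 {
		fmt.Printf("effective CPU limit: %.2f CPUs\n", float64(quota)/float64(period))
	} else {
		fmt.Println("no CPU quota set")
	}
}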
When calculating how much CPU to allocate to a cgroup, not only do you need to factor in the I/O and other system needs of the process, but you should also consider queuing theory as you approach saturation of the CPU. The closer you get to 100% CPU utilization, the longer the queue of processes ready to run is likely to be, resulting in significant jumps in load measurements.
Setting these limits correctly will likely require trial and error, because not all processes are the same and not all workloads on the host are the same. A batch processing job that kicks off at irregular intervals and saturates the drives and network will have a very different impact on the host than a scientific computation that is heavily CPU- and memory-bound.
I have a C program running in an Alpine Docker container. The image size is 10M on both OSX and Ubuntu.
On OSX, when I run this image, 'docker stats' shows it using 1M of RAM, so in the Docker Compose file I allocate a max of 5M within my swarm.
However, on Ubuntu 16.04.4 LTS the image is also 10M, but when running it uses about 9M of RAM, and I have to increase the maximum allocated memory in my compose file.
Why is there such a difference in RAM usage between OSX and Ubuntu?
Even though these are different OSs, I would have thought that once you are running inside a container, behaviour would be similar on different machines, so memory usage should be comparable.
Update:
Thanks for the comments. So 'stats' may be inaccurate, and there are real differences, so it's best to baseline on Linux. As an aside, but I think interesting: the reason for asking this question is to understand what's 'under the hood' in order to tune my setup for a large number of deployed programs. Originally, when I tested, I tried to allocate the smallest possible maximum RAM on Ubuntu; this resulted in a lot of disk thrashing, something I didn't see or hear on my MacBook (no hard disks!).
Some numbers which are specific to my setup, but which I think are interesting:
1000 docker containers, 1 C program each, 20M RAM MAX per container, Server load of 98, Server runs 4K processes in total, [1000 C programs total]
20 docker containers, 100 C programs each, 200M RAM MAX per container, Server load of 5 to 50, Server runs 2.3K processes in total, [2000 C programs total].
This all suggests that you should give your Docker containers a good amount of maximum RAM, and that it is easier on your server to have fewer Docker containers running.
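To baseline on Linux, it can also help to look at the raw cgroup accounting that docker stats derives its number from. A minimal Go sketch, assuming cgroup v1 is mounted at the usual path inside the container (what docker stats subtracts from usage_in_bytes has varied between Docker versions, so treat this as a way to see the components rather than to reproduce its number exactly):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// memory.usage_in_bytes is the total charged to the container's cgroup,
	// including page cache; memory.stat breaks it down (rss, cache, ...).
	b, err := os.ReadFile("/sys/fs/cgroup/memory/memory.usage_in_bytes")
	if err != nil {
		panic(err)
	}
	fmt.Printf("usage_in_bytes: %s", b)

	f, err := os.Open("/sys/fs/cgroup/memory/memory.stat")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "rss ") || strings.HasPrefix(line, "cache ") {
			fmt.Println(line)
		}
	}
}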
I have a tutum node and it's always near full memory consumption. If I upgrade the node to more memory, it fills up again.
To be more specific: once I've started a node with, say, 6 services, I can't add another one because the node is always full. I have to restart from scratch or buy another node:
> watch -n 5 free
             total       used       free     shared    buffers     cached
Mem:       4048280    3902288     145992      22796     310708    1334052
-/+ buffers/cache:    2257528    1790752
Swap:            0          0          0
The interesting thing is that tutum itself does not seem to know about this and thinks the node still has more than 50% of its memory free.
What may be the reason for that? There is nothing else running on the node; it has been completely provisioned via tutum.
The underlying cloud provider is DigitalOcean, and the Docker version is 1.8.3.
I get the following error while running a MapReduce job in a YARN cluster:
Application application_1394582929977_164223 failed 2 times
due to AM Container for appattempt_1394582929977_164223_000002 exited with exitCode: 143
due to: Container [pid=28402,containerID=container_1394582929977_164223_02_000001] is running beyond virtual memory limits.
Current usage: 2.5 GB of 5 GB physical memory used; 10.5 GB of 10.5 GB virtual memory used.
Killing container.
Only 2.5 GB of the 5 GB of physical memory is used; however, all of the virtual memory gets used. How can I override the virtual memory settings to increase the limit for my job, or analyse my job to figure out why it needs so much virtual memory?
Search for these two YARN settings:
yarn.nodemanager.vmem-pmem-ratio — how much virtual memory a container is allowed per unit of physical memory allocated to it (2.1 by default, which is exactly where your 10.5 GB limit on a 5 GB container comes from).
yarn.nodemanager.vmem-check-enabled — whether the NodeManager enforces the virtual memory check at all.
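As an example of where those settings go, here is an illustrative yarn-site.xml sketch (the values are placeholders; raising the ratio and disabling the check each have trade-offs, so pick whichever matches why your job needs so much virtual memory):

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>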
On my virtual server running Debian, I have the impression that the memory is configured incorrectly, even though my provider claims everything works correctly.
Even with 3GB of RAM, I keep running out of memory, even though the top command claims there is still enough memory available.
Is there a way to test that the free memory is actually usable? For instance, if I had 1.5 GB of free memory, I would like to allocate a 1 GB block and see that everything still works correctly.
Thanks,
Which applications are you using? There must be a reason that you are running out of memory.
Try the free command:
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3022       2973         48          0        235       1948
-/+ buffers/cache:        790       2232
Swap:         3907          0       3907
This will show you something like the above (that is a 3 GB machine of my own).
Always check the system log of your machine if you have memory issues.
# tail /var/log/syslog
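As for testing whether the free memory is actually usable: a minimal sketch of the block-allocation test you describe (assuming Go is available on the box; tools such as stress or memtester do the same job more thoroughly) is to allocate a buffer of the requested size and write to every page, so the kernel has to back it with real memory:

package main

import (
	"fmt"
	"os"
	"strconv"
)

// Allocate the requested number of megabytes and touch every page so the
// kernel must actually back the allocation with physical memory (or swap).
// If the box genuinely lacks usable memory, an allocation failure or the
// OOM killer will make that visible. Usage: go run alloc.go 1024
func main() {
	mb := 1024
	if len(os.Args) > 1 {
		if v, err := strconv.Atoi(os.Args[1]); err == nil {
			mb = v
		}
	}
	const pageSize = 4096
	buf := make([]byte, mb*1024*1024)
	for i := 0; i < len(buf); i += pageSize {
		buf[i] = 1 // write one byte per page
	}
	fmt.Printf("allocated and touched %d MB\n", mb)
}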