Pagecache and dirty pages in paused container - docker

I have a Java application running in an Ubuntu 14.04 container. The application relies on the OS pagecache to speed up reads and writes. The container is issued a pause command, which according to the Docker documentation triggers the cgroup freezer: https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt.
What happens to the dirty pages and pagecache of the paused container? Are they flushed to disk? Or is the whole notion of a container-scoped pagecache wrong, with dirty pages for all containers managed at the Docker host level?
docker host (free -m):
user#0000 ~ # free -m
             total       used       free     shared    buffers     cached
Mem:         48295      47026       1269          0         22      45010
-/+ buffers/cache:       1993      46302
Swap:        24559         12      24547
container (docker exec f1b free -m):
user#0000 ~ # docker exec f1b free -m
             total       used       free     shared    buffers
Mem:         48295      47035       1259          0         22
-/+ buffers:             47013       1282
Swap:        24559         12      24547
Once a container is paused, I cannot check memory as seen by the container.
FATA[0000] Error response from daemon: Container f1 is paused, unpause the container before exec
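Since docker exec is refused while the container is paused, one hedged alternative is to read the container's cgroup accounting from the host. A minimal sketch, assuming cgroup v1 with the default cgroupfs driver (so the container's cgroup lives under /sys/fs/cgroup/.../docker/<id>); the dirty/writeback field names come from section 5.2 of the memory cgroup documentation and require a reasonably recent kernel:
CID=$(docker inspect --format '{{.Id}}' f1)                   # full container ID names the cgroup directory
cat /sys/fs/cgroup/freezer/docker/$CID/freezer.state          # should print FROZEN while paused
grep -E '^(cache|dirty|writeback) ' /sys/fs/cgroup/memory/docker/$CID/memory.stat   # per-cgroup page cache and dirty bytes
grep -E '^(Dirty|Writeback):' /proc/meminfo                   # host-wide view for comparison
Dirty pages are tracked by the host kernel, and the kernel's writeback threads are not part of the frozen cgroup, so flushing continues independently of the pause.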

Related

Docker - why do 50 containers running ubuntu take similar amount of RAM as 50x alpine?

The alpine image is about 5 MB, while ubuntu is about 65 MB. So why is there no noticeable difference in RAM usage? If I understand it right, the Docker image of X is loaded into memory and then shared by all containers run from the same image. Then for each container a separate filesystem and so on must be created, and this is why memory usage was similar in both cases.
Is my understanding correct?
$ cat script.sh
#!/bin/bash
for i in {1..50}
do
docker container run alpine sleep 60 &
done
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       1.8Gi       4.5Gi       119Mi       1.4Gi       5.6Gi
Swap:         2.0Gi       103Mi       1.9Gi
$ ./script.sh
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       3.4Gi       2.9Gi       122Mi       1.5Gi       4.0Gi
Swap:         2.0Gi       103Mi       1.9Gi
$ vim script.sh
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       1.9Gi       4.5Gi       119Mi       1.4Gi       5.6Gi
Swap:         2.0Gi       103Mi       1.9Gi
$ cat script.sh
#!/bin/bash
for i in {1..50}
do
docker container run ubuntu sleep 60 &
done
$ ./script.sh
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       3.4Gi       2.9Gi       122Mi       1.5Gi       4.0Gi
Swap:         2.0Gi       103Mi       1.9Gi
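A hedged way to look at this more closely (not part of the original post, and assuming a reasonably recent Docker CLI) is to compare per-container memory from docker stats with the on-disk image sizes; the container processes, not the image layers, account for the resident memory:
./script.sh                                                    # start the 50 containers as above
docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}'   # per-container memory while the sleeps run
docker system df                                               # image sizes on disk, where alpine and ubuntu differ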

Docker containers: drop cache without root - other options?

I'm doing some query tests with Impala/HDFS inside docker containers (swarm). In order to compare the queries (different scale factors), I want to drop the cache. Normally this is easily done by
$ sync
$ echo 1 > /proc/sys/vm/drop_caches
but I don't have admin rights on the host system. Is there another way to drop the cache from inside the containers? Is it an option to create another big table and execute queries on it so that its data overwrites the cache?
You cannot do this from inside the container. The root user in the container is in a different namespace than the real root, and only the latter can write to /proc/sys.
You could try bind-mounting the host's drop_caches file from /proc into the container. This seems to work for me:
$ docker run -ti --rm -v /proc/sys/vm/drop_caches:/drop_caches alpine
/ # free
             total       used       free     shared    buffers     cached
Mem:       2046644     808236    1238408        688       2248     118244
-/+ buffers/cache:     687744    1358900
Swap:      1048572      31448    1017124
/ # dd if=/dev/zero of=/dummy count=500 bs=1M
500+0 records in
500+0 records out
/ # free
             total       used       free     shared    buffers     cached
Mem:       2046644    1333892     712752        688       2268     630268
-/+ buffers/cache:     701356    1345288
Swap:      1048572      31448    1017124
/ # echo 3 > drop_caches
/ # free
             total       used       free     shared    buffers     cached
Mem:       2046644     790136    1256508        688        764     101552
-/+ buffers/cache:     687820    1358824
Swap:      1048572      31448    1017124
/ #
... but that is assuming you control how the container is started, which would more or less mean you're admin.
This can also be achieved by starting the container in privileged mode with the --privileged flag.
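If you are allowed to start privileged containers, a hedged sketch of that route (it assumes the image has a shell, and note that this drops the cache for the whole host, not just the container issuing it):
docker run --rm --privileged alpine sh -c 'sync && echo 3 > /proc/sys/vm/drop_caches'   # /proc/sys is writable only in privileged mode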

How to check the number of cores used by docker container?

I have been working with Docker for a while now. I installed Docker and launched a container using
docker run -it --cpuset-cpus=0 ubuntu
When I log into the container's console and run
grep processor /proc/cpuinfo | wc -l
it shows 3, which is the number of cores I have on my host machine.
Any idea how to restrict the resources available to the container, and how to verify the restriction?
This issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct output.
The --cpuset-cpus setting takes effect, but it is not reflected in /proc/cpuinfo.
docker inspect <container_name>
will give the details of the launched container; look for "CpusetCpus" in the output and you will find the setting there.
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the CPUs the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not in what the container reports in /proc/cpuinfo.
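A quick, hedged check from inside the container (nproc honours the CPU affinity imposed by the cpuset, while /proc/cpuinfo does not; the expected numbers assume the asker's 3-core host):
docker run --rm --cpuset-cpus="0" ubuntu sh -c \
  'cat /sys/fs/cgroup/cpuset/cpuset.cpus; nproc; grep -c ^processor /proc/cpuinfo'
# prints the allowed CPU list (0), the effective CPU count (1), and the host CPU count (3)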
Another way to verify is to run a CPU stress tool (here the deployable/stress image) in a container:
Using 1 CPU, the load will be pinned to 1 core (1 of 3 cores in use, shown as 100% or 33% depending on the tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use 3 cores (3 of 3, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't appear in the kernel's /proc stats:
$ docker run -m 64M busybox free -m
             total       used       free     shared    buffers     cached
Mem:          3443       2500        943        173        261       1858
-/+ buffers/cache:        379       3063
Swap:         1023          0       1023
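As with the cpuset, the place where the limit is actually visible from inside the container is the cgroup file rather than free; a minimal sketch, assuming cgroup v1 as used by the Docker versions discussed here:
docker run --rm -m 64M busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # prints 67108864 (64 MiB)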
The answer by yamaneks includes the GitHub issue.
The value should be given in double quotes, --cpuset-cpus=""; for example, --cpuset-cpus="0" means the container makes use of cpu0.

Rethinkdb container: rethinkdb process takes less RAM than the whole container

I'm running my rethinkdb container in a Kubernetes cluster. Below is what I notice:
Running top on the host, which is CoreOS, the rethinkdb process takes about 3 GB:
$ top
  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
  981 root      20   0   53.9m  34.5m  20.9m S 15.6  0.4   1153:34 hyperkube
51139 root      20   0 4109.3m 3.179g  22.5m S 15.0 41.8 217:43.56 rethinkdb
  579 root      20   0  707.5m  76.1m  19.3m S  2.3  1.0 268:33.55 kubelet
But running docker stats to check the rethinkdb container, it takes about 7 GB!
$ docker ps | grep rethinkdb
eb9e6b83d6b8 rethinkdb:2.1.5 "rethinkdb --bind al 3 days ago Up 3 days k8s_rethinkdb-3.746aa_rethinkdb-rc-3-eiyt7_default_560121bb-82af-11e5-9c05-00155d070266_661dfae4
$ docker stats eb9e6b83d6b8
CONTAINER       CPU %     MEM USAGE/LIMIT       MEM %     NET I/O
eb9e6b83d6b8    4.96%     6.992 GB/8.169 GB     85.59%    0 B/0 B
$ free -m
             total       used       free     shared    buffers     cached
Mem:          7790       7709         81          0         71       3505
-/+ buffers/cache:       4132       3657
Swap:            0          0          0
Can someone explain why the container is taking a lot more memory than the rethinkdb process itself?
I'm running docker v1.7.1, CoreOS v773.1.0, kernel 4.1.5
In the top command you are looking at the process's physical memory (its resident set). The stats command also includes the disk cache (page cache) charged to the container, so it is always bigger than the resident memory alone. When the application really needs more RAM, the cached pages are released for it to use.
Indeed, the memory usage is pulled from the cgroup file memory.usage_in_bytes; you can access it at /sys/fs/cgroup/memory/docker/<long_container_id>/memory.usage_in_bytes. And according to the Linux documentation https://www.kernel.org/doc/Documentation/cgroups/memory.txt, section 5.5:
5.5 usage_in_bytes
For efficiency, as other kernel components, memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing.
usage_in_bytes is affected by the method and doesn't show 'exact'
value of memory (and swap) usage, it's a fuzz value for efficient
access. (Of course, when necessary, it's synchronized.) If you want to
know more exact memory usage, you should use RSS+CACHE(+SWAP) value in
memory.stat(see 5.2).
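To see how much of that figure is page cache versus resident set, a hedged sketch reading the same cgroup from the host (the memory.stat field names come from section 5.2 of the document quoted above; the swap line only appears if swap accounting is enabled):
CID=$(docker inspect --format '{{.Id}}' eb9e6b83d6b8)                         # expand the short ID from docker ps
cat /sys/fs/cgroup/memory/docker/$CID/memory.usage_in_bytes                   # the fuzzy value behind docker stats
grep -E '^(cache|rss|swap) ' /sys/fs/cgroup/memory/docker/$CID/memory.stat    # RSS + CACHE (+ SWAP) per section 5.2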

Docker error at higher core counts on a multi core machine

I am running a CentOS container using Docker on a RHEL 6.5 machine. I am trying to run an MPI application (MILC) on 16 cores.
My server has 20 cores and 128 GB of memory.
My application runs fine up to 15 cores, but fails with APPLICATION TERMINATED WITH THE EXIT STRING: Bus error (signal 7) when using 16 cores and up. At 16 cores and up these are the messages I see in the logs:
Jul 16 11:29:17 localhost abrt[100668]: Can't open /proc/413/status: No such file or directory
Jul 16 11:29:17 localhost abrt[100669]: Can't open /proc/414/status: No such file or directory
Jul 16 11:29:17 localhost abrt[100670]: Can't open /proc/417/status: No such file or directory
A few details on the container:
kernel 2.6.32-431.el6.x86_64
Official centos image from Docker Hub
Started container as:
docker run -t -i -c 20 -m 125g --name=test --net=host centos /bin/bash
I would greatly appreciate any and all feedback regarding this. Please do let me know if I can provide any further information.
