I'm doing some query tests with Impala/HDFS inside docker containers (swarm). In order to compare the queries (different scale factors), I want to drop the cache. Normally this is easily done by
$ sync
$ echo 1 > /proc/sys/vm/drop_caches
but I don't have admin rights on the host system. Is there another way to drop the cache from inside the containers? Would it be an option to create another big table and run queries against it, so that its data overwrites the cache?
You cannot do this from inside the container. The root user in the container lives in a different namespace than the actual root, and only the latter can write to /proc/sys.
You could try bind-mounting the relevant host /proc file into the container. This seems to work for me:
$ docker run -ti --rm -v /proc/sys/vm/drop_caches:/drop_caches alpine
/ # free
total used free shared buffers cached
Mem: 2046644 808236 1238408 688 2248 118244
-/+ buffers/cache: 687744 1358900
Swap: 1048572 31448 1017124
/ # dd if=/dev/zero of=/dummy count=500 bs=1M
500+0 records in
500+0 records out
/ # free
total used free shared buffers cached
Mem: 2046644 1333892 712752 688 2268 630268
-/+ buffers/cache: 701356 1345288
Swap: 1048572 31448 1017124
/ # echo 3 > drop_caches
/ # free
total used free shared buffers cached
Mem: 2046644 790136 1256508 688 764 101552
-/+ buffers/cache: 687820 1358824
Swap: 1048572 31448 1017124
/ #
... but that is assuming you control how the container is started, which would more or less mean you're admin.
This can be achieved by starting the container in privileged mode using the --privileged flag.
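For example, a throwaway privileged container can drop the host's caches directly, since /proc/sys stays writable in privileged mode and drop_caches is not namespaced. A minimal sketch, assuming you are actually allowed to start privileged containers (which again more or less means having admin access):
docker run --rm --privileged alpine sh -c 'sync && echo 3 > /proc/sys/vm/drop_caches'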
Related
I've run a mongodb service via docker-compose like this:
version: '2'
services:
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    mem_limit: 4GB
If I run docker stats I can see 4 GB allocated:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
cf3ccbd17464 michal_mongo_1 0.64% 165.2MiB / 4GiB 4.03% 10.9kB / 4.35kB 0B / 483kB 35
But if I run this command, I get the RAM of my laptop, which is 32 GB:
~$ docker exec michal_mongo_1 free -g
total used free shared buff/cache available
Mem: 31 4 17 0 9 24
Swap: 1 0 1
How does mem_limit affect the memory size then?
free (and other utilities like top) will not report correct numbers inside a memory-constrained container, because it gathers its information from /proc/meminfo, which is not namespaced.
If you want the actual limit, you must use the entries populated by the cgroup pseudo-filesystem under /sys/fs/cgroup.
For example:
docker run --rm -i --memory=128m busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes
The real-time usage information is available under /sys/fs/cgroup/memory/memory.stat.
You will probably need the resident-set-size (rss), for example (inside the container):
grep -E -e '^rss\s+' /sys/fs/cgroup/memory/memory.stat
For a more in-depth explanation, see also this article.
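Putting the two together, a quick check from inside a constrained container could look like the sketch below. It assumes a cgroup v1 host, as the paths above imply; under cgroup v2 the equivalent files are /sys/fs/cgroup/memory.max and /sys/fs/cgroup/memory.current:
docker run --rm --memory=128m busybox sh -c 'cat /sys/fs/cgroup/memory/memory.limit_in_bytes; grep -E "^rss " /sys/fs/cgroup/memory/memory.stat'
The first command prints the enforced limit, 134217728 (128 MiB); the second prints the container's resident set size in bytes.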
The size of the alpine image is about 5 MB, while that of ubuntu is about 65 MB. So why is there no noticeable difference in RAM usage? If I understand it correctly, the docker image of X is loaded into memory and then shared by all containers run from the same image. Then a separate filesystem, etc. has to be created for each container, and that is why memory usage was similar in both cases.
Is my understanding correct?
$ cat script.sh
#!/bin/bash
for i in {1..50}
do
docker container run alpine sleep 60 &
done
$ free -mh
total used free shared buff/cache available
Mem: 7.8Gi 1.8Gi 4.5Gi 119Mi 1.4Gi 5.6Gi
Swap: 2.0Gi 103Mi 1.9Gi
$ ./script.sh
$ free -mh
total used free shared buff/cache available
Mem: 7.8Gi 3.4Gi 2.9Gi 122Mi 1.5Gi 4.0Gi
Swap: 2.0Gi 103Mi 1.9Gi
$ vim script.sh
$ free -mh
total used free shared buff/cache available
Mem: 7.8Gi 1.9Gi 4.5Gi 119Mi 1.4Gi 5.6Gi
Swap: 2.0Gi 103Mi 1.9Gi
$ cat script.sh
#!/bin/bash
for i in {1..50}
do
docker container run ubuntu sleep 60 &
done
$ ./script.sh
$ free -mh
total used free shared buff/cache available
Mem: 7.8Gi 3.4Gi 2.9Gi 122Mi 1.5Gi 4.0Gi
Swap: 2.0Gi 103Mi 1.9Gi
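To see what the sleeping containers themselves consume, as opposed to the on-disk size of their images, the per-container memory usage can be listed while the script is running. This is only an illustration of the docker stats command, not output from the runs above:
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}'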
I have an application which calls pvcreate each time it runs.
I can see the volumes in my VM as follows:
$ pvscan
PV /dev/vda5 VG ubuntu-vg lvm2 [99.52 GiB / 0 free]
Total: 1 [99.52 GiB] / in use: 1 [99.52 GiB] / in no VG: 0 [0 ]
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5'
Can't initialize physical volume "/dev/vda5" of volume group "ubuntu-vg" without -ff
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5' -ff
Really INITIALIZE physical volume "/dev/vda5" of volume group "ubuntu-vg" [y/n]? y
Can't open /dev/vda5 exclusively. Mounted filesystem?
I have also tried wipefs and observed the same result for the above commands:
$ wipefs -af /dev/vda5
/dev/vda5: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
How can I execute pvcreate?
Does anything need to be added to my VM?
It seems your disk partition (/dev/vda5) is already being used by your ubuntu-vg volume group. You cannot initialize the same partition as a PV for two different volume groups, and you cannot add it again while it is still in use.
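Before running pvcreate, it may help to confirm what the partition currently belongs to. These are standard LVM reporting commands and are inspection-only; actually freeing /dev/vda5 would mean removing it from ubuntu-vg, which destroys whatever is stored on it:
pvs -o pv_name,vg_name,pv_size   # shows /dev/vda5 bound to ubuntu-vg
lvs ubuntu-vg                    # the logical volumes still living on that PV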
I have been working with Docker for a while now. I installed Docker and launched a container using
docker run -it --cpuset-cpus=0 ubuntu
When I log into the docker console and run
grep processor /proc/cpuinfo | wc -l
It shows 3, which is the number of cores I have on my host machine.
Any idea how to restrict the resources available to the container, and how to verify the restriction?
This issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct value.
--cpuset-cpus is taking effect; it is simply not reflected in /proc/cpuinfo.
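A quick way to confirm this from inside a container, assuming a cgroup v1 host (under cgroup v2 the file is /sys/fs/cgroup/cpuset.cpus.effective):
docker run --rm --cpuset-cpus=0 busybox cat /sys/fs/cgroup/cpuset/cpuset.cpus
This prints 0, even though /proc/cpuinfo inside the same container still lists every host core.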
docker inspect <container_name>
will give the details of the launched container; check for "CpusetCpus" in there and you will find the details.
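If you only need that one field, the inspect output can be filtered directly; <container_name> is a placeholder for your container:
docker inspect --format '{{.HostConfig.CpusetCpus}}' <container_name>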
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the resources the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not what the container reports in /proc/cpuinfo.
One way to verify is to run the stress-ng tool in a container:
Using one CPU, the load will be pinned to one core (1 of 3 cores in use, 100% or 33% depending on which tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use 3 ( 3 / 3 cores, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't appear in kernel stats:
$ docker run -m 64M busybox free -m
total used free shared buffers cached
Mem: 3443 2500 943 173 261 1858
-/+ buffers/cache: 379 3063
Swap: 1023 0 1023
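The limit is enforced all the same; it just isn't what free reports. Reading the cgroup file from inside the same constrained container shows the real ceiling (again assuming cgroup v1):
docker run --rm -m 64M busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes
This prints 67108864 (64 MiB) rather than the roughly 3.4 GB of host memory shown by free above.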
yamaneks' answer includes the GitHub issue.
The value should go in double quotes, --cpuset-cpus=""; for example, --cpuset-cpus="0" means it makes use of cpu0.
I have a Java application running in an Ubuntu 14.04 container. The application relies on the OS page cache to speed up reads and writes. The container is issued a pause command, which according to the Docker documentation triggers the cgroup freezer https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt.
What happens to the dirty pages and page cache of the paused container? Are they flushed to disk? Or is the whole notion of a container-scoped page cache wrong, with dirty pages for all containers being managed at the docker host level?
On the docker host, free -m:
user#0000 ~ # free -m
total used free shared buffers cached
Mem: 48295 47026 1269 0 22 45010
-/+ buffers/cache: 1993 46302
Swap: 24559 12 24547
In the container, docker exec f1b free -m:
user#0000 ~ # docker exec f1b free -m
total used free shared buffers
Mem: 48295 47035 1259 0 22
-/+ buffers: 47013 1282
Swap: 24559 12 24547
Once a container is paused, I cannot check memory as seen by the container.
FATA[0000] Error response from daemon: Container f1 is paused, unpause the container before exec
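One workaround, sketched here under the assumption of cgroup v1 and Docker's default cgroupfs driver (the path differs with the systemd cgroup driver), is to read the container's memory cgroup from the host, which still works while the container is frozen:
CID=$(docker inspect --format '{{.Id}}' f1b)
grep -E '^(cache|rss|dirty) ' /sys/fs/cgroup/memory/docker/$CID/memory.stat
The dirty counter shows how many bytes of the container's page cache are still waiting to be written back to disk.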