Make docker build --memory-swap=20g use the available swap space? - docker

I have run free -h and see that I have 29G of swap space.
              total        used        free      shared  buff/cache   available
Mem:            15G        6.9G        8.8G         17M        223M        8.9G
Swap:           29G        2.0M         29G
I have also set swappiness to 100.
$ sudo sysctl vm.swappiness=100
vm.swappiness = 100
$ cat /proc/sys/vm/swappiness
100
However, docker build --memory-swap=20g does not appear to use the swap space. This is the output of htop throughout the docker build.
1 [|||||||||||||||| 18.7%]
2 [||||||| 7.3%]
3 [|||||||||||||||||||||| 26.5%]
4 [||||||||||||||| 18.0%]
Mem[||||||||||||||||||||||||||||||||||| 6.47G/15.9G]
Swp[| 2.00M/29.6G]
This is the docker build command:
docker build --build-arg NODE_OPTIONS="--max-old-space-size=325" \
--memory=600m --memory-swap=20g \
--cpu-period=100000 --cpu-quota=50000 \
--no-cache --tag farm_app_image:latest --file Dockerfile .
The docker build appears to be running out of RAM: the build's internal process (Node.js) runs out of heap space and crashes. Immediately before the crash, the container's memory is maxed out:
shaun@DESKTOP-5T629JB:/mnt/c/Users/bigfo$ docker ps -q | xargs docker stats --no-stream
CONTAINER ID   NAME               CPU %     MEM USAGE / LIMIT    MEM %     NET I/O           BLOCK I/O       PIDS
66bdf8efb492   charming_maxwell   51.72%    562.2MiB / 600MiB    93.70%    46.8MB / 1.53MB   277MB / 230MB   94
Why is it running out of RAM without using the swap space? How can we make it use the available swap space?
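One thing that may be worth checking first (a guess, not a confirmed diagnosis): Docker only enforces --memory-swap when the kernel has swap limit accounting enabled, and docker info prints a warning when it does not. A quick check:
# If this prints "WARNING: No swap limit support", the --memory-swap value is
# being ignored; the documented fix is booting the kernel with
# cgroup_enable=memory swapaccount=1 on its command line.
docker info 2>&1 | grep -i "swap limit"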

Maybe you should try running it with the --privileged flag.
docker run -ti --privileged yourimage
But make sure that you know what you are doing.
You should also read docker-tips-privilaged-flag

Related

"docker run --memory" doesn't account hugepages

Docker is running in privileged mode.
I want to know whether this behavior is expected.
I am running a DPDK-based application in a container.
My server has 128G of memory in total, and I have limited the container's memory to 4G, which I can see in docker stats.
CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT   MEM %     NET I/O       BLOCK I/O        PIDS
4deda4634b22   my_docker   38.12%    1.455GiB / 4GiB     36.37%    1.53kB / 0B   1.94GB / 755MB   69
I am seeing that even though the container's memory is constrained to 4G, the application is able to allocate 32G of huge page memory along with other non-huge-page memory.
Is this expected?
Does the docker run --memory option work only with non-huge-page memory?
root@server# docker exec -ti my_docker bash
root@4deda4634b22:/#
root@4deda4634b22:/# ps aux | grep riot
root   893  17.2  0.0 68345740 105260 pts/0  Sl  05:54  1:02 /app/riot   <<<<<< application
root@4deda4634b22:/# cat /proc/meminfo | grep -i huge
AnonHugePages: 909312 kB
ShmemHugePages: 0 kB
HugePages_Total: 32
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
root@4deda4634b22:/# ls -rlt /mnt/huge/* | wc -l
32
I normally pass access to the huge pages and VFIO devices via docker run -it --privileged -v /sys/bus/pci/drivers:/sys/bus/pci/drivers -v /sys/kernel/mm/hugepages:/sys/kernel/mm/hugepages -v /sys/devices/system/node:/sys/devices/system/node -v /dev:/dev.
It looks like you are missing the same.
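As a side note on the question itself: the memory cgroup behind docker run --memory does not account for HugeTLB pages; those are tracked by the separate hugetlb cgroup controller, which is why allocating 32G of huge pages does not count against the 4G limit. A rough way to see the two side by side from the host (a sketch assuming cgroups v1, the default /sys/fs/cgroup/.../docker/<id> layout, and 1G huge pages; adjust paths for your setup):
# CID is the container ID from docker ps (the * expands it to the full 64-char ID).
CID=4deda4634b22
# Memory charged to the memory controller (this is what --memory limits):
cat /sys/fs/cgroup/memory/docker/$CID*/memory.usage_in_bytes
# Huge pages charged to the hugetlb controller (not limited by --memory):
cat /sys/fs/cgroup/hugetlb/docker/$CID*/hugetlb.1GB.usage_in_bytes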

Docker - why do 50 containers running ubuntu take a similar amount of RAM as 50x alpine?

The alpine image is about 5MB, while the ubuntu image is about 65MB. So why is there no noticeable difference in RAM usage? If I understand it correctly, the docker image of X is loaded into memory and then shared by all the containers (run from the same image). Then a separate filesystem, etc. has to be created for each container, and that is why memory usage was similar in both cases.
Is my understanding correct?
$ cat script.sh
#!/bin/bash
for i in {1..50}
do
docker container run alpine sleep 60 &
done
$ free -mh
               total        used        free      shared  buff/cache   available
Mem:           7.8Gi       1.8Gi       4.5Gi       119Mi       1.4Gi       5.6Gi
Swap:          2.0Gi       103Mi       1.9Gi
$ ./script.sh
$ free -mh
               total        used        free      shared  buff/cache   available
Mem:           7.8Gi       3.4Gi       2.9Gi       122Mi       1.5Gi       4.0Gi
Swap:          2.0Gi       103Mi       1.9Gi
$ vim script.sh
$ free -mh
               total        used        free      shared  buff/cache   available
Mem:           7.8Gi       1.9Gi       4.5Gi       119Mi       1.4Gi       5.6Gi
Swap:          2.0Gi       103Mi       1.9Gi
$ cat script.sh
#!/bin/bash
for i in {1..50}
do
docker container run ubuntu sleep 60 &
done
$ ./script.sh
$ free -mh
               total        used        free      shared  buff/cache   available
Mem:           7.8Gi       3.4Gi       2.9Gi       122Mi       1.5Gi       4.0Gi
Swap:          2.0Gi       103Mi       1.9Gi
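Since free -mh only shows host-wide numbers, a more direct comparison is per-container accounting. A small sketch (the placeholders are standard docker stats format fields; run it while the containers from the script are still alive):
# One line per running container: name, memory usage/limit, and percentage.
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
Both sets of containers are just running sleep 60, so their resident memory is similar regardless of image size; the image layers live on disk and in the (shared) page cache rather than in per-container RAM.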

How to check the number of cores used by a docker container?

I have been working with Docker for a while now. I installed Docker and launched a container using
docker run -it --cpuset-cpus=0 ubuntu
When I log into the container's console and run
grep processor /proc/cpuinfo | wc -l
it shows 3, which is the number of cores I have on my host machine.
Any idea how to restrict the resources available to the container, and how to verify the restrictions?
The issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct output.
The --cpuset-cpus option is taking effect; it is just not reflected in /proc/cpuinfo.
docker inspect <container_name>
will give the details of the launched container. Check for "CpusetCpus" there and you will find the value you set.
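For example, a one-liner that extracts just that field (the --format template is standard docker inspect syntax; <container_name> is whatever you, or Docker, named the container):
# Prints the CPU set the container was started with, e.g. "0".
docker inspect --format '{{.HostConfig.CpusetCpus}}' <container_name>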
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the resources the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not in what the VM and container report in /proc/cpuinfo.
One way to verify is to run a CPU stress tool in a container.
Pinned to 1 CPU, the load stays on 1 core (1 / 3 cores in use, i.e. 100% or 33% depending on which monitoring tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use all 3 (3 / 3 cores, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't show up in the kernel's stats:
$ docker run -m 64M busybox free -m
             total       used       free     shared    buffers     cached
Mem:          3443       2500        943        173        261       1858
-/+ buffers/cache:        379       3063
Swap:         1023          0       1023
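If you want to see the limit that was actually applied, the container's own cgroup files are more reliable than free. A sketch, assuming cgroups v1 (on cgroups v2 the equivalent file is /sys/fs/cgroup/memory.max):
# Prints 67108864 (64 MiB) rather than the host's total memory.
docker run --rm -m 64M busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes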
yamaneks' answer includes the GitHub issue.
The value should be in double quotes: --cpuset-cpus="0" means the container makes use of cpu0 only.
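Another quick way to verify the restriction from inside a container, without a stress tool (a sketch assuming cgroups v1; on cgroups v2 read /sys/fs/cgroup/cpuset.cpus.effective instead):
# The cgroup file lists the allowed CPUs (here just "0"), and nproc reports
# the scheduler affinity rather than what /proc/cpuinfo shows.
docker run --rm --cpuset-cpus="0" ubuntu sh -c 'cat /sys/fs/cgroup/cpuset/cpuset.cpus; nproc'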

"update --memory" can not work

Docker version: 17.04.0-ce
OS: Windows 7
I start the container using the command: docker run -it --memory 4096MB <image-id>
I check the memory using the command: docker stats --no-stream | grep <container-id>
The result is:
5fbc6df8f90f 0.23% 86.52 MB / 995.8 Mib 2.59% 648B / 0B 17.2G / 608 MB 31
When I update the memory, the result is still the same:
$ docker update -m 4500MB --memory-swap 4500MB --memory-reservation 4500MB 5fbc6df8f90f
5fbc6df8f90f
$ docker stats --no-stream | grep 5fbc6df8f90f
5fbc6df8f90f 0.23% 86.52 MB / 995.8 Mib 2.59% 648B / 0B 17.2G / 608 MB 31
why "--memory" can not work ,the memory is always the same 995.8Mib?
The docker stats command is showing you how much memory the entire Docker host has, or with Docker for Windows, how much memory you have in the Linux VM. To increase this threshold, go into the Docker settings and change the memory allocated to the VM. See this documentation for more details.
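To double-check whether a limit reached the container at all (as opposed to the VM ceiling that docker stats shows as the denominator), docker inspect is useful. A sketch using the container ID from the question; HostConfig.Memory is the standard field in the inspect output:
# Prints the memory limit in bytes (0 means no limit is set).
docker inspect --format '{{.HostConfig.Memory}}' 5fbc6df8f90f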

How to increase the swap space available in the boot2docker virtual machine?

I would like to run a docker container that requires a lot of memory on a machine that doesn't have much RAM. I have been trying to increase the swap space available for the container to no avail. Here is the last command I tried:
docker run -d -m 1000M --memory-swap=10000M --name=my_container my_image
Following these tips on how to check memory metrics, I found the following:
$ boot2docker ssh
docker#boot2docker:~$ cat /sys/fs/cgroup/memory/docker/35af5a072751c7af80ce7a255a01ab3c14b3ee0e3f15341f7bb22a777091c67b/memory.stat
cache 454656
rss 65015808
rss_huge 29360128
mapped_file 208896
writeback 0
swap 0
pgpgin 31532
pgpgout 22702
pgfault 49372
pgmajfault 0
inactive_anon 28672
active_anon 65183744
inactive_file 241664
active_file 16384
unevictable 0
hierarchical_memory_limit 1048576000
hierarchical_memsw_limit 10485760000
total_cache 454656
total_rss 65015808
total_rss_huge 29360128
total_mapped_file 208896
total_writeback 0
total_swap 0
total_pgpgin 31532
total_pgpgout 22702
total_pgfault 49372
total_pgmajfault 0
total_inactive_anon 28672
total_active_anon 65183744
total_inactive_file 241664
total_active_file 16384
total_unevictable 0
Is it possible to run a container that requires 5G of memory on a machine that only has 4G of physical memory?
This GitHub issue was very helpful in figuring out how to increase the swap space available in the boot2docker VM. Adapting it to my situation, I used the following commands to SSH into the boot2docker VM and set up a new swap file:
boot2docker ssh
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304
sudo mkswap $SWAPFILE
sudo chmod 600 $SWAPFILE
sudo swapon $SWAPFILE
exit
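One follow-up worth noting: the boot2docker VM keeps most of its filesystem in RAM and only /mnt/sda1 persists, so the swapon has to be re-run after the VM restarts. A hedged sketch, assuming the standard boot2docker layout where /var/lib/boot2docker/bootlocal.sh is executed at boot:
# Verify the swap is active right now:
boot2docker ssh "cat /proc/swaps && free -m"
# Re-enable the swap file automatically on every boot of the VM:
boot2docker ssh "echo 'swapon /mnt/sda1/swapfile' | sudo tee -a /var/lib/boot2docker/bootlocal.sh && sudo chmod +x /var/lib/boot2docker/bootlocal.sh"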
