On some sites, headless Chromium fails when running inside a Docker container:
[0520/093103.024239:ERROR:platform_shared_memory_region_posix.cc(268)] Failed to reserve 16728064 bytes for shared memory.: No space left on device (28)
[0520/093103.024591:ERROR:validation_errors.cc(76)] Invalid message: VALIDATION_ERROR_UNEXPECTED_NULL_POINTER (null field 1)
[0520/093103.024946:FATAL:memory.cc(22)] Out of memory. size=16723968
How should I tune Docker to fix this?
You're running out of shared memory, as the first error line shows:
[0520/093103.024239:ERROR:platform_shared_memory_region_posix.cc(268)] Failed to reserve 16728064 bytes for shared memory.: No space left on device (28)
Shared memory is backed by /dev/shm, which Docker caps at 64 MB by default. That isn't much for modern web applications.
For context on /dev/shm, see https://superuser.com/questions/45342/when-should-i-use-dev-shm-and-when-should-i-use-tmp
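You can confirm the 64 MB default from inside any container (a quick check with busybox):
docker run --rm busybox df -h /dev/shm
# Filesystem                Size      Used Available Use% Mounted on
# shm                      64.0M         0     64.0M   0% /dev/shm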
Option 1:
Run Chrome with the --disable-dev-shm-usage flag, which makes it write shared memory files to /tmp instead of /dev/shm.
Option 2:
Set /dev/shm to a reasonable size:
docker run -it --shm-size=1g ...
replacing 1g with whatever amount you want.
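For example, a rough sketch of both options with a headless screenshot run (the image name my-chromium-image and the target URL are placeholders):
# Option 1: write shared memory files to /tmp instead of /dev/shm
docker run --rm my-chromium-image chromium --headless --disable-dev-shm-usage --screenshot=/tmp/out.png https://example.com
# Option 2: keep /dev/shm usage but enlarge it
docker run --rm --shm-size=1g my-chromium-image chromium --headless --screenshot=/tmp/out.png https://example.com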
I have enabled virtualization in the BIOS and enabled the IOMMU on the kernel command line (intel_iommu=on).
I bound a Solarflare NIC to the vfio-pci driver and added a udev rule to ensure the vfio device is accessible by my non-root user (e.g., /etc/udev/rules.d/10-vfio-docker-users.rules):
SUBSYSTEM=="vfio", OWNER="myuser", GROUP="myuser"
I've launched my container with -u 1000 and mapped /dev (-v /dev:/dev). Running in an interactive shell in the container, I am able to verify that the device is there with the permissions set by my udev rule:
bash-4.2$ whoami
whoami: unknown uid 1000
bash-4.2$ ls -al /dev/vfio/35
crw-rw---- 1 1000 1000 236, 0 Jan 25 00:23 /dev/vfio/35
However, if I try to open it (e.g., python -c "open('/dev/vfio/35', 'rb')") I get IOError: [Errno 1] Operation not permitted: '/dev/vfio/35'. Yet the same command works outside the container as the normal non-root user with user-id 1000!
It seems that there are additional security measures that are not allowing me to access the vfio device within the container. What am I missing?
Docker drops a number of privileges by default, including the ability to access most devices. You can explicitly grant access to a device using the --device flag, which would look something like:
docker run --device /dev/vfio/35 ...
Alternately, you can ask Docker not to drop any privileges:
docker run --privileged ...
You'll note that in both of the above examples it was not necessary to explicitly bind-mount /dev; in the first case, the device(s) you have exposed with --device will show up, and in the second case you see the host's /dev by default.
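Applied to this question, a sketch of the first approach (the image name myimage is a placeholder):
docker run -it -u 1000 --device /dev/vfio/35 myimage \
  python -c "open('/dev/vfio/35', 'rb')"
# with the device explicitly granted, the open() should now succeed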
Is there a way to run replicas on an in-memory tmpfs on the host? I'm hitting this problem (infinite restart loop):
time="2018-11-02T21:55:05Z" level=fatal msg="Error running start replica command: failed to find extents, error: invalid argument"
Is the service able to work on disks mounted in memory?
Currently the OpenEBS Jiva storage engine supports only file systems that support extent mapping (ext4, XFS, etc.), whereas tmpfs does not support extent mapping, hence the failure.
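If you want to check whether a given filesystem supports extent mapping, a rough sketch (the device and file paths are placeholders):
# ext4 advertises 'extent' in its feature list
tune2fs -l /dev/sda1 | grep extent
# filefrag exercises the extent-mapping ioctls; on a tmpfs mount it fails
# with an 'operation not supported' style error instead of listing extents
filefrag -v /tmp/somefile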
I'm trying to limit my container so that it doesn't take up all the RAM on the host. From the Docker docs I understand that --memory limits the RAM and --memory-swap limits (RAM+swap). From the docker-compose docs it looks like the terms for those are mem_limit and memswap_limit, so I've constructed the following docker-compose file:
> cat docker-compose.yml
version: "2"
services:
  stress:
    image: progrium/stress
    command: '-m 1 --vm-bytes 15G --vm-hang 0 --timeout 10s'
    mem_limit: 1g
    memswap_limit: 2g
The progrium/stress image just runs stress, which in this case spawns a single worker that requests 15 GB of RAM and holds on to it for 10 seconds.
I'd expect this to crash, since 15>2. (It does crash if I ask for more RAM than the host has.)
The kernel has cgroups enabled, and docker stats shows that the limit is being recognised:
> docker stats
CONTAINER      CPU %   MEM USAGE / LIMIT   MEM %    NET I/O     BLOCK I/O    PIDS
7624a9605c70   0.00%   1024MiB / 1GiB      99.99%   396B / 0B   172kB / 0B   2
So what's going on? How do I actually limit the container?
Update:
Watching free, it looks like the RAM usage is effectively limited (only 1 GB of RAM is used) but the swap is not: the container gradually increases swap usage until it has eaten through all of the swap and stress crashes (it takes about 20 seconds to get through 5 GB of swap on my machine).
Update 2:
Setting mem_swappiness: 0 causes an immediate crash when requesting more memory than mem_limit, regardless of memswap_limit.
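For reference, a sketch of where that setting goes in the compose file above:
version: "2"
services:
  stress:
    image: progrium/stress
    command: '-m 1 --vm-bytes 15G --vm-hang 0 --timeout 10s'
    mem_limit: 1g
    memswap_limit: 2g
    mem_swappiness: 0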
Running docker info shows WARNING: No swap limit support
According to https://docs.docker.com/engine/installation/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities this is disabled by default ("Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation.") You can enable it by editing the /etc/default/grub file:
Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
then update GRUB with update-grub and reboot.
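On a Debian/Ubuntu host, the whole sequence is roughly (a sketch):
# edit /etc/default/grub as described above, then:
sudo update-grub
sudo reboot
# after the reboot, the warning should be gone:
docker info 2>&1 | grep -i swap
# and the stress container above is killed as soon as it exceeds memswap_limit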
I have been working with Docker for a while now. I installed Docker and launched a container using
docker run -it --cpuset-cpus=0 ubuntu
When I log into the docker console and run
grep processor /proc/cpuinfo | wc -l
It shows 3, which is the number of cores on my host machine.
Any idea how to restrict the resources available to the container, and how to verify the restriction?
This issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct output.
--cpuset-cpus is taking effect; it is just not reflected in /proc/cpuinfo.
docker inspect <container_name>
will give the details of the launched container; check for "CpusetCpus" in the output and you will find the value there.
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the resources the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not what the container reports in /proc/cpuinfo.
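A quick way to see this from inside a container (a sketch, using the cgroup v1 path mentioned above):
docker run --rm --cpuset-cpus=0 ubuntu cat /sys/fs/cgroup/cpuset/cpuset.cpus
# prints: 0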
Another way to verify is to run the stress-ng tool in a container:
Using 1 CPU, the load is pinned at 1 core (1 / 3 cores in use, 100% or 33% depending on which tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use all 3 (3 / 3 cores, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't appear in kernel stats:
$ docker run -m 64M busybox free -m
             total       used       free     shared    buffers     cached
Mem:          3443       2500        943        173        261       1858
-/+ buffers/cache:        379       3063
Swap:         1023          0       1023
yamaneks' answer includes the GitHub issue.
The value should be in double quotes: --cpuset-cpus="0" means the container makes use of cpu0 only.
I have Docker running and it gives me a disk space warning. How can I increase the Docker space and start again with the same container?
Let's say I want to give it something like 15 GB.
You can also increase disk space through the Docker GUI.
I assume you are talking about disk space to run your containers.
Make sure that you have enough space on whatever disk drive you are using for /var/lib/docker, which is the default location used by Docker. You can change it with the -g daemon option.
If you don't have enough space you may have to repartition your OS drives so that you have over 15 GB. If you are using boot2docker or docker-machine you will have to grow the volume on your virtual machine. How to do that varies depending on what you use for virtualization (e.g., VirtualBox, VMware, etc.).
For example, if you are using VirtualBox and docker-machine, you can start with something like this for a 40 GB VM:
docker-machine create --driver virtualbox --virtualbox-disk-size "40000" default
I ran into a similar problem with my docker-vm (which is 'alpine-linux' on VMware Fusion on OS X):
write error: no space left on device alpinevm:/mnt/hgfs
failed to build: .. no space left on device
.. eventually this guide helped me resize/expand my Docker volume.
TL;DR:
1 - Check size of partition containing /var/lib/docker
> df -h
/dev/sda3 17.6G 4.1G 12.6G 25% /var/lib/docker
look for '/dev/sdaN', where N is your partition for '/var/lib/docker', in my case /dev/sda3
2 - Shut down your VM, open VM Settings > Hard Disk(s) > change size of your 'virtual_disk.vmdk' (or whatever is your machine's virtual disk), then click Apply (see this guide).
3 - Install cfdisk and e2fsprogs-extra which contains resize2fs
> apk add cfdisk
> apk add e2fsprogs-extra
4 - Run cfdisk and resize/expand /dev/sda3
> cfdisk
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 206847 204800 100M 83 Linux
/dev/sda2 206848 4241407 4034560 1.9G 82 Linux swap / Solaris
/dev/sda3 4241408 83886079 79644672 12.6G 83 Linux
[Bootable] [ Delete ] [ Resize ] [ Quit ] [ Type ] [ Help ] [ Write ] [ Dump ]
.. press down/up to select '/dev/sda3'
.. press left/right/enter to select 'Resize' -> 'Write' -> 'Quit'
5 - Run resize2fs to expand the file system of /dev/sda3
> resize2fs /dev/sda3
6 - Verify resized volume
> df -h
/dev/sda3 37.3G 4.1G 31.4G 12% /var/lib/docker
To increase the space available for Docker you will have to increase your docker-pool size. If you run
lvs
you will see the docker-pool logical volume and its size. If your docker-pool is sitting on a volume group that has free space, you can simply grow the docker-pool LV with
lvextend -l +100%FREE <path_to_lv>
# An example using this may look like this:
# lvextend -l +100%FREE /dev/VolGroup00/docker-pool
You can check out more Docker disk-space tips here.
Thanks
Docker stores all layers/images in its file format (e.g., aufs) under the default /var/lib/docker directory.
If you are getting a disk space warning because of Docker, there are probably a lot of Docker images piled up, and you need to clean them up.
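A sketch of the usual cleanup commands:
docker image prune        # remove dangling images
docker system prune -a    # also remove stopped containers, unused networks, and all unused images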
If you have the option to add disk space, you can create a separate, bigger partition and mount /var/lib/docker there, which will keep the root partition from filling up.
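A rough sketch of that move (the device name /dev/sdb1 is a placeholder; stop the daemon first):
systemctl stop docker
mkfs.ext4 /dev/sdb1                      # format the new, bigger partition
mount /dev/sdb1 /mnt
cp -a /var/lib/docker/. /mnt/            # copy the existing data over
mv /var/lib/docker /var/lib/docker.old   # keep a backup until verified
mkdir /var/lib/docker
umount /mnt && mount /dev/sdb1 /var/lib/docker
# add the mount to /etc/fstab to make it permanent, then:
systemctl start docker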
Some extra information on managing disk space for Docker can be found here:
http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html