Vda devices in podman - docker

I have a podman image with Ubuntu 18.04 and if I run a container with the command:
podman run -it <my image> /bin/bash
and I run the command:
cat /proc/partitions
I see the following:
root@0382d078cd30:/# cat /proc/partitions
major minor #blocks name
11 0 1048575 sr0
252 0 104857600 vda
252 1 1024 vda1
252 2 130048 vda2
252 3 393216 vda3
252 4 104332271 vda4
Can anyone explain what these devices are? I tried to search but had no luck. The reason I want to know is that I need to test some commands (parted, mkfs.ext4, tune2fs, and e2fsck) against the /dev/vda1 device. However, I don't know how these virtual devices are mapped to my host system, and I'm afraid that formatting /dev/vda1 with mkfs.ext4 could destroy data on my macOS host.
I didn't explicitly map these devices to anything on my host, so I think podman created them by default and maps them to my host in some way.
Is it safe to run the above commands against these virtual devices, or is there a risk of breaking something on my host?
Thank you in advance for your help.
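If the goal is only to exercise parted, mkfs.ext4, tune2fs, and e2fsck, one way to avoid touching /dev/vda1 at all is to run them against a file-backed scratch image inside the container. A minimal sketch (the file name and size are arbitrary choices):
truncate -s 1G /tmp/test-disk.img                 # sparse 1 GiB scratch image
parted --script /tmp/test-disk.img mklabel gpt    # parted accepts plain files
mkfs.ext4 -F /tmp/test-disk.img                   # -F: format a regular file (overwrites the label above)
tune2fs -l /tmp/test-disk.img                     # list the new filesystem's parameters
e2fsck -f /tmp/test-disk.img                      # force a consistency check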

Related

Cannot open vfio device in docker container as non-root user

I have enabled virtualization in the BIOS and enabled the IOMMU on the kernel command line (intel_iommu=on).
I bound a solarflare NIC to the vfio-pci device and added a udev rule to ensure the vfio device is accessible by my non-root user (e.g., /etc/udev/rules.d/10-vfio-docker-users.rules):
SUBSYSTEM=="vfio", OWNER="myuser", GROUP="myuser"
I've launched my container with -u 1000 and mapped /dev (-v /dev:/dev). Running in an interactive shell in the container, I am able to verify that the device is there with the permissions set by my udev rule:
bash-4.2$ whoami
whoami: unknown uid 1000
bash-4.2$ ls -al /dev/vfio/35
crw-rw---- 1 1000 1000 236, 0 Jan 25 00:23 /dev/vfio/35
However, if I try to open it (e.g., python -c "open('/dev/vfio/35', 'rb')") I get IOError: [Errno 1] Operation not permitted: '/dev/vfio/35'. The same command works outside the container as the normal non-root user with user id 1000!
It seems that there are additional security measures that are not allowing me to access the vfio device within the container. What am I missing?
Docker drops a number of privileges by default, including the ability to access most devices. You can explicitly grant access to a device using the --device flag, which would look something like:
docker run --device /dev/vfio/35 ...
Alternately, you can ask Docker not to drop any privileges:
docker run --privileged ...
You'll note that in both of the above examples it was not necessary to explicitly bind-mount /dev; in the first case, the device(s) you have exposed with --device will show up, and in the second case you see the host's /dev by default.
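For the vfio case specifically, the first approach might look like the following sketch; the image name is a placeholder, and /dev/vfio/vfio (the VFIO container device) usually needs to be exposed alongside the group device:
docker run -u 1000 --device /dev/vfio/vfio --device /dev/vfio/35 -it some-image bash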

Docker container update --memory didn't work as expected

Good morning all,
In the process of trying to train myself in Docker, I'm having trouble.
I created a docker container from a wordpress image, via docker compose.
[root@vps672971 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57bb123aa365 wordpress:latest "docker-entrypoint.s…" 16 hours ago Up 2 0.0.0.0:8001->80/tcp royal-by-jds-wordpress-container
I would like to allocate more memory to this container; however, after running the following command, the information returned by docker stats is not correct.
docker container update --memory 3GB --memory-swap 4GB royal-by-jds-wordpress-container
docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
57bb123aa365 royal-by-jds-wordpress-container 0.01% 9.895MiB / 1.896GiB 0.51% 2.68kB / 0B 0B / 0B 6
I also tried querying the Engine API to retrieve information about my container, but the limit displayed is not correct either.
curl --unix-socket /var/run/docker.sock http:/v1.21/containers/royal-by-jds-wordpress-container/stats
[...]
"memory_stats":{
"usage":12943360,
"max_usage":12955648,
"stats":{},
"limit":2035564544
},
[...]
It seems that the change to the memory allocated to the container didn't take effect.
Does anyone have an idea?
Thank you in advance.
Maxence
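One way to check whether the daemon actually accepted the new limits is to read them back from the container's HostConfig (a sketch; the values are reported in bytes):
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' royal-by-jds-wordpress-container
If these match the requested 3 GB / 4 GB, the update itself succeeded and the discrepancy is only in what docker stats reports.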

How to check the number of cores used by docker container?

I have been working with Docker for a while now. I installed Docker and launched a container using:
docker run -it --cpuset-cpus=0 ubuntu
When I log into the docker console and run
grep processor /proc/cpuinfo | wc -l
It shows 3, which is the number of cores I have on my host machine.
Any idea how to restrict resources for the container and how to verify the restriction?
This issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct output.
The --cpuset-cpus flag takes effect, but it is not reflected in /proc/cpuinfo.
docker inspect <container_name>
will show the details of the launched container; look for "CpusetCpus" in the output to find the setting.
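A shorter check pulls just that field (a sketch using docker inspect's format option):
docker inspect --format '{{.HostConfig.CpusetCpus}}' <container_name>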
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the resources the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not in what the container reports in /proc/cpuinfo.
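A quick way to see that (a sketch, assuming a cgroup v1 host as in the path above):
docker run --rm --cpuset-cpus=0 ubuntu sh -c 'cat /sys/fs/cgroup/cpuset/cpuset.cpus; grep -c processor /proc/cpuinfo'
The first line prints 0 (the container's cpuset), while the second still counts every host CPU.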
Another way to verify is to run a stress tool in a container.
Pinned to 1 CPU, the load stays at 1 core (1 of 3 cores in use, reported as 100% or 33% depending on the tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use 3 cores (3 / 3 cores, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't appear in kernel stats:
$ docker run -m 64M busybox free -m
             total       used       free     shared    buffers     cached
Mem:          3443       2500        943        173        261       1858
-/+ buffers/cache:        379       3063
Swap:         1023          0       1023
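The limit itself is still visible through the memory cgroup rather than free (again a sketch, assuming cgroup v1):
docker run -m 64M busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes
This should print 67108864 (64 MiB) instead of the host totals above.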
yamaneks' answer includes the GitHub issue.
The value should be in double quotes: --cpuset-cpus="0" means it makes use of cpu0 only.

How to increase the swap space available in the boot2docker virtual machine?

I would like to run a docker container that requires a lot of memory on a machine that doesn't have much RAM. I have been trying to increase the swap space available for the container to no avail. Here is the last command I tried:
docker run -d -m 1000M --memory-swap=10000M --name=my_container my_image
Following these tips on how to check memory metrics I found the following:
$ boot2docker ssh
docker#boot2docker:~$ cat /sys/fs/cgroup/memory/docker/35af5a072751c7af80ce7a255a01ab3c14b3ee0e3f15341f7bb22a777091c67b/memory.stat
cache 454656
rss 65015808
rss_huge 29360128
mapped_file 208896
writeback 0
swap 0
pgpgin 31532
pgpgout 22702
pgfault 49372
pgmajfault 0
inactive_anon 28672
active_anon 65183744
inactive_file 241664
active_file 16384
unevictable 0
hierarchical_memory_limit 1048576000
hierarchical_memsw_limit 10485760000
total_cache 454656
total_rss 65015808
total_rss_huge 29360128
total_mapped_file 208896
total_writeback 0
total_swap 0
total_pgpgin 31532
total_pgpgout 22702
total_pgfault 49372
total_pgmajfault 0
total_inactive_anon 28672
total_active_anon 65183744
total_inactive_file 241664
total_active_file 16384
total_unevictable 0
Is it possible to run a container that requires 5G of memory on a machine that only has 4G of physical memory?
This GitHub issue was very helpful in figuring out how to increase the swap space available in the boot2docker-vm. Adapting it to my situation, I used the following commands to ssh into the boot2docker-vm and set up a new swapfile:
boot2docker ssh
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304   # 4 GiB swap file
sudo mkswap $SWAPFILE
sudo chmod 600 $SWAPFILE
sudo swapon $SWAPFILE
exit
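Before re-running the container, the extra swap can be confirmed from inside the VM (a sketch):
boot2docker ssh free -m
The Swap total should now include the new 4 GiB file.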

Docker error at higher core counts on a multi core machine

I am running a CentOS container using Docker on a RHEL 6.5 machine. I am trying to run an MPI application (MILC) on 16 cores.
My server has 20 cores and 128 GB of memory.
My application runs fine up to 15 cores but fails with APPLICATION TERMINATED WITH THE EXIT STRING: Bus error (signal 7) when using 16 cores and up. At 16 cores and up these are the messages I see in the logs:
Jul 16 11:29:17 localhost abrt[100668]: Can't open /proc/413/status: No such file or directory
Jul 16 11:29:17 localhost abrt[100669]: Can't open /proc/414/status: No such file or directory
Jul 16 11:29:17 localhost abrt[100670]: Can't open /proc/417/status: No such file or directory
A few details on the container:
kernel 2.6.32-431.el6.x86_64
Official centos from docker hub
Started container as:
docker run -t -i -c 20 -m 125g --name=test --net=host centos /bin/bash
I would greatly appreciate any and all feedback regarding this. Please do let me know if I can provide any further information.
Regards
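One container-side limit worth checking for SIGBUS in MPI jobs is /dev/shm, which Docker caps at 64 MB by default; shared-memory transports can exhaust it once enough ranks are running. A hedged sketch of two ways to enlarge it (--shm-size requires a reasonably recent Docker, and the 8g figure is an arbitrary choice):
docker run -t -i --shm-size=8g -c 20 -m 125g --name=test --net=host centos /bin/bash
or, on older versions, reuse the host's shared memory:
docker run -t -i -v /dev/shm:/dev/shm -c 20 -m 125g --name=test --net=host centos /bin/bash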
