Docker performance on Ubuntu host 2x+ worse than OS X host

Ok, let's start over after a bunch of investigation. Here is what we know:
Ubuntu host
15.04 on an i7-3820 (quad-core, 3.6 GHz) with a Samsung 850 Pro 512 GB SSD on SATA 6 Gb/s
$ docker info
Containers: 2
Images: 101
Storage Driver: overlay
Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-28-generic
Operating System: Ubuntu 15.04
CPUs: 8
Total Memory: 15.61 GiB
Name: camacho
ID: ZOYN:QGDO:UGMJ:TDDM:WEEM:ZEHJ:4OKB:V5WR:RGCL:NOKG:F5W5:SDEL
WARNING: No swap limit support
OSX host
10.10.5 on an i7 (quad-core, 2.7 GHz) with an Apple SD512E 512 GB SSD on SATA 6 Gb/s (2+ years old)
$ docker info
Containers: 3
Images: 185
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 191
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.9-boot2docker
Operating System: Boot2Docker 1.8.1 (TCL 6.3); master : 7f12e95 - Thu Aug 13 03:24:56 UTC 2015
CPUs: 8
Total Memory: 3.858 GiB
Name: dinghy
ID: PNNP:PI3E:CRUK:27RI:IPHW:HROF:NQA2:XKV6:VGCZ:WT7B:BZ7R:USWD
Debug mode (server): true
File Descriptors: 21
Goroutines: 54
System Time: 2015-09-24T19:16:01.715069994Z
EventsListeners: 1
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Labels:
provider=virtualbox
Observations
Dockerized rspec runs 2x+ faster on OS X
sysbench I/O results are terrible on Ubuntu
Dockerized iozone results are as expected (Ubuntu slightly faster)
Hardware check complete: the Ubuntu host is on a SATA 6 Gb/s cable and port
iozone results
docker run -it threadx/docker-ubuntu-iozone
$ iozone -R -l 5 -u 5 -r 4k -s 100m -F /home/f1 /home/f2 /home/f3 /home/f4 /home/f5 | tee -a /tmp/iozone_results.txt &
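For reference, the sysbench numbers mentioned in the observations come from its file I/O test. The original command isn't shown, so the invocation below is only an assumed example using the sysbench 0.4/0.5 syntax of that era (file size, duration, and test mode are placeholders):
# create the test files, run a 60-second random read/write pass, then clean up
$ sysbench --test=fileio --file-total-size=2G prepare
$ sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --max-time=60 run
$ sysbench --test=fileio --file-total-size=2G cleanup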
Summary
I'm dockerizing our test process because we need stability and concurrency. We've done a bunch of work to limit test times, and a 2x increase in times is a terrible step backwards.
Sysbench shows the Ubuntu host performing 4x worse than OS X, and I can't explain that. dd and iozone tests show the Ubuntu host performing as expected.
Question
Why is my rspec performance worse on the Ubuntu host, which has faster hardware? Where should I investigate? Is the sysbench I/O test an indicator or an anomaly? What are common sources of bad performance in Dockerized Ubuntu?

The difference is the PCIe vs. SATA 6 Gb/s interface. The PCIe storage in the latest generation of MacBook Pro is even faster.
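If you want to verify what link the Ubuntu host's SSD actually negotiated, two standard checks are shown below (the device name /dev/sda is an assumption):
# kernel log reports the negotiated link speed, e.g. "SATA link up 6.0 Gbps"
$ dmesg | grep -i 'sata link'
# smartmontools reports the drive's SATA version and current link speed
$ sudo smartctl -i /dev/sda | grep -i sata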

Related

Podman unable to mount local file into container

I'm planning to move away from Docker to Podman.
I use docker-compose a lot, so I'm planning to switch to podman-compose as well.
However, I'm stuck at the simplest of podman examples: I can't seem to mount a volume into my container. Obviously I'm doing something wrong, but I can't figure out what it is.
My source file definitely exists on my (hardware) host (so not on the podman machine), but I keep getting the error 'no such file or directory'.
The funny thing is that if I manually create the same file locally on the podman machine (podman machine ssh --> touch /tmp/test.txt), it works perfectly fine.
My questions are:
Should I (manually?) mount all my local files onto the Fedora VM (the podman machine) so that this mount can in turn be used in my actual container? If so, how do I do this?
Or should the podman run command below work, and there is something else I'm doing wrong?
$ ls -al /tmp/test.txt
-rw-r--r-- 1 <username> <group> 10 Dec 8 13:33 /tmp/test.txt
$ podman run -it -v /tmp/test.txt:/tmp/test.txt docker.io/library/busybox
Error: statfs /tmp/test.txt: no such file or directory
$ podman run -it -v /tmp/test.txt:/tmp/test.txt:Z docker.io/library/busybox
Error: statfs /tmp/test.txt: no such file or directory
Additional information:
$ podman info --debug
host:
arch: amd64
buildahVersion: 1.23.1
cgroupControllers:
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.0.30-2.fc35.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.30, commit: '
cpus: 10
distribution:
distribution: fedora
variant: coreos
version: "35"
eventLogger: journald
hostname: localhost.localdomain
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.15.6-200.fc35.x86_64
linkmode: dynamic
logDriver: journald
memFree: 11733594112
memTotal: 12538863616
ociRuntime:
name: crun
package: crun-1.3-1.fc35.x86_64
path: /usr/bin/crun
version: |-
crun version 1.3
commit: 8e5757a4e68590326dafe8a8b1b4a584b10a1370
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.12-2.fc35.x86_64
version: |-
slirp4netns version 1.1.12
commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
libslirp: 4.6.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.3
swapFree: 0
swapTotal: 0
uptime: 7h 9m 29.12s (Approximately 0.29 days)
plugins:
log:
- k8s-file
- none
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- docker.io
store:
configFile: /var/home/core/.config/containers/storage.conf
containerStore:
number: 4
paused: 0
running: 0
stopped: 4
graphDriverName: overlay
graphOptions: {}
graphRoot: /var/home/core/.local/share/containers/storage
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 8
runRoot: /run/user/1000/containers
volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
APIVersion: 3.4.2
Built: 1636748737
BuiltTime: Fri Nov 12 20:25:37 2021
GitCommit: ""
GoVersion: go1.16.8
OsArch: linux/amd64
Version: 3.4.2
As mentioned by @ErikSjölund, there has been an active thread on https://github.com/containers/podman. Apparently CoreOS (the podman machine) does not (yet) support this type of volume creation on the machine.
It's not per se Podman that lacks the feature; it's waiting on CoreOS to support it as well.
However, should you want to mount a local directory onto the machine, I recommend having a look at https://github.com/containers/podman/issues/8016#issuecomment-995242552. It describes how to do a read-only mount on CoreOS (or break compatibility with your local version).
Info:
https://github.com/containers/podman/pull/11454
https://github.com/containers/podman/pull/12584
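For illustration only: podman 4.0 and later (not the 3.4.2 shown above) added a --volume flag to podman machine init, which shares a host path into the VM so that -v bind mounts can resolve inside containers. A rough sketch, with example paths:
# recreate the machine with the host directory exposed to the VM
$ podman machine stop
$ podman machine rm
$ podman machine init --volume /tmp:/tmp
$ podman machine start
# the bind mount now resolves on the machine as well as in the container
$ podman run -it -v /tmp/test.txt:/tmp/test.txt docker.io/library/busybox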

How to enable Google Container Optimized OS swap limit support

I'm running Kubernetes/Docker on Google Container Optimized OS on a GCE instance. When I run docker info it says
$ docker info
Containers: 116
Running: 97
Paused: 0
Stopped: 19
Images: 8
Server Version: 1.11.2
Storage Driver: overlay
Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.4.21+
Operating System: Container-Optimized OS from Google
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 14.67 GiB
Name: REDACTED
ID: REDACTED
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
The last line says that there is no swap limit support. I'm having trouble figuring out how to enable swap limit support. I found instructions for Ubuntu/Debian here.
My problem is that my Docker containers get OOMKilled as soon as they reach their memory limit instead of swapping. I want the containers to use swap as a buffer instead of dying immediately.
Container-Optimized OS (COS) actually ships with swap disabled completely. You can verify this by running cat /proc/meminfo | grep SwapTotal in a COS VM, which will show it set to 0 kB.
I'm not sure whether it's a good idea to enable swap in your environment, as it may cause more problems (e.g. disk I/O starvation/slowdown, kernel hangs) if swap is used heavily.
But if you want to try it out, these commands might help (run all of them as root):
cos-swap / # sysctl vm.disk_based_swap=1
vm.disk_based_swap = 1
cos-swap / # fallocate -l 1G /var/swapfile
cos-swap / # chmod 600 /var/swapfile
cos-swap / # mkswap /var/swapfile
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=406d3dfc-3780-44bf-8add-d19a24fdbbbb
cos-swap / # swapon /var/swapfile
cos-swap / # cat /proc/meminfo | grep Swap
SwapCached: 0 kB
SwapTotal: 1048572 kB
SwapFree: 1048572 kB
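Note that the "No swap limit support" warning itself comes from cgroup swap accounting (the swapaccount=1 kernel setting the Ubuntu/Debian instructions refer to), which enabling host swap does not turn on by itself. Once both are in place, per-container limits are set at run time; a sketch with illustrative values and an example image:
# -m caps the container's RAM; --memory-swap is the RAM+swap total it may use
$ docker run -d -m 512m --memory-swap=1g nginx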

Unable to start container

I'm new to Docker and trying to set it up with Chef on CentOS 7.1.
Below is the basic code I wrote for installing Docker, pulling the centos image, and creating a container.
All three tasks execute successfully. Since the container is stopped, I tried to start it manually with docker start containerid, but when I checked docker ps the container was not running. I tried several times but couldn't start the container.
Docker code using chef
docker_service 'default' do
  action [:create, :start]
end

docker_image 'centos' do
  action :pull
end

docker_container 'check2' do
  repo 'centos'
  action :create
end
Docker info:
Containers: 6
Running: 1
Paused: 0
Stopped: 5
Images: 3
Server Version: 1.12.1
Storage Driver: devicemapper
Pool Name: docker-8:1-523814-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 441.3 MB
Data Space Total: 107.4 GB
Data Space Available: 28.08 GB
Metadata Space Used: 1.159 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-229.4.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 6.807 GiB
ID: R24R:ORHY:XJQW:2HNI:U5TV:UGF7:B7VX:P6Z6:UHSR:YIMR:VGJT:4URU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
Could you please help me?
By default, the centos image runs /bin/bash, which exits immediately when no tty (-t) and standard input (-i) are available.
Try running something long-lived in the container:
docker_container 'check2' do
  repo 'centos'
  command 'top -b -d 5'
end
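The same behaviour is easy to reproduce with the plain docker CLI, which helps confirm it isn't a Chef problem:
# exits immediately: with no -i/-t, bash has no stdin or tty and quits at once
$ docker run centos /bin/bash
# stays running, because top keeps working in batch mode
$ docker run -d centos top -b -d 5
# or allocate a tty and keep stdin open to get an interactive shell
$ docker run -it centos /bin/bash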

Docker daemon restart and reattaching to containers

If I kill my Docker daemon process and then restart it, any containers that were running are listed with Exited status and cannot be restarted via docker-compose, as it complains that the container name is already in use.
The containers can be started again with docker start, but that gets tedious with many containers.
Is there any way to restart the Docker daemon, leave containers running (so as not to disrupt traffic), and have the daemon reattach to them?
uname -a:
Linux localhost.localdomain 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
docker info:
Containers: 23
Running: 2
Paused: 0
Stopped: 21
Images: 16
Server Version: 1.11.1
Storage Driver: devicemapper
Pool Name: docker-253:0-1567975-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 3.738 GB
Data Space Total: 107.4 GB
Data Space Available: 28 GB
Metadata Space Used: 7.688 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.14 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 7.64 GiB
Name:
ID:
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Username:
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Looks like this is something that will be addressed in 1.12:
https://github.com/docker/docker/issues/2658
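For reference, the 1.12 feature in question is "live restore". Once on 1.12 or later, the daemon can be told to leave containers running across daemon restarts; a minimal sketch, assuming the default daemon config location:
# /etc/docker/daemon.json
{
  "live-restore": true
}
# restart the daemon once to pick up the change; running containers then
# survive subsequent daemon restarts/upgrades
$ sudo systemctl restart docker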

Is Docker slow when using device mapper on Fedora?

I am experimenting with Docker, and I plan to dockerize my project's CI infrastructure.
I am building Dockerfiles on three different machines: Fedora, Ubuntu, and Boot2Docker (a virtual machine under Windows).
Docker builds on Fedora are a lot slower than on the other two machines, especially the operation that creates a new image after each step in the Dockerfile.
So my question is: is device mapper a lot slower than AUFS, or should I look for some other reason? Should I expect better results with a RHEL 7 setup?
Config1: fedora 21 (3.18.3-201.fc21.x86_64)
sudo docker info
Containers: 27
Images: 1353
Storage Driver: devicemapper
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data Space Used: 82.77 GB
Data Space Total: 107.4 GB
Metadata Space Used: 103.9 MB
Metadata Space Total: 2.147 GB
Udev Sync Supported: true
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Kernel Version: 3.18.3-201.fc21.x86_64
Operating System: Fedora 21 (Twenty One)
CPUs: 8
Total Memory: 31.38 GiB
Config2: Ubuntu 14.04.2 LTS
Containers: 89
Images: 589
Storage Driver: aufs
Backing Filesystem: extfs
Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-49-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 8
Total Memory: 31.38 GiB
Config3: Boot2Docker (virtual linux machine running under Windows. docker default solution for Windows)
docker info
Containers: 14
Images: 215
Storage Driver: aufs
Backing Filesystem: extfs
Dirperm1 Supported: true
Execution Driver: native-0.2
Kernel Version: 3.18.11-tinycore64
Operating System: Boot2Docker 1.6.0 (TCL 5.4); master : a270c71 - Thu Apr
CPUs: 8
Total Memory: 1.961 GiB
I read this article, but it still did not help me clear things up.
Device Mapper's "slowness" has been documented, especially when used with a loop device.
Here's a useful presentation I found: http://jpetazzo.github.io/assets/2015-03-03-not-so-deep-dive-into-docker-storage-drivers.html.
I would look into overlay.
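As a sketch of switching drivers on a Fedora host of that era: overlay needs kernel 3.18+, which this Fedora 21 machine already has, and the daemon takes a --storage-driver (-s) flag. The /etc/sysconfig/docker location is an assumption about Fedora's packaging; adjust for however your daemon is started. Also note that images and containers created under devicemapper will not be visible after the switch, so plan to re-pull or rebuild them.
# /etc/sysconfig/docker -- add the storage driver to the daemon options
OPTIONS="--storage-driver=overlay"
# then restart the daemon
$ sudo systemctl restart docker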
