I'm planning to move away from Docker to Podman.
I use docker-compose a lot, so I am planning to switch to podman-compose as well.
However, I'm stuck at the simplest of podman examples: I can't seem to mount a volume into my container. Obviously I'm doing something wrong, but I can't figure out what it is.
My source file definitely exists on my (hardware) host (so not on the podman machine), but I keep getting the error 'no such file or directory'.
The funny thing is that if I manually create the same file locally on the podman machine (podman machine ssh --> touch /tmp/test.txt) it works perfectly fine.
My questions are:
Should I (manually?) mount all my local files onto the Fedora VM (the podman machine) so that this mount can in turn be used in my actual container? If so, how do I do this?
Or should the podman run command below just work, and is there something else I'm doing wrong?
$ ls -al /tmp/test.txt
-rw-r--r-- 1 <username> <group> 10 Dec 8 13:33 /tmp/test.txt
$ podman run -it -v /tmp/test.txt:/tmp/test.txt docker.io/library/busybox
Error: statfs /tmp/test.txt: no such file or directory
$ podman run -it -v /tmp/test.txt:/tmp/test.txt:Z docker.io/library/busybox
Error: statfs /tmp/test.txt: no such file or directory
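For completeness, the same check from inside the podman machine VM (podman machine ssh can run a single command inside the Fedora CoreOS VM), which is where I suspect the path is actually resolved:
$ podman machine ssh ls -al /tmp/test.txt    # check whether the VM itself sees the file
$ podman machine ssh touch /tmp/test.txt     # after this, the mount above works for me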
Additional information:
$ podman info --debug
host:
arch: amd64
buildahVersion: 1.23.1
cgroupControllers:
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.0.30-2.fc35.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.30, commit: '
cpus: 10
distribution:
distribution: fedora
variant: coreos
version: "35"
eventLogger: journald
hostname: localhost.localdomain
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.15.6-200.fc35.x86_64
linkmode: dynamic
logDriver: journald
memFree: 11733594112
memTotal: 12538863616
ociRuntime:
name: crun
package: crun-1.3-1.fc35.x86_64
path: /usr/bin/crun
version: |-
crun version 1.3
commit: 8e5757a4e68590326dafe8a8b1b4a584b10a1370
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.12-2.fc35.x86_64
version: |-
slirp4netns version 1.1.12
commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
libslirp: 4.6.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.3
swapFree: 0
swapTotal: 0
uptime: 7h 9m 29.12s (Approximately 0.29 days)
plugins:
log:
- k8s-file
- none
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- docker.io
store:
configFile: /var/home/core/.config/containers/storage.conf
containerStore:
number: 4
paused: 0
running: 0
stopped: 4
graphDriverName: overlay
graphOptions: {}
graphRoot: /var/home/core/.local/share/containers/storage
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 8
runRoot: /run/user/1000/containers
volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
APIVersion: 3.4.2
Built: 1636748737
BuiltTime: Fri Nov 12 20:25:37 2021
GitCommit: ""
GoVersion: go1.16.8
OsArch: linux/amd64
Version: 3.4.2
As mentioned by @ErikSjölund, there has been an active thread on https://github.com/containers/podman. Apparently CoreOS (the Podman machine) does not (yet) support different types of volume creation on the machine.
It's not per se Podman that lacks this feature; it's waiting for CoreOS to support it as well.
However, should you want to mount a local directory onto the machine, I recommend having a look at https://github.com/containers/podman/issues/8016#issuecomment-995242552. It describes how to do a read-only mount on CoreOS (or how to break compatibility with the local version).
Info:
https://github.com/containers/podman/pull/11454
https://github.com/containers/podman/pull/12584
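For completeness: newer Podman releases (4.x, if I remember correctly) added a --volume option to podman machine init, which maps a host directory into the VM so that it can then be mounted into containers. A rough sketch (untested on the 3.4.2 setup above; the paths are placeholders):
$ podman machine stop
$ podman machine rm
$ podman machine init -v /path/on/host:/path/in/vm
$ podman machine start
$ podman run -it -v /path/in/vm/test.txt:/tmp/test.txt docker.io/library/busybox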
Related
I'm having problems getting a listing of images from a specific registry that I've set up on a local server. Or maybe I'm having issues publishing them to that registry in the first place; as this is my first adventure into Docker registries, I may just be confused by the terms used.
There's an old question here that looks similar to what I want to achieve, but it appears that Docker has gained built-in support for this in the meantime, so the methods mentioned there are no longer relevant.
I have 2 servers (for the purpose of this question):
rancher-server: This server has a rancher:v2.6.0 container running and a registry:2 container.
k8s-server: This is just a freshly installed server, with the docker and kubernetes packages installed, that I want the rancher server to administer.
On k8s-server, I'm trying to spin up a docker image rancher/rancher-agent:v2.6.0 with a few arguments that should let it relinquish control to the rancher server.
The trick here is that this all has to work without internet access (currently there IS internet access, but this is a PoC for a task that is required to be air-gapped). For the purposes of this question, I really just want to be able to spin up docker containers on k8s-server using the registry on rancher-server.
Currently, this is the state of rancher-server:
# docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9a15ea00d5e registry:2 "/entrypoint.sh /e..." About an hour ago Up About an hour 0.0.0.0:5000->5000/tcp local-registry
1b6bc6b88a8e 08c9693b4357 "entrypoint.sh 08c..." 26 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp goofy_minsky
# docker image ls --all (the list is big, this is just a sample):
REPOSITORY TAG IMAGE ID CREATED
rancher/rancher-agent v2.6.0 9c35a790aa16 2 weeks ago
rancher-server.example.com:5000/rancher/rancher-agent v2.6.0 9c35a790aa16 2 weeks ago
# docker info
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 225
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 66aedde759f33c190954815fb765eedc1d782dd9 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
WARNING: You're not using the default seccomp profile
Profile: /etc/docker/seccomp.json
selinux
Kernel Version: 3.10.0-1160.41.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 2
Total Memory: 3.701 GiB
Name: rancher-server
ID: SA2T:G2IA:CGER:6BC5:HIV2:4T6T:LF3Q:2YVS:SYU7:SQ5V:ACUS:BMEX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
rancher-server.example.com:5000
127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)
On the k8s-server, I try to list the contents of that registry:
# docker image ls --all rancher-server.example.com:5000
REPOSITORY TAG IMAGE ID CREATED SIZE
# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 66aedde759f33c190954815fb765eedc1d782dd9 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
WARNING: You're not using the default seccomp profile
Profile: /etc/docker/seccomp.json
selinux
Kernel Version: 3.10.0-1160.41.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 2
Total Memory: 3.701 GiB
Name: k8s-server
ID: QETJ:QSPQ:VS36:OOOA:ZPYL:CDHK:AJ5G:N4BD:ZQUH:UL6O:PHAB:5UOE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
rancher-server.example.com:5000
127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)
I had to jump through a few hoops to get this far in the first place, for example marking the registry as insecure in /etc/docker/daemon.json on the k8s-server and disabling SELinux on the rancher-server.
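Concretely, that daemon.json entry looks roughly like this (a sketch of what I believe I have on the k8s-server; the daemon needs a restart afterwards):
{
  "insecure-registries": ["rancher-server.example.com:5000"]
}
# systemctl restart docker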
I've tried docker login rancher-server.example.com:5000 first, but that made no difference. It looks to me as if the k8s-server is configured correctly but the images on rancher-server haven't been tagged/pushed properly; yet when I look back at the registry, I don't know what I would do differently, and as far as I understand the registry, it looks fine to me.
I've changed the server names for anonymity and the output has been lightly edited for presentation.
EDIT:
I think I found a clue to what's happening here. It turns out that I can actually run the images from this registry remotely just fine; I simply have no way to discover the names of the images. If I do a docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher-server.example.com:5000/rancher/rancher-agent:v2.6.0 --server https://rancher-server.example.com:5000 --token <token> --ca-checksum <ca-checksum> --etcd --controlplane, it actually pulls and runs the container. So it looks like the registry itself is fine, but maybe the index isn't?
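As far as I understand, docker image ls only lists images in the local daemon's cache, so listing what a remote registry holds has to go through the Registry v2 HTTP API. A sketch of what that check looks like against my insecure registry (adjust the scheme if TLS is enabled):
# curl http://rancher-server.example.com:5000/v2/_catalog
# curl http://rancher-server.example.com:5000/v2/rancher/rancher-agent/tags/list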
In the last couple of days I've been having some issues building or running Docker containers.
It seems that root doesn't have permission to access the filesystem.
E.g. I've created this very simple Dockerfile
FROM centos
RUN id && ls -l /usr/bin/yum /usr/bin/dnf-3 && yum install mlocate
and when I try to build the image I get the error
Step 1/2 : FROM centos
---> 470671670cac
Step 2/2 : RUN id && ls -l /usr/bin/yum /usr/bin/dnf-3 && yum install mlocate
---> Running in f7b32a009a74
uid=0(root) gid=0(root) groups=0(root)
-rwxr-xr-x 1 root root 1954 Dec 19 15:43 /usr/bin/dnf-3
lrwxrwxrwx 1 root root 5 Dec 19 15:43 /usr/bin/yum -> dnf-3
/usr/libexec/platform-python: can't open file '/usr/bin/yum': [Errno 13] Permission denied
The command '/bin/sh -c id && ls -l /usr/bin/yum /usr/bin/dnf-3 && yum install mlocate' returned a non-zero code: 2
The issue seems to be more generic, as I get similar errors even with ubuntu or alpine images, so I suspect it is related to my Ubuntu host.
Note that until a few days ago I could perform any of these tasks without problems.
I've tried adding capabilities and stopping AppArmor, but it doesn't have any effect.
Docker info
Client:
Debug Mode: false
Server:
Containers: 18
Running: 0
Paused: 0
Stopped: 18
Images: 20
Server Version: 19.03.8
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version:
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-31-generic
Operating System: Ubuntu Core 16
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.475GiB
Name: gurdulu-xps
ID: E5JA:3WKI:JWFQ:M5J2:CAZ7:VVKI:2ADB:3W7W:F3F4:VYXZ:7JLP:R7C4
Docker Root Dir: /var/snap/docker/common/var-lib-docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
It was AppArmor in combination with snap. The profile that came with the snap installation had somehow become invalid in the last couple of days.
To be honest I didn't investigate further; I removed the snap and installed Docker with apt instead.
Now it works fine.
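For anyone hitting the same thing, the checks and the switch from snap to apt looked roughly like this (a sketch; the exact package names may differ per Ubuntu release):
sudo dmesg | grep -i apparmor      # look for DENIED entries related to the docker snap
sudo aa-status                     # list the AppArmor profiles currently loaded
sudo snap remove docker
sudo apt update && sudo apt install docker.io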
I have researched this issue extensively to no avail, and have also asked on unix.stackexchange.com without success, so I'm asking here in the hope that someone else has some insight into why this is occurring; asking on both the Unix board and GitHub has shed no light whatsoever.
I cannot get Docker to play nice on Antergos, or be reachable without sudo. Running container builds with sudo causes a number of issues, such as ssh keys not being detected and nginx not being recognized. This problem arose about 3 days ago, and rolling back has not made any difference. Uninstalling Docker completely and reinstalling also did not make any difference. Nor has updating my configuration, permissions, or any other available setting.
System version: 4.17.8-1-ARCH #1 SMP PREEMPT Wed Jul 18 09:56:24 UTC 2018 x86_64 GNU/Linux
Current docker version: 18.04.0-ce (also tried on all versions up to current 18.05 to no avail, have rolled back one version at a time with no effect).
Existing research led to the typical issue being that the user needs to be in the docker group to circumvent sudo, however I am, and it is still not working. I have also checked here, here, and here, and all of them offer the same (not working) answer.
Please do not suggest checking my user group or adding my user to the docker group, as this is not the issue, as outlined below.
Everything worked fine until a couple of days ago. I am inclined to believe an automatic update broke it.
Below is some context:
Output of groups
root http docker users wheel
When calling any docker command without sudo (eg docker info, docker ps, docker run ... docker-compose up, etc), I get the following:
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
It is definitely running. systemctl status docker yields the following:
● docker.service - Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-07-20 14:52:54 EDT; 21min ago
Docs: https://docs.docker.com
Main PID: 472 (dockerd)
Tasks: 50 (limit: 4915)
Memory: 139.0M
CGroup: /system.slice/docker.service
├─ 472 /usr/bin/dockerd -H fd://
├─ 620 docker-containerd --config /var/run/docker/containerd/containerd.toml
├─ 802 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e0942c95c35608cecbbe761d27a2c5386d9faec072cf8031>
├─ 818 bash -c echo "RESTARTING GUlP COMMAND" && npm rebuild node-sass && npm upgrade && npm update && npm install && gulp && tail -f /dev/null
└─1572 tail -f /dev/null
It is likewise displayed when running htop and ps aux | grep docker.
perms for ls -la $(which docker):
-rwxr-xr-x 1 root docker 36823912 Apr 17 18:48 /usr/bin/docker
According to this, it should absolutely be accessible without sudo, but it still chokes on every command run without sudo. I cannot just run it with sudo, because a number of production build scripts rely on user-space locality and break when sudo is applied.
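For completeness, one detail that may matter: the non-sudo error above points at tcp://localhost:2375 even though dockerd is started with -H fd://, so checking the client-side environment and the socket is probably worthwhile (a diagnostic sketch, not a confirmed fix):
echo $DOCKER_HOST              # if this prints tcp://localhost:2375, the client is ignoring the unix socket
unset DOCKER_HOST
ls -la /var/run/docker.sock    # should be owned by root:docker with srw-rw---- permissions
docker info                    # retry without sudo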
output of sudo docker info
Containers: 15
Running: 1
Paused: 0
Stopped: 14
Images: 30
Server Version: 18.04.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk
syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.17.8-1-ARCH
Operating System: Antergos Linux
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.02GiB
Name: Indibog
ID: OCC4:P3QN:B5EU:J2Y4:LZN4:WAIC:2F5V:ZQZD:NLXY:DWVE:X2LB:TLEQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 27
Goroutines: 39
System Time: 2018-07-20T15:04:01.745176194-04:00
EventsListeners: 0
Username: mopsyd
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
192.168.40.60:5000
sandbox.cdp.local:5000
127.0.0.0/8
Live Restore Enabled: false
My OS version: CentOS 7.3
Kernel version: 3.10.0-514.16.1.el7.x86_64
Docker version: 1.12.6
I modified the config file /lib/systemd/system/docker.service. After changing the option "--exec-opt native.cgroupdriver" from systemd to cgroupfs, I find that Docker cannot run any images!
[root@surenode2 system]# cat /lib/systemd/system/docker.service |grep cgroup
--exec-opt native.cgroupdriver=cgroupfs \
[root@surenode2 system]# docker images | grep mysql
docker.io/mysql latest e799c7f9ae9c 3 weeks ago 407.3 MB
[root@surenode2 system]# docker run -p 3307:3307 -e MYSQL_ROOT_PASSWORD=123456 -d mysql
3395c8d505d3fc20d39e25c510a090649f9f447bce985028ea7274e79183d077
/usr/bin/docker-current: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:334: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: \\\"\"\n".
And if I change exec-opt native.cgroupdriver back to systemd, Docker can run images again.
Docker 1.12.6 ("community edition") is EOL and no longer supported;
We can install the latest docker version refer to 'https://docs.docker.com/engine/installation/linux/centos/'.
When using the latest docker version ,this issue is no longer reproduce !
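The install steps from that page are roughly the following (a sketch; the repository URL is the one from the official docs):
# yum install -y yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker-ce
# systemctl enable docker
# systemctl start docker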
[root@surenode1 ~]# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.05.0-ce
Storage Driver: devicemapper
Pool Name: docker-253:1-393732-pool
Pool Blocksize: 65.54kB
Base Device Size: 10.74GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 11.8MB
Data Space Total: 107.4GB
Data Space Available: 50.86GB
Metadata Space Used: 581.6kB
Metadata Space Total: 2.147GB
Metadata Space Available: 2.147GB
Thin Pool Minimum Free Space: 10.74GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-514.16.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.702GiB
Name: surenode1
ID: NIYU:7BRX:JIQP:ZJMW:ZV6N:3336:5JSB:MWVQ:WR72:AO7J:QOEW:CHCA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[root@surenode1 ~]# docker version
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:29 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:29 2017
OS/Arch: linux/amd64
Experimental: false
[root@surenode1 tmp]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql latest e799c7f9ae9c 3 weeks ago 407MB
[root@surenode1 tmp]# ps -ef | grep docker
root 10173 1 1 17:31 ? 00:00:03 /usr/bin/dockerd --exec-opt native.cgroupdriver=cgroupfs
root 10177 10173 0 17:31 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 10535 9634 0 17:34 pts/0 00:00:00 grep --color=auto docker
[root@surenode1 tmp]# docker run -p 3307:3307 -e MYSQL_ROOT_PASSWORD=123456 -d mysql
8f2d30ea779b872604bdf0d3d500de16e17d1409cdc5b2688893202bfcebbf16
[root@surenode1 tmp]#
Also, we can find a bug description about this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1444662
When running the command:
docker run -it -v some_volume:/abc/xyz --volume-driver=btrfs a_docker_image /bin/bash
the terminal shows:
docker: Error response from daemon: create some_volume: Error looking up volume plugin btrfs: plugin not found.
====================
But if I create the volume first:
docker volume create --opt type=btrfs --name some_volume
It creates the volume successfully. Now if I try to run a container and create a new volume:
docker run -it -v some_volume:/abc/xyz --volume-driver=btrfs a_docker_image /bin/bash
It shows (which of course makes sense, since a volume with the same name has already been created):
docker: Error response from daemon: create some_volume: conflict: volume name must be unique.
And if I try to run a container with the existing volume:
docker run -it -v some_volume:/abc/xyz a_docker_image /bin/bash
It returns:
docker: Error response from daemon: missing device in volume options.
====================
Could anyone tell me how to install the btrfs volume plugin for Docker? I haven't found any useful information about it, except some general introductions to plugins (but not how to install one). Thanks in advance.
As suggested by @forevergenin in the comments, here is my Docker environment:
docker version
Client:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:13:28 2016
OS/Arch: darwin/amd64
Server:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 19:36:04 2016
OS/Arch: linux/amd64
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 39
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 121
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 996.1 MiB
Name: default
ID: 74TB:OVH5:S3GD:UQUG:ILWG:5NVH:2MSH:5H7R:A5H4:GSLV:2Q6D:ZIR6
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): true
File Descriptors: 15
Goroutines: 32
System Time: 2016-08-15T13:57:03.866016657Z
EventsListeners: 0
Username: thyrlian
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
I am new to btrfs with docker, but here is my understanding:
Using btrfs as a storage driver means that Docker will use btrfs internally for the images and containers (that is explained here). Specifically, look at the installation details here: they make you create a btrfs partition and mount /var/lib/docker on it. When you restart your Docker daemon after that, docker info should tell you "Storage Driver: btrfs".
With the btrfs driver, the image's base is saved in /var/lib/docker/btrfs/subvolumes, and then snapshots are taken (but I am not sure where exactly they are saved). That is done automatically, without you specifying the driver (I would guess that specifying the driver is useful when you have multiple drivers that can run on a given filesystem), but the btrfs driver seems to be the default when /var/lib/docker is formatted as btrfs.
Regarding volumes, I believe that they are not saved as btrfs subvolumes. They seem to be simple folders in /var/lib/docker/volumes/. Again, I can imagine this being the normal behavior of Docker: images and containers are layered, but volumes are simple directories.
At least, that is the behavior I observe:
If I pull an image or create a container, I get btrfs subvolumes created.
I could create a volume by simply using docker volume create testvol1 and mount it in a container. But then it is not a btrfs subvolume.
If you want to have your volumes in btrfs subvolumes, then I believe that you might need to create the subvolumes manually and mount the volumes in them directly.
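A minimal sketch of that manual approach, assuming a btrfs filesystem is mounted somewhere like /mnt/btrfs (a placeholder path) and reusing the names from the question:
btrfs subvolume create /mnt/btrfs/some_volume                                 # create the subvolume by hand (as root)
docker run -it -v /mnt/btrfs/some_volume:/abc/xyz a_docker_image /bin/bash    # bind-mount it into the container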