When I run the command:
docker run -it -v some_volume:/abc/xyz --volume-driver=btrfs a_docker_image /bin/bash
the terminal shows:
docker: Error response from daemon: create some_volume: Error looking up volume plugin btrfs: plugin not found.
====================
But if I create the volume first:
docker volume create --opt type=btrfs --name some_volume
It creates the volume successfully. Now if I try to run a container and create a new volume:
docker run -it -v some_volume:/abc/xyz --volume-driver=btrfs a_docker_image /bin/bash
It shows (which of course makes sense, since a volume with the same name has already been created):
docker: Error response from daemon: create some_volume: conflict: volume name must be unique.
And if I try to run a container with the existing volume:
docker run -it -v some_volume:/abc/xyz a_docker_image /bin/bash
It returns:
docker: Error response from daemon: missing device in volume options.
====================
Could anyone tell me how to install the btrfs volume plugin for Docker? I haven't found any useful information about it, apart from some general introductions to plugins (but nothing on how to install one). Thanks in advance.
As suggested by @forevergenin in the comments, here is my Docker environment:
docker version
Client:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:13:28 2016
OS/Arch: darwin/amd64
Server:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 19:36:04 2016
OS/Arch: linux/amd64
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 39
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 121
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 996.1 MiB
Name: default
ID: 74TB:OVH5:S3GD:UQUG:ILWG:5NVH:2MSH:5H7R:A5H4:GSLV:2Q6D:ZIR6
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): true
File Descriptors: 15
Goroutines: 32
System Time: 2016-08-15T13:57:03.866016657Z
EventsListeners: 0
Username: thyrlian
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
I am new to btrfs with docker, but here is my understanding:
Using btrfs as a storage driver means that docker will use btrfs internally for the images and containers (that is explained here). Specifically, look at the installation details here: they make you create a btrfs partition and mount /var/lib/docker on it. When you restart your docker daemon after that, docker info should tell you "Storage Driver: btrfs".
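For reference, a rough sketch of that setup on a Linux host (the device name /dev/xvdf is a placeholder for a spare disk; formatting it destroys its contents, and images under the old /var/lib/docker will no longer be visible):
sudo systemctl stop docker             # or: sudo service docker stop
sudo mkfs.btrfs -f /dev/xvdf           # placeholder device, wipes its data
sudo mount /dev/xvdf /var/lib/docker
sudo systemctl start docker
docker info | grep "Storage Driver"    # should now report: Storage Driver: btrfs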
Using the btrfs driver, the image's base is saved in /var/lib/docker/btrfs/subvolumes, and then they do snapshots (but I am not sure where they save them exactly). That is done automatically without you specifying the driver (I would guess that specifying the driver is useful when you have multiple drivers that can run on a given filesystem; the btrfs driver seems to be the default when /var/lib/docker is formatted as btrfs).
Regarding volumes, I believe that they are not saved as btrfs subvolumes. They seem to be simple folders in /var/lib/docker/volumes/. Again, I can imagine this as being the normal behavior of docker: images and containers are layered, but volumes are simple directories.
At least, that is the behavior I observe:
If I pull an image or create a container, I get btrfs subvolumes created.
I could create a volume by simply using docker volume create testvol1 and mount it in a container. But then it is not a btrfs subvolume.
If you want to have your volumes in btrfs subvolumes, then I believe that you might need to create the subvolumes manually and mount the volumes in them directly.
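A minimal sketch of that manual approach, assuming /mnt/btrfs is an existing btrfs mount point on the host (the paths are placeholders):
sudo btrfs subvolume create /mnt/btrfs/some_volume    # dedicated subvolume for the volume data
docker run -it -v /mnt/btrfs/some_volume:/abc/xyz a_docker_image /bin/bash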
Related
I'm getting this error when pulling some docker images (but not all):
failed to register layer: Error processing tar file(exit status 1): operation not permitted
For example: docker pull nginx works, but not docker pull redis.
I get the same result whether I run the command as a user that is part of the docker group, with sudo, or as root.
If I run dockerd in debug mode I see this in the logs:
DEBU[0025] Downloaded 5233d9aed181 to tempfile /var/lib/docker/tmp/GetImageBlob023191751
DEBU[0025] Applying tar in /var/lib/docker/overlay2/e5290b8c50d601918458c912d937a4f6d4801ecaa90afb3b729a5dc0fc405afc/diff
DEBU[0027] Applied tar sha256:16ada34affd41b053ca08a51a3ca92a1a63379c1b04e5bbe59ef27c9af98e5c6 to e5290b8c50d601918458c912d937a4f6d4801ecaa90afb3b729a5dc0fc405afc, size: 79185732
(...)
DEBU[0029] Applying tar in /var/lib/docker/overlay2/c5c0cfb9907a591dc57b1b7ba0e99ae48d0d7309d96d80861d499504af94b21d/diff
DEBU[0029] Cleaning up layer c5c0cfb9907a591dc57b1b7ba0e99ae48d0d7309d96d80861d499504af94b21d: Error processing tar file(exit status 1): operation not permitted
INFO[0029] Attempting next endpoint for pull after error: failed to register layer: Error processing tar file(exit status 1): operation not permitted
INFO[0029] Layer sha256:938f1cd4eae26ed4fc51c37fa2f7b358418b6bd59c906119e0816ff74a934052 cleaned up
(...)
If I run watch -n 0 "sudo ls -lt /var/lib/docker/overlay2/" while the image is pulling, I can see new folders appearing (and disappearing after it fails). The permissions on /var/lib/docker/overlay2/ are root:root:700, so I don't think it's exactly a permission issue.
Here are some details about the environment:
I have a Proxmox host running the LXC container where I'm having the issue.
The container itself is running Debian 8.
And here are the various versions:
$> uname -a
Linux [redacted-hostname] 4.10.15-1-pve #1 SMP PVE 4.10.15-15 (Fri, 23 Jun 2017 08:57:55 +0200) x86_64 GNU/Linux
$> docker version
Client:
Version: 17.06.0-ce
API version: 1.30
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:20:04 2017
OS/Arch: linux/amd64
Server:
Version: 17.06.0-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:18:59 2017
OS/Arch: linux/amd64
Experimental: false
$> docker info
Containers: 20
Running: 0
Paused: 0
Stopped: 20
Images: 28
Server Version: 17.06.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Kernel Version: 4.10.15-1-pve
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.906GiB
Name: resumed-dev
ID: EBJ6:AFVS:L3RC:ZEE7:A6ZJ:WDQE:GTIZ:RXHA:P4AQ:QJD7:H6GG:YIQB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 16
Goroutines: 24
System Time: 2017-08-17T14:17:07.800849127+02:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
EDIT: This will be fixed in any Moby release after December 18, 2017, via this merge. Will update again when it is fully incorporated into Docker.
If your container is unprivileged, this appears to be an issue with the overlay2 storage driver for Docker. This does not appear to be an issue with overlay (GitHub issue). So either use the overlay storage driver instead of overlay2, or make your container privileged.
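To switch to the overlay driver, a rough sketch (assuming you can edit /etc/docker/daemon.json; note that switching storage drivers hides images and containers created under the previous driver):
In /etc/docker/daemon.json:
{
  "storage-driver": "overlay"
}
Then restart the daemon and verify:
sudo systemctl restart docker
docker info | grep "Storage Driver"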
I have almost the same environment as you and ran into the same problem.
Some images work perfectly (alpine), while others fail at the cleanup step (ubuntu).
Running strace -f dockerd -D and then docker pull or docker load gives the reason:
mknodat(AT_FDCWD, "/dev/agpgart", S_IFCHR|0660, makedev(10, 175)) = -1 EPERM (Operation not permitted)
Unprivileged containers prohibit mknod by design. If you insist on nesting Docker inside LXC, you will have to use a privileged container. (And note that an existing unprivileged container cannot be converted to a privileged one directly, due to uid/gid mapping.)
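You can reproduce the failing check directly inside the LXC container with the same mknod call that appears in the strace output (a sketch; the device numbers 10,175 are taken from the trace above):
mknod /dev/agpgart c 10 175    # fails with "Operation not permitted" in an unprivileged container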
How do I get the sha256 checksum of an already locally built docker image?
I want to use the checksum to annotate a FROM instruction in a derived image:
FROM name@sha256:checksum
I already tried checksums from docker inspect.
Neither the first nor the last of the checksums in the Layers list worked.
The one in "Id" did not work.
The one in "Parent" did not work.
The one in "Container" did not work.
The one in "Image" did not work.
Some of these I only tried out of desperation to finally find the correct checksum for my image, but I cannot find it. The only thing I have not tried yet, because of the number of layers, is going through all of the layers in case they are in a random order; but storing them like that would not make sense to begin with.
The error I see when I run docker build -t <some name> . in the directory of the derived image's Dockerfile, when it is not working, is:
Step 1/7 : FROM name@sha256:<checksum>
repository name not found: does not exist or no pull access
Info
Docker version: Docker version 17.05.0-ce, build 89658be (obtained via docker --version)
Output of docker info:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 3841
Server Version: 17.05.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 2620
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-78-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.684GiB
Name: xiaolong-hp-pavilion
ID: QCJS:JPK4:KC7J:6MYF:WWCA:XQM2:7AF7:HWWI:BRZK:GT6B:D2NP:OJFS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
The checksum Docker is looking for in the FROM line comes from the registry server. In the inspect output, you'll see it in the RepoDigests section:
docker inspect -f '{{.RepoDigests}}' $image_name
If you haven't pushed this image to a registry server, then you won't be able to use this hash value.
E.g.:
$ docker inspect -f '{{.RepoDigests}}' busybox:latest
[busybox@sha256:32f093055929dbc23dec4d03e09dfe971f5973a9ca5cf059cbfb644c206aa83f]
$ cat df.testsha
FROM busybox@sha256:32f093055929dbc23dec4d03e09dfe971f5973a9ca5cf059cbfb644c206aa83f
CMD echo "hello world"
$ docker build -f df.testsha -t test-sha .
Sending build context to Docker daemon 23.35MB
Step 1/2 : FROM busybox@sha256:32f093055929dbc23dec4d03e09dfe971f5973a9ca5cf059cbfb644c206aa83f
---> 00f017a8c2a6
Step 2/2 : CMD echo "hello world"
---> Running in c516e5b6a694
---> 68dc47866183
Removing intermediate container c516e5b6a694
Successfully built 68dc47866183
Successfully tagged test-sha:latest
$ docker run --rm test-sha
hello world
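If the image only exists locally, a sketch of one way to get a usable digest is to push it to a registry first (myrepo is a placeholder for your Docker Hub account or private registry):
docker tag test-sha myrepo/test-sha:latest
docker push myrepo/test-sha:latest
docker inspect -f '{{.RepoDigests}}' myrepo/test-sha:latest
# prints something like [myrepo/test-sha@sha256:...], which can then be used in a FROM line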
I want to dynamically change the contents of the container's directory according to the mounted removable USB disks. To do this, I follow these steps:
Run the container with the -v option, which mounts the host directory (/mnt) into the container (/share). Assume the name of the new container is test. The command looks like docker run --name test -d -v /mnt:/share ubuntu:latest.
Inspect the contents via docker exec -it test /usr/bin/bash. For now, /share is empty.
Mount the USB disk on the host by executing the mount /dev/sdxY /mnt command. The /mnt directory on the host now contains the files and directories stored on the removable USB disk.
Inspect the contents in the container again. The /share directory in the container is still empty; nothing has changed at all.
If I do this in reverse order: 1) first mount the USB disk on the host, 2) run the container, 3) umount the USB disk, then the contents in the container remain, but the /mnt directory on the host is swept clean.
Does Docker have some mechanism to keep the contents synchronized between the container and the host after I mount/umount the disk?
docker info:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.03.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 14
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.8.0-46-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.684 GiB
Name: tri-xps
ID: LMPY:EGYU:QUAF:DPUF:GZNR:AHFS:URFD:EFW3:5DFV:WHR3:NAYJ:PKQV
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
You can use the --device option to access a USB device directly within the container.
docker run -t -i --device=/dev/ttyUSB0 ubuntu bash
More documentation available at https://docs.docker.com/engine/reference/commandline/run/#add-host-device-to-container---device
Sorry for my late post. After I created an issue on Docker's official GitHub page, @cpuguy83 gave me the answer: https://github.com/moby/moby/issues/32512.
To make the mount operations propagate to the container, append the slave flag to the -v option, e.g.:
-v media/usb:/smb_share:slave
For more information, check HERE.
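Applied to the example from the question, a rough sketch (it assumes the USB disk gets mounted at /mnt on the host; sleep infinity just keeps the demo container running):
docker run --name test -d -v /mnt:/share:slave ubuntu:latest sleep infinity
mount /dev/sdxY /mnt          # mount the USB disk on the host afterwards
docker exec test ls /share    # the disk contents should now be visible inside the container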
When I use volume mapping in Docker 1.13.0, some files are corrupted.
In the Docker container, when I run "ls -l" on the folder, it is displayed like this:
"?????????? ? ? ? ? ? file_corrpted_and_cant_access.conf"
and I can't edit or delete it.
It just shows "No such file or directory".
I think it can't link the file inode and path.
How can I fix it?
Additional information
After volume mapping, I create a symlink to the mapped folder inside the Docker container.
docker run --privileged -d -v /opt/volume_mapping_folder/:/inside_container/inside_folder --restart=always testcontainer
and inside the Docker container I symlink the folder:
docker exec -it testcontainer /bin/bash
ln -s /inside_container/inside_folder /opt/appFolder
touch /opt/appFolder/file_corrpted_and_cant_access.conf
Output of docker info:
Server Version: 1.13.0
Storage Driver: overlay
Backing Filesystem: xfs
Supports d_type: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-514.6.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 5.671 GiB
ID: 2D2E:73MA:BJQ3:WQAJ:BR3W:TYF5:F3MQ:E7S3:KZGV:A64K:ASZK:UEXE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
This is related to using the xfs filesystem with the overlay storage driver.
https://github.com/docker/docker/issues/27358
The solution is one of these:
format the xfs partition with ftype set to 1
use the ext4 filesystem instead of xfs
set the Docker storage driver to 'devicemapper'
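A quick way to check and apply the first option (a sketch; /dev/sdX is a placeholder device, and reformatting destroys its data):
xfs_info / | grep ftype        # check the filesystem holding /var/lib/docker; ftype=1 is required for overlay
mkfs.xfs -n ftype=1 /dev/sdX   # reformat a partition with ftype enabled (placeholder device, wipes data)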
I hit a similar problem today using Docker 1.13.0 running a CentOS image on a RHEL 7.3 system. Files in my container's root filesystem were acting odd. For example, I would try to delete a file, but it would not be removed from the filesystem and displayed the same '?' you posted when running 'ls'. I also had problems trying to change ownership of files. Anyway, my guess is that Docker 1.13.0 changed the default storage driver to overlay, when in the past (Docker 1.12.5) it was devicemapper. I changed the default back to devicemapper and the problems went away.
I am not a Docker or Linux filesystem expert, and I am not sure whether the change to overlay as the default was intentional.
I was trying to install GitLab using Docker containers and was able to bring up GitLab successfully using the docker-compose file from sameersbn.
However, after a few uninstalls (docker rm) and reinstalls (docker-compose up) as part of CI testing, I started getting this weird error while running docker-compose up or docker run:
[root@server.com ~]# docker run java
Unable to find image 'java:latest' locally
Pulling repository docker.io/library/java
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/java/images: malformed MIME header line: Too Many Requests (HAP429)..
See 'docker run --help'.
I can't seem to be able to pull any Docker images using docker run or docker-compose.
I couldn't find much help online regarding this issue.
As per the Docker Hub forum (https://forums.docker.com/t/429-too-many-requests-how-to-fix-this-isssue/3971/7), the issue should disappear after an hour, but I waited half a day without much luck!
Here are the details of my installation:
[root@server build]# docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64
[root@server build]# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 15
Server Version: 1.12.1
Storage Driver: devicemapper
Pool Name: docker-thinpool
Pool Blocksize: 524.3 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 3.077 GB
Data Space Total: 61.2 GB
Data Space Available: 58.12 GB
Metadata Space Used: 1.204 MB
Metadata Space Total: 641.7 MB
Metadata Space Available: 640.5 MB
Thin Pool Minimum Free Space: 6.119 GB
Udev Sync Supported: true
Deferred Removal Enabled: true
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.64 GiB
Name: server.com
ID: SDFS:SDEF:GKY5:UKGK:QHWR:H4EC:wEFw:YVAS:JE2V:A5YB:FDSW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 17
Goroutines: 23
System Time: 2016-10-09T18:34:43.969512367-05:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-ip6tables is disabled
Insecure Registries:
127.0.0.0/8
Any help would be much appreciated. I'm stuck with this error and can't proceed any further with my GitLab setup.
Thanks.
This may or may not be relevant to your situation, but I can report that I had the same error (didn't go away within an hour) and it was related to the fact that I was on a VPN to my office. I don't know if the VPN was the issue, or the NAT of my workplace, but when I turned off the VPN, the issue went away.
Note, I was running Docker for Windows (W7), so my circumstances are quite different from yours. But perhaps this answer will be useful to you or to anyone else looking for an answer.
Bottom line: If you are using a VPN, switch it off and try again. If you are inside a corporate firewall, try from outside.
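If you want to check whether your current network path is the one being throttled, a small diagnostic sketch using the URL from the error message above:
curl -s -o /dev/null -w "%{http_code}\n" https://index.docker.io/v1/repositories/library/java/images
# 429 means the registry (or a proxy/NAT in between) is rate limiting requests from this network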