I am trying to mount a network drive as a volume. This is the command I am trying:
docker run -v //NetworkDirectory/Folder:/data alpine ls /data
I am running this command on Windows and the /data directory is coming up empty. How can I mount this network directory as a volume on the Windows host and access it inside the container?
Mounting local directories works just fine; for example, the following command behaves as expected.
docker run -v c:/Users/:/data alpine ls /data
I can make it work on Linux, since there I can mount the share with cifs-utils onto a local directory and use that directory as the volume.
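For reference, the Linux workaround looks roughly like this (assuming cifs-utils is installed; the local mount point is arbitrary and youruser is a placeholder):

sudo mkdir -p /mnt/share
sudo mount -t cifs -o username=youruser //NetworkDirectory/Folder /mnt/share
docker run -v /mnt/share:/data alpine ls /data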
Edit: Looks like this is not possible: How to mount network Volume in Docker for Windows (Windows 10)
My colleague came up with this; it works with our company network drive and might help someone out there.
We start by creating a docker volume named mydockervolume.
docker volume create --driver local --opt type=cifs --opt device=//networkdrive-ip/Folder --opt o=user=yourusername,domain=yourdomain,password=yourpassword mydockervolume
--driver specifies the volume driver name.
--opt sets driver-specific options; I guess they are passed to the Linux mount command when the container starts up.
We can then test that the volume works with
docker run -v mydockervolume:/data alpine ls /data
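You can also sanity-check what was stored with docker volume inspect. Note that the options are only applied when a container actually mounts the volume, so a typo in the credentials will only surface at that point:

docker volume inspect mydockervolume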
Here you can read more about driver-specific options and docker volume create.
I found this question when looking for something similar, but although it's old it's missing some key information, possibly because it wasn't available at the time.
CIFS storage is, I believe, only for when you are connecting to a Windows system, as I do not believe Linux uses it at all.
EDIT: It looks like Docker treats SMB (Samba) shares as CIFS volumes.
This same thing can be done with NFS, which is less secure, but is supported by almost everything.
You can create an NFS volume in a similar way to the CIFS one, with just a few changes. I'll list both so they can be seen side by side.
When using NFS on WSL2 you first need to install the NFS service into the Linux host OS. I believe CIFS requires a similar one, but as I don't use it I'm not certain.
EDIT: On WSL2 it looks like Docker's SMB (Samba)/CIFS volumes either don't require any extra dependencies or I already had them, possibly from the same package I install for NFS below.
In my case the host OS is Ubuntu, but you should be able to find the appropriate package by looking up your system's equivalent of nfs-common.
sudo apt update
sudo apt install nfs-common
That's it. That will install the service so NFS works in Docker. (It took me forever to realize this was the problem, since it doesn't seem to be mentioned as a requirement anywhere.)
On the network device you need to have set NFS permissions for the NFS folder; in my case this is done at the shared-folder level, with the mount then pointing to a folder inside it. That's fine. In my case the NAS that is my server mounts at #IP#/volume1/folder; within the NAS I never see volume1 in the directory structure, but that full path to the shared folder is shown on the settings page when I set the NFS permissions. I'm not including the volume1 part because your system will likely be different. You want the FULL PATH after the IP (use the numeric IP, NOT the hostname) exactly as your NFS share reports it, whatever it may be.
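If you're unsure of the exact export path, you can list the server's exports from the host first (showmount ships with the nfs-common package installed above; substitute your server's IP):

showmount -e NET.WORK.DRIVE.IP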
The nolock option is often needed but may not be on your system. It just disables the ability to "lock" files.
The soft option means that if the system cannot connect to the mount it will not hang. If you need it to only work when the mount is actually there, you can change this to hard instead.
The rw option is for read/write; ro would be for read only.
As I don't personally use the CIFS volume, the options set are just the ones in the examples I found; whether they are necessary for you will need to be looked into.
The username & password are required & must be included for CIFS.
uid & gid are Linux user & group settings & should be set, I believe, to what your container needs, as Windows doesn't use them to my knowledge.
file_mode=0777 & dir_mode=0777 are Linux read/write permissions, essentially like chmod 0777, giving anything that can access the file read/write/execute permissions (more info in link #4 below). These should also be set for the Docker container, not the CIFS host.
noexec has to do with execution permissions, but I don't think it actually does anything here. nosuid limits the mount's ability to access files tied to a specific user ID & shouldn't be removed unless you know you need that, as it's a protection. nosetuids means it won't set UID & GID for newly created files. nodev means no access to, or creation of, devices on the mount point. vers=1.0 is, I think, a fallback for compatibility; I personally would not include it unless there is a problem or it doesn't work without it.
In these examples I'm mounting //NET.WORK.DRIVE.IP/folder/on/addr/device to a volume named "my-docker-volume" in read/write mode. The CIFS volume is using the user supercool with password noboDyCanGue55.
NFS from the CLI
docker volume create --driver local --opt type=nfs --opt o=addr=NET.WORK.DRIVE.IP,nolock,rw,soft --opt device=:/folder/on/addr/device my-docker-volume
CIFS from the CLI (may not work if Docker is installed on a system other than Windows; it will only connect to an IP on a Windows system)
docker volume create --driver local --opt type=cifs --opt o=user=supercool,password=noboDyCanGue55,rw --opt device=//NET.WORK.DRIVE.IP/folder/on/addr/device my-docker-volume
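Either volume can then be smoke-tested the same way as in the question, for example:

docker run --rm -v my-docker-volume:/data alpine ls /data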
This can also be done within Docker Compose or Portainer.
When you do it there, you will need to add a volumes: section at the bottom of the compose file, with no indent, on the same level as services:
In this example I am mounting two volumes:
my-nfs-volume, from //10.11.12.13/folder/on/NFS/device, in read/write mode, mounted in the container at /nfs
my-cifs-volume, from //10.11.12.14/folder/on/CIFS/device, authenticating as user supercool with password noboDyCanGue55, in read/write mode, mounted in the container at /cifs
version: '3'
services:
  great-container:
    image: imso/awesome/youknow:latest
    container_name: totally_awesome
    environment:
      - PUID=1000
      - PGID=1000
    ports:
      - 1234:5432
    volumes:
      - my-nfs-volume:/nfs
      - my-cifs-volume:/cifs

volumes:
  my-nfs-volume:
    name: my-nfs-volume
    driver_opts:
      type: "nfs"
      o: "addr=10.11.12.13,nolock,rw,soft"
      device: ":/folder/on/NFS/device"
  my-cifs-volume:
    driver_opts:
      type: "cifs"
      o: "username=supercool,password=noboDyCanGue55,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=1.0"
      device: "//10.11.12.14/folder/on/CIFS/device/"
More details can be found here:
https://docs.docker.com/engine/reference/commandline/volume_create/
https://www.thegeekdiary.com/common-nfs-mount-options-in-linux/
https://web.mit.edu/rhel-doc/5/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-client-config-options.html
https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/
I didn't find a native CIFS storage driver in Docker.
You can use an external volume plugin like this one: https://github.com/ContainX/docker-volume-netshare, which supports NFS, AWS EFS & Samba/CIFS.
Related
I have a bit of a conundrum with mounting a remote folder here.
What we have is a PC in an Active Directory domain, as well as a remote server in the same domain. In order to get files for the script, we need to mount a folder from the remote server into a Docker container (using Ubuntu 20.04).
So far we've tried to directly mount the folder into the container using WebDAV, but this didn't work: the error said that the remote folder's directory doesn't exist.
Then we tried to first mount it locally through WSL using the mount command, so Docker could see the mounted folder on the local PC, but this didn't work either: in this case the error said that the folder that didn't exist was the target directory (even though it was created in advance).
The question at hand is: what would be the best and most correct way to mount a remote shared folder, accessible via a URL, into a Docker container?
We had a similar issue/use case, but in ours it was possible to create a Samba 4 share on the host, where we had a folder with some .pdf documents to work with.
We then created a Docker volume backed by the SMB share (on the host), with the command:
docker volume create --driver local --opt type=cifs --opt device=//192.168.XX.YY/theShare --opt o=username=shareUsername,password='sharePassword',domain=company.com,vers=3.0,file_mode=0777,dir_mode=0777 THE_SHARE
Note: we still have CentOS 7 on that host running Docker (where we need the Samba mount), so we had to install some dependencies on the host system:
sudo yum update
sudo yum install samba-client samba-common cifs-utils
Then in a container, we simply mounted the volume (using -v):
-v THE_SHARE:/mnt/the_share
and inside the application it can refer to the content with local read/write access to the file system at the /mnt/the_share path.
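Putting it together, the whole test looks something like this (alpine standing in for our real application image):

docker run --rm -v THE_SHARE:/mnt/the_share alpine ls /mnt/the_share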
Description
The official Docker documentation is usually not very helpful, and a lot of times things remain unclear even after reading through the relevant sections.
There are many unclear things, but in this question I just want to target these:
When running docker volume create:
--driver
--opt device
--opt type
When I run docker volume create --driver local --opt device=:/var/www/html/customfolder --opt type=volume customvolume I actually do get a volume:
$ docker volume inspect customvolume
[
    {
        "CreatedAt": "2020-08-03T09:28:10Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/customvolume/_data",
        "Name": "customvolume",
        "Options": {
            "device": ":/var/www/html/customfolder",
            "type": "volume"
        },
        "Scope": "local"
    }
]
Trying to mount this new volume:
docker run --name test-with-volume \
  --mount source=customvolume,target=/var/www/html/app77 \
  my-app-only:latest
Error:
Error response from daemon: error while mounting volume '/var/lib/docker/volumes/customvolume/_data': failed to mount local volume: mount :/var/www/html/customfolder:/var/lib/docker/volumes/customvolume/_data: no such device.
Questions
Clearly the options allow you to do some unexpected things: I was able to create a volume at a custom location, but it is not mountable.
What are the options for type, with the difference between each explained? When using docker volume create they are unclear to me.
The docker run --mount documentation talks about volume, bind and tmpfs, but for docker volume create the docs only show examples, which use tmpfs, btrfs and nfs.
When can you use device?
I thought this could be used to create a custom location for the volume type (aka named volumes) on the source host, similar to how bind mounts work.
I assumed I could use the recommended approach of named volumes, but with a custom folder location, instead of host mounts (bind mounts).
Finally, how would you correctly set up a volume with a custom driver in a docker-compose.yml as well?
I think the confusion comes from the fact that docker run --mount and docker volume create seem to be inconsistent, because of how unclear the Docker documentation is.
There are two main categories of data — persistent and non-persistent.
Persistent is the data you need to keep. Things like customer records, financial data, research results, audit logs, and even some types of application log data. Non-persistent is the data you don't need to keep.
Both are important, and Docker has solutions for both.
To deal with non-persistent data, every Docker container gets its own non-persistent storage. This is automatically created for every container and is tightly coupled to the lifecycle of the container. As a result, deleting the container will delete the storage and any data on it.
To deal with persistent data, a container needs to store it in a volume. Volumes are separate objects that have their lifecycles decoupled from containers. This means you can create and manage volumes independently, and they’re not tied to the lifecycle of any container. Net result, you can delete a container that’s using a volume, and the volume won’t be deleted.
This writable layer of local (non-persistent) storage is managed on every Docker host by a storage driver (not to be confused with a volume driver). If you're running Docker in production on Linux, you'll need to make sure you match the right storage driver with the Linux distribution on your Docker host. Use the following list as a guide:
Red Hat Enterprise Linux: use the overlay2 driver with modern versions of RHEL running Docker 17.06 or higher. Use the devicemapper driver with older versions. This applies to Oracle Linux and other Red Hat related upstream and downstream distros.
Ubuntu: use the overlay2 or aufs drivers. If you're using a Linux 4.x kernel or higher you should go with overlay2.
SUSE Linux Enterprise Server: use the btrfs storage driver.
Windows: Windows only has one driver and it is configured by default.
By default, Docker creates new volumes with the built-in local driver. As the name suggests, volumes created with the local driver are only available to containers on the same node as the volume. You can use the -d flag to specify a different driver. Third-party volume drivers are available as plugins. These provide Docker with seamless access to external storage systems, such as cloud storage services and on-premises storage systems including SAN or NAS.
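For example, you can create a volume and name the driver explicitly (-d local is the default anyway, so it's shown here just for illustration):

$ docker volume create -d local myvol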
$ docker volume inspect myvol
[
    {
        "CreatedAt": "2020-05-02T17:44:34Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/myvol/_data",
        "Name": "myvol",
        "Options": {},
        "Scope": "local"
    }
]
Notice that the Driver and Scope are both local. This means the volume was created with the local driver and is only available to containers on this Docker host. The Mountpoint property tells us where in the Docker host’s filesystem the volume exists.
With bind mounts
version: '3.7'
services:
  maria_db:
    image: mariadb:10.4.13
    environment:
      MYSQL_ROOT_PASSWORD: Test123#123
      MYSQL_DATABASE: database
    ports:
      - 3306:3306
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data_mariadb/:/var/lib/mysql/
With volume mount
version: "3.8"
services:
web:
image: mariadb:10.4.13
volumes:
- type: volume
source: dbdata
target: /var/lib/mysql/
volumes:
dbdata:
Bind mounts explanation
Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.
tmpfs mounts explanation
Volumes and bind mounts let you share files between the host machine and container so that you can persist data even after the container is stopped. If you're running Docker on Linux, you have a third option: tmpfs mounts. When you create a container with a tmpfs mount, the container can create files outside the container's writable layer. As opposed to volumes and bind mounts, a tmpfs mount is temporary and only persisted in host memory. When the container stops, the tmpfs mount is removed, and files written there won't be persisted.
Volume explanation
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker.
Recently I searched for something similar: how to force a Docker volume into writing its data to a custom path that is actually the mount point of a persistent disk. There were two motives:
first, avoid the docker volume being stuck inside the VM image's disk space;
second, have the data outlive the docker volume itself (e.g. easy to reuse on another VM instance with a freshly created docker volume).
This seemed feasible by passing extra options to the standard local driver when executing docker volume create. For example, the command below makes the docker volume tmp-volume write into the path given as the device argument. Note that docker volume inspect still outputs a completely different, but unused, Mountpoint. It worked when Ubuntu was the host OS inside that VM instance:
docker volume create -d local --name tmp-volume \
  --opt device="/mnt/disks/disk-instance-test-volume" \
  --opt type="none" \
  --opt o="bind"
Maybe this is overlapping with your use-case? I blogged the whole story in more detail here: https://medium.com/@francis.meyvis/how-to-force-a-docker-volume-on-a-gce-disk-45b59d4973e?source=friends_link&sk=0e71ef39db84f4cb0ecccc7cd0f3c254
Damith's detailed explanation of named volumes vs bind mounts is a good reference for anyone to read. To answer the question I had, he talked about 3rd-party plugins, so I had to investigate further.
There seems to be no way to use a custom location with a named volume in a default Docker installation (only bind mounts can do that), but there is indeed a plugin that acts like a named volume with some extra functionality.
While this only partially answers some of the things I mentioned in the question (and some are still unclear), use this for reference if you want named volumes that act like bind mounts.
Solution
For my particular use case, the Docker plugin local-persist seems to solve my requirements, it has the capability to 1) persist data when containers get deleted and 2) provide a way to use a custom location.
Matchbooklab Docker local-persist
Installation:
Confirmed to work with Ubuntu 20.04 installation
Run this install script (there are also manual installation instructions at the GitHub link if you want to install it by hand):
curl -fsSL https://raw.githubusercontent.com/MatchbookLab/local-persist/master/scripts/install.sh | sudo bash
This will install local-persist and set up a startup script so it can monitor volumes.
Setup volume
Create a new local-persist volume:
docker volume create -d local-persist --opt mountpoint=/custom/path/on/host --name new-volume-name
Usage
Attach the volume to a container:
Newer --mount syntax:
docker run --name container-name --mount 'source=new-volume-name,target=/path/inside/container' imagename:version
-v syntax (not tested; as shown in the GitHub readme):
docker run -d -v images:/path/inside/container/ imagename:version
Or with docker-compose.yml: (example shows v2; not tested yet)
version: '2'
services:
  one:
    image: alpine
    working_dir: /one/
    command: sleep 600
    volumes:
      - data:/one/
  two:
    image: alpine
    working_dir: /two/
    command: sleep 600
    volumes:
      - data:/two/
volumes:
  data:
    driver: local-persist
    driver_opts:
      mountpoint: /data/local-persist/data
I'm aware that plugins like docker-volume-netshare exist and I've used them in the past but for this project I am constrained to the local driver only.
I can successfully create and use a CIFS volume with the local driver in the traditional sense (passing the username/password inline), but now I want to pass the credentials via a credentials file. The Docker documentation says it supports options similar to mount's, so, to that end, I've been trying to pass the credentials like I would if I were mounting the share via the mount command.
I have a /root/.cifs file:
username=myusername
password=mypassword
Then I tested it by mounting manually:
mount -t cifs \
-o credentials=/root/.cifs,vers=3.0 \
//192.168.76.20/docker_01 /mnt
It works successfully and I can read/write data. So now I try to create the Docker volume using the same logic:
docker volume create \
--driver local \
--name persistent \
--opt type=cifs \
--opt device=//192.168.76.20/docker_01 \
--opt o=credentials=/root/.cifs,vers=3.0
However, when I try to use the volume I get CIFS VFS: No username specified in the Docker log file.
I tried modifying the volume parameters by including the username (--opt o=credentials=/root/.cifs,username=docker01,vers=3.0) but that just results in 0xc000006d STATUS_LOGON_FAILURE
Is there a way to create a CIFS volume without having to specify the credentials inline?
I just dug into this to find out why it does not work. It seems the issue is that the credentials file is a feature of the wrapper binary mount.cifs, while Docker uses the system call SYS_MOUNT itself to mount the volume.
If you look into the Linux kernel's CIFS documentation, it says:
When using the mount helper mount.cifs, passwords may be specified via alternate mechanisms, instead of specifying it after -o using the normal "pass=" syntax on the command line.
You can trace this down to the source code of the mount.cifs executable, where you find the code that reads the credentials file.
From this I conclude that, unless you change the Docker source code to use the mount.cifs executable instead of the Linux system call, this will not work.
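One workaround sketch (my own idea, not something the Docker docs describe; it assumes the two-line key=value file from the question and a bash shell): expand the credentials file into inline options yourself at creation time. Be aware the password still ends up stored in the volume's metadata, visible via docker volume inspect.

source <(sed 's/^/CIFS_/' /root/.cifs)   # defines $CIFS_username and $CIFS_password from the file
docker volume create \
  --driver local \
  --name persistent \
  --opt type=cifs \
  --opt device=//192.168.76.20/docker_01 \
  --opt o=username=$CIFS_username,password=$CIFS_password,vers=3.0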
I've created a separate volume on an Ubuntu machine with the intention of storing docker volumes and persisting data. So far, I've created volumes on the host machine for two services (jira and postgres), which I intend to back up offsite. I am using docker-compose like so:
postgres:
  volumes:
    - /var/dkr/pgdata:/var/lib/postgresql/data
And for jira:
jira:
  volumes:
    - /var/dkr/jira:/var/atlassian/jira
My thinking is that I could just rsync the /var/dkr folder to a temporary location, tar it and send it to S3. Now that I've read a bit more about host-mounted volumes, I am worried that I might end up with messed-up GIDs and UIDs for the services when I restore from a backup.
My questions are: has Docker resolved this problem in newer versions (I am using the latest)? Is it safe to take this approach? What would be a better way to back up my persistent volumes?
There's no magic solution to uid/gid mapping issues between containers and hosts. It would need to be implemented by the filesystem drivers in the Linux kernel, which is how NFS and some of the VM filesystem mappings work. For "bind" mounts, forcing a uid/gid is not an option in Linux, and Docker is just providing an easy-to-use interface on top of that.
With your backups, ensure that uid/gid is part of your backup (tar does this by default). Also ensure that the uid/gid being used in your container is defined in the image or set to a static value in your docker run command or compose file. As long as you don't depend on a host-specific uid/gid, and you restore preserving the uid/gid (the default for tar as root), you won't have any trouble.
Worst case, you run something like find /var/dkr -uid $old_uid -exec chown $new_uid {} \; to change your UIDs. The tar command also has options for changing uid/gid on extract (see the man page for more details).
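For example, a minimal backup/restore sketch using the paths from the question (--numeric-owner keeps the raw numeric ids instead of remapping them by user name):

sudo tar --numeric-owner -czpf dkr-backup.tar.gz -C /var dkr
# and to restore:
sudo tar --numeric-owner -xzpf dkr-backup.tar.gz -C /var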
I want to know why we have two different options to do the same thing. What are the differences between the two?
We basically have 3 types of volumes or mounts for persistent data:
Bind mounts
Named volumes
Volumes in dockerfiles
Bind mounts are basically just binding a certain directory or file from the host inside the container (docker run -v /hostdir:/containerdir IMAGE_NAME)
Named volumes are volumes which you create manually with docker volume create VOLUME_NAME. They are created in /var/lib/docker/volumes and can be referenced to by only their name. Let's say you create a volume called "mysql_data", you can just reference to it like this docker run -v mysql_data:/containerdir IMAGE_NAME.
And then there are volumes in Dockerfiles, which are created by the VOLUME instruction. These volumes are also created under /var/lib/docker/volumes, but don't have a certain name; their "name" is just some kind of hash. The volume gets created when running the container and is handy for saving persistent data, whether you start the container with -v or not. The developer gets to say where the important data is and what should be persistent.
What should I use?
What you want to use comes mostly down to either preference or your management. If you want to keep everything in the "docker area" (/var/lib/docker) you can use volumes. If you want to keep your own directory-structure, you can use binds.
Docker recommends the use of volumes over the use of binds, as volumes are created and managed by docker and binds have a lot more potential of failure (also due to layer 8 problems).
If you use binds and want to transfer your containers/applications to another host, you have to rebuild your directory structure, whereas volumes are more uniform on every host.
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it, and the volume’s contents exist outside the lifecycle of a given container. More on
Differences between -v and --mount behavior
Because the -v and --volume flags have been a part of Docker for a long time, their behavior cannot be changed. This means that there is one behavior that is different between -v and --mount.
If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you. It is always created as a directory.
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker host, Docker does not automatically create it for you, but generates an error. More on
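You can see the difference with a throwaway container and host paths that don't exist yet (hypothetical paths, Linux host assumed):

docker run --rm -v /tmp/not-yet-created:/data alpine true                              # silently creates /tmp/not-yet-created as a directory
docker run --rm --mount type=bind,source=/tmp/also-missing,target=/data alpine true   # errors out instead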
Docker for Windows shared folders limitation
Docker for Windows does make much of the VM transparent to the Windows host, but it is still a virtual machine. For instance, when using -v with a mongo container, MongoDB needs something else supported by the file system. There is also this issue about volume mounts being extremely slow.
More on
Bind mounts are like a superset of Volumes (named or unnamed).
Bind mounts are created by binding an existing folder on the host system (the host system being a native Linux machine, or a VM on Windows or Mac) to a path in the container.
The volume commands result in a new folder, created on the host system under /var/lib/docker.
Volumes are recommended because they are managed by the Docker engine (prune, rm, etc.).
A good use case for bind mount is linking development folders to a path in the container. Any change in host folder will be reflected in the container.
Another use case for bind mounts is keeping application logs, which are not as crucial as, say, a database.
Command syntax is almost the same for both cases:
bind mount:
Note that the host path should start with '/'. Use $(pwd) for convenience.
docker container run -v /host-path:/container-path image-name
unnamed volume:
This creates a folder on the host with an arbitrary name.
docker container run -v /container-path image-name
named volume:
The name should not start with '/', as that is reserved for bind mounts.
'volume-name' is not a full path here; the command will cause a folder to be created at "/var/lib/docker/volumes/volume-name" on the host.
docker container run -v volume-name:/container-path image-name
A named volume can also be created before a container is run (docker volume create), but this is almost never needed.
As developers, we always need to compare the options a tool or technology provides. For volumes vs. bind mounts, I would suggest listing what kind of application you are trying to containerize.
Following are the parameters I would consider before choosing volumes over bind mounts:
Docker provides various CLI commands to manage volumes easily from outside containers.
For backup & restore, volumes are far easier than binds, as a bind depends upon the underlying host OS.
Volumes are platform-agnostic, so they can work with Linux as well as Windows containers.
With binds, you have two technologies to take care of: your host machine's directory structure as well as Docker.
Migration of volumes is easier, not only on local machines but on cloud machines as well.
Volumes can be easily shared among multiple containers, as in the sketch below.
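For example, a minimal sketch of two containers sharing one named volume:

docker volume create shared-data
docker run --rm -v shared-data:/data alpine sh -c 'echo hello > /data/greeting'
docker run --rm -v shared-data:/data alpine cat /data/greeting   # prints "hello"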